Antinuclear

Australian news, and some related international items

All is fair in A.I. warfare. But what do Christian ethics have to say?

Laurie Johnston, January 31, 2024,  https://www.americamagazine.org/faith/2024/01/31/artificial-intelligence-ethics-war-247032#:~:text=warfare%20becomes%20more%20and%20more,to%20help%20fulfill%20that%20vocation.–

Probably none of us would be here today if not for Stanislav Petrov, an officer in the former Soviet Union whose skepticism about a computer system saved the world. When, on Sept. 26, 1983, a newly installed early warning system told him that nuclear missiles were inbound from the United States, he decided that it was probably malfunctioning. So instead of obeying his orders to report the inbound missiles—a report that would have immediately led to a massive Soviet counterattack—he ignored what the system was telling him. He was soon proved correct, as no U.S. missiles ever struck. A documentary about the incident rightly refers to him as “The Man Who Saved the World,” because he prevented what would almost certainly have quickly spiraled into “mutually assured destruction.”

Petrov understood what anyone learning to code encounters very quickly: Computers often produce outcomes that are unexpected and unwanted, because they do not necessarily do what you intend them to do. They do just what you tell them to do. Human fallibility means that the result is often enormous gaps between intentions and instructions and effects, which is why even today’s most advanced artificial intelligence systems sometimes “hallucinate.”

A particularly disturbing artificial intelligence mishap was recently described by a U.S. Air Force colonel in a hypothetical scenario involving an A.I.-equipped drone. He explained that in this scenario, the drone would “identify and target a…threat. And then the operator would say ‘Yes, kill that threat.’ The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” he wrote. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.” Logical, but terrible.

Much of the public conversation about A.I. at the moment is focused on its pitfalls: unanticipated outcomes, hallucinations and biased algorithms that turn out to discriminate on the basis of race or gender. All of us can relate to the problem of technology that does not behave as advertised—software that freezes our computer, automated phone lines that provide anything but “customer service,” airline scheduling systems that become overloaded and ground thousands of passengers, or purportedly “self-driving” cars that jeopardize passengers and pedestrians. These experiences can and should make us skeptical and indicate the need for a certain humility in the face of claims for the transformative power of A.I. The great danger of A.I., however, is that it can also perform quite effectively. In fact, it is already transforming modern warfare.

Force Multiplier

In Pope Francis’ World Day of Peace message this year, he reminds us that the most important moral questions about any new technology relate to how it is used.

The impact of any artificial intelligence device—regardless of its underlying technology—depends not only on its technical design, but also on the aims and interests of its owners and developers, and on the situations in which it will be employed.

It is clear that the military use of A.I. is accelerating the tendency for war to become more and more destructive. It is certainly possible that A.I. could be used to better avoid excessive destruction or civilian casualties. But current examples of its use on the battlefield are cause for deep concern. For example, Israel is currently using an A.I. system to identify bombing targets in Gaza. “Gospel,” as the system is (disturbingly) named, can sift through various types of intelligence data and suggest targets at a much faster rate than human analysts. Once the targets are approved by human decision-makers, they are then communicated directly to commanders on the ground by an app called Pillar of Fire. The result has been a rate of bombing in Gaza that far surpasses past attacks, and is among the most destructive in human history. Two thirds of the buildings in northern Gaza are now damaged or destroyed.

A.I. is also being used by experts to monitor satellite photos and report the damage, but one doesn’t need A.I. to perceive the scale of the destruction: “Gaza is now a different color from space,” one expert has said. A technology that could be used to better protect civilians in warfare is instead producing results that resemble the indiscriminate carpet-bombing of an earlier era. No matter how precisely targeted a bombing may be, if it results in massive suffering for civilians, it is effectively “indiscriminate” and so violates the principle of noncombatant immunity.

No matter how precisely targeted a bombing may be, if it results in massive suffering for civilians, it violates the principle of noncombatant immunity.

Questions of Conscience

What about the effects of A.I. on those who are using it to wage war? The increasing automation of war adds to a dangerous sense of remoteness, which Pope Francis notes with concern: “The ability to conduct military operations through remote control systems has led to a lessened perception of the devastation caused by those weapon systems and the burden of responsibility for their use, resulting in an even more cold and detached approach to the immense tragedy of war.” Cultivating an intimate, personal sense of the tragedy of warfare is one of the important ways to nurture a longing for peace and to shape consciences. A.I. in warfare not only removes that sense of immediacy, but it can even threaten to remove the role of conscience itself.

Continue reading

February 1, 2024 Posted by | Uncategorized | , , , , | Leave a comment