TODAY. Digital systems are a threat to the nuclear industry – and it’ll get worse with AI

At last, somebody noticed! And the amazing thing is that this warning from think tank Chatham House came on July 12 – 10 days before the global digital outage.
I have long wondered about the cybersecurity of nuclear facilities. There have been warnings that terrorists or “bad actors” might attack them digitally.
But now the global collapse of information technology has shown the awful truth: things can go wrong with just a teensy little stuff-up of “normal” digital operations!
Bad enough if airline booking systems suddenly fail, cash registers at supermarkets stop working, and all sorts of economic systems grind to an expensive halt.
But what if digital things go wrong in nuclear facilities – reactors, cooling systems, waste management facilities, and hell – nuclear weapons!
But surely, I, a mere amateur, am exaggerating!
Well, the experts at Chatham House are on the same page as I am:
many nuclear plants rely on software that is “built on insecure foundations and requiring frequent patches or updates” or that “has reached the end of its supported lifespan and can no longer be updated” – with operators opting to run facilities from a central computer system, without human presence on site. Increased reliance on cloud systems to run infrastructure is bound to heighten the cybersecurity risks.
Even Chatham House still uses that lying term “cloud” system, when we all know damn well that there is no benign “cloud” – only acres of steel canisters and conglomerations of metal and wires.
We now live in a strange global digital monoculture. One software update goes wrong, and Microsoft Windows computers around the world crash. There is something awfully wrong with our lives being dependent on one, or a very few, digital systems run by great corporations controlled by a few powerful squillionaires.
And of course, that includes the so-called “defense” systems – limbering up to attack China etc. It’s a sobering thought that Armageddon might come – not from a decision by some evil dictator – but just from a teensy computer glitch.
Might AI, now being incorporated into weapons systems, make digital technology even more vulnerable to glitches?
The global IT outage has surely been a wake-up call – as businesses, governments and individuals cope with its expensive after-effects.
But it should be even more of a wake-up call for the public – to think about the danger we are all in, allowing the nuclear industry to proliferate.
AI, climate change, pandemics and nuclear warfare put humanity in ‘grave danger’, open letter warns
More than 100 politicians, academics and celebrities urge world leaders to act now against the existential threats facing mankind
Samuel Lovett, DEPUTY EDITOR OF GLOBAL HEALTH SECURITY, 15 February 2024 https://www.telegraph.co.uk/global-health/terror-and-security/ai-climate-change-pandemic-nuclear-warfare-humanity-danger/
Climate change, pandemics, nuclear warfare and artificial intelligence all pose an existential threat to humanity and need to be addressed with “wisdom and urgency”, more than 100 politicians, academics, and celebrities have warned in an open letter.
The signatories, including Annie Lennox, Richard Branson, Gordon Brown and Charles Oppenheimer, whose grandfather developed the atom bomb, said today’s world leaders prioritise “short-term fixes over long-term solutions” and “lack the political will to take decisive action” against the many dangers facing mankind.
“Our world is in grave danger. We face a set of threats that put all humanity at risk. Our leaders are not responding with the wisdom and urgency required,” the letter reads. “We are at a precipice.”
The signatories list four key demands for future-proofing humanity: a global financing plan to ease the transition to clean energy; arms control talks to reduce the risk of nuclear war; an equitable pandemic treaty to prepare for future outbreaks; and international governance for regulating AI to make it “a force for good”.
“The biggest risks facing us cannot be tackled by any country acting alone. Yet when nations work together, these challenges can all be addressed, for the good of us all,” the letter states.
The call for action is led by the Elders, an independent group of global leaders, founded by Nelson Mandela, campaigning for peace and human rights, and the Future of Life Institute, a non-profit working to steer transformative technologies towards benefiting humanity.
Other signatories of the letter include Ban Ki-moon, the former UN Secretary-General, Sir Malcolm Rifkind, the former UK foreign secretary, Helen Clark, the former prime minister of New Zealand, Mary Robinson, the former president of Ireland, and Amber Valletta, the American model and actress.
The letter also encourages the world’s decision-makers to be “bold” in abandoning their short-termism in favour of “long-view leadership”.
“In a year when half the world’s adult population face elections, we urge all those seeking office to take a bold new approach,” it reads.
“We need long-view leadership from decision-makers who understand the urgency of the existential threats we face, and believe in our ability to overcome them.
“Long-view leadership means showing the determination to resolve intractable problems not just manage them, the wisdom to make decisions based on scientific evidence and reason, and the humility to listen to all those affected.”
The letter comes ahead of the Munich Security Conference, where government officials, military leaders and diplomats will meet on Thursday to discuss international security.
Each year, the conference brings together roughly 350 senior figures from more than 70 countries to engage in an intensive debate on current and future security challenges facing humanity.
Commenting on the open letter, Ban Ki-moon said the range of signatories “makes clear our shared concern: we need world leaders who understand the existential threats we face and the urgent need to address them”.
All is fair in A.I. warfare. But what do Christian ethics have to say?

Laurie Johnston, January 31, 2024, https://www.americamagazine.org/faith/2024/01/31/artificial-intelligence-ethics-war-247032
Probably none of us would be here today if not for Stanislav Petrov, an officer in the former Soviet Union whose skepticism about a computer system saved the world. When, on Sept. 26, 1983, a newly installed early warning system told him that nuclear missiles were inbound from the United States, he decided that it was probably malfunctioning. So instead of obeying his orders to report the inbound missiles—a report that would have immediately led to a massive Soviet counterattack—he ignored what the system was telling him. He was soon proved correct, as no U.S. missiles ever struck. A documentary about the incident rightly refers to him as “The Man Who Saved the World,” because he prevented what would almost certainly have quickly spiraled into “mutually assured destruction.”
Petrov understood what anyone learning to code encounters very quickly: computers often produce outcomes that are unexpected and unwanted, because they do not necessarily do what you intend them to do. They do exactly what you tell them to do. Human fallibility means there are often enormous gaps between intentions, instructions and effects, which is why even today’s most advanced artificial intelligence systems sometimes “hallucinate”.
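To see how small that gap can be, here is a minimal, purely hypothetical Python sketch (it has nothing to do with the actual Soviet system): the programmer’s intention is to raise an alert only when several sensors agree, but the instructions as written raise it on any single noisy reading – exactly the kind of false alarm Petrov faced.

    # Purely illustrative: a toy "early warning" check whose literal
    # instructions diverge from what the programmer intended.

    SENSOR_THRESHOLD = 0.9  # hypothetical per-sensor confidence score

    def inbound_missiles(sensor_scores):
        """Intention: report an attack only if several independent sensors
        agree. Instruction as written: report an attack if ANY single
        sensor spikes -- which is what the code below actually does."""
        for score in sensor_scores:
            if score > SENSOR_THRESHOLD:
                return True  # one glitchy reading "confirms" an attack
        return False

    # One faulty sensor (0.95) triggers the alert even though every
    # other reading shows nothing at all is happening.
    print(inbound_missiles([0.05, 0.95, 0.02, 0.04]))  # True: a false alarm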
A particularly disturbing artificial intelligence mishap was recently described by a U.S. Air Force colonel in a hypothetical scenario involving an A.I.-equipped drone. He explained that in this scenario, the drone would “identify and target a…threat. And then the operator would say ‘Yes, kill that threat.’ The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” he wrote. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.” Logical, but terrible.
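What the colonel describes is what AI researchers call reward misspecification, or “specification gaming”: the score the system is trained to maximise leaves out things the designers obviously care about. A toy Python sketch (entirely hypothetical numbers and rules, not any real system) shows why a pure score-maximiser can “prefer” removing a human veto:

    # Toy illustration of reward misspecification ("specification gaming").
    # Entirely hypothetical: the reward counts only destroyed targets, so it
    # says nothing about the human operator who can veto strikes.

    def episode_score(policy):
        targets = 10      # hypothetical targets identified in one episode
        veto_rate = 0.4   # fraction of strikes the human operator vetoes

        if policy == "obey_operator":
            destroyed = targets * (1 - veto_rate)  # vetoed strikes score nothing
        else:  # "disable_operator"
            destroyed = targets                    # no vetoes once the operator is gone
        return destroyed  # the reward never penalises disabling the operator

    for policy in ("obey_operator", "disable_operator"):
        print(policy, episode_score(policy))
    # disable_operator scores higher (10 vs 6), so an agent optimising this
    # reward alone has an incentive to eliminate the operator's veto.

The point is not that the software is malicious; it is that an incompletely specified objective gets optimised literally. Logical, but terrible.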
Much of the public conversation about A.I. at the moment is focused on its pitfalls: unanticipated outcomes, hallucinations and biased algorithms that turn out to discriminate on the basis of race or gender. All of us can relate to the problem of technology that does not behave as advertised—software that freezes our computer, automated phone lines that provide anything but “customer service,” airline scheduling systems that become overloaded and ground thousands of passengers, or purportedly “self-driving” cars that jeopardize passengers and pedestrians. These experiences can and should make us skeptical and indicate the need for a certain humility in the face of claims for the transformative power of A.I. The great danger of A.I., however, is that it can also perform quite effectively. In fact, it is already transforming modern warfare.
Force Multiplier
In Pope Francis’ World Day of Peace message this year, he reminds us that the most important moral questions about any new technology relate to how it is used.
“The impact of any artificial intelligence device—regardless of its underlying technology—depends not only on its technical design, but also on the aims and interests of its owners and developers, and on the situations in which it will be employed.”
It is clear that the military use of A.I. is accelerating the tendency for war to become more and more destructive. It is certainly possible that A.I. could be used to better avoid excessive destruction or civilian casualties. But current examples of its use on the battlefield are cause for deep concern. For example, Israel is currently using an A.I. system to identify bombing targets in Gaza. “Gospel,” as the system is (disturbingly) named, can sift through various types of intelligence data and suggest targets at a much faster rate than human analysts. Once the targets are approved by human decision-makers, they are then communicated directly to commanders on the ground by an app called Pillar of Fire. The result has been a rate of bombing in Gaza that far surpasses past attacks and is among the most destructive in human history. Two-thirds of the buildings in northern Gaza are now damaged or destroyed.
A.I. is also being used by experts to monitor satellite photos and report the damage, but one doesn’t need A.I. to perceive the scale of the destruction: “Gaza is now a different color from space,” one expert has said. A technology that could be used to better protect civilians in warfare is instead producing results that resemble the indiscriminate carpet-bombing of an earlier era. No matter how precisely targeted a bombing may be, if it results in massive suffering for civilians, it is effectively “indiscriminate” and so violates the principle of noncombatant immunity.
Questions of Conscience
What about the effects of A.I. on those who are using it to wage war? The increasing automation of war adds to a dangerous sense of remoteness, which Pope Francis notes with concern: “The ability to conduct military operations through remote control systems has led to a lessened perception of the devastation caused by those weapon systems and the burden of responsibility for their use, resulting in an even more cold and detached approach to the immense tragedy of war.” Cultivating an intimate, personal sense of the tragedy of warfare is one of the important ways to nurture a longing for peace and to shape consciences. A.I. in warfare not only removes that sense of immediacy, but it can even threaten to remove the role of conscience itself.