Imagine you are a passenger on a plane flying through heavy turbulence. Inside the cockpit, the pilot panics and suddenly hands every critical decision to the autopilot. While the autopilot can handle routine flying, it clearly lacks the judgment to react to emergencies or unpredictable situations. After a few deep breaths, the pilot sees that the computer is not doing what it should and retakes control of the plane.
When the stakes are unimaginably high, as in decision making around nuclear weapons, relying solely on artificial intelligence (AI) is similarly risky. Russia recently elevated this risk by lowering its threshold for nuclear first use, granting its leaders official permission to use nuclear weapons against Ukraine if they choose to do so. In a nuclear crisis, the integration of AI into nuclear launch protocols could push decisionmakers toward hasty actions that encourage escalation instead of restraint.
Given the risk that AI poses for nuclear escalation, FCNL supported Sec. 1627 of H.R. 8070, the National Defense Authorization Act (NDAA) for fiscal year 2025. This bipartisan provision, approved by the House earlier this year, prohibits the use of federal funds to launch nuclear weapons using an autonomous system that lacks meaningful human control.
The current “compromise version” of the NDAA, which is very likely to carry into the final bill, reflects this sense of caution and restraint. Section 1638 states that “the use of artificial intelligence efforts should not compromise the integrity of nuclear safeguards.” The provision goes on to uphold “the principle of requiring positive human actions in execution of decisions by the President with respect to the employment of nuclear weapons.” While this is a step in the right direction, policymakers must continue to consider the ways AI could increase the likelihood of nuclear war and fuel a costly arms race.
AI Makes Launching Nuclear Weapons Convenient.
The concern with AI isn’t just about technical glitches; it’s about the potential for dangerous miscalculation. Under pressure, AI could misread signals, such as another country’s efforts to defuse a situation. For example, an autonomous system could interpret an adversary’s threats, intended to deter further attacks, as signs of imminent escalation, prompting a new wave of preemptive strikes. AI’s focus on speed might encourage hasty decisions, especially if its training data assumes a nuclear war can be won, despite the leaders of nuclear weapons states agreeing that it cannot.
AI could hand the president ready-made decisions during a nuclear crisis, potentially reducing their sense of responsibility for the consequences. That diminished sense of responsibility would be compounded by the fact that AI cannot be trained to fully comprehend the massive human and environmental toll of nuclear use.
Nuclear AI Encourages a New Arms Race.
Military advancements in AI are starting to look like an arms race. This could make the world less safe and drive up Pentagon spending on nuclear weapons, which is already projected to exceed $756 billion over the next ten years.
According to the Biden administration’s October National Security Memorandum on AI, the United States aims to be the world leader in the “safe, secure, and trustworthy” development of AI while deploying it to advance national security aims.
The memo doesn’t directly call for AI in weapons systems, but testing these tools could lead to their eventual use. Past Pentagon efforts to develop autonomous weapons systems, including the Replicator drone program, have been widely criticized as unethical and destabilizing to the current nuclear landscape. Many arms control experts worry, for instance, that autonomous drones could mistakenly attack nuclear facilities, inadvertently triggering nuclear escalation.
It is not too late to stop this arms race mentality from getting out of hand. Keeping AI out of nuclear decisions gives U.S. policymakers a way to start arms control talks with China. In November, President Joe Biden and Chinese President Xi Jinping agreed that human beings, not artificial intelligence, should make decisions over the use of nuclear weapons, while also stressing the need to “carefully consider the potential risks and develop AI technology in the military field in a prudent and responsible manner.”
AI Makes Nuclear Weapons Less Safe.
While the existence of nuclear weapons is inherently risky, their destructive power means they are tightly controlled, with strict protocols and safety checks in place to prevent accidents.
As AI emerges, nuclear weapons systems are becoming more technologically complex, demanding more thoughtful and innovative ethical consideration. Keeping nuclear decisions in human hands can help avoid the risks of AI, build understanding of new technologies, and potentially open the door to broader arms control agreements. While no one should have access to the immense destructive power of nuclear weapons, AI in nuclear decision making is uniquely dangerous.