Dylan Scott here. It's Thursday. Your one-week countdown to Thanksgiving has officially begun. For today's main story, I spoke with Joshua Keating, our leading expert on warfare, foreign policy, and nuclear weapons. He just published a fresh feature story on AI and nuclear risk — but not the risk that you may be thinking of. Let's get into it.

When it comes to nukes and AI, people are worried about the wrong thing

Mark Harris for Vox; Getty Images

Dylan Scott
So I, a layperson, worry about the Skynet scenario where a rogue artificial intelligence takes over our nuclear weapons and decides to wipe us out. But based on your piece, that's not really the scenario that actual experts worry about. What is the AI nuclear apocalypse scenario that keeps them up at night?

Joshua Keating
I think what people imagine — having seen WarGames or The Terminator or the last Mission: Impossible movie — is a system that basically turns control of the decision to use nuclear weapons over to a computer system, over to an AI.
There's pretty widespread agreement that we're not going to do that. If you talk to the commanders of US Strategic Command, they say we're not interested in that. President Joe Biden and Chinese President Xi Jinping agreed last year that they wouldn't do that. And I believe them; it's not really in the cards. There is, however, an increasing trend toward integrating AI much more into the nuclear command and control system to do things like analyzing data on potential targets or determining whether things that look like threats really are threats.
The problem a lot of experts are pointing out now is that it all gets entangled: the attempts to silo off AI into one part of the system are going to be hard to pull off. And in a crisis, where leaders have to make a decision in a very short amount of time, they may not know the degree to which the information they're basing those decisions on was actually produced by AI, with all the inherent biases and problems that AI comes with.

Dylan Scott
AI is only as good as the data and information that's been fed into it. So, that could lead to AI unintentionally steering us toward nuclear catastrophe, as you write in your piece.

Joshua Keating
There are a number of cases from history of technology leading to nuclear near misses.
There's the famous Stanislav Petrov case, where a Soviet officer was told by his computer system that there were missiles incoming. It turned out to be sunlight reflecting off the clouds. Modern AIs are better than the systems we had in the '70s and '80s; that's undeniably true. But there are still issues. The targeting system the Israelis were using for their military strikes in Gaza to identify Hamas targets may have had an error rate of up to 10 percent.
AIs can be fooled; sensing systems can be tricked. And the problem, as with a lot of AI problems, is that we often don't fully understand why these systems are making the judgments they are making.
We know the data going in, but we don't know the middle step: the decision, how it's weighing certain factors. If you have unlimited time, you can go in, get under the hood, and try to interrogate the premises the AI is using and double-check its work. In a crisis, decision-makers may not take the time to do that. There's a phenomenon called automation bias: people just inherently trust what AI is telling them, and often trust it over human judgment. I worry about getting to a situation where, when seconds count, it's hard to imagine a human taking a moment to overrule the machines and say, "Actually, what it's telling me doesn't really make sense."

Dylan Scott
Yet you also write that it feels somewhat inevitable that AI will be incorporated into these nuclear processes. Why is that?

Joshua Keating
There's an imperative for speed. These things can process massive amounts of data very quickly. They cut down on a lot of grunt work. There's a lot of precision and a lot of data analysis involved in figuring out potential nuclear targets, and it involves a lot of people spending a lot of time. It's a perfect application for AI, because it's a bounded data set: you can just read the numbers in, and it can punch out results for you pretty quickly.
So the temptation is there, and even outside of the nuclear domain, I think there's a major push right now to integrate AI in a more widespread way across the military chain of command.

Dylan Scott
So, how do we respond to this? Is there a way to safely integrate AI into the nuclear process?

Joshua Keating
The AI nuclear question is a subset of the nuclear risk question as a whole.
There was a time, during the Cold War, when nuclear policy issues were way more at the forefront of public debate. Many members of Congress had some understanding of nuclear strategy. One of the biggest protests in American history was the nuclear freeze rally in 1982. Whether you thought we should ban all nuclear weapons entirely, or you thought we needed a strong deterrent, at least people were engaging with these problems.
I think, after the Cold War and the turn to counterterrorism, the focus shifted, and people cared less about nuclear weapons.
But now people are talking about a new nuclear age. I think Putin's threats during the war in Ukraine showed that nuclear brinkmanship is still part of great power politics, and we've got a third, growing nuclear power in China, so it's a three-way race.
It's not so much about how to prevent a catastrophe caused by AI and nuclear weapons. I think we should be thinking more broadly about how we prevent a catastrophe with nuclear weapons. And AI is part of that. But if you're worried about AI and nuclear weapons, you should really be worried about nuclear weapons.

⮕ Keep tabs
Trump ❤️ MBS: The giddy bromance between the American and Saudi leaders has been much discussed during the latter's travels to the US. But the reality is more complicated than it may seem, Vox's Joshua Keating writes.
MTG vs. MAGA: Marjorie Taylor Greene, once the most bona fide MAGA supporter you could find, is now sounding a very different tune. What the hell is going on? In his latest piece, Vox's Christian Paz explains.
A difficult diagnosis: When screening patients for suicidal ideation, doctors depend on their patients to be honest about their feelings. But should they? Or is there a better way to identify people at risk? [NYT]
Polar vortex: It's back, and it's conspiring to make the next month — aka the holiday travel season — a freezing and snowy nightmare. [CNN]

The Vox Membership program is getting even better with access to Vox’s Patreon, where members can unlock exclusive videos, livestreams, and chats with our newsroom.
Become a Vox Member to get access to it all.

Two military generals are responsible for Sudan's brutal civil war. The American president just pledged to get involved.

My household has gotten a big kick out of the live cam of the cheetah cubs at the Smithsonian Zoo in Washington, DC. Check it out; they're adorable! (h/t my mom)

Today's edition was produced and edited by me, senior correspondent Dylan Scott. Thanks for reading!

Are you enjoying the Today, Explained newsletter? Forward it to a friend; they can sign up here. And as always, we want to know what you think. Let us know by filling out this form or just replying to this email.