This visually striking figure from the paper shows how respondents think about what to expect if high-level machine intelligence is developed: Most assign substantial probability to both extremely good outcomes and extremely bad ones.
As for what to do about it, experts seem to disagree even more than they do about whether there's a problem in the first place.
Are these results for real?
The 2016 AI Impacts survey was immediately controversial. In 2016, barely anyone was talking about the risk of catastrophe from powerful AI. Could it really be that mainstream researchers rated it plausible? Had the researchers conducting the survey — who were themselves concerned about human extinction resulting from artificial intelligence — biased their results somehow?
The survey authors had systematically reached out to "all researchers who published at the 2015 NIPS and ICML conferences (two of the premier venues for peer-reviewed research in machine learning)," and managed to get responses from roughly a fifth of them. They asked a wide range of questions about progress in machine learning and got a wide range of answers: Really, aside from the eye-popping "human extinction" answers, the most notable result was how much ML experts disagreed with one another. (Which is hardly unusual in the sciences.)
But one could reasonably be skeptical. Maybe there were experts who simply hadn't thought very hard about their "human extinction" answer. And maybe the people who were most optimistic about AI hadn't bothered to answer the survey.
When AI Impacts reran the survey in 2022, again contacting thousands of researchers who published at top machine learning conferences, their results were about the same. The median probability of an "extremely bad, e.g., human extinction" outcome was 5 percent.
That median obscures some fierce disagreement. In fact, 48 percent of respondents gave at least a 10 percent chance of an extremely bad outcome, while 25 percent gave a 0 percent chance. Responding to criticism of the 2016 survey, the team asked for more detail: how likely did respondents think it was that AI would lead to "human extinction or similarly permanent and severe disempowerment of the human species?" Depending on how they asked the question, this got results between 5 percent and 10 percent.
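A toy calculation (with invented numbers, not the actual survey responses) shows how a median of 5 percent is entirely consistent with that kind of split — a quarter of respondents answering zero and nearly half answering 10 percent or more:

```python
import statistics

# Hypothetical probability estimates from 100 respondents (NOT the survey data):
# 25 say 0%, some say 2% or 5%, 48 say 10% or more.
responses = [0.0] * 25 + [0.02] * 15 + [0.05] * 12 + [0.10] * 30 + [0.30] * 18

median = statistics.median(responses)
share_at_least_10 = sum(r >= 0.10 for r in responses) / len(responses)
share_zero = sum(r == 0.0 for r in responses) / len(responses)

print(f"median: {median:.0%}")                    # median: 5%
print(f"share >= 10%: {share_at_least_10:.0%}")   # share >= 10%: 48%
print(f"share == 0%: {share_zero:.0%}")           # share == 0%: 25%
```

The median only reports the middle respondent; it says nothing about how far apart the two camps on either side of it sit.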
In 2023, to reduce and measure the impact of framing effects (different answers depending on how a question is phrased), the survey posed many of its key questions to different respondents with different framings. But again, the answers to the question about human extinction were broadly consistent — in the 5-10 percent range — no matter how the question was asked.
The fact that the 2022 and 2023 surveys found results so similar to the 2016 result makes it hard to believe that the 2016 result was a fluke. And while in 2016 critics could correctly complain that most ML researchers had not seriously considered the issue of existential risk, by 2023 the question of whether powerful AI systems will kill us all had gone mainstream. It's hard to imagine that many peer-reviewed machine learning researchers were answering a question they'd never considered before.
So ... is AI going to kill us?
I think the most reasonable reading of this survey is that ML researchers, like the rest of us, are radically unsure about whether to expect the development of powerful AI systems to be an amazing thing for the world or a catastrophic one.
Nor do they agree on what to do about it. Responses varied enormously on questions about whether slowing down AI would make good outcomes for humanity more likely. While a large majority of respondents wanted more resources and attention to go into AI safety research, many of the same respondents didn't think that working on AI alignment was unusually valuable compared to working on other open problems in machine learning.
In a situation with lots of uncertainty — like about the consequences of a technology like superintelligent AI, which doesn't yet exist — there's a natural tendency to want to look to experts for answers. That's reasonable. But in a case like AI, it's important to keep in mind that even the most well-regarded machine learning researchers disagree with one another and are radically uncertain about where all of us are headed.
—Kelsey Piper, senior writer
Questions? Comments? Tell us what you think! Email us at futureperfect@vox.com.
And if you want to recommend this newsletter to your friends or colleagues, tell them to sign up at vox.com/future-perfect-newsletter.