The most recent jobs numbers paint a pretty grim picture of the labor market and the apparent havoc AI is wreaking on it. After warnings about unemployment among recent grads earlier this year, the newest report suggests that AI’s impact is reaching a broader group of workers. There were over 150,000 layoffs in October, making it the worst October for layoffs in more than two decades, and about 50,000 of those cuts were attributed to AI. Overall, 2025 has seen more job cuts than any year since 2020.
It’s too soon to tell how much AI is really to blame for these job losses, even if companies are blaming AI in public statements. A team of researchers from the Yale Budget Lab and Brookings has argued that the broader labor market isn’t being disrupted any more by AI than it was by the internet or PCs, and that recent college grads are being displaced due to sector-specific factors. Anthropic CEO Dario Amodei, however, has predicted that AI could eliminate half of entry-level white collar jobs. So, which is it?
There is a lot we don’t know about what will happen with AI in general — looking at you, AI bubble — and it’s too soon to tell whether AI will actually deliver on its most ambitious promises or be more transformative than past tech revolutions.
But, to shed some light on the jobs question in particular, I called up Neil Thompson, principal research scientist at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). He’s been studying everything from why diminishing returns on frontier models will shape AI’s future to how automation changes the value of labor. Our conversation has been edited for length and clarity.
For the past couple of years, your work has pushed back on the idea that automation is always bad for workers and that AI will take all of our jobs. But, in the past few months, we've seen tens of thousands of job losses attributed to AI. What’s going on?
My guess is that we have two different phenomena going on at the same time. One is that AI is becoming more prevalent in the economy. I think, in some cases, like customer service, that's probably pretty legitimate. These systems seem awfully good at those tasks, and so, there are going to be some jobs taken over by them.
At the same time, it would be surprising to me if these systems were able to do as many things as the job loss numbers imply. And so, I suspect there's also a mix: either people are deciding to cut jobs and putting some of that blame on AI, or they're cutting jobs in advance with an aim to do more with AI. They're sort of pushing their businesses toward it and seeing what's going to happen.
Why is there such dissonance between those who say AI will take away half our jobs and those who say AI isn’t the reason we’re seeing so much upheaval in the labor market?
A whole bunch of people are talking about incredibly rapid change — an increase in capabilities to the point where these systems can do things that humans can do. For most businesses, there are very large last-mile costs involved in actually adopting these systems. Someone using ChatGPT just in the interface is very different from “we now run our business and trust that every time the system runs, it's going to get it right.” That's a different level. You often need to bring in specific data. There are a lot of costs that come with that. So, these last-mile costs can be very important and can really slow adoption even when systems are quite good.
Apart from that cost, there's also the difference between a system being good and a system being good enough to be better than a human. They're not quite the same thing.
Earlier this year, you published a paper with your MIT colleague David Autor that used expertise as a framework for understanding how automation affects the value of labor. Historically, it’s not all bad, right?
When we think of automation, we have in our mind a sort of doom scenario, where, as automation happens, the number of jobs in that occupation goes down, the wages in that occupation go down, and you're like, “boy, this has been a pretty terrible story.”
But, if you look at the last 40 years of automation — this is not AI automation, this is just computerization and things like that — we know that a lot of routine tasks were automated by this process. If you look at people who had routine tasks, what you find is that a bunch of that stuff got automated, but their wages didn't all go down. Some went up, some went down. That’s kind of a puzzle.
What we think is going on is that, when automation happens to a particular occupation, it really, really matters which of the tasks of that occupation are getting automated. In particular, if you have automation of the most expert tasks — so, the things that you do that are most expert — that has one effect, and if you have automation of the least expert tasks, you'll get a different effect.
Can you give me a couple of examples?
Think about taxi drivers. The most expert thing you did was know all of the roads in a city. You knew all the little back roads. You knew all the little shortcuts. You were the expert on that. Then, Google Maps and MapQuest come in, and all of a sudden, anybody who can drive a car can do a pretty good job of doing that. In that case, your most expert tasks got automated away. Because the most expert things are gone, your wages go down.
But, counter to this doom-cycle version of the story, wages go down, but the number of people in that profession goes up, because now a whole bunch of people who didn't use to know all the streets can suddenly drive an Uber.
At the other extreme, think of proofreaders. Spellcheck comes in. A whole bunch of stuff that they used to do is now automated, but it was the least expert thing that they did. The meaningful thing they did was to reorganize your paragraphs and make sure that you were thinking about the right thing and phrasing things in the right way, not the spelling part.
So, if you look at what happens to them, their least expert tasks got automated. What was left was more expert. And so, because they were using their expert stuff more of the time, their wages have actually gone up faster than the average — but there are now fewer of them.
So, you have this interesting effect where the Uber drivers’ wages went down, but there were more of them. And for the proofreaders, wages went up, and there were fewer of them. And both of those have pluses and minuses.
So, clearly, AI is not the first technology to automate aspects of work in the computer era. But does the same expertise framework hold true further back in history? Would we see similar patterns in the Industrial Revolution, when textile workers' tasks were automated?
One of the examples that my co-author likes to talk about is skilled artisans. Think about the wheelwrights and the blacksmiths and all of those people. These used to be incredibly expert jobs. And through industrialization, we figured out how to do that work on production lines and in other settings where the average expertise was lower, but there were vastly more wheels being produced and vastly more people involved in the production of wheels.
And then, of course, we have lots of modern examples: as automation comes in and some of the things that we do get automated, we actually become more expert in the things we're still doing because we don't have to do the basic things anymore.
Companies like Google and OpenAI are promising that their technology will do much more than automate basic tasks, and they’re spending hundreds of billions of dollars on infrastructure to make it — call it artificial general intelligence or superintelligence — happen. We’re hearing a lot about an AI bubble lately, because it’s not clear if these tools will actually work before the bill comes due. How will we know when AI has proven itself?
I don't think the question is really whether AI is going to prove itself. I think it is clear that these capabilities are improving fast enough. It's going to be incredibly useful, I think, and there's going to be a lot of adoption. There are going to be a lot of benefits that flow from it.
To me, the question in terms of the AI bubble is more about valuations. This is going to be useful, but are these the right valuations? AI is going to matter a lot. It's going to have a lot of these effects. The question is, are we building out even faster than those effects are going to kick in, or the opposite?
A recent Pew Research Center survey showed that Americans are more concerned than excited about the technology. Why is AI so unpopular?
I want to be hesitant about putting myself too much in people's heads, but I think it is understandable that people have anxiety about what AI is going to do and how it's going to change their jobs, because it's a very powerful tool. I think it will change a lot of people's jobs — yours included, mine included.
I think it is particularly hard to be faced with that and not know how much of your job is going to be replaced, or how much you're going to have to adjust in ways that could be painful. I think we will learn more about that in the next little while.
There's a second piece which is really, really hard. Historically, when new technologies have come in and automated things, humans have moved to doing new tasks. New tasks are created that didn't exist before but are actually important for employment. We really don't know what those new tasks are going to be ahead of time. That lack of visibility is a challenge. But it is worth saying that, historically, there's been a remarkable wellspring of new tasks and new jobs that have emerged. And so, I think we should feel confident that there are going to be a bunch of those that will come.
There will be a transition. In many cases, we should think of that as being similar to previous transformations. The question is how fast it happens. If it’s medium- to long-term, humans are pretty good at saying, “Okay, if these are new tasks that we are particularly good at and the technology is not, let's adapt to do those tasks.” But if it happens all at once, and a lot of the transitions and displacement happens in a compressed period of time, that's going to make it much harder for the economy to adjust.
It sounds like you're saying that there's a fear of the unknown, and there are a lot of unknowns right now. But we've gone through major technological transformations before this one. We just don't know how long it will take, or what we'll be doing on the other side of it. That doesn’t sound super comforting.
Let me just add a little twist to that. It is definitely the case that if you look historically, we have seen patterns where new technologies come in. There is some churn in the economy, some people are hurt by that, and we should be cognizant of that. We should expect that could happen now, as well. But in the medium term, we adjust well.
In terms of AI, I think we can take some comfort from those historical lessons. And the question is just: Is AI in some way different from these previous technologies in a way that would make us think we would get a different outcome?
I think the people who think that we're going to get to AGI quickly, their answer would be yes. If it can do everything we can do, and it can do that next year or the year after, that is very different from previous technologies. That makes it pretty hard to adjust. If it rolls out doing some tasks and takes a long time to do other tasks, well then I think we're much more in a world where we can adjust in the way that we have in the past.
—Adam Clark Estes, senior technology correspondent