AI’s rate of progress is hitting a wall. It may not matter.
That was the prevailing theme from this week’s Cerebral Valley AI Summit in San Francisco — a gathering of about 350 CEOs, engineers, and investors in the AI industry that I attended on Wednesday.
Until now, the AI hype cycle has been predicated on the theory that throwing more data and compute at training new AI models will result in exponentially better results. But as I first reported in this newsletter, Google and others are starting to see diminishing returns from training their next models. This proverbial “wall” challenges the assumption that the next crop of major AI models will be dramatically smarter than what exists today.
“The wall has been the topic du jour for the past few weeks,” Scale AI CEO Alexandr Wang, whose company helps OpenAI, Meta, and others train their models, told the conference’s host, Eric Newcomer, during the morning’s first session. “Have we hit a wall? Yes and no.”
While the next AI models from OpenAI, Anthropic, and others may not be significantly smarter than today's, the people building with AI think there is plenty of room to create better experiences with current models. And while the "reasoning" capability OpenAI showed with its most recent o1 model is prohibitively expensive and slow to use right now, it signals a shift everyone seemed to agree on: the next breakthrough will come from making the LLMs of today smarter, not from training dramatically bigger ones.
“There’s a very real shift in what being a frontier lab means,” Wang said onstage. Much of the investment going into AI has been based on the belief that the scaling law “would hold,” he said. Now, “it’s the biggest question in the industry.”
Slowing AI progress may not be a bad thing given how frenetic the past year has been. When I attended the first Cerebral Valley AI Summit in March 2023, Sam Altman had yet to be fired and rehired, Mark Zuckerberg had yet to go all in on giving away Llama, and Elon Musk was calling for a pause on AI development while he was staffing up to create xAI. Stability AI founder Emad Mostaque was onstage boasting that he was “going to build one of the biggest and best companies in the world.” Since then, Stability nearly imploded, and Mostaque is no longer CEO.
The buzz in AI circles is now concentrated on agents — LLMs that can take over a computer and act on one's behalf. There are whispers that Google will show off its Gemini agent next month, followed by an OpenAI unveiling in January. Meta is working on agents. And Anthropic, whose CEO, Dario Amodei, showed up at the end of the day flanked by two bodyguards...