■ In this week's AI Lab: The most computationally intensive AI models may have less of an edge. Also: Google's teams of tool-using agents, Microsoft's cutting-edge image model, and a new way to jailbreak LLMs.
A new study from MIT suggests the biggest and most computationally intensive AI models may soon offer diminishing returns compared to smaller ones. By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring leaps in performance from giant models, while efficiency gains could make models running on more modest hardware increasingly capable over the next decade...
Subscribe today to continue receiving my full newsletter each week. I've been writing about AI and related themes for over a decade (since neural networks were considered a dead end, in fact). I'm fascinated by how innovation occurs and how it affects the economy. And I want to understand the global picture, rather than just the world as viewed through the lens of Silicon Valley.
So, if you're keen to keep track of advances from the world of computing—AI, robotics, quantum, chipmaking, and more—that have the potential to transform the way we live ... this is the newsletter for you.