■ In this week's AI Lab: Large language models experience cognitive decline when fed low-quality, viral content. Also: Emergent behavior in AI agents; how to scale reinforcement learning; and an impressive new image generator.
A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too long doomscrolling on X or TikTok.
“We live in an age where information grows faster than attention spans—and much of it is engineered to capture clicks, not convey truth or depth,” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. “We wondered: What happens when AIs are trained on the same stuff?”
Subscribe today to continue receiving my full newsletter each week. I've been writing about AI and related themes for over a decade (since neural networks were considered a dead end, in fact). I'm fascinated by how innovation occurs and how it affects the economy. And I want to understand the global picture, rather than just the world as viewed through the lens of Silicon Valley.
So, if you're keen to keep track of advances from the world of computing—AI, robotics, quantum, chipmaking, and more—that have the potential to transform the way we live ... this is the newsletter for you.