You’re reading Read Max, a twice-weekly newsletter that tries to explain the future to normal people. Read Max is supported entirely by paying subscribers. If you like it, find it useful, and want to support its mission, please upgrade to a paid subscription!

Greetings from Read Max HQ! In today’s edition, an examination of the “A.I. bubble,” various vibe shifts, and the long shadow cast by the crypto bubble over how we talk about it.

A reminder: This newsletter is totally free to read, but it costs me hours of labor to produce: reading, talking, writing, taking long walks, complaining to my wife, staring at the wall, etc. I want to keep these weekly columns free, but in order to do so, because of the nature of the newsletter business, I need to keep growing. If you value what you’re reading here, and find it helpful as you navigate existence--and if you’d also like to receive the second weekly, paywalled newsletter of excellent book-and-movie recommendations--please consider upgrading to a paid subscription. It costs about the price of one beer a month.¹

Eons ago, I wrote a piece called “The A.I. backlash backlash,” about a pendulum swing, then occurring in what I suppose we’d call “the discourse,” against a previously dominant cycle of A.I. backlash (which was itself a reaction to a dominant cycle of A.I. hype dating back to the debut of ChatGPT). At the time, L.L.M. chatbots had improved significantly over the preceding 18 months; many people had managed to incorporate A.I. into their work in ways that seemed useful to them; and the “vibe” in Silicon Valley, as New York Times columnist Kevin Roose wrote at the time, had “shifted” to anticipate so-called “artificial general intelligence” on a short timeline. In the hothouse hubs of A.I. Discourse (X.com, Substack, Bluesky), the hype was bubbling up, and the skeptics and critics seemed to be in retreat. But, as the man says: Want to feel old? That was March. 
In the five months since Ezra Klein wrote in his Times column that “person after person… has been coming to me saying… We’re about to get to artificial general intelligence,” Meta has announced efforts to reorganize and downsize its A.I. division; NVIDIA’s “tepid” revenue forecast is suggesting a broader slowdown; Sam Altman is warning that “investors as a whole are overexcited about AI”; and Gary Marcus, prince of L.L.M. haters, is on his fifth or sixth victory lap. The renewed hype has sputtered; the most fervent enthusiasts have become disillusioned; critics reign triumphant: The backlash to the backlash to the backlash has arrived.

The A.I. vibe shift: technically, economically, geopolitically, and morally

What’s happened? It wouldn’t be wrong to point to widespread disappointment in OpenAI’s new flagship model, GPT-5, as the most important inflection point. Long hinted to be a significant step toward “A.G.I.,” if not the thing itself, and teased by Sam Altman with an image of the Death Star, the model seems to represent, overall, a minor improvement over its predecessors, subject to many of the same problems and errors that have afflicted L.L.M. chatbots since the earliest days. If nothing else it has been a good reminder of how unreliable A.I. researchers and investors can be as judges of the significance of their own work: Remember those whispers and rumors of imminent epochal progress from this past winter, frequently cited to legitimize the new hype cycle? “Are you feeling the AGI?” Well, as it turns out, not really. But beyond the A.I.-enthusiast distress at OpenAI’s uncharacteristic under-delivery, the model’s weaknesses crystallize an existential question for broader investment in A.I. research: Is this about as far as we can go with large language models? If OpenAI, one of the small number of companies that can be said to have the resources (capital, technical, and intellectual) to push L.L.M. 
capabilities forward, is finding itself reaching the point of diminishing returns, how much further can we really go with this paradigm? Accompanying this return to technical skepticism has been a background drumbeat of anxiety about the increasingly large role of A.I. investment in the wider economy. A widely circulated Wall Street Journal article by Christopher Mims from earlier this month put in stark terms the scale of new investment:
Whether this investment is “propping up the U.S. economy,” as Brian Merchant argues, or, alternately, “crowding out other activities,” as Jason Furman has it, it’s a staggering amount of money being put toward a technology whose measurably productive uses are far from clear, and it’s hard to imagine it not affecting the economy in important ways. (At the very least, the increased demand represented by these data centers is likely to drive up electricity prices for Americans.)

Two weeks ago Joe Weisenthal wondered if the poor reception of GPT-5 might intersect with this ambient unease about A.I. investment to produce an elite vibe shift:
I’d say the vibe shift is already on us, as signaled by former Google C.E.O. Eric Schmidt, who took to the Times op-ed pages last week to castigate his tech-industry peers for, effectively, scaring the hoes:
Schmidt--who was claiming “the contours of an AGI future are beginning to take shape” as recently as February--is the definition of an elite bellwether, an enormously respected figure not just on the Burning Man playa but in foreign policy circles, and his op-ed should be seen as more than the idle musings of a respected former businessman uneasy with his peers’ avidity. Henry Farrell argues that it’s best read as the swan song for a totalizing (if short-lived) geopolitical worldview and alliance he calls “tech unilateralism,” built around a firm belief in imminent A.G.I. and now in its twilight:
So, just to recap, over the past month we’ve seen:
This is more than enough to constitute a “vibe shift” on its own, but to the list of suddenly relevant technical, financial, and elite-political concerns I’d add a long-brewing moral concern that’s helping drive and sustain the renewed A.I. backlash: A spate of troubling news stories about delusional and vulnerable people developing unhealthy attachments to, or even being “talked into” harmful behavior by, their chipper A.I. chatbot companions--culminating this week in an almost unreadably tragic and infuriating Times report about a depressed teenager whose eventual suicide was abetted by ChatGPT. Unlike the fantastical warnings of robotic paperclip omnicide--or even the more standard, if still somewhat abstract, fears about people learning how to build chemical weapons from a chatbot--these horrifying stories reveal a set of immediate, potentially mortal, inescapably grim dangers of widespread L.L.M. chatbots, now taking center stage just as whatever ultimate benefits might be said to justify that danger recede into a far more distant future.

Which “A.I. Bubbles” are popping, and how?

But does this mean--as many recent headlines would have it--that the “A.I. bubble is popping”? The answer depends, annoyingly, on how you define “A.I.,” and how you define “bubble,” and, also, how you define “popping.” Is the “A.I.” of “A.I. bubble” the entire field of machine learning? Only large language models? Only chatbots? The implementation thereof into pre-existing software? And is the “bubble” of “A.I. bubble” excessive equity valuations? Inflated expectations for or faith in L.L.M. performance? Excessive industry and management directives around A.I. use? Too many annoying guys on X.com talking too much about “A.I.”? And, maybe most importantly, what would it mean for it to “pop”? A stock market crash and a recession? A few V.C.s losing their shirts? An “A.I. winter”? Google removing “A.I. Overview” and a reversion to the pre-L.L.M. web? Fewer annoying guys on X.com? 
In the financial press, at least, the “A.I. bubble popping” means a sharp decline in equity valuations for A.I.-focused companies. Fair enough: It seems more likely than not that a financial bubble of some size will pop soon--even Sam Altman says so--though the extent of total damage is hard to gauge from here. But I suspect that for many people the “A.I. bubble” is something larger and slightly harder to define than “equity valuations”--a way of articulating and describing the inflated rhetoric and excessive encroachment around A.I. that we’ve all experienced over the last few years. To say “the A.I. bubble is popping” is to say something like “all this L.L.M. bullshit is going away soon.” Unfortunately, fat chance.

Much of the Online A.I. Discourse, and especially the meta-discourse about the Online A.I. Discourse, which is of course what this newsletter specializes in, is occurring in the shadow of the COVID-era crypto mania--the eager promises of the blockchain, the suffocating coverage of “web3,” and the subsequent embarrassment brought about by the implosion of FTX. A few weeks ago I made a cameo appearance on the podcast “Hard Fork,” asking my friends Kevin Roose and Casey Newton, in my capacity as an A.I. critic, how their experience of covering the web3 era had changed their approach to writing about A.I. I was struck by this portion of Kevin’s reply:
There is a pervasive suspicion among A.I. skeptics (many of whom were and are also crypto skeptics) that the A.I. boom is a redux of the crypto boom--which is to say, effectively, a grift forced on consumers and abetted by unwitting journalists and other eager marks. And this suspicion can be converted by the more cavalier A.I. haters on Bluesky into a kind of fantasy of righteous vindication: When the A.I. bubble pops, “A.I.” will be revealed as a scam and a waste of time, and it will “all come crashing down.” Financial ruin for bad actors, eternal shame for duped boosters, and a rollback of all stupid L.L.M. implementations online and at the workplace.

There are good reasons to say that the A.I. bubble and the crypto bubble are fundamentally different. (For starters: large language models, on a technical level, don’t have a speculative securities market literally built in.) But even if you believe that L.L.M.s are a scam and a waste of time, I’m not sure the crypto bubble provides cheering precedent. It’s true that the hype died down, and some people were prosecuted, and major food-and-beverage conglomerates mostly stopped launching memecoins on Twitter. But only two years after the industry’s supposed collapse it “accounted for nearly half of all corporate money” donated to SuperPACs in the 2024 election. If I can quote myself:
One way of thinking about the vibe shift described in the first part of this post is as the broader A.I. “discursive bubble” of hype and prophecy leaking air. (Dave Karpf calls this “the end of naive A.I. futurism.”) But as the crypto precedent suggests, you wouldn’t want to mistake it for the end of A.I., or of its deep-pocketed champions.

¹ There is, in fact, not far from me in Brownstone Brooklyn, a new bar serving $5 draft pints: Doppelganger, on Myrtle. There’s also, obviously, the Alibi, which still served $4 draft Bud pints the last time I was there, though frankly the older I get the less confident I feel in the Alibi’s tap-line cleaning facilities, or in my ability to get in and out of there without getting trapped in a very bad conversation with a regular.