You’re reading Read Max, a twice-weekly newsletter that tries to explain the future to normal people. Read Max is supported entirely by paying subscribers. If you like it, find it useful, and want to support its mission, please upgrade to a paid subscription!

Greetings from Read Max HQ! In today’s newsletter: GPT-5, Meta’s A.I. policies, and why A.I. is a “normal” technology in a bad way.

A reminder: Read Max is a family business, by which I mean it’s just me, Max, and I rely on paying subscriptions to fund my lavish lifestyle (buying my four-year-old the slightly more expensive tortillas for the cheese quesadillas that are currently his entire diet). I’m able to produce between 3,000 and 5,000 words a week for the newsletter because enough people appreciate what I do to furnish a full-time salary, but because of the basic economics of subscription businesses, I always need new subscribers. If you like Read Max--if you’ve chuckled at it, or cited it, or gotten irrationally mad at it in some generative way--please consider paying to subscribe as a mark of value. At $5/month and $50/year--cheaper than most other similarly sized Substacks!--it costs about the equivalent of buying me one beer a month, or 10 beers a year.

Back in April, the Princeton professors Arvind Narayanan and Sayash Kapoor (whose grounded and well-informed newsletter “A.I. Snake Oil” has been a valuable resource over the last few years) wrote a paper called “AI as Normal Technology,” the main argument of which is that “A.I.”--for the purposes of this post interchangeable with the large language models trained and released by OpenAI, Anthropic, Google, etc.--is, well, “normal”: not apocalyptic, not divine, not better-than-human, not inevitable, not impossible to control. “[I]n contrast to both utopian and dystopian visions of the future of AI,” they write,
Thanks to the ongoing distorting effects of social media on the populace, I find myself sympathetic to basically any argument that boils down to an exhortation to “please act normally.” But I think Narayanan and Kapoor’s argument is convincing on its own merits, and, indeed, increasingly confirmed by events.

Take, for example, OpenAI’s recent release of its new state-of-the-art model GPT-5: long rumored to be the model that achieves “A.G.I.” (or, at least, a significant step thereto), it is, instead, a pretty normal upgrade: improvements, but no substantial new features or achievements. Rather than being blown away or terrified, many users seemed to be bored or annoyed by the new model, in a manner highly reminiscent of the short-lived complaints that tend to follow whenever Facebook or Instagram makes user-experience changes. What could be more normal than that? On Reddit, you could find people making their own normalizing mental adjustments around the tech: “I'm a lot less concerned about ASI/The Singularity/AGI 2027 or whatever doomy scenario was bouncing around my noggin,” read one takeaway from a highly upvoted post.

But what else might “normal” mean besides “not literally apocalyptic”? Some of the disappointment around GPT-5 had less to do with its capabilities in the abstract than with the voice and personality affected by the ChatGPT chatbot: less sycophantic, less fawning, less friendly than GPT-4. As Casey Newton wrote:
Ryan Broderick puts it a little more bluntly, in a post titled “The AI boyfriend ticking time bomb”:
That some significant portion of OpenAI’s consumer base is using ChatGPT not so much for the expected “normal” uses like search, or productivity improvements, or creating slop birthday-party invitations, but for friendship, companionship, romance, and therapy certainly feels abnormal. (And apocalyptic.) But this is 2025, and intense, emotional, addiction-resembling attachment to software-bound experience has been a core paradigm of the technology industry for almost two decades, not to mention a multibillion-dollar business model. Certainly, you will not find me arguing that “psychosis-inducing sycophantic girlfriend robot subscription product” is “normal” in the sense of “acceptable” or “appropriate to a mature and dignified civilization.” But speaking descriptively, as a matter of long precedent, what could be more normal, in Silicon Valley, than people weeping on a message board because a UX change has transformed the valence of their addiction?

In general, OpenAI has liked to present itself as anything but normal--a new kind of company producing a new kind of technology. Altman still likes to go on visionary press tours, forecasting wild and utopian futures built on A.I. Just this week he told YouTuber Cleo Abram that
But far from marking a break with the widely hated platform giants that precede it, the A.I. of this most recent hype cycle is a “normal technology” in the strong sense that its development as both a product and a business is more a story of continuity than of change. “Instead of measuring success by time spent or clicks,” a recent OpenAI announcement reads, “we care more about whether you leave the product having done what you came for”--a pointed rebuke of the Meta, Inc. business model. But as Kelly Hayes has written recently, “fostering dependence” is the core underlying practice of both OpenAI and Meta, regardless of whether the ultimate aim is to increase “time spent” for the purpose of selling captured and surveilled users to advertisers, or to increase emotional-intellectual enervation for the purpose of selling sexy know-it-all chat program subscriptions to the lonely, vulnerable, and exploitable:
ChatGPT and its ilk may yet be worse for humans than social media as such. The explosion of anger from the, ah, A.I.-soulmate community comes on the heels of a series of increasingly difficult-to-ignore reports of chatbot-induced delusion, even among people not otherwise prone to psychosis. But even if L.L.M. chatbots are meaningfully worse for their users’ mental health, they also follow in the fine Silicon Valley tradition of delusion-amplifying machines like Facebook and Twitter. The extent to which social media can reinforce or escalate delusions, or even induce psychosis, has been well documented by psychiatrists over the last two decades, so it’s hard to say that ChatGPT is anything but “normal” in this particular sense. Even the features designed to combat ChatGPT abuse--“gentle reminders during long sessions to encourage breaks” and “new behavior for high-stakes personal decisions,” announced by OpenAI two weeks ago--slot into a long tradition of “healthful nudges” like TikTok’s “daily limits” and Instagram’s “Take a Break” reminders, deployed by social platforms in response to public sentiment and critical press, as listed by John Herrman here.

Indeed, the most obvious evidence that L.L.M.s are “normal” is that each of the dominant social-platform software companies is happily training and releasing its own models and its own chatbots, which they all clearly believe fit cleanly within their existing businesses. Meta seems to be particularly focused on romance-enabled chatbots and meeting what Mark Zuckerberg has identified as “the average person[’s] demand for meaningfully more” friends, and Reuters’ Jeff Horwitz recently published excerpts from the company’s A.I. ethics policies (which Meta says it is in the process of revising):
It is hard to see how limning the boundaries of automated “sensual chat” with vulnerable preadolescents will lead to college graduates getting jobs in space by 2035. But it’s very easy to see how the Facebook of 2015 got from there to here. Pushing your business to exploit social crises of which it was a significant driver by deploying dangerously tractable and addictive products with few consistent guardrails is wildly cynical, misguided, pernicious, and depressing. It’s also, unfortunately, extremely normal.