Greetings from Read Max HQ! This week, a collection of thoughts about a new trend in tech criticism, masquerading as a lumpy and overstuffed essay.

A reminder: This piece, and all the pieces you read on the Read Max newsletter, is funded almost entirely by paying subscribers. I am able to write these columns and record the podcasts and videos thanks to the support of nearly 4,000 people who value what I do, for whatever strange reason. Unfortunately, because of the reality of subscription businesses, I need to keep growing in order to not shrink, which means every week I need to beg more people to sign up. Do you like Read Max? Do you find it entertaining, educational, distracting, fascinating, or otherwise valuable, such that you would buy me a cheap-ish beer at a bar every month? If so, consider signing up for the low price of $5/month or $50/year.

A new wave of techlash

The new moderate-liberal Substack publication The Argument ran a fascinating piece last week by civil rights attorney (and Tottenham blogger) Joel Wertheimer arguing that policymakers should “Treat Big Tech like Big Tobacco”:
Wertheimer argues that the famous Section 230 of the Communications Decency Act--which protects companies from liability for content posted by users to their websites--needs to be reinterpreted to exclude “platforms that actively promote content using reinforcement learning-based recommendation algorithms.” I’m not exactly qualified to weigh in on the legal questions, but I find the logic of the argument persuasive in its broad strokes: The idea is that while message boards and blog comment sections--which host third-party speech but do nothing active to promote it--deserve Section 230 protection, platforms that use algorithmic recommendations (i.e. Facebook, Instagram, TikTok, and X.com) are not simply “passively hosting content but actively recommending” it, an act that should be considered “first-person speech” and therefore subject to liability claims.

But what really strikes me about Wertheimer’s piece is the public-health metaphor he uses to explain the particular harms of social-media platforms (and that, in turn, justify his remedy). The contemporary web is bad for us, the argument goes, in the way cigarettes are bad for us: Cheap, readily available, highly addictive, and making us incredibly sick at unbelievably high cost. In this, Wertheimer is following a line of argument increasingly prominent among both pundits and politicians. In April, David Grimes made a less policy-focused version of the same argument in Scientific American; just last month, speaking with Ezra Klein on his podcast, Utah Governor Spencer Cox drew on both Big Tobacco and the opioid industry:
A few days after Wertheimer’s piece, Abundance author Derek Thompson posted a podcast interview with Massachusetts Representative Jake Auchincloss, who has proposed a digital value-added tax designed, like Wertheimer’s Section 230 proposal, to internalize the costs of social media. In his introduction, Thompson directly compared the digital V.A.T. to “sugar taxes and cigarette taxes”:
Comparing Big Social to Big Tobacco (or Big Opioid or Big Sugar) is in some sense a no-brainer, and certainly such analogies have been drawn many times over the last few decades. But the increasing popularity of this conceit is less a coincidence, I’d argue, than a function of the gathering power of a new wave of the now decade-old “techlash.” This burgeoning movement seeks to root criticism of (and response to) Big Tech in ideas of health (public, social, intellectual, and spiritual) and morality rather than size and power, positioning the rise of social media and the platform giants as something between a public-health scare and a spiritual threat, rather than (solely) a problem of political economy or market design. I see versions of this school of thought not just in speeches and op-eds from Auchincloss and Cox or blog posts from Thompson, but in Chris Hayes’ book The Sirens’ Call and in the inescapable work of Jonathan Haidt. (You might broadly think of Hayes and Haidt as representing “left” and “right” tendencies of the broader movement.) Notably, all of the above-mentioned have found platforms on Klein’s podcast. Back in a January interview with Hayes, Klein offered up a kind of political vision or prediction rooted in this tendency:
Thompson dubs this loose movement, or at least the version touted by Auchincloss, “touch-grass populism,” but I think this is wrong: The framework in question is distinctly not “populist” (unlike, say, the neo-Brandeisian “new antitrust” movement that has been a major focus of the “techlash” to date) so much as progressive in the original sense, a reform ideology rooted in middle-class concerns for general social welfare in the wake of sweeping technological change. At its broadest you could maybe call this budding program of restriction, restraint, and regulation “Platform Temperance,” and regard the scattered campaign to ban smartphones in schools as its first stirrings as a movement.

Why Platform Temperance now?

One way of thinking about the past half-decade or so of life on the internet is that we’ve all become test subjects in a grand experiment to see just how bad “good enough” can be. Since Elon Musk’s purchase of Twitter in 2022, and the subsequent industry-wide cutbacks to “trust and safety” teams meant to moderate content, most of the major social platforms have been flooded with fraud, bait, and spam--a process exponentially accelerated by the arrival of ChatGPT and its generative-A.I. peers. Take, e.g., these YouTube ads discovered by Bluesky user Ken Plume, who posted: “Another bizarre A.I.-generated *paid ad* currently running during videos on YouTube - And, again, what the hell is going on with their vetting process (or lack thereof)...”

As I tweeted at the time, I’ve been covering tech companies for years and I still find myself taken aback at how completely they’ve abdicated any kind of oversight or moderation. Setting aside the increasingly toxic and directly corrosive “politics” now inescapable on social platforms, Facebook and Instagram and YouTube are utterly awash in depressing low-rent non-political slop, and no one who owns, runs, or even works at these platforms seems even to be embarrassed, let alone appalled.

But why would they be? People working at tech giants are watching the metrics and seeing that the depressing low-rent slop is getting engagement--probably even to a greater extent than whatever expensive, substantive, wholesome content it’s being placed next to on the feed. Their sense, backed up by unprecedentedly large data sets, is that slop of various kinds is what people want, because it’s what they click on, watch, and engage with. (I would even go so far as to suggest that some portion of Silicon Valley’s broad reactionary turn since 2020 can be chalked up to what I think of as “black-pilling via metrics”: The industry’s longtime condescension toward its users finally curdling into outright contempt.)

For much of the past decade, this revealed preference for fake news, engagement bait, sexualized content, and other types of feedslop has been blamed on “platform manipulation”: Bad, possibly foreign actors were “manipulating” the platforms, or, worse, the platforms themselves were “manipulating” their users, deploying “dopamine feedback loops” and “exploiting a vulnerability in human psychology,” as Sean Parker said back in 2017. But these accounts have never been wholly satisfying: Too technical, too determinist, too reliant on the idea that there is “authentic” or “innocent” desire being “manipulated.” In some sense they don’t blame us, the users, enough, or assign us the kind of agency we know we have.
Most of us are aware from everyday experience that even absent “manipulation” we desire all kinds of things that are bad for us, and that we give in to temptation or resist it based on any number of factors. This, I think, is the basic dynamic from which Platform Temperance evolves: a general, non-partisan, somewhat moralistic disgust at even the non-political outcomes of unregulated and unmoderated platforms; a dissatisfaction with both the “revealed preference” framework and the more rigidly behavioralist explanations of platform activity; and a sense of an accelerating downward spiral with the advent of generative A.I., which both further debases the platforms and provides a possibly even more tractable and dangerous user experience itself. In response, Platform Temperance offers a focus on health, social welfare, and the idea of discipline and restraint in the face of unmoderated consumption--that is, temperance.

The politics of platform temperance

There’s another aspect to mention here. Platform Temperance as it has evolved recently seems to be largely a school of thought from the (broad) center--adjacent, in its membership and institutional affiliations, to the “Abundance” faction of elite politics. There’s an obvious cynical reading here: As Klein says, this “political space is weirdly open,” and “somebody is going to grab it.” Thompson’s podcast interview with Auchincloss is framed around the idea of Platform Temperance (or “Touch-Grass Populism”) as a “big idea” around which moderates can rally:
It’s easy, given this kind of positioning, to read Platform Temperance as a new front in an ongoing factional war within the Democratic party--and, indeed, the tendency is often pointedly positioned against the more populist and anti-establishment New Antitrust movement that has been among the most prominent strains of the Techlash thus far. But while the political valence is important for context, I don’t think Platform Temperance is wholly (or even mostly) a cynical Trojan horse for intra-Democratic political battles. Versions of the ideas, remedies, and rhetoric emerging from the big tent that I’m calling Platform Temperance have been circulating in the Techlash for many years, usually from left-wing critics and academics. I’m thinking here of, e.g., James Bridle’s New Dark Age, Jenny Odell’s How to Do Nothing, and Richard Seymour’s The Twittering Machine, three excellent books that take psychosocial and psychoanalytic approaches to the problems posed by platforms. In the early days of this newsletter I myself made a kind of proto-Platform Temperance argument under the headline “Maybe we need a moral panic about Facebook”:
I still believe--as Klein does--that there is a lot of political power in harnessing people’s deep ambivalence about (or outright disgust with) a platform-mediated social, cultural, and political life. And I often find myself eager to reach for public-health metaphors when discussing the experience of life under the thumb of the software industry, if not outright spiritual ones. (I don’t believe in the “soul,” but I am hard-pressed to think of a better way to succinctly describe the effects of, say, TikTok than to say it’s bad for your soul.) But I also want to be conscious of how easy it is for this kind of rhetoric to slip into reactionary moral panics. As David Sessions recently wrote:
I think Platform Temperance as an affective and political framework has a lot to offer tech skeptics, critics, and “the left” more broadly. But like the progressive movements that emerged in the late 19th century, it can produce both grounded, persuasive, important liberal-technocratic visions, and paternalistic, pseudoscientific moral panics. We should be careful about which we’re pursuing.

