You’re reading Read Max, a twice-weekly newsletter that tries to explain the future to normal people. Read Max is supported entirely by paying subscribers. If you like it, find it useful, and want to support its mission, please upgrade to a paid subscription!

Greetings from Read Max HQ, now returned to its rightful place in Brooklyn! Today, we’re writing about Luke Farritor, “cracked coders,” and “epistemic arrogance.”

Read Max, in case you have forgotten, is funded almost entirely by paying subscribers. The reading, writing, panic attacks, anxiety scrolling, spacing out, etc. that go into any given newsletter are extensive, and the support of generous readers helps make this a full-time job. Think of it this way: If you’re a free subscriber who’s found Read Max entertaining, informative, enjoyable, distracting, “not the worst thing you’ve read,” you can do the monetary equivalent of “buying me a beer” by subscribing, for almost the exact price of a beer. Plus, paying subscribers get a second weekly newsletter featuring recommendations for underrated and overlooked movies, books, and music.

This week Bloomberg published a write-around profile of Luke Farritor, maybe the most prominent non-testicular name among the army of young men hired out of the tech industry to staff Elon Musk’s Department of Government Efficiency in the early days of the Trump administration. Farritor was relatively well known even before his involvement with D.O.G.E. because he was a member of the team that won the Vesuvius Challenge Prize in 2024, having written the script that identified legible characters in the famously indecipherable Herculaneum Papyri. The framing of the article (written by Susan Berfield, Margi Murphy, and Jason Leopold) is something like “How did this ambitious, smart, super-online, home-schooled libertarian kid who won a scientific-A.I. prize end up illegally gutting USAID and the N.I.H.?” The answer, of course, is that he’s ambitious, smart, super-online, home-schooled, and libertarian; the reporting on Farritor’s pre-D.O.G.E. life suggests that he’s always been “cracked” (i.e., a great and obsessive programmer), hard-working, and abrasive, easily influenced by the whims and obsessions of his Twitter feed and his tech heroes:
Maybe unsurprisingly, his former classmates seem somewhat skeptical of Farritor’s sudden rise to power:
As interesting as the article itself, I would say, was the reaction from the broad universe of Musk-sympathetic tech posters on Twitter. Casey Handmer, a V.C. and Vesuvius Challenge collaborator who was quoted in the Bloomberg piece, complained on Twitter that the article was “irresponsible… defamatory” and “brings dishonor upon the profession” of journalism:

The anti-woke tech publicist Lulu Cheng Meservey, meanwhile, singled out for ridicule an anonymous government source who told Bloomberg’s reporters that “Luke’s résumé didn’t pass muster” because “you have to bring some expertise”:

As many have already pointed out on Twitter, what is so particularly upsetting about the article to people like Handmer and Meservey is not that it fails to credit Farritor with intellectual ability (it does, consistently), or that it fails to contextualize his glib cruelty with reference to the correct fever-dream conspiracy theories that would justify it, but that the straightforward facts of the D.O.G.E. saga challenge one of the fundamental beliefs of the new Tech Right: that a sufficient number of sufficiently “cracked” programmers can solve any problem put in front of them. Or, put more broadly, that “intelligence” is a quality measurable on a single scale and equally applicable across all spheres of human activity.

This is so obviously wrong it seems strange to even have to describe why: Writing a Python script to identify Greek characters, impressive though it certainly is, doesn’t translate in any direct way into “administering budget cuts across a range of government agencies.” But in Silicon Valley, steeped in I.Q. fetishism, an obsession with “agency,” and a moral universe still governed by fantasy high-school resentments, the belief that (heritable) single-vector “intelligence” endows one with full-spectrum authority (and, inversely, that failure to demonstrate this intelligence is delegitimizing) holds sway.
“Just put 10 cracked programmers in charge of it” has become the (admittedly at least somewhat trollish) stance of the Tech Right when faced with any sufficiently un-deferential institution, enterprise, or bureaucracy.¹ (Politically speaking, this idea overlaps appealingly and naturally with the widespread low-information-voter belief that a single sufficiently driven and common-sensical guy could “fix” the government--see, e.g., the movies Swing Vote, Dave, Man of the Year, or any interview with a swing Trump voter.)

But as the article pretty plainly demonstrates, D.O.G.E. is the highest-profile and most consequential example of how ineffective and destructive this idea is. The agency’s failure to succeed even on its own terms (it didn’t come anywhere close to making cuts of the size initially promised by Musk), and the fact that its legacy is, at best, the needless deaths of hundreds of thousands of people around the globe, are about as clear an indication as possible that “put 10 cracked programmers in charge” (let alone just one) is not a good solution to basically any problem faced by any large organization, and especially not by particularly complex and sensitive ones like the U.S. government. Farritor is certainly very smart, in the sense of being a good programmer and problem-solver. But working effectively in a high-level position within a complex bureaucracy requires not just “cracked” coding ability and work ethic but domain expertise, relevant experience--and even those widely derided “soft skills.”

One particularly noteworthy aspect of both the article and the reaction to it was the way it revealed what I suppose you’d call epistemic arrogance--a total lack of curiosity about how the agencies D.O.G.E. was gutting might actually work, or why a government source might suggest that even an impressive “Python A.I. thing” is not a sufficient C.V. for this kind of government work.
These tweets from Bo Austin help get at what I mean:

Epistemic arrogance is baked into the culture of Silicon Valley: Blind, foolhardy confidence may be terrible for operating within large and intricate systems, but it’s great for founding and investing in regulations-flouting software companies. Many of the industry’s leading lights are proud ignoramuses, completely unaware of the gaps and blind spots in their knowledge, and ambitious young hackers and programmers are no doubt modeling their own attitudes toward the world on the overconfident performance of genius by people like Elon Musk.

What seems particularly striking about this arrogance at this moment, though, is the extent to which it’s also baked into--and reinforced by--the L.L.M.-based chatbots now driving billions of dollars of investment. L.L.M.-based chatbots are effectively epistemic-arrogance machines: They “themselves” have no idea what they “know” or don’t, and in many circumstances will generate baldly incorrect text rather than admit to lacking knowledge. Their accuracy has improved significantly over the past three years, but an L.L.M. chatbot fundamentally can’t know what it doesn’t know. Worse, they’re all but designed to accommodate and even reinforce the epistemic arrogance of their users, obsequiously confirming their insights, praising their fluency, and overlooking their blind spots and knowledge gaps. I was agog, for example, to read that Travis Kalanick recently claimed that he’s come “close to some interesting breakthroughs” in quantum physics just by conversing with Grok--truly the definition of a man who doesn’t know what he doesn’t know:
What’s strange here is that epistemic humility is in some sense the single most important skill necessary to make good use of these chatbots--an awareness not just of their limitations but also of your own. Unleashing the bots among a population of billionaires who share their precise weakness seems like a good way to compound the cruelty and destruction.

¹ I suspect the recent popularity of this sentiment in Silicon Valley is, like many other aspects of the tech reaction of the 2020s, fundamentally a function of the business environment of the software industry: The fast-growing, relatively lawless startups for which Silicon Valley was known in the 2010s have evolved into a small number of large, slow-moving bureaucracies. The fantasy of replacing restive employees and annoying H.R. departments with a handful of “cracked” 22-year-olds must be a powerful one, both to executives and to the 22-year-old programmers themselves, whose job prospects are otherwise fairly dismal.