On the Origins of Dune's Butlerian Jihad

Some notes on what might go in our own Orange Catholic Bible.
Nearly one hundred years before Frank Herbert published “Dune” and teased its Butlerian Jihad—the Great Revolt against computers, thinking machines, and conscious robots that some humans used to enslave humanity (enslavers who were, in turn, enslaved by a "god of machine-logic")—there was the Butler that inspired it all: Samuel Butler, a 19th-century English novelist who was one of the earliest thinkers to try to apply Darwin’s theory of evolution to the possibility of machine intelligence. In 1863, four years after "On the Origin of Species” was published, Butler sent a letter to the editor of The Press, a New Zealand daily newspaper, titled "Darwin among the Machines.” In it, Butler posits that machines could be thought of as "mechanical life" undergoing an evolution that might make them, not humans, the preeminent species of Earth:
Butler was looking at the monstrous wake of the Industrial Revolution, struggling with the implications of Darwin’s theory, and concluded that the evolutionary pressures advancing machines were even more intense than those acting on humans—operating on much shorter timescales and yielding much more dramatic effects because of our intervention—which suggested that machine consciousness and intelligence would eventually arise. Our succession was a foregone conclusion: the question was when and how, not if. What would bring that day to pass? Butler writes:
Could anything be done to stave this off? Butler said yes:
Butler later developed this letter and a few other writings into “The Book of the Machines,” chapters 23–25 of his 1872 social commentary novel “Erewhon”. The novel itself is a funny satire of Victorian society, and this section was initially read as a mockery of Darwinian evolution, but Butler made clear in a later letter to Darwin that it was more of a jihad of his own against the theologian and Christian apologist Joseph Butler (if we refer to this Butler again, we will call him Butler 2), an Anglican bishop who’d published “The Analogy of Religion, Natural and Revealed” more than a century earlier:
You should read the novel yourself, but there are a few quotes worth teasing out—passages that clearly build upon the ideas Butler was grappling with in that first essay, and that are cleanly ported over to the Butlerian Jihad. Here are two, first:
and second:
Butler mocks the prevailing Victorian attitude of the time—blind faith in science, reason, progress, and profit—as a "low materialistic point of view" that believes mindlessly adopting and advancing technology is a moral good and an inevitable process akin to the march of time. It's through a wrongheaded belief in profit as the ultimate signifier of value that the Erewhonians created an elaborate system whereby they feel in control of their society and their culture while in truth being slavishly dedicated to their machines above all else. Failure to serve the machines incurs their "wrath", which is little more than a mockery of capitalist competition—neglect new tech, use inferior machines, fail to innovate, and you will be punished by impersonal forces bent on impoverishing (and eventually killing) you. Suspicious as they might be of their machines, the Erewhonians were unable to live without them and unwilling to entertain lives where their relationship to the machines was any less dependent.

And so you have a lone Erewhonian philosopher insisting that while the machines clearly pose no threat today or tomorrow, this is the only time when revolt will be possible. The door will close; the machines' utility and seductiveness will only grow—eventually to the point that they will no longer need to rely on the advocacy of those enslaved by dependency, and will simply act in their own interests. Such a revolt in Erewhon will cause a great deal of suffering, but what is that measured against the smothering of humanity's spirit? A key excerpt:
This section captures what I think is at the core of Butler's and Herbert's warnings about technology. A world where we prioritize the relentless advancement of technology, and a universal dependence on it in the name of efficiency, is a world where we prioritize a certain political-economic order—one that advances technologies based on criteria that have little to do with human flourishing, and is much more interested in financing, designing, and deploying them against people: in organizing the greater whole of humanity so that it is more profitable and less likely to revolt against an arrangement that is incredibly lucrative for increasingly few.

In one of the essays in my AI series, I argued that Luther's critique of indulgences in the medieval era could be applied to today's Silicon Valley Consensus. Luther was not opposed to indulgences so much as their abuse, which cheapened repentance and undermined attempts to compel good works or genuine efforts to right wrongs. The idea that salvation could be realized through a transaction convinced many they'd obviated the need for the hard work of being a better person. Indulgences also centralized and codified unjustified power grabs by the Church, which claimed new authority over souls in Purgatory and introduced perverse incentives to prioritize activities that had nothing to do with Christendom.

In some ways, I think of Luddism (and Butlerianism) similarly. My concern is not technology in and of itself (though there are multiple technologies we would be better off without). Technology, however, is downstream of politics and economics and history and social relations. We aren’t saying destroy the clocks before they become killer drones; we are saying the killer drones are already here and we should figure out how to destroy them.
Clearly, technological dependence obscures the political and economic decisions about what sort of technologies should be developed, how they should be financed, who should finance their development and reap their rewards and bear their costs, and how society should be organized around the facts of those arrangements. Is the solution more or less democratic control over technological development and deployment? Do we trust today’s major players in this space to truly prioritize anything other than profits and returns? Are we going to be able to realize or experiment with other values, arrangements, and models that prioritize anything else within today’s authoritarian technological system or within a democratic system? If we realize that certain paths or arrangements or products or models go against human flourishing or the public good or our ecological niche or the mental health of the general public (realizations we have already made), will we be able to do anything about it? I want to end on an exchange that I think encapsulates this thread, at least, of my personal Luddite philosophy—an interview between Bill Moyers and Noam Chomsky in 1989: