By Adrian Kosmaczewski, July 7th, 2025
Welcome to the 82nd issue of De Programmatica Ipsum, about Futurism.
In this edition:
Download this issue in DRM-free PDF or EPUB format, and read it on your preferred device. You can also subscribe to our RSS feed, featuring the full content of our articles.
We would like to thank our patrons who generously contribute every month (or have contributed in the past) to our work and help us run this magazine. Thank you so much! In alphabetical order: Adam Guest, Adrian Tineo Cabello, Benjamin Sheldon, Christopher Nascone, Colin Powell, Franz Lucien Moersdorf, Guillermo Ramos Álvarez, Jean-Paul de Vooght, Dr. Juande Santander-Vela, Patryk Matuszewski, Paul Hudson, Quico Moya, Roger Turner, Szymon Licau, and countless more leaving anonymous tips every month.
Enjoy this issue! Please subscribe to our free newsletter to stay updated about new releases, share the articles on social media, or contribute if you would like to support our work with a donation via Liberapay.
Cover photo by Viva Luna Studios on Unsplash.
The Best Way To Predict The Future
By Adrian Kosmaczewski, July 7th, 2025
In a scene of the vastly underrated 2003 sequel film “The Matrix Reloaded”, Neo, played by Keanu Reeves, meets “The Oracle”, a sentient program portrayed by the late Gloria Foster. In that scene, The Oracle offers Neo a piece of candy, and Neo asks whether she knows if he is going to accept it or not. The unfazed Oracle responds, “Wouldn’t be much of an Oracle if I didn’t!”
We human beings are obsessed with the future, to the detriment of our present. Even though our current scientific observations confirm the second law of thermodynamics and its famous corollary, namely the direction of the arrow of time as explained by Stephen Hawking, we have dreamt of predicting the future since the most ancient of times, and we spend inordinate amounts of energy trying to model and, ultimately, at least partially control it.
In the software industry, this obsession begat Gartner hype cycles; Agile poker planning sessions; Waterfall specification documents; financial forecasting on Excel spreadsheets; Steve McConnell’s book “Software Estimation: Demystifying the Black Art”; freely downloadable analysis papers in PDF format; Barry Boehm’s COCOMO models; and countless predictions on specialized press outlets, including many hilariously ridiculous or downright failed ones.
Of course, it is too easy to laugh at those failed predictions from the vantage point of 2025 (just like Bret Victor once did), but we know that Schadenfreude is another key component of our psyche. Let us review some famous ones.
Ken Olsen, founder of Digital Equipment Corporation, allegedly said in 1977 that he “couldn’t see any need or any use for a computer in someone’s home”. He was right insofar as very few people have a PDP-11 at home these days.
Of all people, it was Robert Metcalfe who predicted in 1995 that the Internet would go “spectacularly supernova and collapse” one year later. To his credit, he ate his words in public in 1997 (and we mean this in the most literal sense of the verb “eating”, with a blender and everything).
Nobel Memorial Prize in Economics winner Paul Krugman stated in 1998 that by “2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s”. Instead, that year we got Ruby on Rails.
Microsoft CEO Steve Ballmer was quoted in Wired Magazine in 2007 claiming that “there’s no chance that the iPhone is going to get any significant market share”. Seven years later, at a psychotherapy session with Charlie Rose, he finally acknowledged the blunder.
Even highly respected computer scientists can fail in this area. We have quoted in a previous article the 2006 paper “A View of 20th and 21st Century Software Engineering” by none other than Barry Boehm himself:
Assuming that Moore’s Law holds, another 20 years of doubling computing element performance every 18 months will lead to a performance improvement factor of 2^(20/1.5) = 2^13.33 ≈ 10,000 by 2025. Similar factors will apply to the size and power consumption of the competing elements.
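Boehm’s arithmetic is easy to check: twenty years of doublings, one every eighteen months, amounts to 20 / 1.5 ≈ 13.33 doublings, and 2 raised to that power lands a little above ten thousand. A quick sketch in Python (purely illustrative; the variable names are mine, not Boehm’s):

```python
# Boehm's 2006 back-of-the-envelope projection: performance doubles
# every 18 months (Moore's Law) over a 20-year horizon (2005-2025).
years = 20
doubling_period_years = 1.5

doublings = years / doubling_period_years   # ~13.33 doublings
improvement_factor = 2 ** doublings         # ~10,321

print(f"{doublings:.2f} doublings -> factor of {improvement_factor:,.0f}")
```

Which is where his rounded figure of 10,000 comes from. Whether the premise held through 2025 is, of course, another matter entirely.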
Oh, and did I mention that it has been “the year of Linux on the desktop” for the past 25 years already?
You get the idea. But none of this is new; historically, most disruptive technologies have been attacked as soon as they appeared, as the examples of television, cars, and packet-switched networks show. This phenomenon was very well explained by E. T. Jaynes in a brilliant (and funny) paper titled “Notes on Present Status and Future Prospects”, published in 1991:
The Establishment and the lunatic fringe have the common feature that they do not understand the new idea, and attack it on philosophical grounds without making any attempt to learn its technical features so they might try it and see for themselves how it works. Many will not even deign to examine the results which others have found using it; they know that it is wrong, whatever results it gives.
Le sigh.
We also have those colorful predictions that have stuck in the collective psyche, are abundantly cited in pretty much every blog post or conference talk, yet have been proven to be fabulously inaccurate. Like the one attributed to IBM founder and president Thomas Watson Sr., stating that “there is a world market for maybe five computers” or that one allegedly uttered (but later debunked) by Bill Gates, that “640K ought to be enough for anybody”.
Many of these predictions have been stored in an aptly-named “Predictions Database”, available on Elon University’s website. (Do not worry, nothing to do with the other Elon, just an unfortunate coincidence. Unfortunate for the University, that is.)
Of course, not everyone gets predictions wrong; in particular, Popular Mechanics got at least one right in March 1949:
Where a calculator like ENIAC today is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1000 vacuum tubes and perhaps weigh only 1½ tons.
Tru dat. The Lenovo laptop I am using to write these words definitely has fewer than 1,000 vacuum tubes, and does not even weigh 1½… kilograms.
Despite the obvious shortcomings of human prediction capabilities, the field of technology is filled with futurism. Programmers need to plan which programming language (or LLM) to learn next. Project managers need estimations to keep their stakeholders happy (and their jobs secure). Businesses need forecasting to plan their course of action and their strategy.
Where there is demand, there is a market. Let us review some famous examples.
In 1952, Ida Rhodes gave a talk in Los Angeles (followed by its eponymous article) titled “The Human Computer’s Dreams of the Future”.
J. C. R. Licklider published at least three ground-breaking documents describing the computing of the future: the “Man-Computer Symbiosis” paper of 1960, the “Intergalactic Computer Network” 1963 memo, and the book “Libraries of the Future”, published in 1965. (And that is without counting “The Computer as a Communication Device” co-authored with Robert W. Taylor in 1968.)
Martin Greenberger published in 1964 “Computers and the World of the Future”, including papers by C. P. Snow, Vannevar Bush, J. C. R. Licklider, Grace Hopper, Alan Perlis, Claude Shannon, John Kemeny, and many more luminaries.
That same year, Herbert Marshall McLuhan published “Understanding Media: The Extensions of Man”, a book that activated more than a few synapses among technologists, introducing words and concepts such as “media”, “information age”, and “global village” into the global lexicon.
The World Future Society was founded in 1966, an organization of which Carl Sagan and Peter Drucker were members, and which was the publisher of a magazine named “The Futurist” from 1967 to 2015.
Jean Sammet published in 1972 a paper titled “Programming Languages: History and Future”. Spoiler alert: she did not mention Java nor Rust. Instead,
The major broad concepts that we should expect to see in the future are: (1) use of natural language (e.g. English), (2) user defined languages, (3) nonprocedural (sic) and problem defining languages, (4) an improvement in the user’s computing environment, and (5) new theoretical developments.
The ultimate ease of communication with the computer allows the user to specify his instructions–or wishes–in a natural language such as English.
Or, as the cool kids call it nowadays, vibe coding.
In September 1991, Scientific American published a seminal article by Mark Weiser called “The Computer for the 21st Century”. Two of the article’s pictures show various people using tablets, styluses, and interactive whiteboards.
From 1978 to 1995 you could read articles about the future in the now extinct Omni Magazine, largely replaced by Wired Magazine since then.
In 1999 John Naughton published “A Brief History of the Future”, telling the story of the Internet and its possible future impact.
Lawrence Lessig published in 2002 “The Future of Ideas”, explaining the potential of societal change at the beginning of the 21st century, largely thanks to the aforementioned Internet.
In 2021, Ion Stoica and Scott Shenker argued that we were moving “From cloud computing to sky computing”. I suppose Kubernetes will still be around.
Last but not least, a few months before the publication of the article you are reading, a team of researchers released “AI 2027”, an overly optimistic and quite biased report featuring a suite of scenarios, supposedly showcasing the disruptive potential of LLMs in our near future. Around the same subject, we could not leave out “Thousands of AI Authors on the Future of AI”, a paper published in 2024.
Instead of such toxic positivity around the hype of AI, I would rather yield here to Luc Julia. He is a former designer of Apple’s Siri, and the author of a book beautifully titled “Artificial Intelligence Does Not Exist”. In a recent hearing at the French Senate, Mr. Julia said (translated from the original French):
I don’t claim to predict the future for the next thousand years, and it’s possible that quantum physics will open up new possibilities. The fundamental difference is that quantum physics is a branch of physics, while mathematics is only an approximation of physics; it attempts to describe the world, but it is not the world. Therefore, in statistics, 100% does not exist, and perfection is unattainable.
But talking about AGI means talking about perfection, about an entity that would do everything, always better than us. This is mathematically impossible. A concrete example of AGI, in a specific field, would be the level 5 autonomous car that Mr. Musk has been promising us since 2014. As a reminder, there are five levels of autonomy; level 5 represents complete autonomy, a vehicle capable of getting from point A to point B without ever hitting anyone along the way. However, this car does not exist and, as I will demonstrate mathematically, it never will.
(Which means my open letter to a future AGI will remain unread. Maybe it is better like that.)
How can anyone spot plausible futures among a sea of predictions? Who can you believe? Well, as is often the thesis in this magazine, we argue that the study of the past might provide some clues as to the ever-changing direction of the arrow of time.
So here is a suggestion, since we happen to live in the world of the future as dreamt by our ancestors of 1950 (let us be honest: bar its lack of subspace band support, the iPhone looks much better than the clunky communicator used in the original Star Trek TV series of the 1960s). We can analyze the problem of the viability of predictions from two perspectives: the scientific one, and the fantastic one.
On the scientific side, we could start with a book that MIT Press published in 2021: “Ideas That Created the Future: Classic Papers of Computer Science”, edited by Harry R. Lewis and including material from antiquity to the late 1970s. This might be a good starting point; these papers, each in its own way, and unbeknownst to their authors, described a possible future, in some cases with uncanny precision. Of course, caveat lector: survivorship bias applies here. The ideas enumerated are those that stood the test of time; nothing else.
On the fantastic side of the spectrum, “Science Fiction and the Prediction of the Future: Essays on Foresight and Fallacy”, a 2011 book edited by Gary Westfahl, Wong Kin Yuen, and Amy Kit-Sze Chan, would serve to understand the other side of the coin. In this case we will see a list of outlandish ideas that did not stand said tests of time or technological feasibility, yet opened up serious questions about our future.
The precision of our analysis notwithstanding, the sad truth is that the future looks bleak and insurrectional, particularly for those in the USA. As these words hit the web, the SCOTUS has effectively abolished universal injunctions to presidential power, thereby kicking off the democracy overrule process we predicted exactly three years ago in this magazine–needless to say, a prediction we would have loved to get wrong. And there is more to come. What will happen now? As Yoda famously said in front of the (then) Senator Palpatine,
The dark side clouds everything. Impossible to see the future is.
Hear you we do, Master Yoda.
Paraphrasing Alan Kay and William Gibson, instead of predicting those countless techbro utopias and committing to what is probably the very worst of them, as technologists we should have worked harder to invent precisely one future: without fascism, without human rights violations, without hunger, and without war. A future that would be evenly distributed for everyone on Earth to enjoy.
But just as the Oracle said to Neo in the scene quoted at the beginning of this article, as a society we have already made quite a few questionable choices. It is now our responsibility to understand that those choices were terrible, and then, maybe, we could choose better the next time–well, unless we are really plugged into a highly skeuomorphic Matrix that has been pulled over our eyes to blind us from the truth.
Cover photo by Deepak Gupta on Unsplash.
By Adrian Kosmaczewski, July 7th, 2025
As in any human group, there are certain pilgrimages, certain rites of passage that all software developers apparently must go through. They are varied and equally anecdotal: to compile and boot their own Linux kernel; to read “The Art of Computer Programming” in its entirety (and then send their CV to Bill Gates); to give a presentation at a conference; and finally, to sit through the whole duration of “The Mother of All Demos”, the recording of the unavoidable presentation made by Douglas Engelbart, Bill English, and their team. And this month’s Vidéothèque entry is, precisely, said recording.
There is not a lot that can be said about this demo that has not been said; the references to that event are vast and many of the attendees are still alive. There is a whole Wikipedia page about it; be our guest.
Not only that, but there are countless accounts of that demo in the literature, the most compelling being “The Dream Machine” by M. Mitchell Waldrop, the subject of this month’s Library entry. It dedicates a full section of chapter 7 (pages 280 to 286), aptly titled “ARPA’s Woodstock”, to the session, describing every step of the demo in outstanding detail while also providing a funny look behind the scenes (spoiler alert: a lot of sweat was poured by a lot of people to make that event happen).
Another noteworthy example would be Engelbart’s biography in chapter 8 (“The Personal Computer”) of the 2014 book “The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution” by American biographer Walter Isaacson (of “Steve Jobs” fame).
Instead, and following the style of this magazine, I will be very blunt: “The Mother of All Demos” is a fascinating, albeit quite tedious, video to go through. Why? Well, because we are in 2025, and the world that Doug (everyone calls him that, so I guess we will too) showed in that historical session is… well, in front of you.
Here is the thing: the video describes, at an excruciatingly slow pace (at least by the standards of our generation’s TikTok-formatted brains), with the terms and phraseology of 1968, and over the course of no less than one and a half hours, the very computer you are using to read this article. Yes, the one on your lap or on your desk. Hyperlinked documents, word processors with outlining capabilities, a mouse next to the keyboard, command menus on a “bitmap screen”, live editing with remote participants, and so much more.
I know: booooooooriiiiiing, but with fascinating sound effects all along; every mouse click and command generating its own beep, sometimes betraying the limited computing power available at the time.
To really appreciate the video, we must understand the context: this was December 1968 (a year we have often referred to as an annus mirabilis in this magazine). Computers had existed for only slightly more than 20 years at this point; the world was aghast after the assassinations of Martin Luther King and Robert F. Kennedy; the Vietnam War was raging and about to get worse; Apollo 8 had not even taken off yet, and I guess Margaret Hamilton was still busy debugging the lunar module code. It might sound stupid to make this point but, hear me out again: computers, back in those days, were not interactive. Most of them were IBM machines, and most of them were fed JCL scripts, triggering the execution of some batch job or this or that COBOL program. Here is a bunch of data in a big file, here is a reference to a COBOL program, please process payroll, thankyousomuch.
The very idea of a “personal computer” providing live feedback to the user was preposterous. Hence, the brilliance and the significance of this recording.
However brilliant, Engelbart not only had the support of a fantastic team behind the curtain during the demo; he also stood on the shoulders of other giants who, before him or simultaneously with him, provided a glimpse into the future. Probably the most notable in this category would be, in chronological order:
Indeed, ours was precisely the future that Doug wowed everyone with that day of December 1968. Pretty much the crème de la crème of the computer world of that day was sitting in the audience watching this moment unfold live. Many of them took these images, these sounds, these interactions, and as a matter of fact, started working on them almost immediately after (most notably at the legendary Xerox PARC, which we have so often written about on the pages of this magazine).
(Well, not entirely. To be fair, Engelbart’s demo did not showcase any online trolls, billionaires co-opting entire social networks to spread fascism, corporations trading private data as if it were corn, state-sponsored malware attacks on public infrastructure, nor other niceties of our modern world.)
The future (computer), actually our future (computer), was outlined in a live presentation in front of a select audience, during the Association for Computing Machinery’s Fall Joint Computer Conference in San Francisco, on December 9th, 1968.
In January this year we celebrated the 100th anniversary of the birth of Douglas Engelbart, who passed away in 2013. This last piece of information yields a quite astonishing insight: he was able to see with his own eyes quite a few wonders during the 45 years that followed the demo. He watched every technology in that session become a mass-market reality, piece by piece, year after year.
Quite a destiny, if you think about it.
What made me smile during this presentation, however, is the fact that this demo carries in it the germ of the whole field of “Developer Relations”. In one way or another, those of us who have been through the initiation rite of speaking in public can relate to Mr. Engelbart (and even guess some not-so-obvious traits of nervousness in his face). I got the impression that the preparation and the delivery of this event (both stellar, by all means) set the tone for pretty much every live demo given ever since. I can even detect in the eyes of Mr. Engelbart, at the very end of the session, a smirk of deep satisfaction and gratitude towards the ever-fleeting “Demo Gods” (in uppercase, please), those we all pray to before starting such an endeavor.
Watch “The Mother of All Demos” by Doug Engelbart, Bill English, and team, on YouTube, on the channel of the Doug Engelbart Institute.
Cover snapshot chosen by the author.
J. C. R. Licklider & M. Mitchell Waldrop
By Adrian Kosmaczewski, July 7th, 2025
The writings of Jorge Luis Borges twist our perception of time and space. In between articles about Shaw, Chesterton, Wilde, and Coleridge, his 1952 book “Otras Inquisiciones” included an unexpected gem: a short story called “El Tiempo y J. W. Dunne”. The question is, who was this John William Dunne and what does he have to do with time? Well, his name might be forgotten by contemporary audiences, but Dunne was the author of one of the biggest bestsellers of the first half of the twentieth century.
Dunne, an aviator, engineer, and philosopher according to Wikipedia, published in 1927 the influential book “An Experiment with Time”, in which he explains that not only do precognitive dreams happen… but that we can also analyze and understand them, gaining some knowledge about the future in the process.
Or, to put it in a TikTok-friendly manner: manifestation is a thing.
More seriously: Borges says, à propos of Dunne:
Los teólogos definen la eternidad como la simultánea y lúcida posesión de todos los instantes del tiempo y la declaran uno de los atributos divinos. Dunne, asombrosamente, supone que ya es nuestra la eternidad y que los sueños de cada noche lo corroboran.
Which translated in English would read as such:
Theologians define eternity as the simultaneous and lucid possession of all moments in time and declare it one of the divine attributes. Dunne, astonishingly, assumes that eternity is already ours and that our dreams each night corroborate this.
Did Joseph Carl Robnett Licklider (1915-1990) read “An Experiment with Time”? Could it be that he had a series of dreams between 1960 and 1968, and that he quickly wrote them down in his diary before breakfast? We can only speculate. But we do know for a fact that those dreams begat a sequence of writings nothing short of extraordinary: “Man-Computer Symbiosis” (1960); “Intergalactic Computer Network” (1963); “Libraries of the Future” (1965); and “The Computer as a Communication Device”, this last one from a dream he shared with Bob Taylor, and published in 1968.
In all of these works, Lick (as he was commonly referred to, and you know that we like to sound cool and be familiar with our celebrities in this magazine) explored and exposed the possibilities and the wonders of not only giving each person a computer, but also, and most important of all, of connecting all of those computers together in a common network.
Lick was the OG dream machine, all right. To such an extent, that the J. C. R. Licklider paper collection, set up by MIT after Lick’s death in 1990 with the help of his family, has a volume of 25 cubic feet (around 0.7 cubic meters), much of which has not been entirely reviewed. For the rest of us without access to the MIT vault, an ad hoc collection of Lick’s related papers is available on the Internet Archive.
Precisely, the story of Lick’s dreams, their extent and impact, and how they became a reality (but sadly not whether he wrote them down before breakfast or not) is the subject of the monumental work of M. Mitchell Waldrop, “The Dream Machine”, originally published in 2001. It was re-released by Stripe Press in 2018 in a stunning volume that includes not only the original text, but also the contents of three of Lick’s writings enumerated above (that is, all except “Libraries of the Future”).
It was a vision that was downright Jeffersonian in its idealism, and perhaps in its naïveté as well. Nonetheless, Lick insisted, “the renewed hope I referred to is more than just a feeling in the air… It is a feeling one experiences at the console. The information revolution is bringing with it a key that may open the door to a new era of involvement and participation. The key is the self-motivating exhilaration that accompanies truly effective interaction with information and knowledge through a good console connected through a good network to a good computer”.
(Waldrop, page 401.)
When we say monumental, you had better believe it; the 500+ pages of this volume, laid out with astonishing detail (and a very small font size) summarize the history and evolution of computers from 1945 to 1990. Throughout these pages, Waldrop reveals that the backbone, the axis, the arrow, the orientation, the mastermind of all that history was none other than Lick himself: he was the incarnation of the phrase “being at the right place at the right time”.
Speaking about personification: in a theme dear to this magazine, we cannot but acknowledge that Lick embodied the fight against the impossible dialogue between engineering and business. He was a psychologist and a computer scientist. He understood the minds of humans and those of computers alike. Lick was the thread and the needle, shaping the path from Alan Turing, Claude Shannon, and John von Neumann, to the World Wide Web, Object-Oriented Programming, and the home office.
Why deny it: he was also the purse and the wallet along the path, having used his influence at BBN, MIT, ARPA, and anywhere else he could, to fund those around him working on the bits and pieces required to bring his dream to life. Imagine the privilege of being able to sign checks with such nonchalance.
Paraphrasing some famous credit card advertising slogan: having ideas is great. Having cash to make them happen is priceless.
“The Dream Machine” appeared merely months after Michael Hiltzik’s “Dealers of Lightning” hit the shelves; in many ways, Waldrop’s book is an extended version of Hiltzik’s, both backwards and forwards in time, to the extent that a full chapter of “The Dream Machine” (number eight, to be precise) is dedicated to Xerox PARC, giving much better context to its creation than Hiltzik’s work does, albeit with slightly less detail (understandably, since, after all, Hiltzik’s subject is narrower in scope).
Waldrop acknowledges this himself on page 419, speaking about the now legendary visit of Steve Jobs to PARC in December 1979:
Nonetheless, Hiltzik’s book includes perhaps the most careful and complete reconstruction of the event to date, and it makes a number of key points. First, Jobs and his crew didn’t need a special presentation to learn about graphical user interfaces. That idea was already in the air by 1979, along with everything else PARC had done.
In any case, both books complement each other wonderfully. (OK, OK, OK; if you must know, in general I actually enjoyed “The Dream Machine” more than “Dealers of Lightning”, but both are great.)
No need to stop here: if you are interested in learning more about Lick, there is a short biography in chapter seven (titled “The Internet”) of “The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution” by Walter Isaacson.
And tangentially related, even if more appropriate for the Vidéothèque section of this magazine, yet also distributed by Stripe Press, you might want to watch “We Are As Gods”, a documentary about a certain Stewart Brand… another person who, in between LSD trips, transcribed his premonitory dreams in the “Whole Earth Catalog” for a whole generation to get inspiration from.
Bob Taylor (1932-2017) said about Lick:
Most of the significant advances in computer technology—including the work that my group did at Xerox PARC—were simply extrapolations of Lick’s vision. They were not really new visions of their own. So he was really the father of it all.
Waldrop quotes another Bob on page 252: Robert Fano (1917-2016), praising Lick in his own words:
Second, says Fano, when Lick was presented with a miraculous, never-to-be-repeated opportunity to turn his vision into reality, he had the guts to go for it, and the skills to make it work. Lick had the power to spin his dreams so persuasively that Jack Ruina and company were willing to go along with him–and to trust him with the Pentagon’s money. Once he had that money in hand, moreover, Lick had the taste to recognize and cultivate good ideas wherever he found them. Indeed, the ideas he fostered in 1962 would ultimately lay the foundations for computing as we know it today.
Maybe premonitory dreams are a thing, after all. It certainly is comforting to think that we can dream a better future for all of us, involving computers or not, and then make it a reality–if we have the guts… and the cash, that is. Think about the possibilities.
(Or, to put it differently: imagine the negative impact DOGE will have on the US economy during the next 50 years. You have been warned.)
Let us finish this article with some of Waldrop’s own closing words:
Technology isn’t destiny, no matter how inexorable its evolution may seem; the way its capabilities are used is as much a matter of cultural choice and historical accident as politics is, or fashion.
Cover photo by the author.