The recent shakiness in the stock market has everyone talking about the AI bubble. To be sure, the concerns didn’t come out of nowhere. Critics have been calling AI another mix of tech fantasy and financial bubble since soon after ChatGPT’s release at the end of 2022, because we know how these cycles work. More recently, establishment voices like Goldman Sachs and Sequoia Capital joined the chorus. There’s no question tech stocks are overvalued, and part of that overvaluation comes from the hype behind generative AI. The question isn’t whether there will be a market correction, but when it will happen and how deep the decline will be.

While the recent stock market volatility did hint at underlying concerns about tech valuations, it was ultimately driven much more by growing questions about the Japanese market that briefly shook the wider confidence of investors. I’m not going to pretend to gaze into a crystal ball and come back with an answer as to when this bubble will burst, but I do think it’s time we start thinking about the aftermath of the correction instead of just what’s happening right this moment, though that’s important to watch too.

Tech bubbles operate on a cycle, but there’s also a cycle to the aftermath of the crash, where attention starts to move on before the tech itself has been properly taken care of. The crypto bubble imploded through 2022, but the crypto industry is far from dead. It’s one of the biggest funders in the ongoing US election cycle, seeking candidates who will pass permissive legislation for its fraudulent activities. The gig economy isn’t in the limelight these days either, even as Uber continues its campaign to carve workers out of employment law, with significant consequences. Meanwhile, smart glasses are back and more invasive than ever, and social media continues to churn out social harm despite years of discussion about its problems.
We can’t allow that same cycle of entrenchment to play out with generative AI. Chatbots and image generators may have more tangible use cases than crypto, but that also means they can be used against people in many more ways once the hype fades. We need to understand what that may look like to try to proactively head it off.
Become a paid subscriber to help Disconnect hold the tech industry to account.
The ongoing harms of AI

In the early days of the chatbot hype, OpenAI CEO Sam Altman was making a lot of promises about what large language models (LLMs) would mean for the future of human society. In Altman’s vision, our doctors and teachers would become chatbots, and eventually everyone would have their own tailored AI assistant to help with whatever they needed. It wasn’t hard to see what that could mean for people’s jobs, if his predictions came true.

The problem for Altman is that those claims were pure fantasy. Over the twenty months that have passed since, it’s become undeniably clear that LLMs have limitations many companies do not want to acknowledge, as doing so might torpedo the hype keeping their executives relevant and their corporate valuations sky high. The problem of false information, often deceptively termed “hallucinations,” cannot be effectively tackled, and the notion that the technologies will continue getting infinitely better with more and more data has been called into question by the minimal improvements new AI models have been able to deliver.

That doesn’t mean chatbots and image generators will be relegated to the trash bin of history once the AI bubble bursts. Rather, there will be a reassessment of where it makes sense to implement them, and if attention moves on too fast, companies may be able to entrench those implementations with minimal pushback. The challenges visual artists and video game workers are already facing, as employers use generative AI to worsen the labor conditions in their industries, may become entrenched, especially if artists fail in their lawsuits against AI companies for training on their work without permission. But it could be far worse than that.
Microsoft is already partnering with Palantir to feed generative AI into militaries and intelligence agencies, while governments around the world are looking at how they can implement generative AI to reduce the cost of service delivery, often without effective consideration of the potential harms of relying on tools that are well known to output false information. This is a problem Resisting AI author Dan McQuillan has pointed to as a key reason why we must push back against these technologies. There are already countless examples of how algorithmic systems have been used to harm welfare recipients, childcare benefit applicants, immigrants, and other vulnerable groups. We risk a repetition, if not an intensification, of those harmful outcomes.

When the AI bubble bursts, investors will lose money, companies will close, and workers will lose jobs. Those developments will be splashed across the front pages of major media organizations and will receive countless hours of public discussion. But it’s those lasting harms that will be harder to immediately recognize, and that could fade from view as the focus moves on to whatever Silicon Valley starts pushing as the foundation of its next investment cycle. All the benefits Altman and his fellow AI boosters promised will fade, just as the promises of the gig economy, the metaverse, the crypto industry, and countless others did. But the harmful uses of the technology will stick around, unless concerted action is taken to stop those use cases from lingering long after the bubble bursts.

The big data center buildout

The generative AI tools themselves are one part of the AI bubble, but there’s another that shouldn’t be forgotten: all those hyperscale data centers, each packed full of thousands of servers, that Amazon, Google, Microsoft, and other major players are building out around the world to power the computationally intensive future they’re working to realize.
Data center infrastructure is fueling opposition the world over for its water and energy demands, but the cloud giants are pushing forward, with hundreds of billions of dollars in investment allocated to ramp up construction. The future of the data center expansion could take a few different directions.

After the early part of the pandemic, Amazon found it had overestimated future demand for its ecommerce platform and was planning to build far more warehouses than it actually needed. In 2022, amid a cost-cutting push, it closed and canceled numerous fulfillment center projects to adjust to the new reality. It’s possible something similar will happen with data centers. For example, there are already questions about whether Amazon’s plan to build three large data centers in Auckland, New Zealand will actually come to fruition.

On the other side of the equation, we could look to the initial dot-com boom, which fueled massive overbuilding of fiber infrastructure that left the United States with far more than it needed in the moment, but plenty to work with for the expansion of the internet economy that followed. More recently, after the crypto boom went bust, some of the computing power behind crypto mining operations ended up being reallocated to training AI models and powering their rollout.

Ultimately, the major cloud providers will likely rein in their expansion plans once the generative AI hype finally crashes, but they won’t cancel them completely. Given how much Microsoft, Amazon, and Google benefit from making our lives and the services we use more computationally intensive, regardless of the wider environmental and social impacts, they have an incentive to ensure that whatever follows generative AI will similarly ramp up the amount of computation our societies collectively require, while keeping us dependent on them to provide it.

Keep up the fight

Make no mistake: there is an AI bubble, and its day of reckoning may be closer than we expect.
I’m eagerly looking forward to it. It’s important to understand what distortions the bubble is fueling in the tech industry and the wider society, but we should also be ready for what comes after the crash. Generative AI does not have the wide array of use cases the industry tried to convince us it did, and it’s far too computationally intensive to be used for mundane tasks. But once the hype fades, there will surely be a greater push to make certain implementations more efficient so they can stick around, and there will undoubtedly be countless efforts to keep them running regardless of the social and environmental impacts they might have. They’ll need to be challenged, even after the attention moves on to the next source of tech hype.