Welcome to The Tech Bubble. This week, enjoy Part 1.5 of the Silicon Valley Consensus, which revisits my framework for understanding the AI bubble and examines a few more actors central to sustaining it.
My series of essays on Artificial Intelligence thus far:
Trapped in the Maw of a Stillborn God - On Vegas as a laboratory for surveillance and social control, the explosion of gambling as a sign of a degenerate culture seized by despair, AI delusions at CES, and the future.
If these essays sound interesting or if you’ve found them insightful, I’m happy to tell you that you can help make more of this work possible with a subscription. Your support allows me to keep my main essays free for everyone (and supports me as I make the newsletter sustainable). Consider supporting The Tech Bubble (me) with a subscription: $7 a month (the price of a few hard ciders at your local bodega) or $70 a year (a copy of METAL GEAR SOLID Δ: SNAKE EATER Tactical Edition). Not that much, huh?
Silicon Valley Consensus & The Limits of an “AI Economy”
The “AI economy” is less a story of productivity or innovation than an attempt to graft on a new political-economic order—let’s call it the Silicon Valley Consensus—that is ostensibly concerned with building our stillborn God. A coalition of hyperscalers, venture capitalists, fossil fuel firms, conservatives, and reactionaries is engaged in a frenzy of overbuilding, overvaluing, and overinvesting in compute infrastructure. Their goal is not to realize AGI or radically improve life for humanity, but to reallocate capital such that it enriches themselves, transmutes their wealth into even more political power that imposes constraints on countervailing political forces, and liberates capitalism from its recent defects (e.g. democracy), consolidating benefits to its architects regardless of the actual social utility of the technologies they pursue.
SPECTACLE & SUBSTANCE
Building out generative AI’s compute infrastructure and energy supply is an incredibly capital-intensive enterprise (McKinsey expects $7 trillion will be spent by 2030). It will only grow more so. As of late, models have become more compute-intensive as they attempt to resemble “reasoning” via chains of queries that self-check, search, and otherwise radically increase the resources needed for individual requests.
Compute infrastructure is expansive (chips, servers, clusters, data centers, data sets, labelers, cleaners, energy supply, etc.) but it's largely been hyperscaler capex that’s drawn attention. In August, we got a flood of commentary reacting to the news that their spending on data centers contributed more to US economic growth than all of consumer spending. Given revenues are weak and profitability/sustainability is not in sight, we might expect some scrutiny about the claims these firms are making. There has been relatively little until recently. Much of it begins and ends with surface-level comparisons to telecom and the internet during the dot-com bubble—partly because Magnificent 7 generative AI capex exceeded $102.5 billion last quarter (meaning this buildout is on track to be the largest since the Gilded Age railroad bubble), partly because the tech sector has undertaken a wildly successful and sophisticated marketing campaign that has drowned out skepticism and critical analysis.
I want to cobble together three perspectives that I think are complementary, running the gamut from techno-optimism to a much more critical framework asking “why are we doing this?”
Today we will start with Derek Thompson’s from August:
Thompson’s piece is pretty rosy on the AI boom and what its scale could mean for worker adoption as well as economic impact. Distilled, its main points are:
The AI investment and adoption boom is not a future event but a present-day economic reality. We’re creating a bifurcated economy: a “rip-roaring” AI sector powered by hyperscaler capex that eclipses the rest of our “lackluster” economy. This buildout is comparable in scale (and potentially impact) to the dot-com era or Gilded Age railroad bubbles. It may even rival them!
The primary capital source for this infrastructure buildout isn’t external debt, but internal cash flows—primarily at hyperscalers—that dominate our stock market. Their profitability is so extreme that they can put “oodles and oodles of money” towards such an ambitious project without touching risky financing options, even if revenues and profits have yet to materialize.
Given early evidence on adoption and productivity, we should be bullish, with minor caveats, about its potential impact. Generative AI is being adopted at twice the rate of the internet and certain sectors (e.g. startups on Stripe, teachers) report massive efficiency gains. Some objective studies reveal workers dramatically overestimate these gains, however: workers often feel more productive using AI (such as developers claiming 20 percent gains) even as measurements show the opposite (developers taking 20 percent longer). Thompson reads this as the positive impact being there and building, just not yet fully understood.
ON AI INVESTMENT AND COMPARISONS TO THE EARLY DAYS OF THE INTERNET
If this buildout rivals that of previous bubble overbuilds in size and potential impact, a very simple question to ask is: where is the money? It has been a year since Goldman Sachs’ report “Gen AI: Too Much Spend, Too Little Benefit?” and many of its core concerns remain unaddressed. One of its key sections, an interview with Goldman’s Head of Global Equity Research Jim Covello, pointed out that trillions over the next few years were being planned for investment despite failing to convincingly answer a single question: “What trillion dollar problem will AI solve?”
Responses to this usually hinge on comparisons to earlier technological innovations, but are superficial or ill-informed. The internet, Covello notes, “was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions.” AI, however, is “exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.” Covello also argues that the core investment theory promoting generative AI is junk science at best:
The idea that technology typically starts out expensive before becoming cheaper is revisionist history. Ecommerce, as we just discussed, was cheaper from day one, not ten years down the road. But even beyond that misconception, the tech world is too complacent in its assumption that AI costs will decline substantially over time. Moore’s law in chips that enabled the smaller, faster, cheaper paradigm driving the history of technological innovation only proved true because competitors to Intel, like Advanced Micro Devices, forced Intel and others to reduce costs and innovate over time to remain competitive.
Today, Nvidia is the only company currently capable of producing the GPUs that power AI. Some people believe that competitors to Nvidia from within the semiconductor industry or from the hyperscalers—Google, Amazon, and Microsoft—themselves will emerge, which is possible. But that's a big leap from where we are today given that chip companies have tried and failed to dethrone Nvidia from its dominant GPU position for the last 10 years. Technology can be so difficult to replicate that no competitors are able to do so, allowing companies to maintain their monopoly and pricing power. For example, Advanced Semiconductor Materials Lithography (ASML) remains the only company in the world able to produce leading-edge lithography tools and, as a result, the cost of their machines has increased from tens of millions of dollars twenty years ago to, in some cases, hundreds of millions of dollars today. Nvidia may not follow that pattern, and the scale in dollars is different, but the market is too complacent about the certainty of cost declines.
The starting point for costs is also so high that even if costs decline, they would have to do so dramatically to make automating tasks with AI affordable. People point to the enormous cost decline in servers within a few years of their inception in the late 1990s, but the number of $64,000 Sun Microsystems servers required to power the internet technology transition in the late 1990s pales in comparison to the number of expensive chips required to power the AI transition today, even without including the replacement of the power grid and other costs necessary to support this transition that on their own are enormously expensive.
Covello's argument here is not that compute costs for generative AI will never fall, but that they do not follow the historical patterns that boosters and optimists are consistently using to justify their enormous bets. First, generative AI does not follow historical tech cost curves, where the technology is cheaper from day one. Second, generative AI does not have competitive enough markets to drive down data center hardware costs, something that was the key driver behind the cost depreciation associated with Moore's Law—in fact, NVIDIA's dominance may yield monopolistic pricing power. Third, there is a fundamental difference of scale between today's genAI compute infrastructure costs and those of previous bubbles that makes them astronomically higher and much more inflexible. To start: there is the actual real estate (land, physical space) that must be bought or leased for data centers, energy supply, and then water/cooling infrastructure.
There are three other parts I want to quote Covello on that really drive home the point here. The first is on the explicit comparison of today to the early days of the Internet:
The idea that the transformative potential of the internet and smartphones wasn’t understood early on is false. I was a semiconductor analyst when smartphones were first introduced and sat through literally hundreds of presentations in the early 2000s about the future of the smartphone and its functionality, with much of it playing out just as the industry had expected. One example was the integration of GPS into smartphones, which wasn’t yet ready for prime time but was predicted to replace the clunky GPS systems commonly found in rental cars at the time. The roadmap on what other technologies would eventually be able to do also existed at their inception. No comparable roadmap exists today. AI bulls seem to just trust that use cases will proliferate as the technology evolves. But eighteen months after the introduction of generative AI to the world, not one truly transformative—let alone cost-effective—application has been found.
The second is on this AI capex arms race that’s completely unmoored from reality:
The big tech companies have no choice but to engage in the AI arms race right now given the hype around the space and FOMO, so the massive spend on the AI buildout will continue. This is not the first time a tech hype cycle has resulted in spending on technologies that don’t pan out in the end; virtual reality, the metaverse, and blockchain are prime examples of technologies that saw substantial spend but have few—if any—real world applications today. And companies outside of the tech sector also face intense investor pressure to pursue AI strategies even though these strategies have yet to yield results. Some investors have accepted that it may take time for these strategies to pay off, but others aren’t buying that argument. Case in point: Salesforce, where AI spend is substantial, recently suffered the biggest daily decline in its stock price since the mid-2000s after its Q2 results showed little revenue boost despite this spend.
The third is on the prospects for AI-related revenue expansion:
I place low odds on AI-related revenue expansion because I don't think the technology is, or will likely be, smart enough to make employees smarter. Even one of the most plausible use cases of AI, improving search functionality, is much more likely to enable employees to find information faster than enable them to find better information. And if AI’s benefits remain largely limited to efficiency improvements, that probably won’t lead to multiple expansion because cost savings just get arbitraged away. If a company can use a robot to improve efficiency, so can the company’s competitors. So, a company won’t be able to charge more or increase margins.
...
Since the substantial spend on AI infrastructure will continue despite my skepticism, investors should remain invested in the beneficiaries of this spend, in rank order: chip manufacturers, utilities and other companies exposed to the coming buildout of the power grid to support AI technology, and the hyperscalers, which are spending substantial money themselves but will also garner incremental revenue from the AI buildout. These companies have indeed already run up substantially, but history suggests that an expensive valuation alone won’t stop a company’s stock price from rising further if the fundamentals that made the company expensive in the first place remain intact. I’ve never seen a stock decline only because it’s expensive—a deterioration in fundamentals is almost always the culprit, and only then does valuation come into play.
A few days before Goldman Sachs’ report, Sequoia Capital partner David Cahn published an analysis building on his 2023 argument that there was “a big gap between the revenue expectations implied by the AI infrastructure build-out, and actual revenue growth in the AI ecosystem, which is also a proxy for end-user value.” In 2023, implied revenue expectations were $200 billion and there was a “$125B hole that needs to be filled for each year of CapEx at today’s levels.” In his 2024 analysis, revenue expectations ballooned to $600 billion and the revenue hole grew even larger, to $500 billion.
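Cahn’s arithmetic is simple enough to sketch. A minimal back-of-envelope in Python, using only the figures quoted above (the revenue actually “in sight” is derived from those figures, not a number stated in this passage):

```python
# Cahn's "AI revenue hole": capex implies some level of AI revenue is needed;
# the hole is whatever portion of that revenue is nowhere in sight.
# Figures are Cahn's, in billions of dollars.

def implied_revenue_in_sight(expected: int, hole: int) -> int:
    """Derive revenue 'in sight' from the capex-implied need and the hole."""
    return expected - hole

# 2023: $200B implied need, $125B hole -> ~$75B of revenue in sight
print(implied_revenue_in_sight(200, 125))  # 75

# 2024: $600B implied need, $500B hole -> ~$100B of revenue in sight
print(implied_revenue_in_sight(600, 500))  # 100
```

The point of the exercise: even as optimistic revenue estimates grew, the capex-implied need grew faster, so the hole widened rather than closed.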
Cahn is pretty brutal about AI hype, but still retains hope for the future:
Speculative frenzies are part of technology, and so they are not something to be afraid of. Those who remain level-headed through this moment have the chance to build extremely important companies. But we need to make sure not to believe in the delusion that has now spread from Silicon Valley to the rest of the country, and indeed the world. That delusion says that we’re all going to get rich quick, because AGI is coming tomorrow, and we all need to stockpile the only valuable resource, which is GPUs.
Cahn’s argument is a splash of cold water on the techno-optimist narrative, but I want to home in on his rebuttal of the railroad bubble analogy because it’s typically made on superficial grounds. Generally, these comparisons serve a rhetorical purpose of suggesting the overbuild will have immense value (while also obscuring reality and our ability to understand it). I’m going to plagiarize myself summarizing Cahn’s argument—I engaged with it in March when I offered my first iteration of the Silicon Valley Consensus framework. I have some qualms with Cahn’s argument (specifically the prospects for AI revenue growth), but where I’ve settled is that he’s pointed in the right direction about the gap between revenues and data center capex, and wrong in some areas that give him hope about climbing out of the revenue-capex hole.
Summary:
Lack of pricing power: Cahn believes that while railroads confer natural monopolistic advantages (there is only so much track space between two points), the same is not true of GPU data centers. GPU computing is becoming commoditized, AI compute can be offered through the cloud, and so prices are getting competed down to their marginal cost (airlines are offered as an example).
Investment incineration: Speculative investment frenzies are common! The underlying asset is no more impervious to zeroing out just because large hoards of capital were poured into it; "It's hard to pick winners, but much easier to pick losers (canals, in the case of railroads).”
Depreciation: Compute differs from physical infrastructure in that it follows Moore’s Law. The continuous production of next-generation chips with lower costs and higher performance will accelerate the depreciation of the last-generation of chips. This goes both ways: markets will overestimate the value of today’s chips and under-appreciate the value of tomorrow’s chips. "Because the market under-appreciates the B100 and the rate at which next-gen chips will improve, it overestimates the extent to which H100s purchased today will hold their value in 3-4 years."
Winners vs losers: Long term, declining prices for GPU computing will be a boon for innovation/startups, but a bane for investors. "Founders and company builders will continue to build in AI—and they will be more likely to succeed, because they will benefit both from lower costs and from learnings accrued during this period of experimentation."
This from a venture capitalist at a firm that believes in AI! Railroads are not a good way of understanding these technologies:
They appeal to a superficial interest in historical details and patterns that prioritize telling a story, as opposed to paying attention to revenue, pricing power, investment returns, market structure, commodities, and products.
Emphasizing the difference between revisionist history and reality introduces us to concerns and questions that will inform what we do next better than an analysis that barely rises above the level of vibes.
What is the “AI economy” materially? Does it actually tell us anything about what is going on in the sector? Investment is real, it’s happening, but it hasn’t yielded returns—so what is it doing? What decisions are being made and what interests emerge when we raise our heads to look beyond? What is motivating interest and development? Who is able to ignore this lack of returns, what moves are being made to realize returns, and what moves are being made anticipating they will not emerge?
ON PROFITS, NOT DEBT, BEING THE PRIMARY BUILDOUT CAPITAL SOURCE WORTH EXAMINING
It is true that internal cash flows are the major source for hyperscalers, but: not all firms burning capex are hyperscalers, and not all firms—even hyperscalers—are financing AI capex equally. We will tackle this more later when we arrive at Noah Smith’s piece, but for now, let’s look at two of his sources.
The first is Paul Kedrosky, who lays out a clear list of the major capital sources for the ongoing data center buildout:
Where is all this capital coming from?
For the most part, six sources:
Internal Cash Flows (Primary for Microsoft, Google, Amazon, Meta, etc.)
Debt Issuance (Rising role)
Equity & Follow-on Offerings
Venture Capital / Private Equity (CoreWeave, Lambda, etc.)
SPVs, Leasing, and Asset-Backed Vehicles (like Meta’s recent)
On a very basic level, not every capital source is the same. Some are riskier than others, some demand higher returns than others, some are much more abundant than others, and so they have different uses for the different firms involved in the AI infrastructure buildout (and go a long way in determining what various firms and their projects will prioritize to meet the demands imposed by financing).
The second source (The Economist) lays this out a bit more clearly:
“[C]apex is growing faster than [Big Tech’s] cashflows …The hot centre of the AI boom is moving from stockmarkets to debt markets … During the first half of the year investment-grade borrowing by tech firms was 70% higher than in the first six months of 2024. In April Alphabet issued bonds for the first time since 2020. Microsoft has reduced its cash pile but its finance leases—a type of debt mostly related to data centres—nearly tripled since 2023, to $46bn (a further $93bn of such liabilities are not yet on its balance-sheet). Meta is in talks to borrow around $30bn from private-credit lenders including Apollo, Brookfield and Carlyle. The market for debt securities backed by borrowing related to data centres, where liabilities are pooled and sliced up in a way similar to mortgage bonds, has grown from almost nothing in 2018 to around $50bn today
…
CoreWeave, an AI cloud firm, has borrowed liberally from private-credit funds and bond investors to buy chips from Nvidia. Fluidstack, another cloud-computing startup, is also borrowing heavily, using its chips as collateral. SoftBank, a Japanese firm, is financing its share of a giant partnership with OpenAI, the maker of ChatGPT, with debt. “They don’t actually have the money,” wrote Elon Musk when the partnership was announced in January. After raising $5bn of debt earlier this year xAI, Mr Musk’s own startup, is reportedly borrowing $12bn to buy chips.
...
This symbiotic escalation is, in some ways, an advert for American innovation. The country has both the world’s best AI engineers and its most enthusiastic financial engineers. For some it is also a warning sign. Lenders may find themselves taking technology risk, as well as the default and interest-rate risks to which they are accustomed. The history of previous capital cycles should also make them nervous. Capex booms frequently lead to overbuilding, which leads to bankruptcies when returns fall. Equity investors can weather such a crash. The sorts of leveraged investors, such as banks and life insurers, who hold highly rated debt they believe to be safe, cannot.”
A deal where you have immense profits looks different from a deal where your sponsor is not flush with profits, which looks different from a deal where you have a few deep-pocketed clients, which looks different from a deal where you have nothing but talent or certain assets to offer up as leverage. Any analysis of the “AI economy” that pretends as if this uneven terrain does not exist (or minimizes its import) is, at best, an incomplete analysis.
But to really drive home the point that debt is incredibly important to think about, let’s look at a March piece from Ed Zitron on CoreWeave, an "AI cloud provider" that lets AI firms rent GPU compute. Its core business model: buying high-end GPUs and the infrastructure needed to run them—largely from NVIDIA, thanks to a cozy relationship that gives it priority access—and renting out the resulting compute. Just as the company delayed its IPO, the first of the generative AI sector, Zitron issued this report to lay out why CoreWeave might prove to be a ticking time bomb:
CoreWeave’s customer base is extremely concentrated, leaving its revenue vulnerable. Microsoft accounted for 62 percent of its 2024 revenue and NVIDIA probably accounts for another 15 percent. The close relationship between NVIDIA and CoreWeave has been accused of resembling “round-tripping” ("a practice where companies inflate their top lines through reciprocal deals that don’t always create real economic value”). Microsoft has already shed “some services” ahead of CoreWeave’s IPO, though the latter denied this. What happens to CoreWeave if it no longer gets priority access to NVIDIA chips or if Microsoft pulls back AI capex (which it did earlier this year)?
CoreWeave’s ungodly financial losses are hard to ignore. $1.9 billion in revenue, $863 million in losses. This loss-leading, ubiquitous across the “AI economy,” fuels the boom—but why is it happening at a firm whose business model is GPU compute, the one thing you’d expect every AI firm to need? Is demand insufficient? Are costs too high?
CoreWeave is potentially crippled by its debt. The firm has secured $8 billion through complex, high-interest, predatory loans. Most of CoreWeave's raised capital is debt, namely two Delayed Draw Term Loan facilities (you get access to some of the money, distributed in tranches that unlock after time periods or certain milestones). DDTL 1: $2.3 billion (fully drawn) with an effective annual interest rate of 14.11 percent, requiring quarterly payments of $250 million ($1 billion annually) to service the loan. DDTL 2: $7.63 billion (half drawn) with a 10.5 percent annual interest rate—its roughly $760 million in annual interest payments would double to $1.52 billion if fully drawn.
To make matters worse, it's not clear how CoreWeave will get enough capital to meet its obligations. CoreWeave "will have to spend in excess of $39 billion to build its contracted compute," has already taken on massive loans to fuel its aggressive capex (that stipulate all future capital raised must go towards repaying the debt), and does not have the revenue to support operations or interest payments.
CoreWeave has put up GPU compute as collateral—a depreciating asset that may create a trap. GPUs like the H100 see their value fall quickly, in part because even before mechanical failure they become obsolete as more advanced chips arrive and even more intensive tasks are developed. Rental prices plummeted from $8/hr to $1.47/hr by March (as low as $0.90/hr at the time of publication). It’s not hard to imagine a depreciation spiral in which the collateral backing a massive loan plummets in value and potentially triggers covenants that force early repayment—something CoreWeave does not have the capital to do.
CoreWeave has found an unproven partner in Core Scientific. Core Scientific is a different company, a bitcoin mining firm that went public in 2022 via a SPAC merger—the fraudulent financial vehicle that lets firms go public when they have no business doing so (Core Scientific filed for Chapter 11 bankruptcy the same year it went public via SPAC). This firm is central to CoreWeave's business strategy, despite the fact that it has little experience with HPC/AI data centers and despite the fact that its plan was to "bulldoze and rebuild" Bitcoin mining infrastructure in hopes of repurposing it for artificial intelligence. That CoreWeave's promised "1.3 GW of contracted power" happens to exactly match the capacity Core Scientific claims it can build should raise some eyebrows, but alas we are charging full speed ahead with an ambitious plan executed by an unproven partner dependent on a circular business plan.
CoreWeave’s S-1 raises plenty of red flags. It admits to "material weaknesses in [its] internal control over financial reporting," suggesting "there is a reasonable possibility that a material misstatement of our annual or interim financial statements will not be prevented or detected on a timely basis”—and not before 2026 at the earliest. Nothing to worry about there! We also get to think about CoreWeave's dual-class share structure, whereby 82 percent of its voting power is held by founders who own less than 30 percent of its equity. This is a great way to tell shareholders to fuck off, marginalizing their interests and prioritizing decisions that allow you to, say, profit from an unsustainable business venture by "cash[ing] out nearly $500 million before the IPO".
CoreWeave's business model resembles that of the "AI economy". As Zitron puts it, "NVIDIA is selling the pickaxes for the goldrush, CoreWeave is selling the shovels, and it mostly appears to be turning up dirt." We don't see revenues despite its centrality to the ecosystem, we don't see clear expansion plans, we don't see any strong signals that it will thrive, and all this casts doubt on every thread of the “AI economy” that runs through it. Is there not enough real, profitable demand? Are firms not able to find clear demonstrable use cases for this technology beyond round-tripping? Where is the money?! Where are the profits?!
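The depreciation-spiral worry above can be made concrete with a toy loan-to-value (LTV) calculation. Every number below is a hypothetical illustration—these are not CoreWeave's actual loan terms or covenant thresholds—but the mechanism is the same: as GPU collateral gets marked down, a fixed loan balance pushes LTV over a covenant trigger.

```python
# Toy depreciation-spiral sketch. All figures are hypothetical illustrations,
# not CoreWeave's actual loan terms or covenants.

def ltv(loan_balance: float, collateral_value: float) -> float:
    """Loan-to-value ratio: outstanding debt over the market value of collateral."""
    return loan_balance / collateral_value

LOAN = 5e9       # hypothetical outstanding loan balance ($5B)
MAX_LTV = 0.80   # hypothetical covenant: lender can demand early repayment above 80%

# GPU fleet marked down as rental prices (and resale values) fall
for fleet_value in (9e9, 7e9, 5.5e9):
    ratio = ltv(LOAN, fleet_value)
    print(f"collateral ${fleet_value / 1e9:.1f}B -> LTV {ratio:.0%}, covenant breached: {ratio > MAX_LTV}")
```

The loan balance never moves; only the collateral does. That asymmetry is why a falling GPU market can force repayment at exactly the moment a borrower is least able to pay.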
ON EARLY POSITIVE EVIDENCE OF ADOPTION WITH MINOR CAVEATS
One of Thompson’s central claims is that genAI is seeing widespread, rapid, positive adoption. He cites the St. Louis Fed’s estimate that generative AI adoption is proceeding at roughly twice the rate of the early internet. This statistic—39.4% adoption by August 2024 (~2 years post-ChatGPT) vs. ~20% for the internet at a similar stage—is presented as proof of inevitable, transformative potential. On top of this, Thompson also points to Gallup surveys showing teachers self-reporting efficiency gains and Stripe data showing AI startups reaching revenue milestones quicker, but he emphasizes the bull case even as objective studies reveal that workers feel more productive but actually are not.
In March, McKinsey found 71 percent of firms reported use of generative AI, and more than 80 percent of those firms reported no ”tangible impact on enterprise-level EBIT from their use of gen AI.” A recent review of claims that artificial intelligence would unlock such efficiency gains that a wave of "shovelware" would follow found no notable impact on software releases across the world these past few years. Torsten Sløk, Apollo Global Management's chief economist, just published a report drawing on US Census Bureau data that actually found a decline in AI adoption rates for large firms. Researchers at MIT tracked 300 publicly disclosed AI initiatives and found 95 percent failed to boost profits. We can go on and on and on like this.
The evidence that adoption is rapid, widespread, and positive quickly becomes the weakest claim when viewed in relation to the larger ecosystem and its disappointing outcomes. After all, if adoption is so much faster than the internet's and is backed by an infrastructure investment larger than the dot-com boom's, why are concrete measures of economic value—strong revenues, sustained profitability—so elusive for the core AI industry?
Let’s take one relatively recent example. Back in February 2025, Zitron argued we’re being pretty unrigorous in how we think about the growth of generative AI. ChatGPT’s claims of 300 million weekly users, issued in December 2024, had largely been accepted without much scrutiny about their veracity or the artifice of that metric. Most coverage misrepresents the capabilities of generative AI products and, as Professor Rasmus Nielsen writes for Reuters Institute for the Study of Journalism, “often takes claims about what the technology can and can’t do, and might be able to do in the future, at face value in ways that contributes to the hype cycle."
If ChatGPT had 300 million weekly users at the time, we’re underestimating the extent to which coverage that constantly emphasizes bull cases and reproduces company talking points functions as hype, ginning up demand for a product that is available for free but does not meet marketed expectations. One way to think about this: is there any useful reference point for a startup that has had as much uncritically optimistic widespread coverage as OpenAI?
If it is true, however, 300 million weekly users is a lot of users. And yet, it does not tell us much about the actual product or how it's used! Is this a sustainable or profitable venture? How are people using it at home or at work, how much of their use is casual versus intensive, and so on.
Digital market intelligence data Zitron obtained shows ChatGPT's monthly unique visitors trending up to 247.1 million in November, and a snapshot of weekly visitor traffic in January and February 2025 trending up to 136.7 million. Typically, you get more visitors to your site than users, which means the gap between reported and actual usage only grows larger. It doesn't get filled by mobile app data either—its iOS app had been downloaded 353 million times total by late January, so a best-case scenario would need at least 100 million mobile-only users a week to plug the hole. Monthly active users aren't reported, even though in theory that number should be higher ("a monthly active user is one that uses an app even once a given month")—perhaps because it might reveal how poor the company's paid conversion rate is!
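The gap Zitron is pointing at is worth spelling out. A back-of-envelope using the figures above (this assumes, as Zitron argues, that weekly web visitors roughly upper-bound weekly web users):

```python
# Back-of-envelope on the gap between OpenAI's claimed weekly users and
# observed web traffic. Figures are those quoted above.

reported_weekly_users = 300_000_000         # OpenAI's December 2024 claim
observed_weekly_web_visitors = 136_700_000  # third-party snapshot, early 2025

# If the claim holds, this many weekly users must come from mobile apps
# or other channels the web-traffic data doesn't capture.
gap = reported_weekly_users - observed_weekly_web_visitors
print(f"{gap:,}")  # 163,300,000
```

And since visitors typically overcount users, the true gap is larger still—hence the "at least 100 million mobile-only users a week" framing.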
I would be surprised if you can find more than a handful of people in the wider media ecosystem even entertaining critical discussion of OpenAI’s reported numbers, the artifice of them, what it means that we have this or that metric, what insights into their businesses and into consumer use we are denied, and how this all relates to our ability to accurately get a sense of adoption, impact, sustainability, or the damn business model more generally!
IS TECHNO-OPTIMISM WARRANTED?
To wrap this up, it's not clear to me what merits the techno-optimist outlook on whatever constitutes the “AI economy.” It ignores the financials: overlooks the gap between revenues and capex, waves away the question of how it will generate profits, engages in revisionist historical accounts to justify these bad economics, and whistles past the debt land mines that fuel growth. It ignores technology: there's no interest in market structure, scale of cost, or product roadmaps. It ignores the reality of adoption: gloms onto hype, falls for simplistic narratives, repeats corporate talking points, and reproduces shaky assumptions. We're left with a picture of reality that leaves us unable to explain why things are the way they are and what to do about it. The "AI economy" as talked about within mainstream and optimist circles presents a vision unmoored from reality that is frothy enough to drown out skeptics, juice speculation, and provide cover for entrenched interests looking to enrich themselves at everyone else's expense.