Matthew here. I’m trying to build a long-lasting independent journalism operation to chronicle the epic rise and growth of one of the most important new technologies in decades. Your support—aside from meaning the world to me—means I can continue to build this into a sustainable business. Plus, subscribers will get access to exclusive information, interviews, under-the-radar finds, scoops, and other fun stuff. If you’re up for it, please consider becoming a paid subscriber to support my work!

Putting the brakes on the AI hype

Enterprises are taking a more methodical approach when figuring out how to put AI tools into production—and considering much longer timelines.

Over the past few weeks I’ve been asking dozens of industry executives and sources the same question: how have the questions enterprises have been asking about AI tools changed in the last six to nine months as they get more serious about getting projects into production? Almost universally the response has been that the tenor of these conversations has changed, especially when talking to enterprises and executives that initially bought into the original AI hype. Or, more specifically:
In short, enterprises are getting, well, smarter about what they are looking for when they are evaluating AI-based tools. And it’s generally confirmed the kind of “vibe shift” that started to happen in the past six months. Despite the recent fuss about whether or not we’re in an AI bubble, the “hype cycle” actually ended quite a while ago as companies have started to figure out how these things are actually going to be useful. And we’ve largely started to see that emerge already in more straightforward—and boring—tasks, even if companies aren’t very loud about it.

The era of thinking you could wave an OpenAI-shaped wand and generate completely new lines of business or completely replace an entire class of workers is effectively gone, but at the same time, there’s enough signal that companies aren’t just throwing the idea of deploying a generative AI tool into the trash or writing it off as an experiment.

“It’s a lot less frenetic as compared to how it was,” Brian Raymond, CEO of unstructured data ETL provider Unstructured.io, told me. “Things aren’t changing every hour or every week from an industry standpoint, we have a little more confidence on where things are going, and there are fewer surprises. We’re in this unsexy phase, but this is the phase in which most of the value is gonna get created among the organizations that are trying to leverage generative AI. It’s less rapid experimentation and mind-blown emojis, and it’s more like, let’s drive ROI.”

All this is pretty far from the days right after the launch of GPT-4, when certain executives at a company would barge into a room shouting “AI” and leave the team scrambling to build something.

The list of needs for enterprises keeps growing

While we all talk about whether or not AI is in a bubble for the Very Big Models, what’s become clearer is how enterprises are going about generating near-term ROI through the use of language models. Most of the companies actively using these tools that I’ve spoken with have done that by automating smaller tasks, like summarization and classification, with customized smaller language models. By focusing on these RPA-like tasks with smaller models, they’re able to save significantly on cost relative to paying for an API product like one from OpenAI or Google.

These kinds of tasks are augmented with the use of information retrieval techniques like retrieval-augmented generation (or RAG), which effectively fetches some extra information for a prompt in a given language model. Companies convert their data—which most likely sits in a provider like Snowflake, Databricks, or others—into numerical vector representations through a process called embedding. They then make that data readily available on demand for these prompts (a bare-bones sketch of this pattern appears at the end of this post). (Most developers I’ve spoken with have also joked that all of these “new” techniques that people keep “discovering,” like RAG augmented with a graph database, are also decades old.)

The needs of those enterprises are also becoming a lot more sophisticated as time goes on. Rather than just firing stuff into a prompt, enterprises are increasingly looking for governance tooling around these language models, such as lineage and, seemingly more recently, role-based access control (or RBAC). Everything old, it seems, is new again—just this time in AI...
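To make that retrieval pattern a little more concrete, here is a minimal sketch of how it tends to look in code. It assumes the sentence-transformers library for the embedding step, a handful of in-memory documents standing in for whatever actually sits in a warehouse like Snowflake or Databricks, and illustrative helper names (retrieve, build_prompt) that aren’t tied to any particular vendor’s product.

```python
# Minimal RAG sketch: embed a few documents, retrieve the ones closest to a
# question, and assemble the augmented prompt that would be handed to a
# (smaller, customized) language model. Everything here is illustrative.

import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-ins for data exported from a warehouse such as Snowflake or Databricks.
documents = [
    "Q3 support tickets rose 12% after the billing system migration.",
    "The enterprise plan includes role-based access control and audit logs.",
    "Refunds are processed within five business days of approval.",
]

# Embedding: convert each document into a numerical vector.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # a small, commonly used embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec  # cosine similarity, since the vectors are normalized
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(question: str) -> str:
    """Prepend the retrieved context to the question before it goes to a model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return f"Use the context below to answer.\n\nContext:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    # The assembled prompt is what would be sent to whichever language model
    # the company actually runs; that call is omitted here.
    print(build_prompt("How long do refunds take?"))
```

In production the vectors would typically live in a vector store rather than a numpy array, and the final prompt would go to whatever customized smaller model the company runs, but the shape of the pattern is the same: embed, retrieve, and stuff the prompt.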