Here's this week's free edition of Platformer: a look at OpenAI on its 10th birthday, the release of ChatGPT 5.2, and growing evidence that the company has traded its signature weirdness for Silicon Valley normal. Which is to say, a focus on engagement and user growth above all.

Did you get value out of Platformer this year? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about how Grok's porn companion is rated for children 12+ in Apple's App Store. Plus you'll be able to discuss today's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice.
This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here.

OpenAI turned 10 today. For most of its life, the company has been defined by its weirdness.

There was the weird corporate structure — the world's most valuable startup, tucked somehow inside a nonprofit organization. There was the tumultuous corporate history, with Sam Altman's now-legendary firing and quick re-hiring. There was the series of high-profile departures, with Altman's top lieutenants regularly leaving in frustration to found their own multi-billion-dollar AI ventures. And there was the unprecedented promise that the company would spend more than a trillion dollars on building infrastructure to serve its clients, long before such demand arrives.

Perhaps weirdest of all, though, was the series of promises the company made when it was founded. There was the mission to "ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity." (At the time of its founding in 2015, the suggestion that AGI would soon be possible was itself seen as quite weird.) And there were the promises it once made to achieve that mission, including that if a rival lab came close to safely achieving AGI, OpenAI would stop its own work and help that lab instead.

By the end of 2025, though, much of that weirdness has faded. The company has converted its for-profit arm from a "capped-profit" enterprise to a more normal one. It has a steady leader in Altman, who is building out a growing roster of seasoned corporate deputies, including most recently former Slack CEO Denise Dresser as chief revenue officer. (She is expected to push hard into enterprise sales, where Anthropic has gained an advantage.) And while the company continues to acknowledge the peril that powerful AI models will bring, over the past year it has shifted its focus to much more normal business risks: that revenue growth will slow; that engagement will decline; that a competitor will steal market share.

All of this has been evident in the run-up to ChatGPT 5.2, which OpenAI released today. It comes out a little over a week after Altman declared a "code red" at the company, instructing employees to put more focus on the core ChatGPT experience and to delay work on ads, e-commerce agents, and its Pulse daily news digest. Altman is concerned about the rising popularity of Google's Gemini, which grew its monthly user base from 450 million in July to 650 million in October, while ChatGPT usage has plateaued since the summer. Nick Turley, who leads ChatGPT, told employees in an October memo that OpenAI aimed to increase daily users by 5 percent by the end of the year. (At the time, the company's threat level was still set only to "code orange.")

The good news for OpenAI is that 5.2 seems to be a meaningful improvement over its predecessor. In its launch post, OpenAI says GPT‑5.2 Thinking beats or ties top human professionals on 70.9 percent of comparisons in GDPval, a benchmark of knowledge‑work tasks across 44 occupations. It also reports new highs on coding and reasoning, and says the model is 38 percent less prone to hallucinating on factual questions.

But no model should be evaluated on benchmarks alone. And ChatGPT's models, which remain the most widely used of their kind, deserve special scrutiny given the way past iterations have led users into delusion, self-harm, and other worrisome states.
When GPT-4o led to a sharp rise in mental health issues among users earlier this year, OpenAI dialed down various engagement knobs to make the model less sycophantic. But the fix had the side effect of reducing engagement on the platform, hurting the company's business prospects. And so OpenAI has spent much of the latter half of the year working out how to boost engagement without returning its user base to the glazed-and-confused state it was in this spring.

These are essentially science experiments on live human beings, and when they go wrong, they can end in tragedy. (Just today, a new wrongful death suit was filed against the company by the estate of a woman who was killed by her son in August after he had a series of delusional conversations with ChatGPT. The son died by suicide. "The main factor was that he was isolated and only talked to an AI that affirmed every thought he had," the killer's son told the Wall Street Journal.)

It's plausible that 5.2 represents an improvement on these fronts. The model's system card states that it offers improved responses on prompts related to suicide, self-harm, and mental health. OpenAI says it has achieved this while still reducing the rate at which ChatGPT erroneously refuses to respond to a user. (In recent ask-me-anything sessions with Altman, it's over-refusals — and not safety — that seem to have upset ChatGPT users the most.)

But it also seems clear that there are real tradeoffs between making a model safer and making it the kind of app that hundreds of millions of people feel compelled to use every day. And increasingly, growth and engagement are winning arguments at OpenAI headquarters.

You saw it in the chaotic release of Sora. The company's decision to release an infinite video feed app, optimized in the same fashion that TikTok and Instagram are, marked a significant departure from the humanitarian mission its founders once laid out; Altman acknowledged that, among other things, it was a moneymaking venture designed to fund OpenAI's enormous costs. Another executive said Sora released with fewer copyright protections than rightsholders expected because doing so made it more competitive. These are the kinds of decisions that lie along the path toward engagement.

And it seems clear that elements of this thinking are coming to ChatGPT as well. OpenAI apps chief Fidji Simo told reporters today that the company's "adult mode," which will allow the generation of sexually explicit erotica and roleplay, is coming in the first quarter of next year.

Adults should be allowed to use chatbots for adult purposes. But you don't have to be a Puritan to worry about the consequences of a generation of lonely people becoming dependent on, and increasingly isolated by, a chatbot they're paying $20 or more a month to use. Particularly when, by the company's own estimates, millions of people are already developing unhealthy relationships with a chatbot that to date has not been tuned for engagement.

The reason we still call OpenAI and a handful of rivals "labs" is that they were born as research organizations, indifferent to commercial imperatives and more interested in the sci-fi possibilities, good and bad, that artificial intelligence could make possible. Ten years in, hundreds of millions of people now use tools every day that in 2015 would have qualified as sci-fi. And the company continues to flag new potential dangers along the way. (Its next models will carry a high risk of enabling cyberattacks, it said on Wednesday.)
Still, in aggregate, the company's actions suggest that most days it is more worried about user churn than about catastrophe. One of the best things about OpenAI's weird era was how weirdly focused the company was on preventing harm. Ten years in, though, OpenAI looks increasingly normal. And should the company stay that course, the consequences will be serious — and strange.

Elsewhere in OpenAI: Disney and OpenAI announced a partnership under which Disney characters will be available in Sora. Disney will invest $1 billion in OpenAI and will "become a major customer" of the company. The companies have created a joint steering committee to supervise how Disney characters are used, and OpenAI is renewing a commitment to "age-appropriate policies." (Disney also sent a cease-and-desist letter to Google, accusing the company of using AI to commit copyright infringement on a "massive scale.") Meanwhile, ChatGPT is the most downloaded free iPhone app in the US this year. New apps for Adobe products launched directly in ChatGPT. (Users can ask ChatGPT to edit photos in Photoshop and more.)

On the podcast this week: Kevin and I talk about the implications of Australia's ban on teenagers under 16 using social apps. Then, blogger Andy Masley joins us to discuss his crusade to separate fact from fiction on AI water use. And finally, we present your Hard Fork Wrapped.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Sponsored

Redefine your dream job

Did you know that you don't have to 'follow your passion' if you want to have a fulfilling career? After reviewing over 60 studies on what makes for a dream job, 80,000 Hours found that most of the common advice — like looking for work that pays well and isn't stressful — doesn't hold up to the evidence. So what does?
The 80,000 Hours research-driven career guide explains that to have a satisfying career, you should do work that feels meaningful because it contributes to helping others. You don't have to follow the conventional path of being a doctor, teacher, or charity worker if you want to do good. This free guide is full of concrete, practical advice that aims to help you create a career plan you feel confident in, and it draws on over ten years of research. Get the free guide.

Following
What happened: Fun details are leaking out of Meta as the company attempts to build superintelligence with a ragtag crew of billionaires and centi-millionaires. Most notably, the company may be planning an about-face on its years-long campaign to promote open-source AI.

Bloomberg reports that a model nicknamed "Avocado" has been training for a few months. To train Avocado, the team has been "distilling": training the model to imitate competitors' open-source models, including OpenAI's gpt-oss, Google's Gemma, and Alibaba's Qwen. Meta's new AI chief Alexandr Wang, whose services Zuckerberg paid an exorbitant amount to acquire a few months ago, is reportedly an advocate of closed-source development. That may mean that Avocado emerges as a closed model, possibly available via paid subscription.

Meanwhile, the New York Times reported that TBD Labs, Meta's elite AI division, is clashing with other Meta divisions. Wang has reportedly butted heads with Zuckerberg lieutenant Chris Cox, while TBD Labs researchers feel that Meta executives are too fixated on using AI to boost short-term revenue instead of the larger project of "superintelligence." The company says everyone is getting along just fine. We'll see!

Why we're following: "How are things going over at Meta's incredibly expensive new AI division?" is simply one of the most compelling questions in the world, to us, and no morsel of gossip on the subject is too small to enjoy. More seriously, while Meta has long said that it plans to build a mix of open- and closed-source models, shifting most of its efforts to closed-source would represent a major shift for the company — and also a huge new challenge. (One of the main things people liked about Llama was that it is free. If you're asking people to pay, you have to be able to credibly say that you're as good as the other guys.)

What people are saying: Engineer and blogger Gergely Orosz speculated about Meta's intentions: "I always thought Meta's goal was to devalue the closed models OpenAI and Anthropic has by offering an open" model, which people could use for free, to "undercut them," he posted on X. "If Meta is going closed model they gave up on this strategy."

Wharton professor Ethan Mollick, meanwhile, argued in an X post that the case for Meta's open-source plan wasn't that strong to begin with. "A lot of discussion on open weights models seems to assume there is a clear incentive for building them. I don't see how [that] is the case." While open-source software has clear advantages, open-weight language models don't, Mollick argued. Unlike in an open-source software project, you "can't rely on volunteer effort to provide compute or do training runs."

—Ella Markianos

Side Quests

So many things happened over the past few days. Sorry!

Trump signed an executive order seeking to stop state-level AI regulation. (I imagine we'll have more on this in the days ahead.) The administration also issued federal procurement guidelines seeking to ban "biased" models.

Time's 2025 Person of the Year is "The Architects of AI."

New York Governor Kathy Hochul signed two new AI laws: one that requires ads to disclose use of synthetic performers, and another that requires permission from heirs or executors to use someone's likeness for commercial purposes after their death.

McDonald's pulled down an AI-generated holiday ad that called the holidays the "most terrible time of the year," after commenters mocked it online.
NBC found that AI kids' toys give instructions about lighting matches, say sexually explicit things, and repeat CCP talking points.

Google released an update to its Deep Research agent that uses its best model, Gemini 3 Pro.

Google DeepMind announced plans for a new materials science lab in the UK, which it calls an "automated science laboratory."

Google introduced GenTabs, a tool that makes new web apps from your tabs and context using Gemini 3. (I want to try this.)

Google launched an under-$5 subscription plan in India in an attempt to compete with OpenAI's India offering, ChatGPT Go.

Sources say Google will be hit with an EU fine if it doesn't change Play Store restrictions that limit competitors.

Google rolled out Preferred Sources, a feature that shows you more stories from publications you've starred. Google is also testing new AI article overviews on some partner publications' Google News pages, paying them for the certain loss of traffic the overviews will cause. (A real scorpion-and-the-frog situation going on here!)

Fortnite is back on the Google Play Store after Google complied with a US District Court injunction.

Dozens of apps from companies under US government sanctions have remained available in Apple and Google's app stores, according to a new report.

Apple CEO Tim Cook met with US lawmakers to lobby against the App Store Accountability Act, a child safety bill that would require app stores to verify users' ages.

Apple lost an appeal of a contempt of court ruling in its legal battle with Epic Games, but won an opportunity to argue that it should be able to collect some sort of commission from app developers.

Reddit started testing verification badges for notable people and businesses.

Runway released its first world model, joining a growing number of companies investing in AI that simulates the physical world to help models learn.

Meta announced a Facebook app redesign that focuses on Friends and Marketplace. Meta also dropped a "Your Algorithm" feature, designed to help users understand and customize their Reels recommendations.

Former Meta executive and UK deputy prime minister Nick Clegg joined London-based venture firm Hiro Capital as a general partner, while Meta AI scientist Yann LeCun joined as an advisor.

A study showed that 15 TikTok accounts posting sexualized AI videos of underage girls have a total of 300,000 followers. (TikTok responded that 14 of the 15 accounts did not violate its rules.) TikTok also announced a live podcast series featuring major artists.

Spotify is testing Prompted Playlists, a new AI playlist feature that lets you write longer prompts with more specific instructions.

The official standards specification for RSL 1.0, a pay-to-scrape AI licensing standard, was released.

A startup called "Operation Bluebird" filed a formal petition asking the US Patent and Trademark Office to cancel X's trademark of "Twitter," citing "abandonment."

DeepSeek is reportedly developing its next AI model using thousands of smuggled Nvidia Blackwell chips.

Large parts of the Department of Homeland Security are being redirected to work on ICE deportations, including the department's extensive digital surveillance resources. US Customs and Border Protection filed a proposal to scrutinize up to five years of foreign tourists' social media history, in the latest act of censorship from the censorship administration.

3 in 10 teens use chatbots daily, a new Pew Research Center study shows.

Australia's eSafety commissioner said under-16s who have gotten past the country's social media ban will be "booted off" in time.
Porn traffic is down in the UK since the start of age verification. VPN use, which can let users bypass age restrictions, is up, but by a comparatively smaller amount.

Hinge CEO Justin McLeod stepped down.

A coordinated bot campaign spread outlandish accusations that Taylor Swift's latest album contained secret Nazi messages.

On Instacart, shoppers buying identical items from the same store often get different prices. Will Fidji Simo bring this energy to ChatGPT?

Mobile users see Copilot as "a conversational partner" rather than a productivity tool, a Microsoft analysis of users' chats shows.

Microsoft, Providence Genomics, and the University of Washington released GigaTIME, a cutting-edge AI tool for analyzing cancer tumors.

Those good posts

For more good posts every day, follow Casey's Instagram stories.

(Link) (Link) (Link)

Talk to us

Send us tips, comments, questions, and 5.2 thoughts: casey@platformer.news. Read our ethics policy here.