Here's this week's free edition of Platformer: a look at the tech industry's most-hated piece of state legislation, and the heated debate at its core. Do you value independent reporting on government and platforms? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about the dismantling of the Stanford Internet Observatory. Plus you'll be able to discuss today's edition with us in our chatty Discord server, and we’ll send you a link to read subscriber-only columns in the RSS reader of your choice.
California's controversial bill to regulate the artificial intelligence industry, SB-1047, passed out of the Assembly Appropriations Committee on Thursday. If it passes the full Senate by the end of the month, it will head to Gov. Gavin Newsom for his signature. Today let’s talk about what it could mean for Meta, Google, Anthropic, and the other leading AI companies that call California home.

If an AI causes harm, should we blame the AI — or the person who used the AI? That’s the question that runs through the debate over SB-1047, and through the larger debate over how to regulate the technology.

We saw a practical example this week when X released the second generation of its AI model, Grok, which has an image generation feature similar to OpenAI’s DALL-E. X is known for its laissez-faire approach to content moderation, and the new Grok is no exception. Users quickly put the text-to-image generator through its paces — and, as Adi Robertson found at The Verge, Grok will make just about anything.

“Subscribers to X Premium, which grants access to Grok, have been posting everything from Barack Obama doing cocaine to Donald Trump with a pregnant woman who (vaguely) resembles Kamala Harris to Trump and Harris pointing guns,” she writes, before citing several more examples of violent or edgy images that Grok created. (“Bill Gates sniffing a line of cocaine from a table with a Microsoft logo,” for example.)

One possible response is to get mad at Grok for creating the images. Another, conveyed with some deft sarcasm by this X user, is to suggest we should instead get mad at the person who prompted them.

This kind of question is almost as old as the web. In the 1990s, internet service providers like Prodigy and CompuServe faced lawsuits related to potentially libelous material that their users had posted. Congress included Section 230 in the Communications Decency Act to specify that in most cases tech companies cannot be held legally liable for what their users post. In that case, Congress decided that we should get mad at the person rather than the technology. And we’ve been fighting about it ever since.

Tech companies would love to see a kind of Section 230 for AI, shielding them from liability for what their users do with their AI tools. But California’s bill takes the opposite approach, putting the onus on tech companies to assure the government that their products won’t be used to cause harm.

SB-1047 has some widely accepted provisions, such as adding legal protections for whistleblowers at AI companies and studying the feasibility of building a public AI cloud that startups and researchers could use. More controversially, it requires makers of large AI models to notify the government when they train a model that exceeds a certain computing threshold and costs more than $100 million. It allows the California attorney general to seek an injunction against companies that release models the AG considers unsafe. And it requires that large models have a “kill switch” that lets developers shut them down in case of danger.

SB-1047 was introduced in February by Sen. Scott Wiener, D-San Francisco. Wiener released an outline of the bill last September and says he has been gathering feedback from the industry and other stakeholders ever since. The bill passed out of the Senate’s privacy committee in June, and since then tech companies have become increasingly vocal about the risks they argue it poses to the nascent AI industry.
On Thursday, before the bill passed out of the Assembly’s appropriations committee, the industry won some significant concessions. The bill no longer enables the AG to sue companies for negligent safety practices before a catastrophic event occurs; it no longer creates a new state agency to monitor compliance; and it no longer requires AI labs to certify their safety testing under penalty of perjury. (AI companies had been warning loudly that the bill would result in startup founders being thrown in jail.)

The bill also no longer requires “reasonable assurance” from developers that their models won’t cause harm. (Instead, they must only take “reasonable care.”) And amid widespread fears that the bill would chill the development of open-source models, it was amended to exempt anyone who spends less than $10 million to fine-tune an open-source AI model from its other requirements.

“We accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” Wiener told TechCrunch. “These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation.”

Despite those changes, the bill still faces significant criticism — and not all of it comes from the tech industry. Shortly before the bill’s passage out of committee on Thursday, a group of eight Democratic members of Congress from California wrote a letter to Gov. Gavin Newsom urging him to veto the bill in its then-current form.

The lawmakers, led by Rep. Zoe Lofgren, write that they support a wide variety of AI regulations — but that the bill goes too far in asking tech companies to predict how people will use their models. “Not only is it unreasonable to expect developers to completely control what end users do with their products, but it is difficult if not impossible to certify certain outcomes without undermining the rights of end users, including their privacy rights,” they write. Moreover, they write, the bill could prompt AI companies to move out of California or stop releasing their models here. (Meta recently decided not to release multimodal AI models in Europe over similar rules, they note.)

Wiener’s bill also has some prominent backers, including two of the godfathers of AI, Geoffrey Hinton and Yoshua Bengio. Hinton and Bengio are among those who believe we must put strong safeguards in place now, before next-generation AI models arrive and potentially wreak havoc. But they have been countered by dozens of other academics, who published a letter arguing that the bill will interfere with their academic freedom and hamper research efforts.

Ultimately, I suspect lawmakers will regulate both AI and the people who use it. But I’m sympathetic to the members of Congress who find SB-1047 to be — if nothing else — premature. Today’s models have shown no sign of creating catastrophic harm, and President Biden’s executive order from last year should provide at least some defense against worst-case scenarios in the near term if next-generation models prove to be much more capable than today’s. And in any case, it seems preferable to regulate AI once at the national level than to encourage 50 states to experiment with their own risk models.
In the meantime, Lofgren notes, California is considering more than 30 other AI bills this term, including much more urgent and focused efforts to restrict the creation of synthetic, nonconsensual porn and to require disclosures when AI is used to create election ads. “These bills have a firmer evidentiary basis than SB 1047,” Lofgren writes. And given the continued opposition to Wiener’s bill, I suspect they may also have higher odds of Newsom signing them into law.

On the podcast this week: Kevin and I debate whether Elon Musk's attempts to get Trump elected are working. Then, former Microsoft CEO Steve Ballmer stops by to explain how he's trying to improve policy debates with USA Facts. And finally, it's time for This Week in AI.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Governing

- The Harris campaign trimmed Trump’s post about why his voice sounded strange in his interview with Elon Musk, cutting a line about his release of a “perfect” recording of the conversation. Perhaps a minor sin in the grand scheme of things, but it's worth noting how common this sort of thing is becoming in political campaigns. (Lauren Feiner / The Verge)
- A look at the Gen Z staffers on Harris' campaign and how they’re building a viral machine. (Betsy Klein, Camila DeChalus, Way Mullery and Curt Merrill / CNN)
- Harris has spent 10 times as much as Trump on advertising on Google and Meta, this analysis found. (Peter Andringa, Alex Rogers and Sam Learner / Financial Times)
- An Iranian hacker group, APT42, which reportedly works for Iran’s Revolutionary Guard Corps, targeted both the Trump and Biden campaigns this spring, Google says. (Andy Greenberg / Wired)
- Biden told creators that they are trusted by audiences in a way that traditional outlets are not. His remarks came at the first ever White House Creator Economy Conference. (Amanda Silberling / TechCrunch)
- Most Gen Z voters oppose regulations that impose restrictions on social media, a new study finds. (Aisha Counts / Bloomberg)
- A look at how scammers are using Elon Musk deepfakes to make it seem like Musk is endorsing investment opportunities. The fake ads seem to be sadly effective. (Stuart A. Thompson / New York Times)
- A look at Google’s “affirmative litigation” legal strategy to deter hackers and scammers. (Paresh Dave / Wired)
- A US district judge recused himself from X’s lawsuit against an ad group after his investments in Tesla and defendant Unilever drew scrutiny. Great reporting from NPR led to this one. (Bobby Allyn / NPR)
- Meta shut down CrowdTangle, a tool that academics used to research misinformation, despite backlash from researchers and lawmakers. It should have remained available through the election, which would also have given Meta more time to improve its still-quite-partial replacement. (Dara Kerr / NPR)
- The judge on the Epic v. Google case says that he will “tear the barriers down,” and that the world that is a “product of monopolistic conduct” today is changing. (Sean Hollister / The Verge)
- San Francisco city attorney David Chiu filed a lawsuit seeking to shut down 16 websites that create deepfake porn from real images of people. Good! (Heather Knight / New York Times)
- Researchers at MIT and other institutions released the AI Risk Repository, a database that can help classify the types of AI risks to aid in research and regulations. (Ben Dickson / VentureBeat)
- The FTC now prohibits the buying and selling of fake reviews and product testimonials, with violators facing possible fines of up to $50,000. Good! (Danny Gallagher / Engadget)
- A look at how companies like Google, Amazon and Meta are proposing changes to climate laws that would allow them to hide their actual emissions numbers. (Kenza Bryan, Camilla Hodgson and Jana Tauschinski / Financial Times)
- California driver’s licenses will soon be supported in Apple and Google wallets. (Aisha Malik / TechCrunch)
- European iPhone users will soon see in-app pricing information on Spotify in an update that Apple previously blocked. (Jess Weatherbed / The Verge)
Industry

Those good posts

For more good posts every day, follow Casey’s Instagram stories. (Link) (Link) (Link)

Talk to us

Send us tips, comments, questions, and AI legislation: casey@platformer.news.