Here's this week's free column: a look at the surprising news over the weekend that Telegram founder Pavel Durov has been arrested in France. Do you value independent reporting on government and platforms? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about the dismantling of the Stanford Internet Observatory. Plus you'll be able to discuss each edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice.
By now you've surely heard that Telegram founder Pavel Durov is in French custody. On Saturday, the Russia-born billionaire was arrested after landing in a private plane at Le Bourget airport outside Paris. Durov is now being questioned as part of a wide-ranging investigation into criminal activity on the platform, and under French law can be held until at least Wednesday. If he is charged with a crime, though, he could be detained much longer.

I've spent the past few days talking with sources who work on and study tech policy and international law, along with those who are fighting the spread of child sexual abuse material (CSAM) on digital platforms. Today let's try to answer three big questions raised by the arrest: Why was Durov arrested? Why is Telegram under investigation? And what does France's move here portend for platforms more generally?

Many important details about the case remain secret, and most sources I spoke with declined to comment on the charges before evidence is made public. But to anyone who has followed Telegram's story over the past few years, the events of the past few days appear to have been all but inevitable.

Why was Durov arrested?
On Monday, Laure Beccuau, a prosecutor in Paris, issued a statement saying that Durov had been arrested as part of an investigation, opened on July 8 against an unnamed person, into a dozen potential charges, including complicity in spreading CSAM, complicity in drug trafficking, money laundering, and refusing to cooperate with law enforcement. The statement also suggests that Telegram improperly used cryptography.

Notably, these charges do not appear to stem from the European Union's recently passed Digital Services Act, as some had speculated. Rather, they seem to be based on French law.

It's unclear who the person under investigation is. But it would be strange if any investigation into how Telegram enables cybercrime did not center on the company's CEO.

Durov, who holds French citizenship, has long championed user privacy; he says he fled Russia after refusing to turn over user data from his previous company, the Facebook clone VKontakte, to the country's security services. At the same time, Telegram's aversion to content moderation has been hugely lucrative for the company and for Durov personally. Durov is a billionaire, and Telegram is said to be nearing profitability on hundreds of millions of dollars in revenue; he bragged to the Financial Times earlier this year that each user costs him only 70 cents a year to support. Ramping up the teams necessary to respond to law enforcement requests, disrupt networks of CSAM traders and other criminals, and address other content moderation needs would eat into Durov's profit margin, and potentially disrupt his planned initial public offering.

None of that has stopped him from becoming an overnight free speech martyr in some corners of the internet, particularly among the richest cohort of X users. And indeed, it is startling to see the CEO of a social platform arrested in any context, particularly when the arrest may be connected to content posted by the platform's users rather than by the CEO himself.

"Telegram abides by EU laws," the company posted on X. "Its moderation is within industry standards and constantly improving. Telegram's CEO Pavel Durov has nothing to hide and travels frequently in Europe. It is absurd to claim that a platform or its owner are responsible for abuse of that platform. … We're awaiting a prompt resolution of this situation."

But the claim that Telegram's moderation is "within industry standards" seems obviously false. In some important ways, Telegram stands alone among its peers.

Daphne Keller, an expert on platform legal liability at the Stanford Cyber Policy Center, noted that there is a history of prosecuting platform owners when they host criminal activity. "I am usually one of the people making noise about free expression consequences when lawmakers go overboard regulating platforms," Keller wrote on LinkedIn. "Possibly this will turn out to be one of those cases. But so far, I don't think so."

Why is Telegram under investigation?
Launched in 2013, Telegram is a messaging app that boasts more than 900 million users. Over time, it has added public content and other features familiar from social networks. Some of its public "channels" have millions of followers, all of whom can repost content from those channels into their own group chats.

Officially, Telegram's terms of service prohibit users from posting illegal pornographic content or promotions of violence on public channels. But as the Stanford Internet Observatory noted last year in an analysis of how CSAM spreads online, these terms implicitly permit users to share CSAM in private channels as much as they want.

"There's illegal content on Telegram. How do I take it down?" asks a question on Telegram's FAQ page. The company declares that it will not intervene under any circumstances: "All Telegram chats and group chats are private amongst their participants," it states. "We do not process any requests related to them."

Telegram is often described as an "encrypted" messenger. But as Ben Thompson explains today, Telegram is not end-to-end encrypted, as rivals WhatsApp and Signal are. (Its "secret chat" feature is end-to-end encrypted, but it is not enabled by default, and the vast majority of chats on Telegram are not secret chats.) That means Telegram can read the contents of private messages, making it vulnerable to law enforcement requests for that data.

Anticipating those requests, Telegram created a kind of jurisdictional obstacle course for law enforcement, one that (it says) no agency has successfully navigated so far. From the FAQ again:

To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data. […] To this day, we have disclosed 0 bytes of user data to third parties, including governments.
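Telegram has never published the details of this scheme, but the idea it describes, splitting a decryption key into parts so that no single data center (or jurisdiction) can unlock anything on its own, resembles textbook secret sharing. Here is a minimal sketch of that general technique in Python; everything in it (the XOR-based scheme, the three-share setup, the function names) is my own illustration, not Telegram's actual implementation:

```python
import secrets

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """Split a key into n XOR secret shares.

    Every share is required to reconstruct the key; any subset of
    fewer than n shares is indistinguishable from random bytes and
    reveals nothing about the key on its own.
    """
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def reconstruct_key(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    key = shares[0]
    for share in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key

# A 256-bit data-encryption key, split across three hypothetical jurisdictions.
key = secrets.token_bytes(32)
shares = split_key(key, 3)
assert reconstruct_key(shares) == key
assert reconstruct_key(shares[:2]) != key  # a partial set of shares is useless
```

In a scheme like this, a court order served in one jurisdiction yields only a share that looks like random noise; reconstructing the key requires compelling every holder, which is presumably the point of the design Telegram describes.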
As a result, investigation after investigation finds that Telegram is a significant vector for the spread of CSAM. (To take only the most recent example, here's one from India's Decode last month, which like others found that criminals often advertise their wares on Instagram and direct buyers to Telegram to complete their purchases.)

The French investigation appears to have a wide scope, including a focus on Telegram's use of encryption that bears close scrutiny. (Encryption has been under global assault since I began writing this newsletter.) At the moment, it's unclear what crimes will be alleged, and which will have evidence to support them. It's entirely possible that French prosecutors could overreach.

At the same time, child safety advocates have been begging authorities to do something about Telegram for years now. The company's refusal to answer almost any law enforcement request, no matter how dire, has enabled some truly vile behavior.

"Telegram is another level," Brian Fishman, Meta's former anti-terrorism chief, wrote in a post on Threads. "It has been the key hub for ISIS for a decade. It tolerates CSAM. It's ignored reasonable [law enforcement] engagement for YEARS. It's not 'light' content moderation; it's a different approach entirely."

What does Durov's arrest mean for other platforms?
We don't know quite yet.

On one hand, platforms have faced increasing pressure for years now to crack down on speech globally and to cooperate more with law enforcement agencies, including by breaking end-to-end encryption. Building platforms responsibly means reaching compromises with law enforcement that allow for the investigation of serious crimes while protecting users' privacy to the greatest extent possible. It's a difficult, expensive dance, and when it's working right, neither the platforms nor the law enforcement agencies are fully satisfied with the outcome.

A worrisome potential outcome of a French prosecution of Telegram, assuming one comes, is that it emboldens countries around the world to criminally prosecute platform CEOs for failing to turn over user data. We have already come worryingly close to this reality, and I have covered it closely here over the years: India and Russia were among the first countries to use so-called "hostage-taking laws" to threaten platform employees with jail over content moderation decisions, but many more have followed since.

On the other hand, Telegram really does seem to be actively enabling a staggering amount of abuse. And while it's disturbing to see state power used indiscriminately to snoop on private conversations, it's equally disturbing to see a private company declare itself to be above the law. Given its behavior, a legal intervention into Telegram's business practices was inevitable. But the end of private conversation, and of end-to-end encryption, need not be. Fending off onerous speech regulations and overzealous prosecutors requires that platform builders act responsibly. Telegram never even pretended to.

Governing

- A small group of Iranian-linked hackers who posed as tech support on WhatsApp to target people affiliated with Biden and Trump has been blocked, Meta said. (Shannon Bond / NPR)
- The Democratic Party is seemingly doubling down on tech antitrust, according to its official platform, while Kamala Harris’ stance remains unclear. (Lauren Feiner / The Verge)
- An analysis of Elon Musk’s posts depicts his evolution from avoiding politics to a vocal Trump supporter. (Andrea Fuller, Alexa Corse, John West and Kara Dapena / Wall Street Journal)
- An appeals court ruled that Section 230 couldn't block part of a lawsuit alleging that Yolo, an anonymous messaging service, misrepresented its terms of service, which pledged to unmask harassers. (Adi Robertson / The Verge)
- OpenAI said it supports a California bill that would require companies to label AI-generated content. (Anna Tong / Reuters)
- Chinese entities are reportedly using Amazon's cloud services, and those of its rivals, to access advanced US chips and AI capabilities that are otherwise restricted. (Eduardo Baptista, Fanny Potkin and Karen Freifeld / Reuters)
- Israel is funding an online advertising campaign to discredit and defund UNRWA, the United Nations' top aid agency for Palestinians in Gaza. (Paresh Dave / Wired)
- Drones equipped with AI are now being tested to detect landmines in Ukraine. (Lara Jakes / New York Times)
- Nepal’s new prime minister, K.P. Sharma Oli, overturned the country’s TikTok ban. (Bhadra Sharma / New York Times)
Industry

- OpenAI hired Irina Kofman, a former Meta executive, to lead strategic initiatives. (Shirin Ghaffary / Bloomberg)
- Google is reportedly appointing Noam Shazeer, former CEO of Character.AI, as the co-technical lead of Gemini. (Erin Woo / The Information)
- To get stock grants, X employees have to write a one-page essay to leadership outlining how they’ve contributed to the company, Elon Musk reportedly said in an email. (Kylie Robison / The Verge)
- Actress Jenna Ortega said she deleted X, then known as Twitter, when she was 14, after seeing explicit AI images of herself on the platform. (Angela Yang / NBC News)
- Meta has reportedly decided not to move forward with a high-end mixed reality headset that could have competed with the Vision Pro. (Sylvia Varnham O’Regan and Wayne Ma / The Information)
- Threads said it is testing posts that disappear after 24 hours with a limited number of users. Users should be able to nominate posts to disappear! (Ivan Mehta / TechCrunch)
- Apple, a mainstream pioneer of podcasting, has lost its lead to YouTube and Spotify. (Ashley Carman / Bloomberg)
- Apple’s iPhone 16 launch event is set to take place in September. (Jay Peters / The Verge)
- Amazon's subscription-based AI Alexa is now reportedly set to launch in October, after delays. (Caroline O'Donovan / Washington Post)
- Reddit is courting advertisers by targeting users’ interests instead of their personal data. (Aisha Counts / Bloomberg)
- Midjourney's AI image generation website is now open to everyone, with up to 25 free images. (Lance Whitney / ZDNet)
- Inflection is capping free access to its AI chatbot Pi. (Maxwell Zeff / TechCrunch)
- Anthropic has published the system prompts for its latest models, including Claude 3.5 Sonnet. (Kyle Wiggers / TechCrunch)
- AI influencers have increasingly fallen for hoaxes and scams from anonymous accounts making grand claims about the future of AI. (John Herrman / The Intelligencer)
- Former Google AI researchers who have launched startups are reportedly struggling with tensions among founders and investors. (Stephanie Palazzolo and Rocket Drew / The Information)
- Newspaper publisher Gannett is reportedly shutting down its product review site Reviewed following accusations of posting AI-generated reviews. (Mia Sato / The Verge)
Those good posts

For more good posts every day, follow Casey's Instagram stories.

Talk to us

Send us tips, comments, questions, and Telegram defenses: casey@platformer.news.