Crypto-Gram
June 15, 2025
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram's web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
** *** ***** ******* *********** *************
In this issue:
If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.
- Communications Backdoor in Chinese Power Inverters
- The NSA’s "Fifty Years of Mathematical Cryptanalysis (1937–1987)"
- DoorDash Hack
- More AIs Are Taking Polls and Surveys
- The Voter Experience
- Signal Blocks Windows Recall
- Chinese-Owned VPNs
- Location Tracking App for Foreigners in Moscow
- Surveillance Via Smart Toothbrush
- Why Take9 Won’t Improve Cybersecurity
- Australia Requires Ransomware Victims to Declare Payments
- New Linux Vulnerabilities
- The Ramifications of Ukraine’s Drone Attack
- Report on the Malicious Uses of AI
- Hearing on the Federal Government and AI
- New Way to Covertly Track Android Users
- Airlines Secretly Selling Passenger Data to the Government
- Paragon Spyware Used to Spy on European Journalists
- Upcoming Speaking Engagements
** *** ***** ******* *********** *************
Communications Backdoor in Chinese Power Inverters
[2025.05.16] This is a weird story:
U.S. energy officials are reassessing the risk posed by Chinese-made devices that play a critical role in renewable energy infrastructure after unexplained communication equipment was found inside some of them, two people familiar with the matter said.
[...]
Over the past nine months, undocumented communication devices, including cellular radios, have also been found in some batteries from multiple Chinese suppliers, one of them said.
Reuters was unable to determine how many solar power inverters and batteries they have looked at.
The rogue components provide additional, undocumented communication channels that could allow firewalls to be circumvented remotely, with potentially catastrophic consequences, the two people said.
The article is short on fact and long on innuendo. Both more details and credible named sources would help a lot here.
** *** ***** ******* *********** *************
The NSA’s "Fifty Years of Mathematical Cryptanalysis (1937-1987)"
[2025.05.19] In response to a FOIA request, the NSA released “Fifty Years of Mathematical Cryptanalysis (1937-1987),” by Glenn F. Stahly, with a lot of redactions.
Weirdly, this is the second time the NSA has declassified the document. John Young got a copy in 2019. This one has a few fewer redactions. And nothing that was provided in 2019 was redacted here.
If you find anything interesting in the document, please tell us about it in the comments.
** *** ***** ******* *********** *************
DoorDash Hack
[2025.05.20] A DoorDash driver stole over $2.5 million over several months:
The driver, Sayee Chaitainya Reddy Devagiri, placed expensive orders from a fraudulent customer account in the DoorDash app. Then, using DoorDash employee credentials, he manually assigned the orders to driver accounts he and the others involved had created. Devagiri would then mark the undelivered orders as complete and prompt DoorDash’s system to pay the driver accounts. Then he’d switch those same orders back to “in process” and do it all over again. Doing this “took less than five minutes, and was repeated hundreds of times for many of the orders,” writes the US Attorney’s Office.
Interesting flaw in the software design. He probably would have gotten away with it if he’d kept the numbers small. It’s only when the amount missing is too big to ignore that the investigations start.
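The flaw, as described, is that the payout fires on every transition to "complete," with no memory of whether the order has already paid out. A toy model of the broken and fixed versions (hypothetical Python, nothing to do with DoorDash's actual systems):

```python
# Toy model of the payout flaw described above -- not DoorDash's actual code.
# The bug: payment is triggered by the "completed" transition alone, so
# flipping an order back to "in process" and completing it again pays
# out repeatedly.

class PaymentSystem:
    def __init__(self):
        self.payouts = []         # (order_id, amount) records
        self.paid_orders = set()  # fix: remember which orders already paid

    def on_completed(self, order_id, amount, idempotent=False):
        if idempotent and order_id in self.paid_orders:
            return False  # this order already paid once; refuse a repeat
        self.paid_orders.add(order_id)
        self.payouts.append((order_id, amount))
        return True

# Flawed flow: complete -> revert to "in process" -> complete again.
flawed = PaymentSystem()
for _ in range(3):
    flawed.on_completed("order-1", 500)            # pays every time

# Fixed flow: payout is idempotent, keyed by order ID.
fixed = PaymentSystem()
for _ in range(3):
    fixed.on_completed("order-1", 500, idempotent=True)  # pays once

print(len(flawed.payouts), len(fixed.payouts))  # 3 1
```

Making the payout idempotent per order ID would have capped the loss at one payment per order, no matter how many times the state was toggled.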
** *** ***** ******* *********** *************
More AIs Are Taking Polls and Surveys
[2025.05.21] I already knew about the declining response rates for polls and surveys. Now the percentage of survey responses generated by AI bots is increasing as well.
Solutions are hard:
1. Make surveys less boring.
We need to move past bland, grid-filled surveys and start designing experiences people actually want to complete. That means mobile-first layouts, shorter runtimes, and maybe even a dash of storytelling. TikTok- or dating-app-style surveys wouldn't be a bad idea -- or is that just me being too Gen Z?
2. Bot detection.
There’s a growing toolkit of ways to spot AI-generated responses -- using things like response entropy, writing style patterns or even metadata like keystroke timing. Platforms should start integrating these detection tools more widely. Ideally, you introduce an element that only humans can do, e.g., you have to pick up your prize somewhere in person. By the way, note that these bots can easily be designed to work around the most common detection tactics, such as CAPTCHAs, timed responses, and postcode and IP recognition. Believe me, far less code than you suspect is needed to do this.
3. Pay people more.
If you’re only offering 50 cents for 10 minutes of mental effort, don’t be surprised when your respondent pool consists of AI agents and sleep-deprived gig workers. Smarter, dynamic incentives -- especially for underrepresented groups -- can make a big difference. Perhaps pay-differentiation (based on simple demand/supply) makes sense?
4. Rethink the whole model.
Surveys aren’t the only way to understand people. We can also learn from digital traces, behavioral data, or administrative records. Think of it as moving from a single snapshot to a fuller, blended picture. Yes, it’s messier -- but it’s also more real.
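The "response entropy" signal mentioned in point 2 can be sketched in a few lines. This is a minimal illustration, assuming categorical answers (e.g., Likert-scale choices); the 0.5-bit threshold is arbitrary, and low entropy also flags sincere human straight-liners, so it could only ever be one signal among several:

```python
import math
from collections import Counter

def answer_entropy(answers):
    """Shannon entropy (in bits) of a respondent's answer distribution."""
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_suspicious(answers, threshold_bits=0.5):
    # Near-zero entropy means the same option was picked almost every
    # time: straight-lining, typical of low-effort bots (and some humans).
    return answer_entropy(answers) < threshold_bits

print(looks_suspicious([3] * 20))                  # True: identical answers
print(looks_suspicious([1, 5, 2, 4, 3, 2, 5, 1]))  # False: varied answers
```

As the quoted piece notes, a bot can trivially randomize its answers to defeat exactly this check, which is why detection has to combine many weak signals rather than rely on any single one.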
** *** ***** ******* *********** *************
The Voter Experience
[2025.05.22] Technology and innovation have transformed every part of society, including our electoral experiences. Campaigns are spending and doing more than at any other time in history. Ever-growing war chests fuel billions of voter contacts every cycle. Campaigns now have better ways of scaling outreach methods and offer volunteers and donors more efficient ways to contribute time and money. Campaign staff have adapted to vast changes in media and social media landscapes, and use data analytics to forecast voter turnout and behavior.
Yet despite these unprecedented investments in mobilizing voters, overall trust in electoral health, democratic institutions, voter satisfaction, and electoral engagement has significantly declined. What might we be missing?
In software development, the concept of user experience (UX) is fundamental to the design of any product or service. It’s a way to think holistically about how a user interacts with technology. It ensures that products and services are built with the users’ actual needs, behaviors, and expectations in mind, as opposed to what developers think users want. UX enables informed decisions based on how the user will interact with the system, leading to improved design, more effective solutions, and increased user satisfaction. Good UX design results in easy, relevant, useful, positive experiences. Bad UX design leads to unhappy users.
This is not how we normally think of elections. Campaigns measure success through short-term outputs -- voter contacts, fundraising totals, issue polls, ad impressions -- and, ultimately, election results. Rarely do they evaluate how individuals experience this as a singular, messy, democratic process. Each campaign, PAC, nonprofit, and volunteer group may be focused on their own goal, but the voter experiences it all at once. By the time they’re in line to vote, they’ve been hit with a flood of outreach -- spammy texts from unfamiliar candidates, organizers with no local ties, clunky voter registration sites, conflicting information, and confusing messages, even from campaigns they support. Political teams can point to data that justifies this barrage, but the effectiveness of voter contact has been steadily declining since 2008. Intuitively, we know this approach has long-term costs. To address this, let’s evaluate the UX of an election cycle from the point of view of the end user, the everyday citizen.
Specifically, how might we define the UX of an election cycle: the voter experience (VX)? A VX lens could help us see the full impact of the electoral cycle from the perspective that matters most: the voters’.
For example, what if we thought about elections in terms of questions like these?
- How do voters experience an election cycle, from start to finish?
- How do voters perceive their interactions with political campaigns?
- What aspects of the election cycle do voters enjoy? What do they dislike? Do citizens currently feel fulfilled by voting?
- If voters “tune out” of politics, what part of the process has made them want to not pay attention?
- What experiences decrease the number of eligible citizens who register and vote?
- Are we able to measure the cumulative impacts of political content interactions over the course of multiple election cycles?
- Can polls or focus groups help researchers learn about longitudinal sentiment from citizens as they experience multiple election cycles?
- If so, what would we want to learn in order to bolster democratic participation and trust in institutions?
Thinking in terms of VX can help answer these questions. Moreover, researching and designing around VX could help identify additional metrics, beyond traditional turnout and engagement numbers, that better reflect the collective impact of campaigning: of all those voter contact and persuasion efforts combined.
This isn’t a radically new idea, and earlier efforts to embed UX design into electoral work yielded promising early benefits. In 2020, a coalition of political tech builders created a Volunteer Experience program. The group held design sprints for political tech tools, such as canvassing apps and phone banking sites. Their goal was to apply UX principles to improve the volunteer user flow, enhance data hygiene, and improve volunteer retention. If a few sprints can improve the phone banking experience, imagine the transformative possibilities of taking this lens to the VX as a whole.
If we want democracy to thrive long-term, we need to think beyond short-term wins and table stakes. This isn’t about replacing grassroots organizing or civic action with digital tools. Rather, it’s about learning from UX research methodology to build lasting, meaningful engagement that involves both technology and community organizing. Often, it is indeed local, on-the-ground organizers who have been sounding the alarm about the long-term effects of prioritizing short-term tactics. A VX approach may provide additional data to bolster their arguments.
Learnings from a VX analysis of election cycles could also guide the design of new programs that not only mobilize voters (to contribute, to campaign for their candidates, and to vote), but also ensure that the entire process of voting, post-election follow-up, and broader civic participation is as accessible, intuitive, and fulfilling as possible. Better voter UX will lead to more politically engaged citizens and higher voter turnout.
VX methodology may help combine real-time citizen feedback with centralized decision-making. Moving beyond election cycles, focusing on the citizen UX could accelerate possibilities for citizens to provide real-time feedback, review the performance of elected officials and government, and receive help-desk-style support with the same level of ease as other everyday “products.” By understanding how people engage with civic life over time, we can better design systems for citizens that strengthen participation, trust, and accountability at every level.
Our hope is that this approach, and the new data and metrics uncovered by it, will support shifts that help restore civic participation and strengthen trust in institutions. With citizens oriented as the central users of our democratic systems, we can build new best practices for fulfilling civic infrastructure that foster a more effective and inclusive democracy.
The time for this is now. Despite hard-fought victories and lessons learned from failures, many people working in politics privately acknowledge a hard truth: our current approach isn’t working. Every two years, people build campaigns, mobilize voters, and drive engagement, but they are held back by what they don’t understand about the long-term impact of their efforts. VX thinking can help solve that.
This essay was written with Hillary Lehr, and originally appeared on the Harvard Kennedy School Ash Center’s website.
** *** ***** ******* *********** *************
Signal Blocks Windows Recall
[2025.05.23] This article gives a good rundown of the security risks of Windows Recall, and the repurposed copyright protection tool that Signal used to block the AI feature from scraping Signal data.
** *** ***** ******* *********** *************
Chinese-Owned VPNs
[2025.05.27] One of my biggest worries about VPNs is the amount of trust users need to place in them, and how opaque most of them are about who owns them and what sorts of data they retain.
A new study found that many commercial VPNs are (often surreptitiously) owned by Chinese companies.
It would be hard for U.S. users to avoid the Chinese VPNs. The ownership of many appeared deliberately opaque, with several concealing their structure behind layers of offshore shell companies. TTP was able to determine the Chinese ownership of the 20 VPN apps being offered to Apple’s U.S. users by piecing together corporate documents from around the world. None of those apps clearly disclosed their Chinese ownership.
** *** ***** ******* *********** *************
Location Tracking App for Foreigners in Moscow
[2025.05.28] Russia is proposing a rule that all foreigners in Moscow install a tracking app on their phones.
Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:
- Residence location
- Fingerprint
- Face photograph
- Real-time geo-location monitoring
This isn’t the first time we’ve seen this. Qatar did it in 2022 around the World Cup:
“After accepting the terms of these apps, moderators will have complete control of users’ devices,” he continued. “All personal content, the ability to edit it, share it, extract it as well as data from other apps on your device is in their hands. Moderators will even have the power to unlock users’ devices remotely.”
** *** ***** ******* *********** *************
Surveillance Via Smart Toothbrush
[2025.05.29] The only links are from The Daily Mail and The Mirror, but a marital affair was discovered because the cheater was recorded using his smart toothbrush at home when he was supposed to be at work.
** *** ***** ******* *********** *************
Why Take9 Won’t Improve Cybersecurity
[2025.05.30] There’s a new cybersecurity awareness campaign: Take9. The idea is that people -- you, me, everyone -- should just pause for nine seconds and think more about the link they are planning to click on, the file they are planning to download, or whatever it is they are planning to share.
There’s a website -- of course -- and a video, well-produced and scary. But the campaign won’t do much to improve cybersecurity. The advice isn’t reasonable, it won’t make either individuals or nations appreciably safer, and it deflects blame from the real causes of our cyberspace insecurities.
First, the advice is not realistic. A nine-second pause is an eternity in something as routine as using your computer or phone. Try it; use a timer. Then think about how many links you click on and how many things you forward or reply to. Are we pausing for nine seconds after every text message? Every Slack ping? Does the clock reset if someone replies midpause? What about browsing -- do we pause before clicking each link, or after every page loads? The logistics quickly become impossible. I doubt they tested the idea on actual users.
Second, it largely won’t help. The industry should know because we tried it a decade ago. “Stop. Think. Connect.” was an awareness campaign from 2016, by the Department of Homeland Security -- this was before CISA -- and the National Cybersecurity Alliance. The message was basically the same: Stop and think before doing anything online. It didn’t work then, either.
Take9’s website says, “Science says: In stressful situations, wait 10 seconds before responding.” The problem with that is that clicking on a link is not a stressful situation. It’s normal, one that happens hundreds of times a day. Maybe you can train a person to count to 10 before punching someone in a bar but not before opening an attachment.
And there is no basis in science for it. It’s a folk belief, all over the Internet but with no actual research behind it -- like the five-second rule when you drop food on the floor. In emotionally charged contexts, most people are already overwhelmed, cognitively taxed, and not functioning in a space where rational interruption works as neatly as this advice suggests.
Pausing Adds Little
Pauses help us break habits. If we are clicking, sharing, linking, downloading, and connecting out of habit, a pause to break that habit works. But the problem here isn’t habit alone. The problem is that people aren’t able to differentiate between something legitimate and an attack.
The Take9 website says that nine seconds is “time enough to make a better decision,” but there’s no use telling people to stop and think if they don’t know what to think about after they’ve stopped. Pause for nine seconds and... do what? Take9 offers no guidance. It presumes people have the cognitive tools to understand the myriad potential attacks and figure out which one of the thousands of Internet actions they take is harmful. If people don’t have the right knowledge, pausing for longer -- even a minute -- will do nothing to add knowledge.
The three-part suspicion, cognition, and automaticity model (SCAM) is one way to think about this. The first is lack of knowledge -- not knowing what’s risky and what isn’t. The second is habits: people doing what they always do. And third, using flawed mental shortcuts, like believing PDFs to be safer than Microsoft Word documents, or that mobile devices are safer than computers for opening suspicious emails.
These pathways don’t always occur in isolation; sometimes they happen together or sequentially. They can influence each other or cancel each other out. For example, a lack of knowledge can lead someone to rely on flawed mental shortcuts, while those same shortcuts can reinforce that lack of knowledge. That’s why meaningful behavioral change requires more than just a pause; it needs cognitive scaffolding and system designs that account for these dynamic interactions.
A successful awareness campaign would do more than tell people to pause. It would guide them through a two-step process. First trigger suspicion, motivating them to look more closely. Then, direct their attention by telling them what to look at and how to evaluate it. When both happen, the person is far more likely to make a better decision.
This means that pauses need to be context specific. Think about email readers that embed warnings like “EXTERNAL: This email is from an address outside your organization” or “You have not received an email from this person before.” Those are specifics, and useful. We could imagine an AI plug-in that warns: “This isn’t how Bruce normally writes.” But of course, there’s an arms race in play; the bad guys will use these systems to figure out how to bypass them.
This is all hard. The old cues aren’t there anymore. Current phishing attacks have evolved from those older Nigerian scams filled with grammar mistakes and typos. Text message, voice, or video scams are even harder to detect. There isn’t enough context in a text message for the system to flag. In voice or video, it’s much harder to trigger suspicion without disrupting the ongoing conversation. And all the false positives, when the system flags a legitimate conversation as a potential scam, work against people’s own intuition. People will just start ignoring their own suspicions, just as most people ignore all sorts of warnings that their computer puts in their way.
Even if we do this all well and correctly, we can’t make people immune to social engineering. Recently, both cyberspace activist Cory Doctorow and security researcher Troy Hunt -- two people who you’d expect to be excellent scam detectors -- got phished. In both cases, it was just the right message at just the right time.
It’s even worse if you’re a large organization. Security isn’t based on the average employee’s ability to detect a malicious email; it’s based on the worst person’s inability -- the weakest link. Even if awareness raises the average, it won’t help enough.
Don’t Place Blame Where It Doesn’t Belong
Finally, all of this is bad public policy. The Take9 campaign tells people that they can stop cyberattacks by taking a pause and making a better decision. What’s not said, but certainly implied, is that if they don’t take that pause and don’t make those better decisions, then they’re to blame when the attack occurs.
That’s simply not true, and its blame-the-user message is one of the worst mistakes our industry makes. Stop trying to fix the user. It’s not the user’s fault if they click on a link and it infects their system. It’s not their fault if they plug in a strange USB drive or ignore a warning message that they can’t understand. It’s not even their fault if they get fooled by a look-alike bank website and lose their money. The problem is that we’ve designed these systems to be so insecure that regular, nontechnical people can’t use them with confidence. We’re using security awareness campaigns to cover up bad system design. Or, as security researcher Angela Sasse first said in 1999: “Users are not the enemy.”
We wouldn’t accept that in other parts of our lives. Imagine Take9 in other contexts. Food service: “Before sitting down at a restaurant, take nine seconds: Look in the kitchen, maybe check the temperature of the cooler, or if the cooks’ hands are clean.” Aviation: “Before boarding a plane, take nine seconds: Look at the engine and cockpit, glance at the plane’s maintenance log, ask the pilots if they feel rested.” This is obviously ridiculous advice. The average person doesn’t have the training or expertise to evaluate restaurant or aircraft safety -- and we don’t expect them to. We have laws and regulations in place that allow people to eat at a restaurant or board a plane without worry.
But -- we get it -- the government isn’t going to step in and regulate the Internet. These insecure systems are what we have. Security awareness training, and the blame-the-user mentality that comes with it, are all we have. So if we want meaningful behavioral change, it needs a lot more than just a pause. It needs cognitive scaffolding and system designs that account for all the dynamic interactions that go into a decision to click, download, or share. And that takes real work -- more work than just an ad campaign and a slick video.
This essay was written with Arun Vishwanath, and originally appeared in Dark Reading.
** *** ***** ******* *********** *************
Australia Requires Ransomware Victims to Declare Payments
[2025.06.02] A new Australian law requires larger companies to declare any ransomware payments they have made.
** *** ***** ******* *********** *************
New Linux Vulnerabilities
[2025.06.03] They’re interesting:
Tracked as CVE-2025-5054 and CVE-2025-4598, both vulnerabilities are race condition bugs that could enable a local attacker to gain access to sensitive information. Tools like Apport and systemd-coredump are designed to handle crash reporting and core dumps in Linux systems.
[...]
“This means that if a local attacker manages to induce a crash in a privileged process and quickly replaces it with another one with the same process ID that resides inside a mount and pid namespace, apport will attempt to forward the core dump (which might contain sensitive information belonging to the original, privileged process) into the namespace.”
Moderate severity, but definitely worth fixing.
Slashdot thread.
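The race the quote describes -- a PID being recycled between the crash and the delivery of the core dump -- can be modeled in a few lines. This is a toy simulation with hypothetical names, not the actual Apport or systemd-coredump code:

```python
# Toy model of the PID-reuse race quoted above. The vulnerable pattern:
# the crash handler re-resolves the PID when it writes the dump, so an
# attacker who respawns a process under the same PID (inside their own
# namespace) receives a dump belonging to the original privileged process.

class Kernel:
    def __init__(self):
        self.proc = {}  # pid -> {"owner": ..., "namespace": ...}

    def spawn(self, pid, owner, namespace):
        self.proc[pid] = {"owner": owner, "namespace": namespace}

    def crash(self, pid):
        # Returns the identity of the process at crash time, freeing the PID.
        return self.proc.pop(pid)

def deliver_dump_naive(kernel, pid):
    """Vulnerable: looks the PID up again when the dump is delivered."""
    target = kernel.proc.get(pid)
    return target["namespace"] if target else None

def deliver_dump_fixed(crash_identity):
    """Safer: uses the identity captured at crash time, not the live PID."""
    return crash_identity["namespace"]

kernel = Kernel()
kernel.spawn(1234, owner="root", namespace="host")
crashed = kernel.crash(1234)                     # privileged process crashes
kernel.spawn(1234, owner="attacker", namespace="attacker-ns")  # PID recycled

print(deliver_dump_naive(kernel, 1234))  # attacker-ns: root's dump leaks
print(deliver_dump_fixed(crashed))       # host: goes where it should
```

The general lesson is an old one: a PID is not a stable identity, and any check-then-use sequence keyed on it is a time-of-check/time-of-use race.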
** *** ***** ******* *********** *************
The Ramifications of Ukraine’s Drone Attack
[2025.06.04] You can read the details of Operation Spiderweb elsewhere. What interests me are the implications for future warfare:
If the Ukrainians could sneak drones so close to major air bases in a police state such as Russia, what is to prevent the Chinese from doing the same with U.S. air bases? Or the Pakistanis with Indian air bases? Or the North Koreans with South Korean air bases? Militaries that thought they had secured their air bases with electrified fences and guard posts will now have to reckon with the threat from the skies posed by cheap, ubiquitous drones that can be easily modified for military use. This will necessitate a massive investment in counter-drone systems. Money spent on conventional manned weapons systems increasingly looks to be as wasted as spending on the cavalry in the 1930s.
The Atlantic makes similar points.
There’s a balance between the cost of the thing, and the cost to destroy the thing, and that balance is changing dramatically. This isn’t new, of course. Here’s an article from last year about the cost of drones versus the cost of top-of-the-line fighter jets. If $35K in drones (117 drones times an estimated $300 per drone) can destroy $7B in Russian bombers and other long-range aircraft, why would anyone build more of those planes? And we can have this discussion about ships, or tanks, or pretty much every other military vehicle. And then we can add in drone-coordinating technologies like swarming.
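The arithmetic above, made explicit (using the article's own estimates):

```python
# Back-of-the-envelope cost asymmetry from the figures cited above.
drone_count = 117
cost_per_drone = 300                          # USD, estimated in the article
attack_cost = drone_count * cost_per_drone    # ~$35K total
damage = 7_000_000_000                        # ~$7B in aircraft destroyed

print(f"attack cost: ${attack_cost:,}")                    # $35,100
print(f"exchange ratio: {damage / attack_cost:,.0f} : 1")  # 199,430 : 1
```

An exchange ratio of roughly 200,000 to 1 is the point: at that asymmetry, the economics of expensive manned platforms stop making sense.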
Clearly we need more research on remotely and automatically disabling drones.
** *** ***** ******* *********** *************
Report on the Malicious Uses of AI
[2025.06.06] OpenAI just published its annual report on malicious uses of AI.
By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams.
These operations originated in many parts of the world, acted in many different ways, and focused on many different targets. A significant number appeared to originate in China: Four of the 10 cases in this report, spanning social engineering, covert influence operations and cyber threats, likely had a Chinese origin. But we’ve disrupted abuses from many other countries too: this report includes case studies of a likely task scam from Cambodia, comment spamming apparently from the Philippines, covert influence attempts potentially linked with Russia and Iran, and deceptive employment schemes.
Reports like these give a brief window into the ways AI is being used by malicious actors around the world. I say “brief” because last year the models weren’t good enough for these sorts of things, and next year the threat actors will run their AI models locally -- and we won’t have this kind of visibility.
Wall Street Journal article (also here). Slashdot thread.
** *** ***** ******* *********** *************
Hearing on the Federal Government and AI
[2025.06.06] On Thursday I testified before the House Committee on Oversight and Government Reform at a hearing titled “The Federal Government in the Age of Artificial Intelligence.”
The other speakers mostly talked about how cool AI was -- and sometimes about how cool their own company was -- but I was asked by the Democrats to specifically talk about DOGE and the risks of exfiltrating our data from government agencies and feeding it into AIs.
My written testimony is here. Video of the hearing is here.
** *** ***** ******* *********** *************
New Way to Covertly Track Android Users
[2025.06.09] Researchers have discovered a new way to covertly track Android users. Both Meta and Yandex were using it, but have suddenly stopped now that they have been caught.
The details are interesting, and worth reading in detail:
Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.
The covert tracking -- implemented in the Meta Pixel and Yandex Metrica trackers -- allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.
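The core trick is that a native app listening on 127.0.0.1 sits outside the browser's cookie and storage partitioning: any page's script can send data to it. A toy illustration of that channel -- not the actual Meta Pixel or Yandex Metrica code, and with the web page's JavaScript simulated by a local HTTP request:

```python
# Toy illustration of the localhost side channel described above. The
# "native app" holds a persistent device-level ID and listens on
# 127.0.0.1; the "web page" POSTs its ephemeral cookie to it, letting
# the app join the two identities. Names and port are hypothetical.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DEVICE_ID = "persistent-device-id-123"   # held by the "native app"
received = []                            # (device_id, web_cookie) joins

class TrackerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        cookie = self.rfile.read(int(self.headers["Content-Length"])).decode()
        received.append((DEVICE_ID, cookie))   # identities now linked
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), TrackerHandler)  # OS-assigned port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for the tracker JavaScript running in any website's page:
# it sends the site's ephemeral first-party cookie to the local listener.
url = f"http://127.0.0.1:{server.server_port}/"
urllib.request.urlopen(url, data=b"ephemeral-web-cookie-abc").read()
server.shutdown()

print(received)  # [('persistent-device-id-123', 'ephemeral-web-cookie-abc')]
```

Note that nothing in this flow touches another site's cookies -- partitioning is working as designed. The bypass is that the loopback interface was never partitioned at all.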
Washington Post article.
** *** ***** ******* *********** *************
Airlines Secretly Selling Passenger Data to the Government
[2025.06.12] This is news:
A data broker owned by the country’s major airlines, including Delta, American Airlines, and United, collected U.S. travellers’ domestic flight records, sold access to them to Customs and Border Protection (CBP), and then as part of the contract told CBP to not reveal where the data came from, according to internal CBP documents obtained by 404 Media. The data includes passenger names, their full flight itineraries, and financial details.
Another article.
EDITED TO ADD (6/14): Edward Hasbrouck reported this a month and a half ago.
** *** ***** ******* *********** *************
Paragon Spyware Used to Spy on European Journalists
[2025.06.13] Paragon is an Israeli spyware company, increasingly in the news (now that NSO Group seems to be waning). “Graphite” is the name of its product. Citizen Lab caught it spying on multiple European journalists with a zero-click iOS exploit:
On April 29, 2025, a select group of iOS users were notified by Apple that they were targeted with advanced spyware. Among the group were two journalists that consented for the technical analysis of their cases. The key findings from our forensic analysis of their devices are summarized below:
- Our analysis finds forensic evidence confirming with high confidence that both a prominent European journalist (who requests anonymity), and Italian journalist Ciro Pellegrino, were targeted with Paragon’s Graphite mercenary spyware.
- We identify an indicator linking both cases to the same Paragon operator.
- Apple confirms to us that the zero-click attack deployed in these cases was mitigated as of iOS 18.3.1 and has assigned the vulnerability CVE-2025-43200.
Our analysis is ongoing.
The list of confirmed Italian cases is in the report’s appendix. Italy has recently admitted to using the spyware.
TechCrunch article. Slashdot thread.
** *** ***** ******* *********** *************
Upcoming Speaking Engagements
[2025.06.14] This is a current list of where and when I am scheduled to speak:
- I’m speaking at the International Conference on Digital Trust, AI and the Future in Edinburgh, Scotland on Tuesday, June 24 at 4:00 PM.
The list is maintained on this page.
** *** ***** ******* *********** *************
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books -- including his latest, A Hacker’s Mind -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2025 by Bruce Schneier.
** *** ***** ******* *********** *************
Mailing list hosting graciously provided by MailChimp. Sent without web bugs or link tracking.
Bruce Schneier · Harvard Kennedy School · 1 Brattle Square · Cambridge, MA 02138 · USA