Hey Kaitlyn, Joseph here with some internal Secret Service documents. Turns out the agency bought access to OpenAI tooling for $50,000. But the Secret Service won't tell me why. Soon federal agencies will be required to be transparent about what they're using AI for, so why not get ahead? The full story with a link to the documents follows below.

This article was produced with support from the Capitol Forum.

The Secret Service spent $50,000 on Microsoft Azure and OpenAI cloud services, according to internal Secret Service documents obtained by 404 Media. The news shows that U.S. federal law enforcement is actively moving into the world of AI, with the Secret Service saying it won't disclose the use case because it does not discuss methods used for its "operations." It also comes after a recent White House policy change that will require federal agencies to, among other things, ensure they have proper safeguards when using AI that could impact Americans' rights or safety.

The Secret Service recently faced a wave of criticism after two assassination attempts against former President Donald Trump, with the director resigning in July.
"The USSS [U.S. Secret Service] has a requirement to procure r [sic] Microsoft Azure-Open AI cloud-based services," a Secret Service memorandum dated September 2023 reads. 404 Media obtained the document and others through a Freedom of Information Act (FOIA) request with the Secret Service. The office responsible for the $50,000 purchase was the Secret Service's Chief Information Office, according to the document. Another document indicates that the work could extend through June of this year.

💡 Do you know about any other government purchases of AI technology? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +44 20 8133 5190. Otherwise, send me an email at joseph@404media.co.

The documents do not elaborate on why the Secret Service needed such a tool. Microsoft's website for the Azure OpenAI service says customers can "build your own copilot and generative AI applications." Users can connect their own data and then use OpenAI models on that information, it adds. Potential use cases include chatbots that generate answers based on the customer's own data, language translation, and predictive analytics, according to Microsoft's website.

"Out of concern for operational security, the U.S. Secret Service does not discuss the means or methods used for our operations," Alexi Worley, from the Office of Communication and Media Relations at the Secret Service, told 404 Media in a statement. "All technology used by the Secret Service must meet the agency's strict security requirements." The agency did not answer 404 Media's question on whether the tool is being used to generate material that may later be used in a criminal prosecution.
In March, the White House announced that the Office of Management and Budget (OMB) was issuing its first government-wide policy to mitigate the risks of AI. The policy requires agencies to have proper safeguards in place, release annual inventories of their AI use cases, and report metrics about any AI use cases that are withheld from that public inventory "because of their sensitivity." In May, Bloomberg reported that Microsoft created a GPT-4 generative AI model geared towards U.S. intelligence agencies. The first federal agency customer for ChatGPT Enterprise was the U.S. Agency for International Development, FedScoop reported in August.