Technoscreed is a user-supported newsletter that talks about science, tech and society in a humorous (or at least very sarcastic) way. Because you need that when you're dealing with this stuff. Y'know? If you like it, please consider becoming a paid subscriber.

There's a point to the story I'm about to tell. Trust me.

The other day I was trying to remember the details of a scene from a book. I knew a little bit of it. It was in one of the Sharpe books by Bernard Cornwell. This is a series about an officer in the Napoleonic Wars. They're very good.

The scene that interested me involved two officers, one of whom was in command over the other. The senior officer said something insulting, causing a dilemma. Normally, the insulted party would challenge the one doing the insulting to a duel. But he didn't dare challenge his superior. It would be career, and possibly actual, suicide.

I wanted more of the details. If I could remember which book it was in, I could maybe re-read it. For a reason, by the way. This wasn't just a pointless whim. Anyway, I did the thing that makes sense in today's world for solving a problem like that. I asked ChatGPT.

ChatGPT, from OpenAI, is the current leader in artificial intelligence¹. You can ask it questions in ordinary English and get back answers the same way. It's been trained on unbelievably ginormous gobs of data pulled from all over the Internet, giving it a spectacular base of "knowledge"² from which to answer questions.

The first answer ChatGPT gave me was completely wrong. It said that, in the book Sharpe's Regiment, the character Richard Sharpe was sent to England to track down a lost regiment, but that while he was there his superior officer, Sir Henry Simmerson, insulted him, and he was very frustrated that he couldn't respond against a superior.

I knew this was the wrong answer. By the time Sharpe went to England to look for the missing soldiers, Simmerson wasn't even in the army anymore. So not only was it not the correct answer, it didn't make any sense.

This kind of mistake is built into the way Large Language Models like ChatGPT work. They don't look up answers. They predict a string of words that has a strong statistical likelihood of looking like a good answer. When the prediction conjures up something incorrect, like it did for me there, it's called a hallucination.
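To give you the flavor of that, here's a toy version in Python. Real LLMs are unimaginably bigger and fancier (transformers, billions of parameters, the works), but the basic move is the same: pick a statistically likely next word, over and over. Fair warning, this little bigram model is just an illustration I cooked up, not how OpenAI actually does anything. The point it makes is real, though: nothing in the process ever checks whether the output is true.

```python
import random
from collections import Counter, defaultdict

# A cartoon-scale "language model": learn which word tends to follow which
# word, then generate text by repeatedly predicting a likely next word.
# Nothing here ever checks whether the output is TRUE -- only whether it
# is statistically LIKELY. That gap is where hallucinations live.

corpus = (
    "sharpe served in the army . sharpe fought in spain . "
    "sharpe was insulted by a superior officer . "
    "a duel with a superior officer was career suicide ."
).split()

# Count word -> next-word frequencies (a bigram model).
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def generate(start: str, length: int = 10) -> str:
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        # Sample the next word by likelihood -- truth never enters into it.
        words.append(random.choices(list(followers),
                                    weights=list(followers.values()))[0])
    return " ".join(words)

print(generate("sharpe"))
# One possible output: "sharpe was insulted by a duel with a superior officer ."
# Grammatical-ish, confident, and not a fact from the corpus: a miniature
# hallucination.
```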
In the words of the great comic Ron White, "I told you that story so I could tell you this one."

Police have started using AI, like ChatGPT, to write reports. There was a recent story about it from the Associated Press: "Police officers are starting to use AI chatbots to write crime reports. Will they hold up in court?"

Police interest in science and technology goes back at least to the days of Alphonse Bertillon and Dr. Edmond Locard. Not to mention Sir Francis Galton, who figured out that fingerprints could be used to catch criminals. In case you don't feel like looking at the Wikipedia pages I so thoughtfully linked for you, all of those people did major work in forensic science in the late 19th or early 20th centuries. Without these guys (especially Locard), CSI would be very different. If you want to go more modern, think about computer crime investigations, cell phone tracking, and using your Fitbit or other wearable health device as evidence against you in court³.

So it's no surprise at all that police departments are looking for ways to use the new hotness, Large Language Models. Like ChatGPT.

There are several parts to the use described in the AP story. One is summarization. An AI can take in a bunch of text and audio, say from a police radio or a body cam, and develop a summary of it in almost no time. There's a big market right now for summarization services that can give you notes on a PDF you upload, like of a scientific paper, or meeting notes for a Zoom or Teams call. Some people like these tools, some don't. I rarely use them, so I don't really have an opinion.

The other part of the police use case is report writing. The AI can make the summaries, then pick stuff out and write a sufficiently policey-sounding report in a tiny fraction of the time it takes a live human cop to do it. Cops obviously love not having to spend half their lives on paperwork.

And, like most of what comes out of ChatGPT (the LLM I know best, not necessarily the one the police are using), it probably looks perfectly fine. It will all be in complete sentences, with no spelling errors. It will flow smoothly from paragraph to paragraph. The style and tone will probably be exactly what a police report is supposed to be like.

BUT ... I saw a really good take on this on the service formerly known as Twitter: "It's really a question if the officers are better editors than they are writers."

Yeah. You have to give it more than a quick once-over, just to be sure it doesn't have Simmerson pulling rank at a time when he didn't have any. Simmerson. He's that guy from the book I mentioned at the top of the article. Remember?

Anyway, my point is that if the AI inserts something that wasn't really there, or leaves out something that the lawyers would consider important for their client's defense, you could blow the whole case. And, trust me, it's easy to miss some little detail in a couple-thousand-word report, especially if you're not used to doing a slow, fact-conscious, editorial read-through. And given the current state of the technology, these problems are guaranteed to happen.

You can almost smell the lawsuits brewing already, like a storm coming in off the lake. On a cool day in the Fall. When there are wildfires up in Canada⁴.

Lest you think I'm just one of those doom and gloom types, I'm going to offer a little attempt at a solution. It's fairly simple (although the programming could be tricky). Before the AI spews out the finished report, have it do an interactive session with the officer. For example:

AI: Your notes say the suspect was shot at about 8:15 AM, but the body cam time stamp reads 7:57. Which should the report say?
Officer: 7:57.
AI: Noted. Why did you discharge your weapon?
Officer: The suspect brandished a weapon.
AI: And confirming for the record: the suspect was a teddy bear?
Officer: ... Yes.

And so on. After you go through the whole set of facts and circumstances, it produces a report showing that the teddy bear was actually killed at 7:57 AM, after it brandished a weapon. Or whatever. You get the idea.
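And since I said the programming could be tricky, here's a back-of-the-napkin sketch of what I mean, in Python. Every name in it is made up for the example; in particular, ask_llm is a stand-in for whatever chat model a department would actually call, not any real vendor's API. The rest is just plumbing.

```python
import json
from dataclasses import dataclass

# Sketch of the "interview the officer before publishing" idea.
# ask_llm() is a stand-in for whatever chat model you'd actually use.
# The key design point: every question, answer, and piece of evidence
# gets logged, so there's a paper trail for the lawyers to nitpick.

@dataclass
class Discrepancy:
    question: str   # what the AI asks the officer
    evidence: str   # the source it's asking about (body cam, radio log, ...)

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model of choice here")

def find_discrepancies(draft: str, evidence: list[str]) -> list[Discrepancy]:
    """Ask the model to compare its own draft against the raw evidence."""
    prompt = (
        "Compare this draft police report to the evidence. List every fact "
        "in the draft that the evidence contradicts or fails to support, as "
        'JSON: [{"question": "...", "evidence": "..."}].\n\n'
        f"DRAFT:\n{draft}\n\nEVIDENCE:\n" + "\n".join(evidence)
    )
    return [Discrepancy(**item) for item in json.loads(ask_llm(prompt))]

def review_session(draft: str, evidence: list[str]) -> tuple[str, list[dict]]:
    """Interview the officer about each discrepancy, log everything,
    then regenerate the report using the officer's answers."""
    log = []
    for issue in find_discrepancies(draft, evidence):
        answer = input(f"{issue.question}\n  (source: {issue.evidence})\n> ")
        log.append({"question": issue.question,
                    "evidence": issue.evidence,
                    "officer_answer": answer})
    final = ask_llm(
        "Rewrite this report, correcting it with the officer's answers.\n\n"
        f"CORRECTIONS:\n{json.dumps(log, indent=2)}\n\nDRAFT:\n{draft}"
    )
    return final, log   # the log goes in the case file, not the shredder
```

Notice that review_session hands back the Q&A log along with the finished report. That's deliberate, for reasons I'm about to get to.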
This way might take longer. It might even take longer than if the officer wrote it up manually. But the end result will be harder to refute in court. At least, if the machine keeps a log of all the questions and answers and the original evidence, so that lawyers can nitpick over every tiny detail the way they like to do.

Sorry. I've been watching old episodes of Perry Mason again. He would have loved tearing apart the AI reports and probably gotten triple his usual fee for doing it.

Ain't technology wonderful?

Oh, and the illustration for this article was AI-generated too. Here's that prompt: "A humorous scene depicting a robot police officer in a low-tech police office. The robot has a somewhat clunky, old-fashioned design, with visible wires and mechanical joints, and is now dressed in a traditional policeman's uniform, complete with a hat and badge. It is sitting at an old wooden desk cluttered with papers, an ancient computer, and a rotary phone. The robot is typing up a report using just one finger on each hand, pecking at the keys awkwardly. Its tongue, designed to resemble a human tongue, is hanging out of one side of its mouth, adding a quirky, almost human-like expression of concentration. The office itself is dimly lit with flickering fluorescent lights, and the walls are covered with outdated wanted posters and corkboards filled with notes. The atmosphere is both comical and nostalgic, evoking a blend of retro-futuristic and old-school detective vibes."

¹ Though there is stiff competition from a couple of others. Claude.ai, for example.

² Large Language Models like ChatGPT don't actually have knowledge. Not really. But they fake it beautifully.

³ Yes, data from Fitbits and similar devices has been used in court against the people who wore them. In one case, the GPS info from one (used to calculate steps, IIRC) was used to break a killer's alibi. I wrote a conference paper once called "Medical Device Data Goes to Court." It was LOTS of fun!

⁴ The lake I was referring to is Lake Ontario. Canada to the North, us to the South.

David Vandervort is a writer, software engineer, science and tech nerd (People still use the word 'nerd,' don't they?) and all-around sarcastic guy. If you liked this article, please consider upgrading to a paid subscription.