I’ve been in journalism for a dozen years, and halfway through, I encountered a big surprise.
After spending the first six years as a religion reporter, I got interested in a new topic — and ended up spending the next six years (and counting) reporting on AI.
On the surface, that might seem like a weird pivot. How do you go from religion to AI? Aren’t those two completely separate fields?
Well, it turns out they’re not. And for me, the transition felt totally natural. It began in 2018, when I was covering China’s repression of Uyghur Muslims and realized that AI was what enabled China to surveil at least a million Muslims and put them in internment camps. It was early days for AI then, but it was already clear to me that this technology was going to have a huge impact on humanity’s ability or inability to flourish.
So I started caring about AI. Not because I care about tech in and of itself, but because I care about human beings.
And over the years, I’ve developed what I think of as a humanistic approach to AI coverage. That means I’m less interested in the specifics of whatever shiny new product OpenAI releases and more interested in how those products are empowering or disempowering us humans — how they are, in fact, reshaping what it means to be human.
Companies claim that AI models can augment our thinking. And, in certain ways, they can. But what if they’re also eroding some of our core human capacities — for creativity, for freedom, for love, for moral decision-making? Is that a possibility we should seriously fear — or is AI just revealing that we had a shoddy understanding of what creativity, freedom, love, and morality meant in the first place? Who gets to decide?
For that matter, who gets to decide if AI companies should be allowed to build a world-changing technology at breakneck speed? If they’re trying to build a god, shouldn’t they get our permission first?
Rather than letting the AI companies set the terms of the conversation, it’s my job as an AI reporter to continually recenter it on the big questions that really matter for regular people like you and me.
A year ago, I also began trying to serve readers more directly by launching an advice column called Your Mileage May Vary. I’ve answered questions from a teaching assistant wondering whether to flunk all his students for using AI, from a ChatGPT user who thinks she’s awakened consciousness in the chatbot, and from many others.
If you have a question you’d like advice on, you can submit it anonymously here — I’d love to hear from you!