■ In this week's AI Lab: Microsoft’s AI lead says making models seem too human could be dangerous. Also: The limits of AI reasoning, a way to improve “parallel thinking,” and robots with impressive dexterity.
Mustafa Suleyman is not your average big tech executive. He dropped out of Oxford University as an undergraduate to create the Muslim Youth Helpline, then teamed up with friends to cofound DeepMind, a company that blazed a trail in building game-playing AI systems and was acquired by Google in 2014.
Suleyman left Google in 2022 to commercialize large language models (LLMs) and build empathetic chatbot assistants with a startup called Inflection. He then joined Microsoft as its first CEO of AI in March 2024 after the software giant invested in his company and hired most of its employees.
Suleyman tells me that making AI models seem too human will make it more difficult to limit the abilities of AI systems and harder to ensure that AI benefits humans. The conversation has been edited for brevity and clarity.
Subscribe today to continue receiving my full newsletter each week. I've been writing about AI and related themes for over a decade (since neural networks were considered a dead end, in fact). I'm fascinated by how innovation occurs and how it affects the economy. And I want to understand the global picture, rather than just the world as viewed through the lens of Silicon Valley.
So, if you're keen to keep track of advances from the world of computing—AI, robotics, quantum, chipmaking, and more—that have the potential to transform the way we live ... this is the newsletter for you.