Hi! In case you missed it, we have two exclusive events for Sublime Premium members (and paid subs of this newsletter) coming up: a private demo of Wabi (my favorite landing page on the internet right now) and an invitation to our IRL Internet Serendipity Tour, happening in 13 cities from Nov 7 – Dec 16: Taipei, London, NYC, Boston, Toronto, DC, Austin, Seattle, Vancouver, SF, Denver, Miami, and Columbus.

This series is made possible by Mercury: business banking that more than 200k entrepreneurs use, and hands down my favorite tool for running Sublime. Running a company is hard. Mercury is one of the rare tools that makes it feel just a little bit easier.

A conversation with Billy Oppenheimer

I first stumbled upon Billy through an essay he wrote about the cup of coffee theory of AI, then fell down the rabbit hole of his notecard system (the paper cousin to what we're building with Sublime). His Sunday newsletter is one of my favorites. When we finally got on the phone, what I loved most was his pace. Unrushed. He's just doing the work, while the rest of us sprint in place. In this conversation, he covers:
Or listen on Spotify and Apple. (Best if you want to highlight your fave moments with Podcast Magic.)

The edited transcript

Alex Dobrenko: I read your cup of coffee theory of AI essay from 2023, which feels like an eternity ago in AI time. Has your thinking changed since then?

Billy Oppenheimer: I stand by most of what I wrote in that piece. My main point was that AI is just another tool, and creatives have always had many different tools at their disposal. The foundation of making stuff and having it land with an audience is taste. I think AI has maybe gotten better at taste and discernment, but whether something will land is still an impossible question to answer. Every creator has made something they thought was the greatest thing they've ever done, put it out there, and it was crickets. Conversely, something you're on the fence about goes crazy. It's an impossible thing to unlock with any certainty. The tools can help you, but at the end of the day, it's always a gamble. As fast as AI moves, people's taste and what's popular are also moving all the time. I don't think a tool can ever give you certainty around that.

AD: When you say it's gotten better at taste, what makes you feel that way?

BO: In that piece, I used an example of asking it for quotes. Because of my work with Ryan Holiday on The Daily Stoic, I'm constantly engaging with Meditations by Marcus Aurelius. At the time I wrote that piece, the quotes were not only not in the realm of the topic I was looking for, but they were also made up. Now, the quotes are more legitimate. It might say, "The quote you're looking for is book seven, passage four," and then I go to book seven, passage four, and that's not the quote. So I still have to verify everything. It's a "trust but verify" situation. I'd love for it to get to a place where I can give it a fragment of a quote from Meditations and it can just tell me exactly where it is.

It's actually one of my frustrations as a reader. I'm seeing a lot of "loosey-goosey-ness" in a lot of writing, like, "As Mark Twain said..." Then I'll check, and it's usually not something he said. I think that's happening because people are asking AI, "Hey, I'm writing about this topic, can you give me quotes from credible people to legitimize it?"

AD: I remember the first time that happened to me. I was writing something and wanted a movie quote to say something in a funny way. It gave me the perfect quote from The Sandlot. Then I looked it up and realized it was never said. And it's not that it's lying; it just doesn't know that truth is a thing. It was a real rude awakening.

BO: Yeah, and the certainty with which it gives you those quotes...

AD: I'm fascinated by your process as a research assistant to Ryan Holiday and Rick Rubin. What is your research process?

BO: My process is that I'm constantly reading, listening, and watching things. I do all my reading with physical books, so I have a pen in hand and I'm making notes in the margins. Once I finish a book, I put it aside for a bit and then go back through it with the note cards. Anything that holds up as interesting or potentially useful I write on a card. A lot of times, on an initial read, I might mark 100 pages in a 300-page book. When I go back through it two weeks later, the number of things that remain interesting is a fraction of that. I like letting time filter things out. When I go back a second time, it's a test: "Do I want to take the time and effort to transfer this onto a note card?" If I can't muster the willpower, it must not be as interesting as I thought.
I make cards and put them in various boxes. When a section piles up to like eight or 12 cards, that's my signal that there's something here. Let me see if I can piece these together.

Sometimes I'll note in the margin to ask ChatGPT about something I didn't understand. For example, I'm reading a collection of letters by Louisa May Alcott, who is writing in the 1800s. She's using some phrasing that doesn't make sense to me. I'll ask ChatGPT, "What does this mean?" But as far as capturing or finding material, all of that happens in the reading of books.

AD: You don't use it to make crazy connections between things. It sounds like you prefer whatever naturally connects in your brain more than what AI would do.

BO: Yeah, that's my favorite moment as a reader: being like, "This is just like that other thing I read in that other book," when the two seemingly don't belong together. So I wouldn't want to outsource that part of it. The times I've tried to ask AI things like, "Hey, I've just learned about this story. Can you think of any other historical examples with a similar story?" it's always the same handful of characters, like Oprah and Steve Jobs. I want examples that are not in ChatGPT's knowledge base.

AD: That's a great point. What creative fears or insecurities does AI surface for you? Are you worried that it's going to replace you in research?

BO: I think there's always going to be a need for human filtering. Even if AI gets really good at generating a first draft, you're still going to need a human to do some final touches or to integrate it. I can't see a time where AI is doing 100% of the creating. Maybe that would just be a genre.

I've also seen this in the office with Ryan. He has a bookstore in Bastrop, Texas, and I go over there a couple of times a week. Sometimes he's like, "Let's mess around with AI. Let's give it an article we published last week and ask it for feedback," and what Ryan thinks to prompt it with gets better results than what I'm able to get. Because of the 20 years he's been cultivating his ability in the craft of writing, what he thinks to do is just different from what I would think to do. I've seen the differences in what people get out of AI. The people who can get the best out of it need it the least.

I feel like how you cultivate yourself as a person to get better and better results out of these tools is not something we talk about enough. My concern is that if you're somebody who's just coming out of college and wants to pursue a creative profession, and you go straight to these tools, you're not going to do the work to cultivate those things. I think there's an edge to be had in doing the tedious work of expanding your knowledge base around a subject.

AD: What would you say your personal AI thesis is? The core belief that drives your decisions about these tools?

BO: I don't know. I guess I don't have one.

AD: Or maybe the fact that you don't have one is itself a thesis. What advice would you give to people starting out who want to do what you do and are worried about all this?
BO: My advice would be to cultivate some skill set or knowledge and to do things that other people aren't doing. I think I have an edge because I enjoy reading a lot, and people are not doing this amount of reading and finding these stories or ideas that the tools can't generate.

I recently read a paper by a sociologist named Daniel Chambliss. He spent about 10 years with swimmers at every level of ability, from Olympic teams to high school and country club swimmers. He was observing what the Olympians do differently. His core point was that excellence is not a quantitative phenomenon; it's a qualitative one. At the highest level, they don't necessarily just train more; they do things differently. He tells a story about high school and college coaches being invited to spend a week with the U.S. Olympic team, expecting to learn the secret to their success. Over the week, their enthusiasm drained as they realized the Olympic team wasn't doing anything crazy different. It's just the little things, like making your practice turn as if it were a meet. Those small things, over a year or five years, ultimately distinguish you from the people who aren't doing them.

I love that phrase, "a qualitative phenomenon," because it applies to many domains. It's about figuring out what small, mundane, or tedious things I can do differently that will, over time, compound into a significant distinction. The paper was called The Mundanity of Excellence, and it's a reminder that excellence often comes from focusing on those mundane things that many people either ignore or underestimate. For me, that's reading books and putting in the effort to transfer that material onto note cards.

AD: The internet has a very fast pace, and what I notice is that your pace is much slower. Even in how you talk, and certainly in how you work and compile things, there's a real emphasis on putting in the time. AI, on the other hand, seems to collapse time, promising you don't need to spend any time at all.

BO: Yeah. I was just reading Make Something Wonderful, a book of Steve Jobs's words. He was talking in 1983 about computers and said, "Let's say I could move a hundred times faster than anyone in here. In the blink of your eye, I could run out there, grab a bouquet of fresh spring flowers, run back in here, and snap my fingers. You would all think I was a magician. And yet I would basically be doing a series of really simple instructions: running out there, grabbing some flowers, running back, snapping my fingers. But I could just do them so fast that you would think that there was something magical going on."

It's the exact same thing with a computer. We tend to think there's something magical going on, but it's just a series of simple instructions. I think that's what a lot of people are attracted to about AI: it seems magical, but it's really just the speed.

AD: When I use it, what I want is speed. I want a shortcut. You wrote something about "neck down." It's that feeling of, rather than just doing the thing, I'll spend hours trying to create a weird little system that does it for me. AI is an accelerant of that desire.

BO: I literally have that note card right here: "neck down." It comes from the comedian Matt McCusker, who was working a blue-collar job in a warehouse. His boss said, "Look, man, I just need you from the neck down today. I don't want to hear any ideas. Pick the box up there and move it there." I found that when I'm hoping AI can make a tedious task faster, it's just a delay.
I just need to sit down and do the thing. I'll put that note where I can see it to remind myself, "For the next two hours, just go neck down on this." It's okay to do something I don't necessarily want to do. And often, I'm glad I did it, because in the process I might stumble on something I didn't know I was looking for.

AD: I'm curious to hear some of the specific ways you use it.

BO: Okay. This might be underwhelming, but I'll give it a shot. I use it when I'm reading and a phrase or metaphor is used that I don't quite understand. For example, I was reading a journal by the French novelist Andre Gide, and he wrote about Kant's dove. I didn't know the reference, so I asked ChatGPT about it. It told me it comes from Kant's Critique of Pure Reason. I was then able to find the passage where Kant writes about the dove, which helped me write a newsletter about it. In the introduction to Critique of Pure Reason, Immanuel Kant writes about the seductive illusion of complete freedom, a life with no bounds. He compares the longing for unbounded autonomy to a dove cutting through the air, imagining that it could do even better in airless space.

Another use case is when I find a cool framework or model and want to develop it further. I read a quote from Little Women about "head, heart, and hand." I put it in ChatGPT and asked for real-life examples where this framework, or these three different types of work, shows up. I was just going back and forth, exploring different angles, asking what some "head jobs" are: tasks that are strategic and forward-thinking. It helped me develop the idea more quickly.

This is similar to how Ryan and I work. When he's drafting an article and can't immediately think of examples for a point, he'll put in an "insert," and I'll give him options. Now, I'll go to ChatGPT and say, "Hey, what are some common, everyday situations where people meet circumstances that are not what they would have chosen? For example, your flight gets delayed." It will quickly give me a list of 20, and I can pick the ones I like. This is a case where the speed of getting a ton of options is helpful.

I also use it as a kind of knowledge partner. I was reading an article that said, "Everything that LLMs say, true or false, comes from the same process of statistically reconstructing what words are likely in some contexts." I realized I didn't truly understand what statistics were, so I asked AI to help me understand it simply, in the context of LLMs. It then broke down how weather forecasting works, which is a great example of a statistical process. It collects tons of data and makes predictions based on historical patterns. Like, there's a 70% chance of rain tomorrow because the last hundred times the weather and the atmosphere and the pressure, etc., were like this, it rained 70 of those 100 times. Now, when I think of LLMs, I can compare them to weather forecasting.

This back-and-forth process of drilling deeper and following my curiosity is a great way to use the tool. I think I then asked, where does weather forecasting go wrong? There's no embarrassment in asking basic questions, so it helps me go several levels deeper into a topic than I otherwise would have. Before these tools, if I saw that line and didn't understand it, I would think for a second about how I could go about understanding it, probably decide there was no easy way to do it, and just get on with the next thing.
The other thing I use it for is when I'm struggling with a sentence. I'll say, "Can you give me 20 variations of this sentence?" Usually there's not one that's entirely perfect, but I like this word use here, this phrase here. And then I'll put the sentence together myself. I could probably sit here for a long time and figure it out, but that is a case where the speed of getting a ton of options is helpful.

AD: Do you ever feel weird about using it that way?

BO: No, I don't. The sentence you're helping it edit is yours. It's your idea. If you sent that sentence to ChatGPT and asked for help, and you also asked me to do the same thing, we'd both come up with different versions. Your taste and discernment ultimately decide what to use. You were the conductor. It's like how Andre Agassi used a ghostwriter for his book Open, which is one of my favorites. It's okay to have a collaborative partner. A screenwriter gets credit for a script, but I'm sure they had other people contribute or overheard lines they put in. Does that minimize their work? Now, if I just asked it to write a thousand-word piece and then posted it with no corrections, I'd probably feel weird about that.

AD: Everyone's making predictions about AI. Let's make a prediction about humans. Where do you think humans will be in 10 years?

BO: I think we'll be focused more on how to become better humans to get more out of the tools. Instead of courses on how to be a master prompter, we'll see more on how to become highly competent in a domain so you don't have to rely on those courses.

AD: What question about AI and creativity isn't being asked enough?

BO: Is it actually good, or is it just fast?

AD: What do you wish people talked more about instead of AI?

BO: Books, maybe?

AD: Do you have any book recommendations or recent things you've loved?

BO: I recently loved A Swim in a Pond in the Rain by George Saunders. And Consolations and Consolations 2 by David Whyte. The chapters in those are each a word, like "burnout" or "anxiety," that he explores from a poetic perspective. I also love collections of letters and journals. If I get interested in a person, I'll find a book like that. It's a great way to hear their core stories in their own words.

AD: I love those too. All right man, this was amazing...

Subscribe to The Sublime to unlock the rest.

