Ethan Mollick on the new model, GPT-o1, that OpenAI released yesterday:

Using GPT-o1 means confronting a paradigm change in AI. Planning is a form of agency, where the AI arrives at conclusions about how to solve a problem on its own, without our help. You can see from the video above that the AI does so much thinking and heavy lifting, churning out complete results, that my role as a human partner feels diminished. It just does its thing and hands me an answer. Sure, I can sift through its pages of reasoning to spot mistakes, but I no longer feel as connected to the AI output, or that I am playing as large a role in shaping where the solution is going. This isn’t necessarily bad, but it is different. As these systems level up and inch towards true autonomous agents, we’re going to need to figure out how to stay in the loop - both to catch errors and to keep our fingers on the pulse of the problems we’re trying to crack. GPT-o1 is pulling back the curtain on AI capabilities we might not have seen coming, even with its current limitations. This leaves us with a crucial question: How do we evolve our collaboration with AI as it evolves? That is a problem that GPT-o1 can not yet solve.

Every new turn of the LLM and AI story reinforces the idea that this is a huge change to work and intellectual life. What does it mean that a cheap model can do our logical reasoning for us, and how can we use that to enhance our efforts? In the short term, I think we can raise our expectations of everyone to know the latest research and best practices in their field. And if someone doesn't, it means they aren't spending the five minutes to query ChatGPT or the like.

Tim Molloy @timthinks