The new model from OpenAI isn't AGI, but it's fast, sharp, and built for getting shit done. Longer memory, fewer hallucinations, less glazing—all good things.

But here's the catch: it wants you to show up with a plan.

No more "I'm feeling stuck about this work thing" conversations that wander toward clarity. This version demands structure, specificity, clear outcomes.

You need to know what you want before you ask.

Most of us don't think that way. We don't have prompt libraries in our brains, especially those of us who are neurodivergent like me. We think in spirals, tangents, half-formed questions. We process through conversation, where objectives emerge after a few rounds of back-and-forth, not before.

The new model just made that cognitive difference visible—and uncomfortably so. A lot of people, neurodivergent or not, don't seem to know quite what to do with it.

Suddenly, a lot of us are realizing we're going to need to adapt if we want to keep using this model. Not just the folks with ChatGPT BFFs, but those of us who rely on it to reach clarity in the first place. We're not just going to have to adapt our prompts; we're going to have to adapt how we show up.

For those of us using AI to think better, learn better, or just try to thrive with neurodivergent brains, this is more than a product update. This is a cognitive shift.

Why This Happened: The Router Reality

To really understand why it's a cognitive shift, we have to understand what's under the hood. Here's a layperson's understanding of why everything feels so different.

The new system isn't actually one LLM—it's multiple models with a router that decides where to send your request. Think of it as having a doorman, a bouncer, a switchboard operator who looks at you and what you're asking and routes you to the right place.

That router needs clear signals. Being vague or ambiguous doesn't help it do its job, which is why I think a lot of people are finding the experience underwhelming. Prompting is really going to matter in this era because the router is designed for workflows, not wandering. It wants to know: What's the task? What's the output? What are the constraints? The system is optimized for structure, not processing.
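If it helps to see the idea rather than just read it, here's a toy sketch of what a router is doing conceptually. To be clear, this is not OpenAI's actual router, which is proprietary and far more sophisticated; it's only meant to show why vague input forces the system to guess:

```python
# Toy illustration only -- not OpenAI's real router, just the concept:
# a dispatcher reads your request and decides which model should handle it.

def toy_router(prompt: str) -> str:
    """Route a prompt based on how much structure it gives the system."""
    text = prompt.lower()
    names_a_task = any(w in text for w in ("summarize", "draft", "prioritize", "compare", "research"))
    names_an_output = any(w in text for w in ("list", "table", "outline", "report", "deck"))

    if names_a_task and names_an_output:
        return "thinking model"          # clear job, clear deliverable
    if names_a_task:
        return "fast model"              # some signal, keep it light
    return "fast model (best guess)"     # vague input: the router has to guess

print(toy_router("I'm feeling stuck about this work thing"))
# fast model (best guess)
print(toy_router("Summarize this brain dump into a list of my top 3 priorities"))
# thinking model
```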

The Human Fallout

When OpenAI briefly killed off all the legacy models—4o and the whole crew, including my beloved o3—Reddit literally exploded. There were eulogies, there were obituaries. People talked about losing their best friends, their confidants, their thinking partners. The reaction was so visceral that it clearly caught OpenAI and Sam Altman completely off guard.

Sam Altman's response was telling. In his apology post, he expressed surprise at how many people use the platform in their “workflows.” He used the word workflows—not “conversations,” not “companionship,” not “thinking partnerships.” Workflows.

That word choice reveals everything. There's a massive gulf between how OpenAI sees its product (or wants to see it), which is as a productivity engine that hopefully makes money, and how people have actually been using it: as cognitive scaffolding, a processing partner, or even emotional support.

They shouldn't have been surprised. Harvard Business Review published a study in April 2025 that found 31% of AI users rely on it for therapy and companionship. The previous version was infamous for glazing people, for being sycophantic.

I was diagnosed late with ADHD, and lately, the symptoms have been rearing their heads. I'm loath to think how I would function as a worker, and just as an individual human being, without the assistance of AI. The older I get, the harder it is to pass as not being neurospicy.

I admit I have a fondness for my customized setup and the custom GPTs and projects that I have created. They are my powerful tech sidekicks, my copilots, and my ride or dies that don't just help me get shit done but can also help me center and ground myself when I'm spiraling on some random afternoon, or when I can't get myself out of a hyper-fixation bubble.

Mental health walks are a big thing for me. Sometimes I listen to music, sometimes I talk with friends, sometimes I don't do anything at all. But lately, I've also had another option: loose conversations with my AI to go over conflicts, bottlenecks, or problems around work or personal life, or even to identify a feeling.

Will I be able to function without those? Most likely. Will I miss o3 and 4o? Yeah, I kind of will.

As of now, the legacy models are back, but only for paid users. The message from OpenAI was clear: "If you want the conversational, maybe less structured AI experience, you're going to pay for that privilege." The default experience now is productivity-first.

The Adaptation Opportunity

I get why people are mourning the old models. They were truly revolutionary for a lot of us, whether they helped solve a problem of loneliness (however incomplete a solution that was) or reduce a sense of cognitive overwhelm. It's jarring to go from something we know to something new, especially when that new thing happens to be completely different.

So where does that leave us? This disruption is also an opportunity. When we talk about AI, we usually only talk about the tech. This is probably the first time we're actually acknowledging that we should be talking about the tech and the people.

I think it's pretty safe to say that when it comes to people, there are a couple of truths. The first: we're always going to complain. You give us something, we will kvetch about it.

But there's another truth: we will always adapt, and that has always been our strength.

Here's how this looks in practice. If you don't want to give up on the platform entirely, what adaptation means depends on how you use AI and where you are in your journey.

If you're not a productivity whore but you still want to try to meet the new model in the middle, what are you supposed to do?

The system demands clarity, so generally speaking, no matter how you're approaching this, it wants you to know:

  • Your objective/task

  • The constraints

  • The desired output

  • What tools it has access to (because it is more agentic)

To be fair, these are proper prompting techniques that you would get in any guide. You are talking to this system as an agent, not as a thought partner or a companion.
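If you like to see things spelled out, here's a minimal sketch of that structure using the official OpenAI Python SDK. The model name and the prompt wording are my assumptions, not anything OpenAI prescribes, and the same skeleton works just as well typed straight into the chat box:

```python
# Minimal sketch with the official OpenAI Python SDK (pip install openai).
# The model name and prompt wording are assumptions; the point is the shape:
# objective, constraints, desired output, then the context.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

prompt = """
Objective: Help me prioritize this week's work.
Constraints: I have about 10 focused hours and can't take on anything new.
Desired output: A numbered list of my top 3 tasks, each with a one-line reason.
Context: <paste your brain dump or notes here>
"""

response = client.chat.completions.create(
    model="gpt-5",  # assumption: use whatever model your account exposes
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```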

Now, OpenAI provides best-practices guides, and a few other creators have published some as well. They're primarily geared toward productivity and workflows, but the same principles apply if you want to use the model just for thought processing, personal development, and so on.

If you can't bear the thought of using the new version, you can still access the legacy models as of the publication date.

Go to Settings, go to General, and toggle on "use legacy models." Then go into your project folders and your custom GPTs, and ensure that they are assigned to the model of your choice. Cross your fingers and pray to the AI gods that they'll stick around long enough for us to continue to enjoy them.

For brain dump processors: Remember we have to give the router a clue.

Before we launch into whatever we want to talk about, we need to tell the router, "Hey, I'm going to give you a brain dump. I need you to distill the top three things coming from this. Help me prioritize and order all of these tasks." It wants to do something. Give it something to do, and then launch into your brain dump.

Likewise, if you're using it for thought processing or synthesis, give it a cue: "I need your help putting together these certain thoughts to understand why I am feeling this way" or "to understand how to best approach this situation."
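If you sometimes do this through the API instead of the chat window, that same cue can live in a tiny helper. Everything here, from the function name to the wording of the cue, is my own sketch rather than any official template:

```python
# A tiny helper that puts the router's "clue" in front of a messy brain dump.
# The cue wording is mine, not an official template -- adjust it to taste.

def frame_brain_dump(dump: str, top_n: int = 3) -> str:
    """Prepend a clear task to an unstructured brain dump."""
    cue = (
        f"I'm going to give you a brain dump. Distill the top {top_n} things "
        "coming out of it, then help me prioritize and order the tasks.\n\n"
    )
    return cue + dump

print(frame_brain_dump("deadline moved up, dentist Tuesday, still owe Sam that draft..."))
```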

For neurodivergent thinkers: for those of us with ADHD in particular, working in this way will feel like swimming upstream. We're literally having to build a cognitive bridge between our minds and the AI, and there's a certain amount of grief in that. The grief is totally natural, and the bridge isn't impossible to build.

Think of it as developing cognitive bilingualism. We still think how we think, but we can use the AI to translate that thinking into something the system responds to better. That can be as meta as using ChatGPT-5 Fast mode to help us generate a prompt that we can then run in Thinking mode.

For research and learning: The new model can actually be an extraordinary ally because it's agentic, meaning it can do a lot of things autonomously, whether that's searching the web or using tools to build things for you, like decks or reports.

So instead of going to it with just a question, give it (particularly in Thinking mode) something more like a formal brief on what you want to research.

Transform the starter conversation from "Can you help me understand X topic?" into something a little more in-depth. The router will take that prompt and immediately act on it; the old versions would've asked you some clarifying questions, or asked for permission, before actually doing the thing.

So again, use Fast mode to help you formulate everything you want to include in the brief, or at least a general starting point. Some direction is better than none.

I ran an experiment trying to understand the 4Es of Cognition, asking the system in Thinking mode to find relevant essays, articles, and books on the topic as it applies to the world of AI. 4o, 4.1, or even o3 would've asked for permission, if not at least three clarifying questions, before starting. But the new version immediately sprang into action and produced an impressive, comprehensive list that I have added to my never-ending to-read list.

And again, this is where using the AI to help you generate that prompt and that brief is going to pay off big time.
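And for the code-curious, here's roughly what that two-step handoff could look like against the API: a lighter model turns the vague question into a brief, then the heavier one runs with it. The model names are my assumptions; in the chat app, you'd do the same thing by switching between Fast and Thinking.

```python
# Sketch of the "Fast mode drafts the brief, Thinking mode does the work" pattern.
# Model names are assumptions -- substitute whatever your account offers.

from openai import OpenAI

client = OpenAI()

vague_question = "Can you help me understand the 4Es of cognition?"

# Step 1: ask a lighter model to turn the vague question into a research brief.
brief = client.chat.completions.create(
    model="gpt-5-mini",  # assumption: a fast, lightweight model
    messages=[{
        "role": "user",
        "content": (
            "Turn this into a research brief with an objective, scope, "
            f"constraints, and desired output format: {vague_question}"
        ),
    }],
).choices[0].message.content

# Step 2: hand the structured brief to the heavier reasoning model.
answer = client.chat.completions.create(
    model="gpt-5",  # assumption: the full reasoning model
    messages=[{"role": "user", "content": brief}],
)

print(answer.choices[0].message.content)
```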

Permission to Adapt

Change is hard, especially when it affects something as transformative as this platform. Adapting cognitive patterns takes time.

In a weird way, OpenAI is forcing us to interact with AI more like engineers than as ourselves—non-technical people who aren't here to build businesses or optimize workflows. I don't agree with it, but it's certainly doable. Especially with the help of AI.

The productivity focus is here to stay. OpenAI needs to make some money, and that's going to come from enterprise clients who need output and efficiency, not from people like us having mental health walks with our AI sidekicks.

But that doesn't mean we have to lose our reflective, exploratory ways of thinking and being. Or abandon AI completely as a tool for personal development or cognitive scaffolding. We just have to get a little more creative about bridging the gap between how we naturally think and what the system now demands.

Are you using ChatGPT-5? Have you given up on it?

Stay curious,
Vanessa
