
You're here because you want to think WITH AI, not LIKE AI. Smart. Settle in for some meaningful screen time or grab your earbuds to multitask your mental health walk. This issue features bonus material: critical thinking-friendly prompts for folks new to their AI journey. Read on for more.
Hey friends,
I'm sitting on the floor of the Ferry Building, where TED AI has been dazzling attendees for the past two days.
Everyone here is absolutely brilliant. Researchers, entrepreneurs, technologists, each bringing their own take on the conference's theme of "Be Bold." And they brought it: Data centers in space! Storytelling power for all! Cure diseases! Make money! AI is the unlock. They were preaching to the choir, myself included.
But here's the thing: I'm not dazzled.
I'm disappointed. Maybe confused. Because when a panel discussion turned to AI's impact on human thinking and identity, a prominent speaker dismissed it:
"Oh dear, I fear we're getting too philosophical here."
The audience laughed.
I didn't.
The Moment That Could Have Been A Conference
Don't get me wrong—there were some truly a-ha moments, mostly from the researchers.
The TL;DR: we're nowhere near AGI (artificial general intelligence, the Terminator- or Matrix-style superintelligence that annihilates us). That's sobering, necessary context.
And then in the last moment of the last panel of the last day, Llion Jones (he’s a big deal, think Rick Rubin of AI research) gave us this hot take:
“AI is gonna make people lose their jobs and that's a good thing."
[If you were there in the audience and happened to hear someone cackle and golf clap, I want you to know that someone was me.]
That statement? That's the conversation. That's the philosophical work we're all avoiding. The vision and narrative we desperately need.
It could have been the entire conference.
Instead, it was the outro, the underline, and the exclamation point to a conference that went nowhere other than a segue to the taco bar at happy hour.
The Conference That Asked Different Questions
Two weeks before, I was at another AI conference. The difference keeps nagging at me.
She Leads AI's inaugural CREATE conference in Salt Lake City presumably had a fraction of TED's budget, but it had everything the bigger conference missed.
In the program and among the attendees, they were asking the questions TED AI sidestepped:
What does AI mean for us?
The mix of technical and non-technical people meant everyone could walk away from the conversations with insight into both the tech's capabilities and its human impact.
Business owners, creatives, people protecting their IP, critical thinkers—all wrestling with integration, not just implementation.
That's the conversation TED AI treated as "too philosophical."
The Unanswered Question
We're building AI capabilities at extraordinary speed. Transformative, breakthrough capabilities.
But we're building zero narrative infrastructure for how we as people actually adapt.
Tech conferences show us what's possible. That's important work.
But who's helping us figure out what we're becoming?
Knowledge workers are drowning, weighed down by a FOMO-driven tech race and the burden of not knowing what happens to them next:
What happens to my expertise when AI produces similar outputs?
How do I maintain my judgment when AI can generate recommendations instantly?
How do I trust my own thinking when I'm constantly checking with AI?
What does "good work" even look like anymore?
Folks, these aren't "too philosophical." These are operational realities affecting confidence, performance, and purpose right now.
Why This Matters
Most of us encounter AI in our work lives. Businesses invest grotesque amounts in AI tools.
Yet those same businesses and leaders are investing zero in frameworks for how people integrate, adapt, and maintain agency.
We're optimizing for capabilities without orienting for transformation.
And I think that's why everyone feels like they're drowning—not just from the tech and pace of change but also from the absence of any coherent story about what we're building toward. What kind of humans we're becoming.
The big tech conferences, businesses, and governments aren't having this conversation.
Which means it falls to us. The practitioners. The people living in this transition.
We have to build the frameworks ourselves:
Ask the "too philosophical" questions.
Create the narrative infrastructure institutions refuse to prioritize.
Use tools meaningfully, leveraging AI capabilities while trusting our own intelligence.
This is why my work focuses on thinking WITH AI, not LIKE AI.
It's an effort to create orientation before optimization.
I know the questions matter more than even the experts sometimes let on.
And I know we can't wait for the conferences and leaders to catch up.
👀 As a bonus, here are some prompts to help you start thinking with AI, not like AI. These were inspired by both this article and my talk at the She Leads AI conference called “Redefining (and Reclaiming) Intelligence in The Age of AI.”
Until next time,
V
Vanessa Chang is the founder of RE: Human and Mosaek AI, documenting the journey of thinking with AI, not like AI, and helping businesses, leaders, and knowledge workers do the same. Find her on YouTube, LinkedIn, TikTok, and yes, Instagram.
For speaking or collaboration inquiries: [email protected]
For consulting inquiries: [email protected]
