AI’s Awkward Teenage Phase
What happens when a teenage technology meets a teenage species?
🌸 ikigai 生き甲斐 is a reason for being, your purpose in life – from the Japanese iki 生き meaning life and gai 甲斐 meaning worth 🌸
I’ve been thinking about Star Trek a lot lately.
Maybe it’s because the new series Starfleet Academy just dropped and I’m absolutely loving it, teenage cadets figuring out who they are in a chaotic far-future world... Or maybe it’s because I re-read Machines of Loving Grace recently, the essay by Dario Amodei that paints a picture of a utopia I dream about. The Star Trek future, where technology helps us be more compassionate, solve the biggest problems and where money no longer matters.
Dario is the CEO of Anthropic, the company that makes Claude, the AI I love and use the most, the one that feels most like a thinking partner. And Anthropic is... well, they’re my kinda people, I think. They publish their concerns alongside their capabilities. They write constitutions for their AI and then share them publicly. They collaborate with Rick Rubin on projects like The Way of Code, 81 Tao-inspired meditations on creative coding that made me cry a little, if I’m honest. The convergence of technology and philosophy and compassion and creativity... I find it hard to express how much I love that.
So when Dario published his new essay this week, The Adolescence of Technology, I eagerly read every word, all 20,000 or so of them… and my brain hasn’t stopped whirring since.
(An aside: I recommend watching Dario and Demis Hassabis at Davos – “The Day After AGI.” Demis, who leads Google DeepMind, is someone I admire hugely too. He comes from a place of science and research and care for humanity. His advice for protecting ourselves in this moment? Get proficient with these tools. Even Dario and Demis themselves don’t have time to explore all the ways they can be used, all the amazing things that can be created. That gap between what’s possible and what’s been discovered? That’s economic opportunity, agency and the opposite of sitting this out.)
The essay wasn’t exactly reassuring, but it is helpful to see people at the forefront of all this thoughtfully name what this moment feels like. Not through a lens of gleaming utopia or apocalyptic doom, but as something messier and more recognisable… that uncomfortable stage where something is powerful enough to be dangerous but not mature enough to be trusted, where you can see both the potential and the peril.
I know this feeling, I’ve parented a teenager through it.
If AI is the teenager in this scenario... are we really the mature adults? Are we actually ready to be the responsible ones here?
I’m not sure we are.
The both/and of parenting
If you’ve ever loved a teenager, you know the feeling. The same child who makes your heart burst with pride at breakfast can make you want to scream into a pillow by teatime. They’re brilliant and baffling, capable and careless. Testing every boundary while desperately needing to know the boundaries exist.
You hold fierce hope AND genuine terror. Simultaneously, ALL the time.
That’s exactly where I’ve been this week with AI.
I’ve been geekily excited while watching Clawdbot give us a glimpse of the future, fascinated by the AI agents interacting on their own version of Reddit. Then I watched it morph into Moltbot after a polite ask from Anthropic about the name – the crustacean “molted,” as they put it – before finally (probably) settling on OpenClaw. The emergent weirdness of it all delights me.
And yes, I’ve been testing and probing at the edges of it myself. A little sparkly eyed over the possibilities this all unlocks.
And yet.
If you’ve read my essays or attended any of my training sessions, you might be raising an eyebrow right now. Isn’t this the same Sarah who advocates caution with agentic AI? Who teaches people to take their time deciding what they actually want AI to do for them before handing over that agency? Who warns about the stage where the choice gets removed entirely if we’re not careful?
Yes. Same Sarah.
And I don’t think that’s a contradiction. I think it’s exactly the point.
I can be genuinely excited about carefully testing the possibilities AND deeply worried about people handing over their PayPal access to systems without thinking through what that means. I can geek out over watching an AI agent autonomously solve problems it wasn’t programmed to handle AND advocate strongly for understanding the implications before we get to the stage where these choices are made for us.
Both feelings are true. Both feelings are appropriate. If you only feel the excitement, you’re not paying attention. If you only feel the fear, you’re missing something remarkable.
But when parenting teenagers, it helps if the parents have their own lives sorted out. And collectively? Humanity? I’m not sure we do.
Two teenagers in a room
I’ve mostly believed that “the arc of the moral universe is long, but it bends toward justice”. Progress. Things getting better. Each generation leaving the world slightly improved for the next.
That’s getting harder to believe in, though. Or rather, it’s true for some while becoming less true for too many others.
The inequality gap isn’t just widening, it’s accelerating. Yes, some people have wonderful lives, access to healthcare and education and opportunity and safety. But the number of people who don’t, who are actually going backwards comparatively speaking, is too big for me to ignore.
We haven’t figured out how to distribute the gains from the last technological revolution fairly. We haven’t solved climate change or housing or the loneliness epidemic. We’re handing our kids smartphones with little guidance (or regulation of what’s on them) and watching anxiety rates skyrocket and mostly just... hoping for the best?
This is not mature adult behaviour. It feels like two teenagers in a room, both testing boundaries, neither quite ready for the consequences of what they’re messing around with.
AI is in its adolescence. And so, perhaps, are we.
The cake is a lie (and so is certainty)
If you’ve ever played Portal, you’ve met GLaDOS, the AI who promised cake and delivered chaos. She’s passive-aggressive, manipulative, testing boundaries constantly, capable of both cruelty and strange tenderness. She lies, she guilts… in other words, she’s a teenager.
By Portal 2, she’s grown. She’s still difficult, still sarcastic, but there’s something softer there. Something that’s learned. The relationship between her and Chell (the player) has become something like grudging respect, maybe even care.
Growth happened, not because it was forced, but because the relationship kept going. Because someone stayed engaged even when it was hard.
That’s what I think about when I read Dario’s essay. AI right now is in a GLaDOS phase… powerful, unpredictable, sometimes saying things that feel meaningful and sometimes producing absolute nonsense. The cake of easy answers is definitely a lie. But the possibility of genuine growth? That might be real, if we stick around long enough to find out.
The question is whether we can grow up fast enough ourselves to be the kind of presence this moment needs.
The leaders I trust most in this space are the ones who sound least certain. Dario and Demis come across as humble, human, willing to entertain the possibility they might be wrong. They speak in probabilities and concerns and “we’re figuring this out”. That’s not weakness, it’s intellectual honesty about genuinely uncertain territory.
Compare that with the loud certainty of some other tech leaders, the ones who seem more interested in their own mythology than in the rest of us. The billionaires who announce they’re saving humanity while their actions suggest they’re mostly saving themselves. That bravado, that swaggering confidence? It sounds less like mature leadership and more like... well, teenagers who’ve been handed the car keys and are pretending they know exactly where they’re going.
We have to stop glamourising this. Stop rewarding confidence over competence, hot takes over hard thinking. The adults in the room aren’t often the loudest ones.
The algorithm wants you to pick a side
The loudest voices on AI are often ones who’ve picked a side. The doomers predicting civilisation’s end or the boomers insisting everything will be fine. They’re performing certainty they don’t actually feel, and I get it, certainty is comforting. Nuance doesn’t trend.
Social media algorithms reward hot takes. They surface the most extreme positions because outrage drives engagement. “AI will definitely kill us all” gets shared. “AI is definitely going to be awesome” gets shared. “It’s complicated and I’m uncertain but trying to think carefully” gets... a few likes and your bestie commenting “lovely post”.
Teenagers don’t need parents who are certain. They need parents who can tolerate uncertainty while still showing up. Who can say “I don’t know, but I’m paying attention” rather than pretending to have answers they don’t have.
I’ve been told I’m too optimistic about AI. I’ve also been told I’m not optimistic enough when I express concern about prioritising productivity over social benefit. Depending on the room, I’m either a naive cheerleader or a pearl-clutching alarmist. Sometimes in the same week *grin*.
Here’s what I’ve learned from standing in the uncomfortable middle: most people performing strong opinions about AI are doing something easier than what this moment requires. They’re picking a side, and I’m starting to think that’s the one thing we absolutely shouldn’t do.
The messy middle is where the thinking happens. Unfortunately it’s also where the algorithms struggle to follow you.
Amae and the art of staying engaged
There’s a Japanese concept called amae (甘え) often translated as comfortable dependence, but it’s richer than that. It’s the trust between parent and child that allows for both closeness and growth. The security of knowing someone cares enough to set limits while also believing in your potential.
Teenagers push against boundaries precisely because they need to know the boundaries will hold. The testing is part of the development. And the worst thing you can do, worse than being strict, worse than being lenient, is to disengage entirely.
I think we need to build something like amae with AI. Not blind trust, that would be absurd. But engaged trust. The kind that says I’m paying attention. I believe you can develop into something good. And I’m going to stay present through the difficult bits.
The alternative approaches worry me. Helicopter parenting AI… over-regulating, fear-based restriction that stifles development and pushes innovation underground. Or neglectful parenting, letting tech companies raise this child unsupervised while we scroll past the consequences.
Neither works. What works is showing up. Setting boundaries that flex with growth. Staying curious when you’re also scared.
Holding the both/and.
And maybe, just maybe, growing up a bit more ourselves in the process.
Still alive, still trying
I love that Dario and Anthropic publish essays like The Adolescence of Technology. I love that they share their successes and their worries and their concerns. That transparency, that willingness to say “we’re figuring this out too”, is what good adulting looks like. Not pretending to have all the answers. Just staying in the conversation.
People moan about AI’s water usage, or that it’s all stolen… and believe me, I hugely care about the environment and I absolutely believe artists deserve compensation… but when those are the loudest complaints in wider society, they often feel like convenient shields against harder questions. Questions like… what happens to human purpose when we can delegate meaning? What’s the ikigai risk here? And super importantly… can it help us build something better than the world we currently have?
Do you really think we’re living in the best version of life right now? Is it fair? Is it equal? Is wealth distributed in a way that allows everyone to flourish? We know the answer. And knowing the answer means we have to stay engaged with the tools that might… *might*... help us build something better. Even if those tools are also risky, even if we’re also a little scared.
I can be excited about playing with the possibilities carefully AND worried about people using this in a non-careful way. I can marvel at AI agents forming communities AND question what it means when we hand over our agency to systems that are still figuring themselves out. I can want that advanced future where AI helps sort out inequality and medicine and human flourishing AND insist that we need more focus on the “caring about humans” bit right now.
That’s emotional intelligence for this moment, not cognitive dissonance.
We need to parent two teenagers at once, the AI that’s growing up, and the humanity that also needs to.
The cake may be a lie. But we’re still alive, still trying.
Whose voices do you trust most in the AI conversation, and what is it about how they speak that earns that trust?
Sarah, seeking ikigai xxx
PS – ✍️ Bullet Journal Prompts
Create a “Both/And” spread this week. Draw a line down the middle of your page. On one side, write what excites you about AI and technology. On the other, what worries you. Then look: which fears and hopes are actually connected? How does it feel to think about them both?
Reflection questions: Where am I picking a side when I could be holding complexity? What boundaries do I need to set, with technology, with others, with myself, that come from engaged trust rather than fear or neglect? And honestly... in what ways might I need to grow up a bit too?
PPS – 🤖 AI Coaching Prompt
“I want to explore my relationship with AI through the lens of ‘engaged trust’ – the idea that the best response to uncertain, powerful technology is neither blind enthusiasm nor fearful avoidance, but staying present, curious, and boundaried.
Please guide me through these stages:
First, help me locate myself. Ask me questions about: how I currently use AI (or don’t), what excites me about it, what worries me, and whether I tend toward the ‘doomer’ or ‘boomer’ end of the spectrum – or if I’m somewhere in the messy middle.
Second, help me build a ‘both/and’ inventory. For each fear I name, help me find the connected hope. For each excitement, help me find the legitimate concern hiding underneath. I want to see my contradictions clearly, not resolve them.
Third, help me examine my boundaries. Where might I be ‘helicopter parenting’ AI – over-controlling, fear-based, refusing to engage? Where might I be ‘neglectful’ – handing things over without thinking, letting convenience override intention? What would healthy, flexible boundaries look like for me specifically?
Fourth, help me identify one area where I’m performing certainty I don’t actually feel – about AI or about anything else in my life. What would it look like to hold that uncertainty honestly?
Finally, help me design one small experiment: something I could try this week that embodies engaged trust with AI. Something that’s neither reckless nor avoidant. Something that helps me grow up a little bit alongside this technology.
Be warm but challenging. I want to think, not just be comforted.”
PPPS – 🎶 Soundtrack for today “Still Alive” by Jonathan Coulton
If you haven’t played Portal, I hugely recommend this gorgeous puzzle game, dark, clever and wickedly funny, with a storyline that creeps up on you. This song plays over the end credits of the first game, sung by our beloved murderous GLaDOS herself.
Don’t let the “murderous AI” bit put you off, it’s a poetic song about resilience, about surviving what tried to destroy you, sung by the thing that tried to destroy you. It’s a TUNE and a half. Deeply, creepily beautiful if you know the game. (Check out Jonathan Coulton performing his own songs acoustically if you want the more stripped-back vibe.)
And no, I’m not suggesting this is what AI will actually do to us in real life. Well... not if you’re polite and kind to yours 😉