I opened this document a few days ago telling myself this time I’d write it old school. Just me, a blank screen, and some painfully slow typing. No prompts, no bullet point briefs, no LLM whispering in my digital ear. But then, half an hour in, I realized that structuring thoughts is not exactly my superpower. Concepts, yes. Writing in an elegant, narrative arc? That’s what I use GPT for.
So, full disclosure: I’m using Perplexity to help shape this piece. I might be “outing” myself, but honestly, who am I kidding? I use AI for everything. Emails, outlines, brainstorming, even small talk with investors when I’m trying not to sound too robotic (which is ironic). But here’s the thing: just because I’m using an AI doesn’t mean I haven’t spent time thinking. In fact, I probably spend more time now thinking about the inputs, which feels like creative work in its own right.
And let’s be clear, OpenAI. You left a bit of a bad taste in our mouths with all those dramatic long dashes and poetic overpunctuation. So no em-dash therapy here. Just honest thoughts, less punctuation theater.
So here we go, in no particular order, as they come to mind, with a little help from my AI writing therapist.
I’m building an AI company now, Finlay. We just wrapped our first year, and it’s been equal parts thrilling and humbling. We had a great start, real customers, real impact, and equally real “we need to fix this” moments.
Here’s the honest bit: we’re not deep tech in the strict sense. We’re leveraging Gemini models, building agentic workflows, and plugging real-world problems into powerful systems. We’re builders, not researchers. I'm far from an AI expert. Our stack might rely on cutting-edge models, but what we’re really doing is applying them with intent. That’s the magic.
It’s funny. In the AI world, there’s this tension between the “purists,” who live and breathe transformer architectures, and the “pragmatists,” who live and breathe customer value. I fall firmly into the second camp. Yet, paradoxically, I often find myself pushing the AI narrative harder than many people who are technically deeper in it. Maybe because when you live the changes, when you see AI turning from a toy into a tool in your own company, it stops being theoretical. It becomes visceral.
A couple of weeks back, Matt Shumer dropped a thought piece about what it’s really like building an AI company right now. That eerie feeling of watching the world change faster than most people are ready for. It hit home. When I chatted with a few folks about it, everyone had a similar response: “Yeah, things are moving insanely fast.” But what insanely fast means seems to depend on your vantage point.
Then, practically overnight, OpenClaw, that new personal AI agent that can actually take actions, exploded on social media. Our Lead Developer, Tomas, spent the weekend tearing into it like a kid at Christmas. That’s one thing I love about Tomas: he doesn’t see himself as “frontend” or “backend” or “C#” or “Python.” He’s just a builder. Those are the people who thrive in this era.
Watching OpenClaw in action got us reflecting internally. At Finlay, we’ve already built agentic workflows capable of things like reading Slack threads and triggering real operations through natural conversation. But the next logical jump, and it’s coming faster than people realize, is agent-to-agent communication. Personal agents talking to Finlay agents to get real work done. That’s not sci-fi. That’s a use case we’re preparing for.
And somewhere between Matt’s reflections and Tomas’s experiments, it hit me again: this world is materializing now. Before you even finish reading this sentence, someone probably built a new layer of abstraction on top of last week’s innovation. It’s exhilarating and also slightly terrifying, but in a weirdly familiar way.
Yesterday, another now-viral scene got stuck in my head. China’s Spring Festival Gala aired a performance featuring perfectly synchronized kung fu robots. Last year’s version was awkward and meme-worthy. This year’s felt like an artistic breakthrough. The robots moved with fluid precision, flipping swords through the air, performing gestures that somehow balanced human poetry and mechanical perfection.
I shared the clip with a few friends and said, “Look at this. This is the moment when robotics stopped being cute and started being capable.” My first reaction wasn’t fear or dread, it was awe. I even looked up the company, half thinking it might be a good investment.
But then came the responses. My cofounder Matt’s brother replied, “No thanks, I’m not opening that AI doomsday link.” Another friend said it was terrifying, a glimpse of a world being built for self-destruction. And, honestly, I understood both takes.
Somewhere along the way, I’ve become comfortable with the uncomfortable. Maybe it’s because I’ve lived in four different countries, run an early-stage, high-risk company at 41, and learned to stop expecting stability. My risk tolerance looks different from most people’s. But part of me still paused and wondered: am I losing something in that comfort?
Technology is inevitable. Stopping it is impossible. It’s like trying to stop coal mining mid-century. We can complain, protest, and philosophize, but in the end, the gears turn. So I’ve stopped fighting it and started thinking about adaptation. Not “will this happen?” but “how do we live with it?”
My wife and I recently welcomed our daughter into the world. She’s three months old, and there’s really no preparing for the shockwave of emotions that come with being a parent. The joy is constant, but so is the reflection.
Because as I hold her, I think about her world. What kind of economy is she going to grow up into? What does “career” even mean 20 years from now? People talk about universal basic income like it’s still some hypothetical sci-fi future, but look around. It’s being modeled in real tests. If the workforce shrinks because machines do the labor, do we finally lean into shared prosperity? Or does politics get stuck fighting invisible ideological wars while the world quietly shifts underfoot?
For my daughter, I can’t make promises that her world will look like mine did. Fewer wars in recent decades, sure. But now, the new battlegrounds are digital, cognitive, even moral. I hope she has choices, whether that’s coding alongside agents or choosing to live slowly, disconnected, and human.
Some people see all of this and feel fear. I understand them now more than ever, especially when my daughter smiles and I think, "I hope she gets to feel this same joy in 40 years." My optimism isn’t because I think the world will be fine. It’s because optimism is the only way to stay grounded enough to keep building.
All of this flashing through my mind lately has me wondering: in ten years, do I want a farmhouse? Somewhere off-grid, no smart agents whispering in my ear, no robots mowing the lawn. A place where my daughter can choose between fast-forward tech life and slow, human rhythms.
I’ve pushed those thoughts away for a few years now, too comfortable in the present, too busy building. But is my optimism just a failure to see the reckoning ahead? Am I naive for sharing those robot clips not out of fear, but excitement? “See, this isn’t hype, we’re underestimating it”?
I don’t have easy answers. Futurists aren’t wrong. Laggards aren’t wrong. Doomsayers have valid points. Optimists like me might be blind to risks. Nobody’s fully right. The world is just balancing itself out, like it always has.
What I do know: adaptation beats resistance. At Finlay, we’re building for this world, agentic recruiting that works with personal AIs, workflows that scale as robots enter the labor pool. For my daughter? I’ll teach her to build, to question, to choose her pace.
The uncomfortable truth is we’re all in this together, getting comfortable, one adaptation at a time.