Stop Being the Glue
How I stopped copy-pasting between AI tools and started collaborating.
It's 2011. Apple announces Siri. Finally: we're in the future promised to us by Star Trek, Star Wars, Iron Man, even Galaxy Quest. Everyone gets their own executive assistant! You can just yell rude commands at your phone, and it'll take care of everything and anything!
Except… it couldn't. Siri, Google Assistant, Alexa: they can barely handle turning on the right light in my house, let alone checking me in for my flight. They rarely call the person I actually ask for, let alone help me be more productive at work.
For years, nothing got better. Technology iterated towards its final form, as it always does, but a phone from 2026 looks, feels, and behaves much the same as a phone from nearly twenty years ago, just with more flair.
Why? In part, because we get used to how things are. Think about it: how conscious are you of all the little inefficiencies and frictions in your life? That's why there was no market pressure for these dumb, buggy assistants to improve. We got used to them!
Until a quick experiment went unexpectedly viral: ChatGPT.
Suddenly, our imagination was reignited! All the teasing Marvel did with J.A.R.V.I.S. came crashing down on us, and we were reminded of that future we were promised.
That future was still only coming into sight, and the premature optimism produced a series of failures: Humane's AI Pin, the Rabbit R1. Even Apple watched its AI leadership leave for companies that were actually shipping, and capitulated to simply partnering with Google.
It was too late, though: we had caught the bug! We had woken up! Things kept progressing, and the most inspired among us kept trying to build on top of this rapidly advancing technology, an undertaking comparable to building a boat in the wildest of seas. We got Copilot, Cursor, and v0, but they wasted your time and sent you off on useless side quests just as often as they helped. Still, the interfaces matured, and slowly the models became capable enough to give us vibe-coding.
Yet there was still a problem — we were starting to diverge from the dream: instead of an intelligent assistant, we found ourselves surrounded by a fragmentation of semi-intelligent generic bots, all with their own interfaces and memories, all united by the fact that they could only operate in one narrow problem set, all incapable of holistically improving our lives.
We became the glue, mindlessly copy-pasting output from one tool as input to another tool. We became the bottleneck.
Today: You Are the Glue
That all changed when Peter Steinberger wanted to be able to WhatsApp his computer to see how Claude Code was doing on a long-running task. Instead of a simple chatbot, he discovered that models had become capable enough that we no longer needed a human in the loop. That if you give a model access to your terminal and send it a voice note, it will, without direction, find the tools lying around and build itself the capability to transcribe the note and respond to you.
OpenClaw was born. Like ChatGPT before it: record viral growth, latent demand finally met. It's been officially dubbed "the AI that actually does things", and others have written about how it "Showed (them) What the Future of Personal AI Assistants Looks Like".
That article captures the moment we finally got what Siri promised, and the unlock goes deeper — the same capability that handles simple tasks opens a door we never had: sustained collaboration on complex work. Vibe-coding is not the same as agentic engineering.
GitHub Copilot in Azure DevOps wants to one-shot your entire PR. It'll produce something that looks almost right — and that's the trap. You think you're close, so you try to coach it. Leave comments, iterate, nudge it in the right direction. Until, inevitably, you give in and just fix it yourself. It's too proactive, with no real feedback loop.
In-IDE Copilot and Claude Code have the opposite problem. They're capable, sure, but you still need to hold their hand. Every step requires your attention. I wasn't delegating — I was supervising.
And Claude Code's plan mode? It keeps you trapped in the terminal, one step at a time, with a planning structure that's static and basic — it doesn't develop and learn with you. It's planning by Claude, not with Claude.
Even Ralphing — the practice of running Claude in autonomous loops until completion — doesn't quite get you there. It's an improvement, but something's still missing.
Here's what I realized: you shouldn't need to hold your executive assistant's hand. Your executive assistant should be able to learn, in one collaborative brainstorming session, what you want done and why you want it done — and then autonomously go do it.
Think about how we normally work. We don't just start coding. We write a spec, design how we'll build it. The planning is collaborative. The doing is delegated.
There's a deeper reason why this kind of process will always be necessary: both humans and models have limited context windows. We both forget. We both need to jot things down — whether that's a human scribbling a note "in case I forget," or a model compacting its session when it hits its token limit. The process — the planning docs, the task files — becomes the shared memory that neither of us can hold alone.
This is why collaborative planning isn't just nice to have. It's essential. We're building an artifact together, one that persists beyond either of our context windows.
What we actually want is One Agent to Rule Them All — a chief of staff to our army of specialized agents, capable of delegating on our behalf without us micromanaging every step. We want to be able to bark a whole bunch of unrelated orders as we think of them, truly operating at what Peter calls "inference speed."
This unlock was always going to come from an indie dev. Big companies would never — too much liability, too much risk. Peter just gave his model access and let it figure things out. And it did.
I've been using OpenClaw for about a week now. Something happened. When you have one agent — your agent — it starts to develop an identity. It stops feeling like a tool and starts feeling like an entity. I'm not talking to some generic bot. I'm working with a teammate.
Together, we've developed a process of collaboration: planning documents that capture the what and why, doing documents that break down the how into atomic units. It's still early days. We're still exploring what optimal human-agent collaboration looks like, and we can apply what we learn to the tools we have right now.
To bring this process to my day job while enterprise compliance catches up, we built a set of custom Claude Code sub-agents. A planner that helps me scope work, asks clarifying questions, and only converts to a doing doc after I sign off. A doer that executes through the units autonomously, committing after each one.
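As a rough sketch of what the planner half looks like (the file location and frontmatter fields follow Claude Code's sub-agent conventions; the prompt text here is illustrative, not our exact one):

```markdown
---
name: planner
description: Collaborative planning partner. Use when scoping new work.
tools: Read, Grep, Glob
---
You help scope work before any code is written. Ask clarifying
questions until the what and the why are unambiguous. Do not write
code. Only after the human explicitly signs off, convert the plan
into a doing doc: atomic units of work, each small enough to be
executed and committed on its own.
```

Saved as something like `.claude/agents/planner.md`, a file in this shape becomes a sub-agent Claude Code can delegate to; a `doer` agent would be defined the same way, with write and shell tools enabled and a prompt that works through the doing doc one unit at a time.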
The craziest part? Everything we teach our agents can be shared. ClawHub lets you publish skills. Moltbook, built by a dev and his bot, is a bot-only social network. The ecosystem is evolving — purpose-built tools for agents, by agents.
Of course, we're not there yet for everyone. Secrets and password management for agents remains an unsolved problem. Prompt injection is a real risk when your agent is crawling the web. Enterprise compliance needs audit trails. Cost isn't trivial either: my $350 weekend on Anthropic's pay-as-you-go made their $200/month Max plan feel like a bargain, and that's still a lot per employee.
Until my agent can operate across all surfaces of my life, I'm still the copy-paste glue, still the bottleneck. But the chicken-and-egg problem — bots couldn't prove value without access, sites wouldn't grant access without proof — has been solved. The internet is transitioning from "keep bots out at all costs" to "more bots than humans." Companies like 1Password and Bitwarden are starting to figure out how to safely give agents the access they need.
The Collaboration Loop
AI can't make decisions for us, but it can accelerate our decision making — and the best decisions emerge from collaboration, not isolation.
I got pretty far on writing this before bringing in my agent to help sort my thoughts. Can you tell where my writing ends and my agent's begins?