Where AI Falls Short

There are things AI can't do for you. Reading a room. Building trust through follow-through. Making a go/no-go call when the answer depends on organizational risk tolerance, not data. Facilitating a workshop when the plan stops working and you have to adapt in real time.

I keep a clear line between what I hand to AI and what I don't. That line is the most important part of my practice.

How I Think About AI

AI compresses the time between question and insight. You get to the hard decisions faster, but the judgment still has to be yours.

I treat Claude Code as a thinking partner, not an assistant. An assistant does what you tell it. A thinking partner pushes back, surfaces blind spots, and makes you articulate why you believe what you believe. My PM practice is built around that distinction.

In practice, AI handles research, drafting, prototyping, and coordination. I decide what to build, when to pivot, who to align, and what to kill.

What I've Built

I designed an operating system for running multiple concurrent projects with Claude Code as the core interface. It connects five tools I use daily (task management, messaging, meeting notes, files, and cloud storage) through a single natural language layer.

The key design decision was shared state. Every project has a lightweight status framework that auto-loads when I start a session. Claude Code already knows where I left off, what's blocked, and what decisions have been made. No ramp-up, no lost context.
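To make the shared-state idea concrete, here is a minimal sketch of what a per-project status file and its session loader could look like. The file name, fields, and JSON format are assumptions for illustration, not the actual schema.

```python
# Hypothetical sketch of a per-project shared-state file.
# File name and fields are assumptions, not the actual framework.
import json
from pathlib import Path

STATUS_FILE = "status.json"  # assumed name; auto-loaded at session start

def load_status(project_dir: str) -> dict:
    """Load the project's shared state so a new session starts with context."""
    path = Path(project_dir) / STATUS_FILE
    if not path.exists():
        # First session on this project: start with an empty frame.
        return {"last_session": None, "blocked_on": [], "decisions": []}
    return json.loads(path.read_text())

def save_status(project_dir: str, status: dict) -> None:
    """Persist state at session end so the next session picks up here."""
    (Path(project_dir) / STATUS_FILE).write_text(json.dumps(status, indent=2))

# Example: record a decision and a blocker, then reload as a "new session".
Path("demo-project").mkdir(exist_ok=True)
status = load_status("demo-project")
status["decisions"].append("Ship prototype v3 to pilot users")
status["blocked_on"] = ["Legal review of data retention policy"]
save_status("demo-project", status)

resumed = load_status("demo-project")  # what the next session sees
```

The design choice this illustrates: state lives in the project, not in the conversation, so context survives across sessions and tools.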

Seven workflows run through this system, each with a clear trigger, process, and measured outcome. A few:

  • Rapid prototyping: six prototype versions in a single day, each testable with real users. One session surfaced a critical interaction model flaw that no design review would have caught.
  • Executive communication: structured as Tension, Recommendation, Rationale, Next Steps. Delivered a strategic recommendation in 2 weeks (vs. a typical 6-8 week cycle) and got leadership buy-in.
  • AI boundary design: defined five explicit things the AI should never do. Stakeholders who were skeptical of AI trusted the product once they saw we'd drawn the lines. The boundaries became a feature, not a limitation.
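The boundary-design idea above can be sketched as a small gate: actions on an explicit never-do list are always escalated to a human rather than run autonomously. The five actual boundaries aren't listed in the text, so the entries below are hypothetical placeholders.

```python
# Illustrative boundary gate. The real five never-do items aren't given,
# so these entries are hypothetical placeholders, not the product's list.
NEVER_DO = frozenset({
    "send_customer_email",   # placeholder boundary
    "modify_shared_files",   # placeholder boundary
    "commit_to_deadlines",   # placeholder boundary
})

def gate(action: str) -> str:
    """Return 'escalate' for boundary actions, 'run' for everything else."""
    if action in NEVER_DO:
        return "escalate"  # the AI never does this without a human
    return "run"

print(gate("draft_summary"))        # routine work runs
print(gate("send_customer_email"))  # boundary work escalates
```

Encoding the boundaries as data rather than prose is what makes them auditable, which is one way skeptical stakeholders can verify the lines are actually drawn.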

What I've Learned

  • Automation serves delivery. If I'm optimizing the system instead of shipping work, something is wrong.
  • Speed needs guardrails. An early automation mistake deleted 22,000+ files. Now every workflow has manual checkpoints for anything touching shared systems.
  • Document the failures. What didn't work and why is as valuable as what did.