Projects
I am running one project and two personal tracks right now, all on the same system I describe on the Practice page: status files, scoped instances, the daily ritual. The shape of each is different, but the underlying system is the same.
AI Entrepreneurship Portfolio
A 0-to-1 product exploring whether AI-enhanced delivery can scale a proven entrepreneurship methodology to people underserved by traditional accelerators and venture capital. I lead this as the sole product manager, with fractional support from a UX researcher. I run every layer of the work: foundational research and synthesis, prototype iteration, market validation, a 12-week learning pilot, and external partner conversations. The project itself has shifted from a single bet to a portfolio of related bets, evaluated against each other in a Desirability, Feasibility, and Viability framework that the org has introduced.
An innovation framework I designed and apply honestly enough that it tells me when I'm wrong. I built a four-phase loop (Sense, Shape, Test, Scale) drawn from IDEO, lean startup, and dual-track agile. The first time I applied it to this project, the assessment was not flattering: thorough research, zero conversations with users, output measured but no learning. Within a few sprints the project went from zero user contact to a market test, a pivot, and a learning pilot.
Foundational synthesis at scale. Using Claude Code, I synthesized 9,876 archived documents from a partner methodology into validated principles. Those principles became the pedagogical backbone of the prototype and the criteria the AI behavior spec is judged against.
An AI behavior specification for the prototype. The spec defines five guiding principles, seven interaction patterns, and five hard boundaries on what the AI is not allowed to do in the user's space. It is grounded in peer-feedback documents and founder interviews, so the AI is shaped by real-world data and user feedback from the start. The prototype is built against this spec.
A market validation experiment stood up in nine days. The strategy was designed in partnership with a marketing colleague who had done the upstream competitive research and persona work; I built and shipped the execution end-to-end. Landing page from scratch in Claude Code, PostHog for behavior, a Google Sheets backend for email capture, deployed through GitHub, with a smoke test at the top of the sign-up flow. The data it produced fed the pivot decision below.
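The capture step of a flow like this can be sketched in TypeScript. This is a minimal illustration, not the actual implementation; the payload shape, the variant names, and the commented-out endpoint and event name are all hypothetical stand-ins:

```typescript
// Sketch: validate a sign-up and build the payload that would be posted to a
// Sheets-backed endpoint. All names here are illustrative placeholders.
interface SignupPayload {
  email: string;
  variant: "ai-coaching" | "community-led";
  capturedAt: string; // ISO timestamp
}

const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function buildSignupPayload(
  email: string,
  variant: SignupPayload["variant"],
): SignupPayload | null {
  const cleaned = email.trim().toLowerCase();
  if (!EMAIL_RE.test(cleaned)) return null; // reject obviously bad input
  return { email: cleaned, variant, capturedAt: new Date().toISOString() };
}

// In the page itself, a handler would then (network calls omitted here):
//   posthog.capture("signup_submitted", { variant: payload.variant });
//   fetch(SHEETS_WEBAPP_URL, { method: "POST", body: JSON.stringify(payload) });
```

Keeping validation and payload construction in one pure function makes the capture path easy to smoke-test before any ad spend goes live.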
An evidence-based pivot from AI-first to community-first. A community-led message variant on the landing page drew fewer clicks than the AI-coaching variant but converted at nearly twice the rate. User interviews surfaced the same theme in entrepreneurs' own words: AI was experienced as a feedback-giver, not a collaborator. The data was ambiguous; the pre-committed criteria I had written down twelve days before the smoke test ads went live kept me honest. The call was PIVOT. The full worked example lives on the Practice page.
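A pre-committed decision rule of this kind can be written down as a pure function. The sketch below is illustrative only; the 1.5x multiplier and the metric names are hypothetical examples, not the project's actual criteria:

```typescript
// Sketch of a pre-committed pivot rule: the threshold is fixed in code
// (or on paper) before the data arrives, so the call is mechanical.
interface VariantResult {
  clicks: number;
  signups: number;
}

type Call = "PERSEVERE" | "PIVOT" | "INCONCLUSIVE";

function conversionRate(v: VariantResult): number {
  return v.clicks === 0 ? 0 : v.signups / v.clicks;
}

// Hypothetical rule: pivot if the challenger converts at >= `multiplier`
// times the incumbent's rate, even when it draws fewer clicks.
function decide(
  incumbent: VariantResult,
  challenger: VariantResult,
  multiplier = 1.5,
): Call {
  const incRate = conversionRate(incumbent);
  const chRate = conversionRate(challenger);
  if (incRate === 0 && chRate === 0) return "INCONCLUSIVE";
  if (chRate >= incRate * multiplier) return "PIVOT";
  return "PERSEVERE";
}
```

The value of the function is that it runs the same way on ambiguous data as on clean data, which is exactly when pre-commitment earns its keep.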
A 12-week learning pilot with seven committed early adopters. As I wrapped the market validation in March 2026, the opportunity to run a learning pilot with early adopters emerged and I took it. The pilot is mid-stream as I write this, running on a Slack and Google Drive stack with a weekly build cycle that ships small automations and learns what works. In one sprint, the automations I shipped lifted weekly peer comments by 206% and pulled attendance from four-of-seven to full participation. The experience is helping me shape the product, understand where AI delivers value in the work, and live the methodology first-hand.
A new prototype that integrates what the pilot is teaching me. Built in Claude Design as a three-view walkthrough across the Solo Founder, the Cohort, and the Partners views. Each view is grounded in what the early adopters have surfaced about where AI fits in the work and where it should stay out. The walkthrough is what I use to align stakeholders on the next iteration.
Materials produced on short notice for partners and funders. Pitches, decks, one-pagers, and posters drafted in hours or days, depending on the ask. The system holds enough context that I can draft to a specific audience without re-explaining the project from scratch, which is what makes short-notice asks workable.
Operational overhead off my plate. Leadership briefs, sprint planning, Asana updates, daily check-ins, status updates. The system drafts first passes from status files and meeting transcripts; I review, adjust, and ship. What used to take half a day takes thirty minutes, and the freed time goes back into product taste decisions and partner conversations.
Working solo with AI as the operating model was the mandate from the start. The engineer who helped set up Claude Code on my laptop was transitional support; when his work was needed on another project, leadership was comfortable making the move because they could see what I was building on my own with the system.
More on the operating model behind this work lives on the Practice page.
This portfolio
I treat my portfolio as a project I run on the same operating model I use for product work. The point is the discipline of maintaining a public surface that reflects how I actually think, on a cadence I can sustain.
An evolving site built with Astro, TailwindCSS, and DaisyUI, deployed through GitHub Actions to a custom domain, with PostHog analytics for visibility into how readers find the work.
Custom components built for specific editorial purposes: a pull-quote treatment for thesis statements, a callout pattern for worked examples, a sticky table of contents with scroll-spy for long-form pages.
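The core of a scroll-spy like this reduces to one decision: of the headings currently visible, which one gets highlighted? A minimal sketch of that decision as a pure function, with illustrative names rather than the component's actual code:

```typescript
// Sketch of the scroll-spy's core logic: pick the heading to highlight in
// the table of contents from the headings currently in the viewport.
interface HeadingState {
  id: string;
  top: number; // distance from viewport top, in px
  visible: boolean; // in the real page, reported by an IntersectionObserver
}

function activeHeading(headings: HeadingState[]): string | null {
  const visible = headings.filter((h) => h.visible);
  if (visible.length === 0) return null;
  // Highlight the topmost visible heading.
  return visible.reduce((a, b) => (a.top <= b.top ? a : b)).id;
}

// In a component, an IntersectionObserver would feed this function on each
// callback and the result would toggle an `active` class on the TOC link.
```

Keeping the selection logic pure means it can be unit-tested without a DOM, with the observer wiring kept thin.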
A bi-weekly Notes cadence that captures what I am learning at work in sanitized form. The Notes are tagged across four categories: methods, experiments, failures, signals. Eight entries are live so far; the full archive lives at Notes.
A regular update process that mirrors my work practice. Every other Friday I draft a Notes entry from my work that period, sanitize it, and ship it. The cadence holds because I run it as part of my work, not as an aspiration.
AI + EdTech Journal
A personal intelligence journal for tracking AI and EdTech developments. The discipline is integrating what I read back into the work I am doing, so reading is not a separate activity from building.
A capture system that does not break flow. A Chrome extension I built triggers a /capture skill in Claude Code. Whatever I am reading gets captured with metadata, tagged by project, and added to a searchable journal.
A tag taxonomy that ties findings back to specific work. When I am making a decision on the AI Entrepreneurship Portfolio, I can pull captures tagged with it. When I am preparing a workshop, I pull training-related captures. The tagging is the glue between reading and doing.
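The retrieval side of a tagged journal like this can be sketched in a few lines. The capture shape and field names below are assumptions for illustration, not the journal's actual schema:

```typescript
// Sketch of tag-based retrieval from a capture journal. Field names are
// hypothetical; the real journal's schema may differ.
interface Capture {
  url: string;
  title: string;
  capturedAt: string; // ISO timestamp
  tags: string[]; // project or topic tags, e.g. "pilot", "workshop"
}

function byTag(journal: Capture[], tag: string): Capture[] {
  return journal
    .filter((c) => c.tags.includes(tag))
    .sort((a, b) => b.capturedAt.localeCompare(a.capturedAt)); // newest first
}
```

A query like `byTag(journal, "workshop")` is the whole trick: tagging at capture time is what makes a project-scoped pull possible at decision time.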
A practice of integration over accumulation. I write up what each capture changes about how I think. The writing is what makes the work stick, and the public version of that writing lives in the Notes.