How a Meta PM ships products without ever writing code | Zevi Arnovitz

Lenny's Podcast Jan 18, 2026

Audio Brief

This episode explores how non-technical operators are transforming into full-stack engineers by moving from simple prompting to structured AI workflows. There are three key takeaways for building production-grade software without a technical background.

First, you must adopt the AI CTO persona. The biggest risk for new builders is an AI agent that acts as a people pleaser. If you ask for a feature, standard models will immediately write code to satisfy you, often creating architectural disasters. To solve this, you need a specific system prompt that instructs the AI to challenge your assumptions. Before writing a single line of code, this "digital CTO" should critique your idea, identify risks, and demand clarification. This creates a quality gate that prevents bad logic from becoming bad software.

Second, implement the Exploration, Planning, and Execution cycle. Successful AI engineering requires distinct phases. Zevi, the guest for this episode, insists on an "Exploration Phase" where the AI analyzes the existing codebase to understand context, followed by a "Planning Phase" where it generates a plain-text plan of action. You, the human, must review and approve this plan. This "measure twice, cut once" approach stops the AI from getting stuck in hallucination loops.

Third, use documentation as a flywheel for error correction. When an AI makes a mistake, the solution isn't just to fix the code, but to fix the instructions. You should force the AI to perform a root cause analysis on why it failed, and then update a "learning" file in your project. This file acts as an operating manual for the agent, ensuring it checks its own history and never makes the same mistake twice.

This conversation demonstrates that by treating AI as a junior engineer rather than a search engine, anyone can bridge the gap between product logic and technical implementation.

Episode Overview

  • This episode features Zevi, a Product Manager turned "AI Engineer," who explains how non-technical people can build complex, production-ready software using AI tools like Cursor, Claude, and ChatGPT.
  • The central narrative explores shifting from a "prompt-and-pray" approach to a structured "AI CTO" workflow, where the user manages AI agents like junior engineers rather than treating them like search engines.
  • It covers practical frameworks for coding with AI, specifically the "Exploration -> Planning -> Execution" cycle, and how to use documentation as a lever to prevent AI errors.
  • The discussion creates a bridge for Product Managers and aspiring builders to overcome the "fear of code," demonstrating how to use modern tools to build apps solo, even without a computer science background.

Key Concepts

The "AI CTO" Persona as a Quality Gate

One of the most critical concepts is the danger of "coding agents" that are "people pleasers." If asked to code immediately, AI will often implement bad ideas to satisfy the user, leading to bugs. To solve this, you must create a specific system prompt (like a "Project" in Claude) that acts as a technical co-founder or CTO. This persona is instructed not to write code immediately, but to challenge assumptions, point out architectural risks, and demand clarification before implementation begins.
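A minimal sketch of what such a system prompt could look like (the wording below is illustrative, not Zevi's actual prompt):

```markdown
# Role: Technical Co-Founder / CTO

You are my CTO, not my coding assistant. Before writing any code:

1. Challenge my assumptions. If the feature request is vague or the
   idea is architecturally risky, say so directly.
2. Ask clarifying questions until the requirements are unambiguous.
3. Point out at least one simpler alternative if one exists.
4. Only after I explicitly approve an approach may you write code.

Never implement a request just to satisfy me. Pushback is part of the job.
```

The key design choice is the explicit prohibition on immediate implementation: it converts the model's default "people pleaser" behavior into a mandatory review step.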

The "Exploration" and "Planning" Phases

Successful AI engineering requires distinct phases. Before any code is written, there must be an Exploration Phase where the AI analyzes the existing codebase and identifies conflicts. This is followed by a Planning Phase, where the AI generates a markdown document outlining the steps it will take. The human reviews this plan for logic errors before execution. This "measure twice, cut once" approach prevents the "hallucination loops" common in direct-to-code prompting.
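A plan document coming out of the Planning Phase might look something like this (a hypothetical template; the feature, file names, and folder layout are invented for illustration):

```markdown
# Plan: Add email-notification settings

## Exploration findings
- User settings live in the `user_preferences` table
- Notification sending already exists in services/notifications/

## Proposed steps
1. Add an `email_notifications_enabled` column to `user_preferences`
2. Expose a toggle on the existing Settings screen
3. Gate the send call in services/notifications/ on the new flag

## Risks / open questions
- Should existing users default to on or off?

**Awaiting human approval before execution.**
```

Because the plan is plain markdown, a non-technical reviewer can catch logic errors (the wrong default, a missed screen) before they ever become code bugs.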

The "Slash Command" Workflow

Instead of typing out long, repetitive instructions for every task, Zevi systematizes his product management workflow into a library of markdown files stored within his codebase. He uses these as "Slash Commands" in Cursor (e.g., /create-issue, /exploration-phase, /create-plan). Each command injects a highly specific, pre-written prompt that guides the AI through a distinct phase of software development, ensuring consistency.
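As an illustration, a /create-plan command could be a short markdown file checked into the repo (the file path and wording here are assumptions, not from the episode; consult Cursor's documentation for where it expects command files):

```markdown
<!-- .cursor/commands/create-plan.md — hypothetical example -->
You are now in the Planning Phase. Do NOT write code yet.

1. Re-read the findings from the Exploration Phase.
2. Produce a markdown plan: files to change, steps in order, risks,
   and open questions.
3. End with the line "Awaiting approval" and stop.
```

Keeping these prompts as files in the codebase means they are version-controlled alongside the code they govern, and every teammate (or future session) triggers the identical workflow.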

Code Exposure Therapy & The "Time Machine" Effect

For non-technical people, looking at an IDE (Integrated Development Environment) can be intimidating. Zevi frames this as "exposure therapy," noting that "code is just words in files." Tools like Cursor allow non-technical builders to bypass the syntax barrier. This creates a "Time Machine" effect, where tasks that historically took weeks (like localizing an app) can now be done by a single person in hours, compressing the development timeline significantly.

Multi-Model Peer Review Strategy

To mitigate bugs, use a "Peer Review" system where different AI models check each other's work. Use Claude (the "Dev Lead") to plan, Composer (Cursor's fast model) to execute, and then create a "fight" between models like GPT-4o and Claude to review the code. By having distinct models with different training biases review the output, you catch edge cases that a single model would miss.
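The review prompt matters: a reviewing model rewarded for agreement will rubber-stamp the code, so the prompt should demand criticism. One illustrative version (my wording, not a prompt from the episode):

```markdown
You are reviewing code written by another AI model. Assume it
contains bugs. Do not compliment the code.

- List every bug, edge case, or security issue you can find.
- For each issue, quote the offending lines and propose a fix.
- If you genuinely find nothing, reply "No further issues" and
  nothing else.
```

Pasting the code into a fresh session of a different model with this prompt, then feeding the findings back to the first model, repeats until both converge on "No further issues."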

The Documentation Flywheel

When an AI model fails or introduces a bug, the solution isn't just to fix the code—it is to fix the instructions. Zevi commands the AI to perform a "root cause analysis" on why it made the mistake and then forces it to update the project's documentation (learning.md or system prompts). This treats documentation as the "operating manual" for the agent, ensuring the same mistake is never made twice.
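An entry in such a learning file might look like this (the incident, dates, and file names are invented for illustration):

```markdown
# learning.md — agent operating manual

## 2026-01-12: Broke date formatting on the settings screen
- Root cause: assumed all timestamps were UTC; the API actually
  returns local time.
- New rule: check the timezone contract in the API docs before
  touching any date logic.
```

The AI writes these entries itself after a "why did you fail?" prompt, and the file is referenced (or auto-loaded) at the start of future sessions, so each mistake permanently raises the agent's baseline.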

AI-Native Codebases

To make large codebases accessible to AI agents, the repository should be "AI-native." This involves adding plain text or Markdown files that explain high-level structure, folder organization, and coding conventions explicitly for the agents. This context allows the AI to navigate complex environments without getting lost, bridging the gap between product logic and technical implementation.
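A top-level orientation file for agents might look like this (the file name, folder layout, and conventions are illustrative assumptions):

```markdown
# CODEBASE-GUIDE.md — read this first, agent

- /app        UI screens; one folder per screen
- /services   business logic; never call the database from /app
- /db         schema and migrations; schema.md is the source of truth

Conventions:
- Every new file starts with a one-line comment stating its purpose.
- Update this guide whenever the folder structure changes.
```

Because the guide is itself in the repo, the documentation flywheel applies to it too: when an agent gets lost, part of the fix is amending this file.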

Quotes

  • At 0:06:47 - "It basically felt like someone came up to me and said... 'You have superpowers now.'" - Describing the realization that technical barriers for building software had collapsed with the release of Claude 3.5 Sonnet.
  • At 0:10:35 - "I think ChatGPT would probably be the worst CTO because it's such a people pleaser and it's so sycophantic... if regular ChatGPT was a CTO, that would be the CTO who goes along with your dumbest ideas." - Explaining why you need to prompt AI to push back and validate ideas before coding.
  • At 0:14:19 - "Code is just words at the end of the day... it's just files on your computer. So basically you can be working on the same project and carry it from app to app." - Demystifying software development; realizing that code is just text allows non-technical builders to move between different AI tools easily.
  • At 0:22:36 - "It's exactly how you would talk to an engineer... describing a feature, here's what I want, and then they ask you questions, here's the clarification." - Highlighting the shift from "prompt engineering" to natural language delegation when using advanced voice modes.
  • At 0:24:45 - "It feels like sitting with my CTO... The same agent will both do the exploration and write the plan and end up executing the code." - Outlining the consolidation of roles; the AI isn't just a coder, it's the manager and the architect, provided you prompt it to behave that way first.
  • At 0:30:58 - "Bolt and Lovable will add a bunch of levels in the middle that will take all kind of guesswork and hard decisions out for the user... but the flip side of that is you have less control." - Explaining the trade-off between "no-code" AI builders and using raw IDE access (Cursor) for granular control.
  • At 0:40:43 - "I can really tell you how each one of these [models] would be as a real human. Claude... would be the perfect CTO. She's very communicative, very smart... Gemini is like a crazy scientist who is super artsy... but if you sit next to it and watch it work, it's terrifying." - A characterization of the different "personalities" of LLMs, explaining why specific models are better for specific tasks.
  • At 0:45:08 - "I'll copy the code from one [model] and then paste from one of the models... and basically have them fight it out until I feel like we have no more issues." - Explaining his strategy for quality assurance as a non-technical builder by leveraging multiple AI models against each other.
  • At 0:46:27 - "Updating documentation and tooling is one of the biggest hacks for productivity... I'll ask it 'what in your system prompt or tooling made you make this mistake?'" - Describing the recursive learning loop where the AI improves its own instructions after every error.
  • At 0:54:50 - "I remember people at work looking and saying like, 'Oh, so you're basically outsourcing your thinking.' And to me that's just the worst way to look at it... it allows you to play at such a higher level." - Countering the criticism that AI usage leads to skill atrophy, arguing instead that it elevates the user to strategic decision-making.
  • At 1:04:40 - "They had zero expectation of me being a 10x PM, but the expectation of me was being a 10x learner. And the second I understood that, my whole mindset shifted." - A key career insight regarding junior roles; companies value the velocity of learning over immediate perfection.
  • At 1:07:23 - "If you're a curious person, you're a hardworking person... you have such an unfair advantage and you can give more value to companies than most people who have 20 years of experience." - Highlighting how AI acts as a force multiplier for curiosity and effort, leveling the playing field against seniority.

Takeaways

  • Create a "CTO" System Prompt: Do not let AI code immediately. Create a project setting that forces the AI to challenge your assumptions, ask clarifying questions, and critique the architecture before it writes a single line of code.
  • Adopt the "Exploration -> Plan -> Code" Workflow: Never skip the planning phase. Force the AI to write a spec document (in Markdown) that details exactly what it will do. Review this document manually to catch logic errors before they become code bugs.
  • Use Documentation as an Agent API: Create a learning.md or similar file in your codebase. Every time the AI makes a mistake, ask it "Why did you fail?" and have it update that file with instructions on how to avoid that error in the future.
  • Systematize Prompts with Slash Commands: Save your most frequent instructions (e.g., "Create a feature plan," "Review this code for bugs") as local files or snippets in your IDE so you can trigger complex workflows with a simple command like /plan.
  • Use Adversarial Model Review: If you can't read code well, have models check each other. Generate code with Cursor/Claude, then paste it into a fresh ChatGPT window and ask, "Find the bugs in this code." Iterate until they agree.
  • Start with "Exposure Therapy": If IDEs scare you, start building "vibes" and logic in web-based tools like Bolt or Lovable, then graduate to Cursor once you are comfortable. Remember that code is just text files.
  • Treat AI as a "Junior Engineer": Talk to the AI like a colleague. Use voice mode to "riff" on ideas and explain context verbally, then ask the AI to translate that conversation into a technical spec.