TLDR: It was Claude :-)

When I set out to compare ChatGPT, Claude, Gemini, Grok, and ChatPRD for writing Product Requirement Documents, I figured they’d all be roughly equivalent. Maybe some subtle variations in tone or structure, but nothing earth-shattering. They’re all built on similar transformer architectures, trained on massive datasets, and marketed as capable of handling complex business writing.
What I discovered over 45 minutes of hands-on testing was not just which tools are better for PRD creation, but why they’re better, and more importantly, how you should actually be using AI to accelerate your product work without sacrificing quality or strategic thinking.
If you’re an early or mid-career PM in Silicon Valley, this matters to you. Because here’s the uncomfortable truth: your peers are already using AI to write PRDs, analyze features, and generate documentation. The question isn’t whether to use these tools. The question is whether you’re using the right ones most effectively.
So let me walk you through exactly what I did, what I learned, and what you should do differently.
The Setup: A Real-World Test Case
Here’s how I structured the experiment. As I said at the beginning of my recording, “We are back in the Fireside PM podcast and I did that review of the ChatGPT browser and people seemed to like it and then I asked, uh, in a poll, I think it was a LinkedIn poll maybe, what should my next PM product review be? And, people asked for ChatPRD.”
So I had my marching orders from the audience. But I wanted to make this more comprehensive than just testing ChatPRD in isolation. I opened up five tabs: ChatGPT, Claude, Gemini, Grok, and ChatPRD.
For the test case, I chose something realistic and relevant: an AI-powered tutor for high school students. Think Khanmigo or similar edtech platforms. This gave me a concrete product scenario that’s complex enough to stress-test these tools but straightforward enough that I could iterate quickly.
But here’s the critical part that too many PMs get wrong when they start using AI for product work: I didn’t just throw a single sentence at these tools and expect magic.
The “Back of the Napkin” Approach: Why You Still Need to Think
“I presume everybody agrees that you should have some formulated thinking before you dump it into the chatbot for your PRD,” I noted early in my experiment. “I suppose in the future maybe you could just do, like, a one-sentence prompt and come out with the perfect PRD because it would just know everything about you and your company in the context, but for now we’re gonna do this more, a little old-school AI approach where we’re gonna do some original human thinking.”
This is crucial. I see so many PMs, especially those newer to the field, treat AI like a magic oracle. They type in “Write me a PRD for a social feature” and then wonder why the output is generic, unfocused, and useless.
Your job as a PM isn’t to become obsolete. It’s to become more effective. And that means doing the strategic thinking work that AI cannot do for you.
So I started in Google Docs with what I call a “back of the napkin” PRD structure. Here’s what I included:
Why: The strategic rationale. In this case: “Want to complement our existing edtech business with a personalized AI tutor, uh, want to maintain position industry, and grow through innovation. on mission for learners.”
Target User: Who are we building for? “High school students interested in improving their grades and fundamentals. Fundamental knowledge topics. Specifically science and math. Students who are not in the top ten percent, nor in the bottom ten percent.”
This is key—I got specific. Not just “students,” but students in the middle 80%. Not just “any subject,” but science and math. This specificity is what separates useful AI output from garbage.
Problem to Solve: What’s broken? “Students want better grades. Students are impatient. Students currently use AI just for finding the answers and less to, uh, understand concepts and practice using them.”
Key Elements: The feature set and approach.
Success Metrics: How we’d measure success.
Now, was this a perfectly polished PRD outline? Hell no. As you can see from my transcript, I was literally thinking out loud, making typos, restructuring on the fly. But that’s exactly the point. I put in maybe 10-15 minutes of human strategic thinking. That’s all it took to create a foundation that would dramatically improve what came out of the AI tools.
Round One: Generating the Full PRD
With my back-of-the-napkin outline ready, I copied it into each tool with a simple prompt asking them to expand it into a more complete PRD.
ChatGPT: The Reliable Generalist
ChatGPT gave me something that was... fine. Competent. Professional. But also deeply uninspiring.
The document it produced checked all the boxes. It had the sections you’d expect. The writing was clear. But when I read it, I couldn’t shake the feeling that I was reading something that could have been written for literally any product in any company. It felt like “an average of everything out there,” as I noted in my evaluation.
Here’s what ChatGPT did well: It understood the basic structure of a PRD. It generated appropriate sections. The grammar and formatting were clean. If you needed to hand something in by EOD and had literally no time for refinement, ChatGPT would save you from complete embarrassment.
But here’s what it lacked: Depth. Nuance. Strategic thinking that felt connected to real product decisions. When it described the target user, it used phrases that could apply to any edtech product. When it outlined success metrics, they were the obvious ones (engagement, retention, test scores) without any interesting thinking about leading indicators or proxy metrics.
The problem with generic output isn’t that it’s wrong, it’s that it’s invisible. When you’re trying to get buy-in from leadership or alignment from engineering, you need your PRD to feel specific, considered, and connected to your company’s actual strategy. ChatGPT’s output felt like it was written by someone who’d read a lot of PRDs but never actually shipped a product.
One specific example: When I asked for success metrics, ChatGPT gave me “Student engagement rate, Time spent on platform, Test score improvement.” These aren’t wrong, but they’re lazy. They don’t show any thinking about what specifically matters for an AI tutor versus any other educational product. Compare that to Claude’s output, which got more specific about things like “concept mastery rate” and “question-to-understanding ratio.”
Actionable Insight: Use ChatGPT when you need fast, serviceable documentation that doesn’t need to be exceptional. Think: internal updates, status reports, routine communications. Don’t rely on it for strategic documents where differentiation matters. If you do use ChatGPT for important documents, treat its output as a starting point that needs significant human refinement to add strategic depth and company-specific context.
Gemini: Better Than Expected
Google’s Gemini actually impressed me more than I anticipated. The structure was solid, and it had a nice balance of detail without being overwhelming.
What Gemini got right: The writing had a nice flow to it. The document felt organized and logical. It did a better job than ChatGPT at providing specific examples and thinking through edge cases. For instance, when describing the target user, it went beyond demographics to consider behavioral characteristics and motivations.
Gemini also showed some interesting strategic thinking. It considered competitive positioning more thoughtfully than ChatGPT and proposed some differentiation angles that weren’t in my original outline. Good AI tools should add insight, not just regurgitate your input with better formatting.
But here’s where it fell short: the visual elements. When I asked for mockups, Gemini produced images that looked more like stock photos than actual product designs. They weren’t terrible, but they weren’t compelling either. They had that AI-generated sheen that makes it obvious they came from an image model rather than a designer’s brain.
For a PRD that you’re going to use internally with a team that already understands the context, Gemini’s output would work well. The text quality is strong enough, and if you’re in the Google ecosystem (Docs, Sheets, Meet, etc.), the integration is seamless. You can paste Gemini’s output directly into Google Docs and continue iterating there.
But if you need to create something compelling enough to win over skeptics or secure budget, Gemini falls just short. It’s good, but not great. It’s the solid B+ student: reliably competent but rarely exceptional.
Actionable Insight: Gemini is a strong choice if you’re working in the Google ecosystem and need good integration with Docs, Sheets, and other Google Workspace tools. The quality is sufficient for most internal documentation needs. It’s particularly good if you’re working with cross-functional partners who are already in Google Workspace. You can share and collaborate on AI-generated drafts without friction. But don’t expect visual mockups that will wow anyone, and plan to add your own strategic polish for high-stakes documents.
Grok: Not Ready for Prime Time
Let’s just say my expectations were low, and Grok still managed to underdeliver. The PRD felt thin, generic, and lacked the depth you need for real product work.
“I don’t have high expectations for grok, unfortunately,” I said before testing it. Spoiler alert: my low expectations were validated.
Actionable Insight: Skip Grok for product documentation work right now. Maybe it’ll improve, but as of my testing, it’s simply not competitive with the other options. It felt like it was 1-2 years behind the others.
ChatPRD: The Specialized Tool
Now this was interesting. ChatPRD is purpose-built for PRDs, using foundational models underneath but with specific tuning and structure for product documentation.
The result? The structure was logical, the depth was appropriate, and it included elements that showed understanding of what actually matters in a PRD. As I reflected: “Cause this one feels like, A human wrote this PRD.”
The interface guides you through the process more deliberately than just dumping text into a general chat interface. It asks clarifying questions. It structures the output more thoughtfully.
Actionable Insight: If you’re a technical lead without a dedicated PM, or you’re a PM who wants a more structured approach to using AI for PRDs, ChatPRD is worth the specialized focus. It’s particularly good when you need something that feels authentic enough to share with stakeholders without heavy editing.
Claude: The Clear Winner
But the standout performer, and yes, I’m ranking these, was Claude.
“I think we know that for now, I’m gonna say Claude did the best job,” I concluded after all the testing. Claude produced the most comprehensive, thoughtful, and strategically sound PRD. But what really set it apart were the concept mocks.
When I asked each tool to generate visual mockups of the product, Claude produced HTML prototypes that, while not fully functional, looked genuinely compelling. They had thoughtful UI design, clear information architecture, and felt like something that could actually guide development.
“They were, like, closer to, like, what a Lovable would produce or something like that,” I noted, referring to the quality of low-fidelity prototypes that good designers create.
The text quality was also superior: more nuanced, better structured, and with more strategic depth. It felt like Claude understood not just what a PRD should contain, but why it should contain those elements.
Actionable Insight: For any PRD that matters, meaning anything you’ll share with leadership, use to get buy-in, or guide actual product development, you might as well start with Claude. The quality difference is significant enough that it’s worth using Claude even if you primarily use another tool for other tasks.
Final Rankings: The Definitive Hierarchy
After testing all five tools on multiple dimensions (initial PRD generation, visual mockups, and even crafting a pitch paragraph for a skeptical VP of Engineering), here’s my final ranking:
1. Claude - Best overall quality, most compelling mockups, strongest strategic thinking
2. ChatPRD - Best for structured PRD creation, feels most “human”
3. Gemini - Solid all-around performance, good Google integration
4. ChatGPT - Reliable but generic, lacks differentiation
5. Grok - Not competitive for this use case
“I’d probably say Claude, then chat PRD, then Gemini, then chat GPT, and then Grock,” I concluded.
The Deeper Lesson: Garbage In, Garbage Out (Still Applies)
But here’s what matters more than which tool wins: the realization that hit me partway through this experiment.
“I think it really does come down to, like, you know, the quality of the prompt,” I observed. “So if our prompt were a little more detailed, all that were more thought-through, then I’m sure the output would have been better. But as you can see we didn’t really put in brain trust prompting here. Just a little bit of, kind of hand-wavy prompting, but a little better than just one or two sentences.”
And we still got pretty good results.
This is the meta-insight that should change how you approach AI tools in your product work: The quality of your input determines the quality of your output, but the baseline quality of the tool determines the ceiling of what’s possible.
No amount of great prompting will make Grok produce Claude-level output. But even mediocre prompting with Claude will beat great prompting with lesser tools.
So the dual strategy is:
* Use the best tool available (currently Claude for PRDs)
* Invest in improving your prompting skills, ideally grounding them in as much original, insightful, company-aware, and context-aware human thinking as possible
Real-World Workflows: How to Actually Use This in Your Day-to-Day PM Work
Theory is great. Here’s how to incorporate these insights into your actual product management workflows.
The Weekly Sprint Planning Workflow
Every PM I know spends hours each week preparing for sprint planning. You need to refine user stories, clarify acceptance criteria, anticipate engineering questions, and align with design and data science. AI can compress this work significantly.
Here’s an example workflow:
Monday morning (30 minutes):
* Review upcoming priorities and open your rough notes/outline in Google Docs
* Open Claude and paste your outline with this prompt:
“I’m preparing for sprint planning. Based on these priorities [paste notes], generate detailed user stories with acceptance criteria. Format each as: User story, Business context, Technical considerations, Acceptance criteria, Dependencies, Open questions.”
Monday afternoon (20 minutes):
* Review Claude’s output critically
* Identify gaps, unclear requirements, or missing context
* Follow up with targeted prompts:
“The user story about authentication is too vague. Break it down into separate stories for: social login, email/password, session management, and password reset. For each, specify security requirements and edge cases.”
Tuesday morning (15 minutes):
* Generate mockups for any UI-heavy stories:
“Create an HTML mockup for the login flow showing: landing page, social login options, email/password form, error states, and success redirect.”
* Even if the HTML doesn’t work perfectly, it gives your designers a starting point
Before sprint planning (10 minutes):
* Ask Claude to anticipate engineering questions:
“Review these user stories as if you’re a senior engineer. What questions would you ask? What concerns would you raise about technical feasibility, dependencies, or edge cases?”
* This preparation makes you look thoughtful and helps the meeting run smoothly
Total time investment: ~75 minutes. Typical time saved: 3-4 hours compared to doing this manually.
The Stakeholder Alignment Workflow
Getting alignment from multiple stakeholders (product leadership, engineering, design, data science, legal, marketing) is one of the hardest parts of PM work. AI can help you think through different stakeholder perspectives and craft compelling communications for each.
Here’s how:
Step 1: Map your stakeholders (10 minutes)
Create a quick table in a doc:
Stakeholder | Primary Concern | Decision Criteria | Likely Objections
VP Product | Strategic fit, ROI | Company OKRs, market opportunity | Resource allocation vs other priorities
VP Eng | Technical risk, capacity | Engineering capacity, tech debt | Complexity, unclear requirements
Design Lead | User experience | User research, design principles | Timeline doesn’t allow proper design process
Legal | Compliance, risk | Regulatory requirements | Data privacy, user consent flows
Step 2: Generate stakeholder-specific communications (20 minutes)
For each key stakeholder, ask Claude:
“I need to pitch this product idea to [Stakeholder]. Based on this PRD, create a 1-page brief addressing their primary concern of [concern from your table]. Open with the specific value for them, address their likely objection of [objection], and close with a clear ask. Tone should be [professional/technical/strategic] based on their role.”
Then you’ll have customized one-pagers for your pre-meetings with each stakeholder, dramatically increasing your alignment rate.
Step 3: Synthesize feedback (15 minutes)
After gathering stakeholder input, ask Claude to help you synthesize:
“I got the following feedback from stakeholders: [paste feedback]. Identify: (1) Common themes, (2) Conflicting requirements, (3) Legitimate concerns vs organizational politics, (4) Recommended compromises that might satisfy multiple parties.”
This pattern-matching across stakeholder feedback is something AI does really well and saves you hours of mental processing.
The Quarterly Planning Workflow
Quarterly or annual planning is where product strategy gets real. You need to synthesize market trends, customer feedback, technical capabilities, and business objectives into a coherent roadmap. AI can accelerate this dramatically.
Six weeks before planning:
* Start collecting input (customer interviews, market research, competitive analysis, engineering feedback)
* Don’t wait until the last minute
Four weeks before planning:
Dump everything into Claude with this structure:
“I’m creating our Q2 roadmap. Context:
* Business objectives: [paste from leadership]
* Customer feedback themes: [paste synthesis]
* Technical capabilities/constraints: [paste from engineering]
* Competitive landscape: [paste analysis]
* Current product gaps: [paste from your analysis]
Generate 5 strategic themes that could anchor our Q2 roadmap. For each theme:
* Strategic rationale (how it connects to business objectives)
* Key initiatives (2-3 major features/projects)
* Success metrics
* Resource requirements (rough estimate)
* Risks and mitigations
* Customer segments addressed”
This gives you a strategic framework to react to rather than starting from a blank page.
Three weeks before planning:
Iterate on the most promising themes:
“Deep dive on Theme 3. Generate:
* Detailed initiative breakdown
* Dependencies on platform/infrastructure
* Phasing options (MVP vs full build)
* Go-to-market considerations
* Data requirements
* Open questions requiring research”
Two weeks before planning:
Pressure-test your thinking:
“Play devil’s advocate on this roadmap. What are the strongest arguments against each initiative? What am I likely missing? What failure modes should I plan for?”
This adversarial prompting forces you to strengthen weak points before your leadership reviews it.
One week before planning:
Generate your presentation:
“Create an executive presentation for this roadmap. Structure: (1) Market context and strategic imperative, (2) Q2 themes and initiatives, (3) Expected outcomes and metrics, (4) Resource requirements, (5) Key risks and mitigations, (6) Success criteria for decision. Make it compelling but data-driven. Tone: confident but not overselling.”
Then add your company-specific context, visual brand, and personal voice.
The Customer Research Workflow
AI can’t replace talking to customers, but it can help you prepare better questions, analyze feedback more systematically, and identify patterns faster.
Before customer interviews:
“I’m interviewing customers about [topic]. Generate:
* 10 open-ended questions that avoid leading the witness
* 5 follow-up questions for each main question
* Common cognitive biases I should watch for
* A framework for categorizing responses”
This prep work helps you conduct better interviews.
After interviews:
“I conducted 15 customer interviews. Here are the key quotes: [paste anonymized quotes]. Identify:
* Recurring themes and patterns
* Surprising insights that contradict our assumptions
* Segments with different needs
* Implied needs customers didn’t articulate directly
* Recommended next steps for validation”
AI is excellent at pattern-matching across qualitative data at scale.
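If you want to go one step further, this kind of theme extraction can also be scripted. Below is a minimal sketch, assuming the open-source sentence-transformers and scikit-learn libraries; the quotes and cluster count are illustrative placeholders, and with a real interview set you would feed in dozens of anonymized quotes and tune the number of themes.

```python
# Sketch: grouping anonymized interview quotes into rough themes by embedding
# similarity. Library choices and cluster count are illustrative, not a
# prescribed workflow.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

quotes = [
    "I just want the answer so I can finish my homework",
    "The explanations go too fast for me",
    "I wish it showed me a similar practice problem after each concept",
    "I stop paying attention when it lectures for too long",
    # ...paste the rest of your anonymized quotes here
]

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(quotes)

n_themes = 2  # start small; adjust after eyeballing the clusters
labels = KMeans(n_clusters=n_themes, n_init="auto", random_state=0).fit_predict(embeddings)

for theme in range(n_themes):
    print(f"\nTheme {theme}:")
    for quote, label in zip(quotes, labels):
        if label == theme:
            print(" -", quote)
```

Treat the clusters as a starting point for your own reading of the quotes, not as the findings themselves.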
The Crisis Management Workflow
Something broke. The site is down. Data was lost. A feature shipped with a critical bug. You need to move fast.
Immediate response (5 minutes):
“Critical incident. Details: [brief description]. Generate:
* Incident classification (Sev 1-4)
* Immediate stakeholders to notify
* Draft customer communication (honest, apologetic, specific about what happened and what we’re doing)
* Draft internal communication for leadership
* Key questions to ask engineering during investigation”
Having these drafted in 5 minutes lets you focus on coordination and decision-making rather than wordsmithing.
Post-incident (30 minutes):
“Write a post-mortem based on this incident timeline: [paste timeline]. Include:
* What happened (technical details)
* Root cause analysis
* Impact quantification (users affected, revenue impact, time to resolution)
* What went well in our response
* What could have been better
* Specific action items with owners and deadlines
* Process changes to prevent recurrence
Tone: Blameless, focused on learning and improvement.”
This gives you a strong first draft to refine with your team.
Common Pitfalls: What Not to Do with AI in Product Management
Now let’s talk about the mistakes I see PMs making with AI tools.
Pitfall #1: Treating AI Output as Final
The biggest mistake is copy-pasting AI output directly into your PRD, roadmap presentation, or stakeholder email without critical review.
The result? Documents that are grammatically perfect but strategically shallow. Presentations that sound impressive but don’t hold up under questioning. Emails that are professionally worded but miss the subtext of organizational politics.
The fix: Always ask yourself:
* Does this reflect my actual strategic thinking, or generic best practices?
* Would my CEO/engineering lead/biggest customer find this compelling and specific?
* Are there company-specific details, customer insights, or technical constraints that only I know?
* Does this sound like me, or like a robot?
Add those elements. That’s where your value as a PM comes through.
Pitfall #2: Using AI as a Crutch Instead of a Tool
Some PMs use AI because they don’t want to think deeply about the product. They’re looking for AI to do the hard work of strategy, prioritization, and trade-off analysis.
This never works. AI can help you think more systematically, but it can’t replace thinking.
If you find yourself using AI to avoid wrestling with hard questions (“Should we build X or Y?” “What’s our actual competitive advantage?” “Why would customers switch from the incumbent?”), you’re using it wrong.
The fix: Use AI to explore options, not to make decisions. Generate three alternatives, pressure-test each one, then use your judgment to decide. The AI can help you think through implications, but you’re still the one choosing.
Pitfall #3: Not Iterating
Getting mediocre AI output and just accepting it is a waste of the technology’s potential.
The PMs who get exceptional results from AI are the ones who iterate. They generate an initial response, identify what’s weak or missing, and ask follow-up questions. They might go through 5-10 iterations on a key section of a PRD.
Each iteration is quick (30 seconds to type a follow-up prompt, 30 seconds to read the response), but the cumulative effect is dramatically better output.
The fix: Budget time for iteration. Don’t try to generate a complete, polished PRD in one prompt. Instead, generate a rough draft, then spend 30 minutes iterating on specific sections that matter most.
Pitfall #4: Ignoring the Political and Human Context
AI tools have no understanding of organizational politics, interpersonal relationships, or the specific humans you’re working with.
They don’t know that your VP of Engineering is burned out and skeptical of any new initiatives. They don’t know that your CEO has a personal obsession with a specific competitor. They don’t know that your lead designer is sensitive about not being included early enough in the process.
If you use AI-generated communications without layering in this human context, you’ll create perfectly worded documents that land badly because they miss the subtext.
The fix: After generating AI content, explicitly ask yourself: “What human context am I missing? What relationships do I need to consider? What political dynamics are in play?” Then modify the AI output accordingly.
Pitfall #5: Over-Relying on a Single Tool
Different AI tools have different strengths. Claude is great for strategic depth, ChatPRD is great for structure, Gemini integrates well with Google Workspace.
If you only ever use one tool, you’re missing opportunities to leverage different strengths for different tasks.
The fix: Keep 2-3 tools in your toolkit. Use Claude for important PRDs and strategic documents. Use Gemini for quick internal documentation that needs to integrate with Google Docs. Use ChatPRD when you want more guided structure. Match the tool to the task.
Pitfall #6: Not Fact-Checking AI Output
AI tools hallucinate. They make up statistics, misrepresent competitors, and confidently state things that aren’t true. If you include those hallucinations in a PRD that goes to leadership, you look incompetent.
The fix: Fact-check everything, especially:
* Statistics and market data
* Competitive feature claims
* Technical capabilities and limitations
* Regulatory and compliance requirements
If the AI cites a number or makes a factual claim, verify it independently before including it in your document.
The Meta-Skill: Prompt Engineering for PMs
Let’s zoom out and talk about the underlying skill that makes all of this work: prompt engineering.
This is a real skill. The difference between a mediocre prompt and a great prompt can be a 10x difference in output quality. And unlike coding or design, where there’s a steep learning curve, prompt engineering is something you can get good at quickly.
Principle 1: Provide Context Before Instructions
Bad prompt:
“Write a PRD for an AI tutor”
Good prompt:
“I’m a PM at an edtech company with 2M users, primarily high school students. We’re exploring an AI tutor feature to complement our existing video content library and practice problems. Our main competitors are Khan Academy and Course Hero. Our differentiation is personalized learning paths based on student performance data.
Write a PRD for an AI tutor feature targeting students in the middle 80% academically who struggle with science and math.”
The second prompt gives Claude the context it needs to generate something specific and strategic rather than generic.
Principle 2: Specify Format and Constraints
Bad prompt:
“Generate success metrics”
Good prompt:
“Generate 5-7 success metrics for this feature. Include a mix of:
* Leading indicators (early signals of success)
* Lagging indicators (definitive success measures)
* User behavior metrics
* Business impact metrics
For each metric, specify: name, definition, target value, measurement method, and why it matters.”
The structure you provide shapes the structure you get back.
Principle 3: Ask for Multiple Options
Bad prompt:
“What should our Q2 priorities be?”
Good prompt:
“Generate 3 different strategic approaches for Q2:
* Option A: Focus on user acquisition
* Option B: Focus on engagement and retention
* Option C: Focus on monetization
For each option, detail: key initiatives, expected outcomes, resource requirements, risks, and recommendation for or against.”
Asking for multiple options forces the AI (and forces you) to think through trade-offs systematically.
Principle 4: Specify Audience and Tone
Bad prompt:
“Summarize this PRD”
Good prompt:
“Create a 1-paragraph summary of this PRD for our skeptical VP of Engineering. Tone: Technical, concise, addresses engineering concerns upfront. Focus on: technical architecture, resource requirements, risks, and expected engineering effort. Avoid marketing language.”
The audience and tone specification ensures the output will actually work for your intended use.
Principle 5: Use Iterative Refinement
Don’t try to get perfect output in one prompt. Instead:
* First prompt: Generate rough draft
* Second prompt: “This is too generic. Add specific examples from [our company context].”
* Third prompt: “The technical section is weak. Expand with architecture details and dependencies.”
* Fourth prompt: “Good. Now make it 30% more concise while keeping the key details.”
Each iteration improves the output incrementally.
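If you want to make this loop repeatable, it is easy to script. Here is a minimal sketch using the Anthropic Python SDK; the model name, outline, and follow-up prompts are placeholders you would swap for your own.

```python
# Sketch of an iterative refinement loop. The model name and prompts are
# placeholders; the pattern is what matters: keep each draft in the message
# history so every follow-up refines the previous version.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # placeholder; pin whichever model you use

def ask(messages):
    response = client.messages.create(model=MODEL, max_tokens=4096, messages=messages)
    return response.content[0].text

messages = [{"role": "user", "content": "Expand this outline into a PRD: <your outline>"}]
draft = ask(messages)

follow_ups = [
    "This is too generic. Add specific examples from our company context.",
    "The technical section is weak. Expand with architecture details and dependencies.",
    "Good. Now make it 30% more concise while keeping the key details.",
]
for prompt in follow_ups:
    # Append the prior draft so each follow-up refines it, not a fresh start
    messages += [{"role": "assistant", "content": draft},
                 {"role": "user", "content": prompt}]
    draft = ask(messages)

print(draft)  # the final refined version
```

The same loop works interactively in the chat UI; scripting it just makes the iterations cheap enough to run on every document.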
Let me break down the prompting approach that worked in this experiment, because this is immediately actionable for your work tomorrow.
Strategy 1: The Structured Outline Approach
Don’t go from zero to full PRD in one prompt. Instead:
* Start with strategic thinking - Spend 10-15 minutes outlining why you’re building this, who it’s for, and what problem it solves
* Get specific - Don’t say “users,” say “high school students in the middle 80% of academic performance”
* Include constraints - Budget, timeline, technical limitations, competitive landscape
* Dump your outline into the AI - Now ask it to expand into a full PRD
* Iterate section by section - Don’t try to perfect everything at once
This is exactly what I did in my experiment, and even with my somewhat sloppy outline, the results were dramatically better than they would have been with a single-sentence prompt.
Strategy 2: The Comparative Analysis Pattern
One technique I used that worked particularly well: asking each tool to do the same specific task and comparing results.
For example, I asked all five tools: “Please compose a one paragraph exact summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart.”
This forced each tool to synthesize the entire PRD into a compelling pitch while accounting for a specific, challenging audience. The variation in quality was revealing—and it gave me multiple options to choose from or blend together.
Actionable tip: When you need something critical (a pitch, an executive summary, a key decision framework), generate it with 2-3 different AI tools and take the best elements from each. This “ensemble approach” often produces better results than any single tool.
Strategy 3: The Iterative Refinement Loop
Don’t treat the AI output as final. Use it as a first draft that you then refine through conversation with the AI.
After getting the initial PRD, I could have asked follow-up questions like:
* “What’s missing from this PRD?”
* “How would you strengthen the success metrics section?”
* “Generate 3 alternative approaches to the core feature set”
Each iteration improves the output and, more importantly, forces me to think more deeply about the product.
What This Means for Your Career
If you’re an early or mid-career PM reading this, you might be thinking: “Great, so AI can write PRDs now. Am I becoming obsolete?”
Absolutely not. But your role is evolving, and understanding that evolution is critical.
The PMs who will thrive in the AI era are those who:
* Excel at strategic thinking - AI can generate options, but you need to know which options align with company strategy, customer needs, and technical feasibility
* Master the art of prompting - This is a genuine skill that separates mediocre AI users from exceptional ones
* Know when to use AI and when not to - Some aspects of product work benefit enormously from AI. Others (user interviews, stakeholder negotiation, cross-functional relationship building) require human judgment and empathy
* Can evaluate AI output critically - You need to spot the hallucinations, the generic fluff, and the strategic misalignments that AI inevitably produces
Think of AI tools as incredibly capable interns. They can produce impressive work quickly, but they need direction, oversight, and strategic guidance. Your job is to provide that guidance while leveraging their speed and breadth.
The Real-World Application: What to Do Monday Morning
Let’s get tactical. Here’s exactly how to apply these insights to your actual product work:
For Your Next PRD:
* Block 30 minutes for strategic thinking - Write your back-of-the-napkin outline in Google Docs or your tool of choice
* Open Claude (or ChatPRD if you want more structure)
* Copy your outline with this prompt:
“I’m a product manager at [company] working on [product area]. I need to create a comprehensive PRD based on this outline. Please expand this into a complete PRD with the following sections: [list your preferred sections]. Make it detailed enough for engineering to start breaking down into user stories, but concise enough for leadership to read in 15 minutes. [Paste your outline]”
* Review the output critically - Look for generic statements, missing details, or strategic misalignments
* Iterate on specific sections:
“The success metrics section is too vague. Please provide 3-5 specific, measurable KPIs with target values and explanation of why these metrics matter.”
* Generate supporting materials:
“Create a visual mockup of the core user flow showing the key interaction points.”
* Synthesize the best elements - Don’t just copy-paste the AI output. Use it as raw material that you shape into your final document
For Stakeholder Communication:
When you need to pitch something to leadership or engineering:
* Generate 3 versions of your pitch using different tools (Claude, ChatPRD, and one other)
* Compare them for:
* Clarity and conciseness
* Strategic framing
* Compelling value proposition
* Addressing likely objections
* Blend the best elements into your final version
* Add your personal voice - This is crucial. AI output often lacks personality and specific company context. Add that yourself.
For Feature Prioritization:
AI tools can help you think through trade-offs more systematically:
“I’m deciding between three features for our next release: [Feature A], [Feature B], and [Feature C]. For each feature, analyze: (1) Estimated engineering effort, (2) Expected user impact, (3) Strategic alignment with making our platform the go-to solution for [your market], (4) Risk factors. Then recommend a prioritization with rationale.”
This doesn’t replace your judgment, but it forces you to think through each dimension systematically and often surfaces considerations you hadn’t thought of.
The Uncomfortable Truth About AI and Product Management
Let me be direct about something that makes many PMs uncomfortable: AI will make some PM skills less valuable while making others more valuable.
Less valuable:
* Writing boilerplate documentation
* Creating standard frameworks and templates
* Generating routine status updates
* Synthesizing information from existing sources
More valuable:
* Strategic product vision and roadmapping
* Deep customer empathy and insight generation
* Cross-functional leadership and influence
* Critical evaluation of options and trade-offs
* Creative problem-solving for novel situations
If your PM role primarily involves the first category of tasks, you should be concerned. But if you’re focused on the second category while leveraging AI for the first, you’re going to be exponentially more effective than your peers who resist these tools.
The PMs I see succeeding aren’t those who can write the best PRD manually. They’re those who can write the best PRD with AI assistance in one-tenth the time, then use the saved time to talk to more customers, think more deeply about strategy, and build stronger cross-functional relationships.
Advanced Techniques: Beyond Basic PRD Generation
Once you’ve mastered the basics, here are some advanced applications I’ve found valuable:
Competitive Analysis at Scale
“Research our top 5 competitors in [market]. For each one, analyze: their core value proposition, key features, pricing strategy, target customer, and likely product roadmap based on recent releases and job postings. Create a comparison matrix showing where we have advantages and gaps.”
Then use web search tools in Claude or Perplexity to fact-check and expand the analysis.
Scenario Planning
“We’re considering three strategic directions for our product: [Direction A], [Direction B], [Direction C]. For each direction, map out: likely customer adoption curve, required technical investments, competitive positioning in 12 months, and potential pivots if the hypothesis proves wrong. Then identify the highest-risk assumptions we should test first for each direction.”
This kind of structured scenario thinking is exactly what AI excels at—generating multiple well-reasoned perspectives quickly.
User Story Generation
After your PRD is solid:
“Based on this PRD, generate a complete set of user stories following the format ‘As a [user type], I want to [action] so that [benefit].’ Include acceptance criteria for each story. Organize them into epics by functional area.”
This can save your engineering team hours of grooming meetings.
The Tools Will Keep Evolving. Your Process Shouldn’t
Here’s something important to remember: by the time you read this, the specific rankings might have shifted. Maybe ChatGPT-5 has leapfrogged Claude. Maybe a new specialized tool has emerged.
But the core principles won’t change:
* Do strategic thinking before touching AI
* Use the best tool available for your specific task
* Iterate and refine rather than accepting first outputs
* Blend AI capabilities with human judgment
* Focus your time on the uniquely human aspects of product management
The specific tools matter less than your process for using them effectively.
A Final Experiment: The Skeptical VP Test
I want to share one more insight from my testing that I think is particularly relevant for early and mid-career PMs.
Toward the end of my experiment, I gave each tool this prompt: “Please compose a one paragraph exact summary I can share over DM with a highly influential VP of engineering who is generally a skeptic but super smart.”
This is such a realistic scenario. How many times have you needed to pitch an idea to a skeptical technical leader via Slack or email? Someone who’s brilliant, who’s seen a thousand product ideas fail, and who can spot b******t from a mile away?
The quality variation in the responses was fascinating. ChatGPT gave me something that felt generic and safe. Gemini was better but still a bit too enthusiastic. Grok was... well, Grok.
But Claude and ChatPRD both produced messages that felt authentic, technically credible, and appropriately confident without overselling. They acknowledged the engineering challenges while framing the opportunity compellingly.
The lesson: When the stakes are high and the audience is sophisticated, the quality of your AI tool matters even more. That skeptical VP can tell the difference between a carefully crafted message and AI-generated fluff. So can your CEO. So can your biggest customers.
Use the best tools available, but more importantly, always add your own strategic thinking and authentic voice on top.
Questions to Consider: A Framework for Your Own Experiments
As I wrapped up my Loom, I posed some questions to the audience that I’ll pose to you:
“Let me know in the comments, if you do your PRDs using AI differently, do you start with back of the envelope? Do you say, oh no, I just start with one sentence, and then I let the chatbot refine it with me? Or do you go way more detailed and then use the chatbot to kind of pressure test it?”
These aren’t rhetorical questions. Your answer reveals your approach to AI-augmented product work, and different approaches work for different people and contexts.
For early-career PMs: I’d recommend starting with more detailed outlines. The discipline of thinking through your product strategy before touching AI will make you a stronger PM. You can always compress that process later as you get more experienced.
For mid-career PMs: Experiment with different approaches for different types of documents. Maybe you do detailed outlines for major feature PRDs but use more iterative AI-assisted refinement for smaller features or updates. Find what optimizes your personal productivity while maintaining quality.
For senior PMs and product leaders: Consider how AI changes what you should expect from your PM team. Should you be reviewing more AI-generated first drafts and spending more time on strategic guidance? Should you be training your team on effective AI usage? These are leadership questions worth grappling with.
The Path Forward: Continuous Experimentation
My experiment with these five AI tools took 45 minutes. But I’m not done experimenting.
The field of AI-assisted product management is evolving rapidly. New tools launch monthly. Existing tools get smarter weekly. Prompting techniques that work today might be obsolete in three months.
Your job, if you want to stay at the forefront of product management, is to continuously experiment. Try new tools. Share what works with your peers. Build a personal knowledge base of effective prompts and workflows. And be generous with what you learn. The PM community gets stronger when we share insights rather than hoarding them.
That’s why I created this Loom and why I’m writing this post. Not because I have all the answers, but because I’m figuring it out in real-time and want to share the journey.
A Personal Note on Coaching and Consulting
If this kind of practical advice resonates with you, I’m happy to work with you directly.
Through my PM coaching practice, I offer 1:1 executive, career, and product coaching for PMs and product leaders. We can dig into your specific challenges, whether that’s leveling up your AI workflows, navigating a career transition, or developing your strategic product thinking.
I also work with companies (usually startups or incubation teams) on product strategy, helping teams figure out PMF for new explorations and improving their product management function.
The format is flexible. Some clients want ongoing coaching, others prefer project-based consulting, and some just want a strategic sounding board for a specific decision. Whatever works for you.
Reach out through tomleungcoaching.com if you’re interested in working together.
OK. Enough pontificating. Let’s ship greatness.
Every few years, the world of product management goes through a phase shift. When I started at Microsoft in the early 2000s, we shipped Office in boxes. Product cycles were long, engineering was expensive, and user research moved at the speed of snail mail. Fast forward a decade and the cloud era reset the speed at which we build, measure, and learn. Then mobile reshaped everything we thought we knew about attention, engagement, and distribution.
Now we are standing at the edge of another shift. Not a small shift, but a tectonic one. Artificial intelligence is rewriting the rules of product creation, product discovery, product expectations, and product careers.
To help make sense of this moment, I hosted a panel of world class product leaders on the Fireside PM podcast:
• Rami Abu-Zahra, Amazon product leader across Kindle, Books, and Prime Video
• Todd Beaupre, Product Director at YouTube leading Home and Recommendations
• Joe Corkery, CEO and cofounder of Jaide Health
• Tom Leung (me), Partner at Palo Alto Foundry
• Lauren Nagel, VP Product at Mezmo
• David Nydegger, Chief Product Officer at Oviva

These are leaders running massive consumer platforms, high stakes health tech, and fast moving developer tools. The conversation was rich, honest, and filled with specific examples.
This post summarizes the discussion, adds my own reflections, and offers a practical guide for early and mid career PMs who want to stay relevant in a world where AI is redefining what great product management looks like.
Table of Contents
* What AI Cannot Do and Why PM Judgment Still Matters
* The New AI Literacy: What PMs Must Know by 2026
* Why Building AI Products Speeds Up Some Cycles and Slows Down Others
* Whether the PM, Eng, UX Trifecta Still Stands
* The Biggest Risks AI Introduces Into Product Development
* Actionable Advice for Early and Mid Career PMs
* My Takeaways and What Really Matters Going Forward
* Closing Thoughts and Coaching Practice
1. What AI Cannot Do and Why PM Judgment Still Matters
We opened the panel with a foundational question. As AI becomes more capable every quarter, what is left for humans to do? Where do PMs still add irreplaceable value? It is the question every PM secretly wonders about.
Todd put it simply: “At the end of the day, you have to make some judgment calls. We are not going to turn that over anytime soon.”
This theme came up again and again. AI is phenomenal at synthesizing, drafting, exploring, and narrowing. But it does not have conviction. It does not have lived experience. It does not feel user pain. It does not carry responsibility.
Joe from Jaide Health captured it perfectly when he said: “AI cannot feel the pain your users have. It can help meet their goals, but it will not get you that deep understanding.”
There is still no replacement for sitting with a frustrated healthcare customer who cannot get their clinical data into your system, or a creator on YouTube who feels the algorithm is punishing their art, or a devops engineer staring at an RCA output that feels 20 percent off.
Every PM knows this feeling: the moment when all signals point one way, but your gut tells you the data is incomplete or misleading. This is the craft that AI does not have.
Why judgment becomes even more important in an AI world
David, who runs product at a regulated health company, said something incredibly important: “Knowing what great looks like becomes more essential, not less. The PMs that thrive in AI are the ones with great product sense.”
This is counterintuitive for many. But when the operational work becomes automated, the differentiation shifts toward taste, intuition, sequencing, and prioritization.
Lauren asked the million-dollar question: “How are we going to train junior PMs if AI is doing the legwork? Who teaches them how to think?”
This is a profound point. If AI closes the gap between junior and senior PMs in execution tasks, the difference will emerge almost entirely in judgment. Knowing how to probe user problems. Knowing when a feature is good enough. Knowing which tradeoffs matter. Knowing which flaw is fatal and which is cosmetic.
AI is incredible at writing a PRD. AI is terrible at knowing whether the PRD is any good.
Which means the future PM becomes more strategic, more intuitive, more customer obsessed, and more willing to make thoughtful bets under uncertainty.
2. The New AI Literacy: What PMs Must Know by 2026
I asked the panel what AI literacy actually means for PMs. Not the hype. Not the buzzwords. The real work.
Instead of giving gimmicky answers, the discussion converged on a clear set of skills that PMs must master.
Skill 1: Understanding context engineering
David laid this out clearly: “Knowing what LLMs are good at and what they are not good at, and knowing how to give them the right context, has become a foundational PM skill.”
Most PMs think prompt engineering is about clever phrasing. In reality, the future is about context engineering. Feeding models the right data. Choosing the right constraints. Deciding what to ignore. Curating inputs that shape outputs in reliable ways.
Context engineering is to AI product development what Figma was to collaborative design. If you cannot do it, you are not going to be effective.
Skill 2: Evals, evals, evals
Rami said something that resonated with the entire panel: “Last year was all about prompts. This year is all about evals.”
He is right.
• How do you build a golden dataset?
• How do you evaluate accuracy?
• How do you detect drift?
• How do you measure hallucination rates?
• How do you combine UX evals with model evals?
• How do you decide what good looks like?
• How do you define safe versus unsafe boundaries?
AI evaluation is now a core PM responsibility. Not exclusively. But PMs must understand what engineers are testing for, what failure modes exist, and how to design test sets that reflect the real world.
Lauren said her PMs write evals side by side with engineering. That is where the world is going.
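To make this concrete, here is a minimal sketch of what a golden-dataset eval can look like. The test cases and keyword-based grading are deliberately simplistic placeholders; real eval suites typically use rubric scoring or an LLM-as-judge, but the shape is the same: fixed inputs, expected properties, and a pass rate you re-run on every prompt or model change.

```python
# Minimal golden-dataset eval sketch. The cases, the canned generate(), and
# the keyword check are placeholders for a real model call and grader.
golden_set = [
    {"input": "Explain photosynthesis to a 10th grader",
     "must_include": ["sunlight", "carbon dioxide", "glucose"]},
    {"input": "Help me factor x^2 + 5x + 6",
     "must_include": ["(x + 2)", "(x + 3)"]},
]

def generate(prompt: str) -> str:
    # Replace with a real model call; a canned answer keeps this sketch runnable.
    return "Photosynthesis uses sunlight and carbon dioxide to make glucose."

def passes(output: str, case: dict) -> bool:
    return all(term.lower() in output.lower() for term in case["must_include"])

results = [passes(generate(case["input"]), case) for case in golden_set]
print(f"Pass rate: {sum(results) / len(results):.0%}")  # track this across changes
```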
Skill 3: Knowing when to trust AI output and when to override it
Todd noted: “It is one thing to get an answer that sounds good. It is another thing to know if it is actually good.”
This is the heart of the role. AI can produce strategic recommendations that look polished, structured, and wise. But the real question is whether they are grounded in reality, aligned with your constraints, and consistent with your product vision.
A PM without the ability to tell real insight from confident nonsense will be replaced by someone who can.
Skill 4: Understanding the physics of model changes
This one surprised many people, but it was a recurring point.
Rami noted: “When you upgrade a model, the outputs can be totally different. The evals start failing. The experience shifts.”
PMs must understand:
• Models get deprecated
• Models drift
• Model updates can break well tuned prompts
• API pricing has real COGS implications
• Latency varies
• Context windows vary
• Some tasks need agents, some need RAG, some need a small finetuned model
This is product work now. The PM of 2026 must know these constraints as well as a PM of the cloud era understood database limits or API rate limits.
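One concrete discipline here is pinning model versions in code and moving the pin only after your evals pass on the new version. A minimal sketch, assuming the Anthropic Python SDK; the version strings are placeholders:

```python
# Sketch: pin model versions and fall back when the primary is deprecated or
# erroring. Model names are placeholders; the point is that the pinned
# version only changes after your evals pass on it.
import anthropic

PRIMARY = "claude-sonnet-4-20250514"     # placeholder pinned version
FALLBACK = "claude-3-5-sonnet-20241022"  # placeholder older, known-good version

client = anthropic.Anthropic()

def complete(messages, max_tokens=1024):
    for model in (PRIMARY, FALLBACK):
        try:
            resp = client.messages.create(model=model, max_tokens=max_tokens, messages=messages)
            return resp.content[0].text
        except anthropic.APIError:
            continue  # deprecated model, outage, etc.; try the next pin
    raise RuntimeError("All pinned models failed; page the on-call")
```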
Skill 5: How to construct AI powered prototypes in hours, not weeks
It now takes one afternoon to build something meaningful. Zero code required. Prompt, test, refine. Whether you use Replit, Cursor, Vercel, or sandboxed agents, the speed is shocking.
But this makes taste and problem selection even more important. The future PM must be able to quickly validate whether a concept is worth building beyond the demo stage.
3. Why Building AI Products Speeds Up Some Cycles and Slows Down Others
This part of the conversation was fascinating because people expected AI to accelerate everything. The panel had a very different view.
Fast: Prototyping and concept validation
Lauren described how her teams can build working versions of an AI powered Root Cause Analysis feature in days, test it with customers, and get directional feedback immediately.
“You can think bigger because the cost of trying things is much lower,” she said.
For founders, early PMs, and anyone validating hypotheses, this is liberating. You can test ten ideas in a week. That used to take a quarter.
Slow: Productionizing AI features
The surprising part is that shipping the V1 of an AI feature is slower than most expect.
Joe noted: “You can get prototypes instantly. But turning that into a real product that works reliably is still hard.”
Why? Because:
• You need evals.
• You need monitoring.
• You need guardrails.
• You need safety reviews.
• You need deterministic parts of the workflow.
• You need to manage COGS.
• You need to design fallbacks.
• You need to handle unpredictable inputs.
• You need to think about hallucination risk.
• You need new UI surfaces for non deterministic outputs.
Lauren said bluntly: “Vibe coding is fast. Moving that vibe code to production is still a four month process.”
This should be printed on a poster in every AI startup office.
Very Slow: Iterating on AI powered features
Another counterintuitive point. Many teams ship a great V1 but struggle to improve it significantly afterward.
David said their nutrition AI feature launched well but: “We struggled really hard to make it better. Each iteration was easy to try but difficult to improve in a meaningful way.”
Why is iteration so difficult?
Because model improvements may not translate directly into UX improvements. Users need consistency. Drift creates churn. Small changes in context or prompts can cause large changes in behavior.
Teams are learning a hard truth: AI powered features do not behave like typical deterministic product flows. They require new iteration muscles that most orgs do not yet have.
4. The PM, Eng, UX Trifecta in the AI Era
I asked whether the classic PM, Eng, UX triad is still the right model. The audience was expecting disagreement. The panel was surprisingly aligned.
The trifecta is not going anywhere
Rami put it simply: “We still need experts in all three domains to raise the bar.”
Joe added: “AI makes it possible for PMs to do more technical work. But it does not replace engineering. Same for design.”
AI blurs the edges of the roles, but it does not collapse them. In fact, each role becomes more valuable because the work becomes more abstract.
• PMs focus on judgment, sequencing, evaluation, and customer centric problem framing
• Engineers focus on agents, systems, architecture, guardrails, latency, and reliability
• Designers focus on dynamic UX, non deterministic UX patterns, and new affordances for AI outputs
What does change
AI makes the PM-Eng relationship more intense. The backbone of AI features is a combination of model orchestration, evaluation, prompting, and context curation. PMs must be tighter than ever with engineering to design these systems.
David noted that his teams focus more on individual talents. Some PMs are great at context engineering. Some designers excel at polishing AI generated layouts. Some engineers are brilliant at prompt chaining. AI reveals strengths quickly.
The trifecta remains. The skill distribution within it evolves.
5. The Biggest Risks AI Introduces Into Product Development
When we asked what scares PMs most about AI, the conversation became blunt and honest.
Risk 1: Loss of user trust
Lauren warned: “If people keep shipping low quality AI features, user trust in AI erodes. And then your good AI product suffers from the skepticism.”
This is very real. Many early AI features across industries are low quality, gimmicky, or unreliable. Users quickly learn to distrust these experiences.
Which means PMs must resist the pressure to ship before the feature is ready.
Risk 2: Skill atrophy
Todd shared a story that hit home for many PMs. “Junior folks just want to plug in the prompt and take whatever the AI gives them. That is a recipe for having no job later.”
PMs who outsource their thinking to AI will lose their judgment. Judgment cannot be regained easily.
This is the silent career killer.
Risk 3: Safety hazards in sensitive domains
David was direct: “If we have one unsafe output, we have to shut the feature off. We cannot afford even small mistakes.”
In healthcare, finance, education, and legal industries, the tolerance for error is near zero. AI must be monitored relentlessly. Human in the loop systems are mandatory. The cycles are slower but the stakes are higher.
Risk 4: The high bar for AI compared to humans
Joe said something I have thought about for years: “AI is held to a much higher standard than human decision making. Humans make mistakes constantly, but we forgive them. AI makes one mistake and it is unacceptable.”
This slows adoption in certain industries and creates unrealistic expectations.
Risk 5: Model deprecation and instability
Rami described a real problem AI PMs face: “Models get deprecated faster than they get replaced. The next model is not always GA. Outputs change. Prompts break.”
This creates product instability that PMs must anticipate and design around.
Risk 6: Differentiation becomes hard
I shared this perspective because I see so many early stage startups struggle with it.
If your whole product is a wrapper around an LLM, competitors will copy you in a week. The real differentiation will not come from using AI. It will come from how deeply you understand the customer, how you integrate AI with proprietary data, and how you create durable workflows.
6. Actionable Advice for Early and Mid Career PMs
This was one of my favorite parts of the panel because the advice was humble, practical, and immediately useful.
A. Develop deep user empathy. This will become your biggest differentiator.
Lauren said it clearly: “Maintain your empathy. Understand the pain your user really has.”
AI makes execution cheap. It makes insight valuable.
If you can articulate user pain precisely.
If you can differentiate surface friction from underlying need.
If you can see around corners.
If you can prototype solutions and test them in hours.
If you can connect dots between what AI can do and what users need.
You will thrive.
Tactical steps:
• Sit in on customer support calls every week.
• Watch 10 user sessions for every feature you own.
• Talk to customers until patterns emerge.
• Ask “why” five times in every conversation.
• Maintain a user pain log and update it constantly.
B. Become great at context engineering
This will matter as much as SQL mattered ten years ago.
Action steps:
• Practice writing prompts with structured context blocks (a minimal sketch follows this list).
• Build a library of prompts that work for your product.
• Study how adding, removing, or reordering context changes output.
• Learn RAG patterns.
• Learn when structured data beats embeddings.
• Learn when smaller local models outperform big ones.
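To make the first bullet concrete, here is a minimal sketch of a structured context block, assuming a generic chat-style model consumes the result. The tag names and the build_prompt helper are my own illustration, not any vendor's standard:

```python
# A minimal sketch of prompt assembly with labeled context blocks.
# The XML-style tags and section names are an illustrative convention,
# not a standard; the point is that the model sees clearly labeled
# context instead of one undifferentiated wall of text.

def build_prompt(product_facts: str, user_research: str, task: str) -> str:
    """Assemble a prompt from separate, labeled context blocks."""
    return (
        f"<product_context>\n{product_facts}\n</product_context>\n\n"
        f"<user_research>\n{user_research}\n</user_research>\n\n"
        f"<task>\n{task}\n</task>\n"
        "Answer using only the context above; say 'unknown' if something is missing."
    )

prompt = build_prompt(
    product_facts="B2B analytics dashboard, 40k weekly active users.",
    user_research="Admins struggle to find churn-risk accounts quickly.",
    task="Draft three PRD problem statements ranked by user impact.",
)
print(prompt)
```

Once prompts live in a form like this, “adding, removing, or reordering context” becomes a controlled experiment rather than guesswork.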
C. Learn eval frameworks
This is non-negotiable.
You need to know:
• Precision vs recall tradeoffs
• How to build golden datasets
• How to design scenario-based evals for UX
• How to test for hallucination
• How to monitor drift
• How to set quality thresholds
• How to build dashboards that reflect real-world input distributions
You do not need to write the code. You do need to define the eval strategy.
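For flavor, here is a toy eval against a hand-labeled golden set. Everything in it is invented: the data, the labels, and the deliberately naive keyword predictor standing in for a model, so the precision and recall arithmetic stays easy to follow:

```python
# Toy eval: score a stand-in predictor against a hand-labeled golden set.
# All data is invented; a real eval would call your model instead of
# the keyword heuristic below.

golden = [  # (input, expected: is this a churn-risk signal?)
    ("Customer asked to cancel twice this month", True),
    ("Customer praised the new dashboard", False),
    ("Usage dropped 60% after renewal email", True),
    ("Asked how to cancel one email digest", False),
    ("Customer quietly stopped logging in", True),
]

def predictor(text: str) -> bool:
    # Naive stand-in for a model call.
    return "cancel" in text.lower() or "dropped" in text.lower()

tp = sum(1 for x, y in golden if predictor(x) and y)      # correct flags
fp = sum(1 for x, y in golden if predictor(x) and not y)  # false alarms
fn = sum(1 for x, y in golden if not predictor(x) and y)  # misses

precision = tp / (tp + fp)  # of everything flagged, how much was real?
recall = tp / (tp + fn)     # of everything real, how much got flagged?
print(f"precision={precision:.2f} recall={recall:.2f}")  # 0.67 / 0.67
```

Tightening the predictor raises precision but can lower recall. Deciding which side of that tradeoff your product can afford is exactly the eval strategy a PM owns.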
D. Strengthen your product sense
You cannot outsource product taste.
Todd said it best: “Imagine asking AI to generate 20 percent growth for you. It will not tell you what great looks like.”
To strengthen your product sense:
• Review the best products weekly.
• Take screenshots of great UX patterns.
• Map user flows from apps you admire.
• Break products down into primitives.
• Ask yourself why a product decision works.
• Predict what great would look like before you design it.
The PMs who thrive will be the ones who can recognize magic when they see it.
E. Stay curious
Rami’s closing advice was simple and perfect: “Stay curious. Keep learning. It never gets old.”
AI changes monthly. The PM who is excited by new ideas will outperform the PM who clings to old patterns.
Practical habits:
• Read one AI research paper summary each week.
• Follow evaluation and model updates from major vendors.
• Build at least one small AI prototype a month.
• Join AI PM communities.
• Teach juniors what you learn. Nothing accelerates mastery faster.
F. Embrace velocity and side projects
Todd said that some of his biggest career breakthroughs came from solving problems on the side.
This is more true now than ever.
If you have an idea, you can build an MVP over a weekend. If it solves a real problem, someone will notice.
G. Stay close to engineering
Not because you need to code, but because AI features require tighter PM-engineering collaboration.
Learn enough to be dangerous:
• How embeddings work
• How vector stores behave
• What latency tradeoffs exist
• How agents chain tasks
• How model versioning works
• How context limits shape UX
• Why some prompts blow up API costs
If you can speak this language, you will earn trust and accelerate cycles.
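If embeddings and vector stores feel abstract, this back-of-the-napkin sketch shows the core ranking math with hand-made three-dimensional vectors. Real embeddings have hundreds or thousands of dimensions, but cosine similarity works the same way:

```python
# Cosine similarity over toy "embeddings". The three-dimensional
# vectors are invented stand-ins for real embedding-model output;
# a vector store just runs this kind of comparison at scale.

import math

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "billing FAQ":   [0.8, 0.3, 0.1],
    "hiking tips":   [0.0, 0.2, 0.9],
}
query = [0.85, 0.2, 0.05]  # pretend-embedding of "how do I get my money back"

for name, vec in sorted(docs.items(), key=lambda kv: -cosine(query, kv[1])):
    print(f"{cosine(query, vec):.3f}  {name}")  # most similar docs first
```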
H. Understand the business deeply
Joe’s advice was timeless: “Know who pays you and how much they pay. Solve real problems and know the business model.”
PMs who understand unit economics, COGS, pricing, and funnel dynamics will stand out.
7. Tom’s Takeaways and What Really Matters Going Forward
I ended the recording by sharing what I personally believe after moderating this discussion and working closely with a variety of AI teams over the past 2 years.
Judgment becomes the most valuable PM skill
As AI gets better at analysis, synthesis, and execution, your value shifts to:
• Choosing the right problem
• Sequencing decisions
• Making 55/45 calls
• Understanding user pain
• Making tradeoffs
• Deciding when good is good enough
• Defining success
• Communicating vision
• Influencing the org
Agents can write specs. LLMs can produce strategies. But only humans can choose the right one and commit.
Learning speed becomes a competitive advantage
I said this on the panel and I believe it more every month.
Because of AI, you now have:
• Infinite coaches
• Infinite mentors
• Infinite experts
• Infinite documentation
• Infinite learning loops
A PM who learns slowly will not survive the next decade.
Curiosity, empathy, and velocity will separate great from good
Many panelists said versions of this. The common pattern was:
• Understand users deeply
• Combine multiple tools creatively
• Move quickly
• Learn constantly
The future rewards generalists with taste, speed, and emotional intelligence.
Differentiation requires going beyond wrapper apps
This is one of my biggest concerns for early stage founders. If your entire product is a wrapper around a model, you are vulnerable.
Durable value will come from:
• Proprietary data
• Proprietary workflows
• Deep domain insight
• Organizational trust
• Distribution advantage
• Safety and reliability
• Integration with existing systems
AI is a component, not a moat.
8. Closing Thoughts
Hosting this panel made me more optimistic about the future of product management. Not because AI will not change the job. It already has. But because the fundamental craft remains alive.
Product management has always been about understanding people, making decisions with incomplete information, telling compelling stories, guiding teams through ambiguity, and being right more often than not.
AI accelerates the craft. It amplifies the best PMs and exposes the weak ones. It rewards curiosity, empathy, velocity, and judgment.
If you want tailored support on your PM career, leadership journey, or executive path, I offer 1 on 1 career, executive, and product coaching at tomleungcoaching.com.
OK team. Let’s ship greatness.
The Interview That Sparked This Essay
Joe Corkery and I worked together at Google years ago, and he has since gone on to build a venture-backed company tackling a real and systemic problem in healthcare communication.
This essay is my attempt to synthesize that conversation. It is written for early and mid-career PMs in Silicon Valley who want to get sharper at product judgment, market discovery, customer validation, and knowing the difference between encouragement and signal. If you have ever shipped something, presented it to customers, and then heard polite nodding instead of movement and urgency, this is for you.
Joe’s Unusual Career Arc
Joe’s background is not typical for a founder. He is a software engineer. And a physician. And someone who has led business development in the pharmaceutical industry. That multidisciplinary profile allowed him to see something that many insiders miss: healthcare is full of problems that everyone acknowledges, yet very few organizations are structurally capable of solving.
When Joe joined Google Cloud in 2014, he helped start the healthcare and life sciences product org. Yet the timing was difficult. As he put it:
“The world wasn’t ready or Google wasn’t ready to do healthcare.”
So instead of building healthcare products right away, he spent two years working on security, compliance, and privacy. That detour will matter later, because it set the foundation for everything he is now doing at Jaide.
Years later, he left Google to build a healthcare company focused initially on guided healthcare search, particularly for women’s health. The idea resonated emotionally. Every customer interview validated the need. Investors said it was important. Healthcare organizations nodded enthusiastically.
And yet, there was no traction.
This created a familiar and emotionally challenging founder dilemma:
* When everyone is encouraging you
* But no one will pay you or adopt early
* How do you know if you are early, unlucky, or wrong?
This is the question at the heart of product strategy.
False Positives: Why Encouragement Is Not Feedback
If you have worked as a PM or founder for more than a few weeks, you have encountered positive feedback that turned out to be meaningless. People love your idea. Executives praise your clarity. Customers tell you they would definitely use it. Friends offer supportive high-fives.
But then nothing moves.
As Joe put it:
“Everyone wanted to be supportive. But that makes it hard to know whether you’re actually on the right path.”
This is not because people are dishonest. It is because people are kind, polite, and socially conditioned to encourage enthusiasm. In Silicon Valley especially, we celebrate ambition. We praise risk-taking. We cheer for the founder-in-the-garage mythology. People fear that telling you your idea is flawed would crush your passion.
So even when we explicitly ask for brutal honesty, people soften their answers.
This is the false positive trap.
And if you misread encouragement as traction, you can waste months or even years.
The Small Framing Change That Changes Everything
Joe eventually realized that the problem was not the idea itself. The problem was how he was asking for feedback.
When you present your idea as the idea, people naturally react supportively:
* “That’s really interesting.”
* “I could see that being useful.”
* “This is definitely needed.”
But when you instead present two competing ideas and ask someone to help you choose, you change the psychology of the conversation entirely.
Joe explained it this way:
“When we said, ‘We are building this. What do you think?’ people wanted to be encouraging. But when we asked, ‘We are choosing between these two products. Which one should we build?’ it gave them permission to actually critique.”
This shift is subtle, but powerful. Suddenly:
* People contrast.
* Their reasoning surfaces.
* Their hesitation becomes visible.
* Their priorities emerge with clarity.
By asking someone to choose between two ideas, you activate their decision-making brain instead of their supportive brain.
It is no different from usability testing. If you show someone a screen and ask what they think, they are polite. If you give them a task and ask them to complete it, their actual friction appears immediately.
In product discovery, friction is truth.
How This Applies to PMs, Not Just Founders
You may be thinking: this is interesting for entrepreneurs, but I work inside a company. I have stakeholders, OKRs, a roadmap, and a backlog that already feels too full.
This technique is actually more relevant for PMs inside companies than for founders.
Inside organizations, political encouragement is even more pervasive:
* Leaders say they want innovation, but are risk averse.
* Cross-functional partners smile in meetings, but quietly maintain objections.
* Engineers nod when you present the roadmap, but may not believe in it.
* Customers say they like your idea, but do not prioritize adoption.
One of the most powerful tools you can use as a PM is framing your product decisions as explicit choices, rather than proposals seeking validation. For example:
Instead of saying: “We are planning to build a new onboarding flow. Here is the design. Thoughts?”
Say: “We are deciding between optimizing retention or acquisition next quarter. If we choose retention, the main lever is onboarding friction. Here are two possible approaches. Which outcome matters more to the business right now?”
In the second framing:
* The business goal is visible.
* The tradeoff is unavoidable.
* The decision owner is clear.
* The conversation becomes real.
This is how PMs build credibility and influence: not through slides or persuasion, but through framing decisions clearly.
Jaide’s Pivot: From Health Search to AI Translation
The result of Joe’s reframed feedback approach was unambiguous.
Across dozens of conversations with healthcare executives and hospital leaders, one pattern emerged consistently:
Translation was the urgent, budget-backed, economically meaningful problem.
As Joe put it, after talking to more than 40 healthcare decision-makers:
“Every single person told us to build the translation product. Not mostly. Not many. Every single one.”
This kind of clarity is rare in product strategy. When you get it, you do not ignore it. You move.
Jaide Health shifted its core focus to solving a very real, very measurable, and very painful problem in healthcare: the language gap affecting millions of patients.
More than 25 million patients in the United States do not speak English well enough to communicate with clinicians. This leads to measurable harm:
* Longer hospital stays
* Increased readmission rates
* Higher medical error rates
* Lower comprehension of discharge instructions
The status quo for translation relies on human interpreters who are expensive, limited, slow to schedule, and often unavailable after hours or in rare languages. Many clinicians, due to lack of resources, simply use Google Translate privately on their phones. They know this is not secure or compliant, but they feel like they have no better option.
So Jaide built a platform that integrates compliance, healthcare-specific terminology, workflow embedding, custom glossaries, discharge summaries, and real-time accessibility.
This is not simply “healthcare plus GPT”. It is targeted, workflow-integrated, risk-aware operational excellence.
Product managers should study this pattern closely.
The winning strategy was not inventing a new problem. It was solving a painful problem that everyone already agreed mattered.
The Core PM Lesson: Focus on Problems With Urgent Budgets Behind Them
A question I often ask PMs I coach:
Who loses sleep if this problem is not solved?
If the answer is:
* “Not sure”
* “Eventually the business will feel it”
* “It would improve the experience”
* “It could move a KPI if adoption increases”
Then you do not have a real problem yet.
Real product opportunities have:
* A user who is blocked from achieving something meaningful
* A measurable cost or consequence of inaction
* An internal champion with authority to push change
* An adjacent workflow that your product can attach to immediately
* A budget owner who is willing to pay now, not later
Healthcare translation checks every box. That is why Joe now has institutional adoption and a business with meaningful traction behind it.
Why PMs Struggle With This in Practice
If the lesson seems obvious, why do so many PMs fall into the encouragement trap?
The reason is emotional more than analytical.
It is uncomfortable to confront the possibility that your idea, feature, roadmap, strategy, or deck is not compelling enough yet. It is easier to seek validation than truth.
In my first startup, we kept our product in closed beta for months longer than we should have. We told ourselves we were refining the UX, improving onboarding, solidifying architecture. The real reason, which I only admitted years later, was that I was afraid the product was not good enough. I delayed reality to protect my ego.
In product work, speed of invalidation is as important as speed of iteration.
If something is not working, you need to know as quickly as possible. The faster you learn, the more shots you get. The best PMs do not fall in love with their solutions. They fall in love with the moments of clarity that allow them to change direction quickly.
Actionable Advice for Early and Mid-Career PMs
Below are specific behaviors and habits you can put into practice immediately.
1. Always test product concepts as choices, not presentations
Instead of asking: “What do you think of this idea?”
Ask: “We are deciding between these two approaches. Which one is more important for you right now and why?”
This forces prioritization, not politeness.
2. Never ship a feature without observing real usage inside the workflow
A feature that exists but is not used does not exist.
Sit next to users. Watch screen behavior. Listen to their muttering. Ask where they hesitate. And most importantly, observe what they do after they close your product.
That is where the real friction lives.
3. Always ask: What is the cost of not solving this?
If there is no real cost of inaction, the feature will not drive adoption.
Impact must be felt, not imagined.
4. Look for users with strong emotional urgency, not polite agreement
When someone says: “This would be helpful.”
That is death.
When someone says: “I need this and I need it now.”
That is life.
Find urgency. Design around urgency. Ignore politeness.
5. Know the business model of your customer better than they do
This is where many PMs plateau.
If you want to be taken seriously by executives, you must understand:
* How your customer makes money
* What costs they must manage
* Which levers influence financial outcomes
When PMs learn to speak in revenue, cost, and risk instead of features, priorities, and backlog, their influence changes instantly.
The Broader Strategic Question: What Happens When Foundational Models Improve?
During our conversation, I asked Joe whether the rapid improvement of GPT-like models at translation will eventually make specialized healthcare translation unnecessary.
His answer was pragmatic:
“Our goal is to ride the wave. The best technology alone does not win. The integrated solution that solves the real problem wins.”
This is another crucial product lesson:
* Foundational models are table stakes.
* Differentiation comes from workflow integration, specialization, compliance, and trust.
* Adoption is driven by reducing operational friction.
In other words:
In AI-first product strategy, the model is the engine. The workflow is the vehicle. The customer problem is the road.
The Future of Product Work: Judgment Over Output
The world is changing. Tools are accelerating. Capabilities are compounding. But the core skill of product leadership remains the same:
Can you tell the difference between signal and noise, urgency and politeness, truth and encouragement?
That is judgment.
Product management will increasingly become less about writing PRDs or pushing execution and more about identifying the real problem worth solving, framing tradeoffs clearly, and navigating ambiguity with confidence and clarity.
The PMs who will thrive in the coming decade are those who learn how to ask better questions.
Closing
This conversation with Joe reminded me that most of the time, product failure is not the result of a bad idea. It is the result of insufficient clarity. The clarity does not come from thinking harder. It comes from testing real choices, with real users, in real workflows, and asking questions that force truth rather than encouragement.
If this resonates and you want help sharpening your product judgment, improving your influence with executives, developing clarity in your roadmap, or navigating career transitions, I work 1:1 with a small number of PMs, founders, and product executives.
You can learn more at tomleungcoaching.com.
OK. Enough pontificating. Let’s ship greatness.
I didn’t plan to make a video today. I’d just wrapped a client call, remembered that OpenAI had released Atlas, and decided to record a quick unboxing for my Fireside PM community.
I’d heard mixed things—some people raving about it, others underwhelmed—but I made a deliberate choice not to read any reviews beforehand. I wanted to go in blind, the way an actual user would.
Within 30 minutes, I had my verdict: Atlas earns a C+.
It’s ambitious, it’s fast, and it hints at a radical new way to experience the web. But it also stumbles in ways that remind you just how fragile early AI products can be—especially when ambition outpaces usability.
This post isn’t a teardown or a fan letter. It’s a field report from someone who’s built and shipped dozens of products, from scrappy startups to billion-user platforms. My goal here is simple: unpack what Atlas gets wrong, acknowledge what it gets right, and pull out lessons every PM and product team can use.
The Unboxing Experience
When I first launched Atlas, I got the usual macOS security warning. I’m not docking points for that—this is an MVP, and once it hits the Mac App Store, those prompts will fade into the background.
There was an onboarding window outlining the main features, but I barely glanced at it. I was eager to jump in and see the product in action. That’s not a unique flaw—it’s how most real users behave. We skip the instructions and go straight to testing the limits.
That’s why the best onboarding happens in motion, not before use. There were some suggested prompts, which I ignored, but I would’ve loved contextual fly-outs or light tooltips appearing as I explored beyond the first 30 seconds:
* “Try asking Atlas to summarize this page.”
* “Highlight text to discuss it.”
* “Atlas can compare this to other sources—want to see how?”
Small, progressive cues like these are what turn exploration into mastery.
The initial onboarding screen wasn’t wrong—it was just misplaced. It taught before I cared. And that’s a universal PM lesson: meet users where their curiosity is, not where your product tour is.
When Atlas Stumbled
Atlas’s biggest issue isn’t accuracy or latency—it’s identity.
It doesn’t yet know what it wants to be. On one hand, it acts like a browser with ChatGPT built in. On the other, it markets itself as an intelligent agent that can browse for you. Right now, it does neither convincingly.
When I tried simple commands like “Summarize this page” or “Open the next link and tell me what it says,” the experience broke down. Sometimes it responded correctly; other times, it ignored the context entirely.
The deeper issue isn’t technical—it’s architectural. Atlas hasn’t yet resolved the question of who’s driving. Is the user steering and Atlas assisting, or is Atlas steering and the user supervising?
That uncertainty creates friction. It’s like co-piloting with someone who keeps grabbing the wheel mid-turn.
Then there’s the missing piece that could make Atlas truly special: action loops.
The UI makes it feel like Atlas should be able to take action—click, save, organize—but it rarely does. You can ask it to summarize, but you can’t yet say “add this to my notes” or “book this flight.” Those are the natural next steps in the agentic journey, and until they arrive, Atlas feels like a chat interface masquerading as a browser.
This isn’t a criticism of the vision—it’s a question of sequencing. The team is building for the agentic future before the product earns the right to claim that mantle. Until it can act, Atlas is mostly a neat wrapper around ChatGPT that doesn’t justify replacing Chrome, Safari, or Edge.
Where Atlas Shines
Despite the friction, there were moments where I saw real promise.
When Atlas got it right, it was magical. I’d open a 3,000-word article, ask for a summary, and seconds later have a coherent, tone-aware digest. Having that capability integrated directly into the browsing experience—no copy-paste, no tab-switching—is an elegant idea.
You can tell the team understands restraint. The UI is clean and minimal, the chat panel is thoughtfully integrated, and the speed is impressive. It feels engineered by people who care about quality.
The challenge is that all of this could, in theory, exist as a plugin. The browser leap feels premature. Building a full browser is one of the hardest product decisions a company can make—it’s expensive, high-friction, and carries a huge switching cost for users.
The most generous interpretation is that OpenAI went full browser to enable agentic workflows—where Atlas doesn’t just summarize, but acts on behalf of the user. That would justify the architecture. But until that capability arrives, the browser feels like infrastructure waiting for a reason to exist.
Atlas today is a scaffolding for the future, not a product for the present.
Lessons for Product Managers
Even so, Atlas offers a rich set of takeaways for PMs building ambitious products.
1. Don’t Confuse Vision with MVP
You earn the right to ship big ideas by nailing the small ones. Atlas’s long-term vision is compelling, but the MVP doesn’t yet prove why it needed to exist. Start with one unforgettable use case before scaling breadth.
2. Earn Every Switch Cost
Changing browsers is one of the highest-friction user behaviors in software. Unless your product delivers something 10x better, start as an extension, not a replacement.
3. Design for Real Behavior, Not Ideal Behavior
Most users skip onboarding. Expect it. Plan for it. Guide them in context instead of relying on their patience.
4. Choose a Metaphor and Commit
Atlas tries to be both browser and assistant. Pick one. If you’re an assistant, drive. If you’re a browser, stay out of the way. Users shouldn’t have to guess who’s in control.
5. Autonomy Without Agency Frustrates Users
It’s worse for an AI to understand what you want but refuse to act than to not understand at all. Until Atlas can take meaningful action, it’s not an agent—it’s a spectator.
6. Sequence Ambition Behind Value
The product is building for a world that doesn’t exist yet. Ambition is great, but the order of operations matters. Earn adoption today while building for tomorrow.
Advice for the Atlas Team
If I were advising the Atlas PM and design teams directly, I’d focus on five things:
* Clarify the core identity. Decide if you’re an AI browser with ChatGPT or a ChatGPT agent that uses a browser. Everything else flows from that choice.
* Earn the right to replace Chrome. Give users one undeniably magical use case that justifies the switch—research synthesis, comparison mode, or task execution.
* Fix the metaphor collision. Make it obvious who’s in control: human or AI. Even a “manual vs. autopilot” toggle would add clarity.
* Build action loops. Move from summarization to completion. The browser of the future won’t just explain—it will execute.
* Sequence ambition. Agentic work is the destination, but the current version needs to win users on everyday value first.
None of this is out of reach. The bones are good. What’s missing is coherence.
Closing Reflection
Atlas is a fascinating case study in what happens when world-class technology meets premature positioning. It’s not bad—it’s unfinished.
A C+ isn’t an insult. It’s a reminder that potential and product-market fit are two different things. Atlas is the kind of product that might, in a few releases, feel indispensable. But right now, it’s a prototype wearing the clothes of a platform.
For every PM watching this unfold, the lesson is universal: don’t get seduced by your own roadmap. Ambition must be earned, one user journey at a time.
That’s how trust is built—and in AI, trust is everything.
If you or your team are wrestling with similar challenges—whether it’s clarifying your product vision, sequencing your roadmap, or improving PM leadership—I offer both 1:1 executive and career coaching at tomleungcoaching.com and expert product management consulting and fractional CPO services through my firm, Palo Alto Foundry.
OK. Enough pontificating. Let’s ship greatness.
Introduction
One of the great joys of hosting my Fireside PM podcast is the opportunity to reconnect with people I’ve known for years and go deep into the mechanics of business building. Recently, I sat down with Jason Stoffer, partner at Maveron Capital, a venture firm with a laser focus on consumer companies. Jason and I go way back to my Seattle days, so this was both a reunion and an education. Our conversation turned into a masterclass on scaling consumer businesses, the art of finding moats, and the brutal realities of marketplaces.
But beyond the case studies, what stood out were the actionable insights PMs can apply right now. If you’re an early or mid-career product manager in Silicon Valley, there are playbooks here you can borrow—not in theory, but in practice.
Jason summed up his approach to analyzing companies like this: “So many founders can get caught in the moment that sometimes it’s best when we’re looking at a new investment to talk about if things go right, what can happen. What would an S-1 or public filing look like? What would the company look like at a big M&A event? And then you work backwards.” That mindset—begin with the end in mind—is as powerful for a product manager shipping features as it is for a VC evaluating billion-dollar bets.
In this post, I’ll share:
* The key lessons from Jason’s breakdown of Quince and StubHub
* How these lessons apply directly to your PM career
* Tactical moves you can make to future-proof your trajectory
* Reflections on what surprised me most in this conversation
And along the way, I’ll highlight specific frameworks and examples you can put into action this week.
Part 1: Quince and the Power of Supply Chain Innovation
When Jason first explained Quince’s model, I’ll admit I was skeptical. On its face, it sounds like yet another DTC apparel play. Sell cheap cashmere sweaters online? Compete with incumbents like Theory and Away? It didn’t sound differentiated.
Jason disagreed. “Most people know Shein, and Shein was kind of working direct with factories. Quince’s innovation was asking, what do factories in Asia have during certain times of the year? They have excess capacity. Those are the same factories who are making a Theory shirt or an Away bag. Quince went to those factories and said, hey, make product for us, you hold the inventory, we’ll guarantee we’ll sell it.”
That’s not a design tweak—it’s a supply chain disruption. Costco built an empire on this principle. TJX did the same. Walmart before them. If you can structurally rewire how goods get to consumers, you’ve got the foundation for a massive business.
Lesson for PMs: Sometimes the real innovation isn’t visible in the interface. It’s hidden in the plumbing. As PMs, we often obsess over UI polish, onboarding flows, or feature prioritization. But step back and ask: what’s the equivalent of supply chain disruption in your domain? It might be a new data pipeline, a pricing model, or even a workflow that cuts out three layers of manual steps for your users. Those invisible shifts can unlock outsized value.
Jason gave the example of Quince’s $50 cashmere sweater. “Anyone in retail knows that if you’re selling at a 12% gross margin and it’s apparel with returns, you’re making no money on that. What is it? It’s an alternative method of customer acquisition. You hook them with the sweater and sell them everything else.” In other words, they turned a P&L liability into a marketing hack.
Actionable move for PMs: Identify your “$50 sweater.” What’s the feature you can offer that might look unprofitable or inconvenient in isolation, but serves as an on-ramp to deeper engagement? Maybe it’s a generous free tier in SaaS, or an intentionally unscalable white-glove onboarding process. Don’t dismiss those just because they don’t scale on day one.
Part 2: Moats, Marketing, and Hero SKUs
Jason emphasized that great retailers pair supply chain execution with marketing innovation. Costco has rotisserie chickens and $2 hot dogs. Quince has $50 cashmere sweaters. These “hero SKUs” create shareable moments and lasting brand associations.
“You’re pairing supply chain innovation with marketing innovation, and it’s super effective,” Jason explained.
Lesson for PMs: Don’t just think about your feature set—think about your hero feature. What’s the one thing that makes users say, “You have to try this product”? Too often, PM roadmaps are a laundry list of incremental improvements. Instead, design at least one feature that can carry your brand in conversations, tweets, and TikToks. Think about Figma’s multiplayer cursors or Slack’s playful onboarding. These are features that double as marketing.
Part 3: StubHub and the Economics of Trust
After Quince, Jason shifted to a very different case study: StubHub. Here, the lesson wasn’t about supply chain but about moats built on trust, liquidity, and cash flow mechanics.
“Customers will pay for certainty even if they hate you,” Jason said. Think about that. StubHub’s fees are infamous. Buyers grumble, sellers grumble. And yet, if you need a Taylor Swift ticket and want to be sure it’s legit, you go to StubHub. That reliability is the moat.
Lesson for PMs: Trust is an underrated product feature. In consumer software, this might mean uptime and reliability. In enterprise SaaS, it might mean compliance and security certifications. In AI, it could mean interpretability and guardrails. Don’t underestimate how much people will endure friction if they can be sure you’ll deliver.
Jason also pointed out StubHub’s cash flow hack: “StubHub gets money from buyers up front and then pays the sellers later. That’s a beautiful business model. If you create a cash flow cycle where you’re getting the money first and delivering later, you raise a lot less equity and get diluted less.”
This is a reminder that product decisions can have financial implications. As PMs, you may not directly set billing cycles, but you can influence monetization models, free trial design, or even refund policies—all of which affect working capital.
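To put rough numbers on that mechanic, here is a toy illustration with invented figures showing how collect-first, pay-later terms create a standing cash float:

```python
# Toy working-capital math with invented numbers: a marketplace collects
# from buyers at sale time and pays sellers 30 days later, so at any
# moment it holds roughly a month of seller payouts in cash.

monthly_gmv = 1_000_000      # hypothetical gross sales per month
take_rate = 0.15             # hypothetical platform fee
payout_delay_days = 30       # sellers are paid a month after the sale

seller_payouts_per_month = monthly_gmv * (1 - take_rate)
float_held = seller_payouts_per_month * (payout_delay_days / 30)

print(f"Standing cash float: ${float_held:,.0f}")  # $850,000
# That float funds operations that would otherwise require raised equity.
```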
Actionable move for PMs: Partner with finance. Ask them: what product levers could improve cash conversion cycles? Could prepayment discounts, annual billing, or usage-based pricing reduce working capital strain? Thinking beyond the feature spec makes you more valuable to your company—and accelerates your own career.
Part 4: Five Takeaways from StubHub
Jason listed five lessons from StubHub:
* Trust is a moat – Even if users complain, reliability keeps them loyal.
* Liquidity is a moat – Scale compounds, especially in marketplaces.
* Cash flow mechanics matter – Payment terms can determine survival.
* Tooling locks in supply – Seller-facing tools create stickiness.
* Scale itself compounds – Once you’re ahead, momentum carries you.
Part 5: What Surprised Me Most
As I listened back to this conversation, two surprises stood out.
First, the sheer size of value retail. Jason noted that TJX is worth $157 billion. Burlington, $22 billion. Costco, $418 billion. These aren’t sexy tech names, but they are empires. It made me rethink my assumptions about what “boring” industries can teach us.
Second, Jason’s humility about being wrong. “Reddit might be one,” he admitted when I asked about his biggest misses. “I had no idea that LLMs would use their data in a way that would make it incredibly important. I was dead wrong. I said sit on the sidelines.” That candor is refreshing—and a reminder that even seasoned investors get it wrong. The key is to keep learning.
Lesson for PMs: Admit your misses. Write them down. Share them. Don’t hide them. Your credibility grows when you own your blind spots and show how you’ve adjusted.
Closing Thoughts
Talking with Jason felt like being back in business school—but with sharper edges. These aren’t abstract frameworks. They’re battle-tested strategies from companies that scaled to billions. As PMs, our job isn’t just to ship features. It’s to build businesses. That requires thinking about supply chains, trust, cash flow, and marketing moats.
If you found this helpful and want to go deeper, check out Jason’s Substack, Ringing the Bell, where he publishes his case studies. And if you want to level up your own career trajectory, I offer 1:1 executive, career, and product coaching at tomleungcoaching.com.
Shape the Future of PM
And if you haven’t yet, I’d love your input on my Future of Product Management survey. It only takes about 5 minutes, and by filling it out you’ll get early access to the results plus an invitation to a live readout with a panel of top product leaders. The survey explores how AI, team structures, and skill sets are reshaping the PM role for 2026 and beyond.
OK. Let’s ship greatness.
When I sit down with product leaders who’ve spent decades shaping how Silicon Valley builds products, I’m always struck by how their career arcs echo the very lessons they now teach. Michael Margolis is no exception.
Michael started his career as an anthropologist, stumbled into educational software in the late 90s, helped scale Gmail during its formative years, and eventually became one of the first design researchers at Google Ventures (GV). For fifteen years, he sat at the intersection of startups and product discovery, helping founders learn faster, save years of wasted effort, and—sometimes—kill their darlings before they drained all the fuel.
In our conversation, Michael didn’t just share war stories. He laid out a concrete, repeatable framework for product teams—whether you’re a PM at a FAANG company or a fresh hire at a Series A startup—on how to cut through noise, get to the truth, and accelerate learning cycles.
This post is my attempt to capture those lessons. If you’re an early to mid-career PM in Silicon Valley trying to sharpen your craft, this is for you.
From Anthropology to Gmail: The Value of Unorthodox Beginnings
Michael’s path to Google wasn’t a linear “go to Stanford CS, join a startup, IPO” narrative. Instead, he started in anthropology and educational software, producing floppy-disk learning titles at The Learning Company and Electronic Arts. That detour turned out to be foundational.
“Studying anthropology was my introduction to usability and ethnography,” Michael told me. “It gave me a lens to look at people’s behaviors not just as data points but as cultural patterns.”
For PMs, the lesson is clear: don’t discount the odd chapters of your own career. That sales job, that nonprofit internship, or that side hustle in teaching can become your secret weapon later. Michael carried those anthropology muscles into Gmail, where understanding human behavior at scale was just as critical as writing code.
Actionable Advice for PMs:
* Audit your own “non-linear” career experiences. What hidden skills—interviewing, pattern-recognition, narrative-building—could you bring into product work?
* When hiring, don’t filter only for straight-line resumes. The best PMs often bring unexpected perspectives.
The Google Years: Scaling Research at Hyper-speed
Michael joined Gmail in 2006, when it was still young but maturing fast. He quickly noticed how different the rhythm was compared to the slow, expensive ethnographic studies he had done for consulting clients like Walmart.com.
“At Walmart,” he explained, “I had to compress these big, long expensive projects into something faster. Gmail demanded that same speed, but at enormous scale.”
At Google, the prime “clients” for his research were often designers. The questions he answered were things like: How do we attract Outlook users? How do we make the interface intuitive enough for mass adoption?
This difference matters for PMs: in big companies, research questions often start downstream—how to refine, polish, or optimize. In startups, questions live upstream: What should we build at all? Knowing where you sit in that spectrum changes the kind of research (and product bets) you should prioritize.
Jumping to Google Ventures: Bringing UXR Into VC
In 2010, Michael made a bold move: leaving the mothership to become one of the very first design researchers embedded inside a venture capital firm. GV was trying to differentiate itself by not just writing checks but also offering operational help—design, hiring, PR.
“I got lucky,” he recalled. “GV had already hired Braden Kowitz as their design partner, and Braden said, ‘I need a researcher.’ That was my break.”
Working with founders was a shock. They didn’t act like Google PMs. “It was like they were playing by a different set of rules. They’d say, ‘Here’s where we’re going. You can help me, or get out of my way.’”
That forced Michael to reinvent how he showed value. Instead of writing reports that might sit unread, he had to deliver insights in real-time, in ways founders couldn’t ignore.
The Watch Party Method: Stop Writing Reports
Here’s where the gold nuggets come in. Michael realized traditional reports weren’t cutting it. Instead, he invented what he calls “watch parties.”
“I don’t do the research study unless the whole team watches,” he said. “I compress it into a day—five interviews with bullseye customers, the whole team in a virtual backroom. By the end, they’ve seen it all, they’re debriefing themselves, and alignment happens automatically. I haven’t written a report in years.”
Think about that. No 30-page decks. No long hand-offs. Just visceral, shared observation.
Actionable Advice for PMs:
* Next time you run a user test, insist that at least your core team attends live. Skip the sanitized recap slides.
* At the end of a session, have the team summarize their top three takeaways. When they say it, it sticks.
Bullseye Customers: Getting Uncomfortably Specific
One of Michael’s most powerful contributions is the bullseye customer exercise.
“A bullseye customer,” he explained, “is the very specific subset of your target market who is most likely to adopt your product first. The key is to define not just inclusion criteria but also exclusion criteria.”
Founders (and PMs) often resist narrowing. They want to believe their TAM is huge. But Michael’s method forces rigor. He described grilling teams until they admit things like: Actually, if this person doesn’t work from home, they probably won’t care. Or if they’ve never paid for a premium tool, they won’t convert.
Example: Imagine you’re building a new coffee subscription. Your bullseye might be: Remote tech workers in San Francisco, ages 25-35, who already spend $50+ per month on specialty coffee, and who like experimenting with new roasters. If your product doesn’t delight them, it won’t magically resonate with “all coffee drinkers.”
Actionable Advice for PMs:
* Write down both inclusion and exclusion criteria for your bullseye (a concrete way to structure them is sketched after this list).
* Add triggers: life events that make adoption more likely (e.g., new job, new diagnosis, move to a new city).
* Recruit five people who fit it exactly. If they’re lukewarm, rethink your product.
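Here is one way to make that rigor concrete, reusing the coffee-subscription example above. The dictionary structure and the screen helper are my own illustration, not Michael's tooling:

```python
# Hypothetical sketch of a bullseye definition structured for screening
# recruits, using the coffee-subscription example from the text. Neither
# the structure nor screen() comes from Michael; they just make the
# inclusion/exclusion discipline concrete.

bullseye = {
    "include": [
        "remote tech worker",
        "lives in San Francisco",
        "age 25-35",
        "spends $50+/month on specialty coffee",
    ],
    "exclude": [
        "has never paid for a premium product",
    ],
    "triggers": ["started a new job", "moved to a new city"],
}

def screen(candidate_traits: set) -> bool:
    """Pass only candidates who meet every inclusion and no exclusion."""
    meets_all = all(c in candidate_traits for c in bullseye["include"])
    excluded = any(c in candidate_traits for c in bullseye["exclude"])
    return meets_all and not excluded

print(screen({"remote tech worker", "lives in San Francisco",
              "age 25-35", "spends $50+/month on specialty coffee"}))  # True
```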
Why Five Interviews Is Enough
Michael swears by the number five.
“After three interviews, you’re not sure if it’s a pattern,” he said. “By five, you hit data saturation. Everyone sees the signal. Any more and the team is begging you to stop so they can make changes.”
For PMs under pressure, this is liberating. You don’t need 100 customer calls. You need five of the right customers, observed by the right team members, in a compressed timeframe.
Multiple Prototypes: Don’t Ask Customers to Imagine
Another Margolis rule: never show just one prototype.
“If you show one, the team gets too attached, and the customer can only react. With three, I can say: compare and contrast. What do you love? What do you hate? I collect the Lego pieces and assemble the next iteration.”
Sometimes those prototypes aren’t even original mockups—they’re competitor landing pages. As Michael joked: “Have you tested your competitor’s prototypes? No? Then you’ve left something out.”
Actionable Advice for PMs:
* When exploring value props, mock up three different landing pages. Don’t ask “Which do you prefer?” Instead ask: “Which elements matter most, and why?”
* Treat mild praise as a “no.” Only visceral excitement counts as signal.
Founders, Stubbornness, and the Henry Ford Trap
I pressed Michael on what happens when founders dismiss customer feedback by invoking Henry Ford’s famous line about “faster horses.”
He smiled. “The beauty of bullseye customers is it forces accountability. If you told me these people are your dream users, and they shrug, then you can’t hand-wave it away. Either change your customer definition or your product.”
This is a crucial lesson for PMs who work with visionary leaders. Conviction is necessary, but unchecked conviction can sink a product. Anchoring on bullseye customers creates a shared contract that keeps both egos and hypotheses grounded.
Bright Spots > Exit Interviews
When teams ask him to interview churned customers, Michael often refuses.
“There are a bazillion reasons people don’t use something,” he said. “It’s inefficient. Instead, I go find the bright spots—the power users who love it. I want to know why they’re on fire, and then go find more people like them.”
This “bright spot” focus helps PMs avoid premature pivots. Instead of chasing every no, double down on the yeses until you understand the common thread.
Case Study: Refrigerated Medications and Zipline
To illustrate, Michael shared a project with Zipline, the drone-delivery company. They wanted to deliver specialty medications. The core question: was speed or timing more important?
Through interviews, the bright spot insight emerged: refrigeration was the killer constraint. Patients didn’t care about “fastest possible” delivery in the abstract. They cared about not leaving refrigerated drugs on their porch.
That nuance completely changed the product and infrastructure design.
For PMs, the takeaway is that sometimes the decisive factor isn’t the flashy benefit you advertise (“we’re the fastest!”) but a practical detail you only uncover through careful listening.
AI and the Future of Research
We couldn’t avoid the AI question. Has it changed his process?
“I worry about how AI is creating distance between teams and customers,” Michael admitted. “If my bot talks to your bot and spits out a report, you miss the nuance. The power of research is in the stories, the details, the visceral reactions.”
That said, he does use AI for quick prototype copywriting and summaries. But he insists on live team observation for the real work.
For PMs, the advice is to use AI as an accelerant, not a replacement. Let it write the rough draft of your landing page copy, but don’t outsource customer empathy to a transcript.
What PMs Should Do Differently Tomorrow
Let’s distill Michael’s 15 years of wisdom into actionable steps you can implement this week:
* Define your bullseye. Write down exact inclusion, exclusion, and trigger criteria.
* Recruit five. Stop at five, but make them exact matches.
* Run a watch party. Get your designer, engineer, and PM peers in the virtual backroom. No observers, no insights.
* Prototype in threes. Landing pages are cheap. Competitor screenshots are free.
* Look for visceral reactions. Anything less than “Wait, can I get this now?” is a polite no.
* Study the bright spots. Find your power users and figure out what makes them glow.
* Compress cycles. The whole exercise—recruit, test, learn—should take days, not months.
Quotes Worth Remembering
To make these lessons stick, here are five quotes from Michael that every PM should tape to their desk:
* “I don’t do the research unless the whole team watches.”
* “A bullseye customer is the very specific subset of your target market most likely to adopt first.”
* “After five interviews, you hit data saturation. Everyone sees the pattern.”
* “If you show one prototype, the team gets too attached. With three, you collect the Lego pieces.”
* “Mild encouragement is a polite no. Only visceral excitement counts as yes.”
My Takeaways as a Coach and PM
Talking to Michael reinforced something I’ve seen in my own career: product failure often comes not from bad execution, but from weak learning cycles. Teams don’t test the right people, don’t synthesize together, and don’t act quickly on what they learn.
Michael’s methods aren’t magic—they’re discipline. They compress time, sharpen focus, and force alignment. Whether you’re building the next Gmail or the next startup idea in a Palo Alto garage, these principles apply.
If you’re an early to mid-career PM, start by practicing on a small scale. Don’t wait for your manager to bless a massive UXR budget. Run a five-person watch party with your next prototype. You’ll be surprised at how quickly the fog lifts.
Closing
If this resonated and you’re looking for deeper guidance, I also work 1:1 with PMs and executives on career, product, and leadership challenges. You can learn more at tomleungcoaching.com.
And if you haven’t yet, I’d love your input on my Future of Product Management survey. It only takes about 5 minutes, and by filling it out you’ll get early access to the results plus an invitation to a live readout with a panel of top product leaders. The survey explores how AI, team structures, and skill sets are reshaping the PM role for 2026 and beyond.
Let’s ship greatness.
From Chaos to Clarity: How AI is Rewriting the Playbook for Product Managers
Lessons from my conversation with ex-Google PM Assaf Reifer on building tools that tame the noise, sharpen priorities, and give PMs back their most valuable resource: focus.
When I think back on my time at Google, one of the highlights was building and scaling teams with incredibly talented product managers. Some of those PMs went on to lead big initiatives across YouTube, Google Health, and other parts of the company. A few branched out and became founders.
One of them is Assaf Reifer, a former PM on my team at YouTube in Zurich. We first met over breakfast through what I think was a LinkedIn networking experiment. He had been at Bain, was exploring his next move, and we happened to be hiring. The match worked out beautifully. He ended up becoming one of the top performers on the team and played a key role in building YouTube Analytics and in the transition from the old Creator Studio into what creators now use daily.
Recently, I had the chance to catch up with Assaf on my Fireside PM podcast. He’s been experimenting with new projects, one of which could change how PMs everywhere manage the daily chaos of inputs, competing priorities, and distractions. What follows is a long, deep dive into our conversation, plus my take on what early-to-mid career PMs in Silicon Valley can learn from it.
The Setup: Why Now Is a Historic Moment for Builders
Assaf started by reflecting on what it feels like to be a builder in 2025. He’s been a software engineer, a consultant, and a PM. But he emphasized that the past two years feel different, historic even.
I remarked:
“In the last two years with advancements in AI, a lot of the knowledge necessary to build something end to end is really bridged by some of these technologies. It empowers people to realize ideas and experiments that previously required 10 people and millions of dollars.”
Think about that for a second. Not long ago, building a SaaS product that could ingest Zoom transcripts, Slack threads, and Jira tickets, then triage them into a priority list for a PM would have required a team of engineers, designers, and product folks. Now a single founder can stitch that together with off-the-shelf AI models, APIs, and some creativity.
For early-career PMs, the actionable insight is clear: don’t wait for permission to build. Even if you’re not an engineer, AI has lowered the barrier to entry so much that you can tinker, prototype, and validate ideas faster than ever. Open ChatGPT or Gemini, describe what you want to build, and let the system guide you through the concepts you don’t understand.
Assaf encourages this approach:
“The best way to start is open ChatGPT or Gemini, tell it what you want to build, and ask it how. It will respond with 30 terms you don’t understand, and you just go one by one. You ask it to explain each concept, and gradually you close the gap very quickly.”
That’s the 2025 version of “learning to code.” You don’t need to become a full-stack engineer. But you do need to become fluent in exploring, iterating, and leveraging AI as a co-pilot.
The Problem: PMs as Air Traffic Controllers
After talking about the broader builder landscape, we turned to the problem space Assaf is attacking. We discussed product managers as “air traffic controllers,” juggling multiple channels of information, each with different levels of urgency.
“Being a PM is all about prioritizing. You’re interacting with sales, engineering, customers, peers, executives. You have OKRs on one hand, and then Jira tickets or a customer threatening to churn on the other. Until recently, the best PMs just kept it all in their heads or in spreadsheets.”
Sound familiar? If you’re a PM, you’ve probably woken up to a wall of Slack notifications, 10 unread emails from sales, and a Jira dashboard full of tickets. Then, by 10am, you’re in a meeting where a senior leader asks, “What do you think about this issue that came up this morning?” And you’re embarrassed because you didn’t even know it existed.
I’ve been there. And I bet you have too.
The core challenge: noise vs. signal. PMs succeed not because they read every message but because they know which ones matter. That judgment call has historically been a mix of intuition, experience, and luck.
The Solution: Issue Center (PM Studio?)
Assaf’s project, tentatively called “Issue Center,” is a SaaS tool that ingests all the inputs PMs already swim in (Slack, Jira, Zoom transcripts) and applies AI-powered rules to surface the truly critical items.
The workflow looks like this:
* Integration: Connect the tool to your company’s communication stack. (His design partner is running Microsoft 365/Teams, but it could work with Slack and Google too.)
* Rule Setup: Create rules that define what matters to you. For example, “API degradation impacting users” is critical. Or “customer mentions a competitor as better” is high. (A sketch of this matching step follows after this list.)
* AI Assistance: The system uses AI to evaluate whether inputs match your rules. It flags the items, explains why, and links you back to the source.
* Prioritized Dashboard: Instead of drowning in messages, you wake up to a curated list of critical, high, and medium issues to tackle first.
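Assaf didn't walk through the internals, but the rule-matching step might look something like this sketch. In the real tool an LLM reportedly judges whether an item matches a rule's description, so the keyword check below is only a runnable stand-in:

```python
# Hypothetical sketch of rule-based triage in the spirit of Issue Center.
# The real product reportedly uses an LLM to judge matches; the keyword
# check here is a stand-in so the end-to-end flow is runnable.

from dataclasses import dataclass

@dataclass
class Rule:
    description: str
    severity: str          # "critical" | "high" | "medium"
    keywords: list         # stand-in for semantic matching by an LLM

rules = [
    Rule("API degradation impacting users", "critical",
         ["api", "degraded", "latency"]),
    Rule("Customer mentions a competitor as better", "high",
         ["competitor", "switching"]),
]

inbox = [
    "Slack #support: API latency degraded to 4s for EU users",
    "Jira PM-142: update onboarding copy",
    "Zoom transcript: customer said a competitor's export is better",
]

for item in inbox:
    for rule in rules:
        if any(k in item.lower() for k in rule.keywords):
            print(f"[{rule.severity.upper()}] {rule.description} <- {item}")
```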
Assaf demoed it live, showing how rules surfaced relevant Jira tickets, Slack threads, and transcripts. At one point, he laughed at his own naming convention:
“Clearly I’m not a marketer. It’s called Issue Center for now, but we can call it PM Studio if that makes it sound cooler.”
I told him PM Studio had a nice ring to it.
The important thing wasn’t the branding, though—it was the shift from reactive scrambling to proactive clarity.
Actionable Takeaway #1: Define Your Own Rules of Signal
Here’s where PMs can learn something even before using a tool like this. Ask yourself: What are the true signals in my work?
* Is it when a customer threatens to leave?
* When an API is degrading?
* When an executive brings up a competitor?
Whatever they are, write them down. These are your “rules.” Even if you don’t have AI filtering your inputs yet, the discipline of defining rules forces you to separate noise from signal.
Assaf admitted that rule-writing is an art:
“The rule description is very important, because that’s what the system uses to match. If it’s too narrow, it won’t pick up. If it’s too broad, you’ll get noise. That’s why I want to make onboarding easier with quick-start templates for common rules.”
This mirrors how you should think about your own prioritization framework. If you’re too vague (“respond to all customer requests”), you’ll drown. If you’re too narrow (“only focus on API latency under 200ms”), you might miss the forest for the trees.
The Bigger Picture: Managers of PMs
Assaf also highlighted another layer of value, helping PM leads manage their teams.
“If you’re a PM lead and you have a team, you want visibility into what critical topics your PMs care about, what jeopardizes OKRs, and where they need support. This tool can give you that bird’s-eye view.”
This is huge. One of the hardest parts of managing PMs is knowing what’s actually keeping them busy. Are they firefighting customer issues? Negotiating with engineering? Or chasing shiny objects?
For managers, the actionable advice is: ask your PMs to share their “critical issue list” with you weekly. Even if you don’t have Assaf’s tool yet, that discipline will create alignment and uncover mis-prioritizations.
The Privacy Angle: Building Trust
We also talked about the obvious concern: privacy. If your tool is reading Slack messages, Zoom calls, and Jira tickets, where does that data go?
Assaf has thought about this deeply:
“This is architected as a single-tenant SaaS. It’s installed in your company’s own cloud tenant. Nothing leaves the org. Even when we use AI, it runs through your enterprise API key, which isn’t used for training.”
For PMs evaluating AI tools, this is a reminder: always ask how data is handled. At many companies, legal and IT will shut down even the coolest tool if privacy isn’t bulletproof. If you’re the PM championing adoption, anticipate those concerns and come prepared with answers.
Actionable Takeaway #2: Trust Is a Feature
In 2025, building trust is not just about having the right feature set. It’s about handling privacy, security, and reliability as first-class features.
If you’re building a product, or even advocating for one inside your company, bake trust into your pitch. Show that you’ve thought about data handling, failure modes, and user control.
Beyond Explicit Rules: The Future of Inferred Priorities
One of the fun parts of our conversation was brainstorming future features. I suggested that beyond explicit rules, the system could infer priorities by watching behavior:
* If you always jump into competitor-related Slack threads, the system could propose a rule.
* If you consistently respond faster to certain stakeholders, it could bump their inputs up in priority.
Assaf agreed this was interesting but also flagged the risks:
“Whenever you do something that isn’t explicitly set by the user and you get it wrong, you risk losing trust. You don’t want noise creeping into the critical bucket.”
That’s a broader lesson for PMs: don’t get seduced by complexity if it undermines trust. Sometimes a simple, transparent system is better than a magical one that feels unpredictable.
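This was pure brainstorming, but the trust constraint translates directly into a design rule: infer, then ask. A toy sketch under that assumption (the threshold and names are mine, not a shipped feature):

```python
# Hypothetical behavior-based rule suggestion. The trust-preserving choice:
# the system only *proposes* a rule; the user must explicitly accept it, so
# nothing unpredictable lands in the critical bucket.
from collections import Counter

PROPOSAL_THRESHOLD = 5  # illustrative: engagements before suggesting a rule
engagements: Counter[str] = Counter()

def record_engagement(topic: str) -> str | None:
    """Track topics the PM keeps jumping into; suggest, never auto-apply."""
    engagements[topic] += 1
    if engagements[topic] == PROPOSAL_THRESHOLD:
        return (f"You've engaged with '{topic}' {PROPOSAL_THRESHOLD} times. "
                "Create a rule to surface it automatically? [accept/dismiss]")
    return None

for _ in range(5):
    prompt = record_engagement("competitor threads")
print(prompt)
```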
The Side Project: An AI Teddy Bear
We spent most of our time on PM Studio, but Assaf also showed me something else: a prototype for an AI-powered plush toy that serves as a conversational buddy for kids.
The idea is part educational, part entertaining. Think Teddy Ruxpin meets ChatGPT, but with parental controls and guardrails.
He tested it with his own kids, and at one point, a child said he wanted to “eat the squirrel” in a story. The system responded, “That’s not a very nice thing. Let’s try something kinder.”
That made me laugh—and also highlighted the importance of building safe AI for children.
As a parent myself, I told Assaf:
“If this thing could help kids develop critical thinking and curiosity before they jump into ChatGPT, I’d pay money for it. We don’t formally teach critical thinking to children, but a well-designed toy could do it through fun experiences.”
While this project is still early, it connects to a broader theme: AI is reshaping not just how we work, but how we learn, parent, and play.
Actionable Takeaway #3: Think About Second-Order Effects
For PMs, the teddy bear might seem irrelevant. But the lesson is this: when you build with AI, think about the second-order effects.
* How does this change how people learn, not just how they work?
* How does it shape what they trust, not just what they use?
* How does it influence long-term skills, not just short-term productivity?
If you only optimize for immediate outcomes, you miss the deeper impact your product could have.
Practical Advice for PMs in Silicon Valley
Let’s bring this back to you, the early-to-mid career PM navigating the chaos of Silicon Valley. Here are five actionable insights from my conversation with Assaf:
* Define Your Critical Rules. Don’t wait for a tool. Write down the signals that truly matter in your role and use them to triage your own work.
* Build Trust Through Clarity. Whether you’re building products or pitching ideas internally, make privacy, reliability, and transparency part of your value prop.
* Use AI as a Learning Co-Pilot. Open ChatGPT or Gemini and let it teach you the concepts behind the systems you want to build. Don’t be afraid of looking dumb; ask it to explain everything.
* Share Priorities with Your Manager. If you manage PMs, ask for their top three critical issues weekly. If you’re managed, proactively share them. It will align expectations and reduce surprises.
* Anticipate Second-Order Effects. Don’t just think about what your product does today. Think about how it changes behavior, skills, and trust over time.
Why This Matters: The Cambrian Explosion of Builders
We closed our conversation reflecting on the bigger picture. I remarked:
“You wonder if the next hundred billion dollars of market value will come not from 10 decacorns, but from a thousand smaller companies run by 5–10 people. That’s good for customers. It’s good for competition. And it’s possible because of AI.”
This is a turning point in product management. The PMs who thrive in the next decade will be those who can harness AI, not just as users, but as builders, integrators, and thinkers.
Final Thoughts
Catching up with Assaf reminded me of why I love product management. At its best, it’s about solving messy problems, shaping the future, and helping people focus on what matters most.
As you navigate your own PM career, I encourage you to experiment with AI, define your rules of signal, and always keep trust at the core of what you build.
And if you want more personalized support, I run a 1:1 executive, career, and product coaching practice at tomleungcoaching.com. If you want to try Assaf’s Issue Center tool as a design partner, feel free to contact him or hit him up on X.
OK. Enough pontificating. Let’s get back to work.
When Jess Gilmartin talks, I listen. If you've been in Silicon Valley long enough, you might have heard of Jess. She's been a full-time CMO, a founder, a startup whisperer, and most recently, one of the sharpest advisors to CEOs I know when it comes to hiring marketing leadership that actually works.
In our recent Fireside PM conversation, we went deep on the do's and don'ts of hiring a CMO. While many of my listeners and readers are early- to mid-career product managers, this interview is packed with insight relevant not just to founders and CEOs but to any PM who will eventually be part of a hiring panel, collaborating with marketing peers, or considering their own path to executive leadership.
Why Your Company Even Needs a CMO
Let’s start with first principles. As Jess puts it:
“The CMO is the steward of the brand. And brand isn’t just your website or ads—it’s every interaction a customer has with your company. That includes your support team, your social media presence, your onboarding experience, and yes, your product.”
The reason this matters for PMs is simple: we often underestimate the scope and gravity of the brand experience. We build features. We define roadmaps. But we rarely think of the emotional resonance of what we’re building.
“Part of the job is ensuring consistency and excellence across all these touchpoints,” Jess said. “That also means having the spine to flag when something the product team is doing will degrade that experience.”
Translation? If you think marketing's job is to "wrap" your product after the work is done, you're missing the point.
What Great CMOs Actually Do (Hint: It’s Not Just Marketing)
One of the biggest wake-up calls for me was hearing Jess talk about the real job of a modern CMO:
“When I was a CMO, I had senior leaders under me running product marketing, growth, and comms. I spent most of my time on executive alignment, crisis communications, and internal messaging. I was rarely in the weeds.”
That division of labor is a signal. The difference between a head of marketing and a CMO isn’t just title inflation—it’s scope. A CMO thinks in systems. They think in multi-stakeholder alignment. And above all, they should be one of the CEO’s most strategic advisors.
Jess broke it down this way:
“The biggest mistake founders make is hiring too senior or too junior a marketer for where they are. If you're still pre-product-market-fit, don’t hire a head of marketing. You need to be doing that work yourself.”
As someone who has worked with a lot of pre-PMF startups, I couldn’t agree more. And yet, time and time again, I see companies try to paper over early churn or stagnant growth with splashy campaigns and SEO spend.
It doesn’t work.
Product Managers: Here’s What You Keep Getting Wrong
There was one part of our conversation where my PM blood pressure rose just a bit. I asked Jess what she does when she’s in a cross-functional meeting and the product team is proudly showcasing something... that isn’t actually great for the user experience.
She smiled:
“I try not to have strong opinions on product. That’s not my job. But I deeply understand the customer experience. And when I see something that isn’t going to land, I raise a fuss. Not all the time—you have to pick your battles—but marketing sees across silos. We’re often the ones that spot inconsistencies in the end-to-end experience.”
PMs, listen carefully to that last part.
We often live in silos—focused on our vertical, our feature, our sprint velocity. Meanwhile, marketing is scanning horizontally, sensing what happens when someone tries to connect the dots. That perspective is invaluable. And if you're lucky enough to work with a CMO or a senior PMM who raises their hand about UX inconsistencies or cross-functional misalignments, treat that as signal, not noise.
The Dirty Truth About CMO Tenure
Ready for the most sobering stat of the interview?
“Most CMOs last two years,” Jess said flatly.
Why? Expectations are sky-high. CEOs want the creativity of Nike, the analytics of Facebook, the virality of TikTok, and the demand gen of HubSpot—all in one human. Oh, and don’t forget crisis PR, event strategy, and internal morale-boosting Slack posts.
That level of sprawl is untenable.
“Marketing is the only function where we expect a single person to be excellent at creative, numbers, product thinking, storytelling, operations, hiring, and analytics,” she said. “It’s unrealistic.”
So what happens? You hire a CMO for one phase, they nail it, and then two years later the business needs something else. That’s not a failure. That’s reality.
Founders and PM leaders should take note: you’re not hiring a CMO to last forever. You’re hiring them to solve today's problem exceptionally well.
Demand Gen vs. Messaging vs. PMM: Pick Your Poison
This next insight is gold for any hiring manager:
“When hiring a marketing leader, figure out what your biggest problem is. Is it lack of pipeline, weak differentiation, or lack of strategic product alignment? You won’t find someone world-class at all three.”
Jess described three typical archetypes:
* Demand Gen-focused leaders – Performance-oriented, data-driven, often strong in growth loops and paid acquisition but weaker on storytelling or product narrative.
* Brand and Messaging experts – They come up through storytelling, design, and content. These are the campaign artists and identity shapers.
* PMM-style CMOs – Strong in positioning, go-to-market, launch orchestration, and cross-functional strategy. They see the product and customer journey clearly but may lack deep growth or brand skills.
That might be the most important hiring advice in this entire conversation. Every CMO candidate comes from somewhere. What they did before will influence what they do next. The key is aligning that background with your immediate business challenge.
If you already have a rockstar PMM but no repeatable pipeline, hire a demand gen-oriented CMO. If you’ve got leads but they don’t convert or your brand is invisible, find a storytelling operator.
And if you're a PM moving up the ranks? This is how you should evaluate your marketing counterparts. Don’t just ask "are they good?" Ask: are they good at the thing we need most right now?
Hiring CMOs: Skip the Case Study, Do the Plan
When Jess advises founders on hiring a CMO, she doesn’t run them through generic behavioral interviews or vague culture fit chats. She makes them present a real plan.
“I give them a budget. I give them our current strategy. I ask: 'Show me how you’d spend it and what your plan would be to hit our goals.'”
The best candidates, she said, are:
* Articulate – They speak clearly, persuasively, and inspire confidence.
* Specific – They don’t just say "we’ll run paid ads" or "we’ll increase brand awareness." They tell you how, why, and in what sequence.
* Bold – They bring creative energy. One candidate impressed Jess with cheeky, bold challenger messaging that she herself wouldn’t have dreamed up.
That kind of spark matters. Especially for a role that’s supposed to shape how the world feels about your company.
Founders: Don’t Get Dazzled by Logos
Perhaps the spiciest take in the conversation came when I asked Jess about resume signals:
“Do not get dazzled by former companies. That senior PMM from Salesforce may not have ever hired a team, built a pipeline, or touched brand messaging.”
This hit close to home. As a former Google exec, I know all too well how much people over-index on logos. Jess prefers candidates who have been in the trenches—startup veterans, operators who’ve hired across functions, people with range.
The ultimate test? Jess asks: Did they just run the playbook, or do they know how to build one?
Actionable Advice for PMs
So, what should early- and mid-career product managers take from this?
* Learn to speak marketing. Understand the difference between PMM, brand, growth, and demand gen. This makes you a better cross-functional partner.
* Invite your PMM early. Don’t treat them as a launch afterthought. Bring them into ideation, prioritization, and roadmap planning.
* Observe how marketing fights. Good CMOs don’t just object; they escalate. They build coalitions. Watch how they influence.
* Test CMO fit with real-world scenarios. Ask candidates to brainstorm a real strategic decision or messaging conflict. See how they think.
* Beware the shiny logo. Ask CMO candidates what they personally owned, who they hired, and what they changed. If you hear too much passive voice, dig deeper.
A Final Word
If you're a founder or exec looking to hire your first CMO, I strongly suggest you watch the full interview. And if you're a PM, use this as a lens to reflect on your own career. How well do you understand your marketing counterparts? How would you describe your company's brand? Learn more about Jess here.
If you'd like help with your own product leadership journey, I offer 1:1 coaching at tomleungcoaching.com.
OK. Enough pontificating. Let's get back to work.
We’re back with a Startup Spotlight episode on the Fireside PM podcast. It’s not every day you get to speak with someone who’s straddled the worlds of architecture, gaming, AI, and robotics—and managed to turn those disparate threads into a startup tackling one of the most important problems in our robotic future.
Steven Ren, the co-founder and CEO of Palatial, joined me from Lower Manhattan to share the winding journey of his company—from Cornell’s architecture school to optimizing simulations for robot training at scale. We went deep on the technology, market evolution, and product insights he’s picked up along the way—and there are dozens of takeaways here for early and mid-career PMs, especially those building infrastructure, devtools, or working in AI-adjacent spaces.
From Watercolors to Headsets: The Early Seeds
Steven didn’t grow up dreaming of building tools for humanoid robot training. He actually wanted to be an architect—and studied architecture at Cornell. His turning point came in a multidisciplinary studio class led by Don Greenberg, a legend in computer graphics.
“He was always trying to get architects to work together with the CS people… and that really opened my eyes to what immersive tech and real-time rendering could do for communicating spaces.”
This interdisciplinary exposure planted the idea that real-time, explorable 3D environments could fundamentally improve how people visualize, design, and collaborate around spaces—both physical and digital.
He got a taste of this while at Tesla, working on Gigafactory expansion. The rapid pace of construction caused costly design coordination issues, and Steven built a prototype that stitched disparate CAD formats into a fly-through simulation using Unreal Engine.
“I put together a pipeline that optimized and converted all the CAD designs into an Unreal Engine level—basically a big game—so they could fly around and see how everything fit together.”
It helped prevent expensive errors and even became a tool for internal storytelling. That experience solidified his conviction: digital twins weren’t just cool—they were valuable. He knew he wanted to build a company that scaled that capability.
Pivot 1: From Architecture to Optimization
The initial Palatial concept was ambitious: a cloud platform where architects could upload CAD files and get back interactive, game-like visualizations that clients could explore in the browser.
Sounds great—until you realize how unpredictable CAD file structures can be.
“Every software is different, and everyone uses the software differently. You have to make foundational translations between how engineers organize a scene and how game engines expect it.”
Instead of a tidy black box, they were faced with a combinatorial nightmare of input variability. Worse, customers didn’t want a finished result—they wanted control over how their designs were rendered and experienced.
So they pivoted. The new insight: the universal pain point was optimization. Making the scenes look and perform well across platforms.
Enter: Palatial as a plugin for Unreal Engine. The new tool became something like “CCleaner for your 3D scene,” scanning for inefficiencies and letting users apply best-practice fixes with a few clicks. Lighting, texture mapping, model merging—all simplified and standardized.
“Even if you don’t understand what’s going on, the idea is that you can arrive at a much more optimized project… and sometimes better-looking too.”
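To make the plugin concept concrete, here's a toy version of what a "scene linter" pass might look like. This is neither Palatial's code nor the Unreal Engine API; the thresholds and asset fields are invented:

```python
# Toy "scene linter": scan assets against best-practice budgets and report
# fixable inefficiencies, the way an optimization plugin might.
from dataclasses import dataclass

@dataclass
class MeshAsset:
    name: str
    texture_size: int    # px, assuming square textures
    triangle_count: int

MAX_TEXTURE = 2048       # illustrative budgets
MAX_TRIS = 100_000

def lint_scene(assets: list[MeshAsset]) -> list[str]:
    issues = []
    for a in assets:
        if a.texture_size > MAX_TEXTURE:
            issues.append(f"{a.name}: downscale {a.texture_size}px texture to {MAX_TEXTURE}px")
        if a.triangle_count > MAX_TRIS:
            issues.append(f"{a.name}: decimate mesh ({a.triangle_count:,} tris)")
    return issues

print(lint_scene([MeshAsset("turbine_hall", 8192, 450_000)]))
```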
If you’re a PM shipping developer tools or plugins, take note: this pivot exemplifies how deep user testing can uncover the narrow wedge feature that wins adoption—before expanding.
The Aha Moment: Simulations, Not Showcases
Despite the optimization plugin gaining traction, Steven and the team began to spot a different kind of demand: robotics companies were building millions of virtual environments for training and testing.
“You need like hundreds of thousands of environments to teach the robot all the different variations of the world it could come across.”
Today, many of those teams manually build 3D scenes—or worse, ask ML engineers to fumble their way through creative tasks. It’s expensive, inconsistent, and distracts from core innovation. Steven saw a gap Palatial was well-suited to fill.
So they pivoted again.
Now, Palatial is focused on powering massive-scale, high-fidelity simulation environments—starting with objects and scenes that train robots to physically manipulate the real world.
PM Takeaway #1: Don’t Fear the Pivot—Engineer for It
Most PMs are taught to avoid scope creep, but what Palatial did is different. They bet on a market’s inevitable evolution (robotics), built a wedge feature (optimization), and used that to find the real platform opportunity (simulation infrastructure).
Steven put it plainly:
“It’s been a winding journey. We thought we’d serve architects, then realized robot developers had the same need—but at far greater scale.”
This is a playbook for product leaders:
* Find a general pain point across verticals (in Palatial’s case: messy 3D pipelines)
* Build a useful component (e.g., optimization plugin)
* Watch for the industry that experiences that pain at 10x scale (robotics vs. architecture)
PM Takeaway #2: Build for Openness, Not Lock-In
Another strategic decision: rather than offering a fully walled-off end-to-end platform, Palatial focused on modularity.
“We’re going to offer this as an API so teams can build generation into their existing pipeline… and just use that piece.”
In a world where AI stacks are increasingly bespoke, trying to own everything can backfire. By being composable, Palatial makes itself easier to adopt—especially for developers already invested in internal tooling.
Whether you’re in devtools, AI, or infra, this is a good reminder: great platforms start by being great plugins.
PM Takeaway #3: Product-Market Fit Might Be a Who, Not a What
Palatial didn’t change their core tech—they changed the user.
Same backend pipeline. Same rendering engine. But by shifting from architects (low frequency, high customization) to robotics engineers (high frequency, high fidelity), they unlocked a recurring, sticky use case.
“We realized this isn’t about showcasing a single building. It’s about training robots through thousands of virtual environments—and those environments need to look and behave like the real world.”
This kind of vertical shift is especially relevant in today’s AI world, where many companies sit atop general capabilities. The biggest opportunities often come from narrowing the audience, not the scope.
PM Takeaway #4: Speed is the New Moat
In one of my favorite moments, I asked Steven how he thinks about competitive defensibility.
His answer:
“There’s no such thing as a technological moat anymore. The moat is speed—having a nimble team that can iterate fast and adapt.”
We’ve heard echoes of this across the startup world, but it hits especially hard in AI and frontier tech. If you’re leading a PM team, ask yourself: are you shipping faster than your competitors can copy you?
And if not, why not?
PM Takeaway #5: Accuracy Will Be the Differentiator in the Robot Era
One thing Steven emphasized again and again was realism. In order for simulation-trained robots to be effective, their environments must behave like the real world. That means physical properties, lighting conditions, and object metadata all matter.
“There’s no point in generating data if it doesn’t match reality. You can generate as much crappy data as you want—it’s like oversweetened candy. You don’t want it.”
In other words: in the age of synthetic data and generative tools, quality—not just quantity—will win.
As a PM, that might mean:
* Prioritizing fidelity over speed when the stakes are high
* Partnering with domain experts to tune your models
* Making room for manual curation and validation—even if it slows you down
PM Takeaway #6: Be Willing to Outgrow Your Initial Market
Steven was candid about the limits of their original architecture play:
“It was kind of a one-and-done thing. There’s a bigger market where you need many environments, all the time.”
This highlights something I often tell coaching clients: your first ICP (ideal customer profile) is often just a foothold. Pay attention when your usage data, pricing power, or support requests point to higher-value customers in adjacent markets.
Where Palatial Is Headed
Today, Palatial is in the middle of rolling out their MVP for simulation-ready 3D asset generation. These aren’t just pretty models—they contain metadata about mass, bounce, physics, and more, making them usable for training and validation.
They’re also building the tooling to generate full environments from those assets and optimize them for scale.
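Steven didn't share a schema, but the shape of "simulation-ready" metadata is easy to imagine. A sketch with guessed field names and units, based only on the properties he mentioned:

```python
# Hypothetical simulation-ready asset record; field names and units are my
# guesses, not Palatial's actual schema.
from dataclasses import dataclass

@dataclass
class SimAsset:
    name: str
    mesh_path: str       # path to the optimized 3D model
    mass_kg: float       # needed for realistic manipulation
    restitution: float   # "bounciness", 0.0 (none) to 1.0 (perfect bounce)
    friction: float      # surface friction coefficient

mug = SimAsset(
    name="coffee_mug",
    mesh_path="assets/coffee_mug.usd",
    mass_kg=0.35,
    restitution=0.1,  # ceramic barely bounces
    friction=0.6,
)
```

It's this physical metadata, not the visual polish, that makes an asset usable for training rather than just showcasing.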
Eventually, Steven sees a future where the robots themselves are capturing and syncing environments in real-time:
“Eventually this will be onboard the robots. As they walk around, they’ll translate what they see into a digital twin—and train on that in the background.”
That vision is a long way off. But Palatial is betting that when we get there, infrastructure like theirs will be indispensable.
Final Thoughts
If you’re an early or mid-career PM, a few questions to reflect on:
* What new verticals are quietly developing the same problems my team is already solving?
* Is there a simpler, standalone piece of my product that could become a wedge?
* Am I over-investing in platform scope vs. developer modularity?
* Is my team fast enough to stay ahead in a post-moat world?
If you want to stay close to the frontlines of robotics infrastructure—or you just want to learn from a founder iterating in public—follow Steven Ren and check out palatialxr.com.
And if your own company is navigating complex product strategy decisions or early-stage growth hurdles, I offer one-on-one coaching at tomleungcoaching.com, and product consulting and startup advisory services at paloaltofoundry.com.
OK, enough pontificating. Back to work, team.
Twenty-five years ago, Tim DeSieno and I were two outsiders on the tropical island of Singapore, me trying to build a startup, him fresh out of a restructuring law practice. We reconnected recently on the Fireside PM podcast, and what followed was one of the most illuminating conversations I've had this year.
Tim's career arc is anything but conventional: from decades in global debt restructuring to litigation finance investor, and now advisor to an AI legal startup. The conversation, which started as a reunion, turned into a firehose of insight—for lawyers, founders, and especially product managers trying to anticipate where disruption lands next.
This post distills that hour-long conversation into key lessons for early- and mid-career product managers. Whether you're wrangling roadmaps at a Series A startup or driving platform strategy at a late-stage unicorn, you'll find practical frameworks, surprising analogies, and a peek into the wild intersection of law and AI.
1. Litigation Funding Is What Early VC Investing Looks Like in a Non-Tech Industry
"We would look at 100 cases, take three seriously, and maybe fund one."
Tim described litigation finance as a "venture capital" approach to legal claims. Funders underwrite the legal equivalent of startups: high-risk, high-reward lawsuits with uncertain outcomes. The investment model is classic VC—non-recourse funding in exchange for a percentage of winnings—but applied to torts, sovereign disputes, and commercial litigation.
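The math behind that model is worth a beat. A quick expected-value calculation, with every number invented for illustration:

```python
# Illustrative non-recourse funding economics (all numbers invented).
# The funder advances legal costs; if the case loses, they recover nothing.
funded_amount = 2_000_000    # legal costs advanced by the funder
win_probability = 0.60       # funder's underwritten estimate
expected_award = 20_000_000  # damages if the plaintiff wins
funder_share = 0.30          # funder's cut of any winnings

expected_payout = win_probability * expected_award * funder_share
print(f"Expected payout: ${expected_payout:,.0f}")                  # $3,600,000
print(f"Expected profit: ${expected_payout - funded_amount:,.0f}")  # $1,600,000
```

Because a loss wipes out the entire advance, the one funded case has to offer asymmetric upside, which is exactly why only one in a hundred makes the cut.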
This is also a class in triage. As PMs, we're sometimes guilty of over-indexing on tech, TAM, or user demand without enough scrutiny of distribution or defensibility. In litigation finance, everything must be strong: the legal basis, the plaintiff’s character, the likelihood of enforcement.
Actionable Advice:
* When evaluating new bets, use a PM version of Tim’s triangle: Strength of case, rational actor, enforceability. Substitute your product’s domain as needed. If your bet falls apart on any leg, kill it early.
* Don’t be afraid to walk away. "We’d spend weeks researching only to discover a fatal flaw." Avoid sunk cost fallacy.
2. The Real AI Gold Rush Isn’t Just Generation, It’s Prediction
Harvey (the legal AI startup backed by OpenAI) gets the headlines, but Tim is on the board of an earlier-stage, adjacent player called Canotera. Instead of drafting, Canotera predicts litigation outcomes. Think of it as a risk analytics layer built from all New York legal precedents, offering lawyers (and insurers, GCs, even arbitrators) a probabilistic view of their odds.
"It’s like calling up a senior partner and getting a second opinion—except this one has read every case."
This isn't just a better way to write memos. It's a decision-making accelerator.
Product Insight: There are many types of AI value in any vertical:
* Efficiency (do more, faster)
* Accuracy (better outcomes)
* Confidence (de-risking decisions)
Harvey is largely #1 and #2. Canotera is going hard at #3.
Actionable Advice:
* When building AI products, map your feature set to these value levers. Which one are you really selling?
* Don’t sleep on #3—especially in regulated or high-stakes domains, confidence trumps speed.
3. Adoption Gaps Aren’t Just Technical—They’re Psychological
"The number of people in law who haven’t touched ChatGPT is shockingly large."
Sound familiar? We’ve all worked with that PM, eng lead, or exec who, in late 2022, thought gen-AI was a toy. The parallel to law is stark: many lawyers fear AI not because it's ineffective, but because it threatens their identity.
In both professions, billing hours and writing decks have long been proxies for value. When those tasks are automated, the insecurity is real.
Actionable Advice:
* Frame AI as augmentation, not replacement. Tim noted the firms that are thriving are those that say, “Yes, we bill per hour—but we’ll use AI to deliver more per hour.”
* Early adopters are not just tech-savvy—they're secure enough to rethink their role. When evangelizing AI, target the curious and the confident.
4. “Doctrinal vs. Practical” Isn’t Just a Law School Problem
"You come out of law school, and you're good at arguing both sides. But no client wants that."
Tim called out how legal education—especially the Socratic case method—trains great thinkers but poor practitioners. Law grads often need years of on-the-job experience before they become useful to clients.
Sound like any junior PMs you know?
Product teams are often full of doctrinal thinkers—people great at debating frameworks, prioritization models, or vision decks. But if you can’t turn that into a working prototype, a roadmap aligned with GTM, or a tough tradeoff call, you’re not adding value.
Actionable Advice:
* “Thinking like a PM” (strategy, ambiguity, storytelling) is necessary but not sufficient. Pair it with executional reps early in your career.
* If you’re a manager, give your ICs reps they can own end to end. Treat it like an apprenticeship, not just a theoretical seminar.
5. Liberal Arts Still Matter—Even in the Age of AGI
"If you can’t write it clearly, you don’t own it."
Tim made a powerful case for the liberal arts as the antidote to AI passivity. He sees students turning in polished work generated by LLMs but lacking any real grasp of the content. Writing, he argues, is thinking. If you can't articulate a point unassisted, your judgment muscles don’t get built.
Actionable Advice:
* Don’t outsource the first 70% of a product brief, strategy doc, or roadmap to ChatGPT. Use AI to refine and stress-test, not originate.
* Push yourself to learn something uncomfortable. Tim’s litmus test: "Do hard things that are new to you. That’s how you grow judgment."
6. You’re Not Competing With AI, You’re Competing With Humans Using It Better
"A junior lawyer with AI tools can be more valuable than a senior one without."
In a decade, your job won't be taken by AI—but it might be taken by someone with 5 years less experience who knows how to pair human empathy with AI speed.
Actionable Advice:
* Learn prompt engineering, yes—but also get great at evaluating AI output. That judgment layer is what companies will pay for.
* Practice defending ideas live, without a script. At some point, someone will ask, “Why did you make that decision?” Be ready.
7. Forecasting the Endgame: When Courts Run on Code
"Maybe one day litigation disappears—two parties upload their facts, the machine decides, and that’s enforceable."
While Tim was cautious to say this vision is far off, the implications are worth pondering. What happens when not just lawyers, but judges, juries, and arbitrators are augmented—or replaced—by machines?
Whether or not this comes to pass, the lesson is clear: no profession is immune. If law can be automated, so can most knowledge work. And product managers will either ride that wave—or be washed away.
In Closing
As PMs, we love talking about disruption—but we rarely get to see it play out in an industry as slow-moving and tradition-bound as law. That’s what made this conversation with Tim DeSieno so instructive. Law is changing. AI is changing. And the humans who thrive are the ones who stay curious, adaptable, and relentlessly focused on value—not ego.
If this resonated, I offer 1:1 coaching for product leaders at tomleungcoaching.com, and PM consulting through paloaltofoundry.com.
OK. Enough pontificating. Let's get back to work.
“We Are Not in Kansas (or Creatorland) Anymore”
When I kicked off this Fireside PM interview with Ben Grubbs, I knew we’d cover the creator economy. What I didn’t expect was how much of it would end up being an MBA seminar for product managers.
Ben isn’t just another ex-YouTube guy with creator war stories. He’s seen the evolution of the online video ecosystem from its scrappy, quirky beginnings to the billion-dollar global marketplace it is today. His vantage point spans YouTube FanFest, the launch of YouTube Kids, and, later, his own venture Creator+.
But this isn’t a nostalgia trip. This conversation is about understanding where the creator economy went right, where it went off the rails, and what PMs and builders can learn from those who survived—and thrived.
Let’s break it down.
1. Don’t Just Sell Picks and Shovels—Sell Gold Bars Too
There’s an old startup trope: during a gold rush, the people who make the money are the ones selling picks and shovels.
Ben and I reflected on this assumption when it came to the 2021–2022 wave of creator economy startups—tools for analytics, monetization, editing, payroll, and more.
A lot of those bets fizzled.
Why?
Because the “miners”—the creators—were not your typical enterprise buyers. Most didn’t make enough to justify expensive tools, and those who did weren’t being well-served.
“You had companies working with hundreds or thousands of creators,” Ben said. “But they were all Tier 5 or Tier 6. The top creators—the ones running real businesses—weren’t touching these tools. The startups couldn’t crack that ceiling.”
Creators with scale (think Tier 1) needed tools built with deep empathy for their workflows—but often the tool builders didn’t even have relationships with these creators.
It’s a warning for PMs: Just because there’s a problem doesn’t mean the solution is a venture-scale business.
Ben would often gut-check startup ideas by calling former colleagues at YouTube to ask whether the feature in question was on the product roadmap.
“If they told me it was far down the list—great. That’s a two-year runway. But if it was near the top? I’d pass.”
Takeaway for PMs: Before betting your career or company on a “picks and shovels” play, ask:
* Can I serve the high-value users, or am I stuck with long-tail?
* Is this something the platform will inevitably build?
* Does this idea have cross-platform defensibility?
If you’re stuck with the long tail, the platform will build it anyway, and you have no cross-platform defensibility, it’s probably not a durable business.
2. The Myth of the Accidental Creator
One of the most common origin stories in the creator economy is the passionate hobbyist who stumbled into success. But that’s no longer the only model—or even the dominant one.
Ben contrasted the early YouTube generation with today’s operator-led brands like Good Good Golf, where content wasn’t the product—it was the acquisition channel.
“This wasn’t some happy accident. Good Good had a clear business strategy from Day One. Content was the top-of-funnel. They were always going to build a real consumer business.”
And build they did. Good Good went from viral YouTube content to a thriving golf apparel and equipment brand, all while keeping production margins high and paid marketing spend low.
How? They applied DTC logic to a creator-native model. Instead of paying for reach, YouTube paid them to market their own products.
“Some DTC founders were stunned by their margins. But they didn’t realize: Good Good gets paid for their marketing.”
Ben’s point: this isn’t selling out. It’s growing up.
And it’s working.
Actionable Tip for PMs: When evaluating growth loops, ask yourself:
* Is our content serving a bigger business objective?
* Can our audience also become customers?
* Are we building a brand—or just renting attention?
3. Build for the Power Law
We all know the creator economy is a power-law business. But what does that mean for those building around it?
Ben shared a fascinating stat from his YouTube days: at one point, 4,500 creators met the threshold to qualify for top-tier partnership. But YouTube had resources to serve just 500.
“We couldn’t support everyone. And the people who qualified were far more than we could manage. That’s when I realized: there's a huge gap.”
That gap created opportunities—but only if you could build for the whales.
Most of the SaaS tools went after the long tail. Wrong call.
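A toy power-law model shows why. Assuming a roughly Zipf-shaped earnings distribution (all numbers invented), the head swallows the market:

```python
# Toy Zipf model of creator earnings: income falls off as 1/rank, so a tiny
# head captures most of the total. Numbers are invented for illustration.
N = 10_000                                                 # creators on a platform
earnings = [1_000_000 / rank for rank in range(1, N + 1)]  # rank 1 earns $1M/yr
top_100_share = sum(earnings[:100]) / sum(earnings)
print(f"Top 100 of {N:,} creators earn {top_100_share:.0%} of all income")
# -> roughly 53%
```

Price a SaaS tool for the other 9,900 and you're splitting half the money across thousands of tiny accounts.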
“The top creators are basically SMBs. They need operational support, yes—but they also need defensible strategy, content licensing, IP management. That’s not just software—that’s consulting, services, and deal-making.”
Moonbug is a perfect example.
They weren’t a tool. They were a studio that centralized production, built IP (like Cocomelon), and sold toys, media rights, and more. They exited for over a billion dollars.
For PMs and founders, the takeaway is this:
* Don’t assume the long tail is the market.
* Go upstream. Serve the whales.
* Focus on full-stack solutions, not just utilities.
If you’re not building something worth $10M+ in ARR from a dozen clients, you're probably building a feature, not a business.
4. MrBeast Is a Company, Not Just a Creator—and That’s the Point
We couldn’t have this convo without talking about MrBeast.
Ben sees Jimmy Donaldson as a pioneer not just in content, but in company structure. His organization isn't a hobbyist’s shop—it’s a holding company with a real CEO.
“Jimmy’s not the CEO. He’s the chairman. They hired a real operator from public markets. That person is building a corporate org. They’re hiring institutional people. It’s becoming a conglomerate.”
Unlike most creator ventures—where investors buy into just one slice of the pie—MrBeast’s holding company gives investors exposure to all ventures.
Think Alphabet, not a side hustle.
“It’s better alignment. If you’re putting in capital, you want access to the whole thing—not just the candy bar business or the mobile game.”
And that model might just be repeatable.
As Ben put it: “A lot of creators say they want to be CEOs. But once they see what CEOs actually do—HR, legal, compliance—they change their mind fast.”
Jimmy didn’t want to be bogged down in operations. So he hired someone who could be.
PM Insight: If you're working with high-talent individuals—creators, researchers, engineers—don’t just elevate them to management. Design orgs where they can focus on their strengths and bring in ops leaders to scale.
5. AI Is Not the End. It’s the Efficiency Revolution.
Toward the end of the episode, we dove into AI’s impact on the creator economy.
Ben doesn’t see it as a doomsday scenario. Quite the opposite.
“One animation company showed me a tool that turned a sketch into a production-ready 3D model—in real time. That’s insane. The question is: do you lower your prices… or do you double your margin?”
That’s the rub.
AI will reduce production costs. Which means more creators will have studio-grade tools at their fingertips. It also means fewer people per production.
“I was on a shoot with 50 people. Half weren’t doing anything. I realized the producer brought them in for optics—to make it look big-budget.”
In other words, there’s fat to be trimmed. And AI is the scalpel.
I brought up how durable the power law is likely to be: the best AI-assisted content will still win, and most will still get buried.
“It’s like CGI in the movies. People feared it would kill cinema. Instead, it became standard. AI might be the same—just another tool.”
For your PM roadmap, this means:
* Expect higher expectations from users.
* Deliver faster, smarter workflows.
* Don’t fight AI—integrate it.
TL;DR: Actionable Advice for PMs in Silicon Valley
Here are the five key lessons from my talk with Ben Grubbs that every PM should remember:
1. Validate Against the Platform’s Roadmap
Before building around YouTube, TikTok, or Instagram, ask: Is this 12 months from being native? If yes, pivot.
2. Serve the Top of the Pyramid
The most successful creators need full-stack services and strategic guidance—not basic tools.
3. Build Brand, Not Just Product
Creators who win big start with brand ambition, not content luck. Align your product roadmap accordingly.
4. Separate Creator Talent from CEO Skillsets
Great creators aren’t always great operators. Your org chart needs to reflect that.
5. Use AI to Win on Efficiency
AI won’t replace you—but the PM who uses AI will. Bake it into production and product from the ground up.
Final Thoughts: Betting on the Right Side of Disruption
As Ben told me:
“You want to be on the side of the disruptor. Not waiting to get disrupted.”
That’s never been truer than in 2025. Whether you're working at a Big Tech platform, building the next venture-backed app, or leading product at a creator startup, this space is still changing fast. Go where the puck is going, and realize that the puck leaps ahead every month.
If you're navigating a challenging PM role or trying to make your next career move in tech, I offer one-on-one coaching through TomLeungCoaching.com. For companies that want to accelerate their product strategy or AI roadmap, check out my advisory work at PaloAltoFoundry.com.
OK, enough pontificating. Let’s get back to work.