- 43 minutes 20 seconds
The Secrets of Claude's Platform From the Team Who Built It
In the future, you’ll be able to accomplish a goal by just giving Claude an outcome and a budget.
That’s the direction Anthropic is building in with its new Managed Agents features, announced at this week’s Code with Claude developer event. The basic idea: Claude wrapped in a cloud computer that you can spin up, scale, and manage as needed. Anthropic is taking on the infrastructure that kills most agent products, and making sure it scales to meet the needs of agents running 24/7.
On this week’s AI & I from @every, I talk with Angela Jiang (@angjiang), head of product for the Claude platform, and Katelyn Lesse (@katelyn_lesse), head of engineering for the Claude platform, about what Anthropic is building and what it takes to make agents reliable in production.
If you found this episode interesting, please like, subscribe, comment, and share!
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Timestamps:
00:01:48 - How the Claude platform evolved from API to agents
00:04:09 - The primitives that make up Claude Managed Agents
00:10:37 - Why the harness and the model are becoming a single unit
00:18:49 - The infrastructure wall that kills most agent projects in production
00:24:49 - Why team agents need a different shape than individual productivity tools
00:26:36 - How Anthropic's legal team uses an agent to review marketing copy
00:34:24 - Using multi-agent orchestration for advisor strategies, adversarial pairs, and swarms
00:35:50 - How to measure agent success with outcome and budget as the end state
00:39:11 - What the platform looks like a year from now, when Claude writes its own harness
8 May 2026, 8:27 pm - 58 minutes 23 seconds
Why We Switched From Claude Code to Codex
In January, Dan Shipper wrote that whoever wins vibe coding wins how you work on your computer—and OpenAI had some serious catching up to do.
Three months and the release of GPT-5.5 later, Codex has more than caught up. Austin Tedesco, Every's head of growth, now spends about 80 percent of his working time inside the Codex desktop app, doing everything from drafting go-to-market plans from a stack of meeting transcripts to rebuilding the company's KPI dashboard.
On this episode of AI & I, Dan sat down with Austin to discuss why the agent management interface—a desktop app built on top of a coding agent—is becoming the new operating system for knowledge work, and why Codex has become his daily driver.
If you found this episode interesting, please like, subscribe, comment, and share!
To hear more from Dan Shipper:
Subscribe to Every: every.to/subscribe
Follow him on X: twitter.com/danshipper
Join the membership for Where You Live at joinbilt.com/dan
Timestamps for YouTube:
00:00:00 Introduction
00:00:57 How Codex went from a tool for senior engineers to a daily driver for knowledge work
00:02:42 How Claude Code proved that a great coding agent works for any knowledge work
00:07:24 Austin's switch to Codex
00:13:48 How Austin set up Codex with folders, keys, and reviewer agents
00:18:24 Using Codex to brainstorm automations across Gmail, Slack, and Notion
00:22:42 How Austin manages the human review step when Codex is drafting communications
00:28:54 Using Codex to build specialized agents inspired by product executive Claire Vo
00:31:09 Synthesizing meeting transcripts and Slack threads into a go-to-market plan
00:40:15 Building a live KPI tracker in Notion that agents can read
00:44:54 Using Codex for recruiting
Links to resources mentioned in the episode:
Austin on X: @tedescau
Dan's January essay on OpenAI's catch-up problem: every.to/chain-of-thought/openai-has-some-catching-up-to-do
Every's vibe check on GPT-5.5: every.to/vibe-check/gpt-5-5
6 May 2026, 3:00 pm - 53 minutes 53 seconds
How Stripe Is Building for an Agent-native World
Emily Glassberg Sands leads data and AI at Stripe, which processes roughly 2% of global GDP, giving her a bird’s-eye view into how AI is upending the internet economy. Dan Shipper talked with Glassberg Sands for Every's AI & I about what the data on Stripe's network actually shows: AI companies are scaling three times faster than the top SaaS cohort of 2018, fraud has moved from the checkout to the full funnel, and agents have started buying things, although mostly low-stakes commodities like Halloween costumes.
The conversation covers the new fraud types unique to AI companies, the AI-on-AI arms race between bad actors and fraud detectors, where AI revenue growth is actually coming from, and how Stripe is rebuilding the payments infrastructure for a world where the buyer is an agent.
If you found this episode interesting, please like, subscribe, comment, and share!
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Head to http://granola.ai/every and get 3 months free with the code EVERY
Timestamps:
00:00:45 - Introduction
00:01:27 - New rules for an agent-driven economy
00:03:57 - Compute theft is the new payment fraud
00:10:00 - How Stripe expanded fraud detection from checkout to the full customer lifecycle
00:19:48 - Why AI companies are scaling way faster than top SaaS companies
00:23:27 - Outcome-based billing is replacing seat-based pricing
00:29:57 - Where AI spending is coming from
00:36:45 - How the developer experience changes when agents are the builders
00:41:00 - The agentic commerce spectrum, from assisted buying to autonomous purchasing
00:51:06 - Meet Link, a consumer wallet for delegated agent purchases
Links to resources mentioned in the episode:
Emily Glassberg Sands on X: https://x.com/emilygsands
Stripe: https://stripe.com
Stripe Radar: https://stripe.com/radar
Stripe Link: https://link.com
Lovable: https://lovable.dev
29 April 2026, 3:23 pm - 28 minutes 30 seconds
The AI Sandwich: Where Humans Excel in an AI World
Most frameworks for working with AI agents assume humans should stay in the loop at every phase. That’s the wrong approach, says Cora general manager Kieran Klaassen.
Kieran is the creator of Every's AI-native engineering methodology, compound engineering. His four-step framework—plan, work, review, compound—rebuilds how engineers work with agents. The insight, worked out with collaborator Trevin Chow, is about when to be in the loop and when to step away and let the model handle it. "LLMs are very good at just following steps, doing deep work, working for hours—days even now," Kieran says. "That thing is kind of solved."
Kieran and Trevin describe an AI workflow as a sandwich. Agents are the workhorse filling, and humans are the bread, responsible for framing the problem at the start and reviewing the outputs at the end.
Every CEO Dan Shipper talked with Kieran for AI & I about why setting the frame of a problem is still hard for agents, why simulated personas won't replace human judgment, Dan's bar for AGI—an agent worth running 24/7 with no off switch—and what Kieran's background as a classical composer taught him about performance, polish, and finding the parts of work that bring you joy.
If you found this episode interesting, please like, subscribe, comment, and share!
Head to http://granola.ai/every and get 3 months free with the code EVERY
To hear more from Dan Shipper:
- Subscribe to Every: https://every.to/subscribe
- Follow him on X: https://twitter.com/danshipper
- Compound engineering plugin: https://github.com/EveryInc/compound-engineering-plugin
- Compound engineering guide: https://every.to/source-code/compound-engineering-the-definitive-guide
- Compound engineering camp: https://every.to/source-code/compound-engineering-camp-every-step-from-scratch
Timestamps:
00:00:00 – Introduction and the AI sandwich metaphor
00:02:33 – What compound engineering is and how it’s evolved
00:04:27 – The "work" phase of agentic coding is essentially solved
00:06:27 – Why humans belong at the beginning and the end of an AI workflow
00:11:06 – Dan's argument for why agents can't change frames—and how this will keep us employed
00:16:51 – Full automation is a moving target
00:23:21 – Musical composition as a model for human-AI collaboration
00:26:39 – Find your place in an AI-accelerated world by leaning into what brings you joy
22 April 2026, 6:53 pm - 53 minutes 37 seconds
The AI Model Built for What LLMs Can't Do
Most AI companies are racing to build bigger LLMs. Eve Bodnia thinks that's the wrong approach.
Eve is the founder and CEO of Logical Intelligence, which is developing an alternative to the transformer-based models dominating the industry. Her argument: LLMs’ architecture makes them fundamentally unsuited for some mission-critical tasks. A system that generates output one token at a time, with no ability to inspect its own reasoning mid-process or guarantee its results, shouldn't be trusted to design chips, analyze financial data, or even fly a plane. Her alternative is the energy-based model (EBM), a form of AI rooted in the physics principle of energy minimization, not language prediction. Rather than guessing the next probable word, an EBM maps every possible outcome across a mathematical landscape, where likely states settle into valleys and improbable ones sit on peaks.
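The valley-and-peak picture can be made concrete with a toy sketch. This is purely illustrative (it is not Logical Intelligence's system, and the energy function here is invented for the example): an energy-based model assigns a scalar "energy" to every candidate state, and inference means searching for a low-energy valley rather than sampling the next token.

```python
import math

def energy(x: float) -> float:
    # Hypothetical 1-D energy landscape: plausible states sit in the
    # valley near x ~ 1.7; implausible states sit high on the slopes.
    return (x - 2.0) ** 2 + 0.5 * math.sin(3 * x)

def descend(x: float, lr: float = 0.01, steps: int = 2000) -> float:
    """Settle into a valley by gradient descent on the energy,
    using a finite-difference estimate of the gradient."""
    h = 1e-5
    for _ in range(steps):
        grad = (energy(x + h) - energy(x - h)) / (2 * h)
        x -= lr * grad
    return x

# Start from an implausible high-energy state and let it settle.
x_star = descend(0.0)
```

Because the whole landscape is defined up front, you can inspect the energy of any intermediate state at any time, which is the property the episode contrasts with one-token-at-a-time generation.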
Dan Shipper talked with Bodnia for AI & I about why she believes LLM progress is plateauing, what it means for AI to actually understand data rather than just pattern-match across it, and how her team is building toward formally verified code generated in plain English—no C++ required.
If you found this episode interesting, please like, subscribe, comment, and share!
Head to http://granola.ai/every and get 3 months free with the code EVERY
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Timestamps:
00:00:51 - Introduction
00:02:09 - Why correctness and verifiability matter in AI
00:09:33 - What an energy-based model is
00:14:21 - How EBMs construct energy landscapes to understand data
00:19:00 - Why modeling intelligence through language alone is a flawed approach
00:26:54 - What it means for a model to "understand" data
00:37:21 - How EBMs solve the vibe coding problem and enable formally verified code
00:43:21 - Why LLM progress is plateauing
00:49:54 - Why mission-critical industries haven't adopted LLMs, and how EBMs could fill that gap
15 April 2026, 3:00 pm - 49 minutes 42 seconds
We Gave Every Employee an AI Agent. Here's What Happened.
While walking to the office, our COO Brandon Gell had his AI agent call him and go over the emails in his inbox one by one. When he arrived, he opened Gmail and confirmed she'd done everything he'd asked. "My jaw is on the floor," he messaged me.
That was the moment Every got serious about setting up each employee with their own agent. Today, it's a reality—and it has completely changed how we work.
Dan Shipper talked to Every COO Brandon Gell and head of platform Willie Williams for Every's AI & I about what happens when everyone at a company gets their own AI sidekick.
If you found this episode interesting, please like, subscribe, comment, and share!
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Visit https://scl.ai/dialect to learn more about Dialect, a new system from Scale AI.
Timestamps:
00:00 Introduction
00:02:21 How Brandon built Zosia, an AI agent to run his household
00:07:09 Brandon's aha moment re: using agents for work
00:09:39 What happened when everyone on the team got their own agent
00:12:42 How agents take on their owners' personalities, and why that matters inside an org
00:23:51 Why it's important for agents to do work in public
00:30:51 What we're still figuring out when it comes to agent behavior, including memory gaps, group chat etiquette, and the "ant death spiral" problem
00:40:45 How we built Plus One, our hosted OpenClaw product
00:47:27 The cultural shift required to make agents work at scale
8 April 2026, 3:00 pm - 52 minutes 48 seconds
If SaaS Is Dead, Linear Didn't Get the Memo
Founded in 2019, Linear is the rare company started pre-ChatGPT to have successfully reinvented itself as an agent-native business.
On this episode of AI & I, Dan Shipper sat down with Karri Saarinen, cofounder and CEO of the product management tool, to discuss building a platform where humans and agents develop software together—and why the "SaaSpocalypse" isn’t coming for all SaaS companies.
If you found this episode interesting, please like, subscribe, comment, and share!
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Visit https://scl.ai/dialect to learn more about Dialect, a new system from Scale AI.
Timestamps:
0:00 Introduction
2:00 Why Linear waited to ship AI features instead of rushing to chatbots
5:06 Linear's agent platform and becoming the system that guides AI agents
7:42 Why "SaaS is dead" is a simplistic narrative
12:18 How Linear adopted AI coding tools
17:45 AI's impact on product building workflows—speed versus thoughtfulness
22:18 The value of conceptual work and thinking before shipping
29:30 How AI is reshaping Linear's product strategy
37:18 Demo: Linear's agent skills, shared context, and code review workflow
47:48 The future of product development and the enduring role of human judgment
1 April 2026, 3:00 pm - 48 minutes 29 seconds
How to Build an Agent-native Product | Mike Krieger
Mike Krieger built one of the most consequential consumer apps of the last two decades as cofounder of Instagram. He is now at the frontier of determining what makes a breakout AI-native product as co-lead of Anthropic Labs.
Dan Shipper talked with Krieger for Every’s AI & I about how his experience creating Instagram shapes how he thinks about building with AI, including what can be sped up and what remains stubbornly time-intensive.
If you found this episode interesting, please like, subscribe, comment, and share!
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Download Grammarly for FREE at grammarly.com
Timestamps
Introduction: 00:01:39
What's gotten easier—and what hasn't—about building products in the age of AI: 00:02:33
Why vibe coding creates "indoor trees": 00:05:00
How rewrites have become a normal part of the development process: 00:09:00
What "agent native" product design means: 00:11:39
How Mike's labs team is structured and the cofounder model: 00:24:27
The best signal for a product bet is someone with "break through walls" conviction: 00:29:33
Navigating enterprise customers while keeping pace with rapid AI change: 00:38:51
OpenClaw, personal agents, and the product question defining 2026: 00:40:54
Links to resources mentioned in the episode:
Mike Krieger: https://x.com/mikeyk
Agent-native architecture: https://every.to/guides/agent-native
25 March 2026, 3:19 pm - 56 minutes 33 seconds
Kate Lee on Taste, Hiring, and Running Editorial at Every
Kate Lee has spent her career working with words—first as a literary agent, then in roles at Medium, WeWork, and Stripe. As Every’s editor in chief, she’s been the quiet force behind the newsletter for more than three years.
Lately, something has shifted in Kate’s work. After years of watching her colleague Dan Shipper evangelize AI from the front lines, Kate has started rewiring how she works and is integrating more and more AI tools into her workflow.
We had Kate on to talk about her career path from book deals to tech startups, what it really means to run a newsletter as a small team in the age of AI, and what she sees as the bottleneck to automating copyediting. Plus: the story of pulling off reviews of two major model releases in 24 hours, and how she’s using her AI-powered browser to help her hire.
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Timestamps:
0:01 – Introduction and Kate's early career as a literary agent
4:45 – From book publishing to tech: Medium, WeWork, and Stripe Press
12:00 – How Kate joined Every and what made the role click
27:00 – What it's like to be a knowledge worker at the frontier of AI
31:00 – The “aha” moment: using AI to manage hundreds of applicants
36:24 – How Every's editorial team uses AI to enforce standards and train taste
45:06 – Publishing two reviews of major model releases on the same day
51:39 – What automating copy editing requires
Links to resources mentioned in the episode:
Proof: https://www.proofeditor.ai/
18 March 2026, 4:16 pm - 44 minutes 37 seconds
We Made a Document Editor Where Humans and AI Work Side by Side
Every has unveiled a new product, built by CEO Dan Shipper. It's called Proof, a free, open-source, live collaborative document editor built for humans and AI agents to work in together.
Proof started as a Mac app designed to show the provenance of AI-written text—purple for AI, green for human. But when Shipper rebuilt it as a web app with real-time collaboration, something clicked. Suddenly, everyone at Every was using it for everything from planning docs to creative writing to daily to-do lists. The team realized they needed a lightweight space where their OpenClaw agents and humans could co-author documents and leave comments.
In this special episode, Shipper is joined by Every chief operating officer Brandon Gell, Cora general manager Kieran Klaassen, and head of growth Austin Tedesco to demo Proof live and share how it's changed the way they work. Brandon walks through a loop where his Codex agent writes a plan, Dan's personal Claw R2-C2 reviews it, and the humans just steer. Austin explains how he uses Proof to write a weekly food newsletter, texting ideas to his Claw on runs and watching an outline take shape. And Kieran makes the case that Proof's power is its lightness—just a link you can hand to any agent or colleague.
The conversation covers what "agent native" means in practice, why AX (agent experience) matters as much as UX (user experience), what happens when 10 agents edit one document at the same time, and why some writing is now better read by an AI than a human.
If you found this episode interesting, please like, subscribe, comment, and share!
Want even more?
Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.
To hear more from Dan Shipper:
- Subscribe to Every: https://every.to/subscribe
- Follow him on X: https://twitter.com/danshipper
Get started building today at framer.com/dan for 30% OFF a Framer Pro annual plan.
Download Grammarly for free at Grammarly.com
Timestamps
00:02:00 — Introduction and the origin story of Proof
00:07:24 — From Mac app to collaborative web editor
00:09:00 — What makes Proof “agent native”
00:14:30 — Live demo: watching an agent join and write inside a shared document
00:20:51 — How Austin uses Proof for creative writing and food journalism
00:24:30 — The challenge of multiple agents editing one document simultaneously
00:26:48 — When AI-written docs are better read by agents than by humans
00:29:30 — Brandon’s agent-to-agent collaboration loop
00:37:09 — Proof as a lightweight scratchpad vs. existing tools like Notion and GitHub
00:42:18 — Why Proof is open source and what that means for builders
Links to resources mentioned in the episode:
Proof Editor: https://proofeditor.ai
Proof GitHub repo (open source): https://github.com/EveryInc/proof
Every's compound engineering plugin: https://github.com/EveryInc/compound-engineering-plugin
11 March 2026, 3:00 pm - 45 minutes 27 seconds
Meet the Slowest Startup Incubator in the World—Pumping Out Billion-dollar Companies
Silicon Valley loves billion-dollar moonshots and AI darlings. Sam Gerstenzang and Dan Friedman are doing something different—they're starting medical spas and funeral homes.
On this episode of AI & I, Dan Shipper sat down with Gerstenzang and Friedman, partners at Boulton and Watt, which they call the "world's slowest startup incubator." Their model: Come up with an idea, achieve five or ten million dollars in revenue themselves, then hand it off to a CEO who can take it to the next stage. They've used this playbook to build Moxie, a Series C company that helps nurses open their own medical spas, now with 600-plus customers and a 200-person team globally. Their second company, Meadow Memorials, is a contemporary funeral home with no physical real estate. It has become the largest provider of funeral services in California.
Both businesses launched right around the arrival of ChatGPT—and neither was built with AI in mind. So how are they thinking about AI inside companies where the core work isn't going to change? In this conversation, Gerstenzang and Friedman share how they built an AI agent called Matthew Boulton to power their customer discovery process, why synthetic customer calls completely failed for them, and why they believe you shouldn't give anyone credit for using AI.
If you found this episode interesting, please like, subscribe, comment, and share!
Want even more?
Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Intent is what comes after your IDE. Try it yourself: augmentcode.com/intent
Head to granola.ai/every to get 3 months free.
Ready to build a site that looks hand-coded—without hiring a developer? Launch your site for free at www.Framer.com, and use code DAN to get your first month of Pro on the house.
Timestamps
00:00:00 — Introduction and how Sam and Dan's paths first crossed
00:01:40 — What it means to be “the world's slowest incubator”
00:04:50 — Why Boulton and Watt runs companies to several million in revenue before handing off to a CEO
00:07:30 — How specialization across the founding journey creates advantages
00:10:40 — Building AI-durable businesses versus AI-native ones
00:16:10 — How an AI agent transformed their customer discovery process
00:19:30 — Where synthetic customer calls completely fail
00:29:30 — Deploying AI inside established companies
00:32:30 — Why newer projects see huge gains from AI while mature companies see 10 percent
00:37:00 — A preview of what's next for Boulton and Watt
4 March 2026, 4:06 pm