This is a weekly podcast focused on developer productivity and the teams and leaders dedicated to improving it.
AI engineering tools are evolving fast. New coding assistants, debugging agents, and automation platforms emerge every month. Engineering leaders want to take advantage of these innovations while avoiding costly experiments that create more distraction than impact.
In this episode of the Engineering Enablement podcast, host Laura Tacho and Abi Noda outline a practical model for evaluating AI tools with data. They explain how to shortlist tools by use case, run trials that mirror real development work, select representative cohorts, and ensure consistent support and enablement. They also highlight why baselines and frameworks like DX’s Core 4 and the AI Measurement Framework are essential for measuring impact.
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
• Substack: https://substack.com/@abinoda
In this episode, we cover:
(00:00) Intro: Running a data-driven evaluation of AI tools
(02:36) Challenges in evaluating AI tools
(06:11) How often to reevaluate AI tools
(07:02) Incumbent tools vs challenger tools
(07:40) Why organizations need disciplined evaluations before rolling out tools
(09:28) How to size your tool shortlist based on developer population
(12:44) Why tools must be grouped by use case and interaction mode
(13:30) How to structure trials around a clear research question
(16:45) Best practices for selecting trial participants
(19:22) Why support and enablement are essential for success
(21:10) How to choose the right duration for evaluations
(22:52) How to measure impact using baselines and the AI Measurement Framework
(25:28) Key considerations for an AI tool evaluation
(28:52) Q&A: How reliable are self-reported time savings from AI tools?
(32:22) Q&A: Why not adopt multiple tools instead of choosing just one?
(33:27) Q&A: Tool performance differences and avoiding vendor lock-in
Nathen Harvey leads research at DORA, focused on how teams measure and improve software delivery. In today’s episode of Engineering Enablement, Nathen sits down with host Laura Tacho to explore how AI is changing the way teams think about productivity, quality, and performance.
Together, they examine findings from the 2025 DORA research on AI-assisted software development and DX’s Q4 AI Impact report, comparing where the data aligns and where important gaps emerge. They discuss why relying on traditional delivery metrics can give leaders a false sense of confidence and why AI acts as an amplifier, accelerating healthy systems while intensifying existing friction and failure.
The conversation focuses on how AI is reshaping engineering systems themselves. Rather than treating AI as a standalone tool, they explore how it changes workflows, feedback loops, team dynamics, and organizational decision-making, and why leaders need better system-level visibility to understand its real impact.
Where to find Nathen Harvey:
• LinkedIn: https://www.linkedin.com/in/nathen
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro
(00:55) Why the four key DORA metrics aren’t enough to measure AI impact
(03:44) The shift from four to five DORA metrics and why leaders need more than dashboards
(06:20) The one-sentence takeaway from the 2025 DORA report
(07:38) How AI amplifies both strengths and bottlenecks inside engineering systems
(08:58) What DX data reveals about how junior and senior engineers use AI differently
(10:33) The DORA AI Capabilities Model and why AI success depends on how it’s used
(18:24) How a clear and communicated AI stance improves adoption and reduces friction
(23:02) Why talking to your teams still matters
Referenced:
• DORA | State of AI-assisted Software Development 2025
• Steve Fenton - Octonaut | LinkedIn
• AI-assisted engineering: Q4 impact report
In this episode of Engineering Enablement, host Laura Tacho talks with Fabien Deshayes, who leads multiple platform engineering teams at Monzo Bank. Fabien explains how Monzo is adopting AI responsibly within a highly regulated industry, balancing innovation with structure, control, and data-driven decision-making.
They discuss how Monzo runs structured AI trials, measures adoption and satisfaction, and uses metrics to guide investment and training. Fabien shares why the company moved from broad rollouts to small, focused cohorts, how they are addressing existing PR review bottlenecks that AI has intensified, and what they have learned from empowering product managers and designers to use AI tools directly.
He also offers insights into budgeting and experimentation, the results Monzo is seeing from AI-assisted engineering, and his outlook on what comes next, from agent orchestration to more seamless collaboration across roles.
Where to find Fabien Deshayes:
• LinkedIn: https://www.linkedin.com/in/fabiendeshayes
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro
(01:01) An overview of Monzo Bank and Fabien’s role
(02:05) Monzo’s careful, structured approach to AI experimentation
(05:30) How Monzo’s AI journey began
(06:26) Why Monzo chose a structured approach to experimentation and what criteria they used
(09:21) How Monzo selected AI tools for experimentation
(11:51) Why individual tool stipends don’t work for large, regulated organizations
(15:32) How Monzo measures the impact of AI tools and uses the data
(18:10) Why Monzo limits AI tool trials to small, focused cohorts
(20:54) The phases of Monzo’s AI rollout and how learnings are shared across the organization
(22:43) What Monzo’s data reveals about AI usage and spending
(24:30) How Monzo balances AI budgeting with innovation
(26:45) Results from DX’s spending poll and general advice on AI budgeting
(28:03) What Monzo’s data shows about AI’s impact on engineering performance
(29:50) The growing bottleneck in PR reviews and how Monzo is solving it with tenancies
(33:54) How product managers and designers are using AI at Monzo
(36:36) Fabien’s advice for moving the needle with AI adoption
(38:42) The biggest changes coming next in AI engineering
In this episode of Engineering Enablement, Laura Tacho and Abi Noda discuss how engineering leaders can plan their 2026 AI budgets effectively amid rapid change and rising costs. Drawing on data from DX’s recent poll and industry benchmarks, they explore how much organizations should expect to spend per developer, how to allocate budgets across AI tools, and how to balance innovation with cost control.
Laura and Abi also share practical insights on building a multi-vendor strategy, evaluating ROI through the right metrics, and ensuring continuous measurement before and after adoption. They discuss how to communicate AI’s value to executives, avoid the trap of cost-cutting narratives, and invest in enablement and training to make adoption stick.
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
• Substack: https://substack.com/@abinoda
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro: Setting the stage for AI budgeting in 2026
(01:45) Results from DX’s AI spending poll and early trends
(03:30) How companies are currently spending and what to watch in 2026
(04:52) Why clear definitions for AI tools matter and how Laura and Abi think about them
(07:12) The entry point for 2026 AI tooling budgets and emerging spending patterns
(10:14) Why 2026 is the year to prove ROI on AI investments
(11:10) How organizations should approach AI budgeting and allocation
(15:08) Best practices for managing AI vendors and enterprise licensing
(17:02) How to define and choose metrics before and after adopting AI tools
(19:30) How to identify bottlenecks and AI use cases with the highest ROI
(21:58) Key considerations for AI budgeting
(25:10) Why AI investments are about competitiveness, not cost-cutting
(27:19) How to use the right language to build trust and executive buy-in
(28:18) Why training and enablement are essential parts of AI investment
(31:40) How AI add-ons may increase your tool costs
(32:47) Why custom and fine-tuned models aren’t relevant for most companies today
(34:00) The tradeoffs between stipend models and enterprise AI licenses
DX CEO Abi Noda is joined by CTO Laura Tacho to discuss the evolving role of Platform and DevProd teams in the AI era. Together, they unpack how AI is reshaping platform responsibilities, from evaluation and rollout to measurement, tool standardization, and guardrails.
They explore why fundamentals like documentation and feedback loops matter more than ever for both developers and AI agents, and share insights on reducing tool sprawl, hardening systems for higher throughput, and leveraging AI to tackle tech debt, modernize legacy code, and improve workflows across the SDLC.
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
• Substack: https://substack.com/@abinoda
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro: Why platform teams need to evolve
(02:34) The challenge of defining platform teams and how AI is changing expectations
(04:44) Why evaluating and rolling out AI tools is becoming a core platform responsibility
(07:14) Why platform teams need solid measurement frameworks to evaluate AI tools
(08:56) Why platform leaders should champion education and advocacy on measurement
(11:20) How AI-generated code stresses pipelines and why platform teams must harden systems
(12:24) Why platform teams must go beyond training to standardize tools and create workflows
(14:31) How platform teams control tool sprawl
(16:22) Why platform teams need strong guardrails and safety checks
(18:41) The importance of standardizing tools and knowledge
(19:44) The opportunity for platform teams to apply AI at scale across the organization
(23:40) Quick recap of the key points so far
(24:33) How AI helps modernize legacy code and handle migrations
(25:45) Why focusing on fundamentals benefits both developers and AI agents
(27:42) Identifying SDLC bottlenecks beyond AI code generation
(30:08) Techniques for optimizing legacy code bases
(32:47) How AI helps tackle tech debt and large-scale code migrations
(35:40) Tools across the SDLC
In this episode, host Laura Tacho speaks with Jesse Adametz, Senior Engineering Leader on the Developer Platform at Twilio. Jesse is leading Twilio’s multi-year platform consolidation, unifying tech stacks across large acquisitions and driving migrations at enterprise scale. He discusses platform adoption, the limits of Kubernetes, and how Twilio balances modernization with pragmatism. The conversation also explores treating developer experience as a product, offering “change as a service,” and Twilio’s evolving approach to AI adoption and platform support.
Where to find Jesse Adametz:
• LinkedIn: https://www.linkedin.com/in/jesseadametz/
• X: https://x.com/jesseadametz
• Website: https://www.jesseadametz.com/
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro
(01:30) Jesse’s background and how he ended up at Twilio
(04:00) What SRE teaches leaders and ICs
(06:06) Where Twilio started the post-acquisition integration
(08:22) Why platform migrations can’t follow a straight-line plan
(10:05) How Twilio balances multiple strategies for migrations
(12:30) The human side of change: advocacy, training, and alignment
(17:46) Treating developer experience as a first-class product
(21:40) What “change as a service” looks like in practice
(24:57) A mandateless approach: creating voluntary adoption through value
(28:50) How Twilio demonstrates value with metrics and reviews
(30:41) Why Kubernetes wasn’t the right fit for all Twilio workloads
(36:12) How Twilio decides when to expose complexity
(38:23) Lessons from Kubernetes hype and how AI demands more experimentation
(44:48) Where AI fits into Twilio’s platform strategy
(49:45) How guilds fill needs the platform team hasn’t yet met
(51:17) The future of platform in centralizing knowledge and standards
(54:32) How Twilio evaluates tools for fit, pricing, and reliability
(57:53) Where Twilio applies AI in reliability, and where Jesse is skeptical
(59:26) Laura’s vibe-coded side project built on Twilio
(1:01:11) How external lessons shape Twilio’s approach to platform support and docs
In this episode of Engineering Enablement, host Laura Tacho talks with Bruno Passos, Product Lead for Developer Experience at Booking.com, about how the company is rolling out AI tools across a 3,000-person engineering team.
Bruno shares how Booking.com set ambitious innovation goals, why cultural change mattered as much as technology, and the education practices that turned hesitant developers into daily users. He also reflects on the early barriers, from low adoption and knowledge gaps to procurement hurdles, and explains the interventions that worked, including learning paths, hackathon-style workshops, Slack communities, and centralized procurement. The result is that Booking.com now sits in the top 25 percent of companies for AI adoption.
Where to find Bruno Passos:
• LinkedIn: https://www.linkedin.com/in/brpassos/
• X: https://x.com/brunopassos
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro
(01:09) Bruno’s role at Booking.com and an overview of the business
(02:19) Booking.com’s goals when introducing AI tooling
(03:26) Why Booking.com set such an ambitious innovation ratio goal
(06:46) The beginning of Booking.com’s journey with AI
(08:54) Why the initial adoption of Cody was low
(13:17) How education and enablement fueled adoption
(15:48) The importance of a top-down cultural change for AI adoption
(17:38) The ongoing journey of determining the right metrics
(21:44) Measuring the longer-term impact of AI
(27:04) How Booking.com solved internal bottlenecks to testing new tools
(32:10) Booking.com’s framework for evaluating new tools
(35:50) The state of adoption at Booking.com and efforts to expand AI use
(37:07) What’s still undetermined about AI’s impact on PR/MR quality
(39:48) How Booking.com is addressing lagging adoption and monitoring churn
(43:24) How Booking.com’s Slack community lowers friction for questions and support
(44:35) Closing thoughts on what’s next for Booking.com’s AI plan
In this episode of Engineering Enablement, DX CTO Laura Tacho and CEO Abi Noda break down how to measure developer productivity in the age of AI using DX’s AI Measurement Framework. Drawing on research with industry leaders, vendors, and hundreds of organizations, they explain how to move beyond vendor hype and headlines to make data-driven decisions about AI adoption.
They cover why some fundamentals of productivity measurement remain constant, the pitfalls of over-relying on flawed metrics like acceptance rate, and how to track AI’s real impact across utilization, quality, and cost. The conversation also explores measuring agentic workflows, expanding the definition of “developer” to include new AI-enabled contributors, and avoiding second-order effects like technical debt and slowed PR throughput.
Whether you’re rolling out AI coding tools, experimenting with autonomous agents, or just trying to separate signal from noise, this episode offers a practical roadmap for understanding AI’s role in your organization—and ensuring it delivers sustainable, long-term gains.
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
• Substack: https://substack.com/@abinoda
In this episode, we cover:
(00:00) Intro
(01:26) The challenge of measuring developer productivity in the AI age
(04:17) Measuring productivity in the AI era — what stays the same and what changes
(07:25) How to use DX’s AI Measurement Framework
(13:10) Measuring AI’s true impact from adoption rates to long-term quality and maintainability
(16:31) Why acceptance rate is flawed — and DX’s approach to tracking AI-authored code
(18:25) Three ways to gather measurement data
(21:55) How Google measures time savings and why self-reported data is misleading
(24:25) How to measure agentic workflows and a case for expanding the definition of developer
(28:50) A case for not overemphasizing AI’s role
(30:31) Measuring second-order effects
(32:26) Audience Q&A: applying metrics in practice
(36:45) Wrap up: best practices for rollout and communication
In this special episode of the Engineering Enablement podcast, recorded live at LeadDev London, DX CTO Laura Tacho explores the growing gap between AI headlines and the reality inside engineering teams—and what leaders can do to close it.
Laura shares data from nearly 39,000 developers across 184 companies, highlights the Core 4 and introduces the AI Measurement Framework, and offers a practical playbook for using data to improve developer experience, measure AI’s true impact, and build better software without compromising long-term performance.
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• Website: https://lauratacho.com/
In this episode, we cover:
(00:00) Intro: Laura’s keynote from LDX3
(01:44) The problem with asking “How much faster can we go with AI?”
(03:02) How the disappointment gap creates barriers to AI adoption
(06:20) What AI adoption looks like at top-performing organizations
(07:53) What leaders must do to turn AI into meaningful impact
(10:50) Why building better software with AI still depends on fundamentals
(12:03) An overview of the DX Core 4 Framework
(13:22) Why developer experience is the biggest performance lever
(15:12) How Block used Core 4 and DXI to identify 500,000 hours in time savings
(16:08) How to get started with Core 4
(17:32) Measuring AI with the AI Measurement Framework
(21:45) Final takeaways and how to get started with confidence
In this episode of the Engineering Enablement podcast, host Abi Noda is joined by Quentin Anthony, Head of Model Training at Zyphra and a contributor at EleutherAI. Quentin participated in METR’s recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.
Where to find Quentin Anthony:
• LinkedIn: https://www.linkedin.com/in/quentin-anthony/
• X: https://x.com/QuentinAnthon15
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
In this episode, we cover:
(00:00) Intro
(01:32) A brief overview of Quentin’s background and current work
(02:05) An explanation of METR and the study Quentin participated in
(11:02) Surprising results of the METR study
(12:47) Quentin’s takeaways from the study’s results
(16:30) How developers can avoid bloated code bases through self-reflection
(19:31) Signs that you’re not making progress with a model
(21:25) What is “context rot”?
(23:04) Advice for combating context rot
(25:34) How to make the most of your idle time as a developer
(28:13) Developer hygiene: the case for selectively using AI tools
(33:28) How to interact effectively with new models
(35:28) Why organizations should focus on tasks that AI handles well
(38:01) Where AI fits in the software development lifecycle
(39:40) How to approach testing with models
(40:31) What makes models different
(42:05) Quentin’s thoughts on agents
In this episode, Abi Noda talks with Frank Fodera, Director of Engineering for Developer Experience at CarGurus. Frank shares the story behind CarGurus’ transition from a monolithic architecture to microservices, and how that journey led to the creation of their internal developer portal, Showroom. He outlines the five pillars of the IDP, how it integrates with infrastructure, and why they chose to build rather than buy. The conversation also explores how CarGurus is approaching AI tool adoption across the engineering team, from experiments and metrics to culture change and leadership buy-in.
Where to find Frank Fodera:
• LinkedIn: https://www.linkedin.com/in/frankfodera/
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
In this episode, we cover:
(00:00) Intro: IDPs (Internal Developer Portals) and AI
(02:07) The IDP journey at CarGurus
(05:53) A breakdown of the people responsible for building the IDP
(07:05) The five pillars of the Showroom IDP
(09:12) How DevX worked with infrastructure
(11:13) The business impact of Showroom
(13:57) The transition from monolith to microservices and struggles along the way
(15:54) The benefits of building a custom IDP
(19:10) How CarGurus drives AI coding tool adoption
(28:48) Getting started with an AI initiative
(31:50) Metrics to track
(34:06) Tips for driving AI adoption