Deep-dive discussions with the smartest developers we know, explaining what they're working on, how they're trying to move the industry forward, and what we can learn from them. You might find the solution to your next architectural headache, pick up a new programming language, or just hear some good war stories from the frontline of technology. Join your host Kris Jenkins as we try to figure out what tomorrow's computing will look like the best way we know how - by listening directly to the developers' voices.
SQLite is embedded everywhere - phones, browsers, IoT devices. It's reliable, battle-tested, and feature-rich. But what if you want concurrent writes? Or CDC for streaming changes? Or vector indexes for AI workloads? The SQLite codebase isn't accepting new contributors, and the test suite that makes it so reliable is proprietary. So how do you evolve an embedded database that's effectively frozen?
Glauber Costa spent a decade contributing to the Linux kernel at Red Hat, then helped build Scylla, a high-performance rewrite of Cassandra. Now he's applying those lessons to SQLite. After initially forking SQLite (which produced a working business but failed to attract contributors), his team is taking the bolder path: a complete rewrite in Rust called Turso. The project already has features SQLite lacks - vector search, CDC, browser-native async operation - and is using deterministic simulation testing (inspired by TigerBeetle) to match SQLite's legendary reliability without access to its test suite.
The conversation covers why rewrites attract contributors where forks don't, how the Linux kernel maintains quality with thousands of contributors, why Pekka's "pet project" jumped from 32 to 64 contributors in a month, and what it takes to build concurrent writes into an embedded database from scratch.
--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Turso: https://turso.tech/
Turso GitHub: https://github.com/tursodatabase/turso
libSQL (SQLite fork): https://github.com/tursodatabase/libsql
SQLite: https://www.sqlite.org/
Rust: https://rust-lang.org/
ScyllaDB (Cassandra rewrite): https://www.scylladb.com/
Apache Cassandra: https://cassandra.apache.org/
DuckDB (analytical embedded database): https://duckdb.org/
MotherDuck (DuckDB cloud): https://motherduck.com/
dqlite (Canonical distributed SQLite): https://canonical.com/dqlite
TigerBeetle (deterministic simulation testing): https://tigerbeetle.com/
Redpanda (Kafka alternative): https://www.redpanda.com/
Linux Kernel: https://kernel.org/
Datadog: https://www.datadoghq.com/
Glauber Costa on X: https://x.com/glcst
Glauber Costa on GitHub: https://github.com/glommer
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
--
0:00 Intro
3:16 Ten Years Contributing to the Linux Kernel
15:17 From Linux to Startups: OSv and Scylla
26:23 Lessons from Scylla: The Power of Ecosystem Compatibility
33:00 Why SQLite Needs More
37:41 Open Source But Not Open Contribution
48:04 Why a Rewrite Attracted Contributors When a Fork Didn't
57:22 How Deterministic Simulation Testing Works
1:06:17 70% of SQLite in Six Months
1:12:12 Features Beyond SQLite: Vector Search, CDC, and Browser Support
1:19:15 The Challenge of Adding Concurrent Writes
1:25:05 Building a Self-Sustaining Open Source Community
1:30:09 Where Does Turso Fit Against DuckDB?
1:41:00 Could Turso Compete with Postgres?
1:46:21 How Do You Avoid a Toxic Community Culture?
1:50:32 Outro
How do you build systems with AI? Not code-generating assistants, but production systems that use LLMs as part of their processing pipeline. When should you chain multiple agent calls together versus just making one LLM request? And how do you debug, test, and deploy these things? The industry is clearly in exploration mode—we're seeing good ideas implemented badly and expensive mistakes made at scale. But Google needs to get this right more than most companies, because AI is both their biggest opportunity and an existential threat to their search-based business model.
Christina Lin from Google joins us to discuss Agent Development Kit (ADK), Google's open-source Python framework for building agentic pipelines. We dig into the fundamental question of when agent pipelines make sense versus traditional code, exploring concepts like separation of concerns for agents, tool calling versus MCP servers, Google's grounding feature for citation-backed responses, and agent memory management. Christina explains A2A (Agent-to-Agent), Google's protocol for distributed agent communication that could replace both LangChain and MCP. We also cover practical concerns like debugging agent workflows, evaluation strategies, and how to think about deploying agents to production.
If you're trying to figure out when AI belongs in your processing pipeline, how to structure agent systems, or whether frameworks like ADK solve real problems versus creating new complexity, this episode breaks down Google's approach to making agentic systems practical for production use.
--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Google Agent Development Kit Announcement: https://developers.googleblog.com/en/agent-development-kit-easy-to-build-multi-agent-applications/
ADK Documentation: https://google.github.io/adk-docs/
Google Gemini: https://ai.google.dev/gemini-api
Google Vertex AI: https://cloud.google.com/vertex-ai
Google AI Studio: https://aistudio.google.com/
Google Grounding with Google Search: https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview
Model Context Protocol (MCP): https://modelcontextprotocol.io/
Anthropic MCP Servers: https://github.com/modelcontextprotocol/servers
LangChain: https://www.langchain.com/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
How do you monitor distributed systems that span dozens of microservices, multiple languages, and different databases? The old approach of gathering logs from different machines and recompiling apps with profiling flags doesn't scale when you're running thousands of servers. You need a unified strategy that works everywhere, on every component, in every language—and that means tackling the problem from the kernel level up.
Mohammed Aboullaite is a backend engineer at Spotify, and he joins us to explore the latest in continuous profiling and observability using eBPF. We dive into how eBPF lets you programmatically peek into the Linux kernel without recompiling it, why companies like Google and Meta run profiling across their entire infrastructure, and how to manage the massive data volumes that continuous profiling generates. Mohammed walks through specific tools like Pyroscope, Pixie, and Parca, explains the security model of loading code into the kernel, and shares practical advice on overhead thresholds, storage strategies, and getting organizational buy-in for continuous profiling.
Whether you're debugging performance issues, optimizing for scale, or just want to see what your code is really doing in production, this episode covers everything from packet filters to cultural changes in service of getting a clear view of your software when it hits production.
---
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
eBPF: https://ebpf.io/
Google-Wide Profiling Paper (2010): https://research.google.com/pubs/archive/36575.pdf
Google pprof: https://github.com/google/pprof
Continuous Profiling Tools:
Pyroscope (Grafana): https://grafana.com/oss/pyroscope/
Pixie (CNCF): https://px.dev/
Parca: https://www.parca.dev/
Datadog Continuous Profiler: https://www.datadoghq.com/product/code-profiling/
Supporting Technologies:
OpenTelemetry: https://opentelemetry.io/
Grafana: https://grafana.com/
New Relic: https://newrelic.com/
Envoy Proxy: https://www.envoyproxy.io/
Spring Cloud Sleuth: https://spring.io/projects/spring-cloud-sleuth
Mohammed Aboullaite:
LinkedIn: https://www.linkedin.com/in/aboullaite/
GitHub: https://github.com/aboullaite
Website: http://aboullaite.me
Twitter/X: https://twitter.com/laytoun
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
Git might be the most ubiquitous tool in software development, but that doesn't mean it's perfect. What if we could keep Git compatibility while fixing its most frustrating aspects—painful merges, scary rebases, being stuck in conflict states, and the confusing staging area?
This week we're joined by Martin von Zweigbergk, creator of Jujutsu (JJ), a Git-compatible version control system that takes a fundamentally different approach. Starting from a simple idea—automatically snapshotting your working copy—Martin has built a tool that reimagines how we interact with version control. We explore the clever algebra behind Jujutsu's conflict handling that lets you store conflicts as commits and move freely through your repository even when things are broken. We discuss why there's no staging area, how the operation log gives you powerful undo/redo capabilities, and why rebasing becomes trivially easy when you can edit any commit in your history and have changes automatically propagate forward.
Whether you're a Git power user frustrated by interactive rebases, someone who's lost work to a botched merge, or just curious about how version control could work differently, this conversation offers fresh perspectives on a tool we all take for granted. And if you're working with large monorepos or game development assets, Martin's vision for the future of Jujutsu might be exactly what you've been waiting for.
---
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Jujutsu (JJ): https://github.com/martinvonz/jj
Jujutsu Documentation: https://martinvonz.github.io/jj/
Git: https://git-scm.com/
Mercurial: https://www.mercurial-scm.org/
Rust: https://www.rust-lang.org/
Watchman: https://facebook.github.io/watchman/
Google Piper: https://research.google/pubs/why-google-stores-billions-of-lines-of-code-in-a-single-repository/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
Getting new technology adopted in a large organization can feel like pushing water uphill. The best tools in the world are useless if we're not allowed to use them, and as companies grow, their habits turn into inertia, then into "the way we've always done things." So how do you break through that resistance and get meaningful change to happen?
This week's guest is Dov Katz from Morgan Stanley, who specializes in exactly this challenge - driving developer productivity and getting new practices adopted across thousands of developers. We explore the art of organizational change from every angle: How do you get management buy-in? How do you build grassroots developer enthusiasm? When should you use deterministic tools like OpenRewrite versus AI-powered solutions? And what role does open source play in breaking down the walls between competing financial institutions?
Whether you're trying to modernize a legacy codebase, reduce technical debt, or just get your team to try that promising new tool you've discovered, this conversation offers practical strategies for navigating the complex dynamics of enterprise software development. Because sometimes the hardest part of our job isn't writing code - it's getting permission to write better code.
---
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Morgan Stanley: https://www.morganstanley.com/
OpenRewrite: https://docs.openrewrite.org/
Spring Framework: https://spring.io/
Spring Integration: https://spring.io/projects/spring-integration
Apache Camel: https://camel.apache.org/
FINOS (FinTech Open Source Foundation): https://www.finos.org/
Linux Foundation: https://www.linuxfoundation.org/
Moderne (Code Remix conference organizers): https://www.moderne.io/
Code Remix Conference: https://www.moderne.io/events
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
How confident are you when your test suite goes green? If you're honest, probably not 100% confident - because most bugs come from scenarios we never thought to test. Traditional testing only catches the problems we anticipate, but the 3am pager alerts? Those come from the unexpected interactions, timing issues, and edge cases we never imagined.
In this episode, Will Wilson from Antithesis takes us deep into the world of autonomous testing. They've built a deterministic hypervisor that can simulate entire distributed systems - complete with fake AWS services - and intelligently explore millions of possible states to find bugs before production. Think property-based testing, but for your entire infrastructure stack. The approach is so thorough they've even used it to find glitches in Super Mario Brothers (seriously).
We explore how deterministic simulation works at the hypervisor level, why traditional integration tests are fundamentally limited, and how you can write maintainable tests that actually find the bugs that matter. If you've ever wished you could test "what happens when everything that can go wrong does go wrong," this conversation shows you how that's finally becoming possible.
---
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Antithesis: https://antithesis.com/
Antithesis testing with Super Mario: https://antithesis.com/blog/sdtalk/
...and with Metroid: https://antithesis.com/blog/2025/metroid/
MongoDB: https://www.mongodb.com/
etcd (CNCF): https://etcd.io/
Facebook Hermit: https://github.com/facebookexperimental/hermit
RR (Record-Replay Debugger): https://rr-project.org/
TSan (ThreadSanitizer): https://clang.llvm.org/docs/ThreadSanitizer.html
Toby Bell's Strange Loop Talk on JPL Testing: https://www.youtube.com/results?search_query=toby+bell+strange+loop+jpl
Andy Weir - Project Hail Mary: https://www.goodreads.com/book/show/54493401-project-hail-mary
Andy Weir - The Martian: https://www.goodreads.com/book/show/18007564-the-martian
Antithesis Blog (Nintendo Games Testing): https://antithesis.com/blog/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
How would you build a Heroku-like platform from scratch? This week we're diving deep into the world of cloud platforms and infrastructure with Anurag Goel, founder and CEO of Render.
Starting from the seemingly simple task of hosting a web service, we quickly discover why building a production-ready platform is far more complex than it appears. Why is hosting a Postgres database so challenging? How do you handle millions of users asking for thousands of different features? And what's the secret to building infrastructure that developers actually want to use?
We explore the technical challenges of building enterprise-grade services—from implementing reliable backups and high availability to managing private networking and service discovery. Anurag shares insights on choosing between infrastructure-as-code versus configuration, why they built on Go, and how they handle 100 billion requests per month.
Plus, we discuss the impact of AI on platform adoption: Are LLMs already influencing which platforms developers choose? Will hosting platforms need to actively support agentic workflows? And what does the future hold for automated debugging?
Whether you're curious about building your own platform, want to understand what really happens behind your cloud provider's dashboard, or just enjoy hearing war stories from the infrastructure trenches, this episode has something for you.
–
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Render: https://render.com/
Render’s MCP Server (Early Access): https://render.com/docs/mcp-server
Pulumi: https://www.pulumi.com/
Victoria Metrics: https://victoriametrics.com
Loki (via Vector's Loki sink): https://vector.dev/docs/reference/configuration/sinks/loki/
Vector: https://vector.dev/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
How hard is it to write a good database engine? Hard enough that sometimes it takes several versions to get it just right. Paul Dix joins us this week to talk about his journey building InfluxDB, and he's refreshingly frank about what went right, and what went wrong. Sometimes the real database is the knowledge you pick up along the way…
Paul walks us through InfluxDB's evolution from error-logging system to time-series database, and from Go to Rust, with unflinching honesty about the major lessons they learnt along the way. We cover everything from technical details like Time-Structured Merge Trees to business issues like what happens when your database works but your pricing model is broken.
If you're interested in how databases work, this is full of interesting details, and if you're interested in how projects evolve from good idea to functioning business, it's a treat.
--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@developervoices/join
InfluxData: https://www.influxdata.com/
InfluxDB: https://www.influxdata.com/products/influxdb/
DataFusion: https://datafusion.apache.org/
DataFusion Episode: https://www.youtube.com/watch?v=8QNNCr8WfDM
Apache Arrow: https://arrow.apache.org/
Apache Parquet: https://parquet.apache.org/
BoltDB: https://github.com/boltdb/bolt
LevelDB: https://github.com/google/leveldb
RocksDB: https://rocksdb.org/
Gorilla: A Fast, Scalable, In-Memory Time Series Database (Facebook paper): https://www.vldb.org/pvldb/vol8/p1816-teller.pdf
Paul on LinkedIn: https://www.linkedin.com/in/pauldix/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
If AI coding tools are here to stay, what form will they take? How will we use them? Will they be just another window in our IDE, will they push their way to the centre of our development experience, displacing the editor? No one knows, but Zach Lloyd is making a very interesting bet with the latest version of Warp.
In this deep dive, Zach walks us through the technical architecture behind agentic development, and how it's completely changed what he & his team have been building. Warp has gone from a terminal built from scratch, to what they're calling an "agentic development environment" - a tool that weaves AI agents, an editor, a shell, and a conversation into a single, unified experience. This may be the future or just one possible path; regardless, it's a fascinating glimpse into how our tools might reshape not just how we code, but how we experience programming itself.
Whether you're all-in on agentic coding, a skeptic, or somewhere in between, AI is here to stay. Now's the time to figure out what form it's going to take.
# Support Developer Voices
- Patreon: https://patreon.com/DeveloperVoices
- YouTube: https://www.youtube.com/@DeveloperVoices/join
# Episode Links
- Warp Homepage: https://warp.dev/
- Warp Pro Free Month (promo code WARPDEVS25): https://warp.dev/
- Previous Warp Episode: https://youtu.be/bLAJvxUpAcg
- SWE-bench: https://www.swebench.com/
- TerminalBench: https://github.com/microsoft/TerminalBench
- Model Context Protocol (MCP): https://modelcontextprotocol.io/
- Claude Code: https://claude.ai/code
- Anthropic Claude: https://claude.ai/
- VS Code: https://code.visualstudio.com/
- Cursor: https://cursor.sh/
- Language Server Protocol (LSP): https://microsoft.github.io/language-server-protocol/
# Connect
- Zach on LinkedIn: https://www.linkedin.com/in/zachlloyd/
- Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
- Kris on Mastodon: http://mastodon.social/@krisajenkins
- Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
Ever wondered why data integration is still such a nightmare in 2025? Marty Pitt has built something that might finally solve it.
TaxiQL isn't just another query language - it's a semantic layer that lets you query across any system without caring about field names, API differences, or where the data actually lives. Instead of writing endless mapping code between your microservices, databases, and APIs, you describe what your data *means* and let TaxiQL figure out how to get it.
In this conversation, Marty walks through the “All Powerful Spreadsheet” moment that sparked TaxiQL, how semantic types work in practice, and why this approach might finally decouple producers from consumers in large organizations. We dive deep into query execution, data lineage, streaming integration, and the technical challenges of building a system that can connect anything to anything.
If you've ever spent months mapping fields between systems or maintaining brittle integration code, this one's for you.
–
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@developervoices/join
–
TaxiLang Homepage: https://taxilang.org/
TaxiLang Playground: https://playground.taxilang.org/examples/message-queue-and-database
Taxi Lang GitHub repository: https://github.com/taxilang/taxilang
OpenAPI Specification (formerly Swagger): https://swagger.io/specification/
YOW! Conference - Australian software conference series: https://yowconference.com/
Spring Framework Kotlin support: https://spring.io/guides/tutorials/spring-boot-kotlin/
Ubiquitous Language (DDD Concept): https://martinfowler.com/bliki/UbiquitousLanguage.html
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
–
0:00 Intro
At 23, Isaac is already jaded about software reliability - and frankly, he's got good reason to be. When your grandmother can't access her medical records because a username change broke the entire system, when bugs routinely make people's lives harder, you start to wonder: why do we just accept that software is broken most of the time?
Isaac's answer isn't just better testing - it's a whole toolkit of techniques working together. He's advocating for scattering "little bombs" throughout your code via runtime assertions, adding in the right amount of static typing, building feedback loops that page you when invariants break, and running nightly SQL queries to catch the bugs that slip through everything else. All building what he sees as a pyramid of software reliability.
Weaving into that, we also dive into the Roc programming language and its unique platform architecture, which tailors development to specific domains. Software reliability isn’t just about the end-user experience - Roc feeds into the idea that we can make reliability easier by tailoring the language to the problem at hand.
–
Isaac’s Homepage: https://isaacvando.com/
Episode on Property Testing: https://youtu.be/wHJZ0icwSkc
Property Testing Walkthrough: https://youtu.be/4bpc8NpNHRc
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@developervoices/join
Isaac on LinkedIn: https://www.linkedin.com/in/isaacvando/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/