• 1 hour 4 minutes
    #296: Avoiding Major Oopsies: Twyman's Law, Intuition, and Valuing Accuracy Over Precision

    What do diamond ring shopping, Uber pricing psychology, and active user metrics gone wrong have in common? They all highlight our complicated relationship with precision versus accuracy—and how that relationship can either build or destroy trust in our data. Arik Friedman from Atlassian joins us to unpack why being "about right" often beats being "exactly wrong," and why your nagging feeling that something's off might be a useful insight in and of itself. From the discipline of documenting assumptions to the art of knowing when to round your numbers, we tackle the very human challenge of working with data that's supposed to be objective but rarely is. Plus, we explore Twyman's Law (if data looks too good to be true, it probably is) and why sometimes your intuition is your last line of defense against embarrassing mistakes.

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    28 April 2026, 4:30 am
  • 1 hour 9 minutes
    #295: Research and Analytics: the Peanut Butter and Chocolate of Data?

    Research and analytics: are they more like peanut butter and chocolate, or more like oil and water? On this episode, we dig into the surprisingly common (and surprisingly unfortunate) divide between these two disciplines with Stefanie Zammit, Global Director of Analytics and Insights at Bang & Olufsen. Stefanie has spent her career bridging the qual and quant worlds, and she makes a compelling case that the best insights come from putting both methodologies to work on the same business problems. From the "never ask a survey question you already have the answer to" rule to why personas are usually terrible (spoiler: it's not the clustering, it's the storytelling), we explore how organizations can break down the silos between research and analytics teams. Turns out, the fear of the unknown and a bunch of fancy terminology might be keeping us from some pretty powerful insights. Also, apparently 100% soundproof rooms are absolutely terrifying.

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    14 April 2026, 4:30 am
  • 1 hour 8 minutes
    #294: Adapting an Analytics Team to an AI World

    AI is moving fast. But so is life. AI is widely recognized as a must-adopt technology, but how and where are data workers expected to find the time for that?! Organizations are struggling to find effective ways to productively drive healthy adoption of AI: What is it they expect their workers to do with AI? Is it purely an efficiency driver, or should they expect other avenues of value creation to be pursued? What guardrails need to be in place? What incentive structures are (and are not) effective when it comes to encouraging team members to take the AI plunge? One tactic that is definitely effective is to have leaders who are excited, engaged, and transparent as they get their hands dirty. And, boy, did the algorithm deliver one of those to us in the form of John Lovett, VP of Analytics at SEER Interactive, for this discussion!

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    31 March 2026, 4:30 am
  • 1 hour 5 minutes
    #293: Tool Selection and the Unhelpfulness of Feature Comparisons

    The one rule about the Analytics Power Hour is that we don't talk about specific tools. But that doesn't mean we won't talk about tool SELECTION! Jason Packer recently released the second edition of Google Analytics Alternatives (also available on Amazon), and his approach in the book is very much not an RFP-like "check which features your tool offers" system. And his rationale for that seems just as applicable (to us, at least!) to any data platform selection, be it a digital/product analytics platform, a BI tool, a database or storage infrastructure, or, well, you name it! Ultimately, the challenge is how to go about getting a reasonably strong understanding of the philosophy and historical roots of each platform being considered and then marrying that up with the foundational priorities and needs of the organization. Is that a lot harder than a feature checklist? Yes. But them's the breaks.

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    17 March 2026, 4:30 am
  • 1 hour 4 minutes
    #292: AI Without Adult Supervision with Aubrey Blanche

    As Kevin McCallister once taught us: just because the house is still standing doesn't mean everything's under control. Everyone's racing to adopt AI, but has anyone actually read the fine print? For this year's International Women's Day episode, we are joined by Aubrey Blanche to unpack the hype, the hidden tradeoffs, and the quiet ways teams are giving up agency in the name of "productivity." We explore how data and tech teams are uniquely prepared and positioned to ask better questions, measure what really matters, and avoid letting the AI teenager run the house. Learn more about "phantom value" and why faster isn't always better… or even cheaper!

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    3 March 2026, 5:30 am
  • 1 hour 2 minutes
    #291: The Data Work that Lives in the Shadows

    We know what the work of the data practitioner is, right? It's everything from managing data ingestion to data governance to report development to experimental design to basic and advanced analytics. It's writing (or vibe-writing?) SQL or Python or R while also being adept at whatever data stack—no matter how modern—is at hand. Of course, it's a lot more, too! And that's the topic of this episode: the unofficial, often unheralded, but often quite important "shadow work" of the analyst—the myriad tasks required to effectively glue together all the data work that occurs out in broad daylight to enable the data to truly be useful at driving the business forward.

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    17 February 2026, 5:30 am
  • 1 hour 6 minutes
    #290: Always Be Learning

    From a professional development perspective, you should always be learning: listening to podcasts, reading books, connecting with internal colleagues, following useful people on Medium and LinkedIn, and so on. Did we mention listening to podcasts? Well, THIS episode of THIS podcast is not really about that kind of learning. It's more about the sort of organizational learning that experimentation and analytics are supposed to deliver. How does a brand stay ahead of its competitors? One surefire way is to get smarter about its customers at a faster rate than its competitors do. But what does that even mean? Is discovering that the MVP of a hot new feature…doesn't look to be moving the needle at all a learning? Our guest, Mårten Schultzberg from Spotify, makes a compelling case that it is! And the co-hosts agree. But it's tricky.

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    3 February 2026, 5:30 am
  • 1 hour 10 minutes
    #289: The Imperative of Developing Business Acumen

    That darn data. It's so complicated and fragmented and gap-filled and noisy that no amount of time is ever enough to truly get to the bottom of all of its complexity. As a result, it's pretty easy to fill all of our time handling as much of that underlying data messiness as possible. At what cost, though? It's easy for the analyst's connection to the business to suffer as they get mired (too) deeply in the data and lose sight of the broader business needs. In this episode, the gang had a chat about business acumen—what it is, how to develop it, and why it's a must-have for any data or analytics role.

    This episode's Measurement Bite from show sponsor Recast is a brief explanation of identifiability—what it is and how to check for it using simulation—from Michael Kaminsky!

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    20 January 2026, 5:30 am
  • 1 hour 39 seconds
    #288: Our LLM Suggested We Chat about MCP. Kinda' Meta, No?

    If there's one thing that we absolutely knew would be coming along with the increased interest and use of AI, it would be… more acronyms! And, along with the acronyms, we could pretty much predict that we'd see a lot of online flexing through casual dropping of said acronyms as though they're deeply understood by everyone who's anyone. We tackled one such acronym on this episode: MCP! That's "model context protocol" for those who like their acronyms written out, and Sam Redfern joined us to help us wrap our heads around the topic. You see, MCP is kinda' like some other more familiar acronyms like API and XML. But, it's also like… fingers? Sam's enthusiasm and explanation certainly had us ready to dive in!

    This episode's Measurement Bite from show sponsor Recast is an explanation of model robustness from Michael Kaminsky!

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    6 January 2026, 5:30 am
  • 1 hour 49 seconds
    #287: 2025 Year in Review

    It's the most…won…derful…tiiiiime…of the year! And by that, we mean it's the time of the year when we sit back, look at each other, and ask, "Where did all the time go?!" We brought back a very special someone for this episode as we collectively reflected on the year—show highlights (and what about those shows have stuck with us), industry reflections, and a little shameless shilling for Tim's book (are you still short on a few stocking stuffers? Order now…!).

    This episode's Measurement Bite from show sponsor Recast is a brief explanation of Granger causality (and how it's NOT actually a causal measure!) from Michael Kaminsky!

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    23 December 2025, 5:30 am
  • 55 minutes 44 seconds
    #286: Metrics Layers. Data Dictionaries. Maybe It's All Semantic (Layers)? With Cindi Howson

    Semantic layers are having something of a moment, but they're not actually new as a concept. Ever since the first database table was designed with cryptic field names that no business user could possibly understand, there's been a need for some form of mapping and translation. Should every company be considering employing a semantic layer? Is the idea of a single, comprehensive semantic layer within an organization a monolithic concept that is doomed to fail? These questions and more get bandied about on this episode, where we were joined by industry legend Cindi Howson, Chief Data & AI Strategy Officer at ThoughtSpot.

    This episode's Measurement Bite from show sponsor Recast is an explanation of multicollinearity from Michael Kaminsky!

    For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

    9 December 2025, 5:30 am