• 58 minutes 6 seconds
    #357 Data-Driven Workforce Analytics with Ben Zweig, CEO at Revelio Labs

    The data field has changed shape faster than almost any other. The role that used to be a statistician became a data scientist, became an ML engineer, and is now morphing into AI engineer. Consulting firms are hiring fewer entry-level analysts and more vibe-coders who can ship AI systems to production. For data and AI professionals, this raises immediate questions. Which parts of the work are most exposed to automation, and which are not? Where should you invest your time? And which backgrounds are now producing the strongest hires, whether you are building a team or trying to join one?

    Ben Zweig is the CEO and Co-Founder of Revelio Labs, where he leads the development of a universal HR database built on over a billion public employment profiles and more than 5 billion job postings. He holds a PhD in Economics from the CUNY Graduate Center and teaches Data Science and The Future of Work at NYU Stern. Before founding Revelio Labs, he managed Workforce Analytics projects in the IBM Chief Analytics Office and worked as a data scientist at an emerging-markets hedge fund. He is the author of Job Architecture: Building a Workforce Intelligence Taxonomy.

    In the episode, Richie and Ben explore why hiring is a broken two-sided market, why jobs are bundles of tasks, not skills, building universal taxonomies from billions of job postings, which data careers resist AI, advice for hiring data talent, when traditional NLP beats LLMs, and much more.

    27 April 2026, 9:00 am
  • 53 minutes 33 seconds
    #356 The Forecast for Time Series Forecasts with Rami Krispin, Senior Manager of Data Science at Apple

    Time series data is everywhere — from inventory systems and energy grids to financial planning and product demand. As data volumes grow, the old ways of building individual forecasting models simply don't scale. How do you forecast hundreds of thousands of products without spending months on manual modeling? How do you know when to trust automation and when to step in? And what does it actually take to produce forecasts that business stakeholders will act on?

    Rami Krispin is Senior Manager of Data Science and Engineering at Apple Finance, where he leads teams working at the intersection of statistical modeling, machine learning, and production forecasting. He is the author of Hands-On Time Series Analysis with R, an open-source contributor, a Docker Captain, and an instructor. He holds an MA in Applied Economics and an MS in Actuarial Mathematics from the University of Michigan, where he began his journey learning time series on DataCamp — before going on to build his own course there.

    In the episode, Richie and Rami explore time series foundation models and the case for scaling, traditional versus modern forecasting approaches, feature engineering in the business world, backtesting and model selection, risk management in automated forecasting, communicating forecast uncertainty to stakeholders, the evolving role of data scientists as architects, and much more.

    20 April 2026, 9:00 am
  • 52 minutes 37 seconds
    #355 AI's Impact on Databases with Shireesh Thota, CVP of Databases at Microsoft

    Cloud data platforms now offer hundreds of services, plus a growing menu of SQL, NoSQL, and open source options. Unified environments promise a simpler path, but the hard trade-offs—consistency versus scale, single-writer versus sharded, RPO/RTO targets—still matter. In daily work, you may be deciding between SQL Server, Postgres, and a globally distributed JSON store, while also asking AI tools to draft queries and spot issues. Should you still learn SQL if an agent can write it? How do you validate the intent, performance, and security of generated queries? And can monitoring agents actually reduce on-call pain without taking away needed control?

    Shireesh is the CVP of Databases at Microsoft. He leads product management, engineering, and cloud operations for Azure Databases as well as App Development for Microsoft Fabric. The products in his team’s portfolio include Azure SQL Database (on-premises, hybrid, and cloud), Azure Cosmos DB, Azure Database for PostgreSQL, and Azure Database for MySQL.

    Previously, as Senior Vice President at SingleStore, Shireesh was responsible for the company's end-to-end engineering and product vision. Before SingleStore, Shireesh was a founding member of Cosmos DB, where he architected, designed, and directly contributed to multiple key components of the service.

    Shireesh has more than 20 years of experience building large-scale, scale-out distributed systems, both relational and schema-agnostic, across SQL, Azure Cosmos DB, and PostgreSQL/Citus.

    In the episode, Richie and Shireesh explore how AI agents are reshaping data stacks, why unified platforms like Fabric matter, how semantic models and ontologies reduce confusion in metrics, SQL and NoSQL choices on Azure, Postgres to Cosmos DB with guidance for builders, and much more.

    13 April 2026, 9:00 am
  • 46 minutes 24 seconds
    #354 Beyond BI: Decision Intelligence with Graphs with Jamie Hutton, CTO at Quantexa

    Decision intelligence is showing up across data and AI teams as companies move beyond dashboards to decisions made with context. Graphs, entity resolution, and better data products are becoming core tools as messy, siloed data meets stricter risk and compliance needs. In day-to-day work, this means linking “James,” “Jim,” and “Jamie” across systems, enriching records with third‑party sources, and pushing models where the data already lives in your lakehouse. How do you trust your customer counts? Which links in a graph matter, and which are noise? And can graph-based context reduce LLM hallucinations enough for regulated decisions with humans still in the loop?

    Jamie Hutton is the Co-founder and Chief Technology Officer of Quantexa, where he leads the company’s global research and development organization in advancing its market-leading Decision Intelligence Platform. With over two decades of experience pioneering data-driven technologies, Jamie has been at the forefront of innovations that connect and unify data at scale to solve complex real-world challenges. He is the creator of dynamic Entity Resolution, a pioneering capability that has redefined how the world’s leading organizations transform raw data into trusted, decision-ready intelligence. This innovation enables enterprises to prepare their data for AI, uncover new revenue streams, and expose hidden connections in even the most sophisticated criminal networks. By providing the foundation for accurate, explainable, and actionable insights, Jamie’s work has empowered governments, financial institutions, and global enterprises to make faster, smarter, and more confident decisions.

    Prior to co-founding Quantexa, Jamie held senior technology and analytics leadership roles at SAS and Detica, where he delivered mission-critical solutions for organizations operating in some of the most complex and high-stakes environments in the world. Jamie holds a First-Class master’s degree in computer engineering and is recognized as a leading authority in contextual analytics, data integration, and applied AI for mission-critical decision-making.

    In the episode, Richie and Jamie explore decision intelligence beyond BI, entity resolution across siloed data, building context graphs for fraud, AML, credit risk, and growth, how graph analytics separates meaningful links from noise, graph-RAG for LLMs to cut hallucinations, human-in-the-loop workflows, ways to start today, and much more.

    6 April 2026, 9:00 am
  • 49 minutes 46 seconds
    #353 The Data Team's Agentic Future with Ketan Karkhanis, CEO at ThoughtSpot

    Data and AI platforms are racing toward agentic and even autonomous analytics. But the bottleneck is rarely the model; it’s data readiness: governed metrics, clear metadata, and a semantic layer machines can read. For data engineers and analysts, this shifts work from hand-built SQL and dashboard tweaks to designing meaning and trust. If an agent can draft column descriptions, propose a model for a new business question, and build the first dashboard layout, where do you add the most value? What do you measure to prove ROI in 30 days? And how do you prevent “shiny demos” from driving strategy too early?

    Ketan Karkhanis is the CEO of ThoughtSpot. Prior to joining the company in September 2024, Ketan was the Executive Vice President and General Manager of Sales Cloud at Salesforce. He returned to Salesforce in March 2022 after serving as COO of Turvo, an emerging supply-chain collaboration platform. Before that, Ketan spent nearly a decade at Salesforce, where he led product areas across Sales, Service Cloud, Lightning Platform, and Analytics. As Senior Vice President & GM of Einstein Analytics, he drove innovation, customer success, and business acceleration from launch to over $300M and a 30,000-strong user community. Prior to Salesforce, Ketan was at Cisco Systems, where he led technology initiatives spanning Customer Advocacy, Cisco Certifications & eLearning.

    In the episode, Richie and Ketan explore AI agents for analytics, why “self‑service BI” often fails, using agents to answer questions, build dashboards, and automate data modeling, how analyst and engineer roles shift toward governance and agent design, how transparency, culture, and ROI drive safe adoption, and much more.

    30 March 2026, 9:00 am
  • 56 minutes 8 seconds
    #352 AI Agents at Work: What Actually Breaks (and How to Fix It) with Danielle Crop, EVP Digital Strategy & Alliances at WNS

    AI agents are spreading across the data and AI industry, promising to automate everything from research to outreach. At the same time, teams are learning that these tools can hallucinate, leak data, or act in surprising ways. In day-to-day work, the challenge is deciding which tasks to hand off, what data to share, and how to keep the output trustworthy. Do your agents actually add value, or just add noise? Are they running in a secured, ring-fenced environment? How do you balance playful experimentation with critical checking when an agent confidently gets a key fact wrong?

    Danielle leads go-to-market strategy at WNS, Capgemini's AI transformation services arm. Previously, Danielle was Chief Data Officer at American Express and Albertsons. She also writes The Remix Substack on technology trends and is an Editorial Board Member for CDO Magazine.

    In the episode, Richie and Danielle explore AI agents at work, experimentation with guardrails, data privacy, access, tone controls, OpenClaw automation wins and failures, token costs, tying AI plans to P&L strategy, shifts in careers and hiring, how data teams handle unstructured data governance, and much more.

    Links Mentioned in the Show:

    1. WNS
    2. Connect with Danielle
    3. AI-Native Course: Intro to AI for Work
    4. Catch Danielle speaking at RADAR—April 1
    5. Related Episode: AI Agents Are the New Shadow IT (And Your Governance Isn’t Ready) with Stijn Christiaens, CEO at Collibra
    6. Explore AI-Native Learning on DataCamp

    New to DataCamp?

    1. Learn on the go using the DataCamp mobile app

    Empower your business with world-class data and AI skills with DataCamp for business

    23 March 2026, 9:00 am
  • 1 hour 3 minutes
    #351 Will World Models Bring us AGI? with Eric Xing, President & Professor at MBZUAI

    World models are emerging as the next step after large language models, pushing AI from book knowledge toward systems that can simulate the physical and social world. Instead of just generating text or short videos, the goal is steerable simulation with long-horizon consistency and planning. For practitioners, this raises practical choices: what data and representations do you need, and when do you mix symbolic reasoning with generative models? How do you test whether a model can follow actions over minutes, not seconds? And where do you start—robotics, driving safety, or synthetic data generation?

    Professor Eric Xing is President of Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and a world-leading computer scientist whose work spans statistical machine learning, distributed systems, computational biology, and healthcare AI. A fellow of AAAI, IEEE, and the American Statistical Association, he has authored over 400 research papers cited more than 44,000 times. Before MBZUAI, Eric was a Professor of Computer Science at Carnegie Mellon University, where he also founded the Center for Machine Learning and Health. He is the founder and chief scientist of Petuum Inc., recognized as a World Economic Forum Technology Pioneer, and has held visiting roles at Stanford and Facebook. He holds PhDs in both Molecular Biology and Computer Science.

    In the episode, Richie and Eric explore world models as simulators for action, the jump from book intelligence to physical and social skills, why long-horizon planning is still hard, architectures, robots, data generation, open K2 Think LLMs, virtual-cell biology, and much more.

    Links Mentioned in the Show:

    1. MBZUAI
    2. Pan World Model
    3. Connect with Eric
    4. AI-Native Course: Intro to AI for Work
    5. Related Episode: Developing Better Predictive Models with Graph Transformers with Jure Leskovec, Pioneer of Graph Transformers, Professor at Stanford
    6. Explore AI-Native Learning on DataCamp

    New to DataCamp?

    1. Learn on the go using the DataCamp mobile app

    Empower your business with world-class data and AI skills with DataCamp for business

    16 March 2026, 9:00 am
  • 1 hour 10 minutes
    #350 How to Make Hard Choices in AI with Atay Kozlovski, Researcher at the University of Zurich

    Across the AI industry, high-stakes tools are being deployed in places where errors can harm people: sepsis alerts in hospitals, identity checks, welfare fraud detection, immigration enforcement, and recommendation systems that shape life outcomes. The pattern is familiar: scale and speed go up, while human review becomes rushed, shallow, or punished for disagreeing. In daily work, that can look like a nurse forced to act on false alarms, or a team using an LLM summary in ways the designers never planned. When should you slow down deployment? How do you detect new “wild” use cases early? And what does responsible tracking and oversight look like under real pressure?

    Atay Kozlovski is a Postdoctoral Researcher at the University of Zurich’s Center for Ethics. He holds a PhD in Philosophy from the University of Zurich, an MA in PPE from the University of Bern, and a BA from Tel Aviv University. His current research focuses on normative ethics, hard choices, and the ethics of AI.

    In the episode, Richie and Atay explore why AI failures keep happening, from automation bias to opaque targeting and hiring models. They unpack “meaningful human control,” accountability, and design in healthcare, government, and warfare. You’ll also hear about deepfakes, consent, digital twins, and AI-driven civic engagement, and much more.

    Links Mentioned in the Show:

    1. “Lavender” IDF recommendation system
    2. Amnesty International reports on AI/automation in welfare systems
    3. “Meaningful Human Control” (MHC) framework
    4. Connect with Atay
    5. AI-Native Course: Intro to AI for Work
    6. Related Episode: Harnessing AI to Help Humanity with Sandy Pentland, HAI Fellow at Stanford
    7. Explore AI-Native Learning on DataCamp

    New to DataCamp?

    1. Learn on the go using the DataCamp mobile app

    Empower your business with world-class data and AI skills with DataCamp for business

    9 March 2026, 9:00 am
  • 52 minutes 56 seconds
    #349 From AI Governance to AI Enablement with Stijn Christiaens, Chief Data Citizen at Collibra

    Data governance has been around long enough to develop playbooks, but AI governance is evolving in real time. Industry trends like LLMs, agents, and emerging “swarms” are changing what oversight even means, from data lineage to agent-to-agent provenance.

    For working teams, the questions are immediate: who leads—legal, security, IT, data, or a new AI role? How do you set standards so engineers aren’t using a different tool for every task? What maturity framework should you measure against, and how often should you reassess as technology shifts? How do you help teams move fast without breaking trust?

    Stijn is a data governance veteran and one of the leading thinkers in the space. He runs data strategy, data infrastructure, and product evangelism at the data and AI governance company Collibra. Since founding Collibra 18 years ago, Stijn has held several executive positions, including COO and CTO.

    In the episode, Richie and Stijn explore AI governance failures and wins, risks from agents that can act on systems, creating visibility with an agent registry, how AI governance differs from data governance, ownership across legal, security, IT, and data teams, EU AI Act risk tiers, and much more.

    Links Mentioned in the Show:

    1. Collibra
    2. Connect with Stijn
    3. AI-Native Course: Intro to AI for Work
    4. Related Episode: The New Paradigm for Enterprise AI Governance with Blake Brannon, Chief Innovation Officer at OneTrust
    5. Explore AI-Native Learning on DataCamp

    New to DataCamp?

    1. Learn on the go using the DataCamp mobile app

    Empower your business with world-class data and AI skills with DataCamp for business

    5 March 2026, 9:00 am
  • 44 minutes 22 seconds
    #348 AI Agents in Your Systems: Speed, Security, and New Access Risks with Jeremy Epling, CPO at Vanta

    Automation is moving from APIs to full “computer use,” where agents click through screens like a human. That power is transforming evidence collection, access reviews, and repetitive security tasks, but it also raises new risk. In everyday workflows, the safest gains often start with read-only actions, sandboxes, and clear opt-in for anything that writes changes. Do your tools know when an access request is an anomaly? Can you keep humans in the loop with fast review-and-approve steps? And if an agent can browse your systems, how do you stop data from walking out the door before customers or attackers notice?

    Jeremy Epling is Chief Product Officer at Vanta, where he leads product strategy and execution for the company’s trust management platform. He focuses on helping organizations automate security and compliance, enabling them to build and scale with confidence.

    Previously, he was VP of Product at GitHub, overseeing Actions, Codespaces, npm, and Packages—core components of the modern developer workflow used by millions worldwide. Before GitHub, Jeremy spent more than 16 years at Microsoft, leading product teams across Azure DevOps Pipelines and Repos, OneDrive, Outlook, Windows, and Internet Explorer. His work has centered on developer platforms, cloud infrastructure, and productivity tools at global scale.

    In the episode, Richie and Jeremy explore AI-driven security risks, vendor data use and trade-secret leakage, governance and access controls, compliance beyond audits, how agents automate security questionnaires and vendor reviews, how to ship faster safely, human-in-the-loop design, “computer use” automation, and much more.

    Links Mentioned in the Show:

    1. Vanta
    2. Vanta State of Trust Report
    3. Connect with Jeremy
    4. AI-Native Course: Intro to AI for Work
    5. Related Episode: Governing Pandora's Box: Managing AI Risks with Andrea Bonime-Blanc, CEO at GEC Risk Advisory
    6. Explore AI-Native Learning on DataCamp

    New to DataCamp?

    1. Learn on the go using the DataCamp mobile app
    2. Empower your business with world-class data and AI skills with DataCamp for business

    2 March 2026, 9:00 am
  • 45 minutes 57 seconds
    #347 Let's Get Physical with AI with Ivan Poupyrev, CEO at Archetype AI

    Physical AI is showing up across the industry as sensors, connected devices, and foundation models move from the cloud into the real world. After years of IoT wiring everything to the internet, the big shift is turning raw measurements and video into meaning, not just dashboards. For day-to-day teams, that changes how you monitor equipment, detect failures, and decide what to do next. When thousands of sensor streams hit storage, who turns them into insights and recommendations fast enough to matter? Can one model generalize across different sensors and conditions? And what must run on the asset versus the cloud?

    Dr. Ivan Poupyrev is CEO and Founder of Archetype AI, where he is building a multimodal AI foundation model that combines real-time sensor data and natural language to help people and organizations better understand and act on the physical world. The company is developing a developer platform to unlock new applications of Physical AI across industries.

    Previously, he was Director of Engineering at Google’s Advanced Technology and Projects (ATAP) division, where he founded and led large cross-functional teams to create Soli, a radar-based sensing platform, and Jacquard, a connected apparel platform powered by smart textiles and embedded ML. These technologies shipped in more than 15 products across 33 countries, including collaborations with Levi’s, YSL, Adidas, and Samsonite, and were integrated into flagship devices such as Pixel 4 and Nest products. His work has been widely published, recognized with major international awards, and featured in global media.

    In the episode, Richie and Ivan explore physical AI beyond robotics, turning IoT sensor streams into insights, recommendations, and automation, why physical foundation models differ from LLMs, sensor-fusion wins like wind-turbine failure alerts, edge deployment and privacy, how to pick a first project in practice, and much more.

    Links Mentioned in the Show:

    1. Archetype AI
    2. Attention Is All You Need (Original Transformer Architecture Paper)
    3. A Mathematical Theory of Communication (Shannon, 1948)
    4. Connect with Ivan
    5. AI-Native Course: Intro to AI for Work
    6. Related Episode: Enterprise AI Agents with Jun Qian, VP of Generative AI Services at Oracle
    7. Explore AI-Native Learning on DataCamp

    New to DataCamp?

    1. Learn on the go using the DataCamp mobile app
    2. Empower your business with world-class data and AI skills with DataCamp for business

    23 February 2026, 9:00 am