Detection at Scale

Panther Labs

The Detection at Scale Podcast helps security practitioners succeed at managing and responding to threats at a modern, cloud scale.

  • 37 minutes 48 seconds
    Google's Michael Sinno on Autonomous Detection at 7 Trillion Logs Per Day

What does it actually take to automate security operations when you're processing 7 trillion log lines daily and a single missed threat could compromise billions of users? Michael Sinno, Director of Detection & Response at Google, explains how his team handles this scale through strategic AI implementation, with less than 1% of tickets requiring human intervention. He explores Google's methodical approach to AI autonomy, including fine-tuned models trained on golden datasets, validation through overseer agents, and the critical distinction between traditional automation and agentic AI that exercises judgment.

    Michael also discusses groundbreaking work with Sec-Gemini and Timesketch that enables forensic analysis to surface attack patterns humans would never detect manually. He shares concrete metrics, like reducing executive incident notifications from 30 minutes to 90 seconds, achieving 95% precision in ticket deduplication, and cutting vulnerability coordination from hours to minutes through automation.

    Topics discussed:

    • Processing 7 trillion log lines daily at Google, with less than 1% of a million annual tickets requiring human intervention

    • Strategic evolution from AI-assisted to AI-led to autonomous security operations using fine-tuned models and golden datasets

    • Building modular detection agents as pluggable components that can be combined like Legos for specific security use cases

    • Implementing quality assurance through overseer agents that review other agents' work to ensure precision in security decisions

    • Reducing executive incident notifications from 30 minutes to 90 seconds using AI-powered summarization and context gathering

    • Achieving 95% precision in ticket deduplication while managing the trade-off against a 38% recall rate

    • Integrating Sec-Gemini with Timesketch to surface attack patterns in forensic investigations that humans would never find manually

    • Shifting from traditional detection and response to infer-and-interrupt models that contain threats immediately before escalation

    • Automating vulnerability coordination workflows from hours to minutes through AI-powered data collection and impact analysis

    • Distinguishing between traditional automation and agentic AI that exercises judgment rather than following if-then logic

    • Setting a stretch goal of 70% automation in operations work while focusing humans on novel and complex security challenges

    • Measuring success through time-to-mitigation metrics and evaluating AI performance against human baseline capabilities

    Listen to more episodes: 

    Apple 

    Spotify 

    YouTube

    Website

    24 February 2026, 3:45 pm
  • 33 minutes 57 seconds
    Block's CISO James Nettesheim on How 40% of Their Detections Are Now Written with AI

    What if the real risk isn't adopting AI agents, but refusing to? James Nettesheim, CISO & Head of Enterprise Technology at Block, argues that principled risk-taking beats playing it safe. James shares Block's journey co-designing the Model Context Protocol with Anthropic and building Goose, their open-source general-purpose agent that enables anyone in the company to write security detections using natural language.

    James also explores Block's Binary Intelligent Triage system achieving 99.9% accuracy, their data safety levels framework, and practical strategies for balancing autonomous AI capabilities with human oversight. James offers candid insights about implementing AI security principles, the evolution from tool experts to domain experts, and why open source remains fundamental to Block's mission of economic empowerment and technological innovation. 

    Topics discussed:

    • Co-designing MCP with Anthropic and developing Goose as an open-source, general-purpose AI agent

    • Implementing prompt injection defenses and adversarial AI concepts to harden Goose against malicious instructions and attacks

    • Rolling out AI responsibly through data safety levels modeled after CDC bio-contamination protocols for sensitive data handling

    • Democratizing detection engineering by enabling anyone at Block to write detections using natural language

    • Achieving 40% of new detections created with AI assistance through recipes, playbooks, and automated tuning capabilities

    • Building Binary Intelligent Triage system that analyzes historical alerts and investigations to achieve 99.9% automated triage accuracy

    • Balancing autonomous AI capabilities with human oversight, requiring PR reviews and maintaining accountability for agent-generated code

    • Transitioning from tool expertise to domain expertise as the future skill set needed for detection and response professionals

    • Block's commitment to open source development driven by economic empowerment mission and desire to build accessible financial tools 


    10 February 2026, 11:00 am
  • 41 minutes 27 seconds
    Compass' Ryan Glynn on Why LLMs Shouldn't Make Security Decisions — But Should Power Them

    Ryan Glynn, Staff Security Engineer at Compass, has a practical AI implementation strategy for security operations. His team built machine learning models that removed 95% of on-call burden from phishing triage by combining traditional ML techniques with LLM-powered semantic understanding. 

    He also explores where AI agents excel versus where deterministic approaches still win, why tuning detection rules beats prompt-engineering agents, and how to build company-specific models that solve your actual security problems rather than chasing vendor promises about autonomous SOCs.

    Topics discussed:

    • Language models excel at documentation and semantic understanding of log data for security analysis purposes
    • Using LLMs to create binary feature flags for machine learning models enables more flexible detection engineering
    • Agentic SOC platforms sometimes claim to analyze data that, in practice, they aren't actually querying accurately
    • Tuning detection rules directly proves more reliable than trying to prompt-engineer agent analysis behavior
    • Intent classification in email workflows helps automate triage of forwarded and reported phishing attempts effectively
    • Custom ML models addressing company-specific burdens can achieve 95% reduction in analyst workload for targeted problems
    • Alert tagging systems with simple binary classifications enable better feedback loops for AI-assisted detection tuning
    • Context gathering costs in security make efficiency critical when deploying AI agents across diverse data sources
    • Query language complexity across SIEM platforms creates challenges for general-purpose LLM code generation capabilities
    • Explainable machine learning models remain essential for security decisions requiring human oversight and accountability


    27 January 2026, 2:34 pm
  • 37 minutes 55 seconds
    Veeva Systems' Mike Vetri on Building Threat Operations Teams and AI-Powered Investigations

    Mike Vetri, Sr. Director of Security Operations at Veeva Systems, reflects on transforming SOC investigations through AI-powered data aggregation and building threat operations teams with the analytical mindset required for proactive defense. Mike introduces the C3 Matrix framework for prioritizing security efforts across centers of gravity, crown jewels, and capability enablers, and explains the seven Ds of cyber defense from discovery through deception operations. 

    Drawing from 10+ years of Air Force cyber intelligence experience, Mike details why threat operations requires fundamentally different system-two thinking than detection engineering, and how this discipline shift moves organizations from reactive firefighting to proactive threat anticipation. He covers practical examples of AI cutting investigation time by aggregating data from multiple tools, the importance of defense in personnel for operational resilience, and strategies for preventing analyst burnout while maintaining effective security operations. 

    Topics discussed:

    • How AI transforms insider threat investigations by aggregating workstation logs, browsing history, and DLP alerts into single queries
    • The C3 Matrix framework prioritizes security controls across centers of gravity, crown jewels, and capability enablers based on organizational impact and recoverability
    • Why threat operations requires system-two analytical thinking fundamentally different from the engineering mindset
    • The seven Ds of cyber defense: discover, detect, deny, disrupt, degrade, destroy, and deception operations for comprehensive threat mitigation
    • How deception operations provide the most accurate intelligence by studying adversary behavior in controlled environments
    • The distinction between threat intelligence and threat operations, and why mature SOCs need teams focused on proactive defense strategies
    • Defense in personnel ensures multiple team members can handle each security capability, preventing single points of failure
    • Time-sensitive investigation scenarios where AI delivers maximum ROI by eliminating the need to manually query dozens of security tools
    • The evolution of cyber threats from technical attacks to psychological warfare using AI to challenge human judgment and decision-making
    • Why security culture must extend beyond traditional boundaries as AI-powered threats increasingly target HR processes, financial operations, and business functions


    13 January 2026, 1:02 pm
  • 37 minutes 43 seconds
    Trustpilot's Gary Hunter on Structuring Security Knowledge for AI Success

    Gary Hunter, Head of Security Operations at Trustpilot, built a security team from scratch at a company synonymous with trust. Gary shares how his ten-person team leverages AI agents across alert triage, multimodal brand protection, and incident response. 

    He explores why he and his team treat AI agents like interns with codified guardrails, why competitive prompt testing reveals the best approaches, and how restricting AI to specific documentation sets prevents confusion. Gary also offers his tips on building weatherproof team members who adapt to any technology shift and reflects on why constraints breed creativity in resource-limited environments.

    Topics discussed:

    • Building security operations from scratch by identifying pain points, understanding technology gaps, and systematically increasing detection coverage and visibility
    • Leveraging AI agents for alert triage and workflows to enable teams to run as fast as attackers while maintaining appropriate human oversight
    • Implementing competitive prompt testing by running multiple AI models to identify the most effective approach before deployment
    • Creating cultural buy-in for AI adoption by empowering team members to contribute prompts and democratizing learning across skill levels
    • Using AI for multimodal brand protection, analyzing screenshots and HTML content to score potential infringements and automate response workflows appropriately
    • Treating AI agents like interns, codifying processes, and limiting tool access based on what you'd delegate to junior team members
    • Building detection strategies that focus on behaviors and crown jewels while using AI to triage noisy but potentially valuable alerts
    • Documenting institutional knowledge concisely rather than overwhelming AI models with extensive documentation that creates conflicting or irrelevant responses
    • Shifting team focus from alert triaging to high-impact prevention work, vendor management, and building relationships across the business 


    23 December 2025, 11:09 am
  • 35 minutes 51 seconds
    Vjaceslavs Klimovs on Why 40% of Security Work Lacks Threat Models

    Vjaceslavs Klimovs, Distinguished Engineer at CoreWeave, reflects on building security programs in AI infrastructure companies operating at massive scale. He explores how security observability must be the foundation of any program, how to ensure all security work connects to concrete threat models, and why AI agents will make previously tolerable security gaps completely unacceptable. 

    Vjaceslavs also discusses CoreWeave's approach to host integrity from firmware to user space, the transition from SOC analysts to detection engineers, and building AI-first detection platforms. He shares insights on where LLMs excel in security operations, from customer questionnaires to forensic analysis, while emphasizing the continued need for deterministic controls in compliance-regulated environments.

    Topics discussed:

    • The importance of security observability as the foundation for any security program, even before data is perfectly parsed.
    • Why 40 to 50 percent of security work across the industry lacks connection to concrete threat models or meaningful risk reduction.
    • The prioritization framework for detection over prevention in fast-moving environments due to lower organizational friction.
    • How AI agents will expose previously tolerable security gaps like over-provisioned access, bearer tokens, and lack of source control.
    • Building an AI-first detection platform with assistance for analysis, detection writing, and forensic investigations.
    • The transition from traditional SOC analyst tiers to full-stack detection engineering with end-to-end ownership of verticals.
    • Strategic use of LLMs for customer questionnaires, design doc refinement, and forensic analysis.
    • Why authentication and authorization systems cannot rely on autonomous AI decision-making in compliance-regulated environments requiring strong accountability.
    9 December 2025, 5:32 pm
  • 36 minutes 16 seconds
GreenSky's Ken Bowles on Auditing Controls Before They Silently Fail

    Over his 15-year journey through healthcare and financial services security, Ken Bowles, now Director of Security Operations at GreenSky, has collected a plethora of practical strategies for prioritizing crown jewels, managing cloud over-permissions, and building SOCs that scale effectively. He reflects on transforming security operations through AI and intelligent automation and discusses how AI is reducing analyst investigation time dramatically.

    Ken also asserts the importance of auditing security controls before they silently fail. The conversation touches on the evolving role of the MITRE framework, the concept of signaling versus alerting, and why embracing AI might be the best career move for security professionals navigating rapid technological change in cloud environments.

    Topics discussed:

    • Building security operations programs around crown jewels and scaling outward to manage the most critical assets first.
    • Managing over-permissions in cloud environments that have snowballed across multiple administrators without proper governance.
    • Using AI to reduce analyst investigation time from 30 minutes to seconds through intelligent data enrichment and context.
    • Creating true single-pane-of-glass visibility by connecting security tools and data sources for more effective threat detection.
    • Training new security analysts with AI assistance to bridge knowledge gaps in SQL, SOAR platforms, and log analysis.
    • Documenting institutional knowledge while encouraging analysts to trust their intuition when something doesn't look right.
    • Understanding the limitations of impossible travel alerts and using AI to establish user behavior baselines for accurate detection.
    • Applying the MITRE framework as a guideline rather than gospel, adapting detection strategies to specific organizational needs.
    • Implementing signaling approaches that label security-relevant events without creating alert fatigue for security operations teams.
    • Auditing security controls regularly to catch configuration drift and ensure protective measures remain effective over time. 


    25 November 2025, 12:44 pm
  • 39 minutes 34 seconds
    FanDuel's Tyler Martin on the Bronze-Silver-Gold Path to Autonomous Security Triage

    Tyler Martin, Senior Director of Enterprise Security Engineering & Operations at FanDuel, reflects on revolutionizing security operations by replacing traditional analyst tiers with security engineers supported by custom AI agents. Tyler shares the architecture behind SAGE, FanDuel's phishing automation system, and explains how his team progressed from human-in-the-loop validation to fully autonomous triage through bronze-silver-gold maturity stages. 

    The conversation explores practical challenges like context enrichment, implementing user personas connected to IDP and HRIS systems, and choosing between RAG versus CAG models for knowledge augmentation. Tyler also discusses shifts in detection strategy, arguing for leaner detection catalogs with just-in-time, query-based rules over maintaining point-in-time codified detections that no longer address active risks.

    Topics discussed:

    • Restructuring security operations teams to include only security engineers while AI agents handle traditional level 1-3 triage work.
    • Building Security Analysis and Guided Escalation, an AI-powered phishing automation system that reduced manual ticket volume.
    • Implementing bronze-silver-gold maturity stages for AI triage: manual validation, automated closures with oversight, and full autonomous operations.
    • Enriching AI agents with organizational context through connections to IDP systems, HRIS platforms, and user behavior analytics.
    • Creating user personas that encode access patterns, permissions, security groups, and typical behaviors to improve AI decision-making accuracy.
    • Designing incident response automation that spins up Slack channels, Zoom bridges, recordings, and comprehensive documentation through simple commands.
    • Eliminating 90% of missing PIR action items through automated documentation capture and stakeholder tagging in Confluence.
    • Shifting detection strategy from maintaining large MITRE-mapped catalogs to just-in-time query-based rules written by AI agents.
    • Balancing signal volume and enrichment data against inference costs while avoiding context rot that degrades LLM performance.
    • Evaluating RAG versus CAG models for knowledge augmentation and exploring multi-agent architectures with supervisory oversight layers. 


    11 November 2025, 12:27 pm
  • 31 minutes 46 seconds
Live Oak Bank's George Werbacher on AI as SecOps' Single Pane of Glass

    George Werbacher, Head of Security Operations at Live Oak Bank, reviews the practical realities of implementing AI agents in security operations, sharing his journey from exploring tools like Cursor and Claude Code to building custom agents in-house. He also reflects on the challenges of moving from local development to production-ready systems with proper durability and retry logic.

    The conversation explores how AI is changing the security analyst role from alert analysis to deeper investigation work, why SOAR platforms face significant disruption, and how MCP servers enable natural language interactions across security tools. George offers pragmatic advice on cutting through AI hype, emphasizing that agents augment rather than replace human expertise while dramatically lowering barriers to automation and query language mastery.

    Through technical insights and leadership perspective, George illuminates how security teams can embrace AI to improve operational efficiency and mean time to detect without inflating budgets, while maintaining the critical human judgment that effective security demands.

    Topics discussed:

    • Understanding AI's role in augmenting security analysts rather than replacing them, shifting roles toward investigation and threat hunting.
    • Building custom AI agents using Python and exploring frameworks like LangChain to solve specific SecOps use cases.
    • Managing the move of agents from local development to production, including retry logic, fallbacks, and durability requirements.
    • Implementing MCP servers to enable natural language interactions with security tools, eliminating the need to learn multiple query languages.
    • Navigating AI hype by focusing on solving specific problems and understanding what agents can realistically accomplish.
    • Predicting SOAR platform disruption as agents take over enrichment, orchestration, and response with simpler automation approaches.
    • Removing platform barriers by enabling analysts to use natural language rather than mastering specific tools or query languages.
    • Exploring context management, prompt engineering, and conversation history techniques essential for building effective agentic systems.
    • Adopting tools like Cursor and Claude Code to empower technical security professionals without deep coding backgrounds. 


    28 October 2025, 3:48 pm
  • 26 minutes 8 seconds
    Ochsner Health's Andrew Casazza on When AI Becomes the Hammer Looking for Nails

    Andrew Casazza, AVP of Cyber Security Operations at Ochsner Health, explores how healthcare organizations navigate FDA-approved medical devices running on legacy operating systems, implement AI-powered security tools while maintaining HIPAA compliance, and respond to threats that now move from initial compromise to malicious action in seconds rather than hours. 

    Andrew gives Jack his insights on building effective security programs in heavily regulated industries, emphasizing the importance of visibility, automation with guardrails, and keeping humans in the loop for critical decisions while leveraging AI to handle the speed and scale of modern threats.

    Topics discussed:

    • Unique security challenges in healthcare environments where medical devices run on legacy operating systems that cannot be easily updated.
    • Strategies for monitoring and securing systems that cannot have traditional security agents installed due to FDA regulations and medical certification requirements.
    • Leveraging AI and automation in security operations while navigating HIPAA regulations and protecting patient data from external training models.
    • Implementing human-in-the-loop approaches where AI performs initial analysis and triage while escalating critical decisions to human analysts.
    • Understanding the privacy and compliance implications of AI tools that may use customer data for model training and improvement.
    • The dramatic reduction in threat-actor dwell time from hours or days to minutes or seconds.
    • Building effective SOAR automation playbooks to handle repetitive cases and reduce noise while focusing attention on bigger threats.
    • Establishing appropriate guardrails for AI-powered security tools to prevent unintended consequences while enabling automated response capabilities.
    • The importance of staying curious and maintaining broad knowledge across multiple domains to become a more effective security practitioner.


    14 October 2025, 12:07 pm
  • 34 minutes 15 seconds
    Cisco Meraki's Stephen Gubenia on How to Crawl-Walk-Run to AI-Powered SecOps

    Stephen Gubenia, Head of Detection Engineering for Threat Response at Cisco Meraki, shares his evolution from managing overwhelming alert volumes as a one-person security team to architecting sophisticated automated systems that handle everything from enrichment to containment. 

    Stephen discusses the organizational changes needed for successful AI adoption, including top-down buy-in and proper training programs that help team members understand AI as a productivity multiplier rather than a job threat. 

    The conversation also explores Stephen’s practical "crawl, walk, run" methodology for responsibly implementing AI agents, the critical importance of maintaining human oversight through auditable workflows, and how security teams can transition from reactive alert management to strategic agent supervision. 

    Topics discussed:

    • Evolution from manual security operations to AI-powered agentic workflows that eliminate repetitive tasks and enable strategic focus.
    • Implementation of the "crawl, walk, run" methodology for gradually introducing AI agents with proper human oversight and validation.
    • Building enrichment agents that automatically gather threat intelligence and OSINT data instead of manual investigations.
    • Development of reasoning models that can dynamically triage alerts, run additional queries, and recommend investigation steps.
    • Automated containment workflows that can perform endpoint isolation and other response actions while maintaining appropriate guardrails.
    • Essential foundations including proper logging pipelines, alerting systems, and detection logic required before implementing AI automation.
    • Human-in-the-loop strategies that transition from per-alert review to periodic auditing and agent management oversight.
    • Organizational change management including top-down buy-in, training programs, and addressing fears about AI replacing jobs.
    • Future of detection engineering with AI-assisted rule development, gap analysis, and customized detection libraries.
    • Learning recommendations for cybersecurity professionals to develop AI literacy through reputable sources and consistent daily practice.


    23 September 2025, 1:02 pm