TestGuild Automation Testing Podcast

Test Automation with Joe Colantonio. TestGuild Automation Podcast covers Se

  • 32 minutes 56 seconds
    AI Testing LLMs & RAG: What Testers Must Validate with Imran Ali

    AI is transforming how software is built, but testing AI systems requires an entirely new mindset.

    Don't miss AutomationGuild 2026 - Register Now: https://testguild.me/podag26

    Use code TestGuildPod20 to get 20% off your ticket.

    In this episode, Joe Colantonio sits down with Imran Ali to break down what AI testing really looks like when you're dealing with LLMs, RAG pipelines, and autonomous QA workflows.

    You'll learn:

    Why traditional pass/fail testing breaks down with LLMs

    How to test non-deterministic AI outputs for consistency and accuracy

    Practical techniques for detecting hallucinations, grounding issues, and prompt injection risks

    How RAG systems change the way testers validate AI-powered applications

    Where AI delivers quick wins today—and where human validation still matters

    This conversation goes beyond hype and gets into real-world AI testing strategies QA teams are using right now to keep up with AI-generated code, faster release cycles, and DevOps velocity.

    If you're a tester, automation engineer, or QA leader wondering how AI changes your role, not replaces it, this episode is your roadmap.

    21 December 2025, 3:10 pm
  • 44 minutes 11 seconds
    AI Codebase Discovery for Testers with Ben Fellows
    What if understanding your codebase was no longer a blocker for great testing? Most testers were trained to work around the code — clicking through UIs, guessing selectors, and relying on outdated docs or developer explanations. In this episode, Playwright expert Ben Fellows flips that model on its head. Using AI tools like Cursor, testers can now explore the codebase directly — asking questions, uncovering APIs, understanding data relationships, and spotting risk before a single test is written. This isn't about becoming a developer. It's about using AI to finally see how the system really works — and using that insight to test smarter, earlier, and with far more confidence. If you've ever joined a new team, inherited a legacy app, or struggled to understand what really changed in a release, this episode is for you.

    Register for Automation Guild 2026 now: https://testguild.me/podag26
    14 December 2025, 11:07 pm
  • 40 minutes 54 seconds
    Gatling Studio: Start Performance Testing in Minutes (No Expertise Required) with Shaun Brown and Stephane Landelle
    Performance testing has traditionally been one of the hardest parts of QA: slow onboarding, complex scripting, difficult debugging, and too many late-stage surprises.

    Try Gatling Studio for yourself now: https://links.testguild.com/gatling

    In this episode, Joe sits down with Stéphane Landelle, creator of Gatling, and Shaun Brown to explore how Gatling is reinventing the load-testing experience. You'll hear how Gatling evolved from a developer-first framework into a far more accessible platform that supports Java, Kotlin, JavaScript/TypeScript, and AI-assisted creation. We break down the thinking behind Gatling Studio, a new companion tool designed to make recording, filtering, correlating, and debugging performance tests dramatically easier.

    Whether you're a developer, SDET, or automation engineer, you'll learn:

    How to onboard quickly into performance testing, even without deep expertise

    Why Gatling Studio offers a smoother way to record traffic and craft tests

    Where AI is already improving load test authoring

    How teams can shift-left performance insights and catch issues earlier

    What's coming next as Gatling expands its developer experience and enterprise platform

    If you've been meaning to start performance testing, or scale it beyond one performance engineer, this episode will give you the clarity and confidence to begin.
    7 December 2025, 4:00 pm
  • 39 minutes
    AI-Driven Manual Regression: Test Only What Truly Matters With Wilhelm Haaker and Daniel Garay
    Manual regression testing isn't going away—yet most teams still struggle with deciding what actually needs to be retested in fast release cycles.

    See how AI can help your manual testing now: https://testguild.me/parasoftai

    In this episode, we explore how Parasoft's Test Impact Analysis helps QA teams run fewer tests while improving confidence, coverage, and release velocity. Wilhelm Haaker (Director of Solution Engineering) and Daniel Garay (Director of QA) join Joe to unpack how code-level insights and real coverage data eliminate guesswork during regression cycles. They walk through how Parasoft CTP identifies exactly which manual or automated tests are impacted by code changes—and how teams use this to reduce risk, shrink regression time, and avoid redundant testing.

    What You'll Learn:

    Why manual regression remains a huge bottleneck in modern DevOps

    How Test Impact Analysis reveals the exact tests affected by code changes

    How code coverage + impact analysis reduce risk without expanding the test suite

    Ways teams use saved time for deeper exploratory testing

    How QA, Dev, and Automation teams can align with real data instead of assumptions

    Whether you're a tester, automation engineer, QA lead, or DevOps architect, this episode gives you a clear path to faster, safer releases using data-driven regression strategies.
    1 December 2025, 3:39 am
  • 8 minutes 52 seconds
    Top Automation Guild Survey Insights for 2026 with Joe Colantonio

    Automation Guild turns 10 this year, and the 2026 survey revealed some of the strongest trends and signals the testing community has ever shared.

    Register now: https://testgld.link/ag26reg

    In this episode, Joe breaks down the most important insights shaping Automation Guild 2026 and what they mean for testers, automation engineers, and QA leaders.

    You'll hear why AI-powered testing is dominating every category, why Playwright has officially become the tool testers want most, the challenges that continue to follow teams year after year, and how testers are navigating shrinking teams, faster releases, and rising expectations.

    This episode gives you a clear, data-driven snapshot of why Automation Guild 2026 matters — and how this year's event is designed to help you stay relevant, sharpen your skills, and tackle the problems that keep slowing down teams.

    Perfect for anyone considering joining the Guild, planning their 2026 automation strategy, or just trying to make sense of the rapid changes happening in testing today.

    24 November 2025, 6:41 am
  • 32 minutes 23 seconds
    Testing AI Vibe Coding: Stop Vulnerabilities Early with Sarit Tager
    AI is accelerating software delivery, but it's also introducing new security risks that most developers and automation engineers never see coming. In this episode, we explore how AI-generated code can embed vulnerabilities by default, how "vibe coding" is reshaping developer workflows, and what teams must do to secure their pipelines before bad code reaches production. You'll learn how to prompt more securely, how guardrails can stop vulnerabilities at generation time, how to prioritize real risks instead of false positives, and how AI can be used to protect your applications just as effectively as attackers use it to exploit them. Whether you're using Cursor, Copilot, Playwright MCP, or any AI tool in your automation workflow, this conversation gives you a clear roadmap for staying ahead of AI-driven vulnerabilities — without slowing down delivery. Featuring Sarit Tager, VP of Product for Application Security at Palo Alto Networks, who reveals real-world insights on securing AI-generated code, understanding modern attack surfaces, and creating a future-proof DevSecOps strategy.
    16 November 2025, 12:45 pm
  • 17 minutes 15 seconds
    4 Free TestGuild Tools Every Tester Should Be Using with Joe Colantonio

    In this solo episode, Joe Colantonio shares four powerful free TestGuild tools designed to help testers, automation engineers, and QA leaders work smarter. Discover how to instantly find the right testing tool for your team, assess automation risk, check your site's accessibility, and benchmark your automation maturity — all in one session.

    Whether you're looking to improve test coverage, adopt better practices, or simply save time, these tools were built with you in mind.

    What You'll Learn:

    – How to choose the right test automation tool fast

    – How to identify and reduce testing risk

    – How to check your site's accessibility compliance

    – How to assess your team's automation maturity level

    Try the tools free:

    Tool Matcher: https://testgld.link/toolmatcher

    Accessibility Scanner: https://testgld.link/scanner

    Risk Calc: https://testgld.link/riskcalc

    Automation Readiness Quiz: https://testgld.link/scorequiz

    Join us for the 10th Annual Automation Guild Conference: https://testgld.link/IrHaNIVX

    9 November 2025, 3:17 pm
  • 32 minutes 11 seconds
    AI Testing Made Trustworthy using FizzBee
    As AI tools like Copilot, Claude, and Cursor start writing more of our code, the biggest challenge isn't generating software — it's trusting it. In this episode, JP (Jayaprabhakar) Kadarkarai, founder of FizzBee, joins Joe Colantonio to explore how autonomous, model-based testing can validate AI-generated software automatically and help teams ship with confidence. FizzBee uses a unique approach that connects design, code, and behavior into one continuous feedback loop — automatically testing for concurrency issues and validating that your implementation matches your intent.

    You'll discover:

    Why AI-generated code can't be trusted without validation

    How model-based testing works and why it's crucial for AI-driven development

    The difference between example-based and property-based testing

    How FizzBee detects concurrency bugs without intrusive tracing

    Why autonomous testing is becoming mandatory for the AI era

    Whether you're a software tester, DevOps engineer, or automation architect, this conversation will change how you think about testing in the age of AI-generated code.
    2 November 2025, 2:02 pm
  • 42 minutes 7 seconds
    Test Automation Optimus Prime Halloween Special

    In this Halloween special, Joe Colantonio and Paul Grossman discuss the evolution of automation testing, focusing on the integration of AI tools, project management strategies, and the importance of custom logging. Paul shares insights from his recent job experience, detailing how he inherited a project and the challenges he faced. He also walks through his Optimus Prime framework, using it to explore various automation tools, the significance of dynamic waiting, and how to handle test case collisions. The discussion also highlights the role of AI in enhancing automation frameworks and the importance of version control in software development.

    21 October 2025, 7:07 pm
  • 41 minutes 19 seconds
    Playwright AI Vibe Testing: True Self-healing Tests with Vasusen Patil
    Flaky Playwright tests got you down? Discover Vibe Testing, a new AI-driven approach that lets Playwright tests understand design intent, adapt to UI changes, and self-heal intelligently.

    In this episode, Joe Colantonio talks with Vasusen Patil, Co-Founder and CEO of Donobu, about how their platform extends Playwright with AI-powered "Vibe Testing." You'll discover how this approach blends visual assertions with contextual understanding to build resilient, low-flake tests that keep shipping smooth.

    You'll take away:

    What "Vibe Testing" really means and why it's a game-changer

    How AI-authored Playwright tests can self-heal without false positives

    The key to balancing autonomy with tester control

    Why Donobu's local-first model keeps your data safe while cutting test flakiness to under 2%

    How to try Donobu's free Playwright AI toolkit

    If you want to see where test automation is heading next — and how to future-proof your QA career — don't miss this one.

    12 October 2025, 3:17 pm
  • 27 minutes 47 seconds
    Playwright Testing: How to Make UI and API Tests 10x Faster with Naeem Malik
    Did you know that Playwright offers an elegant, unified framework that seamlessly integrates both UI and API testing within a single language and test runner?

    Don't miss the early bird Automation Guild discount: https://testguild.me/ag26early

    This episode explores how Playwright empowers teams to simplify test maintenance, eliminate silos between dev and QA, and gain true full-stack confidence.

    You'll discover:

    How to make your tests 10x faster and more reliable by using API requests for setup instead of brittle UI flows.

    How to write hybrid tests that validate both UI actions and backend APIs in a single flow.

    A modern, unified testing strategy that reduces operational friction and helps teams deliver high-quality applications with confidence.

    Our guest, Naeem Malik, brings 15 years of QA and automation expertise. As the creator of Test Automation TV and bestselling Udemy courses, Naeem specializes in making complex test automation concepts simple, practical, and impactful for engineering teams.

    Whether you're a QA leader, automation engineer, or DevOps practitioner, this episode will give you the tools to rethink your testing strategy and unlock the power of Playwright.
    5 October 2025, 1:56 pm