Thoughtworks Technology Podcast

Thoughtworks

  • 36 minutes 53 seconds
    We need to talk about vibe coding

    The term 'vibe coding' — which first appeared in a post on X by Andrej Karpathy in early February 2025 — has set the software development world abuzz: everyone seems to have their own take on what it is, how it's done and whether it's a bold new chapter in the history of programming or an insult to anyone who's ever written a line of code.

    Clearly, then, we need to talk about vibe coding — and that's precisely what we do on this episode of the Technology Podcast. Featuring Thoughtworkers Birgitta Böckeler (AI for Software Delivery Lead) and Lilly Ryan (Cybersecurity Principal), who join hosts Neal Ford and Prem Chandrasekaran, we dive into the different understandings and applications of the concept, and discuss what happens when a meme collides with reality.

    2 April 2025, 6:00 am
  • 29 minutes 7 seconds
    Infrastructure as code in 2025

    Nearly ten years after the first edition of Infrastructure as Code was published by O'Reilly, Kief Morris is publishing a third edition of the book. But why a new edition now? What's changed in technology and business over the last decade?

    Quite a lot, as it happens. To talk about what's new — both in the infrastructure world and in the book itself — Kief Morris joins host Ken Mugrage on the Technology Podcast. They discuss each edition and what's new in this one, and dive into the infrastructure challenges and issues that need to be tackled in 2025, from tooling and deployment to maintenance and infrastructure evolution.

    Learn more about Infrastructure as Code, Third Edition: https://www.thoughtworks.com/en-gb/insights/books/infrastructure-as-code-3rd-ed

    20 March 2025, 6:00 am
  • 42 minutes 1 second
    How fitness functions can help us govern and measure AI

    AI is inherently dynamic: that's true of the field itself, and at a much lower level too — models are trained on new data and algorithms adapt to new circumstances and information. This is part of what makes AI so powerful and exciting, but from a business and organizational perspective it can make governance and measurement exceptionally difficult. How can we know that our AI is optimized for the right thing? How can we be sure it stays aligned with what we want it to do?

    This is where the concept of fitness functions can help. Broadly speaking, fitness functions are ways of measuring the extent to which a given solution is fulfilling its goals — so, in the context of AI, they can help teams ensure that AI systems are serving their intended purpose.
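
    By way of illustration (this isn't a specific example from the episode), a fitness function can be as simple as an automated check that a measurable property of a system stays within agreed bounds. The sketch below, in Python, applies the idea to a model's accuracy on a holdout set — the model, data and threshold are hypothetical stand-ins:

        # Illustrative only: a fitness function that checks whether a model's
        # accuracy on a holdout set still meets an agreed threshold.
        # The model, data and threshold below are hypothetical stand-ins.

        def accuracy(predict, examples):
            """Fraction of (features, label) examples the model gets right."""
            correct = sum(1 for features, label in examples if predict(features) == label)
            return correct / len(examples)

        def accuracy_fitness(predict, holdout, threshold=0.9):
            """True if the model's accuracy on the holdout set meets the threshold."""
            return accuracy(predict, holdout) >= threshold

        if __name__ == "__main__":
            model = lambda x: 1 if x > 0 else 0           # stand-in 'model'
            holdout = [(-2, 0), (-1, 0), (1, 1), (3, 1)]  # stand-in holdout data
            print(accuracy_fitness(model, holdout, threshold=0.8))  # True

    Run continuously — in a deployment pipeline or as a scheduled check against production data — this kind of function turns "is the AI still doing what we want?" into a question with a measurable, automatable answer.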

    In this episode of the Technology Podcast, Rebecca Parsons and Neal Ford — authors (alongside Pat Kua and Pramod Sadalage) of Building Evolutionary Architectures, the book which brought fitness functions into the software architecture space — join host Ken Mugrage to explore how the fitness function concept can help us better manage the dynamism of AI and, in doing so, overcome the challenge of bringing such systems into production.

    Learn more about Building Evolutionary Architectures: https://www.thoughtworks.com/insights/books/building-evolutionaryarchitectures-second-edition

    6 March 2025, 7:00 am
  • 43 minutes 28 seconds
    Architecture as code

    How can we better define and clarify architectures to ensure consistency and control? If, as Neal Ford and Mark Richards discussed on a recent episode of the Technology Podcast, software architecture intersects with many different facets of software development and delivery, what can we do to better manage architectures in a way that is adaptable and dynamic?

    Neal and Mark return to the guest seats to speak again to host Prem Chandrasekaran about fitness functions and architecture as code, and explain why rethinking our approach to software architecture can help ensure greater alignment with organizational needs and objectives.

    19 February 2025, 7:00 am
  • 33 minutes
    Decoding DeepSeek

    The release of DeepSeek's AI models at the end of January 2025 sent shockwaves around the world. The weeks that followed have been rife with hype and rumor, ranging from suggestions that DeepSeek has completely upended the tech industry to claims that the efficiency gains it ostensibly unlocked are exaggerated. So, what's the reality? And what does it all really mean for the tech industry?

    In this episode of the Technology Podcast, two of Thoughtworks' AI leaders — Prasanna Pendse (Global Director of AI Strategy) and Shayan Mohanty (Head of AI Research) — join hosts Prem Chandrasekaran and Ken Mugrage to provide a much-needed clear and sober perspective on DeepSeek. They dig into some of the technical details and discuss how the DeepSeek team was able to optimize the limited hardware at their disposal, and think through what the implications might be for the industry in the months to come.

    Read Prasanna's take on DeepSeek on the Thoughtworks blog: https://www.thoughtworks.com/insights/blog/generative-ai/demystifying-deepseek

    6 February 2025, 7:00 am
  • 36 minutes 3 seconds
    AI testing, benchmarks and evals

    Generative AI's popularity has led to a renewed interest in quality assurance — perhaps unsurprising given the inherent unpredictability of the technology. This is why, over the last year, the field has seen a number of techniques and approaches emerge, including evals, benchmarking and guardrails. While these terms refer to different things, they share a common aim: improving the reliability and accuracy of generative AI.

    To discuss these techniques and the renewed enthusiasm for testing across the industry, host Lilly Ryan is joined by Shayan Mohanty, Head of AI Research at Thoughtworks, and John Singleton, Program Manager for Thoughtworks' AI Lab. They discuss the differences between evals, benchmarking and testing and explore both what they mean for businesses venturing into generative AI and how they can be implemented effectively.

    Learn more about evals, benchmarks and testing in this blog post by Shayan and John (written with Parag Mahajani): https://www.thoughtworks.com/insights/blog/generative-ai/LLM-benchmarks,-evals,-and-tests

    23 January 2025, 7:00 am
  • 43 minutes 32 seconds
    Exploring the intersections of software architecture

    Software architecture necessarily intersects with a diverse range of critical concerns, including implementation, infrastructure, data and engineering practices. All of these require serious consideration and reflection if you're to architect effectively.

    To discuss these various intersections, Thoughtworks' Neal Ford and his long-time collaborator Mark Richards join host Prem Chandrasekaran on the Thoughtworks Technology Podcast. They dive into why these intersections matter, what they mean for software architects and how individuals and teams can go about addressing them. 

    9 January 2025, 7:00 am
  • 35 minutes
    Who should make software architecture decisions?

    Who should be involved in the process of making decisions about software architecture? That's a question that's been puzzling Thoughtworker Andrew Harmel-Law for some time — so much so that he decided to write a book about it. The result is Facilitating Software Architecture. Published by O'Reilly in December 2024, it's both an argument for and a guide to involving more people in the architecture decision process.

    To discuss the topic and the book, Andrew joined hosts Neal Ford and Prem Chandrasekaran on the Technology Podcast. They explore why including more roles in software architecture matters today, the common objections to and risks of such an approach, and the techniques and practices that make it easier to do in fast-paced, dynamic organizations.

    "It's quite magical when you see this blossoming of understanding of what it is that architects do... It's not less architecture, it's more. It's just happening in a broader sphere." — Andrew Harmel-Law

    You can find Andrew's book on the O'Reilly website: https://www.oreilly.com/library/view/facilitating-software-architecture/9781098151850/

    26 December 2024, 7:00 am
  • 28 minutes 51 seconds
    Generative AI's uncanny valley: Problem or opportunity?

    With the rise of generative AI, the concept of the uncanny valley — where human resemblance unsettles, disturbs or disgusts — is more relevant than ever. But is it a problem that technologists need to tackle? Or does it offer an opportunity for greater thoughtfulness about the ways generative AI is being built, deployed and used?

    In this episode of the Technology Podcast, host Lilly Ryan is joined by Srinivasan Raguraman to discuss generative AI's uncanny valley and explore how it might offer a model for thinking through our expectations about generative AI's outputs and effects. Taking in everything from the experiences of end users to the mental models engineers bring to AI development, the conversation is a wide-ranging dive into the implications of the uncanny valley for our experience of generative AI today.

    Read Srinivasan's recent article (written with Ken Mugrage): https://www.technologyreview.com/2024/10/24/1106110/reckoning-with-generative-ais-uncanny-valley/

    12 December 2024, 7:00 am
  • 33 minutes 19 seconds
    Using generative AI for legacy modernization

    Legacy modernization is an enduring challenge — and as systems become more complex, understanding and modelling them well enough to modernize them only gets harder. However, at Thoughtworks we've seen some recent success bringing generative AI into the legacy modernization process.

    To discuss what this means in practice and the benefits it can deliver, host Ken Mugrage is joined by Thoughtworks colleagues Shodhan Sheth and Tom Coggrave. Shodhan and Tom have been working together in this space in recent months and, in this episode of the Technology Podcast, offer their insights into finding success with this novel combination. They explain how it can be implemented, the challenges they faced and the experiments they ran on the way to positive results, and what it means for how teams and organizations will think about modernization in the future.

    Read Shodhan and Tom's article on legacy modernization and generative AI (written with Alessio Ferri): https://martinfowler.com/articles/legacy-modernization-gen-ai.html

    28 November 2024, 7:00 am
  • 37 minutes 38 seconds
    Data contracts: What are they and why do they matter?

    Data contracts are a bit like APIs for data — they make it possible to interface with data in a way that ensures moving data from one place to another is stable and reliable. That makes them particularly valuable for building dependable data-driven applications.
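
    To make the analogy concrete (an illustration rather than anything prescribed in the episode or the books), a data contract can be thought of as an explicit, versioned schema plus expectations that a producer validates against before publishing data. A minimal Python sketch, with hypothetical field names and types:

        # Illustrative only: a data contract as an agreed, versioned schema that a
        # producer validates records against before publishing. Field names and
        # types are hypothetical.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class OrderEventContract:
            version: str = "1.0"
            required_fields: tuple = (("order_id", str), ("amount", float), ("currency", str))

            def validate(self, record: dict) -> list:
                """Return a list of violations; an empty list means the record conforms."""
                violations = []
                for name, expected_type in self.required_fields:
                    if name not in record:
                        violations.append(f"missing field: {name}")
                    elif not isinstance(record[name], expected_type):
                        violations.append(f"{name} should be {expected_type.__name__}")
                return violations

        if __name__ == "__main__":
            contract = OrderEventContract()
            print(contract.validate({"order_id": "A-1", "amount": 9.5, "currency": "USD"}))  # []
            print(contract.validate({"order_id": 123, "amount": "9.5"}))  # three violations

    The same idea scales up to schema registries and automated checks in delivery pipelines: the contract lives alongside the code, changes to it are versioned and reviewed, and consumers can depend on it much as they would depend on an API.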

    To discuss data contracts, host Lilly Ryan is joined on the Technology Podcast by Andrew Jones, who created the data contract concept in 2021 and is the author of Driving Data Quality with Data Contracts (2023), and by Thoughtworker Ryan Collingwood, who is currently writing their own book on data contracts, due to be published in 2025. Andrew and Ryan offer their perspectives on the topic, explaining the origins of and motivation for the idea and outlining how data contracts can be used in practice.

    You can find Andrew’s book here: https://www.amazon.com/Driving-Data-Quality-Contracts-comprehensive/dp/1837635005

    14 November 2024, 7:00 am