CUGA Agent: From Benchmarks to Business Impact of IBM's Generalist Agent (23 minutes 4 seconds)
We dive into the latest paper from a team of researchers at IBM: "From Benchmarks to Business Impact: Deploying IBM Generalist Agent in Enterprise Production." We're excited to host several of the paper's authors, who walk us through the research and its implications. The paper reports IBM’s experience developing and piloting the Computer Using Generalist Agent (CUGA), which has been open-sourced for the community. CUGA adopts a hierarchical planner–executor architecture with strong analytical foundations, achieving state-of-the-art performance on AppWorld and WebArena. Beyond benchmarks, it was evaluated in a pilot within the Business-Process-Outsourcing talent acquisition domain, addressing enterprise requirements for scalability, auditability, safety, and governance.
CUGA code: https://github.com/cuga-project/cuga-agent
Paper: https://arxiv.org/abs/2510.23856
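For readers who want a concrete picture of the hierarchical planner-executor pattern the paper describes, here is a minimal sketch in Python. The llm helper, the Step structure, and the api/browser executor split are illustrative assumptions, not CUGA's actual interfaces; see the repository above for the real implementation.

```python
# Minimal sketch of a hierarchical planner-executor loop in the spirit of CUGA.
# llm(), Step, and the api/browser split are hypothetical placeholders, not the
# project's real APIs; see the cuga-agent repository for the actual implementation.
from dataclasses import dataclass

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to any model provider."""
    raise NotImplementedError

@dataclass
class Step:
    description: str
    executor: str  # "api" or "browser"

def plan(task: str) -> list[Step]:
    """Planner: decompose the task into executor-addressable steps."""
    raw = llm(f"Break this task into numbered steps, tagging each [api] or [browser]:\n{task}")
    return [Step(line.strip(), "api" if "[api]" in line else "browser")
            for line in raw.splitlines() if line.strip()]

def execute(step: Step, history: list[str]) -> str:
    """Executor: carry out one step with the relevant tool surface and report back."""
    return llm(f"History: {history}\nUse the {step.executor} tools to perform:\n{step.description}")

def run(task: str) -> list[str]:
    history: list[str] = []
    for step in plan(task):
        history.append(execute(step, history))
    return history
```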
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
11 February 2026, 9:00 pm
TUMIX: Multi-Agent Test-Time Scaling with Tool-Use Mixture (23 minutes 44 seconds)
We dive into the latest paper from Google and a team of academic researchers: "TUMIX: Multi-Agent Test-Time Scaling with Tool-Use Mixture."
Hear from one of the paper's authors, Yongchao Chen, Research Scientist, as he walks through the research and its implications.
The paper proposes Tool-Use Mixture (TUMIX), an ensemble framework that runs multiple agents in parallel, each employing distinct tool-use strategies and answer paths. Agents in TUMIX iteratively share and refine responses based on the question and previous answers. In experiments, TUMIX achieves significant gains over state-of-the-art tool-augmented and test-time scaling methods.
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
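As a rough illustration of the ensemble-and-refine loop described above, the sketch below runs a few agents with different tool-use styles in parallel, lets each see the others' previous answers, and stops early once they agree. The llm helper, the agent styles, and the consensus stopping rule are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a TUMIX-style loop: heterogeneous agents answer in parallel, then each
# refines its answer after seeing the others' previous answers; stop when they agree.
# llm(), the agent styles, and the stopping rule are illustrative assumptions.
from collections import Counter

def llm(prompt: str) -> str:
    """Placeholder for a model call; real agents would also execute code or search."""
    raise NotImplementedError

AGENT_STYLES = [
    "Answer using step-by-step reasoning only.",
    "Answer by writing and executing Python code, then conclude.",
    "Answer by listing the web searches you would run, then conclude.",
]

def final_line(answer: str) -> str:
    return (answer.splitlines() or [answer])[-1]

def refine(question: str, previous: list[str]) -> list[str]:
    shared = "\n".join(f"Agent {i}: {a}" for i, a in enumerate(previous))
    return [llm(f"{style}\nQuestion: {question}\n"
                f"Other agents' previous answers (possibly wrong):\n{shared}\n"
                "Put your final answer on the last line.")
            for style in AGENT_STYLES]

def answer(question: str, max_rounds: int = 3) -> str:
    answers = [llm(f"{style}\nQuestion: {question}") for style in AGENT_STYLES]
    for _ in range(max_rounds - 1):
        top, count = Counter(final_line(a) for a in answers).most_common(1)[0]
        if count == len(AGENT_STYLES):   # consensus reached: stop early
            return top
        answers = refine(question, answers)
    return Counter(final_line(a) for a in answers).most_common(1)[0][0]
```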
24 November 2025, 6:00 pm
Meta AI Researcher Explains ARE and Gaia2: Scaling Up Agent Environments and Evaluations (22 minutes 34 seconds)
In our latest paper reading, we had the pleasure of hosting Grégoire Mialon — Research Scientist at Meta Superintelligence Labs — to walk us through Meta AI’s groundbreaking paper titled “ARE: Scaling Up Agent Environments and Evaluations” and the new ARE and Gaia2 frameworks.
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
10 November 2025, 4:00 pm
Georgia Tech's Santosh Vempala Explains Why Language Models Hallucinate, His Research With OpenAI (31 minutes 24 seconds)
Santosh Vempala, Frederick Storey II Chair of Computing and Distinguished Professor in the School of Computer Science at Georgia Tech, explains his paper co-authored with OpenAI's Adam Tauman Kalai, Ofir Nachum, and Edwin Zhang. Read the paper, sign up for future AI research paper readings and author office hours, and see LLM hallucination examples for context.
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
14 October 2025, 5:00 pm
Atropos Health’s Arjun Mukerji, PhD, Explains RWESummary: A Framework and Test for Choosing LLMs to Summarize Real-World Evidence (RWE) Studies (26 minutes 22 seconds)
Large language models are increasingly used to turn complex study output into plain-English summaries. But how do we know which models are safest and most reliable for healthcare?
In this most recent community AI research paper reading, Arjun Mukerji, PhD – Staff Data Scientist at Atropos Health – walks us through RWESummary, a new benchmark designed to evaluate LLMs on summarizing real-world evidence from structured study output — an important but often under-tested scenario compared to the typical “summarize this PDF” task.
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
22 September 2025, 2:00 pm
Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon Walks Us Through His New Paper (48 minutes 11 seconds)
This episode dives into "Category-Theoretic Analysis of Inter-Agent Communication and Mutual Understanding Metric in Recursive Consciousness." The paper presents an extension of the Recursive Consciousness framework to analyze communication between agents and the inevitable loss of meaning in translation. We're thrilled to feature the paper's author, Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon, to walk us through the research and its implications.
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
6 September 2025, 3:00 pm
Small Language Models are the Future of Agentic AI (31 minutes 15 seconds)
We had the privilege of hosting Peter Belcak – an AI Researcher working on the reliability and efficiency of agentic systems at NVIDIA – who walked us through his new paper making the rounds in AI circles titled “Small Language Models are the Future of Agentic AI.”
The paper posits that small language models (SLMs) are sufficiently powerful, inherently more suitable, and necessarily more economical for many invocations in agentic systems, and are therefore the future of agentic AI. The authors’ argumentation is grounded in the current level of capabilities exhibited by SLMs, the common architectures of agentic systems, and the economy of LM deployment. They further argue that in situations where general-purpose conversational abilities are essential, heterogeneous agentic systems (i.e., agents invoking multiple different models) are the natural choice. Finally, they discuss potential barriers to the adoption of SLMs in agentic systems and outline a general LLM-to-SLM agent conversion algorithm.
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
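As a toy illustration of the heterogeneous-agent idea, the sketch below routes routine tool-calling turns to a small model and escalates open-ended turns to a large one. The model names, the call_model helper, and the routing heuristic are assumptions for illustration; this is not the authors' LLM-to-SLM conversion algorithm.

```python
# Toy sketch of a heterogeneous agent: send routine, structured tool-calling turns
# to a small language model and reserve the large model for open-ended dialogue.
# Model names, call_model(), and the routing heuristic are illustrative assumptions,
# not the paper's LLM-to-SLM conversion algorithm.

SMALL_MODEL = "slm-3b-instruct"   # hypothetical small model
LARGE_MODEL = "llm-70b-chat"      # hypothetical large model

def call_model(model: str, prompt: str) -> str:
    """Placeholder for an inference call to the named model."""
    raise NotImplementedError

def needs_general_dialogue(turn: str, tool_schemas: list[str]) -> bool:
    """Crude routing heuristic: if no registered tool name appears in the turn, escalate."""
    return not any(schema.split("(")[0].lower() in turn.lower() for schema in tool_schemas)

def agent_turn(turn: str, tool_schemas: list[str]) -> str:
    model = LARGE_MODEL if needs_general_dialogue(turn, tool_schemas) else SMALL_MODEL
    return call_model(model, f"Available tools: {tool_schemas}\nUser turn: {turn}")
```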
5 September 2025, 1:00 pm
Watermarking for LLMs and Image Models (42 minutes 56 seconds)
In this AI research paper reading, we dive into "A Watermark for Large Language Models" with the paper's author John Kirchenbauer.
This paper is a timely exploration of techniques for embedding invisible but detectable signals in AI-generated text. These watermarking strategies aim to help mitigate misuse of large language models by making machine-generated content distinguishable from human writing, without sacrificing text quality or requiring access to the model’s internals.
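The core mechanism can be sketched compactly: the previous token seeds a pseudorandom split of the vocabulary into a "green list," green tokens receive a small logit bonus during generation, and a detector flags text whose green-token fraction is statistically improbable. The hash, the gamma/delta constants, and the z-score threshold below are illustrative choices rather than the paper's exact settings.

```python
# Minimal sketch of a soft green-list watermark in the spirit of the paper.
# The hash, gamma/delta values, and z-score threshold are illustrative choices.
import hashlib
import math
import random

VOCAB_SIZE = 50_000
GAMMA = 0.5    # fraction of the vocabulary marked "green" at each step
DELTA = 2.0    # logit bonus added to green tokens during generation

def green_list(prev_token: int) -> set[int]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % 2**32
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

def bias_logits(logits: list[float], prev_token: int) -> list[float]:
    """At generation time, nudge sampling toward the green list."""
    green = green_list(prev_token)
    return [x + DELTA if i in green else x for i, x in enumerate(logits)]

def is_watermarked(tokens: list[int], z_threshold: float = 4.0) -> bool:
    """Flag text whose green-token fraction is improbably high for unwatermarked text."""
    n = len(tokens) - 1
    if n <= 0:
        return False
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    z = (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
    return z > z_threshold
```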
Learn more about the A Watermark for Large Language Models paper.
Learn more about agent observability and LLM observability, join the Arize AI Slack community or get the latest on LinkedIn and X.
30 July 2025, 9:00 pm
Self-Adapting Language Models: Paper Authors Discuss Implications (31 minutes 26 seconds)
The authors of the new paper “Self-Adapting Language Models” (SEAL) shared a behind-the-scenes look at their work, motivations, results, and future directions.
The paper introduces a novel method for enabling large language models (LLMs) to adapt their own weights using self-generated data and training directives — “self-edits.”
Learn more about the Self-Adapting Language Models paper.
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
8 July 2025, 6:00 pm
The Illusion of Thinking: What the Apple AI Paper Says About LLM Reasoning (30 minutes 35 seconds)
This week we discuss The Illusion of Thinking, a new paper from researchers at Apple that challenges today’s evaluation methods and introduces a new benchmark: synthetic puzzles with controllable complexity and clean logic.
Their findings? Large Reasoning Models (LRMs) show surprising failure modes, including a complete collapse on high-complexity tasks and a decline in reasoning effort as problems get harder.
Dylan and Parth dive into the paper's findings as well as the debate around it, including a response paper aptly titled "The Illusion of the Illusion of Thinking."
Read the paper: The Illusion of Thinking
Read the response: The Illusion of the Illusion of Thinking
Explore more AI research and sign up for future readings.
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
20 June 2025, 9:00 pm
Accurate KV Cache Quantization with Outlier Tokens Tracing (25 minutes 11 seconds)
We discuss Accurate KV Cache Quantization with Outlier Tokens Tracing, a deep dive into improving the efficiency of LLM inference. The authors enhance KV Cache quantization, a technique for reducing memory and compute costs during inference, by introducing a method to identify and exclude outlier tokens that hurt quantization accuracy, striking a better balance between efficiency and performance.
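A rough sketch of the general idea, per-token low-bit quantization with a handful of outlier tokens kept in full precision, is shown below. The int8 format, the outlier score, and the 2% outlier budget are illustrative assumptions, not the paper's method.

```python
# Rough sketch of outlier-aware KV cache quantization: quantize most cached
# key/value vectors to low precision, but keep tokens with extreme activations
# in full precision. The int8 format, outlier score, and 2% budget are
# illustrative assumptions, not the paper's method.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Per-token symmetric int8 quantization (one scale stored per row)."""
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0 + 1e-8
    return np.round(x / scale).astype(np.int8), scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

def compress_kv(kv: np.ndarray, outlier_frac: float = 0.02):
    """kv: [num_tokens, head_dim] (assumes several tokens). Returns the quantized
    cache, its scales, the full-precision outlier rows, and the outlier mask."""
    scores = np.abs(kv).max(axis=-1)               # peak activation per token
    k = max(1, int(outlier_frac * kv.shape[0]))    # small outlier budget
    mask = np.zeros(kv.shape[0], dtype=bool)
    mask[np.argsort(scores)[-k:]] = True           # keep the most extreme tokens as-is
    q, scale = quantize_int8(kv[~mask])
    return q, scale, kv[mask].astype(np.float32), mask

def decompress_kv(q, scale, outliers, mask) -> np.ndarray:
    out = np.empty((mask.shape[0], q.shape[1]), dtype=np.float32)
    out[~mask] = dequantize(q, scale)
    out[mask] = outliers
    return out
```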
Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
4 June 2025, 2:00 pm