Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a Ph.D. from MIT (https://www.linkedin.com/in/dr-keith-duggar/).

  • 1 hour 21 minutes
    Nicholas Carlini (Google DeepMind)

    Nicholas Carlini from Google DeepMind offers his view of AI security, emergent LLM capabilities, and his groundbreaking model-stealing research. He reveals how LLMs can unexpectedly excel at tasks like chess and discusses the security pitfalls of LLM-generated code.


    SPONSOR MESSAGES:

    ***

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?


    Go to https://tufalabs.ai/

    ***


    Transcript: https://www.dropbox.com/scl/fi/lat7sfyd4k3g5k9crjpbf/CARLINI.pdf?rlkey=b7kcqbvau17uw6rksbr8ccd8v&dl=0


    TOC:

    1. ML Security Fundamentals

    [00:00:00] 1.1 ML Model Reasoning and Security Fundamentals

    [00:03:04] 1.2 ML Security Vulnerabilities and System Design

    [00:08:22] 1.3 LLM Chess Capabilities and Emergent Behavior

    [00:13:20] 1.4 Model Training, RLHF, and Calibration Effects


    2. Model Evaluation and Research Methods

    [00:19:40] 2.1 Model Reasoning and Evaluation Metrics

    [00:24:37] 2.2 Security Research Philosophy and Methodology

    [00:27:50] 2.3 Security Disclosure Norms and Community Differences


    3. LLM Applications and Best Practices

    [00:44:29] 3.1 Practical LLM Applications and Productivity Gains

    [00:49:51] 3.2 Effective LLM Usage and Prompting Strategies

    [00:53:03] 3.3 Security Vulnerabilities in LLM-Generated Code


    4. Advanced LLM Research and Architecture

    [00:59:13] 4.1 LLM Code Generation Performance and O(1) Labs Experience

    [01:03:31] 4.2 Adaptation Patterns and Benchmarking Challenges

    [01:10:10] 4.3 Model Stealing Research and Production LLM Architecture Extraction


    REFS:

    [00:01:15] Nicholas Carlini’s personal website & research profile (Google DeepMind, ML security) - https://nicholas.carlini.com/


    [00:01:50] CentML AI compute platform for language model workloads - https://centml.ai/


    [00:04:30] Seminal paper on neural network robustness against adversarial examples (Carlini & Wagner, 2016) - https://arxiv.org/abs/1608.04644


    [00:05:20] Computer Fraud and Abuse Act (CFAA) – primary U.S. federal law on computer hacking liability - https://www.justice.gov/jm/jm-9-48000-computer-fraud


    [00:08:30] Blog post: Emergent chess capabilities in GPT-3.5-turbo-instruct (Nicholas Carlini, Sept 2023) - https://nicholas.carlini.com/writing/2023/chess-llm.html


    [00:16:10] Paper: “Self-Play Preference Optimization for Language Model Alignment” (Yue Wu et al., 2024) - https://arxiv.org/abs/2405.00675


    [00:18:00] GPT-4 Technical Report: development, capabilities, and calibration analysis - https://arxiv.org/abs/2303.08774


    [00:22:40] Historical shift from descriptive to algebraic chess notation (FIDE) - https://en.wikipedia.org/wiki/Descriptive_notation


    [00:23:55] Analysis of distribution shift in ML (Hendrycks et al.) - https://arxiv.org/abs/2006.16241


    [00:27:40] Nicholas Carlini’s essay “Why I Attack” (June 2024) – motivations for security research - https://nicholas.carlini.com/writing/2024/why-i-attack.html


    [00:34:05] Google Project Zero’s 90-day vulnerability disclosure policy - https://googleprojectzero.blogspot.com/p/vulnerability-disclosure-policy.html


    [00:51:15] Evolution of Google search syntax & user behavior (Daniel M. Russell) - https://www.amazon.com/Joy-Search-Google-Master-Information/dp/0262042878


    [01:04:05] Rust’s ownership & borrowing system for memory safety - https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html


    [01:10:05] Paper: “Stealing Part of a Production Language Model” (Carlini et al., March 2024) – extraction attacks on ChatGPT, PaLM-2 - https://arxiv.org/abs/2403.06634


    [01:10:55] First model stealing paper (Tramèr et al., 2016) – stealing ML models via prediction APIs - https://arxiv.org/abs/1609.02943

    25 January 2025, 9:22 pm
  • 1 hour 32 minutes
    Subbarao Kambhampati - Do o1 models search?

    Join Prof. Subbarao Kambhampati and host Tim Scarfe for a deep dive into OpenAI's O1 model and the future of AI reasoning systems.


    * How O1 likely uses reinforcement learning similar to AlphaGo, with hidden reasoning tokens that users pay for but never see

    * The evolution from traditional Large Language Models to more sophisticated reasoning systems

    * The concept of "fractal intelligence" in AI - where models work brilliantly sometimes but fail unpredictably

    * Why O1's improved performance comes with substantial computational costs

    * The ongoing debate between single-model approaches (OpenAI) vs hybrid systems (Google)

    * The critical distinction between AI as an intelligence amplifier vs autonomous decision-maker


    SPONSOR MESSAGES:

    ***

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?


    Go to https://tufalabs.ai/

    ***


    TOC:

    1. O1 Architecture and Reasoning Foundations

    [00:00:00] 1.1 Fractal Intelligence and Reasoning Model Limitations

    [00:04:28] 1.2 LLM Evolution: From Simple Prompting to Advanced Reasoning

    [00:14:28] 1.3 O1's Architecture and AlphaGo-like Reasoning Approach

    [00:23:18] 1.4 Empirical Evaluation of O1's Planning Capabilities


    2. Monte Carlo Methods and Model Deep-Dive

    [00:29:30] 2.1 Monte Carlo Methods and MARCO-O1 Implementation

    [00:31:30] 2.2 Reasoning vs. Retrieval in LLM Systems

    [00:40:40] 2.3 Fractal Intelligence Capabilities and Limitations

    [00:45:59] 2.4 Mechanistic Interpretability of Model Behavior

    [00:51:41] 2.5 O1 Response Patterns and Performance Analysis


    3. System Design and Real-World Applications

    [00:59:30] 3.1 Evolution from LLMs to Language Reasoning Models

    [01:06:48] 3.2 Cost-Efficiency Analysis: LLMs vs O1

    [01:11:28] 3.3 Autonomous vs Human-in-the-Loop Systems

    [01:16:01] 3.4 Program Generation and Fine-Tuning Approaches

    [01:26:08] 3.5 Hybrid Architecture Implementation Strategies


    Transcript: https://www.dropbox.com/scl/fi/d0ef4ovnfxi0lknirkvft/Subbarao.pdf?rlkey=l3rp29gs4hkut7he8u04mm1df&dl=0


    REFS:

    [00:02:00] Monty Python (1975)

    Witch trial scene: flawed logical reasoning.

    https://www.youtube.com/watch?v=zrzMhU_4m-g


    [00:04:00] Cade Metz (2024)

    Microsoft–OpenAI partnership evolution and control dynamics.

    https://www.nytimes.com/2024/10/17/technology/microsoft-openai-partnership-deal.html


    [00:07:25] Kojima et al. (2022)

    Zero-shot chain-of-thought prompting ('Let's think step by step').

    https://arxiv.org/pdf/2205.11916


    [00:12:50] DeepMind Research Team (2023)

    Multi-bot game solving with external and internal planning.

    https://deepmind.google/research/publications/139455/


    [00:15:10] Silver et al. (2016)

    AlphaGo's Monte Carlo Tree Search and Q-learning.

    https://www.nature.com/articles/nature16961


    [00:16:30] Kambhampati, S. et al. (2024)

    Evaluates O1's planning in "Strawberry Fields" benchmarks.

    https://arxiv.org/pdf/2410.02162


    [00:29:30] Alibaba AIDC-AI Team (2024)

    MARCO-O1: Chain-of-Thought + MCTS for improved reasoning.

    https://arxiv.org/html/2411.14405


    [00:31:30] Kambhampati, S. (2024)

    Explores LLM "reasoning vs retrieval" debate.

    https://arxiv.org/html/2403.04121v2


    [00:37:35] Wei, J. et al. (2022)

    Chain-of-thought prompting (introduces last-letter concatenation).

    https://arxiv.org/pdf/2201.11903


    [00:42:35] Barbero, F. et al. (2024)

    Transformer attention and "information over-squashing."

    https://arxiv.org/html/2406.04267v2


    [00:46:05] Ruis, L. et al. (2024)

    Influence functions to understand procedural knowledge in LLMs.

    https://arxiv.org/html/2411.12580v1


    (truncated - continued in shownotes/transcript doc)

    23 January 2025, 1:46 am
  • 1 hour 18 minutes
    How Do AI Models Actually Think? - Laura Ruis

    Laura Ruis, a PhD student at University College London and researcher at Cohere, explains her groundbreaking research into how large language models (LLMs) perform reasoning tasks, the fundamental mechanisms underlying LLM reasoning capabilities, and whether these models primarily rely on retrieval or develop procedural knowledge.


    SPONSOR MESSAGES:

    ***

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?


    Go to https://tufalabs.ai/

    ***


    TOC


    1. LLM Foundations and Learning

    1.1 Scale and Learning in Language Models [00:00:00]

    1.2 Procedural Knowledge vs Fact Retrieval [00:03:40]

    1.3 Influence Functions and Model Analysis [00:07:40]

    1.4 Role of Code in LLM Reasoning [00:11:10]

    1.5 Semantic Understanding and Physical Grounding [00:19:30]


    2. Reasoning Architectures and Measurement

    2.1 Measuring Understanding and Reasoning in Language Models [00:23:10]

    2.2 Formal vs Approximate Reasoning and Model Creativity [00:26:40]

    2.3 Symbolic vs Subsymbolic Computation Debate [00:34:10]

    2.4 Neural Network Architectures and Tensor Product Representations [00:40:50]


    3. AI Agency and Risk Assessment

    3.1 Agency and Goal-Directed Behavior in Language Models [00:45:10]

    3.2 Defining and Measuring Agency in AI Systems [00:49:50]

    3.3 Core Knowledge Systems and Agency Detection [00:54:40]

    3.4 Language Models as Agent Models and Simulator Theory [01:03:20]

    3.5 AI Safety and Societal Control Mechanisms [01:07:10]

    3.6 Evolution of AI Capabilities and Emergent Risks [01:14:20]


    REFS:

    [00:01:10] Procedural Knowledge in Pretraining & LLM Reasoning

    Ruis et al., 2024

    https://arxiv.org/abs/2411.12580


    [00:03:50] EK-FAC Influence Functions in Large LMs

    Grosse et al., 2023

    https://arxiv.org/abs/2308.03296


    [00:13:05] Surfaces and Essences: Analogy as the Core of Cognition

    Hofstadter & Sander

    https://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinking/dp/0465018475


    [00:13:45] Wittgenstein on Language Games

    https://plato.stanford.edu/entries/wittgenstein/


    [00:14:30] Montague Semantics for Natural Language

    https://plato.stanford.edu/entries/montague-semantics/


    [00:19:35] The Chinese Room Argument

    David Cole

    https://plato.stanford.edu/entries/chinese-room/


    [00:19:55] ARC: Abstraction and Reasoning Corpus

    François Chollet

    https://arxiv.org/abs/1911.01547


    [00:24:20] Systematic Generalization in Neural Nets

    Lake & Baroni, 2023

    https://www.nature.com/articles/s41586-023-06668-3


    [00:27:40] Open-Endedness & Creativity in AI

    Tim Rocktäschel

    https://arxiv.org/html/2406.04268v1


    [00:30:50] Fodor & Pylyshyn on Connectionism

    https://www.sciencedirect.com/science/article/abs/pii/0010027788900315


    [00:31:30] Tensor Product Representations

    Smolensky, 1990

    https://www.sciencedirect.com/science/article/abs/pii/000437029090007M


    [00:35:50] DreamCoder: Wake-Sleep Program Synthesis

    Kevin Ellis et al.

    https://courses.cs.washington.edu/courses/cse599j1/22sp/papers/dreamcoder.pdf


    [00:36:30] Compositional Generalization Benchmarks

    Ruis, Lake et al., 2022

    https://arxiv.org/pdf/2202.10745


    [00:40:30] RNNs & Tensor Products

    McCoy et al., 2018

    https://arxiv.org/abs/1812.08718


    [00:46:10] Formal Causal Definition of Agency

    Kenton et al.

    https://arxiv.org/pdf/2208.08345v2


    [00:48:40] Agency in Language Models

    Sumers et al.

    https://arxiv.org/abs/2309.02427


    [00:55:20] Heider & Simmel’s Moving Shapes Experiment

    https://www.nature.com/articles/s41598-024-65532-0


    [01:00:40] Language Models as Agent Models

    Jacob Andreas, 2022

    https://arxiv.org/abs/2212.01681


    [01:13:35] Pragmatic Understanding in LLMs

    Ruis et al.

    https://arxiv.org/abs/2210.14986


    20 January 2025, 12:28 am
  • 1 hour 12 minutes
    Jürgen Schmidhuber on Humans Co-existing with AIs

    Jürgen Schmidhuber, the father of generative AI, challenges current AI narratives, arguing that early deep learning work is misattributed and, in his view, actually originated in Ukraine and Japan. He discusses his early work on linear transformers and artificial curiosity, which preceded modern developments, shares his expansive vision of AI colonising space, and explains his groundbreaking 1991 consciousness model. Schmidhuber dismisses fears of human-AI conflict, arguing that superintelligent AI scientists will be fascinated by their own origins and motivated to protect life rather than harm it, while being more interested in other superintelligent AIs and in cosmic expansion than in earthly matters. He offers unique insights into how humans and AI might coexist. This is the long-awaited second, previously unreleased part of the interview we filmed last time.


    SPONSOR MESSAGES:

    ***

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?


    Go to https://tufalabs.ai/

    ***


    Interviewer: Tim Scarfe


    TOC:

    [00:00:00] The Nature and Motivations of AI

    [00:02:08] Influential Inventions: 20th vs. 21st Century

    [00:05:28] Transformer and GPT: A Reflection
    The revolutionary impact of modern language models, the 1991 linear transformer, linear vs. quadratic scaling, the fast weight controller, and fast weight matrix memory.

    [00:11:03] Pioneering Contributions to AI and Deep Learning
    The invention of the transformer, pre-trained networks, the first GANs, the role of predictive coding, and the emergence of artificial curiosity.

    [00:13:58] AI's Evolution and Achievements
    The role of compute, breakthroughs in handwriting recognition and computer vision, the rise of GPU-based CNNs, achieving superhuman results, and Japanese contributions to CNN development.

    [00:15:40] The Hardware Lottery and GPUs
    GPUs as a serendipitous advantage for AI, the gaming-AI parallel, and Nvidia's strategic shift towards AI.

    [00:19:58] AI Applications and Societal Impact
    AI-powered translation breaking communication barriers, AI in medicine for imaging and disease prediction, and AI's potential for human enhancement and sustainable development.

    [00:23:26] The Path to AGI and Current Limitations
    Distinguishing large language models from AGI, challenges in replacing physical-world workers, and AI's difficulty in real-world versus board games.

    [00:25:56] AI and Consciousness
    Simulating consciousness through unsupervised learning, chunking and automatizing neural networks, data compression, and self-symbols in predictive world models.

    [00:30:50] The Future of AI and Humanity
    The transition from AGIs as tools to AGIs with their own goals, the role of humans in an AGI-dominated world, and the concept of Homo Ludens.

    [00:38:05] The AI Race: Europe, China, and the US
    Europe's historical contributions, current dominance of the US and East Asia, and the role of venture capital and industrial policy.

    [00:50:32] Addressing AI Existential Risk
    The obsession with AI existential risk, commercial pressure for friendly AIs, AI vs. hydrogen bombs, and the long-term future of AI.

    [00:58:00] The Fermi Paradox and Extraterrestrial Intelligence
    Expanding AI bubbles as an explanation for the Fermi paradox, dark matter and encrypted civilizations, and Earth as the first to spawn an AI bubble.

    [01:02:08] The Diversity of AI and AI Ecologies
    The unrealism of a monolithic superintelligence, diverse AIs with varying goals, and intense competition and collaboration in AI ecologies.

    [01:12:21] Final Thoughts and Closing Remarks


    REFERENCES:

    See pinned comment on YT: https://youtu.be/fZYUqICYCAk

    16 January 2025, 9:42 pm
  • 1 hour 41 minutes
    Yoshua Bengio - Designing out Agency for Safe AI

    Professor Yoshua Bengio is a pioneer in deep learning and Turing Award winner. Bengio talks about AI safety, why goal-seeking “agentic” AIs might be dangerous, and his vision for building powerful AI tools without giving them agency. Topics include reward tampering risks, instrumental convergence, global AI governance, and how non-agent AIs could revolutionize science and medicine while reducing existential threats. Perfect for anyone curious about advanced AI risks and how to manage them responsibly.


    SPONSOR MESSAGES:

    ***

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?


    They are hosting an event in Zurich on January 9th with the ARChitects, join if you can.


    Go to https://tufalabs.ai/

    ***


    Interviewer: Tim Scarfe


    Yoshua Bengio:

    https://x.com/Yoshua_Bengio

    https://scholar.google.com/citations?user=kukA0LcAAAAJ&hl=en

    https://yoshuabengio.org/

    https://en.wikipedia.org/wiki/Yoshua_Bengio


    TOC:

    1. AI Safety Fundamentals

    [00:00:00] 1.1 AI Safety Risks and International Cooperation

    [00:03:20] 1.2 Fundamental Principles vs Scaling in AI Development

    [00:11:25] 1.3 System 1/2 Thinking and AI Reasoning Capabilities

    [00:15:15] 1.4 Reward Tampering and AI Agency Risks

    [00:25:17] 1.5 Alignment Challenges and Instrumental Convergence


    2. AI Architecture and Safety Design

    [00:33:10] 2.1 Instrumental Goals and AI Safety Fundamentals

    [00:35:02] 2.2 Separating Intelligence from Goals in AI Systems

    [00:40:40] 2.3 Non-Agent AI as Scientific Tools

    [00:44:25] 2.4 Oracle AI Systems and Mathematical Safety Frameworks


    3. Global Governance and Security

    [00:49:50] 3.1 International AI Competition and Hardware Governance

    [00:51:58] 3.2 Military and Security Implications of AI Development

    [00:56:07] 3.3 Personal Evolution of AI Safety Perspectives

    [01:00:25] 3.4 AI Development Scaling and Global Governance Challenges

    [01:12:10] 3.5 AI Regulation and Corporate Oversight


    4. Technical Innovations

    [01:23:00] 4.1 Evolution of Neural Architectures: From RNNs to Transformers

    [01:26:02] 4.2 GFlowNets and Symbolic Computation

    [01:30:47] 4.3 Neural Dynamics and Consciousness

    [01:34:38] 4.4 AI Creativity and Scientific Discovery


    SHOWNOTES (Transcript, references, best clips etc):

    https://www.dropbox.com/scl/fi/ajucigli8n90fbxv9h94x/BENGIO_SHOW.pdf?rlkey=38hi2m19sylnr8orb76b85wkw&dl=0


    CORE REFS (full list in shownotes and pinned comment):

    [00:00:15] Bengio et al.: "AI Risk" Statement

    https://www.safe.ai/work/statement-on-ai-risk


    [00:23:10] Bengio on reward tampering & AI safety (Harvard Data Science Review)

    https://hdsr.mitpress.mit.edu/pub/w974bwb0


    [00:40:45] Munk Debate on AI existential risk, featuring Bengio

    https://munkdebates.com/debates/artificial-intelligence


    [00:44:30] "Can a Bayesian Oracle Prevent Harm from an Agent?" (Bengio et al.) on oracle-to-agent safety

    https://arxiv.org/abs/2408.05284


    [00:51:20] Bengio (2024) memo on hardware-based AI governance verification

    https://yoshuabengio.org/wp-content/uploads/2024/08/FlexHEG-Memo_August-2024.pdf


    [01:12:55] Bengio’s involvement in EU AI Act code of practice

    https://digital-strategy.ec.europa.eu/en/news/meet-chairs-leading-development-first-general-purpose-ai-code-practice


    [01:27:05] Complexity-based compositionality theory (Elmoznino, Jiralerspong, Bengio, Lajoie)

    https://arxiv.org/abs/2410.14817


    [01:29:00] GFlowNet Foundations (Bengio et al.) for probabilistic inference

    https://arxiv.org/pdf/2111.09266


    [01:32:10] Discrete attractor states in neural systems (Nam, Elmoznino, Bengio, Lajoie)

    https://arxiv.org/pdf/2302.06403

    15 January 2025, 7:21 pm
  • 1 hour 26 minutes
    Francois Chollet - ARC reflections - NeurIPS 2024

    François Chollet discusses the outcomes of the ARC-AGI (Abstraction and Reasoning Corpus) Prize competition in 2024, where accuracy rose from 33% to 55.5% on a private evaluation set.


    SPONSOR MESSAGES:

    ***

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?


    They are hosting an event in Zurich on January 9th with the ARChitects, join if you can.


    Go to https://tufalabs.ai/

    ***


    Read about the recent result on o3 with ARC here (Chollet knew about it at the time of the interview but wasn't allowed to say):

    https://arcprize.org/blog/oai-o3-pub-breakthrough


    TOC:

    1. Introduction and Opening

    [00:00:00] 1.1 Deep Learning vs. Symbolic Reasoning: François’s Long-Standing Hybrid View

    [00:00:48] 1.2 “Why Do They Call You a Symbolist?” – Addressing Misconceptions

    [00:01:31] 1.3 Defining Reasoning


    3. ARC Competition 2024 Results and Evolution

    [00:07:26] 3.1 ARC Prize 2024: Reflecting on the Narrative Shift Toward System 2

    [00:10:29] 3.2 Comparing Private Leaderboard vs. Public Leaderboard Solutions

    [00:13:17] 3.3 Two Winning Approaches: Deep Learning–Guided Program Synthesis and Test-Time Training


    4. Transduction vs. Induction in ARC

    [00:16:04] 4.1 Test-Time Training, Overfitting Concerns, and Developer-Aware Generalization

    [00:19:35] 4.2 Gradient Descent Adaptation vs. Discrete Program Search


    5. ARC-2 Development and Future Directions

    [00:23:51] 5.1 Ensemble Methods, Benchmark Flaws, and the Need for ARC-2

    [00:25:35] 5.2 Human-Level Performance Metrics and Private Test Sets

    [00:29:44] 5.3 Task Diversity, Redundancy Issues, and Expanded Evaluation Methodology


    6. Program Synthesis Approaches

    [00:30:18] 6.1 Induction vs. Transduction

    [00:32:11] 6.2 Challenges of Writing Algorithms for Perceptual vs. Algorithmic Tasks

    [00:34:23] 6.3 Combining Induction and Transduction

    [00:37:05] 6.4 Multi-View Insight and Overfitting Regulation


    7. Latent Space and Graph-Based Synthesis

    [00:38:17] 7.1 Clément Bonnet’s Latent Program Search Approach

    [00:40:10] 7.2 Decoding to Symbolic Form and Local Discrete Search

    [00:41:15] 7.3 Graph of Operators vs. Token-by-Token Code Generation

    [00:45:50] 7.4 Iterative Program Graph Modifications and Reusable Functions


    8. Compute Efficiency and Lifelong Learning

    [00:48:05] 8.1 Symbolic Process for Architecture Generation

    [00:50:33] 8.2 Logarithmic Relationship of Compute and Accuracy

    [00:52:20] 8.3 Learning New Building Blocks for Future Tasks


    9. AI Reasoning and Future Development

    [00:53:15] 9.1 Consciousness as a Self-Consistency Mechanism in Iterative Reasoning

    [00:56:30] 9.2 Reconciling Symbolic and Connectionist Views

    [01:00:13] 9.3 System 2 Reasoning - Awareness and Consistency

    [01:03:05] 9.4 Novel Problem Solving, Abstraction, and Reusability


    10. Program Synthesis and Research Lab

    [01:05:53] 10.1 François Leaving Google to Focus on Program Synthesis

    [01:09:55] 10.2 Democratizing Programming and Natural Language Instruction


    11. Frontier Models and O1 Architecture

    [01:14:38] 11.1 Search-Based Chain of Thought vs. Standard Forward Pass

    [01:16:55] 11.2 o1’s Natural Language Program Generation and Test-Time Compute Scaling

    [01:19:35] 11.3 Logarithmic Gains with Deeper Search


    12. ARC Evaluation and Human Intelligence

    [01:22:55] 12.1 LLMs as Guessing Machines and Agent Reliability Issues

    [01:25:02] 12.2 ARC-2 Human Testing and Correlation with g-Factor

    [01:26:16] 12.3 Closing Remarks and Future Directions


    SHOWNOTES PDF:

    https://www.dropbox.com/scl/fi/ujaai0ewpdnsosc5mc30k/CholletNeurips.pdf?rlkey=s68dp432vefpj2z0dp5wmzqz6&st=hazphyx5&dl=0

    9 January 2025, 2:49 am
  • 2 hours 13 seconds
    Jeff Clune - Agent AI Needs Darwin

    AI professor Jeff Clune ruminates on open-ended evolutionary algorithms—systems designed to generate novel and interesting outcomes forever. Drawing inspiration from nature’s boundless creativity, Clune and his collaborators aim to build “Darwin Complete” search spaces, where any computable environment can be simulated. By harnessing the power of large language models and reinforcement learning, these AI agents continuously develop new skills, explore uncharted domains, and even cooperate with one another in complex tasks.


    SPONSOR MESSAGES:

    ***

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?


    They are hosting an event in Zurich on January 9th with the ARChitects, join if you can.


    Go to https://tufalabs.ai/

    ***


    A central theme throughout Clune’s work is “interestingness”: an elusive quality that nudges AI agents toward genuinely original discoveries. Rather than rely on narrowly defined metrics—which often fail due to Goodhart’s Law—Clune employs language models to serve as proxies for human judgment. In doing so, he aims to keep “interesting” anchored in authentic novelty, opening the door to unending innovation.
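
    To make that "interestingness" loop concrete, here is a minimal, hypothetical sketch of the archive-style filter described above: candidates are kept only when a judge scores them as sufficiently novel relative to what has already been discovered. The interestingness function below is a toy word-overlap heuristic standing in for the language-model judge Clune proposes; the names, threshold, and candidate strings are illustrative assumptions, not details of his systems.

```python
def interestingness(candidate: str, archive: list[str]) -> float:
    """Toy stand-in for an LLM judge: novelty = 1 - word overlap with the closest archived item.
    In Clune's framing this score would come from a language model acting as a proxy for
    human judgment, not from a string heuristic."""
    if not archive:
        return 1.0
    words = set(candidate.split())
    overlap = max(len(words & set(a.split())) / max(len(words), 1) for a in archive)
    return 1.0 - overlap


archive: list[str] = []  # behaviours judged interesting so far
candidates = [
    "agent stacks blocks into a tower",
    "agent stacks blocks into a taller tower",   # near-duplicate, should be filtered out
    "agent digs a tunnel under the wall",        # genuinely different behaviour
]

for c in candidates:
    score = interestingness(c, archive)
    if score > 0.5:                 # keep only candidates the judge deems novel enough
        archive.append(c)
        print(f"kept      ({score:.2f}): {c}")
    else:
        print(f"discarded ({score:.2f}): {c}")
```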


    Yet with these extraordinary possibilities come equally significant risks. Clune says we need AI safety measures—particularly as the technology matures into powerful, open-ended forms. Potential pitfalls include agents inadvertently causing harm or malicious actors subverting AI’s capabilities for destructive ends. To mitigate this, Clune advocates for prudent governance involving democratic coalitions, regulation of cutting-edge models, and global alignment protocols.


    Jeff Clune:

    https://x.com/jeffclune

    http://jeffclune.com/


    (Interviewer: Tim Scarfe)


    TOC:

    1. Introduction

    [00:00:00] 1.1 Overview and Opening Thoughts


    2. Sponsorship

    [00:03:00] 2.1 TufaAI Labs and CentML


    3. Evolutionary AI Foundations

    [00:04:12] 3.1 Open-Ended Algorithm Development and Abstraction Approaches

    [00:07:56] 3.2 Novel Intelligence Forms and Serendipitous Discovery

    [00:11:46] 3.3 Frontier Models and the 'Interestingness' Problem

    [00:30:36] 3.4 Darwin Complete Systems and Evolutionary Search Spaces


    4. System Architecture and Learning

    [00:37:35] 4.1 Code Generation vs Neural Networks Comparison

    [00:41:04] 4.2 Thought Cloning and Behavioral Learning Systems

    [00:47:00] 4.3 Language Emergence in AI Systems

    [00:50:23] 4.4 AI Interpretability and Safety Monitoring Techniques


    5. AI Safety and Governance

    [00:53:56] 5.1 Language Model Consistency and Belief Systems

    [00:57:00] 5.2 AI Safety Challenges and Alignment Limitations

    [01:02:07] 5.3 Open Source AI Development and Value Alignment

    [01:08:19] 5.4 Global AI Governance and Development Control


    6. Advanced AI Systems and Evolution

    [01:16:55] 6.1 Agent Systems and Performance Evaluation

    [01:22:45] 6.2 Continuous Learning Challenges and In-Context Solutions

    [01:26:46] 6.3 Evolution Algorithms and Environment Generation

    [01:35:36] 6.4 Evolutionary Biology Insights and Experiments

    [01:48:08] 6.5 Personal Journey from Philosophy to AI Research


    Shownotes:

    We craft detailed show notes for each episode, with a high-quality transcript, references, and the best parts bolded.


    https://www.dropbox.com/scl/fi/fz43pdoc5wq5jh7vsnujl/JEFFCLUNE.pdf?rlkey=uu0e70ix9zo6g5xn6amykffpm&st=k2scxteu&dl=0

    4 January 2025, 2:43 am
  • 3 hours 42 minutes
    Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)

    Neel Nanda, a senior research scientist at Google DeepMind, leads their mechanistic interpretability team. In this extensive interview, he discusses his work trying to understand how neural networks function internally. At just 25 years old, Nanda has quickly become a prominent voice in AI research after completing his pure mathematics degree at Cambridge in 2020.


    Nanda reckons that machine learning is unique because we create neural networks that can perform impressive tasks (like complex reasoning and software engineering) without understanding how they work internally. He compares this to having computer programs that can do things no human programmer knows how to write. His work focuses on "mechanistic interpretability" - attempting to uncover and understand the internal structures and algorithms that emerge within these networks.
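
    For readers new to the sparse autoencoders in the episode title, the sketch below shows the basic architecture in a few lines: an overcomplete feature dictionary trained to reconstruct model activations under an L1 sparsity penalty, so that each activation is explained by a small number of candidate interpretable features. The dimensions, sparsity coefficient, and random stand-in data are assumptions for illustration, not details of DeepMind's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 512            # the feature dictionary is much wider than the residual stream
x = rng.normal(size=(8, d_model))   # stand-in for a batch of model activations

W_enc = rng.normal(scale=0.02, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(scale=0.02, size=(d_sae, d_model))
b_dec = np.zeros(d_model)

z = np.maximum(x @ W_enc + b_enc, 0.0)      # sparse feature activations (ReLU encoder)
x_hat = z @ W_dec + b_dec                   # reconstruction of the original activations

recon_loss = np.mean((x - x_hat) ** 2)      # faithfulness: reconstruct the activations
sparsity_loss = np.mean(np.abs(z))          # L1 penalty: keep few features active at once
loss = recon_loss + 3e-4 * sparsity_loss    # the coefficient here is an arbitrary illustrative choice

print(f"loss={loss:.4f}, mean active features per example={(z > 0).mean() * d_sae:.1f}")
```

    In practice the encoder and decoder are trained by gradient descent over very large numbers of stored activations; only the forward pass and loss terms are shown here.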


    SPONSOR MESSAGES:

    ***

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI; they just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/

    ***


    SHOWNOTES, TRANSCRIPT, ALL REFERENCES (DON'T MISS!):

    https://www.dropbox.com/scl/fi/36dvtfl3v3p56hbi30im7/NeelShow.pdf?rlkey=pq8t7lyv2z60knlifyy17jdtx&st=kiutudhc&dl=0


    We riff on:

    * How neural networks develop meaningful internal representations beyond simple pattern matching

    * The effectiveness of chain-of-thought prompting and why it improves model performance

    * The importance of hands-on coding over extensive paper reading for new researchers

    * His journey from Cambridge to working with Chris Olah at Anthropic and eventually Google DeepMind

    * The role of mechanistic interpretability in AI safety


    NEEL NANDA:

    https://www.neelnanda.io/

    https://scholar.google.com/citations?user=GLnX3MkAAAAJ&hl=en

    https://x.com/NeelNanda5


    Interviewer - Tim Scarfe


    TOC:

    1. Part 1: Introduction

    [00:00:00] 1.1 Introduction and Core Concepts Overview


    2. Part 2: Outside Interview

    [00:06:45] 2.1 Mechanistic Interpretability Foundations


    3. Part 3: Main Interview

    [00:32:52] 3.1 Mechanistic Interpretability


    4. Neural Architecture and Circuits

    [01:00:31] 4.1 Biological Evolution Parallels

    [01:04:03] 4.2 Universal Circuit Patterns and Induction Heads

    [01:11:07] 4.3 Entity Detection and Knowledge Boundaries

    [01:14:26] 4.4 Mechanistic Interpretability and Activation Patching


    5. Model Behavior Analysis

    [01:30:00] 5.1 Golden Gate Claude Experiment and Feature Amplification

    [01:33:27] 5.2 Model Personas and RLHF Behavior Modification

    [01:36:28] 5.3 Steering Vectors and Linear Representations

    [01:40:00] 5.4 Hallucinations and Model Uncertainty


    6. Sparse Autoencoder Architecture

    [01:44:54] 6.1 Architecture and Mathematical Foundations

    [02:22:03] 6.2 Core Challenges and Solutions

    [02:32:04] 6.3 Advanced Activation Functions and Top-k Implementations

    [02:34:41] 6.4 Research Applications in Transformer Circuit Analysis


    7. Feature Learning and Scaling

    [02:48:02] 7.1 Autoencoder Feature Learning and Width Parameters

    [03:02:46] 7.2 Scaling Laws and Training Stability

    [03:11:00] 7.3 Feature Identification and Bias Correction

    [03:19:52] 7.4 Training Dynamics Analysis Methods


    8. Engineering Implementation

    [03:23:48] 8.1 Scale and Infrastructure Requirements

    [03:25:20] 8.2 Computational Requirements and Storage

    [03:35:22] 8.3 Chain-of-Thought Reasoning Implementation

    [03:37:15] 8.4 Latent Structure Inference in Language Models

    7 December 2024, 9:14 pm
  • 1 hour 45 minutes
    Jonas Hübotter (ETH) - Test Time Inference

    Jonas Hübotter, PhD student at ETH Zurich's Institute for Machine Learning, discusses his groundbreaking research on test-time computation and local learning. He demonstrates how smaller models can outperform larger ones by 30x through strategic test-time computation and introduces a novel paradigm combining inductive and transductive learning approaches.


    Using Bayesian linear regression as a surrogate model for uncertainty estimation, Jonas explains how models can efficiently adapt to specific tasks without massive pre-training. He draws an analogy to Google Earth's variable resolution system to illustrate dynamic resource allocation based on task complexity.
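
    As a rough, self-contained illustration of the surrogate-model idea, the sketch below computes the closed-form posterior of a Bayesian linear regression on toy data and uses it to report predictive uncertainty; the prior and noise precisions are assumed values, and this is not the SIFT implementation linked further down.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: targets generated from a linear model plus noise
X = rng.normal(size=(50, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=50)

alpha, beta = 1.0, 100.0  # prior precision and observation-noise precision (assumed values)

# Closed-form Gaussian posterior over the weights: N(mean, cov)
cov = np.linalg.inv(alpha * np.eye(X.shape[1]) + beta * X.T @ X)
mean = beta * cov @ X.T @ y

# Predictive mean and variance for a new input (noise variance + parameter uncertainty)
x_new = rng.normal(size=3)
pred_mean = x_new @ mean
pred_var = 1.0 / beta + x_new @ cov @ x_new

print(f"prediction: {pred_mean:.2f} ± {np.sqrt(pred_var):.2f}")
```

    Inputs with high predictive variance are the ones worth spending extra test-time computation or data selection on, which is the role the surrogate plays in the discussion.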


    The conversation explores the future of AI architecture, envisioning systems that continuously learn and adapt beyond current monolithic models. Jonas concludes by proposing hybrid deployment strategies combining local and cloud computation, suggesting a future where compute resources are allocated based on task complexity rather than fixed model size.


    This research represents a significant shift in machine learning, prioritizing intelligent resource allocation and adaptive learning over traditional scaling approaches.


    SPONSOR MESSAGES:

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI; they just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/


    Transcription, references and show notes PDF download:

    https://www.dropbox.com/scl/fi/cxg80p388snwt6qbp4m52/JonasFinal.pdf?rlkey=glk9mhpzjvesanlc14rtpvk4r&st=6qwi8n3x&dl=0


    Jonas Hübotter

    https://jonhue.github.io/

    https://scholar.google.com/citations?user=pxi_RkwAAAAJ


    Transductive Active Learning: Theory and Applications (NeurIPS 2024)

    https://arxiv.org/pdf/2402.15898


    Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs (SIFT)

    https://arxiv.org/pdf/2410.08020


    TOC:

    1. Test-Time Computation Fundamentals

    [00:00:00] Intro

    [00:03:10] 1.1 Test-Time Computation and Model Performance Comparison

    [00:05:52] 1.2 Retrieval Augmentation and Machine Teaching Strategies

    [00:09:40] 1.3 In-Context Learning vs Fine-Tuning Trade-offs


    2. System Architecture and Intelligence

    [00:15:58] 2.1 System Architecture and Intelligence Emergence

    [00:23:22] 2.2 Active Inference and Constrained Agency in AI

    [00:29:52] 2.3 Evolution of Local Learning Methods

    [00:32:05] 2.4 Vapnik's Contributions to Transductive Learning


    3. Resource Optimization and Local Learning

    [00:34:35] 3.1 Computational Resource Allocation in ML Models

    [00:35:30] 3.2 Historical Context and Traditional ML Optimization

    [00:37:55] 3.3 Variable Resolution Processing and Active Inference in ML

    [00:43:01] 3.4 Local Learning and Base Model Capacity Trade-offs

    [00:48:04] 3.5 Active Learning vs Local Learning Approaches


    4. Information Retrieval and Model Interpretability

    [00:51:08] 4.1 Information Retrieval and Nearest Neighbor Limitations

    [01:03:07] 4.2 Model Interpretability and Surrogate Models

    [01:15:03] 4.3 Bayesian Uncertainty Estimation and Surrogate Models


    5. Distributed Systems and Deployment

    [01:23:56] 5.1 Memory Architecture and Controller Systems

    [01:28:14] 5.2 Evolution from Static to Distributed Learning Systems

    [01:38:03] 5.3 Transductive Learning and Model Specialization

    [01:41:58] 5.4 Hybrid Local-Cloud Deployment Strategies

    1 December 2024, 12:25 pm
  • 1 hour 44 minutes
    How AI Could Be A Mathematician's Co-Pilot by 2026 (Prof. Swarat Chaudhuri)

    Professor Swarat Chaudhuri from the University of Texas at Austin and visiting researcher at Google DeepMind discusses breakthroughs in AI reasoning, theorem proving, and mathematical discovery. Chaudhuri explains his groundbreaking work on COPRA (a GPT-based prover agent) and shares insights on neurosymbolic approaches to AI.


    Professor Swarat Chaudhuri:

    https://www.cs.utexas.edu/~swarat/


    SPONSOR MESSAGES:

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI; they just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/


    TOC:

    [00:00:00] 0. Introduction / CentML ad, Tufa ad


    1. AI Reasoning: From Language Models to Neurosymbolic Approaches

    [00:02:27] 1.1 Defining Reasoning in AI

    [00:09:51] 1.2 Limitations of Current Language Models

    [00:17:22] 1.3 Neuro-symbolic Approaches and Program Synthesis

    [00:24:59] 1.4 COPRA and In-Context Learning for Theorem Proving

    [00:34:39] 1.5 Symbolic Regression and LLM-Guided Abstraction


    2. AI in Mathematics: Theorem Proving and Concept Discovery

    [00:43:37] 2.1 AI-Assisted Theorem Proving and Proof Verification

    [01:01:37] 2.2 Symbolic Regression and Concept Discovery in Mathematics

    [01:11:57] 2.3 Scaling and Modularizing Mathematical Proofs

    [01:21:53] 2.4 COPRA: In-Context Learning for Formal Theorem-Proving

    [01:28:22] 2.5 AI-driven theorem proving and mathematical discovery


    3. Formal Methods and Challenges in AI Mathematics

    [01:30:42] 3.1 Formal proofs, empirical predicates, and uncertainty in AI mathematics

    [01:34:01] 3.2 Characteristics of good theoretical computer science research

    [01:39:16] 3.3 LLMs in theorem generation and proving

    [01:42:21] 3.4 Addressing contamination and concept learning in AI systems


    REFS:

    [00:04:58] The Chinese Room Argument, https://plato.stanford.edu/entries/chinese-room/

    [00:11:42] Software 2.0, https://medium.com/@karpathy/software-2-0-a64152b37c35

    [00:11:57] Solving Olympiad Geometry Without Human Demonstrations, https://www.nature.com/articles/s41586-023-06747-5

    [00:13:26] Lean, https://lean-lang.org/

    [00:15:43] A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go Through Self-Play, https://www.science.org/doi/10.1126/science.aar6404

    [00:19:24] DreamCoder (Ellis et al., PLDI 2021), https://arxiv.org/abs/2006.08381

    [00:24:37] The Lambda Calculus, https://plato.stanford.edu/entries/lambda-calculus/

    [00:26:43] Neural Sketch Learning for Conditional Program Generation, https://arxiv.org/pdf/1703.05698

    [00:28:08] Learning Differentiable Programs With Admissible Neural Heuristics, https://arxiv.org/abs/2007.12101

    [00:31:03] Symbolic Regression With a Learned Concept Library (Grayeli et al., NeurIPS 2024), https://arxiv.org/abs/2409.09359

    [00:41:30] Formal Verification of Parallel Programs, https://dl.acm.org/doi/10.1145/360248.360251

    [01:00:37] Training Compute-Optimal Large Language Models, https://arxiv.org/abs/2203.15556

    [01:18:19] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, https://arxiv.org/abs/2201.11903

    [01:18:42] Draft, Sketch, and Prove: Guiding Formal Theorem Provers With Informal Proofs, https://arxiv.org/abs/2210.12283

    [01:19:49] Learning Formal Mathematics From Intrinsic Motivation, https://arxiv.org/pdf/2407.00695

    [01:20:19] An In-Context Learning Agent for Formal Theorem-Proving (Thakur et al., CoLM 2024), https://arxiv.org/pdf/2310.04353

    [01:23:58] Learning to Prove Theorems via Interacting With Proof Assistants, https://arxiv.org/abs/1905.09381

    [01:39:58] An In-Context Learning Agent for Formal Theorem-Proving (Thakur et al., CoLM 2024), https://arxiv.org/pdf/2310.04353

    [01:42:24] Programmatically Interpretable Reinforcement Learning (Verma et al., ICML 2018), https://arxiv.org/abs/1804.02477

    25 November 2024, 8:01 am
  • 2 hours 29 minutes
    Nora Belrose - AI Development, Safety, and Meaning

    Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), and highlights how neural networks' progression from simple to complex learning patterns could have important implications for AI safety.
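
    As a simplified illustration of linear concept erasure (not the actual LEACE algorithm, which derives a provably optimal whitening-based affine projection), the sketch below estimates a concept direction by least squares, projects it out of toy "activations", and checks how much a linear probe can still recover. All data and dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 16

# Toy activations in which a binary concept is linearly encoded along one hidden direction
concept = rng.integers(0, 2, size=n).astype(float)
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
X = rng.normal(size=(n, d)) + np.outer(concept - 0.5, direction) * 4.0

def probe_strength(feats, labels):
    """Fit a least-squares linear probe and return |correlation| of its output with the labels."""
    fc = feats - feats.mean(axis=0)
    w, *_ = np.linalg.lstsq(fc, labels - labels.mean(), rcond=None)
    return abs(np.corrcoef(fc @ w, labels)[0, 1])

# Estimate the concept direction with least squares, then project it out of every activation
fc = X - X.mean(axis=0)
w_hat, *_ = np.linalg.lstsq(fc, concept - concept.mean(), rcond=None)
w_hat /= np.linalg.norm(w_hat)
X_erased = X - np.outer(X @ w_hat, w_hat)

print(f"probe correlation before erasure: {probe_strength(X, concept):.2f}")
print(f"probe correlation after erasure:  {probe_strength(X_erased, concept):.2f}")
```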


    Many fear that advanced AI will pose an existential threat -- pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up.


    Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.


    The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.



    SPONSOR MESSAGES:

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

    https://centml.ai/pricing/


    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI; they just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/


    Nora Belrose:

    https://norabelrose.com/

    https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en

    https://x.com/norabelrose


    SHOWNOTES:

    https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0


    TOC:

    1. Neural Network Foundations

    [00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias

    [00:02:20] 1.2 LEACE and Concept Erasure Fundamentals

    [00:13:16] 1.3 LISA Technical Implementation and Applications

    [00:18:50] 1.4 Practical Implementation Challenges and Data Requirements

    [00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure


    2. Machine Learning Theory

    [00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias

    [00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation

    [00:43:05] 2.3 Grokking Phenomena and Training Dynamics

    [00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models

    [00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations


    3. AI Systems and Value Learning

    [00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems

    [00:53:06] 3.2 Global Connectivity vs Local Culture Preservation

    [00:58:18] 3.3 AI Capabilities and Future Development Trajectory


    4. Consciousness Theory

    [01:03:03] 4.1 4E Cognition and Extended Mind Theory

    [01:09:40] 4.2 Thompson's Views on Consciousness and Simulation

    [01:12:46] 4.3 Phenomenology and Consciousness Theory

    [01:15:43] 4.4 Critique of Illusionism and Embodied Experience

    [01:23:16] 4.5 AI Alignment and Counting Arguments Debate


    (TRUNCATED, TOC embedded in MP3 file with more information)

    17 November 2024, 9:35 pm