Arxiv Papers

Igor Melnyk

Running out of time to catch up with new arXiv papers? We take the most impactful papers and present them as convenient podcasts. If you're a visual learner, we also offer these papers in an engaging video format. Our service fills the gap between overly brief paper summaries and time-consuming full reads, giving you academic insights in a digestible, time-efficient format. Code behind this work: https://github.com/imelnyk/ArxivPapers | Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/support

  • 8 minutes 55 seconds
    [QA] On the Theoretical Limitations of Embedding-Based Retrieval





    https://arxiv.org/abs/2508.21038


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    1 September 2025, 7:16 pm
  • 23 minutes 17 seconds
    On the Theoretical Limitations of Embedding-Based Retrieval





    https://arxiv.org/abs/2508.21038


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    1 September 2025, 7:15 pm
  • 7 minutes 3 seconds
    [QA] Beyond GPT-5: Making LLMs Cheaper and Better via Performance–Efficiency Optimized Routing



    Avengers-Pro is a test-time routing framework that optimizes performance and efficiency in LLMs, achieving state-of-the-art results by dynamically assigning queries to suitable models based on performance-efficiency scores.
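
    A minimal sketch of the routing idea in this summary, written in Python with hypothetical model names and made-up accuracy/cost figures; the paper's actual scoring and query clustering are more involved.

    # Hypothetical performance-efficiency routing sketch (not the paper's code).
    # Each candidate model gets a score that trades estimated quality against cost,
    # and the query is sent to the highest-scoring model. A real router would also
    # condition the accuracy estimate on the query; here it is a fixed number.
    candidates = {
        # model name: (estimated_accuracy, cost_per_1k_tokens) -- illustrative values
        "large-model":  (0.92, 0.0100),
        "medium-model": (0.85, 0.0020),
        "small-model":  (0.78, 0.0004),
    }

    def route(query: str, alpha: float = 0.5) -> str:
        """Pick the model with the best performance-efficiency score.
        alpha near 1 favors accuracy; alpha near 0 favors low cost."""
        max_cost = max(cost for _, cost in candidates.values())
        def score(name: str) -> float:
            acc, cost = candidates[name]
            return alpha * acc + (1 - alpha) * (1 - cost / max_cost)
        return max(candidates, key=score)

    print(route("Summarize this arXiv abstract.", alpha=0.7))  # prints the chosen model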


    https://arxiv.org/abs/2508.12631


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    22 August 2025, 10:20 pm
  • 9 minutes 39 seconds
    Beyond GPT-5: Making LLMs Cheaper and Better via Performance–Efficiency Optimized Routing



    Avengers-Pro is a test-time routing framework that optimizes performance and efficiency in LLMs, achieving state-of-the-art results by dynamically assigning queries to suitable models based on performance-efficiency scores.


    https://arxiv.org/abs/2508.12631


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    22 August 2025, 10:20 pm
  • 8 minutes 17 seconds
    [QA] Measuring the environmental impact of delivering AI at Google Scale



    https://arxiv.org/abs/2508.15734


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    22 August 2025, 10:14 pm
  • 22 minutes 9 seconds
    Measuring the environmental impact of delivering AI at Google Scale



    https://arxiv.org/abs/2508.15734


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    22 August 2025, 10:12 pm
  • 7 minutes 36 seconds
    [QA] Deep Think with Confidence



    DeepConf enhances reasoning efficiency and performance in Large Language Models by filtering low-quality traces using internal confidence signals, achieving high accuracy and reduced token generation without extra training.
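
    A rough illustration of the confidence-filtered voting described above, with placeholder traces and a simple mean-logprob confidence; this is not DeepConf's actual implementation.

    from collections import Counter

    # Each trace is (answer, token_logprobs). A trace's confidence is its mean
    # token log-probability; only the most confident traces vote on the final answer.
    def trace_confidence(token_logprobs):
        return sum(token_logprobs) / len(token_logprobs)

    def filtered_vote(traces, keep_fraction=0.5):
        ranked = sorted(traces, key=lambda t: trace_confidence(t[1]), reverse=True)
        kept = ranked[: max(1, int(len(ranked) * keep_fraction))]
        return Counter(answer for answer, _ in kept).most_common(1)[0][0]

    traces = [
        ("42", [-0.1, -0.2, -0.1]),   # high-confidence trace
        ("42", [-0.3, -0.2, -0.4]),
        ("17", [-2.5, -3.1, -2.8]),   # low-confidence trace, filtered out
    ]
    print(filtered_vote(traces))  # -> "42"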


    https://arxiv.org/abs/2508.15260


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    22 August 2025, 3:26 am
  • 18 minutes 34 seconds
    Deep Think with Confidence



    DeepConf enhances reasoning efficiency and performance in Large Language Models by filtering low-quality traces using internal confidence signals, achieving high accuracy and reduced token generation without extra training.


    https://arxiv.org/abs/2508.15260


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    22 August 2025, 3:25 am
  • 8 minutes 33 seconds
    [QA] Intern-S1: A Scientific Multimodal Foundation Model



    Intern-S1 is a multimodal model that excels in scientific tasks, outperforming both open-source and closed-source models, and aims to bridge the gap in high-value scientific research.


    https://arxiv.org/abs/2508.15763


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    22 August 2025, 3:22 am
  • 49 minutes 42 seconds
    Intern-S1: A Scientific Multimodal Foundation Model



    Intern-S1 is a multimodal model that excels in scientific tasks, outperforming both open-source and closed-source models, and aims to bridge the gap in high-value scientific research.


    https://arxiv.org/abs/2508.15763


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    22 August 2025, 3:22 am
  • 7 minutes 2 seconds
    [QA] Search-Time Data Contamination



    The paper identifies search-time contamination (STC) in evaluating search-based LLM agents, revealing how data leaks compromise benchmark integrity and proposing best practices for trustworthy evaluations.


    https://arxiv.org/abs/2508.13180


    YouTube: https://www.youtube.com/@ArxivPapers


    TikTok: https://www.tiktok.com/@arxiv_papers


    Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016


    Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


    20 August 2025, 3:00 am