NerdOut@Spotify is a technology podcast produced by the nerds at Spotify and made for the nerd inside all of us. Hear from Spotify engineers about challenging tech problems and get a firsthand look into what we're doing, what we're building, and what we’re nerding out about at Spotify every day.
Register for Spotify’s roadmap webinar on April 30, 2024 — and see what’s coming next from Spotify for Backstage, the open source platform for building internal developer portals. We’ll show you our latest developer tools, including a sneak peek at new Spotify Plugins for Backstage and a first look at Spotify Portal for Backstage — a full-featured developer portal that is quick and easy for any engineering org to adopt. See demos from Spotify’s team and learn how to apply for the private beta — work with us to build the next great developer portal: yours!
Host and principal engineer Dave Zolotusky has a quick chat with Helen Greul, head of engineering for Backstage at Spotify, about the event. They talk about the CNCF’s recent BackstageCon in Paris, the growing popularity of the Backstage platform, and why the roadmap webinar on April 30 isn’t one to miss for fans of developer experience and wizardry.
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng, LinkedIn, and YouTube!
Host and principal engineer Dave Zolotusky talks with Kyle Buttner, a product manager on Spotify’s insights team, about Spotify's journey in measuring developer productivity — from how we evaluate different frameworks (like DORA and SPACE), to what kind of data we collect, to the role Backstage plays in unifying our development practices. Can productivity metrics really draw an accurate picture of your engineering org and show you the way to happier and more productive developers?
Learn more about how we measure developer happiness and productivity at Spotify:
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng, LinkedIn, and YouTube!
How did we learn to do event delivery at scale at Spotify? It’s been a journey. When you do something like tap the play button in the Spotify app, that’s an event. And getting that event data is fundamental to the Spotify experience. Without it, we wouldn’t be able to make music recommendations, pay artists fairly, or track down pesky, hard-to-find bugs. At the most basic level, this seems like a straightforward process: record an event, send that event data to a server somewhere, do something useful with it. Easy, right? But now, multiply that process by 50 million events per second. So, how do we make sure all that important data is delivered reliably, from our client apps to the cloud?
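At its core, reliable delivery means an event isn’t forgotten until the server confirms it arrived. Here’s a minimal sketch of that at-least-once pattern — a toy illustration, not Spotify’s actual pipeline (the class and field names are invented for this example):

```python
import uuid

class EventQueue:
    """Toy at-least-once delivery: events stay queued until the server acks them."""

    def __init__(self, transport):
        self.transport = transport  # callable: event -> bool (True means acked)
        self.pending = []           # events persisted until acknowledged

    def record(self, event_type, payload):
        # A unique ID lets the server deduplicate events that get retried.
        event = {"id": str(uuid.uuid4()), "type": event_type, "payload": payload}
        self.pending.append(event)
        return event["id"]

    def flush(self, max_retries=3):
        delivered = []
        for event in list(self.pending):
            for _ in range(max_retries):
                if self.transport(event):  # server acknowledged receipt
                    self.pending.remove(event)
                    delivered.append(event["id"])
                    break
        return delivered
```

The hard part, of course, is doing this for 50 million events per second without losing (or double-counting) anything — which is what the episode is about.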
Host and principal engineer Dave Zolotusky talks with 9-year Spotify veteran Riccardo Petrocco about our journey building an event delivery system that can reliably handle a trillion events around the world, moving from Kafka to the cloud, building systems simple enough that nobody tries to work around them and that encourage “doing the right thing”, the definition of “quality data”, the value of moving up the stack and focusing less on the data pipes and more on what’s in them, and how Backstage makes it easier for our developers to discover, consume, produce, and manage data.
Learn more about Spotify’s data journey:
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng, LinkedIn, and YouTube!
We’ve seen generative AI and large language models do some amazing things in the past year — but how do you go from a tech demo to a real shipping product? In this Release Notes episode of the NerdOut@Spotify podcast, we’ll hear about what it took to ship our Voice Translation pilot, which takes podcasts recorded in English and uses AI to generate the original podcaster’s voice speaking in Spanish (with German and French coming next).
Host Dave Zolotusky talks with senior machine learning engineering manager Sandeep Ghael about how we brought expertise from across the company in order to go from a weekend prototype to releasing fully translated episodes of Lex Fridman, Armchair Expert, and other podcasts — in just six weeks.
Read more about Voice Translation for podcasts:
Hear the results on Spotify:
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng and on LinkedIn!
Over the summer, Spotify helped Tesla engineers ship a major update to their built-in media player. In this Release Notes episode, host Dave Zolotusky talks with Spotify engineering manager Geetika Arora and senior product designer JC Chhim about collaborating with Tesla to improve the in-car listening experience, the value of having a familiar user experience across devices, and how there’s more to a great collaboration than just picking the right SDK for the job.
Introducing Release Notes — a new series of mini episodes on the NerdOut@Spotify podcast. There are hundreds of teams at Spotify working on so many different things — from playlists that change throughout the day, to realistic voice translations, to a smarter way to shuffle songs. In each episode of Release Notes, we focus on one thing we shipped and what went into building it. You’ll see these mini episodes from time to time in the main podcast feed right alongside our regular episodes.
Learn more about our SDKs on the Spotify for Developers site: developer.spotify.com
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng and on LinkedIn!
How do you get a machine to find a song that’s similar to another song? What properties of the song should it look for? And then does it just compare each track to every other track, one by one, until it finds the closest match? When you have a catalog of 100 million different music tracks, like we do at Spotify, that would take a long time. So, for these kinds of problems, we use a technique known as nearest neighbor search (NNS). This past summer at Spotify, we built a new library for nearest neighbor search: It’s called Voyager — and we open sourced it.
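The brute-force version of the problem is easy to write down: measure the distance from your query to every track and keep the closest. Here’s a toy sketch (the “audio feature” vectors are invented for illustration) — the whole point of libraries like Voyager is to avoid this O(n) scan at catalog scale by using an approximate index instead:

```python
import math

def nearest_neighbor(query, catalog):
    """Exact nearest neighbor search: compare the query against every item,
    one by one. Fine for a toy catalog; at 100 million tracks you'd use an
    approximate index (HNSW graphs, as in Voyager) instead of this full scan."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(catalog, key=lambda item: dist(query, item[1]))

# Toy "audio feature" vectors: (danceability, energy) -- illustrative only
tracks = [
    ("Track A", (0.90, 0.80)),
    ("Track B", (0.20, 0.30)),
    ("Track C", (0.85, 0.75)),
]
```

Approximate methods trade a tiny bit of accuracy for enormous speedups — which is exactly the trade-off the episode digs into.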
Host and principal engineer Dave Zolotusky talks with Peter Sobot and Mark Koh, two of the machine learning engineers who developed Voyager. They discuss using nearest neighbor search for recommendations and personalization, how to go from searching for vectors in a 2D space to searching for them in a space with thousands of dimensions, the relative funkiness and danceability of Mozart and Bach, how to find a place on a map when you don’t have the exact coordinates, tricky acronyms (Annoy: “Approximate Nearest Neighbor Oh Yeah”) and initialisms (HNSW: “Hierarchical Navigable Small World”), why we stopped using our old NNS library, why we open sourced the new one, how it works for use cases beyond music (like LLMs), and looking for ducks in grass.
Learn more about Spotify Voyager:
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng and on LinkedIn!
In the very old days, if you needed more storage for your database, you had to walk into the data center and install another server. Now you can just log into your cloud provider’s console and click a few buttons. Voilà, more storage. So easy! But what if you’re replicating that storage configuration for hundreds of databases at once? Suddenly, that’s a lot more clicking. Not so easy! (Plus, very tedious and very error prone.) So instead of living with this “ClickOps” approach, we developed a declarative infrastructure model — our very own “infrastructure as code” solution for managing cloud resources at Spotify scale. Instead of manually configuring each resource, developers just describe the state they want. And once we adopted declarative infra, we unlocked ways to improve not just how we manage resources, but also how we update policies, manage dependencies, and make other changes to code across our entire fleet of repos — quickly, safely, easily. In other words, programmatically.
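The declarative model boils down to a diff: developers state the resources they want, and a controller compares that against reality and computes the changes to make. A miniature sketch of that reconciliation idea (resource names and fields here are invented; real systems like Kubernetes run this as a continuous loop):

```python
def reconcile(desired, actual):
    """Diff the desired state against the actual state and return the
    create/update/delete operations needed to converge them."""
    to_create = {name: spec for name, spec in desired.items() if name not in actual}
    to_delete = [name for name in actual if name not in desired]
    to_update = {name: spec for name, spec in desired.items()
                 if name in actual and actual[name] != spec}
    return to_create, to_update, to_delete

# What the developer declares vs. what currently exists (illustrative)
desired = {"db-storage": {"size_gb": 500}, "cache": {"size_gb": 16}}
actual  = {"db-storage": {"size_gb": 100}, "old-queue": {"size_gb": 8}}
```

No clicking: the same declaration applied to one database or a thousand produces exactly the operations needed, every time.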
Host Dave Zolotusky talks with David Flemström — who went from pushing the limits of Spotify’s infrastructure as a feature developer to working on the platform team in order to improve infrastructure for all of our developers. The two Daves discuss what declarative infrastructure means at Spotify, our journey to adopting it (going from Puppet to cloud consoles, to something better than both) and why we did it, how our model works (Kubernetes!), how it changed the relationship between our feature teams and our platform teams, how this shift helped enable Fleet Management at Spotify, and where we’re going next with abstracting infrastructure so that it helps our engineers do more, more easily.
Learn more about declarative infrastructure and Fleet Management:
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng and on LinkedIn!
Last episode, we talked about ABBA, our first A/B testing tool. We used it to test UI changes, new features, content recommendations — anything and everything we could think of. ABBA was so good and worked so well for so long…that we decided to get rid of it. Years of using ABBA taught us what makes for good experimentation, and we eventually realized we needed a better tool, built from scratch. Listen to find out why we pulled the plug on ABBA and how Spotify’s Experimentation Platform was born. And in case you missed it, a version of our internal platform will be available to the public as Confidence, a new enterprise product for developer teams — read today’s announcement: “Coming Soon: Confidence — An Experimentation Platform from Spotify”.
But first, let’s talk buttons. Everyone always has so many questions about buttons. How do you know which color they should be? Or how big they should be? Or whether the corners should be round or square? The easy answer: an A/B test! But if only all product experimentation were as simple as testing buttons. Senior staff engineer Mark Grey returns to talk with host Dave Zolotusky, along with senior engineer Dima Kunin, who helped build Spotify’s Experimentation Platform and had the honor of finally retiring ABBA. They discuss the ins and outs of enabling experimentation at scale, including targeting criteria, controlling eligibility, the importance of measuring exposure, using properties instead of feature flags, the advantages of separating your app configuration from your experiments, fallback states, sample ratio mismatches — and all the other questions you have to answer about your experimentation process before you can even ask something as simple as “what color should a button be” — let alone “will this machine learning model consistently provide recommendations users appreciate over the next year”.
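Underneath every button test sits the unglamorous machinery of assignment. A common approach (an illustrative sketch, not necessarily how Spotify’s platform does it) is deterministic, salted hash bucketing: the same user always lands in the same group within an experiment, and changing the salt reshuffles buckets so experiments don’t correlate with each other:

```python
import hashlib

def assign(user_id, experiment, salt, treatment_pct=50):
    """Deterministically assign a user to treatment or control.
    Hashing user + experiment + salt gives every user a stable bucket in
    [0, 100); a different salt produces an independent reshuffle."""
    digest = hashlib.sha256(f"{user_id}:{experiment}:{salt}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "treatment" if bucket < treatment_pct else "control"
```

Stable assignment is what makes exposure measurable and sample ratio mismatches detectable: if a 50/50 split comes back looking like 60/40, something upstream is biasing who actually sees the experiment.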
Plus, did you definitely, positively, absolutely eat the bread? Or did you just buy the bread? And a bonus trick question: What’s the difference between “treatments”, “variants”, and “groups” — and why is it always so hard to name things?
Learn more about ABBA and its successor, Spotify’s Experimentation Platform:
Plus, find out lots more about how we do experimentation at Spotify on our engineering blog — including a little light reading on automated salting and bucket reuse, choosing sequential testing frameworks, comparing quantiles at scale, and how we scale other scientific best practices across the org.
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng and on LinkedIn!
Back in the day, Spotify built a custom A/B testing tool called ABBA. It was great. The platform enabled lots of teams to try out lots of ideas for new features to see what worked and what didn’t. With ABBA, we went from doing tens of experiments to hundreds of experiments. But we didn’t just learn what color button users liked better: the more tests we ran, the more we learned about our testing methods, including the limitations of ABBA itself — which eventually led us to a new, better way to test. Here’s the story of ABBA, our very first experimentation platform, and the lessons we learned about doing product experimentation at scale.
Host Dave Zolotusky talks with Mark Grey, a senior staff engineer and 10-year Spotify veteran. They discuss Spotify’s earliest efforts at product testing, our early infrastructure for data and data processing (using Hive and Hadoop), how migrating to the cloud unlocked more processing power (and more testing), the difference between using tests to design the color of a button and using tests to inform the very next user interaction via machine learning, feature flags and holdout groups, all the things we learned about conducting scientifically sound experiments, how we built a culture of experimentation among our software development teams, and what finally drove us to sunset ABBA and build its successor: a bigger, better internal experimentation platform. Plus, progress bars and lightsabers.
Read more about ABBA and how we do product experimentation at Spotify:
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng and on LinkedIn!
We’re often focused on features that improve the experience for users — letting them do something better, faster, smarter. But if you don’t consider accessibility issues, many users won’t be able to use the feature at all. From color contrast and text size to alt text for enabling screen readers and voice commands, accessibility issues can affect everyone, whether you have a permanent disability or just an armful of groceries. So how do you get developers and designers to adopt an accessibility mindset and make it a fundamental part of the development process?
Host Dave Zolotusky talks with Dani Devesa Derksen-Staats, an iOS engineer on Spotify’s accessibility team. We’ll hear how Dani went from a five-year computer science program, where accessibility wasn’t mentioned even once, to becoming so passionate about the topic, he wrote a book on it. They also talk about the basic things we forget to consider when we don’t consider accessibility; how we can all benefit from accessibility improvements, whether that’s getting up a curb in a wheelchair or while pushing a stroller; and ways to build accessibility into the development process, from adopting a multimodal approach to UX design to integrating accessibility tests into your CI/CD.
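Some accessibility checks are concrete enough to automate in CI. Color contrast, mentioned above, is one: WCAG 2.x defines a contrast ratio between two colors, and AA conformance asks for at least 4.5:1 for normal body text. A small sketch of that check (using the WCAG relative-luminance formula):

```python
def contrast_ratio(rgb1, rgb2):
    """WCAG 2.x contrast ratio between two sRGB colors (0-255 channels).
    Ranges from 1:1 (identical) to 21:1 (black on white)."""
    def luminance(rgb):
        def channel(c):
            c = c / 255
            # sRGB linearization per the WCAG relative-luminance definition
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    l1, l2 = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

A test like `assert contrast_ratio(text_color, background) >= 4.5` in your pipeline catches low-contrast regressions before they ship — one small example of making accessibility part of the process rather than an afterthought.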
Spotify recently introduced an Accessibility Center. Have questions or concerns about accessibility? Contact us:
More on accessibility from Dani:
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng and on LinkedIn!
How do you make an AI-generated voice feel more like a real person? You give it a real personality. Spotify recently released a new feature in the US and Canada called DJ. Turn it on, and you hear a curated selection of music and recommendations, along with a fun, friendly, knowledgeable voice telling you more about what you’re listening to (you know, like a real DJ) — except everything you hear is personalized just for you. Learn what makes this DJ feel so realistic and meet the people behind the technology — including the DJ himself.
Host Dave Zolotusky talks with product director Zeena Qureshi and director of engineering John Flynn — together they lead Speak, the team at Spotify responsible for DJ’s realistic, expressive voice. Hear how the AI voice technology they pioneered for Hollywood blockbusters and triple-A video games now brings Spotify’s personalized DJ to life, how they record a range of emotions to create deeper datasets for modeling DJ’s unique personality, the technical challenges of generating and delivering these dynamic performances at the press of a button, and how having your own personal audio guide brings a totally new dimension to the Spotify listening experience. You’ll also hear from the person who provides the raw ingredients for the AI DJ’s voice and soul: the real-life Xavier “X” Jernigan.
Learn more about Spotify’s personalized DJ and the technology behind it:
Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng and on LinkedIn!