In Episode 281, we introduced Microsoft OneLake with a high-level overview. Now we're going deeper with a discussion of the Parquet format, why Microsoft went with the Delta Lake variation, and what the Delta Lake format brings to the table (no pun intended). We'll also examine some "behind the scenes" aspects of file management and why you'll still be using the GUI to create most of your objects.
OneLake is Microsoft's solution to the demand for centralizing all data in one location, eliminating the need to transfer it across multiple systems. We expect this story to keep evolving, however, as we consider scenarios like data sovereignty, geographical data distribution, separation of subsidiary data, and even departmental budgets that may necessitate multiple instances of OneLake.
We round out our OneLake deep dive with a conversation on the Direct Lake mode option for connecting Power BI directly to data in OneLake, and Eugene shares his perspective on why everyone may not be rushing to jump on the bandwagon just yet.
We hope you enjoyed this deep dive into Microsoft OneLake! If you have questions or comments, please send them our way. We would love to answer your questions on a future episode. Leave us a comment and some love ❤️ on LinkedIn, X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 282: OneLake - A Deep Dive. Have fun on the SQL Trail!
As you start using Fabric, having a central location for your data is crucial. OneLake acts as this unified destination, offering a single, consolidated repository for all your data. In this podcast episode, we explore the core features of OneLake and its benefits with our guest, Mariano Kovo, and discuss how it efficiently handles large amounts of data from diverse sources. We'll also dive into the importance of how your data is presented to Azure services, focusing on the Delta Parquet format.
Did you know you can explore OneLake data directly through Windows Explorer? Microsoft aims to make a single copy of your data accessible across multiple services, eliminating the need for constant data movement. Shortcuts make it easier to access your data seamlessly within the OneLake environment, enhancing efficiency and accessibility.
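To make the "one copy" idea a little more concrete, here is a minimal sketch (table and column names are hypothetical) of querying a Delta table through a Lakehouse's SQL analytics endpoint. Whether the table was loaded directly or surfaced through a shortcut, the query reads the same Delta Parquet files in OneLake without creating another copy.

```sql
-- Hypothetical table and columns; run against the Lakehouse's SQL analytics endpoint.
-- A shortcut to data in another workspace or storage account shows up as just another table here.
SELECT TOP (10)
       CustomerId,
       SUM(OrderTotal) AS TotalSales
FROM dbo.SalesOrders
GROUP BY CustomerId
ORDER BY TotalSales DESC;
```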
We hope you enjoyed this foundational episode on Microsoft OneLake! If you have questions or comments, please send them our way. We would love to answer your questions on a future episode. Leave us a comment and some love ❤️ on LinkedIn, X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 281: OneLake - The OneDrive for Data. Have fun on the SQL Trail!
At the Microsoft Build Conference in May 2023, Microsoft announced the new Fabric platform, where you could slice and dice all your data harmoniously within one environment. A few months later, Kevin, Eugene, and I discussed this evolution of the Azure Data platform in Episode 267 and shared our thoughts on the vision for its future, our expectations, and our predictions.
Now, more than a year later, we decided it's a good time to take an in-depth look at the platform to see what goals have come to fruition, what predictions have come true, and what may have changed. In this introduction to Season 8, we'll get the conversation started.
In the next 10 episodes we'll be taking a deep dive into the reality of what Microsoft Fabric is today, navigating through the nuances, complexities, and sheer vastness of the product. We'll break it down into digestible chunks, each focused on a specific aspect of the platform.
If you have questions or comments about Microsoft Fabric, please send them our way. We would love to answer your questions on a future episode. Leave us a comment and some love ❤️ on LinkedIn, X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 280: A Focus on Microsoft Fabric. Have fun on the SQL Trail!
If you use SQL Server, you will eventually have to migrate that instance somewhere – to a new version, a new server, the cloud . . . somewhere.
Or perhaps you'll find yourself migrating from another database into SQL Server.
No matter which way you slice it, SQL Server migrations can be daunting, not to mention complex and time-consuming. While we know there are risks and many things that can go wrong, the "new" Microsoft continues to put time and effort towards making successful SQL Server migrations attainable for everyone.
In this episode of the podcast, we chat with Tejas Shah and Sudhir Raparla, two of the Microsoft project managers responsible for SQL Server migration tooling. They share practical perspectives on approaching your SQL migration with confidence, along with the tools and enhancements that will help.
During the conversation, Tejas and Sudhir also take us through the 5 migration steps they want you to consider as you undertake your SQL Server migration process.
Even though we’ve migrated thousands of databases, I had to go back and peek at a couple of the new features the migration tooling team has added. One intriguing addition is the Azure SQL Pricing repository, part of the SQL Server Migration Assistant, which can help determine costs based on industry standards, deployment recommendations, target sizing, and monthly savings for your unique scenario.
Let us know what you think! What SQL migration features have you come to trust and rely on? Did you get any good takeaways from today's podcast or have some questions? Leave us a comment and some love ❤️ on LinkedIn, X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 279: SQL Server Migrations Demystified. Have fun on the SQL Trail!
Can you run SQL Server on an Azure VM? Which VM is best? Is running SQL Server on a VM in Azure the right choice? Find out in this insightful episode with Anders Pedersen!
With over 10 different SQL Server services now offered in Microsoft Azure, it can be difficult to know how you want to run your environment. Sometimes, the old ways are the best ways for an organization, and running SQL Server on a VM in Azure is the right fit.
In this episode of the SQL Data Partners Podcast, we chat with Anders Pedersen about his experience moving his organization's systems to Azure VMs. We discuss some of the tiering issues, the newest storage tier being rolled out, and how he manages upgrades.
Join us for another informative podcast where a seasoned database administrator shares their experience of managing a SQL Server environment.
Did you get any good take-aways from today's podcast or have some questions? Leave us a comment and some love ❤️ on LinkedIn, X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 278: Running SQL Server on Azure VMs. Have fun on the SQL Trail!
Is testing out pgAdmin on your to-do list?
In this episode of the podcast, we chat with Ryan Booz, a PostgreSQL advocate at Redgate, about how a SQL Server professional might begin a dive into PostgreSQL, one of the most popular open source databases in the world.
Ryan came from a career background in SQL Server, but after experiencing his accidental "jump-into-the-deep-end" PostgreSQL moment, he hasn’t looked back.
Naturally, open source presents DBAs and their organizations with many desirable features, but there are certain drawbacks as well. Ryan shares how he navigated his transition into PostgreSQL and raises some points to consider if you are thinking about a switch. We discuss a few of the land mines you might encounter along the way as well as terminology differences in this space.
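As one small taste of the differences Ryan talks about, here is how the same "last seven days, newest first" query might look in each engine (table and column names are hypothetical):

```sql
-- SQL Server (T-SQL): TOP, bracketed identifiers, DATEADD/GETDATE for date math
SELECT TOP (10) [OrderId], [OrderDate]
FROM dbo.Orders
WHERE [OrderDate] >= DATEADD(DAY, -7, GETDATE())
ORDER BY [OrderDate] DESC;

-- PostgreSQL: LIMIT, snake_case identifiers by convention, interval arithmetic with now()
SELECT order_id, order_date
FROM orders
WHERE order_date >= now() - interval '7 days'
ORDER BY order_date DESC
LIMIT 10;
```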
Be sure to check out Planet PostgreSQL for the most recent blog posts from the very folks who contribute code to the PostgreSQL project.
Have you shifted from SQL Server to PostgreSQL? How'd it go? Did you get any good take-aways from today's podcast or have some questions? Leave us a comment and some love ❤️ on LinkedIn, X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 277: PostgreSQL for the SQL Server Crowd. Have fun on the SQL Trail!
Listener beware! This episode is full of danger as we tackle an interesting use case for Dynamic SQL. Dynamic SQL generally has a bad reputation in SQL Server circles, and with good reason: it can open the door to many undesirable results, SQL injection attacks being the most frightening of these. It can also be difficult to read, which makes maintenance problematic. In this episode, however, one brave soul, Marathon's own Laura Moss, explains how she uses Dynamic SQL to refresh a subset of production data for use in their development environments. You know we are always suckers for an interesting use case, and Laura delivers big time. While you won't be able to plug and play her example into your environment, we hope it gets the wheels turning if you struggle to update your test environments.
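Laura's actual solution is covered in the episode; as a rough illustration of the safer side of the technique, here is a minimal sketch (database, table, and parameter names are all hypothetical) that builds a statement at runtime while quoting identifiers with QUOTENAME and passing values through sp_executesql rather than concatenating them into the string:

```sql
-- Minimal sketch only: copy a filtered subset of one production table into a dev database.
-- DevDB, ProdDB, Orders, and @CustomerId are hypothetical names.
DECLARE @TableName  sysname = N'Orders';
DECLARE @CustomerId int     = 42;
DECLARE @Sql        nvarchar(max);

-- Identifiers go through QUOTENAME; the filter value is passed as a parameter,
-- which keeps user-supplied values out of the statement text.
SET @Sql = N'INSERT INTO DevDB.dbo.' + QUOTENAME(@TableName) + N'
             SELECT * FROM ProdDB.dbo.' + QUOTENAME(@TableName) + N'
             WHERE CustomerId = @CustomerId;';

EXEC sys.sp_executesql @Sql, N'@CustomerId int', @CustomerId = @CustomerId;
```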
Have you found a way to use Dynamic SQL as a tool for good and not evil? Did you get any good take-aways from today's podcast or have some questions? Leave us a comment and some love ❤️ on LinkedIn, Twitter/X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 276: Dynamic SQL and Testing in Isolation. Have fun on the SQL Trail!
What kinds of problems are organizations solving with machine learning? In this episode, we explore a situation where a public works department wanted more accurate predictions of future water levels based on rainfall, so they could manage water tank storage to balance pressure and prevent overflow flooding. Marathon data solutions consultants Brian Knox and Andy Yao built a custom machine learning model and made the results available through Power BI reporting. We talk through some of the data hurdles the project presented, the tools they used, and how their work provided results the client could rely on. We also touch on the Azure ML environment and the future integrations coming between Power BI and machine learning.
Have you done any work in ML or predictive modeling? Did you get any good take-aways from today's podcast? Leave us some love ❤️ on LinkedIn, Twitter/X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 275: Machine Learning and Power BI. Have fun on the SQL Trail!
After discussing the Capability Maturity Model in our last episode, it was fate when Andy Levy reached out and suggested a topic that reads like a case study of his experience with CMM. As the only data professional in his organization at the time of his hiring, Andy went from fixing problems to slowly increasing his role in the organization and participating in planning meetings: being in the room where decisions are made and change happens. We think this episode will be an interesting perspective for those who might be on the fence about the model or are looking for ways to increase their own visibility in an organization.
Let us know what you think! What do you think of CMM? Did you get any good take-aways from today's podcast? Leave us some love ❤️ on LinkedIn, Twitter/X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 274: A CMM Case Study. Have fun on the SQL Trail!
Have you ever felt stuck in a rut, repeating the same tasks, while knowing there is room for improvement? The Capability Maturity Model may be a way for you to start contributing to those improvements. In this podcast episode, Kevin Kline from SolarWinds walks us through how we might go from simply dealing with issues as they come, to being a contributor in decisions about the future of our organization.
Listen in and learn about the levels of CMM, how they relate to those of us in data professions, and how you can apply the methodologies to become a leader who drives positive change, while doing what you love.
Let us know what you think! What CMM level are you in presently? Did you get any good take-aways from today's podcast? Leave us some love ❤️ on LinkedIn, Twitter/X, Facebook, or Instagram.
The show notes for today's episode can be found at Episode 273: The Capability Maturity Model for Data Professionals. Have fun on the SQL Trail!
Do you find yourself repeating the same actions when pulling SQL Server performance metrics?
Performance tuning a troublesome SQL Server can be a challenge. Luckily, the community continues to produce wonderful folks like Erik Darling who contribute their knowledge to make your life a bit easier. In this episode of the SQL Data Partners Podcast, we sit down with Erik and discuss the scripts he built to gather performance metrics. While these scripts won't capture every potential issue, they'll help you start gathering information so you can decide on the next step to take.
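Erik's scripts go much deeper, but for a sense of the kind of information this sort of tooling gathers, here is a small DMV query (not taken from his scripts) that lists the cached statements consuming the most CPU:

```sql
-- A small taste of DMV-based metric gathering; dedicated scripts collect far more context.
SELECT TOP (10)
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text,
       qs.execution_count,
       qs.total_worker_time  AS total_cpu_time_us,
       qs.total_elapsed_time AS total_elapsed_time_us,
       qs.total_logical_reads
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```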
Have you used Erik’s scripts before? Let us know!
The show notes for today's episode can be found at Episode 272: Performance Tuning Scripts. Have fun on the SQL Trail!