Guests:
None
Topics:
What have we seen at RSA 2024?
Which buzzwords are rising (AI! AI! AI!) and which ones are falling (hi XDR)?
Is this really all about AI? Is this all marketing?
Security platforms or focused tools, who is winning at RSA?
Anything fun going on with SecOps?
Is cloud security still largely about CSPM?
Any interesting presentations spotted?
Resources:
EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (RSA 2024 episode 1 of 2)
“From Assistant to Analyst: The Power of Gemini 1.5 Pro for Malware Analysis” blog
“Introducing Google Security Operations: Intel-driven, AI-powered SecOps” blog
“Advancing the art of AI-driven security with Google Cloud” blog
Guest:
Elie Bursztein, Google DeepMind Cybersecurity Research Lead, Google
Topics:
Given your experience, how afraid or nervous are you about the use of GenAI by the criminals (PoisonGPT, WormGPT and such)?
What can a top-tier state-sponsored threat actor do better with LLMs? Are there “extra scary” examples, real or hypothetical?
Do we really have to care about this “dangerous capabilities” stuff (CBRN)? Really really?
Why do you think that AI favors the defenders? Is this a long term or a short term view?
What about vulnerability discovery? Some people are freaking out that LLMs will discover new zero-days, is this a real risk?
Resources:
“How Large Language Models Are Reshaping the Cybersecurity Landscape” RSA 2024 presentation by Elie (May 6 at 9:40AM)
“Lessons Learned from Developing Secure AI Workflows” RSA 2024 presentation by Elie (May 8, 2:25PM)
EP50 The Epic Battle: Machine Learning vs Millions of Malicious Documents
EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC
EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It
Threat Actors are Interested in Generative AI, but Use Remains Limited
Guest:
Payal Chakravarty, Director of Product Management, Google SecOps, Google Cloud
Topics:
What are the different use cases for GenAI in security operations and how can organizations prioritize them for maximum impact to their organization?
We’ve heard a lot of worries from people that GenAI will replace junior team members. How do you see GenAI enabling more people to be part of the security mission?
What are the challenges and risks associated with using GenAI in security operations?
We’ve been down the road of automation for SOCs before (UEBA and SOAR both claimed it), and AI looks a lot like those, but with way more matrix math. What are we going to get right this time that we didn’t quite live up to the last time(s) around?
Imagine a SOC or a D&R team of 2029. What AI-based magic is routine at this time? What new things are done by AI? What do humans do?
Resources:
Live video (LinkedIn, YouTube) [live audio is not great in these]
Practical use cases for AI in security operations, Cloud Next 2024 session by Payal
EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It
EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps
Guests:
None
Topics:
What are some of the fun security-related launches from Next 2024 (sorry for our brief “marketing hat” moment!)?
Any fun security vendors we spotted “in the clouds”?
OK, what are our favorite sessions? Our own, right? Anything else we had time to go to?
What are the new security ideas inspired by the event? (You really want to listen to this part! Because “freatures”...)
Resources:
Live video (LinkedIn, YouTube) [live audio is not great in these]
Cloud CISO Perspectives: 20 major security announcements from Next ‘24
EP137 Next 2023 Special: Conference Recap - AI, Cloud, Security, Magical Hallway Conversations (last year!)
EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It?
EP90 Next Special - Google Cybersecurity Action Team: One Year Later!
A cybersecurity expert's guide to securing AI products with Google SAIF Next 2024 session
How AI can transform your approach to security Next 2024 session
Guests:
Umesh Shankar, Distinguished Engineer, Chief Technologist for Google Cloud Security
Scott Coull, Head of Data Science Research, Google Cloud Security
Topics:
What does it mean to “teach AI security”? How did we make SecLM? And also: why did we make SecLM?
What can “security trained LLM” do better vs regular LLM?
Does making it better at security make it worse at other things that we care about?
What can a security team do with it today? What are the “starter use cases” for SecLM?
Are we seeing the limits of LLMs for our use cases? Is the realization that “LLM is not magic” finally dawning?
Resources:
“How to tackle security tasks and workflows with generative AI” (Google Cloud Next 2024 session on SecLM)
EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It?
Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma?
Considerations for Evaluating Large Language Models for Cybersecurity Tasks
Conference on Applied Machine Learning in Information Security
Guest:
Maria Riaz, Cloud Counter-Abuse, Engineering Lead, Google Cloud
Topics:
What is “counter abuse”? Is this the same as security?
What does counter-abuse look like for GCP?
What are the popular abuse types we face?
Do people use stolen credit cards to get accounts that they then use to violate the terms of service?
How do we deal with this, generally?
Beyond core technical skills, what are some of the relevant competencies for working in this space that would appeal to a diverse audience?
You have worked in academia and industry. What similarities or differences have you observed?
Resources / reading:
Guests:
Evan Gilman, Co-founder and CEO, SPIRL
Eli Nesterov, Co-founder and CTO, SPIRL
Topics:
Today we have IAM, zero trust, and security made easy. With that intro, could you give us the 30-second version of what a workload identity is and why people need them?
What’s so spiffy about SPIFFE anyway?
What’s different between this and microsegmentation of your network? Why is one better or worse?
You call your book “solving the bottom turtle” could you tell us what that means?
What are the challenges you’re seeing large organizations run into when adopting this approach at scale?
Of all the things a CISO could prioritize, why should this one get added to the list? What makes this, which is so core to our internal security model, ripe for the outside world?
How do people do it now, and what gets thrown away when you deploy SPIFFE? Are there alternatives?
SPIFFE is interesting, yet can a startup really “solve for the bottom turtle”?
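The workload identity questions above hinge on what a SPIFFE ID actually looks like: a URI naming a trust domain and a workload path, not a host or an IP. A minimal parsing sketch (illustrative only; the real SPIFFE specification also restricts allowed characters and lengths, and the `prod.example.com` trust domain and path are made-up examples):

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path).

    Illustrative sketch: real validation per the SPIFFE spec also
    restricts characters and length.
    """
    parsed = urlparse(uri)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a SPIFFE ID: {uri!r}")
    if parsed.query or parsed.fragment:
        raise ValueError("SPIFFE IDs may not carry query or fragment parts")
    return parsed.netloc, parsed.path

# A workload identity names the workload itself, not a network location:
td, path = parse_spiffe_id("spiffe://prod.example.com/payments/api")
```

In a real deployment the workload never constructs this itself; it receives an SVID (a certificate carrying this ID) from the SPIFFE Workload API.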
Resources:
“Solving the Bottom Turtle” book [PDF, free]
“Surely You're Joking, Mr. Feynman!” book [also, one of Anton’s faves for years!]
Guest:
Ahmad Robinson, Cloud Security Architect, Google Cloud
Topics:
You’ve done a BlackHat webinar where you discuss a Pets vs Cattle mentality when it comes to cloud operations. Can you explain this mentality and how it applies to security?
What in your past led you to these insights? Tell us more about your background and your journey to Google. How did that background contribute to your team?
One term that often comes up on the show and with our customers is 'shifting left.' Could you explain what 'shifting left' means in the context of cloud security? What’s hard about shift left, and where do orgs get stuck too far right?
A lot of “cloud people” talk about IaC and PaC but the terms and the concepts are occasionally confusing to those new to cloud. Can you briefly explain Policy as Code and its security implications? Does PaC help or hurt security?
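For listeners new to the Policy as Code idea raised above, the core concept fits in a few lines: a policy is code that evaluates resource configurations before they deploy. A toy sketch (purely illustrative, not any real policy engine’s API; the resource dicts and field names are made up):

```python
# Policy-as-code in miniature (illustrative, not a real engine's API):
# a policy is a function from a resource config to a list of violations.

def no_public_buckets(resource: dict) -> list[str]:
    """Flag any storage bucket configured with public access."""
    if resource.get("type") == "storage_bucket" and resource.get("public_access"):
        return [f"{resource['name']}: public access is forbidden"]
    return []

def evaluate(resources: list[dict], policies) -> list[str]:
    """Run every policy against every resource, collecting violations."""
    return [v for r in resources for p in policies for v in p(r)]

resources = [
    {"type": "storage_bucket", "name": "logs", "public_access": False},
    {"type": "storage_bucket", "name": "assets", "public_access": True},
]
violations = evaluate(resources, [no_public_buckets])
```

Real systems (OPA/Rego, for example) use dedicated policy languages, but the security implication is the same: the rules are versioned, reviewed, and enforced like any other code.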
Resources:
“No Pets Allowed - Mastering The Basics Of Cloud Infrastructure” webinar
EP126 What is Policy as Code and How Can It Help You Secure Your Cloud Environment?
EP138 Terraform for Security Teams: How to Use IaC to Secure the Cloud
Guest:
Jennifer Fernick, Senior Staff Security Engineer and UTL, Google
Topics:
Since one of us (!) doesn't have a PhD in quantum mechanics, could you explain what a quantum computer is and how we know they are on a credible path toward being real threats to cryptography? How soon do we need to worry about this one?
We’ve heard that quantum computers are more of a threat to asymmetric/public key crypto than symmetric crypto. First off, why? And second, what does this difference mean for defenders?
Why (how) are we sure this is coming? Are we mitigating a threat that is perennially 10 years ahead and then vanishes due to some other broad technology change?
What is a post-quantum algorithm anyway? If we’re baking new key exchange crypto into our systems, how confident are we that we are going to be resistant to both quantum and traditional cryptanalysis?
Why does NIST think it's time to be doing the PQC thing now? Where is the rest of the industry on this evolution?
How can a person tell the difference here between reality and snake oil? I think Anton and I both responded to your initial email with a heavy dose of skepticism, probably more skepticism than it deserved, so you get the rare on-air apology from both of us!
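The asymmetric-vs-symmetric distinction in the questions above comes down to which quantum algorithm applies: Shor’s algorithm breaks today’s public-key schemes outright, while Grover’s search “only” halves effective symmetric key strength. A toy summary in code (the numbers are the commonly cited rough estimates, not a formal security analysis):

```python
# Rough, commonly cited quantum impact on today's crypto (illustrative only):
# Shor's algorithm breaks RSA/ECC outright given a large enough quantum
# computer; Grover's search gives a square-root speedup against symmetric keys.
CLASSICAL_BITS = {"RSA-2048": 112, "ECC-P256": 128, "AES-128": 128, "AES-256": 256}

def post_quantum_strength(algorithm: str) -> int:
    """Approximate effective security bits against a quantum adversary."""
    if algorithm.startswith(("RSA", "ECC")):
        return 0                          # Shor: broken
    return CLASSICAL_BITS[algorithm] // 2  # Grover: halved
```

This is why defenders can largely keep (or double) symmetric key sizes, while key exchange and signatures need entirely new post-quantum algorithms.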
Resources:
Securing tomorrow today: Why Google now protects its internal communications from quantum threats
“Quantum Computation & Quantum Information” by Nielsen & Chuang book
EP154 Mike Schiffman: from Blueboxing to LLMs via Network Security at Google
Guest:
Phil Venables, Vice President, Chief Information Security Officer (CISO) @ Google Cloud
Topics:
You had this epic 8 megatrends idea in 2021, where are we now with them?
We now have 9 of them, what made you add this particular one (AI)?
A lot of CISOs fear runaway AI. Hence good governance is key! What is your secret of success for AI governance?
What questions are CISOs asking you about AI? What questions about AI should they be asking that they are not asking?
Which one of the megatrends is the most contentious based on your presenting them worldwide?
Is cloud really making the world of IT simpler (megatrend #6)?
Do most enterprise cloud users appreciate the software-defined nature of cloud (megatrend #5) or do they continue to fight it?
Which megatrend is manifesting the most strongly in your experience?
Resources:
Megatrends drive cloud adoption—and improve security for all and infographic
“Keynote | The Latest Cloud Security Megatrend: AI for Security”
“Lessons from the future: Why shared fate shows us a better cloud roadmap” blog and shared fate page
“Spotlighting ‘shadow AI’: How to protect against risky AI practices” blog
EP47 Megatrends, Macro-changes, Microservices, Oh My! Changes in 2022 and Beyond in Cloud Security
Guest:
Kat Traxler, Security Researcher, TrustOnCloud
Topics:
What is your reaction to “in the cloud you are one IAM mistake away from a breach”? Do you like it or do you hate it?
A lot of people say “in the cloud, you must do IAM ‘right’”. What do you think that means? What is the first or the main idea that comes to your mind when you hear it?
How have you seen the CSPs take different approaches to IAM? What does it mean for the cloud users?
Why do people still screw up IAM in the cloud so badly after years of trying?
Deeper, why do people still screw up resource hierarchy and resource management?
Are the identity sins of cloud IAM users truly the sins of the creators? How did the "big 3" get it wrong and how does that continue to manifest today?
Your best cloud IAM advice is “assign roles at the lowest resource-level possible.” Can you explain this one? Where is the magic?
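The “assign roles at the lowest resource-level possible” advice from the topics above can be made concrete with a toy resource hierarchy where grants inherit downward: a project-level grant reaches every bucket in the project, while a bucket-level grant reaches only that bucket. A minimal sketch (hypothetical structure and names, not the real GCP API):

```python
# Toy resource hierarchy with downward IAM inheritance (not the real GCP API).
HIERARCHY = {                      # child -> parent
    "bucket:logs":  "project:prod",
    "bucket:pii":   "project:prod",
    "project:prod": "org:acme",
}

def effective_roles(resource: str, bindings: dict) -> set[str]:
    """Roles on a resource = roles granted there plus everything inherited."""
    roles = set(bindings.get(resource, []))
    parent = HIERARCHY.get(resource)
    return roles | (effective_roles(parent, bindings) if parent else set())

broad  = {"project:prod": ["storage.objectViewer"]}   # reaches ALL buckets
narrow = {"bucket:logs":  ["storage.objectViewer"]}   # reaches only one bucket
```

The “magic” is blast radius: the narrow binding keeps the sensitive `bucket:pii` out of reach, while the broad one silently grants access to everything beneath the project.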
Resources: