O'Reilly Security Podcast

O'Reilly Media

The O'Reilly Security Podcast examines the challenges and opportunities for security practitioners in an increasingly complex and fast-moving world. Through interviews and analysis, we highlight the people who are on the frontlines of security, working to build better defenses.

  • 50 minutes 39 seconds
    Rich Smith on redefining success for security teams and managing security culture

    The O’Reilly Security Podcast: The objectives of agile application security and the vital need for organizations to build functional security culture.

    In this episode of the Security Podcast, I talk with Rich Smith, director of labs at Duo Labs, the research arm of Duo Security. We discuss the goals of agile application security, how to reframe success for security teams, and the short- and long-term implications of your security culture.

    Here are some highlights:

    Less-disruptive security through agile integration

    Better security is certainly one expected outcome of adopting agile application security processes, and I would say less-disruptive security would be an outcome as well. If I put my agile hat on, or could stand in the shoes of an agile developer, I would say they would have a lot of areas where they feel security gets in the way and doesn't actually help them or make the product or the company more secure. Their perception is that security creates a lot of busy work, and I think this comes from that lack of understanding of agile from the security camp—and likewise of security from the agile camp.

    Along those lines, I would also say one of the key outcomes should be less security interference (where it's not necessary) in the agile process. The goal is to create more harmonious working relationships between these two groups. It would be a shame if the agile process were slowed down purely in the name of security and we weren't getting any tangible security benefits from it.

    Changing how security teams measure their success

    If you’re measuring the success of your security program by looking at what didn’t happen, the work your security team is doing may never really be apparent, and people may not understand how much effort went into preventing bad things from happening. And obviously, that's difficult to quantify as well, from a management perspective. This often has the unfortunate side effect that security teams measure their success by the bad things they stopped from happening. That may well be the case, but it's hard to measure, and it's actually quite a negative message. It can push security teams into the mindset that the way to stop bad things from happening is to make sure as few things change as possible.

    Security teams should measure themselves on what they enable, and what they enable to happen securely. That's a much more tangible and positive way of measuring the worth of that security team and how effective they are. Any old security team, whether it's good or bad, can say no to everything. Good security teams understand the business, understand what the development team is trying to get done. It's really more about what they can enable the business to do securely, and that's going to require some novel problem solving. That's going to mean that you're not just going to take solutions off the shelf and throw them at every problem.

    Evaluating your organization’s security culture

    Every company already has a security culture. It may not be the one they want, but they already have one.

    You need to build a security culture that works well for the larger organization and is in keeping with the larger organization's culture. I think we absolutely can take control of that security culture, and I'll go further and say that we have to. Otherwise, you're just going to end up in a situation where you have a culture that is not serving your organization well.

    There are a lot of questions you should be considering when evaluating your culture. What is your current security culture? How does the rest of the company think about security? How does the rest of the company view your security team? Do people go out of their way to include the security team in conversations and decision-making, or do they prefer to chance it, hope the security team doesn't notice, and try to squeak under the radar? That says a lot about your security culture. If people aren't actively engaging with the subject matter experts, well, something's wrong there.

    6 December 2017, 11:50 am
  • 27 minutes 20 seconds
    Christie Terrill on building a high-caliber security program in 90 days

    The O’Reilly Security Podcast: Aligning security objectives with business objectives, and how to approach evaluation and development of a security program.

    In this episode of the Security Podcast, I talk with Christie Terrill, partner at Bishop Fox. We discuss the importance of educating businesses on the complexities of “being secure,” how to approach building a strong security program, and aligning security goals with the larger processes and goals of the business.

    Here are some highlights:

    Educating businesses on the complexities of “being secure”

    This is a challenge that any CISO or director of security faces, whether they're new to an organization or building out an existing team. Building a security program is not just about the technology and the technical threats. It's how you're going to execute—finding the right people, having the right skill sets on the team, integrating efficiently with the other teams and the organization, and of course the technical aspects. There are a lot of things that have to come together, and one of the challenges about security is that companies like to look at security as its own little bubble. They’ll say, ‘we'll invest in security, we'll find people who are experts in security.’ But once you're in that bubble, you realize there's such a broad range of experience and expertise needed for so many different roles that it's not one-size-fits-all. You can't use the word ‘security’ so simplistically. So, it can be challenging to educate businesses on everything that's involved when they just say a sentence like, ‘We want to be secure or more secure.’

    Security can’t (and shouldn’t) interrupt the progress of other teams

    The biggest constraint for implementing a better security program for most companies is finding a way to have security co-exist with other teams and processes within the organization. Security can’t interrupt the mission of the company or stop the progress and projects other IT teams already have in progress. You can’t just halt everything because security teams are coming in with their own agendas. Realistically, you have to rely on other teams and be able to work with them so the security team can make progress either without them or alongside them.

    Being able to work collaboratively and to support other teams with your security goals is absolutely critical. Typically, teams have their own projects and agendas, and if you can explain how security will actually help those projects in the end, they'll want to participate in your work as well. Security has to be integrated; you have to rely on each other.

    How to approach security program strategy and planning

    The assessment of a security program usually starts with a common triad of people, process, and technology. On the people side, there’s reevaluating the organizational structure—how many people should there be? What titles should they have? What should the reporting structure be? What should security take on itself, and what responsibilities should we ask IT to take on or let them keep?

    Then, for processes, there can be a lot of pain points. When we develop processes, including the foundational security practices, we start with the ones that would solve immediate problems to show value and illustrate what a process can achieve. A process is not just a piece of paper or a checklist intended to make people's lives more difficult—a process should actually help people understand where something is at in the flow, and when something will get done. So, defining processes is really important to win over the business and the IT teams.

    Then finally, on the technology side, we try to emphasize that you should first evaluate the tools you already have. There may be nothing wrong with them. Look at how they're being used and whether they're being optimized. The costs involved (not just the upfront investment in security technology, but also the consulting costs and the churn cost of having to rip and replace it) can be very high and can derail some of your other progress. To start, you should make sure you’re using every tool to its fullest capacity and fullest advantage before going down the path of considering buying new products.

    22 November 2017, 1:15 pm
  • 17 minutes 33 seconds
    Susan Sons on building security from first principles

    The O’Reilly Security Podcast: Recruiting and building future open source maintainers, how speed and security aren’t mutually exclusive, and identifying and defining first principles for security.

    In this episode of the Security Podcast, O’Reilly’s Mac Slocum talks with Susan Sons, senior systems analyst for the Center for Applied Cybersecurity Research (CACR) at Indiana University. They discuss how she initially got involved with fixing the open source Network Time Protocol (NTP) project, recruiting and training new people to help maintain open source projects like NTP, and how security needn’t be an impediment to organizations moving quickly.

    Here are some highlights:

    Recruiting to save the internet

    The terrifying thing about infrastructure software in particular is that paying your internet service provider (ISP) bill covers all the cabling that runs to your home or business; the people who work at the ISP; and their routing equipment, power, billing systems, and marketing—but it doesn't cover the software that makes the internet work. That is maintained almost entirely by aging volunteers, and we're not seeing a new cadre of people stepping up and taking over their projects. What we're seeing is ones and twos of volunteers who are hanging on but burning out while trying to do this in addition to a full-time job, or are doing it instead of a full-time job and should be retired, or are retired. It's just not meeting the current needs.

    Early- and mid-career programmers and sysadmins say, 'I'm going to go work on this really cool user application. It feels safer.' They don't work on the core of the internet. Ensuring the future of the internet and infrastructure software is partly a matter of funding (in my O’Reilly Security talk on saving time, I talk about a few places you can donate to help with that, including ICEI and CACR), and partly a matter of recruiting people who are already out there in the programming world to get interested in systems programming and learn to work on this. I'm willing to teach. I have an Internet Relay Chat (IRC) channel set up on freenode called #newguard. Anyone can show up and get mentorship, but we desperately need more people.

    Building for both speed and security

    Security only slows you down when you have an insecure product, not enough developer resources, not enough testing infrastructure, not enough infrastructure to roll out patches quickly and safely. When your programming teams have the infrastructure and scaffolding around software they need to roll out patches easily and quickly—when security has been built into your software architecture instead of plastered on afterward, and the architecture itself is compartmented and fault tolerant and has minimization taken into account—security doesn't hinder you. But before you build, you have to take a breath and say, 'How am I going to build this in?' or 'I’m going to stop doing what I’m doing, and refactor what I should have built in from the beginning.' That takes a long view rather than short-term planning.

    Identifying and defining first principles for security

    I worked with colleagues at the Indiana University Center for Applied Cybersecurity Research (CACR) to develop the Information Security Practice Principles (ISPP). In essence, the ISPP project identifies and defines seven rules that create a mental model for securing any technology. Seven may sound like too few, but it dates back to rules of warfare and Sun Tzu and how to protect things and how to make things resilient. I do a lot of work from first principles. Part of my role is that I’m called in when we don't know what we have yet or when something's a disaster and we need to triage. Best practice lists come from somewhere, but why do we teach people just to check off best practice lists without questioning them? If we teach more people to work from first principles, we can have more mature discussions, we can actually get our C-suite or other leadership involved because we can talk in concepts that they understand. Additionally, we can make decisions about things that don't have best practice checklists.

    8 November 2017, 11:55 am
  • 27 minutes 26 seconds
    Charles Givre on the impetus for training all security teams in basic data science

    The O’Reilly Security Podcast: The growing role of data science in security, data literacy outside the technical realm, and practical applications of machine learning.

    In this episode of the Security Podcast, I talk with Charles Givre, senior lead data scientist at Orbital Insight. We discuss how data science skills are increasingly important for security professionals, the critical role of data scientists in making the results of their work accessible to even nontechnical stakeholders, and using machine learning as a dynamic filter for vast amounts of data.

    Here are some highlights:

    Data science skills are becoming requisite for security teams

    I expect to see two trends in the next few years. First, I think we’re going to see tools becoming much smarter. Not to suggest they're not smart now, but I think we're going to see the builders of security-related tools integrating more and more data science. We're already seeing a lot of tools claiming they use machine learning to do anomaly detection and similar tasks. We're going to see even more of that.

    Secondly, I think rudimentary data science skills are going to become a core competency for security professionals. Given that, I expect we are going to increasingly see security jobs requiring some understanding of core data science principles like machine learning, big data, and data visualization. Of course, I still think there will be a need for data scientists. Data scientists are going to continue to do important work in security, but I also think basic data science skills are going to proliferate throughout the overall security community.

    Data literacy for all

    I'm hopeful we're going to start seeing more growth in data literacy training for management and nontechnical staff, because it's going to be increasingly important. In the years to come, management and executive-level professionals will need to understand the basics—maybe not a technical understanding, but at least a conceptual understanding of what these techniques can accomplish.

    Along those lines, one of the core competencies of a data scientist is, or at least arguably should be, communication skills. I'd include data visualization in that skillset. You can use the most advanced modeling techniques and produce the most amazing results, but if you can't communicate that in an effective manner to a stakeholder, then your work is not likely to be accepted, adopted, or trusted. As such, making results accessible is really a vital component of a data scientist’s work.

    Machine learning as a dynamic filter for security data

    Machine learning and deep learning have definitely become the buzzwords du jour of the security world, but they genuinely bring a lot of value to the table. In my opinion, the biggest value machine learning brings is the ability to learn and identify new patterns and behaviors that represent threats. When I teach machine learning classes, one of the examples I use is domain generation algorithm (DGA) detection. You can do this with a whitelist or a blacklist, but neither is going to be the most effective approach. There's been a lot of success in using machine learning to identify these domains, allowing you to then mitigate the threat. A colleague of mine, Austin Taylor, gave a presentation and wrote a blog post about this as well, about how machine learning fits in the overall scheme. He views data science in security as being most useful in building a very dynamic filter for your data.

    If you imagine an inverted triangle, you begin by examining tons and tons of data, but you can use machine learning to filter out the vast majority of it. From there, a human might still have to look at the remaining portion. By applying several layers of machine learning to that initial ingested data, you can efficiently filter out the stuff that's not of interest.
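
    To make the inverted-triangle idea concrete, here is a minimal Python sketch of that kind of dynamic filter, using a character n-gram classifier (via scikit-learn) to score domain names for DGA-like randomness so an analyst only reviews what survives the filter. The tiny domain lists, the 0.5 threshold, and the filter_domains helper are illustrative placeholders, not the actual pipeline discussed in the episode:

        # A toy "dynamic filter": train a simple character n-gram classifier on
        # labeled domain names, then keep only the domains it scores as suspicious.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Illustrative training data: known-good domains vs. DGA-looking domains.
        benign = ["google.com", "oreilly.com", "github.com", "wikipedia.org"]
        dga = ["xjw9qk3vz.net", "qpwoeirutyz.biz", "zxcmvnqlwp.info", "aj3k9zq1xv.org"]
        labels = [0] * len(benign) + [1] * len(dga)

        # Character n-grams capture the "randomness" typical of generated names.
        model = make_pipeline(
            CountVectorizer(analyzer="char", ngram_range=(2, 4)),
            LogisticRegression(max_iter=1000),
        )
        model.fit(benign + dga, labels)

        def filter_domains(observed, threshold=0.5):
            """Return only the domains the model scores as likely DGA output."""
            scores = model.predict_proba(observed)[:, 1]
            return [d for d, s in zip(observed, scores) if s >= threshold]

        # A human analyst reviews only the small residue that survives the filter.
        print(filter_domains(["mail.example.com", "k9v2x7qjz.net"]))

    In practice, several such layers (reputation, frequency analysis, supervised models) would be stacked so that each one shrinks the data the next layer, and ultimately a person, has to examine.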

    25 October 2017, 1:30 pm
  • 26 minutes 22 seconds
    Andrea Limbago on the effects of security’s branding problem

    The O’Reilly Security Podcast: The multidisciplinary nature of defense, making security accessible, and how the current perception of security professionals hinders innovation and hiring.

    In this episode of the Security Podcast, I talk with Andrea Limbago, chief social scientist at Endgame. We discuss how the misperception of security as a computer science skillset ultimately restricts innovation, the need to make security easier and accessible for everyone, and how current branding of security can discourage newcomers.

    Here are some highlights:

    The multidisciplinary nature of defense

    The general perception is that security is a skillset in the computer science domain. As I've been in the industry for several years, I've noticed more and more the need for different disciplines, outside of computer science, within security. For example, we need data scientists to help handle the vast amount of security data and guide the daily collection and analysis of data. Another example is the need to craft accessible user interfaces for security. So many of the existing security tools or best practices just aren't user friendly. Of course, you also need computer science expertise, from the more traditional hackers to defenders. All that insight can come together to help inform a more resilient defense. Beyond that, there’s the consideration of the impact of economics and psychology. This is especially relevant when you think about insider threats. It's really something I wish more people would think about from a broader perspective, and I think that would actually help attract a lot more people into the industry as well, which we desperately need right now.

    Making security accessible and easier for all

    We need to do a better job of informing the general public about security. Those of us in the security field see information on how to secure our accounts and devices all the time, but I consistently come across people outside of our industry who still don't understand things like two-factor authentication, or why that would be helpful for them. These are very smart people. Part of the challenge is we, as an industry, haven't done a phenomenal job branching out and talking in more common language about the various aspects and steps people can take.

    People know they need to be secure, but they really don't know what the key steps are. This month for National Cybersecurity Awareness Month, there are going to be hundreds of ‘Here are 10 things you need to do to be secure’-style articles, but these messages are not always making their way to the actual target audience. It needs to become more of a mainstream concern, and it needs to be made easier for people to secure their accounts and devices. We talk a lot about the convenience versus security trade-off, and for a lot of people, convenience is still what matters most. It's really hard to switch the incentive structure for people to help them understand that taking all these steps toward better security truly is worth the investment of their time. For us, as an industry, if we make it as easy as possible, I think that will help.

    Security has a branding problem

    We need to do a better job of making security appealing to a broader audience. When I talk to students and ask them what they think about security and cyber security and hacking, they immediately think of a guy in a dark hoodie. And that alone is limiting people from getting excited about entering the workforce. Obviously, the discipline and the industry is much broader than that. We, as an industry, need to rework our marketing campaigns to show other kinds of stock photos. If we can do that, we can start getting more and more diverse people interested and coming into the industry. By attracting the interest of a broader range of students and having them bring their diverse skillsets in from other disciplines, we can strengthen our defenses and increase innovation. If we change the branding of security and the perception of what it means to be a security professional, we can help fill the pipeline, which is one of our most crucial missions as an industry at this time.

    12 October 2017, 2:24 pm
  • 16 minutes 46 seconds
    Window Snyder on the indispensable human element in securing your environment

    The O’Reilly Security Podcast: Why tools aren’t always the answer to security problems and the oft-overlooked impact of user frustration and fatigue.

    In this episode of the Security Podcast, I talk with Window Snyder, chief security officer at Fastly. We discuss the fact that many core security best practices aren’t easy to achieve with tools, the importance of not discounting user fatigue and frustration, and the need to personalize security tools and processes to your individual environment.

    Here are some highlights:

    Many security tasks require a hands-on approach

    There are a lot of things that we, as an industry, have known how to do for a very long time but that are still expensive and difficult to achieve. This includes things like staying up-to-date with patching or moving to more sophisticated authorization models. These types of tasks generally require significant work, and they might also impose an expensive workflow obstacle on users. Another proven and measurable way to improve security is to review deployments and identify features or systems that are no longer serving their original purpose but are still enabled. If they're still enabled but no longer serving a purpose, they may leave you unnecessarily open to vulnerabilities. In these cases, a plan to reduce attack surface by eliminating these features or systems is work that humans generally must do, and it actually does increase the security of your environments in a measurable way because now your attack surface is smaller. These aren’t the sorts of activities that you can throw a tool in front of and feel like you've checked a box.

    Frustration and fatigue are often overlooked considerations

    Realistically, it's challenging for most organizations to achieve all the things we know we need to do as an industry. Getting the patch window down to a smaller and smaller size is critical for most organizations, but you have to consider this within the context of your organization and its goals. For example, if you’re patching a sensitive system, you may have to balance the need to reduce the patch window with the stability of the production environment. Or if a patch requires you to update users’ workstations, the frustration of having to update their systems and having their machines rebooted might derail productivity. It's an organizational leap to say that it's more important to address potential security problems when you are dealing with the very real obstacle of user frustration or security exhaustion. This is complicated by the fact that there's an infinite parade of things we need to be concerned about.

    More is not commensurate with better

    It’s reasonable to try to scale security engineering by finding tools you can leverage to help address more of the work that your organization needs. For example, an application security engineer might leverage a source analysis tool. Source analysis tools help scale the number of applications that you can assess in the same amount of time, and that’s reasonable because we all want to make better use of everyone's time. But without someone tuning the source analysis tool to your specific environment, you might end up with a tool that finds a lot of issues, raises a lot of flags, and then overwhelms the engineering team with the sheer amount of data. They might look at the results and realize the tool doesn't understand the mitigations that are already in place, or the reasons these issues aren't going to be a problem, and end up disregarding whatever the tool identifies. Once that fatigue sets in, the tool may well be identifying real problems, but the value it contributes ends up being lost.

    28 September 2017, 3:14 pm
  • 36 minutes 11 seconds
    Chris Wysopal on a shared responsibility model for developers and defenders

    The O’Reilly Security Podcast: Shifting secure code responsibility to developers, building secure software quickly, and the importance of changing processes.

    In this episode of the Security Podcast, I talk with Chris Wysopal, co-founder and CTO of Veracode. We discuss the increasing role of developers in building secure software, maintaining development speed while injecting security testing, and helping developers identify when they need to contact the security team for help.

    Here are some highlights:

    The challenges of securing enduring vs. new software

    One of the big challenges in securing software is that it’s most often built, maintained, and upgraded over many years. Think of online banking software for a financial services company. They probably started building that 15 years ago, and it's probably gone through two or three major changes, but the tooling and the language and the libraries, and all the things that they're using are all built from the original code. Fitting security into that style of software development presents challenges because those teams aren't used to the newer tool sets and the newer ways of doing things. It's actually sometimes easier to integrate security into newer software. Even though those teams are moving faster, it's easier to integrate into some of the newer development toolchains.

    Changing processes to enable small batch testing and fixing

    There are parallels between where we are with security now and where performance was at the beginning of the Agile movement. With Agile, the thought was, ‘We're going to go fast, but one of the ways we're going to maintain quality is by requiring unit tests written by every developer for every piece of functionality, and these automated unit tests will run on every build and every code change.’ By changing the way you do things, from manual, backend-weighted, full-system tests to smaller-batch, incremental tests of pieces of functionality, you're able to speed up the development process without sacrificing quality. That's a change in process. To have a high-performing application, you didn't necessarily need to spend more time building it. You needed better intelligence—application performance monitoring (APM) technology put into production to understand performance issues better and more quickly allowed teams to still go fast and not have performance bottlenecks.

    With security, we're going to see the same thing. There can be some additional technology put into play, but the other key factor is changing your process. We call this ‘shifting left,’ which means: find the security defect as quickly as possible or as early as possible in the development lifecycle so that it's cheaper and quicker to fix. For example, if a developer writes a cross-site scripting error as they're coding in JavaScript, and they're able to detect that within minutes of creating that flaw, it will likely only require minutes or seconds to fix. Whereas if that flaw is discovered two weeks later by a manual tester, it's then going to be entered into a defect tracking system. It's going to be triaged. It's going to be put into someone's bug queue. With the delay in identification, it will have to be researched in its original context, which will slow down development. Now, you're potentially talking hours of time to fix the same flaw, maybe 10 or 100 times more time. Shifting left is a way of thinking about, ‘How do I do small batch testing and fixing?’ That's a process change that enables you to keep going fast and be secure.
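
    As a concrete illustration of small-batch testing, here is a minimal Python sketch of the kind of automated check that can run on every build and catch a cross-site scripting flaw minutes after it is written. The render_comment function and its test are hypothetical examples, not code from any project discussed in the episode:

        import html

        def render_comment(user_text: str) -> str:
            """Render user-supplied text into an HTML fragment, escaping it first."""
            return f"<p class='comment'>{html.escape(user_text)}</p>"

        def test_comment_output_is_escaped():
            # The kind of payload a manual tester might only try weeks later.
            payload = "<script>alert('xss')</script>"
            rendered = render_comment(payload)
            assert "<script>" not in rendered
            assert "&lt;script&gt;" in rendered

        if __name__ == "__main__":
            test_comment_output_is_escaped()
            print("escaping check passed")

    Run as part of the build, a check like this turns a two-week feedback loop into one measured in seconds.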

    Helping developers identify when they need to call for security help

    We need to teach developers about application security to enable them to identify when there’s a problem and when they don't know enough to solve it themselves. One of the problems with application security is that developers often don't know enough to recognize when they need to call in an expert. For example, when an architect is building a structure and knows there’s a problem with the engineering of a component, the architect knows to call in a structural engineer to augment their expertise. We need to have the same dynamic with software developers. They're experts in their field, and they need to know a lot about security. They also need to know when they require help with threat modeling or a manual code review of a really critical piece of code, like an account recovery mechanism. We need to shift more security expertise into the development organization, but part of that is also helping developers know when to call out to the security team. That's also a way we can help address the challenge of hiring security experts, because they're hard to find.

    13 September 2017, 5:00 pm
  • 27 minutes 56 seconds
    Scott Roberts on intelligence-driven incident response

    The O’Reilly Security Podcast: The open-ended nature of incident response, and how threat intelligence and incident response are two pieces of one process.

    In this episode of the Security Podcast, I talk with Scott Roberts, security operations manager at GitHub. We discuss threat intelligence, incident response, and how they interrelate.

    Here are some highlights:

    Threat intelligence should affect how you identify and respond to incidents

    Threat intelligence doesn't exist on its own. It really can't. If you're collecting threat intelligence without acting upon it, it serves no purpose. Threat intelligence makes sense when you integrate it with the traditional incident response capability. Intelligence should affect how you identify and respond to incidents. The idea is that these aren't really two separate things; they're simply two pieces of one process. If you're doing incident response without using threat intelligence, then you’ll keep getting hit with the same attack time after time. Now, by the same token, if you have threat intelligence without incident response, you're just shouting into the void. No one is taking the information and making it actionable.

    The open-ended nature of incident response

    It’s key to think about incidents as ongoing. There are very few times when an attacker will launch an attack once, be rebuffed, and simply go away. In almost all cases, there's a continuous process. I've worked in organizations where we would do the work to identify an incident and promptly forget about it. Then three weeks later, we would suddenly stumble across the exact same thing. Ultimately, intelligence-driven incident response happens in those intervening three weeks. What are you doing in that time between incidents from the same actor, with the same target? And how are you using what you've learned to prepare for the next time? Regardless of the size of your organization, you can implement processes to better your defenses after each incident. It can be as simple as keeping good notes, thinking about root causes, and considering what could better protect your organization from the same or similar attackers in the future. Basically, instead of marking an incident closed as soon as you’ve dealt with the immediate threat, think beyond the current incident and try to understand what the attack is going to look like the next time. Even if you can't identify the next iteration, you don't want to get hit by the same thing again. As your team expands and matures, there are opportunities for more specialized types of analysis and processes, but intelligence-driven incident response is something you can adopt regardless of your size or maturity.

    Why more threat intelligence data is not always better

    When a team gets started with threat intelligence, their first impulse is to try collecting the biggest data set imaginable with the idea that there's going to be a magic way to pick out the needle in the haystack. While I understand why that may seem like a logical place to start, that's often a very abstract and time-intensive approach. When I look at intelligence programs, I first want to know what the team is doing with their own investigation data. The mass appeal of gathering a ton of information is all about trying to figure out which IP is most important to me or which piece of information I need to find. Often, I find that information is already available in a team's incident response database or their incident management platform. I think the first place you should always look is internally. If you want to know what threats are going to be important to an organization, look at the ones you've already experienced. Once you’ve got all those figured out, then go look at what else is out there. The first place to be effective and truly know that you're doing relevant work for your organization's defense in the future is to look at your past.

    30 August 2017, 11:00 am
  • 42 minutes 56 seconds
    Jack Daniel on building community and historical context in InfoSec

    The O'Reilly Security Podcast: The role of community, the proliferation of BSides and other InfoSec community events, and celebrating our heroes and heroines.

    In this episode of the Security Podcast, I talk with Jack Daniel, co-founder of Security BSides. We discuss how each of us (and the industry as a whole) benefits from community building, the importance of historical context, and the inimitable Becky Bace.

    Here are some highlights:

    The indispensable role and benefit of community building

    As I grew in my career, I learned things that I shared. I felt that if you're going to teach me, then as soon as I know something new, I'll teach you. I began to realize that the more I share with people, the more they're willing to share with me. This exchange of information built trust and confidence. When you build that trust, people are more likely to share information beyond what they may feel comfortable saying in a public forum and that may help you solve problems in your own environment. I realized these opportunities to connect and share information were tremendously beneficial not only to me, but to everyone participating. They build professional and personal relationships, which I've become addicted to. It’s a fantastic resource to be part of a community, and the more effort you put into it, the more you get back. Security is such an amazing community. We’re facing incredible challenges. We need to share ideas if we're going to pull it off.

    Extolling InfoSec history with the Shoulders of InfoSec

    I realized a few years ago that despite the fact I was friends with a lot of trailblazers in the security space, I didn't have much perspective on the history of InfoSec or hacking. I recognized that I have friends like Gene Spafford and the late Becky Bace who have seen or participated in the foundation of our industry and know many of the stories of our community. I decided to do a presentation a few years ago at DerbyCon that introduced the early contributors and pioneers who made our industry what it is today and built the early foundation for our practices. I quickly realized that cataloging this history wasn't a single presentation, but a larger undertaking. This is why I created the Shoulders of InfoSec program, which shines a light on the contributions of those whose shoulders we stand on.

    The idea is to make it easy to find a quick history of information security and, to a lesser extent, the hacker culture. As Newton himself paraphrased from earlier scholars, if he has seen farther, it is by standing on the shoulders of giants, and we all stand on the shoulders of giants.

    The inimitable Becky Bace

    Becky was known as the den mother of IDS, for her work fostering and supporting intrusion detection and network behavior analysis. But even beyond her amazing technical expertise and contributions, Becky gave the best hugs in the world. She was just an amazingly warm, friendly, and welcoming person. One of the things that always struck me about Becky is the number of people she mentored through the years, and the number of people whose careers got a start or a boost because of Becky. She was just pure awesome. She would go out of her way to help people, and the more they needed help, the more likely she would be to find them and help them.

    She came from southern Alabama, and when she came north to the D.C. area, her dad said, ‘You can go up north and get a job and marry a Yankee, but when you're done doing that, I want you to come home because, remember, we need help down here.’ For those who don't know, when she left her consulting practice, she went to the University of South Alabama—not even University of Alabama, but the University of South Alabama—and set up a cyber security program. She was bringing cyber security education to people who otherwise wouldn't get it and she built a fantastic program. She did it because she promised her dad she would.

    17 August 2017, 11:55 am
  • 28 minutes 35 seconds
    Jay Jacobs on data analytics and security

    The O’Reilly Security Podcast: The prevalence of convenient data, first steps toward a security data analytics program, and effective data visualization.

    In this episode of the Security Podcast, Courtney Nash, former chair of the O’Reilly Security conference, talks with Jay Jacobs, senior data scientist at BitSight. They discuss the constraints of convenient data, the simple first steps toward building a basic security data analytics program, and effective data visualizations.

    Here are some highlights:

    The limitations of convenient data

    In security, we often see the use of convenient data—essentially, the data we can get our hands on. You see that sometimes in medicine, where people studying a specific disease will grab the patients with that disease in the hospital they work in. There are some benefits to doing that. Obviously, the data collection is easy because you get the data that’s readily available. At the same time, there are limitations. The data may not be representative of the larger population.

    Using multiple studies combats the limitations of convenient data. For example, when I was working on the Verizon Data Breach Investigations Report, we tried to tackle that by diversifying the sources of data. Each individual contributor had their own convenient sample. They're getting the data they can access. Each contributing organization had their own biases and limitations, problems, and areas of focus. There are biases and inherent problems with each data set, but when you combine them, that's when you start to see the strength because now all of these biases start to level out and even off a little bit. There are still problems, including representativeness, but this is one of the ways to combat it.

    The simple first steps to building a data analysis program

    The first step is to just count and collect everything. As I work with organizations on their data, I see a challenge where people will try to collect only the right things, or the things that they think are going to be helpful. When they only collect things they originally think will be handy, they often miss some things that are ultimately really helpful to analysis. Just start out counting and collecting everything, even things you don't think are countable or collectible. At one point, a lot of people didn't think that you could put a breach, which is a series of events, into a format that could be conducive to analysis. I think we’ve got some areas we could focus on, like pen testing and red team activity. I think these are areas just right for a good data collection effort. If you're collecting all this data, you can do some simple counting and comparison: ‘This month I saw X number and this month I saw Y.’ As you compare, you can see whether there’s change, and then discuss that change. Is it significant, and do we care? The other thing: a lot of people capture metrics and don’t actually ask whether they care if the number goes up or down. That's a problem.
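
    As a starting point for that kind of counting, here is a minimal Python sketch that groups raw event records by month and category and reports the month-over-month change. The sample records and field names are illustrative placeholders:

        from collections import Counter

        events = [
            {"date": "2017-06-03", "category": "phishing"},
            {"date": "2017-06-17", "category": "malware"},
            {"date": "2017-07-02", "category": "phishing"},
            {"date": "2017-07-09", "category": "phishing"},
            {"date": "2017-07-21", "category": "malware"},
        ]

        # Count events per (month, category) pair.
        counts = Counter((e["date"][:7], e["category"]) for e in events)

        def month_over_month(counts, prev, curr):
            """Report the change in counts between two months, per category."""
            categories = {cat for (month, cat) in counts if month in (prev, curr)}
            for cat in sorted(categories):
                before, after = counts[(prev, cat)], counts[(curr, cat)]
                print(f"{cat}: {before} -> {after} ({after - before:+d})")

        month_over_month(counts, "2017-06", "2017-07")

    Whether a change like 'phishing: 1 -> 2' matters is exactly the 'do we care if it goes up or down?' question the counting is meant to prompt.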

    Considerations for effective data visualization

    Data visualization is a very popular field right now. It's not just concerned with why pie charts might be bad—there's a lot more nuance and detail. One important factor to consider in data visualization, just like communicating in any other medium, is your audience. You have to be able to understand your audience, their motivations, and experience levels.

    There are three things you should evaluate when building a data visualization. First, you start with your original research question. Then you figure out how the data you collected answers that question. Then, once you start to develop the visualization, you ask yourself whether it matches what the data says and whether it answers the original question being asked. Thinking through those three parts of the equation, and making sure they all line up and explain each other, helps people communicate better.

    2 August 2017, 11:05 am
  • 32 minutes 6 seconds
    Katie Moussouris on how organizations should and shouldn’t respond to reported vulnerabilities

    The O’Reilly Security Podcast: Why legal responses to bug reports are an unhealthy reflex, thinking through first steps for a vulnerability disclosure policy, and the value of learning by doing.

    In this episode, O’Reilly’s Courtney Nash talks with Katie Moussouris, founder and CEO of Luta Security. They discuss why many organizations have a knee-jerk legal response to a bug report (and why your organization shouldn’t), the first steps organizations should take in formulating a vulnerability disclosure program, and how learning through experience and sharing knowledge benefits all.

    Here are some highlights:

    Why legal responses to bug reports are a faulty reflex

    For many organizations, the first reaction to a researcher reporting a bug is to immediately respond with legal action. These organizations aren’t considering that their lawyers typically don't keep their users safe from internet crime or harm. Engineers fix bugs and make a difference in terms of security. Having your lawyer respond doesn't keep users safe and doesn't get the bug fixed. It might do something to temporarily protect your brand, but that's only effective as long as the bug in question remains unknown to the media. Ultimately, when you try to kill the messenger with a bunch of lawsuits, it looks much worse than taking the steps to investigate and fix a security issue. Ideally, organizations recognize that fact quickly.

    It’s also worth noting that the law tends to be on the side of the organization, not the researcher reporting a vulnerability. In the United States, the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act have typically been used to harass or silence security researchers who are trying to report something in the spirit of “if you see something, say something.” Researchers take risks when identifying bugs, because there are laws on the books that can be easily misused and abused to try to kill the messenger. There are similar laws in other countries that would likewise discourage well-meaning researchers from coming forward. It’s important to keep perspective and remember that, in most cases, you’re talking to helpful hackers who have stuck their neck out and potentially risked their own freedom to try to warn you about a security issue. Once organizations realize that, they're often more willing to cautiously trust researchers.

    First steps toward a basic vulnerability disclosure policy

    In 2015, market studies showed (and the numbers haven't changed significantly since then) that 94% of the Forbes Global 2000, companies with arguably some of the most prepared and proactive security programs, had no published way for researchers to report a security vulnerability. That’s indicative of the fact that these organizations probably have no plan for how they would respond if somebody did reach out and report a vulnerability. They might call in their lawyers. They might just hope the person goes away.

    At the very basic level, organizations should provide a clear way for someone to report issues. Additionally, organizations should clearly define the scope of issues they’re most interested in hearing about. Defining scope also includes providing the bounds for things that you prefer hackers not do. I've seen a lot of vulnerability disclosure policies published on websites that say, please don't attempt to do a denial of service against our website, or against our service or products, because with sufficient resources, we know attackers would be able to do that. They clearly request people don’t test that capability, as it would provide no value.

    Learning by doing and the value of sharing experiences

    At CyberUK, the U.K. National Cyber Security Centre’s (NCSC) industry conference, there was an announcement about NCSC’s plans to launch a vulnerability coordination pilot program. They had previously worked on vulnerability coordination through the U.K. Computer Emergency Response Team (CERT U.K.), which merged into the NCSC. However, they hadn’t standardized the process. They chose to learn by doing and launch pilot programs. They invited a focused group of security researchers, whom they knew and had worked with in the past, to participate, and they outlined their intention to publicly share what they learned.

    This approach offers benefits, as it's not only focused on specific bugs, but more so on the process, on the ways they can improve that process and share knowledge with their constituents globally. Of course, bugs will be uncovered and strengthening security of targeted websites obviously represents one of the goals of the program, but the emphasis on process and learning through experience really differentiates their approach and is particularly exciting.

    19 July 2017, 1:45 pm