O'Reilly Radar Podcast - O'Reilly Media Podcast

O'Reilly Radar tracks the technologies and people that will shape our world in the years to come. Each episode of O'Reilly Radar features an interview with an industry thought leader. We also take a step back from the breathless pace of the latest tech news to examine why new developments are important and what they might mean down the road.

  • 18 minutes 36 seconds
    Aman Naimat on building a knowledge graph of the entire business world

    The O'Reilly Radar Podcast: The maturity of AI in enterprise, bridging the AI gaps, and what the U.S. can do with $4 trillion.

    This week, I sit down with Aman Naimat, senior vice president of technology at Demandbase, and co-founder and CTO of Spiderbook. We talk about his project to build a knowledge graph of the entire business world using natural language processing and deep learning. We also talk about the role AI is playing in those companies today and what’s going to drive AI adoption in the future.

    Here are a few highlights:

    Surveying AI adoption

    We were studying businesses for the purpose of helping salespeople talk to accounts, and we realized we could use our technology to study entire markets. So, we decided to study how entire markets are adopting AI or big data. Really, the way it works is, we built a knowledge graph of how businesses interact with each other and their behavioral signals: who's doing business with whom, who are their partners, customers, suppliers? Who are the influencers, the decision-makers? Who's buying what product?

    In essence, we have built a universal database, if I may, or a knowledge graph, of the entire business world. We use natural language processing and deep learning—the short answer for what data sets we look at is everything. We are now reading the entire business internet, completely unstructured data, from SEC filings to financial regulatory filings to tweets to every blog post, every job post, every conference visit, every PowerPoint, every video. So, it's really pretty comprehensive. We also have a lot of proprietary data around the business world, as to who's reading or viewing what ad, and we triangulate all of that in this graph and do machine learning on top to classify each of the 500,000 companies by how mature they are in AI: how many people do they have working on it, what are they doing with it, what are the use cases, how much money are they spending? That's how we built the study.
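
    To make that last step concrete, here is a deliberately tiny sketch of the roll-up Naimat describes: once signals have been extracted from unstructured text (job posts, filings, conference talks), aggregate them per company and bucket each company into a rough AI-maturity tier. The signal names, weights, and thresholds below are invented for illustration; the actual system builds a full knowledge graph and applies NLP and deep learning rather than hand-tuned rules.

        # Toy sketch of the classification step; all names and thresholds are
        # illustrative assumptions, not Demandbase's actual features or model.
        signals = {
            "acme_corp": {"ml_job_posts": 42, "ai_conference_talks": 5, "ai_vendor_ties": 3},
            "globex":    {"ml_job_posts": 2,  "ai_conference_talks": 1, "ai_vendor_ties": 0},
            "initech":   {"ml_job_posts": 0,  "ai_conference_talks": 0, "ai_vendor_ties": 0},
        }

        def maturity(s):
            score = s["ml_job_posts"] + 5 * s["ai_conference_talks"] + 10 * s["ai_vendor_ties"]
            if score >= 50:
                return "mature: AI in production"
            if score >= 5:
                return "experimenting"
            return "no visible AI activity"

        for company, s in signals.items():
            print(company, "->", maturity(s))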

    Bridging the AI gap between academia and enterprise

    What will drive adoption in AI, I think, is also investment. The current landscape, according to our study, which was the first data-driven study of the market, was that only a few companies are really investing in it. There's some interest in other places, but companies like Google—the CEO recently came out and said that AI is really how the company will be framed going forward. So, we need more investments, more venture capital investments, more government investments, and that's not just in starting startups, but in putting together data sets that data scientists can consume. Public data sets are a huge gap in the market between what is available in academia and what companies like us at Demandbase have—we have a ton of data, proprietary data. So, to be able to have such data available in open source...that could spark new types of use cases.

    Can we build an AI-based representative democracy?

    Another use case: the largest set of spend in the world is actually the United States government—$4 trillion; it's a huge market. So, how do you allocate those resources? Is it possible that we can build systems that, in essence, become some sort of an AI-based representative democracy where we can optimize the preferences of individual citizens?

    Today, most citizens are completely unaware of what's happening at their local government level or state level. If I ask you who's your state senator, you probably don't know. Nobody actually does, yet the state level pretty much has the biggest impact on our lives. They control education, roads, environment, and they have some of the largest budgets—health care. There are suddenly areas where we can try to understand individual preferences automatically, and there's a lot of data—for each bill that is passed, there are thousands and thousands of pages of feedback, text, that AI can process and understand. So, obviously some of this is really far out, but that doesn't mean we can't do something today.

    23 March 2017, 12:25 pm
  • 23 minutes 2 seconds
    AI adoption at the atomic level of jobs and work

    O'Reilly Radar Podcast: David Beyer on AI adoption challenges, the complexities of getting an AI ROI, and the dangers of hype.

    This week, I sit down with David Beyer, an investor with Amplify Partners. We talk about machine learning and artificial intelligence, the challenges he’s seeing in AI adoption, and what he thinks is missing from the AI conversation.

    Here are a few highlights:

    Complexities of AI adoption

    AI adoption is actually a multifaceted question. It's something that touches on policy at the government level. It touches on labor markets and questions around equity and fairness. It touches on broad commercial questions around industries and how they evolve over time. There are many, many ways to address this. I think a good way to think about AI adoption at the broader, more abstract level of sectors or categories is to actually zoom down a bit and look at what it is actually replacing.

    The way to do that is to think at the atomic level of jobs and work. What is work? People have been talking about questions of productivity and efficiency for quite some time, but a good way to think of it from the lens of the computer or machine learning is to divide work into four categories. It's a two-by-two matrix: cognitive versus manual work, and routine versus non-routine work. The 90s internet and computer revolution, for the most part, tackled the routine work—spreadsheets and word processing, things that could be specified by an explicit set of instructions.

    The more interesting stuff that's happening now, and that should be happening over the next decade, is how does software start to impact non-routine, both cognitive and manual, work? Cognitive work is tricky. It can be divided into two categories: things that are analytical (so, math and science and the like) and things that are more interpersonal and social—sales, being a good example.

    Then with non-routine work, the first instinct is to think about whether the job seems simple to us as people—so, cleaning a room for us, at first blush, seems like something pretty much anyone who's able could do; it's actually incredibly difficult. There's this bizarre, unexpected result that the hard problems are easier to automate, things like logic. The easier problems are incredibly hard to automate—things that require visuospatial orientation, navigating complex and potentially changing terrain. Things that we have basically been programmed over millennia in our brains to accomplish are actually very difficult to do from the perspective of coding a set of instructions into a computer.

    AI ROI

    The question I have in my mind is: in the 90s and 2000s, was simply applying computers to business and communication its own revolution? Do machine learning and AI constitute a new category, or is machine learning the final complement to extract the productivity out of that initial Silicon revolution, so to speak? There's this economic historian Paul David, out of Oxford, who wrote an interesting piece looking at American factories and how they adapted to electrification because, previously, a lot of them were steam powered. The initial adoption showed a real lack of imagination: they used motors where steam used to be and hadn't really redesigned anything. They didn't really get much of any productivity gain.

    It was only when that crop of old managers was replaced with new managers that people fully redesigned the factory to what we now recognize as the modern factory. The point is that the technology by itself, from our perspective as investors, is insufficient. You need business process and workplace rethinking. An area of research, as it relates to this model of AI adoption, is how reconstructible an industry is—is there an index to describe how particular industries or particular workflows or businesses can be remodeled to use machine learning with more leverage?

    I think that speaks to how those managers in those instances are going to look at ROI. If the payback period for a particular investment is uncertain or really long, we're less likely to adopt it, which is why you're seeing a lot of pickup of robots in factories. You can specify and drive the ROI; the payback period for that is coming down because it's incredibly clear, well-defined. Another industry is, for example, using machine learning in a legal setting for a law firm. There are parts of it—for example, technology assisted review—where the ROI's pretty clear. You can measure it in time saved. Other technologies that help assist in prediction or judgment for, say, higher-level thinking, the return on that is pretty unclear. A lot of the interesting technologies coming out these days—from, in particular, deep learning—enable things that operate at a higher level than we're used to. At the same time, though, they're building products around that that do relatively high-level things that are hard to quantify. The productivity gains from that are not necessarily clear.

    The dangers of AI hype

    One thing I'd say, rather than something missing from the AI conversation, is something there's too much of: hype. Too many businesses now are pitching AI almost as though it's batteries included. That's dangerous because it's going to potentially lead to over-investment in things that overpromise. Then, when they under-deliver, it has a deflationary effect on people's attitudes toward the space. It almost belittles the problem itself. Not everything requires the latest whiz-bang technology. In fact, the dirty secret of machine learning—and, in a way, venture capital—is that so many problems could be solved by just applying simple regression analysis. Yet, very few people, very few industries do the bare minimum.
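
    As a concrete illustration of that "bare minimum": many business questions yield to an ordinary least-squares fit long before anything fancier is needed. A minimal sketch with invented numbers (monthly ad spend versus sales), not drawn from the episode:

        # Ordinary least-squares regression computed by hand; the data is invented.
        ad_spend = [10, 20, 30, 40, 50]    # e.g., thousands of dollars
        sales    = [25, 44, 68, 81, 105]   # e.g., thousands of units

        n = len(ad_spend)
        mean_x = sum(ad_spend) / n
        mean_y = sum(sales) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ad_spend, sales))
                 / sum((x - mean_x) ** 2 for x in ad_spend))
        intercept = mean_y - slope * mean_x

        print(f"sales ≈ {slope:.2f} * ad_spend + {intercept:.2f}")
        print("predicted sales at spend=60:", round(slope * 60 + intercept, 1))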

    9 March 2017, 2:05 pm
  • 17 minutes 35 seconds
    Sara Watson on optimizing personalized experiences

    The O'Reilly Radar Podcast: Turning personalization into a two-way conversation.

    In this week's Radar Podcast, O’Reilly’s Mac Slocum chats with Sara Watson, a technology critic and writer in residence at Digital Asia Hub. Watson is also a research fellow at the Tow Center for Digital Journalism at Columbia and an affiliate with the Berkman Klein Center for Internet and Society at Harvard. They talk about how to optimize personalized experience for consumers, the role of machine learning in this space, and what will drive the evolution of personalized experiences.

    Here are a few highlights:

    Accountability across the life cycle of data

    One of the things I'm paying a lot of attention to is how the machine learning application of this changes what can and can't be explained about personalization. One of the things I'm really looking for as a consumer is to say, "Okay. Why am I seeing this?" That's really interesting to me. I think more and more we're not going to be able to answer that question. Even so, now I think a lot of times we can only provide one piece of the answer as to why I'm seeing this ad, for example. It's really going to get far more complicated, but at the same time, I think there's going to be a lot more need for accountability across that life cycle of data, whether we're talking about following data between the data brokers and the browser history, and my kind of preference model as a consumer. There's got to at least be a little bit more accountability across that pattern. It's obviously going to be a very complicated thing to solve.

    ...Honestly, I think accountability is going to be demand oriented, whether that is from a policy side or a consumer side. People have started to understand there is something happening in the news feed. It's not just a purely objective timeline. It's not linear. Just that level of knowledge has changed the discussion. That's why we're talking about the objectivity of Facebook's news feed and whether or not you're seeing political news on one side or the other, or the trending topics. Being part of the larger discussion, even if that's not reaching a huge range of consumers, is making consumers more educated toward caring about these things.

    Empowering the consumer

    The ideal is not far off. It's just that in practice we're not there yet. I think a lot of people would probably agree that ideal personalization is about relevancy. It's about being meaningful to the consumer and providing something that's valuable. I also think it has to do with being empowering, so not just pushing something onto the consumer, like we know what's best for you or we're anticipating your needs, but really giving them the opportunity to explore what they need and make choices in a smart way.

    Shaping the conversation

    One of the things we talk about on the data side of things is 'targeting' people. Think about that word. It's like targeting? Putting a gun to a consumer's head? When you think about it that way, it's like, okay, yeah, this is a one-way conversation. This is not really giving any agency to the person who is part of that conversation. I'm really interested in trying to open up that dialog in a way that's beneficial to all parties involved.

    ...I think a lot about the language that we use to talk about this stuff. I've written about the metaphors we use to talk about data—with metaphor examples in talking about data lakes, and data's the new oil, and all these kinds of industrial-heavy analogies that really put the focus on the people with the power and the technology and the industry side of things, without necessarily supporting the human side of things. ...It shapes whatever it is you think you're doing, either as a marketer or as the platform that's making those opportunities possible. It's not very sensitive to the subject, really.

    23 February 2017, 12:00 pm
  • 17 minutes 23 seconds
    Tom Davenport on mitigating AI's impact on jobs and business

    The O'Reilly Radar Podcast: The value humans bring to AI, guaranteed job programs, and the lack of AI productivity.

    This week, I sit down with Tom Davenport. Davenport is a professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, a fellow at the MIT Center for Digital Business, and a senior advisor for Deloitte Analytics. He also pioneered the concept of “competing on analytics.” We talk about how his ideas have evolved since writing the seminal work on that topic, Competing on Analytics: The New Science of Winning; his new book Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, which looks at how AI is impacting businesses; and we talk more broadly about how AI is impacting society and what we need to do to keep ourselves on a utopian path.

    Here are some highlights:

    How AI will impact jobs

    In terms of AI impact, there are various schools of thought. Tim O'Reilly's in the very optimistic school. There are other people in the very pessimistic school, thinking that all jobs are going to go away, or 47% of jobs are going to go away, or we'll have rioting in the streets, or our robot overlords will kill us all. I'm kind of in the middle, in the sense that I do think it's not going to be an easy transition for individuals and businesses, and I think we should certainly not be complacent about it and assume the jobs will always be there. But I think it's going to take a lot longer than people usually think to create new business processes and new business models and so on, and that will mean that the jobs will largely continue for long periods.

    One of my favorite examples is bank tellers. We had about half a million bank tellers in the U.S. in 1980. Along come ATMs and online banking, and so on. You'd think a lot of those tasks would be replaced. We have about half a million bank tellers in the United States in 2016, so... Nobody would recommend it as a growth career, and it is slowly starting to decline, but I think we'll see that in a lot of different areas. And then I think there will be a lot of good jobs working alongside these machines, and that's really the primary focus of our book [Only Humans Need Apply: Winners and Losers in the Age of Smart Machines]: identifying five ways that humans can add value to the work of smart machines.

    The appeal of augmentation

    Think about what it is that humans bring to the party. Automation, in a way, is kind of a downward spiral. If everybody's automating something in an industry, the prices decline, and margins decline, and innovation is harder because you’ve programmed this system to do things a certain way. So, as a starting assumption, I think augmentation is a much more appealing one for a lot of organizations than, ‘We're going to automate all the jobs away.’

    Guaranteed job programs

    If I were a leader in the United States, I would say the people who are going to need the most help are not so much the knowledge workers who are kind of used to learning new stuff and transforming themselves, to some degree, but the long-distance truck drivers. We have three million in the United States, and I think you'll probably see autonomous trucks on the interstate, maybe in special lanes or something, before we see autonomous cars in most cities.

    That's going to be tougher, because truck drivers probably, as a class, are not that comfortable in transforming themselves by taking courses here and there, and learning the skills they need to learn. So in that case, maybe we will need some guaranteed income programs—or, I'd actually prefer to see guaranteed job programs. There's some evidence that if you have a guaranteed income, you think, ‘Well, maybe they'll take up new sports or artistic pursuits,’ or whatever. Turns out, what most people do when they have a guaranteed income is, they sleep more and they watch TV more, so kind of not good for society in general. Guaranteed job programs worked in the Great Depression for the Civilian Conservation Corps, and artists and writers and so on, so we could do something like that. Whether this country would ever do it is not so clear.

    The (lacking) economic value of AI

    In a way, what’s missing in the AI conversation is the same thing I saw missing when I started working in analytics: it's a very technical conversation, for the most part. Not that much yet on how it will change key business and organizational processes—how do we get some productivity out of it? I mean, we desperately need more productivity in this country. We haven't increased it much over the past several years—a great example is health care. We have systems that can read radiological images and say, ‘You need a biopsy, because this looks suspicious,’ in a prostate cancer or breast cancer image, or, ‘This pathology image doesn't look good. You need a further biopsy or something, a more detailed investigation,’ but we haven't really reduced the number of radiologists or pathologists at all, so what's the economic value? We've had these for more than a decade. What's the economic value if we're not creating any more productivity?

    I think the business and social and political change is going to be a lot harder for us to address than the technical change, and I don't think we're really focusing much on that. I mean, there's no discussion of it in politics, and not yet enough in the business context, either.

    9 February 2017, 12:20 pm
  • 22 minutes 31 seconds
    Genevieve Bell on moving from human-computer interactions to human-computer relationships

    The O’Reilly Radar Podcast: AI on the hype curve, imagining nurturing technology, and gaps in the AI conversation.

    This week, I sit down with anthropologist, futurist, Intel Fellow, and director of interaction and experience research at Intel, Genevieve Bell. We talk about what she’s learning from current AI research, why the resurgence of AI is different this time, and five things that are missing from the AI conversation.

    Here are some highlights:

    AI’s place on the wow-ahh-hmm curve of human existence

    I think in some ways, for me, the reason for wanting to put AI into a lineage is many of the ways we respond to it as human beings are remarkably familiar. I'm sure you and many of your viewers and listeners know about the Gartner Hype Cycle, the notion of, at first you don’t talk about it very much, then the arc of it's everywhere, and then it goes to the valley of it not being so spectacular until it stabilizes. I think most humans respond to technology not dissimilarly. There's this moment where you go, 'Wow. That’s amazing' promptly followed by the 'Uh-oh, is it going to kill us?' promptly followed by the, 'Huh, is that all it does?' It's sort of the wow-ahh-hmm curve of human existence. I think AI is in the middle of that.

    At the moment, if you read the tech press, the trade presses, and the broader news, AI's simultaneously the answer to everything. It's going to provide us with safer cars, safer roads, better weather predictions. It's going to be a way of managing complex data in simple manners. It's going to beat us at chess. On the one hand, it's all of that goodness. On the other hand, it raises both the traditional fears of technology—is it going to kill us? Will it be safe? What does it mean to have autonomous things? What are they going to do to us?—and the reasonable questions about what models we are using to build this technology out. When you look across the ways it's being talked about, there are those three different factors: one of excessive optimism, one of a deep dystopian fear, and another starting to run a critique of the decisions that are being made around it. I think that’s, in some ways, a very familiar set of positions about a new technology.

    Looking beyond the app that finds your next cup of coffee

    I sometimes worry that we imagine that each generation of new technology will somehow mysteriously and magically fix all of our problems.

    The reality is 10, 20, 30 years from now, we will still be worrying about the safety of our families and our kids, worrying about the integrity of our communities, wanting a good story to keep us company, worrying about how we look and how we sound, and being concerned about the institutions in our existence. Those are human preoccupations that are thousands of years deep. I'm not sure they change this quickly. I do think there are harder questions about what that world will be like and what it means to have the possibility of machinery that is much more embedded in our lives and our world, and about what that feels like.

    In the fields that I come out of, we've talked about human-computer interactions since about the same time AI emerged, and those interactions have really sat inside one paradigm: what we might call a command-and-control infrastructure. You give a command to the technology, you get some sort of answer back; whether that’s old command prompt lines or Google search boxes, it is effectively the same thing. We're starting to imagine a generation of technology that is a little more anticipatory and a little more proactive, that’s living with us—you can see the first generation of those, whether that's Amazon's Echo or some of the early voice personal assistants.

    There's a new class of intelligent agents that are coming, and I wonder sometimes if we move from a world of human-computer interactions to a world of human-computer relationships whether we have to start thinking differently. What does it mean to imagine technology that is nurturing, or that cares, or that wants you to be happy, not just efficient, or that wants you to be exposed to transformative ideas? It would be very different from the app that finds you your next cup of coffee.

    There’s a lot of room for good AI conversations

    What's missing from the AI conversation are the usual things I think are missing from many conversations about technology. One is an awareness of history. I think, like I said, AI doesn’t come out of nowhere. It came out of a very particular set of preoccupations and concerns in the 1950s and a very particular set of conversations. We have, in some ways, erased that history such that we forget how it came to be. For me, I think a sense of history is missing. As a result of that, I think more attention to a robust interdisciplinarity is missing, too. If we're talking about a technology that is as potentially pervasive as this one and as potentially close to us as human beings, I want more philosophers and psychologists and poets and artists and politicians and anthropologists and social scientists and critics of art—I want them all in that conversation because I think they're all part of it.

    I worry that this just becomes a conversation of technologists to each other about speeds and feeds and their latest instantiation, as opposed to saying, if we really are imagining a form of an object that will be in dialogue with us and supplemental and replacing us in some places, I want more people in that conversation. That's the second thing I think is missing.

    I also think it's emerging—I hear in people like Julia Ng and my colleagues Kate Crawford and Meredith Whittaker an emerging critique of it. How do you critique an algorithm? How do you start to unpack a black-boxed algorithm to ask the questions about what pieces of data they are weighing against what, and why? How do we have the kind of dialogue that says, sure, we can talk about the underlying machinery, but we also need to talk about what's going into those algorithms and what it means to train these objects.

    For me, there's then the fourth thing, which is: where is theory in all of this? Not game theory. Not theories about machine learning and sequencing and logical decision-making, but theories about human beings, theories about how certain kinds of subjectivities are made. I was really struck in reading many of the histories of AI, but also of the contemporary work, of how much we make of normative examples in machine learning and in training, where you're trying to work out the repetition—what's the normal thing so we should just keep doing it? I realized that sitting inside those are always judgements about what is normal and what isn't. You and I are both women. We know that routinely women are not normal inside those engines.

    There's something about what would it mean to start asking a set of theoretical questions that come out of feminist theory, out of Marxist theory, out of queer theory, critical race theory about what does it mean to imagine normal here and what is and what isn't. Machine learning people would recognize this as the question of how do you deal with the outliers. I think my theory would be: what if we started with the outliers rather than the center, and where would that get you?

    I think the fifth thing that’s missing is: what are the other ways into this conversation that might change our thinking? As anthropologists, one of the things we're always really interested in is, can we give you that moment where we de-familiarize something? How do you take a thing you think you know and turn it on its head so you go, 'I don’t recognize that anymore'? For me, that’s often about how do you give it a history. Increasingly, I realize in this space there's also a question to ask about what other things we have tried to machine learn on—so, what other things have we tried to use natural language processing, reasoning, induction on to make into supplemental humans or into things that do tasks for us?

    Of course, there's a whole category of animals we've trained that way—carrier pigeons, sheepdogs, bomb-sniffing dogs, Koko the gorilla who could sign. There's a whole category of those, and I wonder if there's a way of approaching that topic that gets us to think differently about learning, because that’s sitting underneath all of this, too. All of those things are missing. When you've got that many things missing, that’s actually good. It means there's a lot of room for good conversations.

    26 January 2017, 1:15 pm
  • 17 minutes 4 seconds
    Pagan Kennedy on how people find, invent, and see opportunities nobody else sees

    The O'Reilly Radar Podcast: The art and science of fostering serendipity skills.

    On this week's episode of the Radar Podcast, O'Reilly's Mac Slocum chats with award-winning author Pagan Kennedy about the art and science of serendipity—how people find, invent, and see opportunities nobody else sees, and why serendipity is actually a skill rather than just dumb luck.

    Here are some highlights:

    The roots of serendipity

    It's really helpful to go back to the original definition of serendipity, which arose in a very whimsical, serendipitous way back in the 1700s. There was this English eccentric named Horace Walpole who was fascinated with a fairy tale called 'The Three Princes of Serendip.' In this fairy tale, the three princes are Sherlock Holmes-like detectives who have amazing skills, forensic skills. They can see clues that nobody else can see. Walpole was thinking about this and very delighted with this idea, so he came up with this word 'serendipity.' In that original definition, Walpole really was talking about a skill, the ability to find what we're not looking for, especially really useful clues that lead to discoveries. In the intervening couple hundred years, the word has almost migrated to the opposite meaning, where we just talk about dumb luck. ... I'm not against that meaning, but I think it's really useful to go back, especially in the age of big data, to go back to that original meaning and talk again about this as a skill.

    The interplay between technology, the human mind, and serendipity

    There's a really interesting interplay between tools and the human mind and serendipity. If you look at the history of science, when something like the telescope or the microscope appears, there are waves of discovery because these tools have made things that were formerly invisible visible. When patterns that you couldn't see before become visible, of course, people, smart people, creative people, find those patterns and begin working with them. I think the data tools and all the new tools that we've got are amazing because they make patterns visible that we wouldn't have been able to see before; but in the end, they're tools, and you've got to have a human mind at the other end of that tool. If the tool throws up a really important anomaly or pattern, you've got to have a human being there who not only sees it and recognizes it, but also gets super excited about it, and defends it and explores it, and gets excited about the opportunity there.

    Serendipity as a highly emotional process

    A class of people who tend to be very good at finding, inventing, and seeing opportunities that nobody else sees are surgeons. I'd really like to emphasize that this kind of problem solving or this kind of pattern finding is not just intellectualizing. It can be very emotional. When surgeons have a problem—somebody dies—they stay up at 3 a.m. thinking about what went wrong with their tools. It's that kind of worrying that is often involved in this kind of search for patterns or opportunities nobody else is seeing. It's not just an intellectual process, but a highly emotional one where you're very worried. This kind of process might not be very good for your health, but it's very good for your creativity, that kind of replaying. Not just noticing in the moment what's going wrong or what might be in the environment that nobody else is seeing, but going over it in your head and thinking about alternative realities.

    12 January 2017, 12:24 pm
  • 31 minutes 9 seconds
    Giles Colborne on AI's move from academic curiosity to mainstream tech

    The O'Reilly Radar Podcast: Designing for mainstream AI, natural language interfaces, and the importance of reinventing yourself.

    This week we're featuring a conversation from earlier this year—O'Reilly's Mary Treseler chats with Giles Colborne, managing director of cxpartners. They talk about the transformative effects of AI on design, designing for natural language interactions, and why designers need to nurture the ability to reinvent themselves.

    The conditions are ripe for AI to enter the mainstream

    Mobile is the platform people want to use. ... That means that a lot of businesses are seeing their traffic shift to a channel that actually doesn't work as well, but people would like it to work well. At the same time, mobile devices have become incredibly powerful. Organizations are suddenly finding themselves flooded with data about user behavior. Really interesting data. It's impossible for a person to understand, but if you have a very powerful device in the user's hand, and you have powerful computers that can crunch this data and shift it around quickly, suddenly, technologies like AI become really important, and you can start to predict what the user might want. Therefore, you can remove a little bit of the friction from mobile.

    Looking around at this landscape a couple of years ago, it was obvious that this was going to be where something interesting happened soon. Sure enough, you can see that everywhere now. The interest in AI is phenomenal. At its simplest, the crudest application of AI is simply that: to shortcut user input. That's a very simple application, but it's incredibly powerful. It has a transformative effect. That's why I think AI is really important and why I think its time is now. That's why I think you're starting to see it everywhere. The conditions are ripe for AI to move from being an academic curiosity into what it is now: mainstream.

    Designing natural language interfaces

    One of the things we've been working on a lot recently is designing around chat interfaces and natural language interfaces, NLIs. That's a form of algorithmic design, a really complex form. Essentially, a lot of the features that you find in other forms of AI design are there in designing natural language interfaces. As we've been exploring that space, obviously our instinct is to go back to the psychology of language and really study that so that we're building it in, so we understand what we're hearing and can try to model artificial conversations.

    That's led us very quickly to realize that we need tools that support those sorts of language structures as well. We've been working with a company called Artificial Solutions, which provided us with wonderful tools that enable us to very rapidly model—and almost prototype in the browser—natural language interactions much faster than writing out scripts or running through Post-It notes. You can very quickly see, 'This is where this conversation feels awkward; this is where this conversation is breaking down.' I think that ability to rapidly prototype is incredibly important.

    Embracing reinvention

    I think anybody working today needs to be endlessly curious to keep up with the speed with which technology forces us to reinvent ourselves—AI is a great example of that; there's going to be an awful lot of roles that are going to need to be reinvented as AI support tools become mainstream. That ability to be curious and to reinvent yourself is really important.

    The ability to see things from multiple points of view simultaneously is important as well. We've hired some great people from media backgrounds, and they very naturally have that ability to shift between the actor, if you like—which in our case is the interactive thing that we're designing—the audience, and the author, and are able to think about each of those viewpoints. As you're learning through a design process, you need to be able to hold each of those viewpoints in your head simultaneously. That's really important.

    29 December 2016, 1:15 pm
  • 59 minutes 43 seconds
    Brad Knox on creating a strong illusion of life

    The O'Reilly Radar Podcast: Imbuing robots with magic, eschewing deception in AI, and problematic assumptions of human-taught reinforcement learning.

    In this episode, I sit down with Brad Knox, founder and CEO of Emoters, a startup building a product called bots_alive—animal-like robots that have a strong illusion of life. We chat about the approach the company is taking, why robots or agents that pass themselves off as human without any transparency should be illegal, and some challenges and applications of reinforcement learning and interactive machine learning.

    Here are some highlights from our conversation:

    Creating a strong illusion of life

    I've been working on a startup company, Emoters. We're releasing a product called bots_alive, hopefully in January, through Kickstarter. Our big vision there is to create simple, animal-like robots that have a strong illusion of life. This immediate product is going to be a really nice first step in that direction. ... If we can create something that feels natural, that feels like having a simple pet—maybe not for a while anything like a dog or cat, but something like an iguana or a hamster—where you can observe it and interact with it, we think it would be really valuable to people.

    The way we're creating that is going back to research I did when I was at MIT with Cynthia Breazeal and a master's student, Sam Spaulding—machine learning from demonstration on human-improvised puppetry. With current methods, when you create an artificially intelligent character, you sit back and think, 'Well, in this situation, the character should do this.' For example, a traditional AI character designer might write the rule for an animal-like robot that if a person moves his or her hand quickly, the robot should be scared and run away.

    That results in some fairly interesting characters, but our hypothesis is that we'll get much more authentic behaviors, something that really feels real, if we first allow a person to control the character through a lot of interactions. Then, take the records and the logs of those interactions, and learn a model of the person. As long as that model has good fidelity—it doesn't have to be perfect, but it captures the puppeteer with pretty good fidelity—and the puppeteer is actually creating something that would be fun to observe or interact with, then we're in a really good position. … It's hard to sit back and write down on paper why humans do the things we do, but what we do in various contexts is going to be in the data. Hopefully, we'll be able to learn that from human demonstration and really imbue these robots with some magic.

    A better model for tugging at emotions

    The reason I wrote that tweet [Should a robot or agent that widely passes for human be illegal? I think so.] is that if a robot or an agent—you could think of an agent as anything that senses the state of its environment, whether it's a robot or something like a chat bot, just something you're interacting with—can pass as human and it doesn't give some signal or flag that says, 'Hey, even if I appear human, I'm not actually human,' that really opens the door to deception and manipulation. For people who are familiar with the Turing Test—which is by far the most well-known test for successful artificial intelligence—the issue I have with it is that, ultimately, it is about deceiving people, about them not being able to tell the difference between an artificially intelligent entity and a human.

    For me, one real issue is that, as much as I'm generally a believer in capitalism, I think there's room for abuse by commercial companies. For instance, it's hard enough when you're walking down the street and a person tries to get your attention to buy something or donate to some cause. Part of that is because it's a person and you don't want to be rude. When we create a large number—eventually, inexpensive fleets—of human-like or pass-for-human robots that can also pull on your emotions in a way that helps some company, I think the negative side is realized at that point.

    ...

    How is that not a contradiction [to our company's mission to create a strong illusion of life]? The way I see illusion of life (and the way we're doing it at bots_alive) is very comparable to cartoons or animation in general. When you watch a cartoon, you know that it's fake. You know that it's a rendering, or a drawing, or a series of drawings with some voice-over. Nonetheless, if you're like most people, you feel and experience these characters in the cartoon or the animation. ... I think that's a better model, where we know it's not real but we can still feel that it's real to the extent that we want to. Then, we have a way of turning it off and we're not completely emotionally beholden to these entities.

    Problematic assumptions of human-taught reinforcement learning

    I was interested in the idea of human training of robots in an animal training way. Connecting that to reinforcement learning, the research question we posed was: instead of the reward function being coded by an expert in reinforcement learning, what happens if we instead give buttons or some interface to a person who knows nothing about computer science, nothing about AI, nothing about machine learning, and that person gives the reward and punishment signals to an agent or a robot? Then, what algorithmic changes do we need to make the system learn what the human is teaching the agent to do?

    If it had turned out that the people in the study had not violated any of the assumptions of reinforcement learning when we actually did the experiments, I think it wouldn't have ended up being an interesting direction of research. But this paper dives into the ways that people did violate, deeply violate, the assumptions of reinforcement learning.

    One emphasis of the paper is that people tend to have a bias toward giving positive rewards. A large percentage of the trainers we had in our experiments would give more positive rewards than punishment—or in reinforcement learning terms, 'negative rewards.' We found that people were biased toward positive rewards.

    The way reinforcement learning is set up is, a lot of the reinforcement learning tasks are what we call 'episodic'—roughly, what that means is that when the task is completed, the agent can't get further reward. Its life is essentially over, but not in a negative way.

    When we had people sit down and give reward and punishment signals to an agent trying to get out of a maze, they would give a positive reward for getting closer to the goal, but then this agent would learn, correctly (at least by the assumptions of reinforcement learning), that if it got to the goal, (1) it would get no further reward, and (2) if it stayed in the world that it's in, it would get a net positive reward. The weird consequence is that the agent learns that it should never go to the goal, even though that's exactly what these rewards are supposed to be teaching it.

    In this paper, we discussed that problem and showed the empirical evidence for it. Basically, the assumptions that reinforcement learning typically makes are really problematic when you're letting a human give the reward.
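
    A minimal sketch of that failure mode, under assumed specifics (a five-state corridor, a +0.5 "human-style" reward for every step toward the goal, tabular Q-learning): because the episode ends at the goal while shaping rewards keep flowing as long as the agent stays in the world, the learned greedy policy turns away from the goal one step before reaching it. The environment and numbers are invented for illustration and are not the experimental setup from the paper.

        import random

        # Invented toy environment: states 0..4 along a corridor; state 4 is the goal
        # and ends the episode. A "human-like" shaping reward of +0.5 is given every
        # time the agent steps toward the goal, mimicking the positive-reward bias.
        GOAL, ACTIONS = 4, (-1, +1)
        GAMMA, ALPHA, EPS = 0.95, 0.1, 0.2
        Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

        def step(state, action):
            next_state = max(0, min(GOAL, state + action))
            reward = 0.5 if action == +1 else 0.0
            return next_state, reward, next_state == GOAL   # episodic: nothing after the goal

        for _ in range(3000):                               # simple tabular Q-learning
            s = 0
            for _ in range(50):                             # cap episode length
                a = (random.choice(ACTIONS) if random.random() < EPS
                     else max(ACTIONS, key=lambda x: Q[(s, x)]))
                s2, r, done = step(s, a)
                target = r if done else r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
                Q[(s, a)] += ALPHA * (target - Q[(s, a)])
                s = s2
                if done:
                    break

        # One step from the goal, the greedy agent prefers to turn away: circling
        # forever collects more discounted shaping reward than finishing the task.
        print({a: round(Q[(3, a)], 2) for a in ACTIONS})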

    15 December 2016, 2:55 pm
  • 27 minutes 26 seconds
    Fang Yu on using data analytics to catch constantly evolving fraudsters

    The O'Reilly Radar Podcast: Big data for security, challenges in fraud detection, and the growing complexity of fraudster behavior.

    This week, I sit down with Fang Yu, cofounder and CTO of DataVisor, where she focuses on big data for security. We talk about the current state of the fraud landscape, how fraudsters are evolving, and how data analytics and behavior analysis can help defend against—and prevent—attacks.

    Here are some highlights from our chat:

    Challenges in using supervised machine learning for fraud detection

    In the past few years, machine learning has taken a big role in fraud detection. There are a number of supervised machine learning techniques and breakthroughs, especially for voice, image recognition, etc. There's also an application for machine learning to detect fraud, but it's a little challenging because supervised machine learning needs labels. It needs to know what good users and bad users look like, and to know what good behavior is, what bad behavior is; the problem in many fraud cases is that attackers constantly evolve. Their patterns change very quickly, so in order to detect an attack, you need to know what they will do next.

    That is ultimately hard, and in some cases—for example, financial transactions—it is too late. For supervised machine learning, you will have a chargeback label from the bank because someone sees their credit card got abused and calls the bank. That's how you get the label. But that happens well after the actual transaction takes place, sometimes even months later, and the damage is already done. And moving forward, by the time you have a trained model to prevent it from happening again, the attacker has already changed his or her behavior. Supervised machine learning is great, but when applied to security, you need a quicker and more customized solution.

    An unsupervised machine learning approach to identify sleeper cells

    At DataVisor, we actually do things differently from the traditional rule-based or supervised machine learning-based approaches. We do unsupervised detection, which does not need labels. At a high level, today's attackers do not use a single account to conduct fraud. If they have a single account, the fraud they can conduct is very limited. What they usually do is construct an army of fraud accounts, either through mass registrations or account takeovers; then each account commits a little fraud. They can do spamming, they can do phishing, they can do all types of different bad activities. But together, because they have many accounts, they conduct attacks at a massive scale.

    For DataVisor, the approach we take is called an unsupervised approach. We do not look at individual users anymore. We look at all the users in a holistic view and uncover their correlations and linkages. We use graph analysis and clustering techniques, etc., to identify these fraud rings. We can identify them even before they have done anything, or while they are sleeping, so we call them "sleeper cells."
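
    A heavily simplified sketch of that holistic, unsupervised idea: link accounts that share registration or behavioral attributes, then flag unusually large connected groups as candidate rings. The attributes, the union-find grouping, and the size threshold below are assumptions for illustration; DataVisor's actual graph analysis and clustering techniques are far more sophisticated.

        from collections import defaultdict

        # Toy signup records; in practice these would be rich behavioral and
        # registration signals. All values here are invented for illustration.
        accounts = [
            {"id": "a1", "ip": "10.0.0.1", "device": "d1"},
            {"id": "a2", "ip": "10.0.0.1", "device": "d2"},
            {"id": "a3", "ip": "10.0.0.2", "device": "d2"},
            {"id": "a4", "ip": "10.0.0.9", "device": "d9"},  # unrelated, likely legitimate
        ]

        parent = {a["id"]: a["id"] for a in accounts}

        def find(x):                       # union-find with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(x, y):
            parent[find(x)] = find(y)

        # Link any two accounts that share an attribute value (same IP, same device).
        by_attr = defaultdict(list)
        for a in accounts:
            for key in ("ip", "device"):
                by_attr[(key, a[key])].append(a["id"])
        for ids in by_attr.values():
            for other in ids[1:]:
                union(ids[0], other)

        # Clusters of correlated accounts; unusually large ones are candidate rings,
        # even before any single account does anything overtly bad ("sleeper cells").
        clusters = defaultdict(list)
        for a in accounts:
            clusters[find(a["id"])].append(a["id"])
        print([ids for ids in clusters.values() if len(ids) >= 3])   # [['a1', 'a2', 'a3']]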

    The big payoff of fraudulent faking

    Nowadays, we actually see fraud becoming pretty complex and even more lucrative. For example, if you look at e-commerce platforms, they sometimes offer reviews. They let users rate, like, and write reviews about products. And all of these can be leveraged by the fraudsters—they can write fake reviews and incorporate bad links in the writeups in order to promote their own products. So, they do a lot of fake likes to promote.

    Now we also see a new trend: moving from the old days of fake impressions and fake clicks to actual fraudulent installs. For example, in the old days, when a gaming company had a new game coming out, they would purchase users to play these games—they would pay people around $50 to play an Xbox game. Now, many of the games are free, but they need to drive installs to improve their rank in app stores. These gaming providers rely on app marketing, purchasing users from different media sources, which can be pretty expensive—a few dollars per install. So, the fraudsters start to emulate users and download these games. They pretend they are media sources and cash in by just downloading and playing the games. That payoff is 400 times more than that of a fake click or impression.

    The future of fraudsters and fraud detection

    Fraudsters are evolving to look more like real users, and it's becoming more difficult to detect them. We see them incubate for a long time. We see them using cloud services to circumvent IP blacklists. We see them skirting two-factor authentication. We see them opening apps, making purchases, and doing everything a real, normal user does. They are committing fraud at a huge scale across all industries, from banking and money laundering to social, and the payoff for them is equally massive. If they are evolving, we need to evolve, too. That's why new methods, such as unsupervised machine learning, are so critical to staying ahead of the game.

    1 December 2016, 1:40 pm
  • 17 minutes 38 seconds
    Hilary Mason on the wisdom missing in the AI conversation

    The O'Reilly Radar Podcast: Thinking critically about AI, modeling language, and overcoming hurdles.

    This week, I sit down with Hilary Mason, who is a data scientist in residence at Accel Partners and founder and CEO of Fast Forward Labs. We chat about current research projects at Fast Forward Labs, adoption hurdles companies face with emerging technologies, and the AI technology ecosystem—what's most intriguing for the short term and what will have the biggest long-term impact.

    Here are some highlights:

    Missing wisdom

    There are a few things missing [from the AI conversation]. I think we tend to focus on the hype and eventual potential without thinking critically about how we get there and what can go wrong along the way. We have a very optimistic conversation, which is something I appreciate. I'm an optimist, and I'm very excited about all of this stuff, but we don't really have a lot of critical work being done in things like how do we debug these systems, what are the consequences when they go wrong, how do we maintain them over time, and operationalize and monitor their quality and success, and what do we do when these systems infiltrate pieces of our lives where automation may have highly negative consequences. By that, I mean things like medicine or criminal justice. I think there's a big conversation that is happening, but the wisdom still is missing. We haven't gotten there yet.

    Making the impossible possible

    I'm particularly intrigued at the moment by being able to model language. That's something where I think we can't yet imagine the ultimate applications of these things, but it starts to make things that previously would have seemed impossible possible, things like automated novel writing, poetry, things that we would like to argue are purely human creative enterprises. It starts to make them seem like something we may one day be able to automate, which I'm personally very excited about.

    The impact question is a really good one, and I think it is not one technology that will have that impact. It's the same reason we're starting to see all these different AI products pop up. It's the ensemble of all of the techniques that are falling under this umbrella together that is going to have that kind of impact and enable applications like the Google Photos app, which is my favorite AI product, or self-driving cars or things like Amazon's Alexa, but actually smarter. That's a collection of different techniques.

    Making sentences and languages computable

    We've done a project in automated summarization that I'm very excited about—that is, applying neural networks to text, where you can put in a single article and it will extract sentences from it; this is extractive summarization. It extracts sentences from that article that, combined together, contain the same information as the article as a whole.

    We also have another formulation of the problem, which is multi-document summarization, where we apply this to Amazon product reviews. You can put in 5,000 reviews, and it will tell you these reviews tend to cluster in these 10 ways, and for each cluster, here's the summary of that cluster review. It gives you the capability to read or understand thousands of documents very quickly. ... I think we're going to see a ton of really interesting things built on the techniques that underlie that. It's not just summarization, but it's making sentences and languages computable.
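
    As a rough, non-neural stand-in for the extractive idea: score each sentence by how frequent its words are in the whole document and keep the top-scoring sentences in their original order. This is only a frequency-based sketch of "select sentences rather than generate new text"; the Fast Forward Labs prototypes themselves apply neural networks.

        import re
        from collections import Counter

        STOP = {"the", "a", "an", "of", "and", "to", "in", "it", "is", "that", "this", "as"}

        def extractive_summary(text, n_sentences=2):
            """Frequency-based extractive summarization: keep the sentences whose
            words are most frequent across the document, in original order."""
            sentences = re.split(r"(?<=[.!?])\s+", text.strip())
            words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]
            freq = Counter(words)

            def score(sentence):
                tokens = [t for t in re.findall(r"[a-z']+", sentence.lower()) if t not in STOP]
                return sum(freq[t] for t in tokens) / max(len(tokens), 1)

            ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
            return " ".join(sentences[i] for i in sorted(ranked[:n_sentences]))

        article = ("Neural networks can summarize text. Extractive summarization selects "
                   "sentences straight from the article. The selected sentences together "
                   "should carry the same information as the article as a whole. "
                   "The weather was nice yesterday.")
        print(extractive_summary(article))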

    Adoption hurdles

    I think the biggest adoption hurdle [for emerging technologies]—there are two I'll mention. The first is that sometimes these technologies get used because they're cool, not because they're useful. If you build something that's not useful, people don't want to use it. That can be a struggle.

    The second thing is that people are generally resistant to change. When you're in an organization and you're trying to advocate for the use of a new technology to make the organization more efficient, you will likely run into friction. In those situations, it's a matter of time and making the people who are most resistant look good.

    17 November 2016, 2:15 pm
  • 25 minutes 44 seconds
    Richard Cook and David Woods on successful anomaly response

    O'Reilly Radar Podcast: SNAFU Catchers, knowing how things work, and the proper response to system discrepancies.

    In this week's episode, O'Reilly's Mac Slocum sits down with Richard Cook and David Woods. Cook is a physician, researcher, and educator who is currently a research scientist in the Department of Integrated Systems Engineering at Ohio State University, and emeritus professor of health care systems safety at Sweden’s KTH. Woods is also a professor at Ohio State University, where he leads the Initiative on Complexity in Natural, Social, and Engineered Systems and co-directs the university’s Cognitive Systems Engineering Laboratory. They chat about SNAFU Catchers; anomaly response; and the importance of not only understanding how things fail, but how things normally work.

    Here are a few highlights:

    Catching situations abnormal

    Cook:

    We're trying to understand how Internet-facing businesses manage to handle all the various problems, difficulties, and opportunities that come along. Our goal is to understand how to support people in that kind of work. It's a fast-changing world that mostly appears on the surface to be smoothly functioning but, in fact, as people who work in the industry know, is always struggling with different kinds of breakdowns, things that don't work correctly, and obstacles that have to be addressed. SNAFU Catchers refers to the idea that people are constantly working to collect, and respond to, all the different kinds of things that foul up the system, and that that's the normal situation, not the abnormal one.

    Woods:

    [SNAFU] is a coinage from the grunts in World War II on our side, on the winning side. Situation normal: so the normal situation is all fucked up, right? The pristine, smooth picture—work is designed, follow the plan, put in automation, everything is great—isn't really the way things work in the real world. It appears that way from a distance, but on the ground, there are gaps, uncertainties, conflicts, and trade-offs. Those are normal—in fact, they're essential. They're part of this universe and the way things work. What that means is, there is often a breakdown, a limit in terms of how much adaptive capability is built into the system, and we have to add to that. Because surprise will happen, exceptions will happen, anomalies will happen. Where does that extra capacity to adapt to surprise come from? That's what we're trying to understand and focus on—not the SNAFU; that's just normal. We're focusing on the catching: what are the processes, abilities, and capabilities of the teams, groups, and organizational practices that help you catch SNAFUs? That's about the anticipation and preparation, so you can respond quickly and directly when the surprise occurs.

    Know how things work, not just how they fail

    Cook:

    There's an old surgical saying that 'good results come from experience, and experience comes from bad results.' That's probably true in this industry as well. We learn from experience by having difficulties and solving those sorts of problems. We live in an environment in which people are doing this as apprenticeships very early on in their lives, and the apprenticeship gives them opportunities to experience different kinds of failure. Having those experiences tells them something about the kinds of activities they should perform once they sense a failure is occurring, and also some of the different kinds of things they can do to respond to different kinds of failures. Most of what happens in this is a combination of understanding how the system is working and understanding what's going on that suggests it's not working in the right sort of way. You need two kinds of knowledge to be able to do this: not just knowledge of how things fail, but also knowledge of how things normally work.

    No anomaly is too small to ignore

    Woods:

    I noticed that what's interesting is, you have to have a pretty good model of how it's supposed to work. Then you start getting suspicious. Things don't quite seem right. These are the early signals, sometimes called weak signals. These are easy to discount away. One of the things you see, and this happened in [NASA] mission control, for example, in its heyday, all discrepancies were anomalies until proven otherwise. That was the cultural ethos of mission control. When you lose that, you see people discounting, 'Oh, that discrepancy isn't going to really matter. I've got to get this other stuff done,' or, 'If I foul it up, some other things will start happening.'

    What we see in successful anomaly response is this early ability to notice something starting to go wrong, and it is not definitive, right? If it was definitive, then it would cross some threshold, it would activate some response, it would pull other resources in to deal with it, because you don't want it to get out of control. The preparation for, and success at, handling these things is to get started early. The failure mode is, you're slow and stale—you let it cook too long before you start to react. You can be slow and stale, and the cascade can get away from you; you lose control. When teams or organizations are effective at this, they notice things are slightly out, and then pursue it. Dig a little deeper, follow up, test it, bring some other people to bear with different or complementary expertise. They don't give up real quick and say, 'That discrepancy is just noise and can be ignored.' Now, most of the time, those discrepancies probably are noise, right? It isn't worth the effort. But sometimes those are the beginnings of something that's going to threaten to cascade out of control.

    3 November 2016, 12:45 pm