Podcast of the annual conference on emerging legal issues surrounding digital publishing and content distribution, produced by the Media Law Resource Center and the Berkeley Center for Law & Technology
The European Union’s General Data Protection Regulation has been in effect for nearly a year, and California’s expansive Consumer Privacy Act (CCPA) is set to go into effect in 2020. This panel of regulators and in-house and outside counsel will review the enforcement trends that have the greatest impact on online platforms, and the actions platforms have taken to comply and to reduce their risk. The panel will also address the new requirements of the CCPA and the law’s potential impact on advertising-supported business models. In addition, this session will discuss the policy implications of new legislation proposed in Congress to federally regulate digital privacy in the U.S. The panel discussion will include:
• A review of the enforcement actions during the first year of GDPR, and what they reveal about the challenges of obtaining consent under GDPR
• Challenges of responding to Data Subject Access, Correction and Deletion Requests, and how organizations have handled them
• The impact of Brexit on GDPR compliance plans
• The differences and similarities between the CCPA and the GDPR; and
• Handicapping the (many) pending amendments to the CCPA
Panelists:
Tyler Newby, Partner, Fenwick & West LLP (Moderator)
Emily Jones, Partner, Osborne Clarke LLP
Kandi Parsons, Shareholder, ZwillGen PLLC
Nithan Sannappa, Associate Legal Director, Product, Twitter, Inc.
This session will take an in-depth look at legal strategies for protecting anonymous speech online from the perspective of the platforms that provide the channels of communication, as well as from users seeking to maintain their online anonymity. A panel of expert inside and outside counsel will consider:
• What are the legal standards for maintaining user anonymity when a platform is served with a subpoena – how do they vary from jurisdiction to jurisdiction – and differ depending on the nature of the action (e.g., defamation vs. copyright vs. criminal investigation)? Should the standard be codified?
• How do efforts to identify defendants through indirect means (e.g., by IP address) affect courtroom battles over anonymity?
• What is the duty of platforms to their anonymous users? How should platforms address the issue of user notification when responding to subpoenas, and under what circumstances must (or should) a platform withhold such notice?
• For platforms maintaining a forum for anonymous speech, what are the best policies with respect to data collection and retention of user information, balancing the desire to protect user identity with the need to operate and protect the platform and generate advertising revenues?
• What assurances should (or should not) be made in the platform’s terms of service with respect to protecting anonymity? Do those assurances affect whether a user can remain anonymous?
• What are the practical mechanics of going into court – either on behalf of the platform or on behalf of an anonymous user – and what are the challenges of persuading a court to accept an attorney’s appearance on behalf of an anonymous user? Can anonymity be maintained even if the subpoenaing party meets its burden to overcome a motion to quash? What mechanisms are at a court’s disposal to do so?
• Are anonymity rights litigated haphazardly, and is this bad for development of the law? What if the platform does not want to devote the resources to moving to quash in a particular case, or the user does not have the resources to do so? How can anonymous users find competent lawyers to help them?
• How can we ensure anonymity rights are adequately protected when foreign litigants seek to unmask users in American courts via federal ex parte applications for discovery?
Panelists:
Ashley I. Kissinger, Of Counsel, Ballard Spahr LLP (Moderator)
Raymond Oliver Aghaian, Partner, Kilpatrick Townsend & Stockton
Joshua Koltun, Esq.
Tom O’Brien, VP, Deputy General Counsel, Glassdoor, Inc.
Practitioners from Europe will bring us the latest developments on global takedown cases working their way through the European courts (e.g., CNIL v. Google; Glawischnig-Piesczek v. Facebook), as well as controversial provisions of the EU Copyright Directive that threaten to impose a so-called “link tax” on platforms that aggregate news content and to require platforms to take affirmative measures to prevent unauthorized posting of copyrighted content.
Presenters:
Bryony Hurst, Partner, Litigation, Bird & Bird LLP
Remy Chavannes, Partner, Brinkhof
In its landmark decision in Carpenter v. United States, the Supreme Court held that the Fourth Amendment requires law enforcement to obtain a warrant before gathering historical cell-site location data about a suspect from cellular service providers, calling into question the validity of the “third-party doctrine” in the online context. The decision has opened the door to a new way of thinking about constitutional privacy in the digital age, where third-party platforms store some of our most personal data. How will (and how should) courts respond to government requests for IP addresses, search history, emails and the like? And what could Congress do to clarify existing law? A former federal magistrate judge, the Hon. Stephen Wm. Smith, will discuss these issues with noted practitioner Marc Zwillinger, and together they will provide their analysis of where we’ve been and where we’re going.
Speakers:
Jim Dempsey, Executive Director, Berkeley Center for Law & Technology (Moderator)
Hon. Stephen Wm. Smith, U.S.M.J. (retired), Director of Fourth Amendment & Open Courts, Stanford Center for Internet and Society
Marc Zwillinger, Founder & Managing Member, ZwillGen PLLC
Many decisions that affect public discourse on online platforms are made before the first user logs on. Speech on the internet is shaped by platforms’ structural choices including: the length of permitted submissions; whether posts are permanent or disappear over time; how the content that users see is selected; the control granted to users over who sees their own posts; mechanisms for the reporting and removal of content considered offensive; and more. These choices can result in rigidly controlled discussions or free-for-all melees, in-depth analysis or the exchange of quick thoughts, and private discussions or public debates. How do concepts of freedom of speech play into these decisions, and how does that affect the advice given by counsel with respect to the development of new products? This session will explore these and other questions, including:
• What does it mean to design a product with values such as freedom of speech, privacy, etc., in mind? How do design choices with respect to privacy affect free speech, and vice versa?
• Which kinds of design choices are likely to chill the exchange of ideas? What forces – internal or external – drive a company to make these choices?
• What are best practices for product counsel attempting to balance a commitment to freedom of speech with other commitments and priorities their companies might have?
• Are legal principles such as the First Amendment irrelevant? To what extent have the judgments embodied in First Amendment doctrine been supplanted by other ethical considerations or the desires of a platform’s particular community?
• To what extent is it possible to build the highly subjective and fact-based standards on which free speech decisions often depend into technological tools such as content filters?
• What, if any, obligation does a tech platform have to consider the use or abuse of its products by government officials, either in terms of public access to government activity or the potential use of those products by the government to suppress citizens’ speech?
Panelists:
Jeff Hermes, Deputy Director, Media Law Resource Center (Moderator)
Ambika K. Doran, Partner, Davis Wright Tremaine LLP
Ben Glatstein, Asst. General Counsel, Microsoft
Alexis Hancock, Staff Technologist, Electronic Frontier Foundation
Jacob Rogers, Senior Legal Counsel, Wikimedia Foundation, Inc.
Does one infringe a copyright by in-line linking? If so, how much will our internet be shrinking? Just how pervasive are in-line linking, embedding and framing in today’s digital media? A content development and distribution pro first explains the present state before envisioning a hypothetical internet without these tools. Then counsel who won the two leading cases dive deep into the controversy: does copyright law hold embedding a link to be an infringing “display” whether or not the work is hosted on the servers of an independent platform (Goldman v. Breitbart)? Or does infringement depend on actually hosting a copy of the work, rather than pointing a browser to another internet location (Perfect 10 v. Amazon.com)? Would other defenses (like DMCA or implied license) bail out current business models? Has the Supreme Court’s indeterminate Aereo decision now come home to roost? And what is the best advice for an anxious client in the current environment?
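For listeners unfamiliar with the underlying mechanics, here is a minimal sketch (all URLs hypothetical) of the technical distinction on which the “server test” turns:

```python
# Minimal sketch of in-line linking vs. hosting (hypothetical URLs).

# Embedding / in-line linking / framing: the publisher's HTML merely points
# the reader's browser at an image stored on a third party's server. No copy
# of the photo ever resides on the publisher's machines, yet the image
# appears as part of the publisher's page.
EMBEDDED_PAGE = '<img src="https://third-party-host.example/photo.jpg">'

# Hosting: the publisher stores its own copy of the file and serves it
# from its own server.
HOSTED_PAGE = '<img src="https://publisher.example/uploads/photo.jpg">'
```

Under the Perfect 10 server test, only the second posture implicates the display right; Goldman reasoned that the display right does not turn on where the copy is stored.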
Panelists:
Erik Stallman, Assistant Clinical Professor of Law, Berkeley Law (Moderator)
Andrew Bridges, Partner, Fenwick & West LLP
Angela Kim, Audience Development Manager, Verizon Media
Ken Norwick, Partner, Norwick & Schad
New machine-learning technology is allowing even amateur video editors to conjure videos that convincingly replace people’s faces with those of others – frequently unwitting celebrities – to both creative and destructive ends. This digital face-swapping tech has been used for satirical internet videos and, perhaps most famously, to recreate a younger Princess Leia in the Star Wars film Rogue One. In their most provocative form, these AI tools have been used to create so-called “deepfakes”: X-rated content featuring the faces of popular Hollywood actresses grafted onto porn stars’ bodies. The videos have already engendered a firestorm that has led to bans on even freewheeling platforms like Reddit and Pornhub. This short presentation will explore whether the law can keep up with this controversial form of speech, and whether a balance can be struck that protects the reputational and privacy interests of unwitting subjects while upholding First Amendment principles.
• Do existing laws governing defamation, privacy, right of publicity, copyright, or the intentional infliction of emotional distress, or anti-revenge porn laws, protect the unwitting subjects of “deepfakes” videos?
• How does the legal analysis change when fake videos are passed off as real? When celebrities are involved?
• Will this technology make it harder to verify audiovisual content, and easier to generate fake news?
Presenter:
Jim Rosenfeld, Partner, Davis Wright Tremaine LLP
Kara Swisher, influential technology journalist and co-founder of Recode, speaks with fellow journalist Sarah Jeong on the state of the tech world in a climate where Silicon Valley is facing growing scrutiny from public officials and the public at large.
The Computer Fraud & Abuse Act was enacted by Congress in 1986, primarily as a tool to criminally prosecute hackers, in an era before the web and online publishing, when the internet was used mostly by a small universe of academic, government and military users. Although Congress has updated the CFAA several times, courts in the modern age of universal internet access and porous digital borders have struggled to determine what it means to access a computer without authorization. This panel will attempt to make sense of the various, often contradictory, judicial rulings in this area, and to debate a better way forward that balances platforms’ private property rights in their data against the right of public access to online information. The session will consider the following questions; a brief sketch of what “scraping” looks like in practice follows the list:
• Can platforms deny other companies the right to access and process otherwise public information on their sites, and what are the rights of aggregators to gather news and process information by scraping other sites?
• What technical measures, such as password protection, are sufficient to enjoy the protections of the CFAA?
• Reconciling the CFAA decisions in Facebook v. Power Ventures and Craigslist v. 3Taps with contradictory rulings in hiQ Labs v. LinkedIn and other cases
• Is there a kind of “public forum doctrine” emerging on private social media, in light of the notions of mandatory access and free speech protections arguably extended to privately owned social media platforms in other cases?
• What should a modern update or replacement of the CFAA look like?
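To ground the discussion, a minimal sketch of what scraping public data looks like in practice (the URL and CSS selector are hypothetical; requests and BeautifulSoup are widely used third-party Python libraries):

```python
# Fetch a publicly accessible page and extract structured data from it:
# the basic activity at issue in hiQ, 3Taps and Power Ventures.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://public-site.example/listings")
resp.raise_for_status()  # stop if the site returned an error status

soup = BeautifulSoup(resp.text, "html.parser")
# Collect the text of every element matching a (hypothetical) CSS class.
titles = [tag.get_text(strip=True) for tag in soup.select(".listing-title")]
print(titles)
```

Whether running code like this after receiving a cease-and-desist letter, or in the face of an IP block, is access “without authorization” under the CFAA is precisely what the cases dispute.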
Panelists:
Brian Willen, Partner, Wilson Sonsini (Moderator)
Jonathan Blavin, Partner, Munger Tolles
Stacey Brandenburg, Partner, ZwillGen
Jamie Williams, Staff Attorney, Electronic Frontier Foundation
It has been approximately a year since the Uber scandal uncovered a culture of sexual harassment and gender bias in the tech community. Silicon Valley still faces a dearth of female founders and women are still underrepresented at executive levels in tech companies and law firms. But is the outlook showing signs of improvement? What steps are tech companies taking to reduce gender bias and discrimination? How can the legal community contribute to increased diversity? This session will examine the current climate and highlight the strides the tech community is making to improve the future of women in tech.
Panelists:
Regina Thomas, Associate General Counsel, Oath Inc. (Moderator)
Lora Blum, General Counsel, SurveyMonkey
Connie Loizos, Silicon Valley Editor, TechCrunch
Nikki Stitt Sokol, Associate General Counsel – Litigation, Facebook
This session will begin with a tutorial on how algorithms and machine learning work, in order to give lawyers a better understanding of how these technologies are applied to real-world problems. For example: how does machine learning help a review site spot fake reviews, a social media platform identify misinformation campaigns, or sites identify a banned user trying to rejoin under a new identity? (A toy example of such a classifier appears after the list below.) Our tutorial will explore what algorithms and machine learning can and cannot do. The demonstration will be followed by a broader policy discussion exploring some of the practical, legal and ethical challenges of using algorithms:
• Since it is almost impossible to run a large network with millions of users without algorithms, how do you strike the right balance between machine learning and human moderators, whether for legal compliance or for takedowns under company policies (e.g., copyright, pornography, hate speech)?
• Does more reliance on machines to make decisions create new problems like unfair takedowns and lack of transparency?
• Under what circumstances does legal liability for machine-made decisions attach?
• What happens when a government agency requires platforms to disclose an explanation of algorithmic decision-making (such as under the new GDPR “right to an explanation”), when not only is the algorithm proprietary, but the complexity of machine learning may make it impossible even for the platform to know precisely why a particular choice was made, e.g., why certain content was delivered?
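As a concrete illustration of the fake-review example above, here is a toy supervised text classifier (synthetic data; real systems draw on far richer signals such as account history and IP patterns):

```python
# Toy fake-review classifier: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Great food, friendly staff, will come back",    # genuine
    "BEST PLACE EVER buy now click here",            # fake
    "Service was slow but the pasta was excellent",  # genuine
    "Amazing amazing amazing five stars best best",  # fake
]
labels = [0, 1, 0, 1]  # 0 = genuine, 1 = fake

# TF-IDF turns each review into a weighted bag-of-words vector; the
# regression learns which word patterns correlate with the "fake" label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# Probability that a new, unseen review is fake, per the learned weights.
print(model.predict_proba(["click here for the best deal ever"])[:, 1])
```

The sketch also hints at the transparency problem raised in the last bullet: the model’s “reason” for a score is a vector of learned weights, not a human-readable explanation.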
Panelists:
Jim Dempsey, Executive Director, Berkeley Center for Law & Technology
Travis Brooks, Group Product Manager – Data Science and Data Product, Yelp (Tutorial)
Glynna Christian, Partner, Orrick
Cass Matthews, Senior Counsel, Jigsaw