The International Association of Privacy Professi…
It's hard to believe we've reached the final weeks of 2024, a year filled with policy and legal developments across the map. From the continued emergence of AI governance to location privacy enforcement, children's online safety and novel forms of privacy litigation, this was no doubt a year that kept privacy and AI governance pros very busy.
One such professional in the space is Goodwin Partner Omer Tene. He’s been immersed in many of these thorny issues, and as always, has thoughts about what’s transpired in 2024 and what that means for the year ahead. I caught up with Tene to discuss the year in digital policy. Here's what he had to say.
AI governance is a rapidly evolving field that faces a wide array of risks, challenges and opportunities. For organizations looking to leverage AI systems such as large language models and generative AI, assessing risk prior to deployment is a must. One technique that’s been borrowed from the security space is red teaming. The practice is growing, and regulators are taking notice.
Brenda Leong, a partner at Luminos Law, helps global businesses manage their AI and data risks. I recently caught up with her to discuss what organizations should be thinking about when diving into red teaming to assess risk prior to deployment.
As the U.S. enters the final stretch of the 2024 election cycle, we face a tight race at the presidential and congressional levels. With a razor-thin margin separating Vice President Kamala Harris and former President Donald Trump, we decided to take a look at the possible policy positions of each campaign with regard to privacy and artificial intelligence governance.
Of course, reading tea leaves is no easy feat, but while attending IAPP Privacy. Security. Risk. 2024 in Los Angeles, California, IAPP Editorial Director Jedidiah Bracy sat down with Managing Director, D.C., Cobun Zweifel-Keegan, CIPP/US, CIPM, to gain his insight on each camp's policy positions, from the administrative state to international data transfers and beyond. Here's what he had to say.
In May 2024, the U.S. National Institute of Standards and Technology launched a new program called ARIA, which is short for Assessing Risks and Impacts of AI. The aim of the program is to advance sociotechnical testing and evaluation of artificial intelligence by developing methods to quantify how a given system works within real-world contexts. Potential outputs include scalable guidelines, tools, methodologies and metrics. Reva Schwartz is a research scientist and principal investigator for AI bias at NIST and the ARIA program lead. In recent years, she has also helped develop NIST's AI Risk Management Framework.
IAPP Editorial Director Jedidiah Bracy recently caught up with Reva to discuss the program, what it entails, how it will work and who will be involved.
With the proliferation of comprehensive U.S. state privacy laws in recent years, there's been an understandable focus by privacy professionals on this growing patchwork. But privacy litigation is also on the rise, and the plaintiff's bar has explored some novel theories, particularly around the use of online tracking technologies.
Greenberg Traurig Shareholder Darren Abernethy advises clients in the ad tech, data privacy and cybersecurity space and is familiar with these recent litigation trends involving theories related to pen registers, chatbots, session replay, Meta pixels, software development kits and the Video Privacy Protection Act. Here’s what he had to say about these growing litigation trends.
For many of us following along with the EU AI Act negotiations, the road to a final agreement took many twists and turns, some unexpected. For Laura Caroli, this long, complicated road has been a lived experience.
As the lead technical negotiator and policy advisor to AI Act co-rapporteur Brando Benifei, Caroli was immersed in high-stakes negotiations for the world's first major AI legislation.
IAPP Editorial Director Jedidiah Bracy spoke with Caroli in a candid conversation about her experience and policy philosophy, including the approach EU policymakers took in crafting the AI Act, the obstacles negotiators faced, and how it fundamentally differs from the EU General Data Protection Regulation.
She addresses criticisms of the act, highlights the AI-specific rights for individuals, discusses the approach to future-proofing a law that regulates such a rapidly developing technology, and looks ahead to what a successful AI law will look like in practice.
In tandem with privacy, cybersecurity law is rapidly evolving to meet the needs of an increasingly digitized and complex economy. To help practitioners keep up with this ever-changing space, the IAPP published the first edition of Cybersecurity Law Fundamentals in 2021. But there have been a lot of developments since then.
Cybersecurity Law Fundamentals author Jim Dempsey, lecturer at UC Berkeley Law School and senior policy advisor at Stanford Cyber Policy Center, brought on a co-author, John Carlin, partner at Paul Weiss and former Assistant Attorney General, to help with the new edition.
IAPP Editorial Director Jedidiah Bracy recently spoke with both Dempsey and Carlin about the latest trends in cybersecurity, including best practices in dealing with ransomware, the significance of the new SEC disclosure rule, cybersecurity provisions in state privacy laws, trends in FTC enforcement, the recent Biden Executive Order on preventing access to bulk sensitive personal data to countries of concern, and much more.
We even hear about the time Carlin briefed the U.S. president on the Sony Pictures hack.
For those following the regulation of artificial intelligence, there is no doubt passage of the AI Act in the EU is likely top of mind. But proposed policies, laws and regulatory developments are taking shape in many corners of the world, including in Australia, Brazil, Canada, China, India, Singapore and the U.S. Not to be left behind, the U.K. held a highly touted AI Safety Summit late last year, producing the Bletchley Declaration, and the government has been quite active in what the IAPP Research and Insights team describes as a "context-based, proportionate approach to regulation." In the upper chamber of the U.K. Parliament, Lord Holmes, a member of the influential House of Lords Select Committee on Science and Technology, introduced a private members' bill late in 2023 that proposes the regulation of AI. The bill received its second reading in the House of Lords on 22 March. Lord Holmes spoke of AI's power at a recent IAPP conference in London. While there, I had the opportunity to catch up with him to learn more about his Artificial Intelligence (Regulation) Bill and what he sees as the right approach to guiding the powers of this burgeoning technology.
Hard to believe we’re at the twilight of 2023. For those following data protection and privacy developments, each year seems to bring with it a torrent of news and developments. This past year was no different. The EU General Data Protection Regulation turned five, and the Snowden revelations turned 10. From a finalized EU-US Data Privacy Framework, to major enforcement actions on Big Tech companies, to a panoply of new data protection laws in India and at least 7 US states, to the dramatic rise of AI governance, 2023 was as robust as ever.
To help flesh out some of the big takeaways from 2023, IAPP Editorial Director Jedidiah Bracy caught up with IAPP Research & Insights Director Joe Jones, who joined the IAPP at the outset of the year.
After a grueling trilogue process that featured two marathon negotiating sessions, the European Union finally came to a political agreement on 8 December on what will be the world's first comprehensive regulation of artificial intelligence. The EU AI Act will be a risk-based, horizontal regulation with far-reaching provisions for companies and organizations using, designing or deploying AI systems.
Though the so-called trilogue process is a fairly opaque one, in which the European Parliament, European Commission and Council of the EU negotiate behind closed doors, journalist Luca Bertuzzi has acted as a window into the process through his persistent reporting for Euractiv.
IAPP Editorial Director Jedidiah Bracy caught up with Bertuzzi to discuss the negotiations and what comes next in the process.