The AI Forum promotes thoughtful discussion of the legal and societal challenges posed by AI.
Responsible AI in the News
The Tony Blair Institute's recent report advocates for the UK to relax copyright laws, enabling AI firms to utilize protected materials without explicit permission. The proposal has ignited backlash from the creative industries, which fear exploitation and economic harm. Governments must decide whether to prioritize advancement in the fiercely competitive AI race or the protection of creative works by human authors.
A U.S. District Court judge allowed The New York Times' lawsuit alleging copyright infringement by OpenAI and Microsoft to proceed, marking a significant moment in the intersection of AI and intellectual property law. The eventual final ruling is likely to draw upon the U.S. Copyright Office's Report on Copyright and Artificial Intelligence and hinge upon the nature of machine learning used by OpenAI.
The Trump administration received numerous comments in response to its Request for Information (RFI) for the development of an AI Action Plan. The comments highlight stakeholders' priorities on AI policies and reflect debates over copyrighted information for AI model training, federal preemption of state AI laws, and export controls. The AI Action Plan is expected to be announced by mid-July 2025.
The third draft of the EU's General-Purpose AI Code of Practice faces criticism for inadequately addressing fundamental rights risks. Despite the addition of illegal discrimination, most such risks remain optional, raising concerns about the code's effectiveness in ensuring AI providers mitigate these risks under the EU AI Act.
State legislatures are introducing hundreds of AI bills in 2025, focusing on consumer protection, sector-specific regulations, chatbot transparency, generative AI oversight, data center energy usage, and frontier model safety. These diverse proposals aim to shape the U.S. AI regulatory landscape amid a current absence of federal action and will have implications for business development and consumer interactions.
The California Civil Rights Council is developing new regulations to curb AI-driven employment discrimination by requiring anti-bias testing and extended record-keeping. Employers must demonstrate proactive measures to prevent unlawful bias in hiring and recruitment processes, ensuring AI systems align with fair employment practices.
HHS's proposed HIPAA changes would address AI security risks amid the rapid integration of the technology into the healthcare space, applying a previously tech-agnostic regulation to AI. The changes would require covered entities to integrate AI into risk assessments and to manage vulnerabilities in ePHI handling. This shift aims to ensure the confidentiality and integrity of sensitive health data as AI technologies evolve.
Proposed CPPA regulations target automated decision-making, enhancing transparency and consumer rights. Businesses must notify consumers about AI use and provide opt-out options, ensuring ethical AI integration in significant decision-making processes. How should they prepare to navigate these new regulations and create new AI governance plans?
In January 2025, New Jersey released new guidance detailing laws on algorithmic discrimination. The new rules outline several ways the use of AI can constitute discrimination and thus will affect employers using such tools for decision-making. The guidance also highlights key steps businesses must take to mitigate risks in this evolving technological and legal landscape.
Finding the Intersection of Law and Society
How is artificial intelligence reshaping modern workplaces? In this episode of the Responsible AI podcast, cybersecurity expert Ted Alben discusses AI's impact on business and the need for human oversight. The conversation covers the arrival of ChatGPT and the evolving landscape of workplace technology and security challenges.
How can AI be used responsibly in litigation? On The Legal Department podcast, Stacy Bratcher speaks with Alex Alben about ensuring ethical AI use, identifying AI-generated content, and mining legal data for better outcomes. Don’t miss this episode exploring the future of AI-driven lawyering and its impact on litigation.
President Trump has rescinded Biden's AI Executive Order, leaving a policy vacuum. What will replace it? Alex Alben analyzes the implications of this decision and poses five key questions that will shape the future of AI regulation in the United States.
What happens when cutting-edge AI meets centuries-old legal principles? In this episode of Responsible AI, Rebecca Staffel interviews AI Forum founder Alex Alben about large language models’ ability to analyze legal patterns, tackle bias, and predict case outcomes. Explore how AI is reshaping copyright law and courtroom decisions in this thought-provoking discussion.
Hear more about how AI Forum founder Alex Alben’s legal career focusing on media and intellectual property has led him to found, together with Patrick Ip, the new company Theo.AI. Alex joined Matt Cohen on the podcast Tank Talks to discuss Theo.AI’s mission to predict legal case outcomes, the impact of AI on law, and the ethical challenges AI poses.
AI Forum Contributors
Alex Alben, Director
Nicole Alonso, Syro
David Alpert, Covington & Burling
Chris Caren, TurnItIn
William Covington, UW Law
Tiffany Georgievski, Sony
Bill Kehoe, State of WA
Ashwini Rao, Eydle
Linden Rhoads, UW Regent
Sheri Sawyer, State of WA
Eugene Volokh, UCLA Law