The AI Forum promotes thoughtful discussion of the legal and societal challenges posed by AI.
Responsible AI in the News
Legal experts and judges agree that AI can help process information and streamline judicial tasks, but it lacks the reasoning, empathy, and moral judgment needed for true adjudication. While courts experiment with AI in administrative roles, the consensus is that human judges remain essential, at least for now.
A federal judicial panel has advanced a draft rule to regulate AI-generated evidence in court, seeking public input to ensure such evidence meets reliability standards. The move reflects growing pressure for regulations to keep pace with the judiciary’s need to adapt to rapid technological change, balanced with concern about the trustworthiness of machine-generated information.
The California State Bar faces widespread criticism after using AI to generate bar exam questions. Concerns about conflicts of interest, questionable test writing, and the viability of online testing have arisen amid a financial crisis. Now confronting lawsuits and an audit, the Bar illustrates the risks and accountability challenges that arise as legal institutions experiment with AI in high-stakes, high-impact settings.
President Trump’s latest executive order pushes AI education nationwide, prioritizing AI in federal grant programs and establishing a White House task force. While questions remain about how schools should responsibly integrate AI into the classroom, the order sets the stage for new legal frameworks and public-private partnerships in AI policy and compliance.
In a historic move, the U.S. House and Senate passed the Take It Down Act, the nation’s first major law directly addressing AI-induced harms such as deepfakes. The bipartisan bill garnered support from tech companies like Meta and Snapchat and aims to give victims of malicious AI-generated content a legal path to demand removal and seek damages. Despite concerns over its enforceability and potential for abuse, this landmark bill signals a new era for AI accountability.
The U.S. Office of Management and Budget recently released sweeping new guidelines for how federal agencies buy and use AI, aiming to spur innovation and cut red tape. These policies could ripple out to the private sector, reshaping how companies approach responsible AI governance and compliance.
China’s embrace of open-source AI may prove vulnerable as regulatory pressures mount. Once seen as a pathway to innovation, the approach now faces government scrutiny that could stifle development. Will China’s AI ambitions clash with its tightening controls? Discover how shifting policies may reshape the nation’s, and the world’s, AI landscape.
The Tony Blair Institute's recent report advocates for the UK to relax copyright laws, enabling AI firms to use protected materials without explicit permission. The proposal has ignited backlash from the creative industries, which fear exploitation and economic harm. Governments must decide whether to prioritize advancement in the fiercely competitive AI race or the protection of creative works by human authors.
A U.S. District Court judge allowed The New York Times' lawsuit alleging copyright infringement by OpenAI and Microsoft to proceed, marking a significant moment in the intersection of AI and intellectual property law. The final ruling is likely to draw upon the U.S. Copyright Office's Report on Copyright and Artificial Intelligence and hinge upon the nature of the machine learning used by OpenAI.
Finding the Intersection of Law and Society
While we might joke about the impending robot takeover, humans are still responsible for how AI is deployed and regulated. Alex and Ted Alben explore the promise, pitfalls, and policy challenges of AI in society today and share a few conclusions they’ve reached about where humans should be paying the most attention. Spoiler: the humans aren’t dead, but we do need to stay alert.
How does 2025 remind Merrill Brown of 1995? And how can journalists use AI responsibly and efficiently? In this episode of the Responsible AI podcast, we speak with the trailblazing journalist and advisor about the future of content automation and local journalism and how the two can work together.
How is artificial intelligence reshaping modern workplaces? In this episode of the Responsible AI podcast, cybersecurity expert Ted Alben discusses AI's impact on business and the need for human oversight. We discuss the arrival of ChatGPT and the evolving landscape of workplace technology and security challenges.
How can AI be used responsibly in litigation? On The Legal Department podcast, Stacy Bratcher speaks with Alex Alben about ensuring ethical AI use, identifying AI-generated content, and mining legal data for better outcomes. Don’t miss this episode exploring the future of AI-driven lawyering and its impact on litigation.
President Trump has rescinded Biden's AI Executive Order, leaving a policy vacuum. What will replace it? Alex Alben analyzes the implications of this decision and poses five key questions that will shape the future of AI regulation in the United States.
AI Forum Contributors
Alex Alben, Director
Nicole Alonso, Syro
David Alpert, Covington & Burling
Chris Caren, Turnitin
William Covington, UW Law
Tiffany Georgievski, Sony
Bill Kehoe, State of WA
Ashwini Rao, Eydle
Linden Rhoads, UW Regent
Sheri Sawyer, State of WA
Eugene Volokh, UCLA Law