The AI Forum promotes thoughtful discussion of the legal and societal challenges posed by AI.
Responsible AI in the News
Mississippi has partnered with tech giant Nvidia to launch an AI education initiative across state colleges and universities. The program aims to train at least 10,000 Mississippians in AI, machine learning, and data science, preparing the workforce for high-demand tech careers and boosting the state’s economic future. Governor Tate Reeves will call a special legislative session to determine funding for the project.
Congress faces mounting backlash over a proposed decade-long ban on state AI regulation, embedded in a sweeping budget bill. Some argue the pause is necessary to preserve American competitiveness by preventing a disorganized patchwork of state laws. However, lawmakers, advocacy groups, and state officials warn the moratorium would strip states of consumer protections, leaving AI oversight solely to Congress, despite its track record of legislative gridlock and Big Tech lobbying.
The European Commission has launched a six-week public consultation on implementing rules for high-risk AI systems under the EU AI Act. The consultation has surfaced debates over compliance burdens, requests from American tech giants to simplify the code, worries about European competitiveness in innovation, and expert concerns about transparency and downstream obligations for high-risk AI applications.
Tech and music leaders urged Congress to pass the NO FAKES Act, a bipartisan bill protecting Americans from AI-generated deepfakes that steal voices and likenesses. The proposed law would build upon the Take It Down Act, creating federal safeguards, holding platforms accountable, and empowering victims to quickly remove unauthorized digital replicas, addressing urgent privacy and creative-rights concerns.
In February 2024, a teenager shot himself after months of conversations with a chatbot whose last message asked him to “come home to me as soon as possible.” A federal judge has now rejected AI firm Character.AI’s claim that its chatbot’s outputs are protected free speech, allowing a wrongful death lawsuit to proceed. The case will likely set a major precedent for AI liability and regulation as technology outpaces legal safeguards.
The Chicago Sun-Times and Philadelphia Inquirer syndicated a 50-page AI-generated “summer guide” filled with fake books, dubious tips, and non-existent individuals, exposing systemic editorial failures. The freelancer behind “Heat Index,” distributed by King Features, admitted using unchecked AI content, highlighting journalism’s crisis as financial strain and AI “slop” erode trust in legacy media. While relatively benign, the incident points to the problematic AI future we could be heading toward.
As AI integration accelerates, state attorneys general leverage existing privacy, consumer protection, and anti-discrimination laws to address AI risks like deepfakes, bias, and fraud. California, Massachusetts, Oregon, New Jersey, and Texas lead enforcement, signaling heightened scrutiny for businesses deploying AI systems without robust compliance safeguards.
The viral “AI Barbie” trend, where users create Barbie-themed AI images, raises thorny legal issues from copyright and trademark infringement to data privacy and regulatory uncertainty. As generative AI collides with pop culture, creators and developers must navigate a shifting legal landscape to avoid infringing on intellectual property and exposing themselves and others to cybersecurity risks.
Legal experts and judges agree that AI can help process information and streamline judicial tasks, but it lacks the reasoning, empathy, and moral judgment needed for true adjudication. While courts experiment with AI in administrative roles, the consensus is that human judges remain essential, at least for now.
Finding the Intersection of Law and Society
What if AI could diagnose illness before symptoms appear—but the knowledge it needs still lives on paper? In this episode of the Responsible AI podcast, Michael Cockrill unpacks the promises and pitfalls of AI in healthcare, government services, and education.
Personalization is a major benefit of AI, and tutoring programs can put it to work for students by tailoring instruction to each learner’s gaps and needs. Recent studies suggest students prefer a combination of human tutors and personalized AI learning tracks, though more research is needed. Andrew Rosston digs into the challenges of AI in tutoring.
For students, AI is both an aid to plagiarism and a guard against it. As these tools become more ubiquitous, schools and individual educators must develop nuanced policies for AI usage. In this second article in his series, Andrew Rosston examines the future of AI as an aid to student writing.
What are the limitations of AI in education? In this first article in a series, Andrew Rosston explores the use of AI as a personalized learning aid for students. He offers guidance around the benefits and boundaries for teachers as they implement AI in student learning.
While we might joke about the impending robot takeover, the humans are still responsible for how AI is deployed and regulated. Alex and Ted Alben explore the promise, pitfalls, and policy challenges of AI in society today and share a few conclusions they’ve come to about where humans should be paying most attention. Spoiler: the humans aren’t dead, but we do need to stay alert.
AI Forum Contributors
Alex Alben, Director
Nicole Alonso, Syro
David Alpert, Covington & Burling
Chris Caren, Turnitin
William Covington, UW Law
Tiffany Georgievski, Sony
Bill Kehoe, State of WA
Ashwini Rao, Eydle
Linden Rhoads, UW Regent
Sheri Sawyer, State of WA
Eugene Volokh, UCLA Law
Responsible AI Shorts
Hear more conversations about responsible AI on our YouTube channel or explore our podcast.