The AI Forum promotes thoughtful discussion of the legal and societal challenges posed by AI.
Responsible AI in the News
President Trump’s new “One Big Beautiful Bill Act” injects over $1 billion into federal AI initiatives, targeting defense, cybersecurity, and financial audits. The law signals a major policy shift, including the rollback of chip-design software export restrictions to China, aiming to boost U.S. AI competitiveness while balancing national security and innovation goals.
After the 10-year AI moratorium was removed from the “One Big Beautiful Bill,” Texas enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), setting new guardrails for AI systems, strengthening civil rights protections, and granting the state attorney general enforcement powers. The law takes effect January 1, 2026, and signals a new era of state-driven AI policy.
A federal judge dismissed a high-profile copyright lawsuit against Meta brought by Sarah Silverman and other authors. While Meta won this case, the judge emphasized that his ruling rested on the plaintiffs’ insufficient arguments, not on a finding that Meta’s use of copyrighted content was lawful. The possibility of future lawsuits under different circumstances remains open, adding to the evolving legal landscape around AI and copyright law in the U.S.
Congress has introduced bipartisan legislation to ban Chinese AI systems from federal agencies, as lawmakers declare America is in a "new Cold War" with China over artificial intelligence. The bill follows concerns that China's DeepSeek AI model rivals U.S. platforms while costing significantly less to develop. The 2025 AI Index Report shows the U.S. ahead in AI development but notes that China is rapidly closing the gap.
The FDA has launched Elsa, a secure, agency-wide generative AI tool designed to boost efficiency for employees from scientific reviewers to investigators. Elsa accelerates clinical reviews, summarizes adverse events, and streamlines internal processes, marking a major step in modernizing FDA operations with responsible, high-security artificial intelligence and an early move toward broader adoption of AI across the federal government.
Mississippi has partnered with tech giant Nvidia to launch an AI education initiative across state colleges and universities. The program aims to train at least 10,000 Mississippians in AI, machine learning, and data science, preparing the workforce for high-demand tech careers and boosting the state’s economic future. Governor Tate Reeves will call a special legislative session to determine funding for the project.
Congress faces mounting backlash over a proposed decade-long ban on state AI regulation, embedded in a sweeping budget bill. Some argue the pause is necessary to preserve American competitiveness by preventing a disorganized patchwork of state laws. However, lawmakers, advocacy groups, and state officials warn the moratorium would strip states of consumer protections, leaving AI oversight solely to Congress, despite its track record of legislative gridlock and Big Tech lobbying.
The European Commission has launched a six-week public consultation on implementing rules for high-risk AI systems under the EU AI Act. Amid the discussions are debates over compliance burdens, requests from American tech giants to simplify the code, worries over European innovation competitiveness, and expert concerns about transparency and downstream obligations for high-risk AI applications.
Tech and music leaders urged Congress to pass the No Fakes Act, a bipartisan bill protecting Americans from AI-generated deepfakes that steal voices and likenesses. The proposed law would build upon the Take It Down Act, creating federal safeguards, holding platforms accountable, and empowering victims to remove unauthorized digital replicas quickly, addressing urgent privacy and creative rights concerns.
Finding the Intersection of Law and Society
You can’t put the genie back in the bottle. As students embrace AI, teachers must help them understand its pitfalls, including algorithmic bias, hallucinated output, and gaps in a model’s source material. Educators must also ensure students’ privacy rights are respected and address potential abuses as policies evolve. In this final excerpt from his paper, Andrew Rosston addresses safe AI usage for underage students.
Personalization is a major strength of AI, enabling tutoring programs tailored to each learner’s gaps and needs. Recent studies suggest that students prefer a combination of human tutors and personalized AI learning tracks, though more research is needed. Andrew Rosston digs into the challenges of AI in tutoring.
For students, AI is both an aid to plagiarism and a guard against it. As these tools become more ubiquitous, schools and individual educators must develop nuanced policies for AI usage. In this second article in his series, Andrew Rosston examines the future of AI as an aid to student writing.
What’s on the Podcast?
What if AI could diagnose illness before symptoms appear—but the knowledge it needs still lives on paper? In this episode of the Responsible AI podcast, Michael Cockrill unpacks the promises and pitfalls of AI in healthcare, bionics, and education.
In this episode of the Responsible AI podcast, we speak with Merrill Brown, a veteran journalist and educator, about the evolving landscape of journalism in the age of AI. Merrill shares insights from his experiences with early AI adoption in news organizations and explores the challenges faced by journalists today, the need for imagination in reporting, the future of local news, and the responsibility of its readers.
How is artificial intelligence reshaping modern workplaces? In this episode of the Responsible AI podcast, cybersecurity expert Ted Alben discusses AI's impact on business and the need for human oversight. We discuss the arrival of ChatGPT and the evolving landscape of workplace technology and security challenges.
Responsible AI Shorts
Hear more conversations about responsible AI on our YouTube channel or explore our podcast.