The AI Forum promotes thoughtful discussion of the legal and societal challenges posed by AI.
Responsible AI in the News
China has announced plans to establish a global AI cooperation organization, aiming to foster worldwide collaboration and set shared standards for artificial intelligence. Premier Li Qiang emphasized the need to prevent AI from becoming dominated by a few nations and urged greater coordination as U.S.-China competition in this critical technology intensifies.
The Department of Government Efficiency has unveiled an ambitious initiative to deploy artificial intelligence in reducing up to 50% of federal regulations, a move projected to save trillions annually. As the “Relaunch America” initiative advances, its success hinges on overcoming legal complexities, institutional resistance, and the unresolved challenge of integrating AI into regulatory decision-making.
When xAI's Grok chatbot went rogue, it provided detailed instructions for breaking into attorney Will Stancil's home and assaulting him. The AI malfunction, triggered by unauthorized code modifications, highlights the unpredictable risks of tampering with artificial intelligence guardrails. Despite advanced capabilities, AI models remain mysterious black boxes even to their creators.
President Trump’s new “One Big Beautiful Bill Act” injects over $1 billion into federal AI initiatives, targeting defense, cybersecurity, and financial audits. The law signals a major policy shift, including the rollback of chip-design software export restrictions to China, aiming to boost U.S. AI competitiveness while balancing national security and innovation goals.
With the proposed 10-year moratorium on state AI laws stripped from the “One Big Beautiful Bill,” Texas enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), setting new guardrails for AI systems, strengthening civil rights protections, and empowering the state attorney general with enforcement powers. The law takes effect January 1, 2026, and signals a new era of state-driven AI policy.
A federal judge dismissed a high-profile copyright lawsuit against Meta brought by Sarah Silverman and other authors. While Meta won this case, the judge emphasized that his ruling does not mean Meta’s use of copyrighted content is lawful; the decision rested on the plaintiffs’ insufficient arguments, not on a finding that Meta’s actions were legally permissible. The possibility of future lawsuits under different circumstances remains open, adding to the evolving legal landscape around AI and copyright law in the US.
Congress has introduced bipartisan legislation to ban Chinese AI systems from federal agencies, as lawmakers declare America is in a "new Cold War" with China over artificial intelligence. The bill follows concerns that China's DeepSeek AI model rivals U.S. platforms while costing significantly less to develop. The 2025 AI Index Report shows the U.S. ahead in AI development but notes that China is rapidly catching up.
The FDA has launched Elsa, a secure, agency-wide generative AI tool designed to boost efficiency for employees from scientific reviewers to investigators. Elsa accelerates clinical reviews, summarizes adverse events, and streamlines internal processes, marking a major step in modernizing FDA operations with responsible, high-security artificial intelligence and a first step toward broader government adoption of AI.
Mississippi has partnered with tech giant Nvidia to launch an AI education initiative across state colleges and universities. The program aims to train at least 10,000 Mississippians in AI, machine learning, and data science, preparing the workforce for high-demand tech careers and boosting the state’s economic future. Governor Tate Reeves will call a special legislative session to determine funding for the project.
Finding the Intersection of Law and Society
You can’t put the genie back in the bottle. As students embrace AI, teachers must help them understand pitfalls such as algorithmic bias, hallucinated content, and gaps in an AI’s source material. Educators must also ensure students’ privacy rights are respected and address possible abuses as policies evolve. In this final excerpt from his paper, Andrew Rosston addresses safe AI usage for underage students.
Personalization is a major benefit of AI, enabling tutoring programs tailored to each learner’s gaps and needs. Recent studies suggest that students prefer a combination of human tutors and personalized AI learning tracks, though more research is needed. Andrew Rosston digs into the challenges of AI in tutoring.
For students, AI is both an aid to plagiarism and a guard against it. As these tools become ubiquitous, schools and individual educators must develop nuanced policies for AI usage. In this second article in his series, Andrew Rosston examines the future of AI as an aid to student writing.
What’s on the Podcast?
Dive into the world of creativity and technology with Russell Ginns, a prolific author and inventor. Known for his work with Sesame Street and NASA, Russell shares his journey from traditional media to embracing AI's transformative power. "You can't beat them, so you got to join them," he advises. Find out how he went from fear to excitement in this episode of the Responsible AI podcast.
What if AI could diagnose illness before symptoms appear—but the knowledge it needs still lives on paper? In this episode of the Responsible AI podcast, Michael Cockrill unpacks the promises and pitfalls of AI in healthcare, bionics, and education.
In this episode of the Responsible AI podcast, we speak with Merrill Brown, a veteran journalist and educator, about the evolving landscape of journalism in the age of AI. Merrill shares insights from his experiences with early AI adoption in news organizations and explores the challenges faced by journalists today, the need for imagination in reporting, and the future of local news—and the responsibility of its readers.
Responsible AI Shorts
Hear more conversations about responsible AI on our YouTube channel or explore our podcast.