The AI Forum promotes thoughtful discussion of the legal and societal challenges posed by AI.
Responsible AI in the News
Ropes & Gray’s innovative AI program lets associates dedicate up to 20% of their required billable hours to AI training and simulations, as law firms reimagine traditional billing models. The experiment highlights how AI is nudging the legal industry toward more flexible workflows, promising greater efficiency and skill development for junior lawyers.
Europe’s landmark AI Act faces delays after heavy lobbying from U.S. tech giants and the Trump administration, with the European Commission considering a one-year pause on enforcement and softer compliance measures. This marks a dramatic shift in EU regulatory ambitions and reflects fears that strict rules could hinder innovation and global competitiveness in AI.
Faced with backlog and quality challenges at the United States Patent and Trademark Office (USPTO), a new approach dubbed AI-First Triage proposes using generative AI to perform early prior-art searches, engage applicants sooner, and reshape the examination queue. This article unpacks how the shift could cut pendency and raise patent quality, and outlines the risks that temper the promise.
Italy just passed its first national AI law, setting a bold precedent in Europe. The new rules, effective October 2025, introduce sector-specific safeguards, a national AI strategy, and new provisions for healthcare, employment, and copyright. Businesses and professionals must now adapt to dual compliance amid evolving regulations and heightened government oversight.
California Governor Gavin Newsom has signed a landmark AI safety law requiring advanced AI companies to publicly disclose their safety measures and safeguard whistleblowers. This move positions California as a leader in AI regulation, sparking debates over innovation versus consumer protection, and prompting scrutiny of state legislative authority within a rapidly evolving technology sector.
Eve, a San Francisco-based legal AI startup, has reached a $1 billion valuation after raising $103 million in new funding. The firm specializes in AI-driven tools for plaintiffs’ law firms, reflecting accelerated demand for automation in legal practice as more firms turn to generative AI to streamline case work and document analysis.
The FTC has launched a sweeping inquiry into seven major tech firms, including Meta, OpenAI, and Alphabet, demanding details on how their AI chatbots protect children and teens. The investigation seeks to uncover companies’ practices for monitoring safety, disclosing risks, and complying with privacy laws amid rising concern about AI-powered companion bots’ impacts on youth.
Anthropic just became the first major AI company to endorse California’s groundbreaking SB 53 bill, which would require unprecedented transparency, reporting, and safety standards for cutting-edge AI developers. If passed, the legislation promises to reshape how AI risks are managed—not just in California, but potentially nationwide. The final vote is imminent.
Anthropic has reached a $1.5 billion settlement in a landmark copyright dispute over its use of pirated and lawfully acquired books to train AI models. The deal sets a precedent for the industry, underscoring the need for robust data governance, licensing, and legal compliance for AI developers and enterprise users navigating evolving copyright rules.
What’s on the Podcast?
Join Dr. Avijit Ghosh as he challenges the AI industry's focus on chatbots, arguing that “AGI should not be the North Star for AI development.” Explore how AI's current trajectory impacts human agency and why he thinks that open-source models are crucial for real progress.
How do history’s lessons shape AI’s future? Is history dangerous? Explore Seattle's legacy of innovation with Leonard Garfield, Executive Director of the Museum of History and Industry. All this and more in our latest Responsible AI Podcast episode.
Join former Washington State Governor Jay Inslee on the Responsible AI podcast as he delves into technology regulation, clean energy, and the challenges of misinformation. He shares his insights on fostering innovation and what he considers more dangerous to politics than AI. Tune in for a thought-provoking conversation on shaping a sustainable future.
Finding the Intersection of Law and Society
Can we truly trust AI if we don't understand it? The "black box paradox" in AI poses a significant challenge to transparency, making it impossible to fully grasp decision-making processes. Samy Yacoubi delves into the problem of demanding AI transparency while acknowledging the inherent opacity of complex algorithms and explores whether there are alternative approaches.
You can’t put the genie back in the bottle. As students embrace AI, teachers must help them recognize pitfalls such as algorithmic bias, hallucinated output, and gaps in a model’s source material. Educators must also ensure that students’ privacy rights are respected and address possible abuses as policies evolve. In this final excerpt from his paper, Andrew Rosston addresses safe AI usage for underage students.
Personalization is a major benefit of AI and can work to students’ advantage through tutoring programs tailored to each learner’s gaps and needs. Recent studies suggest that students prefer a combined approach of human tutors and personalized AI learning tracks, though more research is needed. Andrew Rosston digs into the challenges of AI in tutoring.
Responsible AI Shorts
Hear more conversations about responsible AI on our YouTube channel or explore our podcast.