Opinion: Five Key Questions on President Trump's Decision to Rescind the Biden Administration's AI Executive Order

The Trump Administration’s first-week rescission of the Biden Administration’s AI Executive Order (EO) raises critical questions for policymakers, industry leaders, and the legal community. While some may view this decision as a step backward in AI governance, others may see it as an opportunity to craft a more effective regulatory approach. Rather than assume the nullification is inherently good or bad, we should consider how to build a coherent, practical AI policy framework.

Here are five essential questions we should be asking: 

1. What Should Replace the Biden AI Executive Order? 

The Biden EO established guidelines for AI safety, transparency, and risk mitigation, but much of it was non-binding and dependent on voluntary compliance from AI developers. Without this EO, what regulatory approach will take its place? Will the Trump Administration opt for a deregulatory stance, or will it introduce alternative policies that better align with industry needs while still ensuring accountability? 

Any new framework must balance AI innovation with key concerns: preventing bias, protecting privacy, and ensuring responsible AI deployment in government and critical industries. The absence of the Biden EO should not mean the absence of AI oversight or strong policy development. The new president was certainly entitled to nullify the previous administration’s largely advisory EO, but should not leave a policy vacuum in its wake. 

2. How Can the Federal Government Provide Clearer AI Guidance to Industry? 

One of the major critiques of the Biden EO was its reliance on aspirational language rather than enforceable policies. While it emphasized best practices, it left ambiguity in areas such as AI-generated content labeling, government procurement of AI tools, and data protection. In short, the EO, though the longest ever issued by a U.S. president, was vague.

If rescinding the EO means moving toward clearer, more predictable AI regulations, that could be a positive step—provided that the federal government engages meaningfully with industry leaders, legal experts, and civil society to establish workable policies. The goal should be practical guidance that companies can follow, rather than vague principles that lack enforcement mechanisms. 

3. What Will Be the Role of the States in AI Regulation? 

AI policy is increasingly being shaped at the state level, with California, New York, and Texas recently passing laws addressing AI safety, deepfakes, and consumer protection. The Biden EO attempted to establish federal leadership on AI governance, but its revocation may leave an even larger regulatory vacuum. 

Without federal coordination, are we heading toward a patchwork of conflicting state laws? Will major AI companies choose to follow the strictest state regulations by default, creating a de facto national standard? If the Trump Administration does not put forward an alternative AI policy, the states may fill the void—but with potentially inconsistent results. 

4. How Will This Impact AI Deployment in Government and National Security? 

The Biden EO directed federal agencies to develop AI risk assessments and implement standards for AI use in critical areas such as defense, cybersecurity, and healthcare. If these guidelines are rescinded, what happens next? 

Will federal agencies have discretion to develop their own AI policies, or will there be a unified national approach? Given that AI is already playing a role in military applications, intelligence gathering, and law enforcement, the federal government cannot afford to be directionless in this space. This is a critical national security concern. If rescinding the EO means eliminating bureaucratic red tape, that could be a win for efficiency—but only if it’s replaced by a thoughtful, strategic framework. 

5. What Is the U.S. Strategy for Competing in the Global AI Race? 

The European Union, China, and other major economies are aggressively moving forward with AI governance frameworks. The Biden EO, while imperfect, at least signaled that the U.S. was taking AI regulation seriously. Without it, does the U.S. risk falling behind in the global AI policy race? 

China’s AI regulations emphasize state control, while the EU’s AI Act focuses on consumer protection and risk-based oversight. If the U.S. pursues a laissez-faire approach, will it be an advantage for innovation, or will it create regulatory uncertainty that harms long-term competitiveness? What strategy will the Trump Administration adopt to ensure that the U.S. remains a leader in AI while maintaining public trust and ethical safeguards? 

Conclusion: The Need for a Proactive Approach 

Rescinding the Biden AI Executive Order is not necessarily a bad thing, but it must be accompanied by a well-defined strategy to address the risks and opportunities of AI. If the goal is to reduce unnecessary bureaucracy, promote innovation, and ensure AI remains a tool for economic growth, then the next step should be crafting a policy framework that achieves those objectives while still addressing privacy, bias, and security concerns. 

The AI revolution is not waiting for Washington to decide its regulatory future. Whether through executive action, congressional legislation, or industry self-regulation, the U.S. must chart a clear path forward—one that fosters AI development while ensuring it serves society’s best interests.

Alex Alben teaches Privacy and Internet Law at the UCLA School of Law and is the author of "Analog Days: How Technology Rewrote Our Future."
