Global AI Policy Trends: Safety, Regulation & Compliance in a Divided World

Explore global AI regulation—from the EU AI Act to U.S. federal guidance and China’s strict controls. Understand trends in AI safety, compliance, and risk.

Dean Taylor

12/16/2024

As AI rapidly evolves, governments around the world are racing to regulate it—balancing innovation with public safety, ethics, and competitiveness. This overview examines international frameworks like the EU AI Act, China’s state-controlled approach, and emerging U.S. efforts. Whether you’re a legal professional, compliance officer, or policy strategist, understanding this fractured landscape is essential for navigating the future of AI law.

Do We Really Need AI Regulation?

Like the Internet itself, and like any new, widely adopted technology, AI brings unprecedented opportunities and risks. In applications ranging from healthcare and law to transportation, however, AI's risks differ in kind: a single AI system can affect so many people at once that legislatures are weighing whether to regulate AI preemptively. AI also raises ethical concerns, including privacy violations, algorithmic bias, and malicious misuse such as deepfakes and cyberattacks.

Governments worldwide are grappling with these challenges, trying in different ways to create frameworks that encourage innovation while safeguarding fundamental rights and societal values. As one would expect, however, different countries and cultures have different regulatory priorities. For people and companies operating in the global legal environment, the result is a fragmented regulatory landscape.

International Efforts Toward AI Governance

United Nations and Global Partnerships

The United Nations (UN) and organizations like the OECD have proposed guiding principles for trustworthy AI. The UN recently introduced a draft resolution encouraging national regulatory frameworks aligned with global standards. Similarly, the OECD's AI Principles advocate for transparency, accountability, and ethical AI use. As with any international agreement, the key question is always: who decides? Who decides whether to prioritize innovation (likely a key focus of developed countries) or societal protection (likely a higher concern of less developed countries)? And which countries, feeling they are behind in AI development, may want to influence international regulations for competitive purposes while using societal protection as a pretext?

Global AI Safety Summit

In 2023, the UK hosted the inaugural Global AI Safety Summit, emphasizing responsible AI development. The event highlighted the need for international collaboration to address risks associated with generative AI and other advanced technologies. Again, as noted above, countries that are less advanced in AI sophistication or development obviously have an interest in trying to catch up. One aspect of catching up is slowing down the countries that have the resources to go full speed ahead.

Approaches to AI Regulation

European Union: The AI Act

The European Union (EU) enacted the comprehensive AI Act, which classifies AI systems by risk level (minimal, limited, high, and unacceptable). The Act bans certain practices, such as social scoring and real-time biometric surveillance, while imposing stringent requirements on high-risk applications in healthcare, education, and law enforcement. I have already written several posts discussing various aspects of the EU AI Act, with more anticipated as its application becomes real for companies and begins generating case law.

United States

The United States lacks a unified federal AI law, relying instead on sector-specific regulations and state initiatives. Recent developments include the Biden Administration’s Blueprint for an AI Bill of Rights and proposed legislation to establish federal oversight of high-risk AI systems. The National Security Memorandum on AI, published in October 2024, is the latest federal guidance on AI development. Notably, the memorandum mentions concerns about risks but emphasizes the desire to stay competitively ahead of potential adversaries.

China: State-Controlled Development

China emphasizes state control and ethical guidelines, requiring domestic companies to store user data locally and comply with national standards. Its 2023 “Interim Measures for Generative AI” illustrate a hands-on approach to managing emerging technologies. This is unsurprising given China’s habit of controlling all the potentially explosive parts of its economy.

Other Nations

Canada: Introduced the Artificial Intelligence and Data Act (AIDA) to establish federal AI governance.

Australia: Relies on voluntary AI ethics principles, with ongoing discussions about a national AI strategy.

India: Focuses on sector-specific initiatives in finance and healthcare.

Japan: Balances soft law with plans for targeted legislation addressing specific risks.

Emerging Themes in AI Regulation

Transparency and Accountability

Transparency in AI decision-making processes is a recurring theme in regulatory frameworks. Many jurisdictions require organizations to disclose AI usage and ensure that decisions made by AI are explainable. The developers of so-called frontier models (such as OpenAI, Meta, and Anthropic) are likely to be unwilling, or simply unable, to provide the level of transparency these laws envision. Their models are increasingly large and complex, easily escaping the ability of even teams of developers to completely document their operation.

Bias and Fairness

Addressing algorithmic bias remains a critical challenge. Regulations in the EU and US emphasize fairness, with laws mandating audits of AI systems for discriminatory practices. Of course, no one wants any tool, AI-powered or otherwise, to intentionally or unintentionally generate discriminatory results. However, even the most carefully crafted and tested model will likely exhibit some bias in some direction. In addition, users can fine-tune (read: change) the operation of these models once released, further complicating the assessment of liability when things go awry.

Challenges Ahead

The “pacing problem”—the lag between rapid technological advancement and the creation of corresponding regulations—remains a significant hurdle. The law, as we all know, is notoriously slow to change. This pace has its benefits. Slower changes in the law mean less upheaval in society as new regulations are applied and others are repealed. Additionally, balancing innovation with public safety requires nuanced policies that adapt to AI’s evolving capabilities.

From now on, governments will be required to navigate competing interests, including those of technology developers, civil rights advocates, and the public.