US AI risk regulation and compliance explained: what the fragmented legal landscape means for businesses 

For teams who follow AI policy in the United States, the absence of an American equivalent to the EU AI Act is easy to misunderstand. Many readers assume it signals hesitation or a light-touch approach. From a distance, the US model can look unclear, even permissive. 

That view gets the story wrong. 

Regulation is already here, just not in one place 

AI obligations in the US are emerging through laws and regulators that long pre-date the technology. Federal agencies such as the Federal Trade Commission, the Department of Justice, financial supervisors, and healthcare authorities are applying existing statutes to AI systems. When harm occurs, they do not wait for a bespoke AI law of the kind the EU has enacted. They use the rulebook they already have.1

At CoreStream GRC, we have been looking closely at what this means in practice for organizations operating partially or wholly in the US as AI becomes embedded in everyday business decisions. 

Why doesn’t the US have a single AI Act? 

The US has taken a deliberately fragmented approach to AI regulation. Rather than legislating a single framework, lawmakers and regulators have leaned on existing legal authorities. 

These include consumer protection, civil rights, financial regulation, healthcare oversight, competition law, and criminal enforcement. The underlying logic is not that AI-related harm is being ignored. It is that AI-enabled decision-making should be governed through the same legal duties that already apply to corporate conduct.2

In practice, this means AI is treated as a risk multiplier rather than a separate legal category. If an AI system produces discriminatory outcomes, unsafe products, misleading disclosures, or governance failures, regulators do not need a bespoke AI law to act. Existing statutes already provide enforcement routes. 

The result is not a regulatory vacuum. It is a distributed system of obligations. While this can be difficult for organizations to navigate, it reflects an intentional attempt to avoid duplicating compliance regimes and to anchor AI oversight within laws companies are already expected to follow. 

What does US AI regulation actually look like in practice? 

US regulators have been clear on one point. AI systems are not exempt from existing legal duties. 

If an AI-driven process leads to discrimination, consumer harm, unsafe outcomes, or control failures, enforcement can already follow. This is particularly visible in highly regulated sectors such as financial services, healthcare, and criminal justice, where regulators have consistently emphasized explainability, accountability, and meaningful human oversight.3

As a result, AI governance in the US is not defined by the presence or absence of AI-specific legislation. It is defined by how existing obligations are interpreted and enforced once AI systems are embedded into business processes. 

For compliance teams, the implication is uncomfortable but simple. Regulatory exposure depends less on how novel the technology is and more on whether the organization can demonstrate that AI risks were understood, owned, documented, and controlled. 

In practice, the NIST Artificial Intelligence Risk Management Framework has become the main reference point. Rather than requiring approval before deployment, it treats AI risk as an ongoing enterprise governance responsibility, emphasizing clear ownership, robust documentation, and continuous monitoring across design, deployment, and everyday use. 
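The framework describes process rather than technology, but its lifecycle framing can be made concrete. Below is a minimal sketch, in Python, of how an AI risk record might map onto the RMF’s four functions (Govern, Map, Measure, Manage). The class and field names are our own illustrative assumptions; NIST publishes no data model. 

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only. The NIST AI RMF defines four functions
# (Govern, Map, Measure, Manage) but prescribes no schema; every
# name below is an assumption made for this example.

@dataclass
class AIRiskRecord:
    system_name: str
    business_owner: str            # Govern: a named, accountable person
    intended_use: str              # Map: context and purpose of the system
    identified_risks: list[str]    # Map: harms considered before deployment
    metrics: dict[str, float]      # Measure: e.g. error or disparity rates
    controls: list[str]            # Manage: mitigations and human oversight
    last_reviewed: date            # Manage: evidence of ongoing monitoring
    review_notes: list[str] = field(default_factory=list)

    def is_stale(self, today: date, max_age_days: int = 90) -> bool:
        """Continuous monitoring implies reviews cannot lapse indefinitely."""
        return (today - self.last_reviewed).days > max_age_days
```

The point of such a record is less the code than the discipline it encodes: a named owner, documented risks, measurable checks, and a review date that cannot quietly slip. 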

Research suggests many US organizations are responding pragmatically. AI compliance is increasingly being treated like privacy or anti-money laundering: not a standalone project, but something embedded into product development, procurement, and operational controls as part of normal business governance.4 

Consider three everyday scenarios: 

  1. Recruitment software: An AI tool screens applicants and consistently downgrades women or minority candidates. In the EU, this would trigger AI-specific duties. In the US, it immediately engages Title VII and equal employment enforcement. The company, not the algorithm, carries the liability. 
  2. Health analytics: A predictive system recommends shorter consultations for patients with complex needs, leading to worse outcomes. US healthcare regulators can treat this as unsafe practice under existing patient protection laws and professional negligence standards. 
  3. Marketing content: A chatbot generates product descriptions that exaggerate performance. The FTC can call this misleading advertising without citing a single AI Act. 

These examples show why the absence of one statute does not equal absence of law. Oversight attaches to outcomes and organizational behavior. 


When “voluntary” AI standards become compliance benchmarks 

Nowhere is the de facto force of US AI regulation clearer than in the rise of “voluntary” standards, which are quietly becoming benchmarks for acceptable corporate conduct. 

The most important example is the NIST Artificial Intelligence Risk Management Framework. Designed as guidance rather than legislation, it has rapidly become a reference point adopted by organizations, regulators and policymakers for what reasonable and defensible AI risk management looks like in practice. 

This patchwork has been criticized as confusing for companies: 

“2026 will see a flood of state AI bills that creates a regulatory tower of babble, laws with different definitions, standards, and mandates.” 

Kevin Frazier, AI Innovation & Law Fellow at The University of Texas School of Law, to The National Law Review 

While frustrating, this pattern will be familiar to experienced risk leaders. Cybersecurity followed the same trajectory. What began as best-practice guidance gradually became the baseline against which organizations were judged following incidents, supervisory reviews, and enforcement actions. 

Evidence in governance literature suggests that AI is now following the same path. NIST’s emphasis on lifecycle oversight, accountability, and continuous monitoring closely mirrors the criteria regulators already use when assessing whether organizations acted reasonably. In practice, this means NIST is fast becoming the backbone of US AI risk expectations, regardless of its voluntary label.

Do risk leaders already see AI as a governance issue? 

Yes, evidence from risk leaders indicates that AI is now firmly embedded within enterprise risk discussions. Recent survey data from US chief risk officers shows how decisively AI has shifted from experimentation to enterprise risk consideration. 

In the 2026 CRO Outlook Survey, technology and cyber risk were cited as the top risk category by nearly 75% of respondents. AI was consistently described as amplifying fraud risk, third-party exposure, and operational vulnerabilities rather than existing as a standalone concern.

While more than 50% of surveyed institutions reported AI in production, governance maturity lagged behind adoption. Only a small minority described their AI governance and approval frameworks as highly developed. This gap between deployment and control is where regulatory and enforcement risk now concentrates. 

As several CROs noted, regulators are less interested in whether AI is innovative and more focused on whether its risks are understood, documented and actively managed. 

CoreStream GRC sits in the same place risk leaders are pointing to: where AI needs structure, evidence, and a steady human hand rather than blind faith in a vendor tool. 

“Using the power of our integration capability, we are looking to continue to be the backbone of enterprise governance, preserving client control over if and when they adopt AI.” 

Rich Eddolls, CoreStream GRC  

Where are companies most exposed to AI risk right now? 

Right now, the pressure points are simple and human: 

Unclear ownership

  • AI systems frequently cut across departments, leaving no single person willing to sign their name. 

Over-automation   

  • High-impact decisions are delegated to tools with only symbolic human checks. 

Third-party risk   

  • Vendors provide AI features, and corporations assume the supplier has taken the problem away. 

Weak documentation  

  • Teams cannot show how risks were assessed or why the system was considered safe. 

Jurisdictional sprawl  

  • Multinational firms run different standards in California, Texas, and New York, and governance becomes uneven. 

The biggest danger is not adoption of AI. It is adoption without a defensible governance trail.5
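One way to read these pressure points is as gaps in record-keeping. As a rough sketch, and our own construction rather than any regulatory template, a minimal append-only decision log would touch all five: 

```python
import json
from datetime import datetime, timezone

# Illustrative sketch only: an append-only governance-trail log.
# All field names are assumptions made for this example.

def log_ai_decision(path: str, *, system: str, owner: str,
                    vendor: str | None, human_reviewer: str,
                    jurisdiction: str, risk_assessment: str) -> None:
    """Append one governance-trail entry as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "owner": owner,                      # unclear ownership -> a named owner
        "human_reviewer": human_reviewer,    # over-automation -> a real sign-off
        "vendor": vendor,                    # third-party risk stays visible
        "risk_assessment": risk_assessment,  # weak documentation -> evidence
        "jurisdiction": jurisdiction,        # sprawl -> per-state visibility
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

However it is implemented, the test is the same: when a regulator asks why a decision was made, the answer should already exist. 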

The wider context: AI as a global story within risk and compliance 

European Union’s policy on AI compliance  

The European Union approach is repeatedly referenced as setting the temperature for global expectations. EU policy emphasizes data protection, accountability, and risk classification, particularly in sectors like healthcare. Because GDPR obligations already require explainability, validation, and lawful processing, AI systems operating in Europe must fit inside that structure. Multinational companies often design programs that meet EU standards first and then reuse them internationally.6

While often criticized by the US, the EU model has become influential beyond its borders. Even organizations with no physical presence in Europe borrow its language of rights, documentation, and categorized risk. That influence means that AI governance increasingly resembles existing compliance regimes focused on proving responsibility rather than simply announcing innovation. 

United Kingdom

Principles first, checklists last 

The United Kingdom model is best understood as a principles-based extension of governance thinking. UK authorities favor proportionality and sectoral oversight by established regulators. Guidance evolves case by case instead of relying on rigid statutory controls. This flexible system aligns closely with the NIST lifecycle approach and focuses on continuous risk management and regulator-led direction rather than static compliance lists.

Empirical research shows UK organizations responding with governance programs that mirror broader Western practice. Companies are encouraged to identify human oversight mechanisms and clear responsibility lines. The British stance reflects a belief that technology changes too fast for Parliament to micro-manage effectively. 

Beyond Western jurisdictions

Growing state oversight 

Outside Western jurisdictions, available evidence points to increasing state involvement in high-impact AI uses. Governance frameworks stress tighter supervision in sensitive sectors such as healthcare, infrastructure, and public administration.7

What this means for businesses everywhere 

While the US model differs from Europe and the United Kingdom, all three are circling the same idea. Regulators want proof of responsibility rather than glossy claims of innovation. Whether a company sits in Paris, London, or Chicago, the questions sound alike:  

  • Who owns the tool?  
  • How was the risk tested?  
  • What human oversight has occurred?  
  • Can the organization evidence its decisions? 

Corporations that treat AI as part of their core governance, risk and compliance operating model will be ready for the next phase of scrutiny. Those waiting for a single American AI Act to tell them what to do may find that the standards used to judge them were visible all along. 

Want to learn more about the CoreStream GRC approach to AI? 

FAQ on US AI Regulation  

Is AI actually regulated in the US, or not?

Yes. AI is already regulated in the US, just not through a single AI Act. Regulators apply existing laws on consumer protection, civil rights, healthcare, finance, and competition when AI systems cause harm.

How does US AI regulation compare to the EU and UK?

The EU relies on formal legislation and risk classification, the UK uses principles-based oversight, and the US relies on distributed enforcement through existing laws. Despite different structures, all three demand proof of responsibility, not just claims of innovation.

Why doesn’t the US have one AI law like the EU AI Act?

The absence of a single US AI Act is deliberate rather than accidental. US lawmakers and regulators have chosen to rely on existing legal authorities instead of creating a new, centralized framework. The logic is that AI does not change the underlying duties companies already owe to consumers, employees, patients, or markets. If AI-driven decisions cause harm, regulators believe existing statutes are sufficient to intervene without creating a parallel compliance regime.

How do US regulators enforce AI risks in practice?

US regulators focus on outcomes and organizational behavior rather than technical novelty. If an AI system produces discriminatory results, misleads consumers, or creates unsafe conditions, enforcement can follow immediately. Agencies such as the Federal Trade Commission act under consumer protection and advertising laws, while sector regulators in healthcare, finance, or employment enforce their own standards. The key point is that enforcement attaches to the company using the system, not the algorithm itself.

What matters most for AI compliance in the US?

What matters most is whether an organization can demonstrate responsible governance. Regulators care about who owns the AI system, how risks were assessed, what controls were put in place, and how decisions were monitored over time. The question is not whether AI was used, but whether its risks were understood, documented, and actively managed as part of normal business operations.

Why are “voluntary” AI standards starting to feel mandatory?

This pattern is familiar from cybersecurity and privacy. Voluntary frameworks often become the standards regulators look to after incidents occur. When something goes wrong, organizations are asked whether they followed recognized best practices. For AI, the NIST framework is quickly becoming that reference point. Its emphasis on lifecycle oversight, accountability, and continuous monitoring aligns closely with how regulators already assess corporate conduct.

Footnotes and further reading sources  

  1. General overview of US AI regulatory posture and distributed enforcement model, drawn from Davtyan, T. (2023) The U.S. Approach to AI Regulation: Federal Laws, Policies, and Strategies Explained (Journal of Law, Technology & the Internet).  ↩︎
  2. ‘Artificial intelligence governance in U.S. corporations: Legal and ethical implications’, International Journal of Publication and Reviews, 6(3), pp. 3083–3089. (2025)  ↩︎
  3. Osifowokan, A.S., Oghenerobowo, T., Agbadamasi, A.O., Adukpo, T.K. and Mensah, N. (2025) ‘Regulatory and legal challenges of artificial intelligence in the U.S. healthcare system: Liability, compliance, and patient safety’, World Journal of Advanced Research and Reviews. ↩︎
  4. ‘Artificial intelligence governance in U.S. corporations: Legal and ethical implications’, International Journal of Publication and Reviews, 6(3), pp. 3083–3089. (2025)  ↩︎
  5. ‘Artificial intelligence governance in U.S. corporations: Legal and ethical implications’, International Journal of Publication and Reviews, 6(3), pp. 3083–3089. (2025)  ↩︎
  6. Vidal, J., Smith, R. and Thompson, L. (2023) ‘Principles-based regulation of artificial intelligence in the United Kingdom: Governance, proportionality and regulatory discretion’, Journal of Law, Technology and Policy, 2023(2), pp. 145–168.  ↩︎
  7. Vidal, J., Smith, R. and Thompson, L. (2023) ‘Principles-based regulation of artificial intelligence in the United Kingdom: Governance, proportionality and regulatory discretion’, Journal of Law, Technology and Policy, 2023(2), pp. 145–168.  ↩︎