Mar 30, 2026

AI Laws: 2026 Guide to US and Global Regulation

AI laws and regulation in 2026 are a fast-moving puzzle. This guide is for AI builders, deployers, investors, and legal teams who need to navigate compliance and risk in a rapidly changing legal environment. If you build, deploy, or invest in artificial intelligence systems, you're now operating in a fragmented landscape where US federal actions, aggressive state statutes, and ambitious global frameworks like the EU AI Act all demand attention. This guide cuts through the noise to give you what actually matters.

Key Takeaways

- The US still has no comprehensive federal AI statute; a December 2025 executive order pushes a "minimally burdensome" national framework aimed at preempting state AI laws.
- Major state regimes bite in 2026: California's TFAIA and Texas's RAIGA on January 1, the Colorado AI Act on June 30.
- The EU AI Act reaches full application in August 2026, with fines up to EUR 35 million or 7% of global turnover and extraterritorial reach.
- Recurring themes across regimes: risk-based tiers, transparency, anti-discrimination, and incident reporting.
- Build flexible, framework-aligned governance around your highest-risk use cases instead of tracking every draft bill.

What Are “AI Laws” in 2026?

AI laws encompass any binding rules (statutes, regulations, or executive orders) that specifically target AI systems or heavily shape how AI technology can be developed, deployed, and used. Some jurisdictions have enacted AI-specific laws, while others address AI-related concerns through existing legal frameworks. Either way, binding rules are distinct from voluntary guidelines or ethics principles, though those often inform them.

The EU AI Act is the first comprehensive horizontal legal framework for the regulation of AI systems across EU Member States.

Understanding the landscape requires separating three categories:

| Category | Examples | Binding? |
| --- | --- | --- |
| AI-specific laws | EU AI Act, Colorado AI Act, California TFAIA | Yes |
| General laws applied to AI | Privacy statutes, consumer protection, anti-discrimination | Yes |
| Soft-law frameworks | NIST AI RMF, OECD AI Principles, company ethics codes | Usually no |

The 2026 AI regulation environment is characterized by fragmentation. The US has no single federal AI statute, so federal-versus-state tensions dominate. Meanwhile, the EU, US, and China operate under sharply different philosophies: the EU is prescriptive and risk-tiered, the US leans innovation-first, and China integrates AI rules with content and security policy.

For busy AI leaders (the core KeepSanity AI readership), the goal isn't memorizing every statute. It's understanding the main regimes, timelines, and risk themes they introduce, then building governance that can flex as clarity emerges.

US Federal AI Policy: Executive Orders and National Frameworks

The US still has no single comprehensive AI statute. In 2026, the federal landscape is driven by executive orders, policy plans, and existing agency powers rather than new federal legislation passed by Congress.

President Trump signed Executive Order 14365 on December 11, 2025, titled "Ensuring a National Policy Framework for Artificial Intelligence." Its core aim: maintaining American leadership in AI development via a minimally burdensome national policy framework that can preempt conflicting state laws.

This order rescinded the Biden administration's October 2023 AI executive order, which had emphasized safety testing and equity reporting for frontier AI models. The shift moves away from broad "AI safety" mandates toward economic policy and national security competitiveness, explicitly targeting "global AI dominance."

Complementing the executive order, America's AI Action Plan (published July 2025) outlines more than 90 federal initiatives across three pillars: accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security.

The earlier statutory foundation-the National Artificial Intelligence Initiative Act of 2020-still underpins federal R&D coordination, NIST standards work, and cross-agency collaboration in 2026. It’s the quiet backbone supporting efforts like the AI Safety Institute and ongoing AI RMF updates.

Trump’s 2025 Executive Order: Building a Uniform Federal AI Framework

This section breaks down the December 2025 Executive Order and its potential to disrupt state AI laws taking effect in 2026. If you're tracking compliance obligations, these are the mechanics that matter.

Core Policy Statement

The order commits the US to “global AI dominance” under a uniform federal policy framework that is explicitly “minimally burdensome.” It directly challenges what the Trump administration calls state-level “regulatory patchworks.”

AI Litigation Task Force

Within 30 days of December 11, 2025, the Department of Justice was required to create an AI Litigation Task Force. Its mandate: challenge state AI laws that the administration views as unconstitutional or preempted by federal policy, including by seeking injunctive relief against specific state provisions.

Commerce Department Evaluation

By March 11, 2026, the Secretary of Commerce must publish an evaluation of existing state AI laws, flagging "onerous" statutes. Examples of problematic laws include those that compel developers to alter truthful model outputs, impose burdensome disclosure or reporting obligations, or otherwise conflict with the federal framework.

Federal Communications Commission Proceedings

Within 90 days of key publications, the Federal Communications Commission must initiate proceedings toward a federal AI reporting and disclosure standard. The intent: this rule would preempt conflicting state transparency requirements for AI models.

Federal Trade Commission Policy Statement

The FTC is directed to issue a policy statement clarifying that the FTC Act preempts state AI laws requiring "deceptive changes" to AI outputs. The order explicitly cites the Colorado AI Act as potentially problematic, arguing it could force models to produce "false results" to mitigate discrimination risks.

Funding Leverage

States whose AI laws are identified as "onerous" may lose access to certain federal funds, including discretionary broadband funding.

This creates strong financial incentives for states to align with the federal ai framework.

Carve-Outs Preserved

The executive order explicitly preserves state authority in certain carve-out areas rather than preempting state law wholesale.

What This Means

For companies, expect litigation and uncertainty. Courts will ultimately decide how far the executive order's preemption vision extends against California, Colorado, and Texas AI statutes. The AI Litigation Task Force may seek injunctive relief against specific state provisions, but outcomes remain unclear through at least 2027.

Key US State AI Laws Taking Effect in 2026

While the federal government talks about a uniform federal policy framework, 2026 is the year several major state AI laws actually start to bite. These require immediate compliance decisions.

Three flagship regimes demand attention: California's Transparency in Frontier Artificial Intelligence Act (TFAIA), Texas's Responsible Artificial Intelligence Governance Act (RAIGA/TRAIGA), and the Colorado AI Act. They share common themes (risk-based regulation, transparency, anti-discrimination) but implement them in different, sometimes conflicting ways.

Each law has its own triggers and enforcement structures:

| State | Effective Date | Key Triggers | Enforcement |
| --- | --- | --- | --- |
| California TFAIA | January 1, 2026 | 10^26 FLOPs, $500M revenue | State authority |
| Texas RAIGA | January 1, 2026 | Impact on Texas residents | Attorney General, civil penalties |
| Colorado AI Act | June 30, 2026 | High-risk decision domains | Attorney General, civil penalties |

California’s Transparency in Frontier Artificial Intelligence Act (TFAIA)

California's Transparency in Frontier Artificial Intelligence Act (TFAIA) takes effect January 1, 2026, targeting "frontier" AI development activity occurring in or significantly affecting California.

Who qualifies as a "frontier developer": any entity that has trained, or initiated training of, a foundation model using more than 10^26 FLOPs of computing power.

Revenue threshold: Frontier developers with annual global revenue of at least USD 500 million must also prepare and publish a "Frontier AI Framework" describing how they assess and mitigate catastrophic risks, their internal safety governance, how model weights are secured, and how incidents are handled.
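To make the scoping concrete, here is a minimal sketch of how a team might encode these two TFAIA triggers in an internal applicability check. The thresholds come from the statute as described above; the function and field names are hypothetical, not statutory.

```python
from dataclasses import dataclass

# Thresholds described above: 10^26 FLOPs of training compute makes a
# "frontier developer"; USD 500M+ annual global revenue adds the duty to
# publish a Frontier AI Framework. Names are illustrative, not statutory.
FRONTIER_COMPUTE_FLOPS = 1e26
LARGE_DEVELOPER_REVENUE_USD = 500_000_000

@dataclass
class Developer:
    max_training_run_flops: float
    annual_global_revenue_usd: float

def tfaia_obligations(dev: Developer) -> list[str]:
    """Return the TFAIA-style duties this developer likely triggers."""
    duties = []
    if dev.max_training_run_flops >= FRONTIER_COMPUTE_FLOPS:
        duties.append("frontier developer: transparency and incident reporting")
        if dev.annual_global_revenue_usd >= LARGE_DEVELOPER_REVENUE_USD:
            duties.append("large frontier developer: publish Frontier AI Framework")
    return duties

# A lab with a 3x10^26 FLOPs training run and USD 750M revenue triggers both.
print(tfaia_obligations(Developer(3e26, 750_000_000)))
```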

Critical safety incident reporting: Developers must monitor, document, and report serious system failures or misuse events within specified timelines. Reportable incidents include unauthorized access to or theft of model weights, loss of control over a model, and model behavior that materially increases the risk of catastrophic harm.

The TFAIA layers atop concurrent California laws also effective January 1, 2026, including the California AI Transparency Act (SB 942), which requires disclosure and provenance marking of AI-generated content, and AB 2013, which requires generative AI developers to publish summaries of their training data.

This creates a dense compliance environment for companies with large-scale AI operations touching California residents.

Texas RAIGA/TRAIGA: Responsible AI Governance with a Sandbox

Texas enacted the Responsible Artificial Intelligence Governance Act (RAIGA or TRAIGA), effective January 1, 2026. It applies to developers and deployers operating AI systems that impact Texas residents.

Prohibited restricted purposes include developing or deploying AI systems to incite self-harm or criminal activity, infringe constitutional rights, unlawfully discriminate against protected classes, or produce child sexual abuse material or unlawful explicit deepfakes.

Attorney General powers: The Texas Attorney General may issue civil investigative demands, offer a cure period before filing suit, and seek civil penalties and injunctive relief against violators.

Liability defenses: Liability often hinges on knowledge or intent. Defenses include discovering a violation through internal testing or red-teaming and promptly curing it, and substantial compliance with a recognized risk framework such as the NIST AI RMF.

36-month regulatory sandbox: Texas offers a distinctive feature: businesses can test certain AI systems under relaxed regulatory conditions for up to 36 months, subject to program approval, ongoing reporting to the state, and continued consumer-protection safeguards.

Texas positions itself as “pro-innovation but tough on abuse.” This may attract AI firms willing to operate under a sandbox model, though exposure to high-profile enforcement remains if harms occur.

Colorado AI Act: Risk-Based Consumer Protection

The Colorado AI Act (S.B. 24-205) represents one of the first explicit, risk-based US state AI frameworks. Its effective date was delayed to June 30, 2026, and it focuses on "high-risk" AI systems that make or shape consequential decisions about consumers.

High-risk domains include education, employment, financial and lending services, essential government services, healthcare, housing, insurance, and legal services.

Reasonable care duty: Developers and deployers of high-risk AI tools must use reasonable care to protect consumers from algorithmic discrimination, meaning bias based on protected characteristics like race, gender, or disability.

Required compliance elements:

Enforcement: The Colorado Attorney General can bring actions for violations, with civil penalties and injunctive relief. No private right of action exists, but Colorado becomes a focal point for early AI discrimination enforcement in the US.
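For teams triaging systems against Colorado-style definitions, a simple screen like the following can flag candidates for full legal review. The domain list mirrors the consequential-decision domains above; the helper itself is an illustrative sketch, not statutory text.

```python
# Consequential-decision domains from the Colorado AI Act discussion above.
HIGH_RISK_DOMAINS = {
    "education", "employment", "financial or lending services",
    "essential government services", "healthcare", "housing",
    "insurance", "legal services",
}

def is_high_risk(domain: str, shapes_consequential_decision: bool) -> bool:
    """A system is treated as high-risk when it makes or substantially
    shapes a consequential decision about a consumer in a covered domain."""
    return shapes_consequential_decision and domain.lower() in HIGH_RISK_DOMAINS

# A resume screener that ranks candidates for hiring decisions:
print(is_high_risk("employment", shapes_consequential_decision=True))  # True
```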

Global AI Law Landscape: EU, China, and Beyond

While US law is still converging, 2026 is the year when several non-US regimes operationalize detailed AI statutes affecting any company serving those markets.

The emerging global split: the EU runs a prescriptive, risk-tiered regime; the US leans innovation-first with a fragmented federal-state patchwork; and China integrates AI rules with content and security policy.

Critically, many of these laws have extraterritorial reach. If an AI system targets EU users or is placed on the EU market, the EU AI Act may apply regardless of where the provider is based.

European Union: The EU AI Act

The EU AI Act stands as the world's first comprehensive, horizontal artificial intelligence act. Finalized in 2024-2025, it applies in phases, reaching full application by August 2026.

Risk-based structure:

| Risk Level | Examples | Requirements |
| --- | --- | --- |
| Unacceptable (banned) | Real-time remote biometric ID by law enforcement (with narrow exceptions), manipulative subliminal techniques, emotion inference at work/school | Prohibited |
| High-risk | Hiring tools, credit scoring, medical devices, critical infrastructure | Full compliance regime |
| Limited-risk | Chatbots, deepfake generators | Transparency duties |
| Minimal-risk | Most AI applications | Voluntary codes |
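As a triage aid (not legal analysis), the table above can be encoded as a rough tier lookup. The use-case strings below come from the table's examples; real scoping turns on the Act's annexes and definitions.

```python
# Tier lookup built from the examples in the table above; matching free-text
# use-case labels is a placeholder for real legal scoping.
BANNED = {"manipulative subliminal techniques", "emotion inference at work or school"}
HIGH_RISK = {"hiring tool", "credit scoring", "medical device", "critical infrastructure"}
LIMITED_RISK = {"chatbot", "deepfake generator"}

def eu_ai_act_tier(use_case: str) -> str:
    use_case = use_case.lower()
    if use_case in BANNED:
        return "unacceptable: prohibited"
    if use_case in HIGH_RISK:
        return "high-risk: full compliance regime"
    if use_case in LIMITED_RISK:
        return "limited-risk: transparency duties"
    return "minimal-risk: voluntary codes"

print(eu_ai_act_tier("credit scoring"))  # high-risk: full compliance regime
```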

Core high-risk obligations include a risk management system, data governance and quality controls, technical documentation and logging, transparency and instructions for deployers, human oversight, accuracy, robustness, and cybersecurity requirements, and conformity assessment before market placement.

Penalties: Administrative fines can reach EUR 35 million or 7% of global annual turnover, whichever is higher, for banned-practice violations, a scale similar to GDPR. For a company with EUR 10 billion in annual turnover, the 7% cap works out to EUR 700 million, so AI Act breaches could be financially existential for major players.

Governance: The Act creates an EU AI Office to coordinate enforcement and update technical standards. National supervisory authorities handle day-to-day oversight.

China: Interim Measures for Generative AI

China’s Interim Administrative Measures for Generative Artificial Intelligence Services, effective since August 2023, continue to shape generative AI offerings in 2026.

Key requirements for public-facing generative AI tools include aligning generated content with Chinese content rules and core socialist values, labeling AI-generated content, and completing security assessments and algorithm filings before launch.

Operational obligations cover the legality and quality of training data, measures to prevent discriminatory outputs, protection of personal information, and mechanisms for handling user complaints and removing illegal content.

Leading platforms like Baidu’s Ernie implement localized models with heavy content filtering for China-based deployment, creating significant divergence from versions deployed globally. These measures integrate with 2024 Generative AI Safety National Standards specifying training data audits.

Canada

Canada’s Artificial Intelligence and Data Act (AIDA): This federal AI bill, proposed in 2022 and still advancing through the legislative process in mid-2026, would regulate "high-impact" AI systems, impose risk-management duties, and empower regulators. It includes Criminal Code amendments for reckless AI harms.

UK

UK approach: The UK relies on “pro-innovation, pro-flexibility” guidance rather than a single Artificial Intelligence Act. Existing regulators (ICO for data protection, CMA for competition, FCA for financial services) apply current legal frameworks to AI. The option to legislate remains if gaps emerge.

South Korea

South Korea’s Framework Act on AI: Enacted December 2024 with phased implementation through 2025-2026, it defines AI broadly as a machine-based system that infers outputs which influence real or virtual environments. It promotes R&D funding (KRW 2.5 trillion by 2027) and introduces early governance requirements.

Multilateral Initiatives

Key multilateral initiatives include the OECD AI Principles, the G7 Hiroshima AI Process, the Council of Europe's Framework Convention on AI, and UN General Assembly resolutions on trustworthy AI.

These foster harmonization on accountability and transparency but lack binding enforcement power.

Core Legal Themes Across AI Laws

Rather than memorizing country-by-country details, AI builders should understand the recurring legal themes and design for them by default:

- Risk-based tiering: obligations scale with a system's potential for harm
- Transparency and disclosure: users and regulators must know when and how AI is used
- Anti-discrimination: high-risk decision systems must be tested and monitored for bias
- Incident reporting: serious failures and misuse must be documented and reported on deadline
- Human oversight: consequential decisions need a human who can review or override

Practical Compliance: How Companies Can Navigate AI Laws

For AI leads and legal teams, the goal is pragmatic: stay compliant without drowning in every minor bill or guidance document. Here’s what works.

AI Inventory

Maintain an accurate, living inventory of every AI system and use case across the organization: who owns it, what it decides, and which jurisdictions it touches.
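A sketch of what one inventory record might look like, assuming a Python-based internal tool; every field name here is illustrative, not drawn from any statute.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                            # accountable team or person
    purpose: str                          # what output or decision it produces
    jurisdictions: list[str]              # where affected people are located
    shapes_consequential_decisions: bool  # feeds risk classification later
    data_sources: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="resume-ranker",
        owner="talent-eng",
        purpose="ranks job applicants for recruiter review",
        jurisdictions=["US-CO", "US-CA", "EU"],
        shapes_consequential_decisions=True,
        data_sources=["applicant resumes", "historical hiring outcomes"],
    ),
]
print(len(inventory), "system(s) inventoried")
```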

Framework Alignment

Map your controls to frameworks that travel across borders, such as the NIST AI RMF, so one governance program can satisfy overlapping regimes.

Legal Review Integration

Build legal review into the AI development lifecycle so high-risk launches are assessed before deployment, not after an enforcement letter arrives.

Incident Reporting

Stand up internal channels to detect, document, and escalate serious failures or misuse, so statutory reporting deadlines (such as TFAIA's) are actually meetable.

Contract Language

Allocate AI risk in vendor and customer contracts: representations about training data, compliance with applicable AI laws, and audit and indemnification rights.

Ongoing Monitoring

Track the handful of regimes you are actually subject to, and revisit your risk classifications as laws take effect and enforcement begins.

How to Stay Sane While Tracking AI Laws

This is where KeepSanity AI’s core philosophy applies: reduce noise, surface only what matters.

Build a lean monitoring stack: a few curated sources, such as official regulator pages and one or two legal trackers or newsletters, beat a firehose of AI policy headlines.
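One lean-stack building block is a compliance calendar seeded with the effective dates cited in this guide. A minimal sketch (the August 2, 2026 EU date reflects the Act's phased timeline; everything else comes from the sections above):

```python
from datetime import date

# Effective dates cited in this guide.
DEADLINES = {
    date(2026, 1, 1): "California TFAIA and Texas RAIGA take effect",
    date(2026, 3, 11): "Commerce evaluation of state AI laws due",
    date(2026, 6, 30): "Colorado AI Act takes effect",
    date(2026, 8, 2): "EU AI Act reaches full application",
}

def upcoming(today: date, horizon_days: int = 180) -> list[str]:
    """List deadlines falling within the next horizon_days."""
    return [
        f"{d.isoformat()}: {what}"
        for d, what in sorted(DEADLINES.items())
        if 0 <= (d - today).days <= horizon_days
    ]

print(upcoming(date(2026, 3, 1)))  # Commerce evaluation, Colorado, EU dates
```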

Appoint an internal "AI law vanguard": often one person in legal plus one in AI/ML, responsible for triaging new developments, briefing product teams on what actually changed, and owning the compliance calendar.

Focus on structural shifts: chase the big moves (final statutes taking effect, court rulings on federal preemption, the first enforcement actions under new regimes), not every draft bill.

Most draft bills never pass. Don’t build compliance programs around speculation.

Design for flexibility: When uncertainty is high-such as pending court challenges to the 2025 executive order-build modular governance processes that can be tightened or relaxed as clarity emerges. Prioritize adaptability over rigid, jurisdiction-specific silos.

The AI race is real, and so is the regulatory attention. But panicking at every headline wastes resources. Focus on your highest-risk use cases, align with frameworks that travel across borders, and let courts and regulators sort out the edge cases before you overreact.

FAQ

Do AI laws apply to small startups, or only to big tech companies?

While some rules explicitly target very large developers (California's TFAIA Frontier AI Framework obligations kick in at USD 500M+ in annual revenue and 10^26 FLOPs of training compute), many AI regulations apply regardless of company size.

The Colorado AI Act can apply to smaller firms offering high-risk decision tools in employment or lending. EU AI Act obligations depend on the nature of the system, not company size. And existing consumer protection or discrimination laws in the US already cover small employers and service providers using AI for consequential decisions.

If you're deploying AI tools that affect individuals in covered jurisdictions or high-risk domains, size alone won't exempt you.

Is there a single definition of “AI” in law that I can rely on?

No universal, uniform legal definition exists. The EU AI Act, US federal initiatives, and state privacy laws each define AI or “automated decision-making” in slightly different ways.

Most definitions focus on machine-based systems that infer from inputs to generate outputs (predictions, recommendations, decisions) affecting real-world environments or individuals. But thresholds vary: some laws target specific compute levels, others focus on use cases, and some capture any algorithmic decision system.

Always check the specific definition in each regime you’re subject to. It determines whether your tools fall within scope.

How do AI laws interact with existing data protection and privacy rules?

AI laws typically sit on top of existing privacy frameworks rather than replacing them. You must comply with both simultaneously.

Practical examples: GDPR still governs the lawful basis for training data used by EU-facing models; US state privacy laws add opt-outs for automated decision-making; and data minimization limits what you may feed into AI tools in the first place.

Data governance-lawful basis, minimization, security, rights handling-is now core to AI compliance, not a separate issue.

Can I rely on “sandbox” programs to avoid AI liability?

Regulatory sandboxes like Texas RAIGA’s 36-month program or EU member state variants can reduce enforcement risk during supervised experimentation-but they don’t grant blanket immunity.

Sandbox participation typically requires an application and approval, disclosure of what you are testing, ongoing reporting to the supervising agency, and continued consumer safeguards.

Treat sandboxes as opportunities to test and document robust controls early, not as a way to ignore long-term compliance obligations. Harms that occur during sandbox testing can still trigger enforcement action.

What should I prioritize if I’m just starting to build an AI compliance program?

Start with three building blocks:

  1. Accurate inventory: Know what AI systems and use cases exist across your organization

  2. Lightweight risk classification: Categorize systems as high vs. low risk, aligned with EU/Colorado-style definitions

  3. Standardized documentation templates: Create reusable formats for data sources, model behavior, human oversight, and risk assessment (a minimal template sketch follows this list)
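A minimal sketch of such a template as a plain Python dict, with illustrative keys that map to the documentation themes above; adapt the fields to each regime you are actually subject to.

```python
# Illustrative keys only; not drawn from any statute's required contents.
MODEL_DOC_TEMPLATE = {
    "system_name": "",
    "intended_use": "",
    "data_sources": [],        # provenance of training and input data
    "known_limitations": [],
    "human_oversight": "",     # who can review or override outputs
    "risk_classification": "", # e.g. high / limited / minimal
    "last_assessed": None,     # date of the most recent risk assessment
}
```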

Once those are in place, layer jurisdiction-specific requirements (EU AI Act, Colorado AI Act, California TFAIA) rather than building bespoke processes for each law from scratch.

Prioritize your most consequential AI use cases first (hiring, lending, health, or safety-critical systems), because those are most likely to be treated as "high-risk" under emerging laws and to face the most regulatory scrutiny.


The AI law landscape in 2026 demands vigilance without panic. Build flexible governance, focus on high-risk use cases, align with frameworks that travel across jurisdictions, and monitor structural shifts rather than every draft bill. The signal is there; you just need to filter out the noise.