The term “state artificial intelligence” in 2025–2026 carries two distinct meanings: AI systems deployed by the U.S. Department of State for diplomacy (like StateChat and NorthStar), and the growing patchwork of AI regulation enacted by individual U.S. states.
The State Department already runs concrete AI tools aligned with OMB M-25-21 and President Biden’s 2023 AI Executive Order, using generative AI for cable summarization, translation, and open-source intelligence analysis.
States including Colorado, California, Texas, Tennessee, and New York have passed their own AI laws, creating fragmented compliance requirements that major tech firms and federal policymakers view as unsustainable.
President Trump’s December 11, 2025 Executive Order sought to assert federal preemption over many state AI laws, establishing an AI Litigation Task Force within the Department of Justice to challenge conflicting state statutes.
For professionals tracking this fast-moving tug-of-war without burning out, KeepSanity AI offers a weekly, ad-free digest covering only the major shifts: no daily filler, no sponsored content, just signal.
The phrase “state artificial intelligence” means two very different things depending on who’s using it.
First, it refers to how the U.S. Department of State (the federal agency responsible for diplomacy and foreign affairs) is deploying artificial intelligence to modernize how America engages with the world. Second, it describes the regulatory and legislative activity happening across 50 individual U.S. states, each developing its own framework to regulate AI within its borders.
This dual meaning isn’t just a semantic curiosity. It reflects a deeper constitutional and political fault line that’s reshaping technology policy in real time.
The key dates tell the story: President Biden signed his AI Executive Order on October 30, 2023, establishing early federal frameworks. Between late 2023 and 2025, more than one hundred state AI bills were introduced or enacted. Then, on December 11, 2025, President Trump signed Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” which explicitly sought to preempt conflicting state laws and create uniform federal authority over AI governance.
What was once a purely technical topic has become a core instrument of foreign policy, economic competitiveness, and domestic lawmaking. Innovators want national consistency so they don’t have to navigate 50 different compliance regimes. Civil-society groups and many states want strong safeguards to protect residents from algorithmic discrimination and other harms. Federal actors are trying to assert primacy while Congress remains gridlocked.
This article covers the State Department’s AI strategy, model state AI acts and their “Learning Laboratory” experiments, the federal preemption push, and the messy politics surrounding all of it. If tracking every micro-update sounds exhausting, you’re not alone; that’s exactly why KeepSanity AI exists as a weekly, curated source for the major shifts without the daily noise.

The Department of State views AI not as a marginal IT upgrade, but as a centerpiece of 21st-century diplomacy. Secretary of State Marco Rubio has stated explicitly: “Winning the AI race is nonnegotiable. America must continue to be the dominant force in artificial intelligence to promote prosperity and protect our economic and national security.”
This framing positions AI technology directly within U.S. competition with China and the European Union over AI norms, standards, and diplomatic influence. The stakes go beyond efficiency; they concern who shapes the future of global governance.
AI helps the State Department process and synthesize enormous volumes of unstructured data that would overwhelm human analysts working alone:
- Diplomatic cables from embassies worldwide
- Open-source intelligence from news outlets and think tanks
- Social media feeds tracking emerging narratives
- Real-time analysis of geopolitical shifts and potential crises
The department has moved beyond pilots to actual deployed systems:
StateChat is the department’s first generative AI chatbot, approved for use with Sensitive But Unclassified (SBU) data. Diplomats use it for:
- Summarizing diplomatic cables and reports
- Generating first drafts of country analyses and policy papers
- Translating between English and major UN languages (French, Spanish, Russian, Arabic, Mandarin)
- Assisting with research and documentation
NorthStar is a large-scale open-source intelligence tool that ingests millions of global news articles daily. It applies AI clustering and priority-ranking to help analysts identify emerging narratives, regional trends, and early warning signals in strategically important regions.
Both AI tools are framed as augmenting, not replacing, diplomats. Written guidance emphasizes that AI outputs are drafts and recommendations, not authoritative intelligence. Staff are reminded to apply their own expertise before relying on AI outputs for policy, negotiations, or public statements.
| Risk Category | Description |
|---|---|
| Data Security | Preventing classified or diplomatic-sensitive data from leaking into AI systems |
| Algorithmic Bias | Models trained on Western sources may skew analysis toward certain regions |
| Over-Reliance | Risk of institutional deskilling if diplomats depend too heavily on AI summaries |
| Adversarial Manipulation | Sophisticated actors could feed false information to trick AI detection systems |
For readers tracking whether these AI systems actually matter, KeepSanity AI selectively covers major State Department or White House AI governance moves, like new public frameworks or governance boards, rather than every minor pilot experiment.
On September 30, 2025, the State Department unveiled its “Enterprise Data and Artificial Intelligence Strategy for 2026”: a three-year roadmap to modernize how the department handles data and AI across all bureaus and embassies worldwide.
The strategy is explicitly aligned with two major federal directives:
- OMB Memorandum M-25-21 (released April 2025): Sets baseline standards for federal AI adoption and AI governance
- President Biden’s 2023 AI Executive Order: Established principles for responsible AI development and deployment in federal agencies
The strategy organizes itself around three interrelated pillars that mirror OMB language:
**Innovation and Rapid Experimentation**
Creating mechanisms to rapidly pilot and test AI systems across different bureaus, learn from those pilots, and scale successful approaches. The emphasis is on moving at the “speed of relevance” without analysis paralysis.
**Governance and Institutional Accountability**
Establishing clear roles, responsibilities, and oversight structures. This includes:
- Strengthening the Chief Data Officer position
- Creating AI governance boards
- Maintaining inventories of AI systems in use or planned
- Conducting risk assessments for high-impact systems
**Public Trust and Civil Rights**
Acknowledging that deploying AI systems affecting diplomatic outcomes, visa adjudication, or public communications requires protecting civil rights, maintaining privacy, and being transparent about how AI is used.
The strategy includes an internal Implementation Plan with concrete metrics, though specific numbers aren’t public. These likely cover the percentage of mission-critical processes with AI support, the reduction in manual analytic hours, and the number of AI systems inventoried and risk-assessed.
This strategy overlaps with broader U.S. diplomacy aims, including promoting democratic AI norms in multilateral forums like the G7 Hiroshima AI Process and the U.K. AI Safety Summit. Readers can expect these big diplomatic AI milestones, not internal minutiae, covered in KeepSanity’s weekly briefings.

OMB Memorandum M-25-21 serves as the central White House directive setting standards for how all federal agencies, including the State Department, must inventory, manage, and govern AI systems.
That April 2025 memorandum instructed all federal agencies to:
- Appoint Chief AI Officers
- Develop strategies to expand AI use within government
- Adopt minimum risk-management practices for “high-impact” AI systems
- Institute agency-specific generative AI policies
The guidance also emphasized “American-made” AI procurement preferences and prioritized faster, more interoperable AI acquisition.
The department’s Compliance Plan operationalizes M-25-21 through several concrete steps:
| Requirement | Implementation |
|---|---|
| AI Inventory | Cataloging all AI systems with metadata including purpose, data inputs, vendor info, and risk classification |
| Risk-Tiering | Classifying systems by impact level with appropriate governance gates |
| Civil Rights Compliance | Testing for bias in systems affecting visa adjudication, security clearances, or enforcement |
| Privacy Protection | Ensuring AI systems comply with the Privacy Act and classified-information rules |
| Red-Team Testing | Conducting adversarial evaluation to identify vulnerabilities before deployment |
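To make the risk-tiering row concrete, here is a minimal sketch, under stated assumptions, of how an inventory entry might be sorted into tiers. The criteria loosely paraphrase the “high-impact” concept in M-25-21; they are illustrative, not the memo’s actual test, and every name below is hypothetical.

```python
# Illustrative risk-tiering over inventory metadata. The criteria are
# assumptions paraphrasing the "high-impact" idea, not the official test.
from dataclasses import dataclass

@dataclass
class SystemEntry:
    name: str
    affects_rights_or_safety: bool   # e.g., visa adjudication, clearances
    uses_pii: bool                   # personally identifiable information
    autonomous_action: bool          # acts without a human in the loop

def risk_tier(entry: SystemEntry) -> str:
    """Map an inventory entry to a governance tier (hypothetical rules)."""
    if entry.affects_rights_or_safety or entry.autonomous_action:
        return "high-impact"   # would trigger extra governance gates
    if entry.uses_pii:
        return "elevated"
    return "standard"

for system in [
    SystemEntry("CableSummarizer", False, False, False),
    SystemEntry("VisaAnomalyFlagger", True, True, False),
]:
    print(system.name, "->", risk_tier(system))
```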
The Compliance Plan emphasizes alignment with “American values”: fairness, nondiscrimination, and due process. It explicitly addresses concerns about algorithmic discrimination in immigration and consular systems, recognizing that AI models trained on historical data can perpetuate existing biases.
From a news-monitoring standpoint, KeepSanity AI focuses on the handful of major federal governance steps, such as new OMB guidance or landmark Department of Justice and Federal Trade Commission actions, rather than every internal compliance deadline.
Moving from strategy documents to the actual tools diplomats use at desks in Foggy Bottom and at posts abroad reveals a more concrete picture.
StateChat operates with specific capabilities and constraints:
**Core Capabilities:**
- Summarizing lengthy diplomatic cables into concise briefs
- Generating first drafts of country reports and briefing memos
- Real-time translation between English and major UN languages
- Research assistance for background information and historical precedent
**Operational Guardrails:**
- Approved only for Sensitive But Unclassified (SBU) data; no classified inputs allowed
- All interactions are logged and auditable
- Written guidance reminds staff that outputs are drafts requiring human review
- Outputs cannot be transmitted to foreign governments or published without approval
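To show how guardrails like these can be enforced in software, here is a minimal sketch of a pre-submission check combining a classification-marking filter with audit logging. The function names, marking list, and policy are hypothetical illustrations, not StateChat’s actual implementation (which is not public).

```python
# Hypothetical pre-submission guardrail for an internal chatbot:
# block prompts bearing classification markings, log everything else.
import logging
import re

logging.basicConfig(level=logging.INFO)
AUDIT_LOG = logging.getLogger("chatbot.audit")

# Illustrative marking list; a real system would need far more than regex.
PROHIBITED_MARKINGS = re.compile(r"\b(TOP SECRET|SECRET|CONFIDENTIAL|NOFORN)\b")

def call_model(prompt: str) -> str:
    # Stand-in for the actual model call.
    return f"[draft response to a {len(prompt)}-character prompt]"

def submit_prompt(user_id: str, prompt: str) -> str:
    """Reject prompts carrying classified markings; audit-log every attempt."""
    if PROHIBITED_MARKINGS.search(prompt):
        AUDIT_LOG.warning("blocked prompt from %s: prohibited marking", user_id)
        raise ValueError("prompt appears to contain classified markings; "
                         "only SBU material is permitted")
    AUDIT_LOG.info("prompt accepted from %s (%d chars)", user_id, len(prompt))
    return call_model(prompt)

print(submit_prompt("analyst-1", "Summarize this SBU cable on trade talks."))
try:
    submit_prompt("analyst-1", "Summarize this SECRET cable.")
except ValueError as err:
    print("blocked:", err)
```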
NorthStar functions as a “world brain” for situational awareness, continuously scanning global information flows:
- Scrapes and ingests global news from thousands of sources
- Applies clustering and topic modeling to group content by region, actor, and theme
- Assigns priority scores based on velocity, novelty, and relevance
- Generates visualizations and dashboards for analyst review
The Indo-Pacific, Eastern Europe, and the Sahel are explicitly mentioned as regions benefiting from this capability: areas where early detection of emerging crises could give U.S. policymakers valuable lead time.
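As a rough illustration of the cluster-and-rank pattern just described, the toy pipeline below groups headlines with TF-IDF and k-means, then ranks clusters by volume as a crude stand-in for velocity. This is a sketch of the general technique only; NorthStar’s actual models and scoring are not public.

```python
# Toy triage pipeline: cluster headlines, then rank clusters by volume.
# Real systems would add novelty, source diversity, and region weighting.
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

headlines = [
    "Port strike disrupts grain exports",
    "Grain shipments halted as port strike widens",
    "Election observers report irregularities",
    "Opposition disputes election results",
    "Cyclone approaches coastal region",
]

# 1. Embed and cluster (TF-IDF + k-means as a simple stand-in).
X = TfidfVectorizer(stop_words="english").fit_transform(headlines)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# 2. Rank clusters by story count, a crude "velocity" proxy.
for cluster_id, count in Counter(labels).most_common():
    members = [h for h, lbl in zip(headlines, labels) if lbl == cluster_id]
    print(f"cluster {cluster_id} (volume {count}): {members[0]}")
```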
- **Visa Fraud Detection:** Machine-learning models flag applications with statistical anomalies for human investigation
- **Sanctions Evasion Analysis:** AI identifies patterns in financial data suggesting sanctions circumvention
- **Public Diplomacy Localization:** AI assists with translating and optimizing content for specific regional audiences
None of these AI systems currently makes final sovereign decisions. They feed into human decision-makers, who remain accountable for policy.
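For a flavor of how anomaly-based flagging can route cases to human reviewers, here is a small sketch using scikit-learn’s IsolationForest on invented features. The features, contamination rate, and workflow are assumptions for illustration, not the department’s actual fraud models.

```python
# Anomaly-based flagging for human review on synthetic data. Every row the
# model flags goes to an investigator; the model never makes the decision.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per application: [processing_days, prior_visits, age]
typical = rng.normal(loc=[30, 2, 35], scale=[5, 1, 8], size=(200, 3))
unusual = np.array([[2, 0, 19], [90, 15, 70]])   # deliberately odd rows
applications = np.vstack([typical, unusual])

model = IsolationForest(contamination=0.02, random_state=0).fit(applications)
flags = model.predict(applications)              # -1 marks an anomaly

for idx in np.where(flags == -1)[0]:
    print(f"application {idx} flagged for manual review: {applications[idx]}")
```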
In a typical week, dozens of small pilots launch across the federal government. Only a handful are transformative enough to feature in a curated digest like KeepSanity AI.
Shifting from federal and foreign policy to domestic state-level experimentation reveals a different approach to ai governance.
Organizations like the American Legislative Exchange Council (ALEC) have promoted “model” state AI acts: template legislation that states can adopt as blueprints. These acts embody a particular philosophy:
- A strong preference for market-driven AI innovation
- Technological neutrality
- Minimal regulation as a default
- Skepticism of wide-ranging state or local AI restrictions
This approach represents a conscious alternative to stricter regulatory models like Colorado’s algorithmic discrimination law.
**Definitions Section**
Defines what counts as “artificial intelligence” for purposes of the law, identifies covered entities and sectors, and establishes scope of application.
**Office of Artificial Intelligence Policy**
Creates a new office (usually within the attorney general’s office or governor’s office) tasked with:
- Promoting AI innovation
- Identifying regulatory barriers
- Running experimental programs
- Making legislative recommendations
**State Agency Inventory Requirements**
State agencies must maintain inventories documenting:
- AI tools used or planned
- Vendors and data flows
- Bias evaluations and security controls
- Fiscal impacts
These inventories give lawmakers a baseline picture of public-sector AI before regulating.
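As a minimal sketch of what one such record might look like in code, the dataclass below carries the fields listed above. The field names and example values are hypothetical illustrations, not drawn from any statute.

```python
# Hypothetical state-agency AI inventory record mirroring the list above.
from dataclasses import dataclass, field

@dataclass
class AIInventoryRecord:
    system_name: str
    agency: str
    purpose: str
    vendor: str | None                 # None for in-house or open-source tools
    data_inputs: list[str] = field(default_factory=list)
    bias_evaluated: bool = False
    security_controls: list[str] = field(default_factory=list)
    annual_cost_usd: float = 0.0       # crude fiscal-impact proxy
    status: str = "planned"            # "planned" | "piloted" | "deployed"

record = AIInventoryRecord(
    system_name="BenefitsTriageBot",
    agency="Dept. of Human Services",
    purpose="Route incoming benefits inquiries",
    vendor="ExampleVendor Inc.",
    data_inputs=["inquiry text", "program type"],
    bias_evaluated=True,
    security_controls=["SSO", "audit logging"],
    annual_cost_usd=120_000,
    status="piloted",
)
print(record)
```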
The “Learning Laboratory” concept is central to these model acts. Here’s how these programs work in practice:
| Feature | Description |
|---|---|
| Participants | Selected startups, universities, open-source projects |
| Duration | Typically 12 months, with possible 12-month extension |
| Benefits | Lighter regulatory conditions, reduced penalties |
| Requirements | Share data, best practices, and incident reports |
| Oversight | Supervised by Office of AI Policy with revocation authority |
Participants agree to operate in specified geographic areas, share performance reports, implement specific safeguards, and allow state audits. In exchange, they receive relief from certain regulatory requirements, though core consumer-protection, privacy, and civil-rights laws still apply.
Regulatory mitigation agreements are time-limited waivers tied to specific safeguards and revocable for violations. These programs typically take effect once the state agency approves and the participant signs the agreement.
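That lifecycle (a fixed term, one possible extension, revocation for violations) can be sketched as a small data model. The class below encodes the generic structure described above; its terms are illustrative assumptions, not any state’s actual program rules.

```python
# Sketch of tracking a regulatory mitigation agreement's lifecycle:
# time-limited, extendable once, revocable by the overseeing office.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class MitigationAgreement:
    participant: str
    start: date
    term_days: int = 365        # typical 12-month term
    extended: bool = False
    revoked: bool = False

    @property
    def expires(self) -> date:
        return self.start + timedelta(days=self.term_days)

    def extend(self) -> None:
        if self.extended:
            raise ValueError("only one 12-month extension is allowed")
        self.term_days += 365
        self.extended = True

    def revoke(self, reason: str) -> None:
        # The Office of AI Policy retains revocation authority.
        self.revoked = True
        print(f"agreement with {self.participant} revoked: {reason}")

    @property
    def active(self) -> bool:
        return not self.revoked and date.today() <= self.expires

deal = MitigationAgreement("ExampleLab", start=date(2026, 1, 15))
deal.extend()
print(deal.participant, "expires", deal.expires, "active:", deal.active)
```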
The sandbox model is positioned as a pro-innovation alternative to strict rules, but whether it actually protects the American people while enabling AI development remains to be seen.
KeepSanity AI watchers should pay attention to which other states actually adopt these blueprints and with what results.

By late 2025, dozens of differing state AI bills had produced a patchwork of compliance obligations that major tech firms and many policymakers saw as unsustainable for national AI competitiveness.
Large AI deployers faced a genuine problem: complying with 50 different standards for documentation, impact assessment, transparency, and risk management is enormously costly. A company deploying an AI hiring tool nationally might need different impact assessments, transparency disclosures, and decision algorithms in different states.
President Trump signed “Ensuring a National Policy Framework for Artificial Intelligence” (Executive Order 14365) on December 11, 2025. The order asserts federal supremacy over many aspects of AI regulation through several key provisions:
**Policy Goals:**
- Maintain U.S. leadership in AI with a uniform, minimally burdensome framework
- Push back on “onerous” state laws viewed as ideological or anti-innovation
- Establish clear federal authority over AI governance
**AI Litigation Task Force**
The order directs the Department of Justice to establish a task force mandated to challenge state AI statutes that allegedly violate the Commerce Clause or conflict with federal directives. Primary targets include state algorithmic discrimination laws in Colorado and California.
**Commerce Department Review**
The Secretary of Commerce must, within approximately 90 days, publish a review of state laws that conflict with federal law and might trigger funding consequences.
| Mechanism | Purpose |
|---|---|
| BEAD Funding Conditions | Conditioning eligibility for Broadband Equity, Access, and Deployment funds on states not enacting certain AI laws |
| FCC AI Reporting | Encouraging the FCC to consider a federal AI reporting standard to preempt state requirements |
| FTC Clarification | Directing the Federal Trade Commission to clarify when state mandates requiring “truthful output alteration” are preempted as deceptive practices |
The order also creates roles like a special advisor on AI to coordinate federal efforts and consult with stakeholders across the administration.
This is a bold, contested use of executive power rather than a bipartisan statute, and it is primed to trigger lawsuits from states and civil-society groups.
KeepSanity AI would cover these disputes only at key inflection points (landmark court rulings, major injunctions), not every filing or hearing.
Federal legislative attempts to preempt state AI regulation stalled in 2025 after bipartisan resistance. A proposed 10-year moratorium championed by Senator Ted Cruz (R-TX) aimed to establish uniform federal authority but failed to advance.
Similar moratorium language appeared in:
- Drafts of the FY 2026 National Defense Authorization Act (NDAA)
- A non-binding White House AI Action Plan released July 23, 2025
Both efforts were stripped out during negotiations and government shutdown brinkmanship.
In the absence of comprehensive federal AI legislation, several states moved forward with their own approaches:
| State | Focus Areas |
|---|---|
| Colorado | Algorithmic discrimination, impact assessments |
| California | Consumer protection, transparency, child safety |
| Texas | Employment AI, government procurement |
| Tennessee | Deepfakes, creative industries |
| New York | Hiring AI, biometric data |
This state-driven activity created genuine compliance barriers for large AI deployers: potentially 50 different standards for documentation, transparency, and risk assessment for businesses deploying AI systems nationally.
The December 2025 executive order is best understood as an attempt to reclaim the initiative from Congress and the states, using executive power and agency actions while waiting for a durable legislative framework that still does not exist.
This pattern, with states moving first and the federal government reacting later, is common in tech policy.
KeepSanity AI’s weekly curation helps readers follow only the big structural moves rather than every committee hearing or amendment proposed.
AI governance has scrambled traditional partisan lines, producing fractures inside both parties and unusual state-federal alliances.
**Republican Divisions:**
- Some Republicans back federal preemption to protect AI innovation and reduce regulatory costs
- Others oppose preemption on states’-rights grounds, wanting to preserve state authority
- Figures like Steve Bannon and Senator Josh Hawley oppose blanket moratoriums, fearing they empower Big Tech while weakening states’ ability to regulate AI and address social harms
**Democratic Divisions:**
- Many Democrats want strong federal guardrails and worry about “regulatory arbitrage”
- Others fiercely defend state authority, viewing federal preemption as a corporate power grab
- Some are skeptical of regulation altogether, viewing it as economically harmful
Numerous states across party lines are pursuing legislation addressing:
- Child safety and AI use affecting minors
- Mental health impacts of generative media
- Labor displacement and workforce concerns
- Deepfake political ads in elections
Public polling shows strong support for AI rules and skepticism toward bans on state action. Majorities of Americans across the political spectrum support limits on AI discrimination, transparency requirements, and child protections, suggesting aggressive federal preemption could carry significant political risks.
The information-overload problem is real: policymakers and innovators are bombarded with overlapping narratives and fear-mongering.
A focused, once-a-week digest like KeepSanity AI offers a sanity-preserving way to track what legislation actually passes and what actually affects AI development and deployment, without the daily noise that burns focus and energy.
For anyone building, deploying, or governing AI systems in America, the current landscape requires navigating three overlapping regulatory layers:
**Federal Executive Branch Rules**
OMB memoranda, FTC guidance, DOJ enforcement priorities, and agency-specific AI strategies carry binding or quasi-binding force. These set baseline expectations for responsible AI use across federal agencies and, increasingly, for companies seeking federal contracts.
**Emerging Federal Legislation**
As Congress debates comprehensive AI regulation, new requirements will emerge through standalone bills, amendments to existing statutes, or appropriations riders. The subject matter could include liability rules, safety standards, or data-protection requirements.
**State Statutes and Sandboxes**
A growing web of state AI laws, ranging from algorithmic-discrimination requirements to child-safety protections, creates fragmented compliance obligations. Learning Laboratory programs in some states offer flexibility, but participation conditions impose their own governance costs.
**For policymakers and staffers:** “Model acts” and preemption EOs are starting points, not final answers. They will be reinterpreted by courts and reshaped by public opinion over the next 2–3 years. Assess, rather than assume, what any particular provision will mean in practice.
**For businesses and innovators:** How you document AI capabilities, conduct bias assessments, and establish governance structures now will affect compliance under whatever framework eventually emerges. Building for regulatory flexibility is essential.
**For ordinary citizens:** You’re affected primarily through downstream issues such as the availability of AI-powered services, protections against algorithmic discrimination, deepfake and child-safety safeguards, and the stability of internet infrastructure funded by programs like BEAD that attach conditions to federal funds.
Because so much is in flux, it’s unrealistic for busy professionals to follow daily updates. Relying on curated, high-signal sources like KeepSanity AI that surface only the major shifts is a strategic advantage-not a shortcut.
The balance between state experimentation and federal uniformity will likely be one of the defining AI policy stories of the late 2020s. Staying calmly informed, not doom-scrolling, positions you to advance your work however circumstances evolve.

Federal AI policy is set mainly by Congress, the White House, and agencies like OMB, FTC, DOJ, and Commerce. It governs interstate commerce, federal procurement, national security, and civil-rights enforcement at the national level. State AI policy, by contrast, is made by state legislatures and agencies and tends to focus on consumer protection, employment practices, education, and state-procured systems. When federal and state laws conflict, federal law often preempts state law under the Supremacy Clause, but the boundaries are actively contested. The December 2025 Executive Order is specifically designed to test and expand these boundaries through litigation and funding conditions, which is why the legal outcomes over the next few years will reshape what authority states actually retain.
Learning Laboratories function like regulatory sandboxes where selected participants (startups, universities, or open-source projects) agree to share data about their AI systems, risks, and incidents in exchange for time-limited waivers or reduced penalties. Participants operate under supervision from a state Office of AI Policy, typically for 12-month terms that can be extended once. These programs are not blanket immunities from all regulation. Participants must still comply with core consumer-protection, privacy, and civil-rights laws, and can be removed from the program if they violate their regulatory mitigation agreement. For companies considering participation, the benefits include lighter compliance burdens during testing phases, but the transparency requirements and audit obligations create their own management overhead.
Strong preemption language could limit certain kinds of state laws, especially those that directly regulate how AI models function across state lines or impose documentation requirements that burden interstate commerce. However, most proposals, including the Trump administration’s December 2025 EO, carve out specific areas where states retain authority, including child safety, state procurement rules, and infrastructure policy. Any sweeping attempt to ban state AI restrictions will likely face legal challenges from state attorneys general and civil-society groups, so the outcome will depend on court rulings and future congressional action. The durability of current state protections is genuinely uncertain, making this an area worth monitoring through curated sources rather than assuming any single EO represents final resolution.
Designate one owner for AI governance tracking rather than expecting everyone to follow everything. Lean on high-quality secondary sources like weekly, ad-free newsletters (KeepSanity AI being designed specifically for this purpose) that filter the signal from the noise. Watch for official guidance from key regulators (FTC, OMB, state attorneys general) and prioritize actual binding laws and enforcement actions over draft bills and op-eds. Chasing every hearing or rumor is counterproductive; focusing on the handful of real regulatory turning points per month is usually sufficient to stay compliant and strategically informed. Build your internal processes for flexibility so you can adapt when rules change rather than scrambling to catch up.
For federal materials, the White House website and Federal Register contain executive orders, OMB memos, and agency guidance. State legislative websites host bills and enacted statutes; search by state and “artificial intelligence” to find current provisions and their effective dates. Official agency pages (DOJ, FTC, Department of State) publish AI strategies and compliance plans. For further information on model legislation, organizations like ALEC publish their template bills publicly. Be careful to distinguish between official government texts, advocacy group “model bills,” and secondary commentary from law firms or media outlets; each has different authority and potential bias. Curated outlets like KeepSanity AI help readers avoid wading through hundreds of pages each week by surfacing only what actually matters.