The United States enters a pivotal year for AI regulation in 2026. While Congress has yet to pass a comprehensive federal AI law, a wave of state-level legislation is now taking effect — requiring bias audits, impact assessments, and transparency disclosures for AI systems used in hiring, lending, and healthcare. At the same time, competing federal proposals are vying to set a national standard, creating a high-stakes tug-of-war between innovation and accountability.
With no federal AI law on the books, states have stepped in to fill the regulatory vacuum. Three landmark laws are taking effect in 2026:
Illinois (January 1, 2026) — HB 3773 prohibits the use of AI in ways that intentionally or unintentionally discriminate against employees based on protected characteristics. Draft rules from the Illinois Department of Human Rights would require employers to notify employees and applicants whenever AI is used to influence employment decisions, including disclosures about the AI product, its purpose, and the data it collects.
Colorado (June 30, 2026) — The Colorado AI Act (SB 24-205) is the nation’s first comprehensive state AI law targeting “high-risk” systems. It requires developers and deployers to use “reasonable care” to prevent algorithmic discrimination, conduct impact assessments, and implement risk management policies. After being delayed from its original February 2026 effective date, Governor Jared Polis announced on March 17 that a working group of industry and civil rights experts had reached consensus on a plan to rework the law ahead of its June deadline.
California — The California Civil Rights Council finalized regulations governing employers’ use of AI in employment decisions, making bias testing explicitly relevant to discrimination claims. Meanwhile, the California Privacy Protection Agency issued rules requiring opt-out rights and enhanced disclosures when automated tools replace human decision-making in employment.
New York City’s Local Law 144, already in effect, continues to serve as a national reference point — requiring annual independent bias audits for any automated employment decision tool, with employers posting audit summaries publicly and notifying candidates at least 10 days before using such tools.
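The audits required under Local Law 144 center on "impact ratios" — each group's selection rate divided by the selection rate of the most-selected group. As a rough illustration of that arithmetic only (the function name and input format here are illustrative, not part of the law's methodology), a minimal sketch might look like:

```python
from collections import Counter

def impact_ratios(outcomes):
    """Per-group selection rates and impact ratios.

    outcomes: list of (group, selected) pairs, where selected is True
    if the automated tool advanced the candidate.
    Returns {group: (selection_rate, impact_ratio)}, with the
    impact ratio computed against the highest-rate group.
    """
    totals = Counter(group for group, _ in outcomes)
    chosen = Counter(group for group, selected in outcomes if selected)
    rates = {g: chosen[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: (rate, rate / top_rate) for g, rate in rates.items()}

# Hypothetical audit data: group A selected 40 of 100, group B 24 of 100
data = ([("A", True)] * 40 + [("A", False)] * 60
        + [("B", True)] * 24 + [("B", False)] * 76)
print(impact_ratios(data))  # group B's impact ratio is 0.6
```

An impact ratio well below 1.0 for a group (0.6 in this toy data) is the kind of disparity a published audit summary would surface; what threshold triggers concern is a legal and policy question the law leaves to interpretation.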
Two sharply different visions for federal AI policy are taking shape in Washington.
The TRUMP AMERICA AI Act — On March 18, 2026, Senator Marsha Blackburn released a nearly 300-page discussion draft for a sweeping national AI framework. The bill would impose a “duty of care” on AI developers, sunset Section 230 of the Communications Decency Act, and — controversially — preempt state AI laws. It also includes provisions explicitly placing the unauthorized use of copyrighted works in AI training outside the scope of fair use, and borrows children’s safety provisions from the proposed Kids Online Safety Act. The bill aligns with President Trump’s December 2025 executive order calling for a single federal AI framework to replace what the administration views as a burdensome patchwork of state regulations.
The AI Civil Rights Act — Senator Edward Markey and Representative Yvette Clarke introduced the Artificial Intelligence Civil Rights Act (S.3308 / H.R.6356) to regulate algorithmic discrimination in housing, hiring, lending, healthcare, and education. The bill would mandate pre-deployment evaluations and independent third-party bias audits for any AI system influencing material outcomes — such as loan denials, job selections, or medical diagnoses — and grant the FTC and Department of Justice new enforcement powers.
The tension between these approaches reflects a fundamental debate: Should AI regulation prioritize innovation speed or civil rights protections? The state laws taking effect in 2026 are already creating real compliance obligations for companies deploying AI in hiring and employment. Whether a federal law eventually preempts these state rules — and whether it leans toward Blackburn’s industry-friendly framework or Markey’s civil rights approach — will shape the AI governance landscape for years to come.
For organizations using AI in consequential decisions, the practical reality is clear: regardless of which federal proposal prevails, bias audits, impact assessments, and transparency disclosures are becoming standard expectations. Companies operating across multiple states face an increasingly complex compliance environment — exactly the kind of fragmentation both federal proposals aim to resolve, albeit in very different ways.
