State-level AI laws tighten focus on workplace bias


AI regulation has accelerated rapidly over the past few years, moving from broad federal policy discussions to more detailed rules emerging at the state level.

While the EU pushed ahead with its AI Act and the US government introduced new federal guidance under both the Biden and Trump administrations, activity across individual US states has now become a major driver of how AI is governed in practice, according to Saifr.

Employment-related use cases, anti-bias standards, and protections around digital likeness are at the forefront of these developments.

This shift reflects a wider global trend. A Stanford study recently highlighted a 21.3% increase in AI-related regulations across 75 countries, signalling a worldwide effort to ensure that AI is deployed ethically and transparently. In the US, seven states have formally enacted legislation governing the use of AI, while others are in various stages of considering their own rules. The landscape is becoming increasingly fragmented, with each state setting its own requirements around safety, security, consumer protection, and bias mitigation.

California has been one of the most active states. Under CA A.B. 2602, employment agreements cannot enforce provisions that allow employers to create or use a digital replica of a worker’s voice or likeness without explicit consent from the individual or their union. From October 2025, another California rule will bring automated decision-making systems under the Fair Employment and Housing Act, meaning employers may violate discrimination laws if AI tools produce biased outcomes based on protected characteristics.

Colorado has also introduced targeted obligations. Under CO S.B. 205, which takes effect in June 2026, employers must comply with “high-risk” AI system standards, including conducting bias audits for AI tools used in employment or insurance. The law aims to ensure high-impact AI systems are monitored and evaluated before they shape decisions affecting individuals.

Illinois has expanded the Illinois Human Rights Act to restrict AI use in employment settings from January 2026. The amendment prohibits the use of AI for recruitment, hiring, promotions, training selection, or disciplinary decisions where it may result in discrimination against protected classes. The focus is broad, covering almost all employment-related decision-making processes.

Maryland and New York have embedded transparency requirements into their approaches. Maryland requires employers to obtain employee consent before using AI-driven facial recognition technology for hiring, while New York’s state-level bill restricts the use of digital replicas of workers’ voices or likenesses. New York City has introduced additional local-level obligations through its Administrative Code, mandating bias audits and transparency for automated tools used in hiring and promotions.

Texas has implemented one of the broadest prohibitions to date. From January 2026, employers cannot develop or use AI to intentionally discriminate against protected classes, manipulate behaviour, conduct social scoring, or uniquely identify individuals without consent. The law establishes a set of guardrails designed to limit harmful or manipulative applications of AI systems.

Although most of these state regulations currently centre on employment, the scope is expected to widen. Future rules may address AI in financial services, healthcare, consumer technology, and other sectors. This evolving patchwork raises concerns for firms operating nationally—if all 50 states produce different rules covering similar issues, compliance burdens will increase substantially.

In more regulated industries such as financial services, bodies like FINRA and the SEC are assessing how AI should be governed within existing frameworks. Their work continues despite recent withdrawals of proposed federal regulations, indicating that supervisory bodies still recognise AI’s transformative potential and associated risks.

It is clear that both federal and state governments acknowledge the significant impact AI will have across society. What remains uncertain is how these regulations will align, how conflicts will be managed, and what compliance will look like for organisations operating in every US state. As activity accelerates, businesses will need to monitor developments closely to ensure they remain compliant as the regulatory landscape shifts.


Copyright © 2025 FinTech Global
