How the US Executive Order on AI sets the stage for future regulations

The recent executive order on AI issued by the Biden Administration marks a significant step in shaping the national AI landscape.

According to 4CRisk.ai, the directive's stringent measures, such as enhanced transparency requirements and rigorous safety testing mandates, aim to mitigate the potential risks of AI technologies, ensuring their security and reliability before broad adoption.

Advanced AI developers are now obliged to disclose their safety test results to the U.S. government, and firms must alert federal authorities when training potentially hazardous AI models. However, compared with the EU AI Act, the EO appears less stringent, lacking escalating penalties for non-compliance and clear repercussions, the firm stated.

The administration underscores the necessity for sustainable and trustworthy AI, focusing on building and sustaining public trust through stringent standards. The National Institute of Standards and Technology (NIST) is tasked with developing exhaustive standards for red-team testing, essential for pre-release safety evaluations of AI systems.

Furthermore, the Department of Homeland Security is set to enforce these standards across critical infrastructures, introducing the AI Safety and Security Board to elevate national security protocols. This comprehensive approach highlights the U.S. commitment to fostering responsible AI innovation that balances technological advances with safety and trust.

Despite these measures, voices in the tech community and beyond are questioning whether the EO sufficiently addresses the rapid evolution of AI, particularly outside critical infrastructure domains.

The EO broadly covers key areas including national security, privacy, and fairness, and extends its reach to the ethical use of AI in education, healthcare, and criminal justice. It introduces stringent standards for biological synthesis screening to mitigate misuse in life sciences, reflecting a broad-spectrum strategy aimed at national and consumer protection.

Comparison with the EU's legislative framework

Notably, the EO was signed at a pivotal time, just before a major G7 meeting and an international AI safety summit hosted by the UK, emphasising the urgency of a cohesive national AI policy.

The White House released a fact sheet highlighting the EO's focus on civil rights and calling for bipartisan data privacy legislation, suggesting a robust approach to algorithmic bias and privacy. Unlike the EU's AI Act, which offers a uniform regulatory framework across Member States, the U.S. opts for a more segmented approach, potentially leading to varied standards across industries and jurisdictions.

While the EO lays foundational regulations, it stops short of addressing all AI challenges, indicating a need for more enduring legal frameworks. The forthcoming months will be crucial in evaluating the effectiveness of these regulations and their ability to keep pace with AI’s rapid advancement. Critics argue that more decisive actions and clearer consequences are necessary to ensure comprehensive management of AI-related risks.
