EU sets global AI standards with new regulatory framework

The EU has enacted the world’s first comprehensive legislation on artificial intelligence, known as the European AI Act.

The regulation, designed to create a trustworthy and secure environment for AI development and usage across the EU, underscores the EU’s commitment to protecting fundamental rights while fostering technological advancement and innovation.

The AI Act introduces a classification system for AI technologies based on the level of risk they pose, ranging from minimal to unacceptable. AI systems that present minimal risk, such as AI-enhanced spam filters and recommendation engines, are subject to no specific obligations, though providers may voluntarily adhere to additional codes of conduct. AI systems that carry specific transparency risks, such as chatbots and synthetic media, must clearly disclose their non-human nature to users.

This includes mandatory labelling of AI-generated content such as deep fakes, ensuring users are aware they are interacting with or viewing synthetic content.

The regulation imposes stringent requirements on high-risk AI applications, which include technologies used in critical areas like recruitment, loan eligibility assessments, and autonomous robotics. These systems must adhere to robust standards for data quality, risk mitigation, transparency, human oversight, and cybersecurity. To support innovation while ensuring compliance, the EU will establish regulatory sandboxes.

AI applications that pose an unacceptable risk to people’s rights, such as those manipulating behaviour or enabling social scoring, are outright banned under the AI Act. The Act also sets forth provisions for general-purpose AI models, increasingly prevalent in various applications, to enhance transparency and address potential systemic risks.

Enforcement of the AI Act will be coordinated by the newly established AI Office at the EU level, supported by the European Artificial Intelligence Board and other advisory bodies, ensuring uniform application across member states. Companies that fail to comply face substantial fines of up to 7% of their global annual turnover for the most severe breaches.

The AI Act’s main provisions will apply from 2 August 2026, with certain bans taking effect earlier and a transitional period that includes voluntary compliance initiatives such as the AI Pact.

This pioneering legislation by the EU could set a global benchmark for AI regulation, balancing innovation with safety and ethical considerations.
