In a recent post, RegTech firm Saifr contrasted the EU's and the US's approaches to the regulation of AI.
The European Union (EU) and the United States (US) are powerhouses with the capability to shape the global governance of artificial intelligence (AI).
Their strategies for AI risk mitigation share similarities, aiming to bolster regulatory scrutiny and encourage transatlantic collaboration. On closer inspection, however, each takes a distinct stance on the multifaceted challenges posed by AI.
The EU approach
In 2018, the EU convened a group of 52 AI experts to draw up guidelines for trustworthy AI. The executive summary of their report highlighted three core tenets for an AI system throughout its lifecycle:
- Lawfulness: it must comply with existing laws and regulations,
- Ethics: it must uphold ethical values and principles, and
- Robustness: it must be robust from both technical and societal angles to avert unintended negative impacts.

Elaborating further, the group presented seven detailed requirements that emphasise ethics and robustness, with legality receiving comparatively less attention.
Moreover, the EU’s strategy for AI risk mitigation is underpinned by legislation customised for specific digital domains. It aims to introduce new stipulations for high-risk AI implementations in socioeconomic arenas, governmental AI applications, and AI-integrated consumer products. Furthermore, there’s a thrust towards greater public transparency and influence over AI designs in platforms like social media and e-commerce.
The US approach
By contrast, the US's outlook on AI risk management diverges from the EU's. The focus stateside is largely on building non-regulatory infrastructures, emphasising the business utilities and merits of AI rather than its regulatory implications.
In a notable meeting in September 2023, tech magnates including Elon Musk, Mark Zuckerberg, and Sam Altman convened with US senators to deliberate on AI's future trajectory. A shared sentiment was the imperative role of the US government in AI's regulation. Musk candidly described AI as a potential “civilisation risk”, while Google's CEO Sundar Pichai argued that “AI is too important not to regulate—and too important not to regulate well.” Financial institutions also voiced their opinions, advocating for decreased interference but amplified regulatory transparency.
In its endeavour to streamline AI regulations, the US government has actively sought to institutionalise AI governance. Furthermore, the SEC has intensified its regulatory measures, particularly with its recent proposal targeting conflicts of interest concerning predictive data analytics. The directive seeks to safeguard investor interests, focusing on technologies that influence investment decisions, like AI-powered chatbots.
Working together
Recognising the inherent benefits, the EU and the US, along with other nations, have moved towards cooperative action. This culminated in the formation of the EU-US Trade and Technology Council in 2021. Aimed at forging a shared understanding of AI, the two have agreed to work in tandem on global AI standards and to analyse the risks of emerging AI technologies. Moreover, the G7's “Hiroshima AI process” and initiatives from the OECD and the United Nations reinforce the global tilt towards comprehensive AI governance.
To further synchronise EU-US efforts, the US could consider establishing a dedicated federal agency for AI oversight. A mutual exchange of insights between the two can pave the way for a more harmonised application of AI algorithms and governing rules. As AI regulations mature, the focus should inevitably shift towards global standardisation and rigorous monitoring of AI implementations. Collaborative initiatives between countries will be pivotal in shaping democratic AI governance.
Read the full post here.
Copyright © 2023 FinTech Global