GlobalData has highlighted that adopting responsible AI practices can give businesses a significant competitive edge while also mitigating ethical and legal risks.
As artificial intelligence evolves, concerns about its misuse and the lack of adequate regulation remain prevalent. Without oversight, issues such as data protection breaches, copyright infringement, and misinformation are likely to escalate.
An open letter signed by 100 industry professionals emphasised the urgency of addressing AI risks, likening their potential societal impact to that of nuclear war. Despite these risks, over a third of British businesses see regulatory uncertainty as a barrier to adopting AI technologies. Laura Petrone, principal analyst at GlobalData, believes businesses can avoid this uncertainty by embracing a responsible AI strategy early in their adoption journey.
Continuous risk management through responsible AI
Petrone explained that responsible AI entails managing risks from ethical and legal perspectives through an iterative process. “Responsible AI means managing AI-related risks from an ethical and legal perspective. It’s not as simple as ensuring compliance; there is an iterative process in place to achieve this, and for companies to be ethically protected they need to move through it,” she said.
She further outlined the need for universal AI principles such as transparency, accountability, and social impact, which serve as the foundation for responsible AI practices. Companies that adhere to these principles and commit to complying with future regulations can engage with upcoming standards and certifications, strengthening their risk mitigation strategies.
Global approaches to responsible AI
Countries worldwide are developing distinct approaches to enforcing responsible AI practices. The UK is creating a principles-based framework tailored to specific sectors, allowing regulators to adapt their guidelines. The US takes a lighter regulatory approach to encourage innovation, while Japan views AI as a tool to address population decline and labour shortages, with innovation driving its policy development.
Enforcing AI regulations
Enforcement efforts are already underway, with regulators monitoring AI models and systems for compliance. For example, Italy’s temporary ban on OpenAI’s ChatGPT in 2023, imposed over GDPR violations, led to policy changes and the lifting of the ban. Petrone explained, “In this example we can see two of the five key aspects for responsible AI in action: regulations were enforced, resulting in a temporary restriction, and in response information was voluntarily presented, leading to a change in OpenAI policy and a resolution.”
Additionally, copyright disputes between AI companies and creators underscore the need for transparency in AI training processes. High-profile cases, such as claims against Stability AI for using copyrighted material without consent, demonstrate the consequences of inadequate guidelines.
Copyright © 2025 FinTech Global