Should AI be made accountable for its decisions?


It is hard to hold AI accountable for an error; a human is needed to support the judgements being made, according to a panel at the Global RegTech Summit 2019.

Ever since the financial crisis of 2008, accountability has been a big concern in the financial market. The UK was one of the countries to put strict measures in place to ensure businesses know who is responsible for what. In 2016, the FCA introduced the Senior Managers & Certification Regime (SM&CR), which set out to foster a culture of staff taking personal responsibility for their actions. AI and machine learning technology is becoming smarter and more widely used, but can it be accountable?

The panel consisted of Reg Tech Berlin founder Astrid Freier, Theta Lake co-founder and CEO Devin Redmond, Redland compliance director and co-founder Carl Redfern, Eigen Technologies co-founder Lewis Z. Liu, and Credit Suisse managing director Raymond Hanson. Their discussion revolved around AI and machine learning technology and its use within the world of regulation.

Astrid Freier chaired the debate, putting to the panel the question of the ethical challenges of AI and whether a machine can be sanctioned.

AI is only as smart as the data it is given; sentient technology that can think for itself does not yet exist, so everything an AI does stems from its programming. Can it really be trusted to operate by itself, without a human on the sidelines to ensure the correct actions are being taken? Technology might use data and generate insights, but everything can be traced back to who taught it what.

Eigen Technologies co-founder Lewis Z. Liu gave an example of a client that wanted a human involved rather than giving an AI full rein over operations; this boiled down to a need for accountability.

The RegTech startup enables companies to transform documents into data, extracting key information for compliance, due diligence and anti-fraud purposes. The day-one version of its product used algorithms to fully automate these tasks without an audit trail, because it was 95 per cent accurate, more than a human could be. However, this did not appeal to clients: they could not simply report the results to a regulator, but needed a full training history report, the audit history and an explanation of how decisions were derived.

After revising its approach, the company now ensures everything a machine was taught can be traced back to the programmer, when it was implemented and why the technology was trained that way, he said. This instils more confidence in the platform, as a human can explain why decisions were made.
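As a rough illustration of that kind of training audit trail, the sketch below (in Python, with hypothetical names, not Eigen's actual schema) records who taught the model each example, when and why, so that a decision can later be traced back to a person:

```python
# Hypothetical sketch of a training audit trail: every labelled example
# carries who supplied it, when, and why, so a model decision can be traced
# back to the humans who taught it. Illustrative only, not Eigen's schema.
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class TrainingRecord:
    example_id: str
    label: str
    annotator: str         # who taught the model this example
    recorded_at: datetime  # when it entered the training set
    rationale: str         # why the label was chosen


audit_log: dict[str, TrainingRecord] = {}


def record_example(example_id: str, label: str, annotator: str, rationale: str) -> None:
    """Log an example before it is used for training."""
    audit_log[example_id] = TrainingRecord(
        example_id, label, annotator, datetime.now(timezone.utc), rationale
    )


def explain(supporting_ids: list[str]) -> list[TrainingRecord]:
    """Trace a model decision back to the training examples behind it."""
    return [audit_log[i] for i in supporting_ids if i in audit_log]
```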

Liu said, "I love the point around enhancing individual accountability. I will say the one caveat from my experience of this is more in the self-driving car space. If a self-driving car crashes and kills someone, who's responsible? The software engineer, the people who created the training data, or something else? And at that point it's extremely difficult to trace back."

In a bank, an AI is unlikely to end up killing someone, unless the '80s sci-fi movies are correct. Nonetheless, the accountability of AI is paramount. If a program's miscalculations lead to money being laundered or to erroneous investment decisions, someone is at fault. Is it enough to slap an AI on the wrist and say it's not its fault, it did what it was programmed to do? Would that make the programmer at fault? Mistakes often work their way into programs by sheer accident, but should the programmer be penalised for an honest slip-up?

Redland compliance director and co-founder Carl Redfern joined the debate, stating that regulators around the world are seeking opportunities to improve individual accountability. He argued that it is not plausible for a machine to be sanctioned, but the senior manager personally accountable for the technology and what it handles can be.

He said, "I think that there is a potential fine line between responsibility and accountability. I think it gets very complicated to work out who is responsible and who is the design authority, who has made the decisions that resulted in this particular adverse outcome.

"I think the key, though, to ensuring that the opportunities of AI and ML continue to be enhanced and developed and taken up in the business world is to ensure that even if you don't understand it, somebody is identified as accountable. Then it's down to their attitude to risk. It comes back to the point about whether they are comfortable with underwriting this activity because it's being undertaken in my name."

Redland provides businesses with accountability and compliance solutions for SM&CR. Its platform is a people-centric application which fits seamlessly into an organisation's existing infrastructure. A user can understand everybody's role within a business, their responsibilities and reporting lines, making it easy to identify and highlight any transactions which need monitoring or checking.
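A minimal sketch of what such a people-centric responsibility map might look like, assuming a simplified, hypothetical data model rather than Redland's actual one:

```python
# A simplified, hypothetical data model (not Redland's actual one) showing
# how roles, responsibilities and reporting lines can be mapped so every
# activity has an identifiable accountable owner.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Person:
    name: str
    role: str
    responsibilities: set[str] = field(default_factory=set)
    reports_to: str | None = None  # name of this person's manager


def accountable_owner(people: list[Person], activity: str) -> Person | None:
    """Return whoever owns an activity, or None if nobody is accountable."""
    return next((p for p in people if activity in p.responsibilities), None)


staff = [
    Person("A. Senior", "Head of Compliance", {"aml-monitoring"}),
    Person("B. Analyst", "Compliance Analyst", {"trade-surveillance"}, reports_to="A. Senior"),
]

# An activity with no named owner is exactly the gap SM&CR is meant to close.
if accountable_owner(staff, "algo-trading-oversight") is None:
    print("No accountable owner assigned - flag for review")
```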

Looking through the crystal ball, Redfern described AI's future as "sort of the Microsoft Office paperclip but with superpowers". Instead of AI becoming all-powerful, Redfern sees it becoming more of a support function for humans, so they can spend less time on rudimentary tasks and focus on more important ones.

AI can have a massive impact on the market, and it would be a shame to forgo its tremendous benefits purely because of doubts over trusting it and the accountability issues that could arise. Typically, when people discuss the nature of AI, the conversation turns to how it will spell the end of humans in the workplace: instead of an office of bright-eyed staff, there will be a bot. Contrary to this, there was great support from the panel for bringing human and AI processes together.

Theta Lake co-founder and CEO Devin Redmond said, "The best usage of ML with a human is actually that combination. You can extract a lot of cognitive insights in a much more scalable way across a lot more datasets and then allow humans to do what they do best, which is apply judgement and reasoning around that and make an action.

"To rely on a machine by itself to make the ultimate decision, you're then relying on how well you trained it, what type of data sets you gave it, are you tying it to the right problem, and if you do that inappropriately you're going to end up with a mess of things that you have to unwind. Combining those two is very much the right way to do that."
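One common way to realise that combination is confidence-based triage: the model decides the clear-cut cases and routes anything it is unsure about to a human reviewer. The sketch below is an illustrative pattern under that assumption, not Theta Lake's implementation:

```python
# Illustrative confidence-based triage (an assumption, not Theta Lake's
# implementation): the model handles clear-cut items at scale and routes
# anything it is unsure about to a human reviewer for judgement.
from __future__ import annotations

from typing import Callable

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; tuned to the firm's risk appetite


def triage(
    items: list[str],
    classify: Callable[[str], tuple[str, float]],  # returns (label, confidence)
) -> tuple[list[tuple[str, str]], list[str]]:
    """Split items into machine-decided results and a human review queue."""
    auto_decided: list[tuple[str, str]] = []
    needs_human: list[str] = []
    for item in items:
        label, confidence = classify(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_decided.append((item, label))  # machine is confident enough
        else:
            needs_human.append(item)            # human applies judgement
    return auto_decided, needs_human
```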

California-based Theta Lake uses AI, natural language processing and deep learning to automatically detect regulatory and corporate compliance risks in video, text or audio. Its services are also capable of generating insights to improve and automate workflows and help businesses increase the number of accurate reviews they conduct.

New means of communication are entering the market, each opening up new challenges for firms to keep track of. Regulations like MiFID II and Dodd-Frank have clear stipulations requiring firms to track what is being disclosed and to ensure everything is captured. One development in the AI technology stack Redmond is seeing is the technology being brought forward to the front end of business applications.

He said, "One of the things that we've been working on with Microsoft is having a compliance advisor built into the communications. By having a compliance advisor at the front end, you're not just doing recording, review and supervision, you're actually providing advice in real time as people do things like turn on the camera or share a document. It can make sure that a disclosure is right at their fingertips, with AI available to be part of that so that it can deal with the most common mistakes and the most common things that people do."
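A hedged sketch of that idea: user actions in a communications tool trigger a real-time disclosure prompt. The event names and disclosure texts below are invented for illustration, not the actual Microsoft integration:

```python
# Hypothetical sketch of a front-end compliance advisor: user actions in a
# communications tool trigger real-time guidance before the action completes.
# Event names and disclosure texts are invented for illustration; this is not
# the Microsoft integration described above.
from __future__ import annotations

DISCLOSURES = {
    "camera_on": "Reminder: this meeting may be recorded for compliance purposes.",
    "share_document": "Check the document for client data before sharing.",
}


def on_user_action(event: str) -> str | None:
    """Return the disclosure to surface for this event, if any applies."""
    return DISCLOSURES.get(event)


# Example: surface advice as events arrive from the front end.
for event in ("camera_on", "share_document", "send_chat"):
    if (advice := on_user_action(event)) is not None:
        print(f"[{event}] {advice}")
```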

Copyright © 2019 FinTech Global