By Alexandra Mousavizadeh, CEO and co-founder of Evident


The rush to deploy Generative AI tools like ChatGPT has created a backlash and led to calls for a pause on deployment while we work out how to regulate these powerful systems. The challenge is one of imagination - what should the regulation look like and how should it be enforced? If they have the will to lead, the banks might hold the key to a workable solution...


The basis for The Future of Life Institute’s call to pause experimentation with large artificial intelligence (AI) systems was to buy some time. Time to do what, exactly?


OpenAI’s CEO and co-founder, Sam Altman, has argued that a vital ingredient for a positive AI future is an effective global regulatory framework. Yet no one can agree what this might look like. The 18,980 signatories to the open letter (some of whom have since backed out or claimed to have been misrepresented) have not put forward a plan.


The current regulatory landscape for AI is a messy patchwork of national- and industry-level initiatives. These range from FTC and FDA efforts to address specific yet limited industry use cases, to the EU’s AI Act and the US’s Algorithmic Accountability Act - both admirable in their intent to create a more universal framework, but flawed in their appraisal of risk.


Crucially, no consensus has been reached amongst technologists, executives or regulators on what it means to be an end user of AI-based products - and hence, what sort of regulatory framework is appropriate to pursue.


Like opening a bank account

For many people, AI and its potential harms remain theoretical or fantastical - conjuring up images of Terminator and Skynet rather than practical concerns. And yet, seen within an industry-specific setting such as financial services, it’s easier to understand the AI risks that are already emerging: being defrauded of your life savings, unfairly denied insurance for medical care, or extorted over loan repayments.


I’d argue that being an end user of an AI system is comparable to opening a new bank account, stepping onto a plane or taking a prescription pill - all activities in industries that require strict external oversight due to the acknowledged risks involved.


When we open a bank account, we do so with the knowledge that we are protected by a rigorous, democratically constructed set of accredited safety standards, enforced through regulation and subject to external oversight. The regulator sets the standards for the industry, and while that won’t fully prevent bank runs, ID fraud or other depositor woes, it protects the vast majority of customers most of the time - to the benefit of the industry and society at large.


It follows that we ought to create similar standards for any provider seeking to offer AI-based products within these industries, and to ensure clear oversight to prevent breaches - intentional or otherwise - from occurring. We should even consider setting the bar higher for AI standards, given the speed, scale and scope of deployment that ChatGPT has shown these systems can achieve.


Banks can set the agenda

The idea of a global regulatory framework for AI is bandied about far more often than it is scrutinised. And yet one key lesson from the financial sector is that a network of overlapping national regulatory bodies - each with a remit grounded in law and the power to investigate and punish organisations that transgress - is the closest humanity has ever come to controlling systems which, like AI, are both powerful and profitable.


Look no further than the cryptocurrency sector, which is being dragged kicking and screaming into the regulatory perimeter of traditional banking, shedding the worst of its fraud, misconduct and exploitation of users as it goes.


Similarly, by approaching AI through the prism of the strict regulatory regime it has operated under for years, the banking industry has already taken significant pre-emptive steps to prevent potential harms from occurring.


The world’s leading banks have already developed best practices that are well suited to an AI-led future: kitemarked security (to stop users from seeing one another’s data, as happened with ChatGPT); a mixture of auditing and industrial safety standards; accreditation for practitioners (most AI developers currently have no training at all in ethical application); transparency and accountable coding; and interdepartmental oversight so that leaders get early warning when something is going wrong. And of course, there is intense scrutiny by regulators and regular submission of financial and other performance data.


All of these tools will be extended to AI deployment in banking use cases. The challenge - and opportunity - for banks is to embrace this publicly. Banks have no greater asset than trust. Getting ahead of this topic will enable them to build public confidence in their approach and set an example across the wider economy - potentially encouraging some of their corporate and SMB clients to embrace a similar mindset.


Seizing the initiative

Time is running out for industry leaders, policymakers and regulators to fill the governance vacuum and ensure that the pursuit of powerful AI proceeds with greater caution and consideration.


Getting artificial intelligence regulation right is a matter of imagination, resources and speed. The imaginative step of extending current banking best practice to cover AI is a feasible one. Banks do not lack resources. It’s time for banking leaders to seize the initiative, reaffirm their commitments to - and internal standards around - responsible AI self-governance, and drive the public discourse around workable, industry-specific AI regulation.