
2017 was a busy year for regulatory compliance and technology across the globe. We witnessed countless mass data breaches, sexual misconduct claims, money laundering scandals and, of course, the Wild West that is the blockchain. Alongside that, we continued to see significant advancements in Artificial Intelligence (AI) and Machine Learning (ML) across all industries, applied to automate business functions, gain insights into behavior patterns, and more. This year, the banking industry will adopt ML- and AI-based automation for enhanced efficiency and data-driven decision making.

Banks were slow to adopt ML-based automation in 2017, but to remain competitive in 2018 and beyond, they will have to consider how AI- and ML-fueled technologies can drive their growth and improve the efficiency of their business processes.

Many financial institutions have been quick to experiment with AI applications on the front end of the business, for example, to streamline and improve customer service via chatbots. In general, the value proposition is that AI can automate manual and repetitive roles, but now we are seeing AI applied to broader data-driven analysis and decision making.

This not only reduces costs and saves time, it also mitigates the risk of human error. The machine is well suited to consuming large data sets while also self-learning over time. But before even considering the tremendous opportunities to implement this technology on the back end of the business, organization leaders will need to educate themselves on how the technology actually works.

 

AI in the Enterprise

While many AI-based solutions have advanced over the years, the financial industry remains suspicious of the science behind the decisions these technologies make. Now we are seeing a shift towards increased transparency in AI-based solutions, where the reasoning behind machine learning (ML) based decisions can be justified, tracked, and verified. This should help move along industries on the cusp of adoption.

In the long term, artificial intelligence and machine learning can be applied to reduce costs and time by automating once-manual processes. However, most AI algorithms today are only about 80% accurate, which falls short of business standards of accuracy. That leaves roughly 20% of outputs flawed, requiring human input to bridge the gap. Any AI solution that does not include a human component in its development carries an inherent design flaw. A common rule of thumb is that the most successful AI models follow an 80:20 split, where 80% of the work is AI-generated and 20% is human input. This is implemented in the form of supervised learning or human-in-the-loop.

 

Human-in-the-loop Integration

A best practice in the successful development of AI includes a human component, typically referred to as "Human-in-the-Loop" or a supervised learning model. The way it works is that the machine learning model makes the first attempt to process the data and assigns a confidence score indicating how sure the algorithm is of its judgement. If the confidence value is low, the item is flagged for one or more humans to help with the decision. Once the humans decide, their judgements are fed back into the machine learning algorithm to make it smarter. Through this active learning, the intelligence of the machine is strengthened, but the quality of the training data depends on the human contributors.

(CrowdFlower Inc, n.d.)
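As a concrete illustration, the confidence-based routing described above might look like the following Python sketch. The 0.80 threshold, the toy model and the reviewer function are all invented for illustration; a real system would plug in an actual classifier and review workflow.

```python
# Minimal human-in-the-loop sketch: route low-confidence predictions to a
# human reviewer and fold the human labels back into the training set.
# The 0.80 threshold and both stand-in functions are illustrative only.

def model_predict(text):
    """Stand-in for a real classifier: returns (label, confidence)."""
    flagged = "breach" in text.lower()
    return ("alert" if flagged else "ok", 0.95 if flagged else 0.60)

def human_review(text):
    """Stand-in for a human judgement on an escalated item."""
    return "alert" if "fraud" in text.lower() else "ok"

def process(documents, threshold=0.80):
    decisions, new_training_data = [], []
    for doc in documents:
        label, confidence = model_predict(doc)
        if confidence < threshold:
            # Low confidence: escalate to a human, then keep the
            # human's label as fresh training data for the model.
            label = human_review(doc)
            new_training_data.append((doc, label))
        decisions.append((doc, label))
    return decisions, new_training_data

docs = ["Possible data breach reported", "Routine fraud check memo"]
decisions, feedback = process(docs)
```

The key design point is the feedback list: every escalated item comes back as a labelled example, which is what makes the loop "active learning" rather than simple manual review.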

Some data analysis is specific and complex, as is the case with financial regulation. The evolving and complex nature of regulation is a tough subject matter to master. AI in RegTech requires in-depth knowledge of the regulatory framework and of how to read and interpret the text. In these types of fields, expertise is far more critical than the tool. However, if a tool can incorporate subject matter experts into the machine learning model, it becomes far more viable.

Expert-in-the-Loop takes Human-in-the-Loop to another level, using subject matter experts to train the machine and flag its errors. For example, a well-trained machine in the RegTech industry could save a compliance officer countless hours of researching, reading, and interpreting regulations by automatically classifying documents into topic-specific categories or by summarizing the aspects of a document that have changed from a previous version.
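A minimal sketch of topic classification along these lines, assuming an invented set of categories and keyword lists; a production RegTech system would use a trained model rather than keyword matching, but the interface (document in, topic out) is the same idea.

```python
# Toy sketch of classifying regulatory documents into topic categories.
# The categories and keyword lists are illustrative assumptions only;
# a real system would learn these associations from labelled data.

TOPIC_KEYWORDS = {
    "AML":       {"laundering", "suspicious", "kyc"},
    "Reporting": {"disclosure", "filing", "quarterly"},
    "Capital":   {"basel", "liquidity", "buffer"},
}

def classify(document):
    """Assign the topic whose keyword set overlaps the document most."""
    words = set(document.lower().split())
    scores = {topic: len(words & kws) for topic, kws in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unclassified"

print(classify("New KYC rules target money laundering"))  # AML
```

In an Expert-in-the-Loop setting, documents the classifier cannot place (or places with low overlap) would be the ones routed to a compliance expert.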

The Expert-in-the-Loop model differs from Human-in-the-Loop in one major way: Human-in-the-Loop does not differentiate between participants' aptitude for judging a particular question correctly. It relies on the wisdom of the crowd: if enough people participate, the averaged response will tend towards the correct result, so the response of a college student and that of a PhD student are weighed the same. Expert-in-the-Loop, on the other hand, specifically looks at the experience level of each participant to determine how their response will be weighed. With Expert-in-the-Loop, a human is essentially vetting another human's qualifications. While the cost is higher than for both unsupervised and Human-in-the-Loop models, the results are proportionally more accurate, making Expert-in-the-Loop suitable for highly specialized, industry-specific topics.
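The difference in weighting can be shown in a few lines. The reviewers, labels and weights below are invented for illustration: uniform aggregation lets the majority win, while expertise weighting lets a single expert outweigh several non-experts.

```python
# Sketch contrasting uniform (Human-in-the-Loop) and expertise-weighted
# (Expert-in-the-Loop) aggregation of reviewer judgements.
# All reviewers, labels and weights here are invented for illustration.

from collections import defaultdict

def aggregate(votes, weights=None):
    """votes: list of (reviewer, label); weights: reviewer -> weight.
    Without weights every reviewer counts equally; with weights, one
    expert's vote can outweigh several novices."""
    tally = defaultdict(float)
    for reviewer, label in votes:
        tally[label] += weights.get(reviewer, 1.0) if weights else 1.0
    return max(tally, key=tally.get)

votes = [("student_a", "compliant"),
         ("student_b", "compliant"),
         ("regulator", "non-compliant")]

print(aggregate(votes))                      # compliant (majority wins)
print(aggregate(votes, {"regulator": 3.0}))  # non-compliant (expert wins)
```

The trade-off described in the text shows up directly here: the weighted scheme needs a second judgement (who is an expert, and how much should they count?), which is extra cost, but it prevents a crowd of non-specialists from outvoting a specialist on a specialist question.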

Nearly every industry is exploring how to use AI and machine learning to increase efficiency and streamline data analysis, among other things. The future holds endless possibilities for this emerging technology: it serves as a bridge between raw information and compiled results, and that speed could bring about a new era of understanding and faster reaction times in the financial services industry. There are still many unknowns to address, but the technology is becoming more intelligent and its applications more advanced. Early adopters will have the benefit of experience on their side once the inevitable industry-wide adoption finally falls into place. Until then, organizations can pilot new applications and evaluate their impact and success. Ultimately, the financial industry will need to educate itself on the pros and cons while considering implementation of this new technology.

August 9th marked precisely a decade since the start of the financial crisis. On that date in 2007, BNP Paribas became the first major bank to acknowledge the risk of exposure to the sub-prime mortgage market when it announced that it was terminating activity in three hedge funds that specialised in US mortgage debt. Inevitably, regulation has since evolved and become extremely rigorous.

Looking back, we can attribute the financial crisis in part to model complexity and systemic obfuscation in the derivatives and credit markets. Banks have since been placed under a microscope and scrutinised for their resilience to a wide range of risks. This is hardly surprising, given the post-Global Financial Crisis (GFC) realisation of the interconnectedness of systemic risk and the financial services industry, particularly for so-called Global Systemically Important Financial Institutions (G-SIFIs).

To maintain the industry’s public good of wealth creation, liquidity provision and sound money management, financial services must address deficiencies in model governance by learning from practices implemented by “high integrity” aerospace, medical, and automotive industries. While model and data governance have been elevated in regulations such as TRIM, BCBS 239, Solvency II and the PRA Stress Test guidance, regulators and all industry participants on sell- and buy-sides must work harder to drive thorough model governance standards.

Now, the financial services industry is complex and rightly thrives on complexity – that is how it fosters wealth creation in the real economy. The industry can manage the complexity of risk better, but not by patching together systems with additional spreadsheets and tools of often unknown origin. It should work towards a harmonised architecture/platform that manages, models and reports risk for every department and job role, whether chief risk officer, risk modeller, developer or front office representative. It should also be able to deal with the uncoordinated barrage of demands from regulators and supervisors.

In software terms, this is achievable, and it has been realised in non-financial organisations. Look at the automotive industry: building an automated, safety-first model-to-production process, with validation and verification, has increased vehicle reliability, assurance and environmental protection, as well as unleashing design creativity, yielding multiple new features at reduced cost and significantly improving the driver experience. This process has enabled the delivery of safety-first assisted and fully-automated driving capabilities to the market.

Financial services can and should strive to do the same. It is quite possible for risk, projection and valuation models to be built, customised and improved rapidly, consistently and in coordination. It is also possible to implement them while minimising technical debt, applying good development processes that in turn foster continuous system improvement. Those models can then be made available to whoever needs them, whether ardent researcher, FATCA-liable executive or prospective customer.

However, this requires cultural change. Established bureaucracies need to be, at best, reformed and, at worst, dismantled. Cries of "we've always done it this way" should be challenged. Financial institutions must seek to reduce complexity wherever complexity adds nothing, both in communication and in model development.

That said, we are seeing an increase in the development of new financial models. After the crash, experts highlighted the importance of model review, or as some presenters termed it, challenger modelling. Regulation is more developed here, with CCAR promoting "benchmark or challenger models" and SR 11-7 favouring "benchmarking". The Bank of England's PRA has previously pinpointed model review and challenge as processes needing improvement.

Some have suggested that challenger models could incorporate machine learning – perhaps too black-box for current frontline regulatory calculations, but potentially interesting for validation and for improving accuracy.
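The champion/challenger idea can be sketched as follows. The two toy "models" and the holdout data are illustrative assumptions only; in practice the champion would be the established production model and the challenger a candidate (possibly ML-based) alternative, both scored on the same holdout set.

```python
# Sketch of champion/challenger benchmarking: score both models on the
# same holdout data and compare. The decision rules and the tiny dataset
# below are invented for illustration, not real risk models.

def champion(x):
    """Stand-in for an established production rule."""
    return 1 if x > 0.5 else 0

def challenger(x):
    """Stand-in for a candidate replacement rule."""
    return 1 if x > 0.4 else 0

def accuracy(model, data):
    """Fraction of (input, expected) pairs the model gets right."""
    return sum(model(x) == y for x, y in data) / len(data)

# Holdout pairs of (risk score, correct decision).
holdout = [(0.45, 1), (0.30, 0), (0.70, 1), (0.55, 1), (0.20, 0)]

champ_acc = accuracy(champion, holdout)
chall_acc = accuracy(challenger, holdout)
```

The value of the exercise is the comparison itself: if the challenger consistently outperforms on holdout data the champion has never seen, that is evidence for promoting it, and the same harness doubles as an independent validation of the champion.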

In comparison to other industries, financial services lacks a certain degree of understanding of software verification – assurance that the software, as opposed to the model, delivers true output. The aerospace industry has standards such as DO-178C, which go far beyond anything in financial services.

But new financial models have greater predictive capacity and far superior overall accuracy than was previously obtainable with linear models. Using an effective mathematical modelling tool, it may be possible to predict another financial crisis before it arrives.

With all this in mind, to achieve good model governance, financial institutions should aspire to a unified system with reduced operational, model and legal risk – one capable of servicing multiple disconnected supervisory regimes and, in turn, improving productivity through risk-aware development.

A decade on, regulators and banks have tackled the problems of model complexity and systemic obfuscation in the derivatives and credit markets. But the industry now heads into a new bubble of artificial intelligence, even bigger data, crypto-currencies, robo-advisors and a proliferating patchwork of confusing, unsourced and often poorly-supported computer languages – putting the global population at risk of new financial and economic crises.

With the wave of new technology, cybersecurity is another factor that must be considered when calculating financial risk. The details of how to measure it, and of how banks should mitigate it, are still vague, but its importance should not overshadow the ongoing problem of human error, which has the potential to cause just as much damage.

Given that we are still feeling the ramifications of the previous financial crisis, it is imperative that good model governance standards are agreed upon. In this regard, financial risk managers can and should lead the industry in developing sound model governance practices.

 

About Stuart Kozola

Stuart Kozola is Product Manager, Computational Finance and FinTech, at MathWorks. He is interested in the adoption of model-led technology in financial services, working with quants, modellers, developers and business stakeholders to understand and change their research-to-production workflows on the buy-side, sell-side, front office, middle office, insurance, and more.

Website: https://uk.mathworks.com/
