Amidst a large swathe of planned job cuts at Lloyds, at the beginning of November the bank announced that there was a silver lining – a £3 billion investment programme that will see the country’s biggest high-street lender radically transform its digital strategy. While 6,000 existing roles are being cut from a broad range of areas, 8,000 are being created to focus on digital expansion, including in the group transformation unit. And, as Alex Fagioli, CEO of Tectrade, points out, it’s about time for Lloyds, as it begins to play catch-up with an industry that has quietly been revolutionised by high-street banks and start-ups that have gone all-in on digital banking.
Digital banking provides a great deal of benefits to administrators and customers alike. Customers are given a more flexible way of banking, accessing their accounts and transferring their money without relying on bank hours. Managers have unprecedented insight into the activity of branches and can offer their customers services they were previously incapable of providing. However, the challenges and risks that come with digital transformation have led traditionally large financial institutions like Lloyds to implement such practices poorly, to the detriment of all involved.
In April, a routine systems upgrade at TSB went awry and left 1.9 million customers locked out of their accounts for up to a month. Similarly, on Friday 1 June, 5.2 million Visa transactions failed across Europe as a result of a single faulty switch in one of Visa’s data centres. Nor is this solely a European issue; Atlanta-based SunTrust – a bank with 1,400 branches and 2,160 ATMs – experienced a significant outage to its online and mobile banking platforms in September due to a botched upgrade. In all of these cases, the outages weren’t the result of a cyberattack or weather-related problems. Instead, they came as a result of seemingly insignificant technical factors that had been overlooked – and Lloyds would be wise to heed these cautionary tales.
In the first two instances, the causes of the outages are very clear – and they were entirely preventable. TSB rushed its upgrade, hastily initiating the update across its entire system at once. For a technical reason we will likely never know, the update took down the entire bank and left it at a standstill while it tried to pick up the pieces. Even once it managed to get everything back in place, TSB was left permanently scarred by the event, with its reputation still reeling. The preventative measure here would have been a gradual rollout, as opposed to a sweeping installation. Had the upgrade been piloted on non-essential systems first, the bugs would likely have been spotted early, with little fuss and no media spotlight.
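The gradual-rollout approach described above can be sketched in a few lines. This is a minimal illustration, not any bank's real deployment tooling: the system names, tiers and health check are all hypothetical, and a production pipeline would apply the actual upgrade and poll real transaction metrics where the placeholders sit.

```python
def health_check(system: str) -> bool:
    """Placeholder: in practice, poll the upgraded system's key transactions."""
    return True  # assume the check passes in this sketch


def staged_rollout(tiers: list[list[str]]) -> list[str]:
    """Upgrade tier by tier, stopping at the first failure so the blast
    radius is confined to the non-essential systems piloted first."""
    upgraded = []
    for tier in tiers:
        for system in tier:
            # apply_upgrade(system) would run here
            if not health_check(system):
                print(f"Rollout halted: {system} failed its health check")
                return upgraded  # later (critical) tiers are never touched
            upgraded.append(system)
    return upgraded


# Non-essential systems are piloted first; core banking goes last.
tiers = [
    ["staff-intranet", "branch-reporting"],  # pilot tier
    ["mobile-notifications"],                # customer-facing, low risk
    ["payments-core", "account-ledger"],     # critical tier
]
print(staged_rollout(tiers))
```

Had TSB's update gone out this way, a failure in the pilot tier would have halted the rollout before the core banking systems were ever touched.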
Likewise, the Visa incident came as a result of a single faulty switch, which betrays a lack of understanding of the company’s own systems. It is shocking how few companies have carried out any form of disaster recovery testing on their infrastructure. Administrators cannot fully understand the systems they are responsible for without testing them in a controlled, simulated environment. With a controlled disaster test, that faulty switch would have been highlighted and those 5.2 million transactions would have completed. It’s similar to a car: MOTs are essential because they highlight any issues well before they have a serious effect on the vehicle’s performance. Banks must carry out a cyber MOT to keep their systems in check and to give IT teams full working knowledge of any potential issues.
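The kind of controlled disaster test described above can be sketched as a failure simulation: take each component offline in turn and check whether traffic can still be routed. The topology below is entirely hypothetical – in it, a standby switch is silently faulty, mirroring the sort of overlooked fault behind the Visa outage, and the test surfaces its partner as a single point of failure.

```python
# Monitoring's view of the estate; the dc2 standby is silently faulty.
baseline = {
    "dc1-switch-a": True, "dc1-switch-b": True,
    "dc2-switch-a": True, "dc2-switch-b": False,  # faulty standby
}


def routes_available(state: dict[str, bool]) -> bool:
    """Traffic flows only if at least one switch in every redundant pair is up."""
    pairs = [("dc1-switch-a", "dc1-switch-b"), ("dc2-switch-a", "dc2-switch-b")]
    return all(state[a] or state[b] for a, b in pairs)


def disaster_test(baseline: dict[str, bool]) -> list[str]:
    """Fail each switch in isolation and report any whose loss downs the service."""
    spofs = []
    for victim in baseline:
        state = dict(baseline)
        state[victim] = False  # simulate the failure in a copy, not production
        if not routes_available(state):
            spofs.append(victim)
    return spofs


print(disaster_test(baseline))  # the faulty standby is exposed before a real outage
```

Run regularly – the cyber MOT the article calls for – a test like this flags the faulty standby while it is still a paper exercise rather than 5.2 million failed transactions.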
But all of these were preventable issues, and the accepted wisdom today is that outages are a matter of when, not if.
Thus far we’ve only addressed routine operations, but cyberattack is of course an omnipresent threat. Ransomware has spent the past couple of years as the ‘big bad’ in cybercrime, and it is an even bigger threat to the financial sector. Over the past 12 months, the financial services and insurance sector was attacked by ransomware more than any other industry, with the number of cyberattacks against financial services companies in particular rising by more than 80%. If a bank were hit by a ransomware attack, all online systems for banking and insurance transactions would need to be taken offline, rendering the organisation unable to operate. According to a report from Osterman Research, there is a 50% chance of employees in this industry suffering productivity loss, a 30% chance that the financial and insurance services will shut down temporarily, and a 20% chance of revenue loss and an adverse effect on customer perception. In cases of ransomware, data recovery can be very difficult, as a large amount of customer information is stored across a variety of disparate systems. As such, many organisations may feel they have no choice but to pay the fee demanded of them to regain access to the data.
Equally unpreventable are environmental factors. Areas like the southern states of the USA are frequently hit by hurricanes and tropical storms, which can cause large disruptions to everything from schools to banks. Many buildings there have to be constructed with this in mind, and network operations should be designed with the same mindset. In the UK, by contrast, we don’t have to deal with such extreme weather conditions, but environmental planning must still account for freak accidents. A burst pipe in a shared building, or road workers drilling through electrical or network cabling, could see a bank offline for an indeterminate period through no fault of its own. One example of this in action was National Australia Bank, which in May suffered a power outage that downed ATMs, Eftpos and online banking across the country for five hours.
In all of these situations where outages can occur, banks must make sure they have the capacity to get their systems back online, and fast. The best way to do this is by adopting a zero-day approach to architecture. Zero-day architecture won’t prevent an outage, but it will mitigate the effects, allowing organisations to minimise downtime and recover from backups without having to worry about lost data.
A zero-day recovery architecture is a service that enables administrators to quickly bring workloads or data back into operation in the event of an outage, without having to worry about whether the workload is still compromised. An evolution of the 3-2-1 backup rule (three copies of your data, stored on two different media, with one backup kept offsite), zero-day recovery enables an IT department to partner with the cyber team and create a set of policies defining how data backups are stored offsite, normally in the cloud. The policy assigns each workload an appropriate storage cost – and therefore recovery time – according to its strategic value to the business. It could, for example, mean that one workload needs to be brought back into the system within 20 minutes while another can wait a couple of days.
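A tiered recovery policy of this kind can be sketched as a simple mapping from workloads to recovery-time objectives. The workload names, tiers and timings below are illustrative assumptions, not any vendor's actual policy format; the point is that ordering recovery by strategic value falls out of the policy automatically.

```python
from datetime import timedelta

# Each tier pairs a recovery-time objective with an implied storage cost.
POLICY = {
    "critical": {"rto": timedelta(minutes=20), "tier": "hot replica"},
    "standard": {"rto": timedelta(hours=4), "tier": "cloud snapshot"},
    "archive": {"rto": timedelta(days=2), "tier": "cold storage"},
}

# Workloads classified by their strategic value to the business.
WORKLOADS = {
    "payments-ledger": "critical",      # must be back within 20 minutes
    "customer-portal": "standard",
    "marketing-analytics": "archive",   # can wait a couple of days
}


def recovery_plan(workloads, policy):
    """Order workloads by urgency so the runbook restores the most
    valuable systems first."""
    ordered = sorted(workloads, key=lambda w: policy[workloads[w]]["rto"])
    return [(w, policy[workloads[w]]["rto"]) for w in ordered]


for name, rto in recovery_plan(WORKLOADS, POLICY):
    print(f"{name}: restore within {rto}")
```

Because the 20-minute and two-day targets live in one policy table, the trade-off between storage cost and recovery speed is made explicitly, per workload, rather than rediscovered in the middle of an outage.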
As it begins its massive investment in digital transformation, Lloyds could very easily sink its budget into exciting features that promise to improve the lives of customers and employees. However, without learning the lessons of the high-profile outages suffered by other banks that have undergone their own transformations, Lloyds is doomed to repeat the same mistakes. You can promise all the features in the world, but without a solid foundation the bank will essentially be a house of cards, ready to collapse at the slightest sign of danger. All banks, regardless of size, must prioritise minimising downtime by having common-sense patch-management policies, full knowledge of their systems gained through disaster testing, and a recovery strategy that enables them to get back online at speed.