As reported by Which?, based on its analysis of Financial Conduct Authority data, the UK banking sector was hit by IT outages on a daily basis in the last nine months of 2018, demonstrating a higher frequency of major banking glitches than previously thought. Barclays alone reported 41 major incidents during those months, followed by Lloyds Bank with 37 IT failures and Halifax/Bank of Scotland with 31. Whilst TSB only reported 16 incidents, their week-long outage last year cost them around £330m, in addition to the longer-term cost of the clients lost.

Just minutes of downtime can significantly impact the financial sector, which holds the data and funds of millions of customers who are reliant on having access to these services and trust that their assets will be kept safe. To minimise the effects of a disaster and ensure business continuity in case of an IT failure or ransomware attack, businesses must invest in customised disaster recovery services which allow data to be brought back as quickly as possible in case of an outage. Diverting just a small proportion of the cybersecurity budget towards routine IT operations can deliver significant ROI in terms of increased operational resilience. Regular testing and optimisation of backup and recovery systems can deliver big rewards in terms of preventing issues and getting back up and running quickly should disaster strike.
Safeguarding your data 

In the event of an IT failure or a ransomware attack, IT operators need a way to get systems back online, and to do so fast. As noted by Gartner, the average cost of IT downtime is £4,400 per minute. The implications of IT failures go far beyond financial losses, however: they also damage the reputation of the business and lead to massive amounts of operational time lost. When a cyberattack or an IT outage takes place, it is not the failure or attack itself that causes the most harm, but the resulting downtime, which affects the productivity and credibility of the organisation. To avoid such losses, organisations must put appropriate recovery systems in place. But to do so, they must first understand the IT systems they run and know what data they hold.

To stop this nightmare scenario from becoming reality, organisations need a solution able to recover business-essential data and get the most crucial systems back online in minutes. A zero-day approach to IT architecture can do just that: it allows organisations to prioritise workloads, with a planned recovery strategy that ensures the most important systems are brought back online first in case of an outage.

A zero-day recovery architecture is a service that enables operators to quickly bring workloads or data back into operation in the event of an IT failure or cyberattack, without having to worry about whether the workload is compromised. Built on the so-called 3-2-1 backup rule – three copies of data, stored on two different media, with one backup kept offsite – zero-day recovery enables the IT department to partner with the cyber team and create a set of policies defining the architecture for offsite data backups, normally stored in the cloud. The system assigns an appropriate storage cost, and therefore recovery time, to each workload according to its strategic value to the business, as not all data is created equal in terms of business continuity.
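The policy model described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation: the tier names, recovery-time objectives (RTOs) and the `assign_policy` mapping are assumptions made for the example, while the three checks in `satisfies_321` come directly from the 3-2-1 rule.

```python
from dataclasses import dataclass

@dataclass
class RecoveryPolicy:
    tier: str            # illustrative tier name, e.g. "critical"
    copies: int          # total copies of the data (3-2-1 rule: at least 3)
    media_types: int     # distinct storage media (3-2-1 rule: at least 2)
    offsite_copies: int  # copies held offsite/in the cloud (3-2-1 rule: at least 1)
    rto_minutes: int     # target time to bring the workload back online

    def satisfies_321(self) -> bool:
        """Check the policy against the 3-2-1 backup rule."""
        return (self.copies >= 3
                and self.media_types >= 2
                and self.offsite_copies >= 1)

def assign_policy(workload_value: str) -> RecoveryPolicy:
    """Map a workload's business value to a recovery policy.
    Higher-value workloads get a shorter (costlier) RTO -- the RTO
    figures here are purely illustrative."""
    tiers = {
        "critical": RecoveryPolicy("critical", 3, 2, 1, rto_minutes=15),
        "standard": RecoveryPolicy("standard", 3, 2, 1, rto_minutes=240),
        "archive":  RecoveryPolicy("archive",  3, 2, 1, rto_minutes=1440),
    }
    return tiers[workload_value]

payments = assign_policy("critical")
print(payments.satisfies_321(), payments.rto_minutes)  # True 15
```

The point of the tiering is the trade-off the article describes: a payments system might justify a 15-minute RTO, while an archive can tolerate a day, at a fraction of the storage cost.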

This recovery system will only prove useful, however, when set up properly and tested thoroughly and frequently. Approximately 25% of organisations' nightly backups fail – yet few are aware of this due to a lack of recovery testing, meaning most businesses have no idea what data has been lost in the process. With this in mind, operators need to perform disaster recovery testing on their data. Without testing in a controlled, simulated environment, it is impossible for IT and security teams to fully understand their systems' integrity. Discovering that backup and recovery systems have failed after an IT outage has already taken place has no value – this needs to be established before the worst has a chance to happen.
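A recovery drill of the kind described above can be automated. The sketch below, using only the Python standard library, restores a backup into a scratch directory and verifies the restored copy against the original by checksum; the file names and the simple file-copy "restore" step are stand-ins for a real restore procedure, which would pull from offsite storage.

```python
import hashlib
import os
import tempfile

def checksum(path: str) -> str:
    """SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_and_verify(original: str, backup: str, restore_dir: str) -> bool:
    """Simulate a restore into a scratch directory, then confirm the
    restored file is byte-identical to the original."""
    restored = os.path.join(restore_dir, os.path.basename(backup))
    with open(backup, "rb") as src, open(restored, "wb") as dst:
        dst.write(src.read())  # stand-in for the real restore step
    return checksum(restored) == checksum(original)

# Minimal self-test with temporary files (illustrative names)
with tempfile.TemporaryDirectory() as tmp:
    orig = os.path.join(tmp, "ledger.db")
    bak = os.path.join(tmp, "ledger.db.bak")
    for p in (orig, bak):
        with open(p, "wb") as f:
            f.write(b"transactions...")
    restore_dir = os.path.join(tmp, "restore")
    os.makedirs(restore_dir)
    print(restore_and_verify(orig, bak, restore_dir))  # True
```

Run nightly, a check like this surfaces the silent backup failures the article warns about – a mismatched or unreadable backup fails the drill immediately, rather than on the day of an outage.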

IT outages in the financial sector are becoming more frequent. In fact, the number of such incidents reported to the Financial Conduct Authority increased by 138% in the first nine months of 2018, and is showing no signs of slowing down, making outages a question of when, not if. With a large portion of the financial sector's infrastructure relying on IT, minimising outages and limiting threats to this infrastructure should be the number one priority for systems operators.