Richard Billington, Chief Technical Officer at Netcall, explores the changes that AI has brought to the business world.

From the recommendations we receive on Amazon or Netflix to the AI-driven camera software used to improve the photos we take on our smartphones, AI forms part of the popular services we use multiple times a day. Even the map and satnav applications we use rely on AI. Company chatbots are one of the better-known uses of AI, and can now be found on nearly every company website you visit. In fact, it’s been predicted that 80% of companies will be using chatbots this year.

However, consumers today are getting ever harder to please. The growing ramifications of the ‘Amazon Effect’ mean that today’s customers expect instant gratification when liaising with companies – placing more pressure on business leaders to provide more, faster and better. Digital banks such as Monzo and Starling are continuing to build upon these expectations by enabling customers to open accounts in a matter of minutes. And that’s not all: companies are now under pressure to offer 24/7 customer service through a multitude of communications channels, including Twitter, Facebook Messenger and other social media.

Furthermore, as millions of individuals are quarantined and isolated amid the current COVID-19 outbreak, never has there been more pressure on customer service teams to facilitate rapid and seamless responses to enquiries on a broad range of issues. In a time of crisis, a customer’s interaction with an organisation can leave a lasting impression, and potentially impact future trust and loyalty – another headache for CEOs, CIOs and CTOs.


AI-enabled systems are increasingly being viewed as the perfect solution for optimising customer service, as they allow companies to provide agents that are ‘always on’ as well as hyper-tailored experiences for customers. However, some businesses are yet to harness these technologies – along with their benefits.

The barrier businesses must overcome

For many business leaders, a lack of the right skills in the right place has hampered their ability to implement AI across their company’s customer service function. According to an IBM Institute for Business Value study, 120 million workers in the world’s twelve largest economies will need to retrain as a result of AI and intelligent automation.

Other business leaders may face budgetary constraints and can find themselves put off by the significant investment often required to integrate AI systems into their existing IT infrastructure. Misunderstanding surrounding AI can also mean that some CEOs are understandably concerned that the solution they put in place may end up being not quite right for their needs. Concerns over wasted time, money and other resources therefore often result in a rejection of new technology. However, these concerns are outweighed by the repercussions of failing to unlock the true value of this technology – and the risk of falling behind in today’s fast-paced market.

Unlocking the benefits of AI

Smaller businesses often lack the IT foundation and personnel needed to stay up to date with the latest technological advancements in customer service. But it will ultimately be these investments that enable business leaders to contend with customer demands and flourish in an ever-evolving landscape. This is where low-code platforms come in: adopting low-code solutions enables resource-poor teams to quickly test specific features or workflows without the need for specialised technical skills – allowing employees to innovate and implement significant change without relying heavily on the IT department.


Low-code is helping companies overcome shortages in a range of digital skills, including AI, by removing the need for the highly trained developers who have traditionally been relied upon to bring new applications to the forefront. In fact, in a recent analyst report, Forrester predicts that savvy application design & development (AD&D) leaders will no longer try to reinvent the wheel, instead sourcing algorithms and insight from their platform vendor or its ecosystem. Implementation consultants will be able to differentiate themselves using AI-driven templates, add-ons and accelerators – particularly industry-specific ones.

With low-code software solutions, everyday business users can get automated and AI-driven solutions up and running quickly and easily. Because little complex coding is involved, the process of integrating AI is instantly simplified and accessible to a range of workers across a variety of business sectors, regardless of company size. The ability to test applications before implementation means business leaders can explore the capabilities of AI without committing significant time and effort upfront. As a result, they will be empowered to unlock a wave of new possibilities for AI development across a range of functions.

By breaking down walls between IT and other departments, low-code technology can help bring teams together to work collaboratively on applications that rapidly improve processes, harnessing the knowledge of customer-facing teams across the wider business. And as COVID-19 continues to cause ramifications for businesses across the globe, business leaders must respond with agility to keep up with increasingly complex customer demands. Speed of implementation, and the technology that can help organisations get there, is therefore essential to staying afloat and competitive. And where many workforces are currently struggling with unprecedented circumstances, the adoption of AI processes through low-code applications can help minimise workloads and free up workers – enabling them to focus on more strategic tasks within the organisation by automating some of the more mundane processes.

Finance teams are still spending too much time in ‘Excel hell’. Every hour spent grappling with spreadsheets, pivot tables and pie charts is an hour that could be spent helping make better business decisions. And yet, astonishingly, top finance functions are still devoting 75% of their time to data analysis, according to a recent PwC study. Eugene Hillery, Senior Director of International Operations at Tableau, offers Finance Monthly his thoughts on the issue and why it should be turned around.

Spreadsheet drudgery isn’t just frustrating and inefficient, it’s outdated. There is a huge range of intuitive, interactive and highly visual data software available – what some call ‘visual analytics’ – designed to help knowledge workers see and analyse the data that matters to them, faster.

Delivering insight from data should be the core competence of finance – not spreadsheet navigation. Yet research from Sage shows almost two-thirds of CFOs (64%) are still unable to make data-driven decisions to drive business change. Here are five reasons to kick off an analytics overhaul:

1. You Can Work (And Collaborate) From a Single Source of Truth

Conventional spreadsheets are capable of handling many tasks, but real-time collaboration has never been their strongest suit.

Inconsistent version control, restricted server access and unnecessary duplication are a drag on far too many finance teams. When there are multiple sources of ‘truth’, hours of time are needed to make sure conclusions are built on accurate and up-to-date data. The longer this process takes, the less value you can claim from any time-sensitive data.

With more advanced analytics products, finance teams can bring diverse data sets together from across an entire organisation, allowing everyone to work from a single source of truth. This offers a holistic view and saves time, especially when everyone – whether from AP, AR, Tax or Purchasing – can collaborate on the same data in real time.
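
As a rough illustration of the ‘single source of truth’ idea, the sketch below stacks three hypothetical departmental extracts into one consolidated table using Python and pandas. The file names and column names are assumptions made purely for illustration, not a reference to any particular analytics product.

```python
# A minimal sketch of consolidating departmental extracts into one shared table.
# The CSV files and their columns are hypothetical placeholders.
import pandas as pd

ap = pd.read_csv("accounts_payable.csv")        # e.g. invoice_id, supplier, amount
ar = pd.read_csv("accounts_receivable.csv")     # e.g. invoice_id, customer, amount
purchasing = pd.read_csv("purchasing.csv")      # e.g. invoice_id, cost_centre, amount

# Tag each extract with its source, then stack everything into a single table
frames = [df.assign(source=name)
          for name, df in [("AP", ap), ("AR", ar), ("Purchasing", purchasing)]]
single_source = pd.concat(frames, ignore_index=True)

# Everyone now works from (and drills into) the same consolidated data
print(single_source.groupby("source")["amount"].sum())
```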


2. You Can Get Insight Overnight

More than ever, the ability to connect to offices around the world is a business necessity. The power of a rolling international handover between knowledge workers using accurate, up-to-date data, is tremendous.

For example, if daily sales or staff performance data is collected at the close of a business day in London, it can be turned into insight by teams in the US literally overnight. This means recommendations for action land on desks at the start of the next day in the UK, and issues can be resolved faster.

If a coherent view of your accounts means drawing information from data sources in China and the US, for example, trying to reconcile them through different spreadsheets will only bury insight. Quick answers are critical for teams operating across different time zones, as they are for any business that needs an accurate overview of what’s going on in a hurry.

When diverse data sources are unified in a single interactive dashboard, drilling into the numbers can be done by anyone, wherever they are.

3. You Can See Both Granular Detail and the Big Picture

Managing business expenses is a never-ending task, but it’s another area where working smarter beats working harder.

Data analytics software helps uncover the kind of hard-to-spot correlations that can be invaluable in finding new ways to keep costs down. Dashboards should make it easy, for example, to see which employees are in the habit of booking flights well in advance (saving the company money) and those who rack up huge bills by making last-minute purchases.

A faster understanding of data outliers is also valuable for responding quickly to business challenges. Instead of asking ‘what’ is happening, conversations are led with ‘why’ it is happening. Data analytics makes it easier to uncover cost drivers and make predictions about cash flow. This equips finance teams to identify the source of a challenge faster than ever and help drive the solution.


4. You Can Put Your Focus on the Future

Access to an organisation’s full accounting history means the finance team is best placed to offer predictions for its future. In general, the richer and more diverse the data that underpins those forecasts, the more accurate and useful they become.

With data analytics, finance teams can use a cash flow summary dashboard to help management understand the outlook in aggregate. They can ask useful questions like “what are our balances by currency, subsidiary, country, banking partner or geography?” The ability to surface and answer these questions is fundamental to supporting other financial processes like preparing for audits and SOX compliance.
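
As a sketch of how such a question might be answered programmatically, the snippet below pivots a small, invented transactions table by subsidiary and currency using Python and pandas; the column names and figures are illustrative assumptions only.

```python
# A minimal sketch of answering "what are our balances by currency and subsidiary?"
# The data below is invented for illustration.
import pandas as pd

transactions = pd.DataFrame({
    "subsidiary": ["UK", "UK", "DE", "US", "US"],
    "currency":   ["GBP", "GBP", "EUR", "USD", "USD"],
    "balance":    [1_200_000, -350_000, 780_000, 2_100_000, -90_000],
})

# Aggregate so management can read the outlook at a glance
summary = transactions.pivot_table(
    index="subsidiary", columns="currency", values="balance", aggfunc="sum"
)
print(summary)
```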

Combining effective data analytics and artificial intelligence support allows teams to compile and comprehend far bigger data sets, and even helps present larger, more evidence-laden projections. This level of authority is what enables finance teams to play a more strategic role in the boardroom – advising CEOs, boards and investors, not to mention staff or customers. In fact, eight in 10 CFOs in the UK (78%) say their role has changed recently and they are focusing more time and effort on business-wide operational transformation, according to Accenture.


The best visual analytics software makes comparisons between external data sources, like economic trends, and internal sources, like operational numbers or sales figures. This in turn empowers finance teams to be more efficient and intuitive, making better recommendations with longer-lasting impact.

5. Investing in Your Money People

The pace and scale of digital transformation is something finance teams understand better than most. After all, they are the ones processing payments for every major IT investment a company makes.

It is all the more frustrating, then, to see finance teams so often overlooked for technology investments which could in fact create efficiencies that drive business forward.

Of all the business areas that stand to benefit from the ongoing revolution in data analysis, finance departments have the most to gain. Gartner research shows that the number of finance departments deploying advanced analytics will double within the next three years. Visual and AI-empowered analytics can unlock the insight and creativity currently trapped in finance teams across the UK – but only if they can look up from their spreadsheets and see them.

Cloud computing is one of the most transformative digital technologies across all industries. Cloud services benefit businesses in so many ways, from the flexibility to scale server environments against demand in real-time, to disaster recovery, automatic updates, reduced cost, increased collaboration, global access, and even improved data security. Numerous financial institutions around the world are already reaping the benefits of cloud infrastructure to fit their technology needs today and help them scale up or down in the future as economies evolve. According to research by the Culture of Innovation Index, 92 per cent of corporate banks are already utilising cloud or planning to make further investments in the technology in the next year.

The Bank of England is the latest financial institution to announce it has opened bidding for a cloud partner to support its migration to the cloud. Craig Tavares, Head of Cloud at Aptum, explains the significance of the Bank's decision to Finance Monthly.

As the UK’s central bank seeks to move to a public cloud platform, IT decision makers are likely to encounter hurdles along the way. Figuring out the right partner will be half the battle for the Bank of England; it can be very difficult to identify and map out the broader migration and ongoing cloud infrastructure strategy.

The central bank’s cloud computing approach reflects an evolution in the way financial organisations are viewing data and the applications creating this data. The industry-wide shift towards viewing data as an infrastructural asset could have precipitated the Bank of England’s own move to the cloud. As such, the organisation should consider four areas to determine its cloud strategy and partner: performance, security, scalability and resiliency.


Performance

Traditionally, financial institutions are known for their risk aversion and have been hesitant to undertake digital transformation due to their reliance on legacy systems. Fraedom recently found that 46 per cent of bankers see this challenge as the biggest barrier to the growth of commercial banks. But due to issues surrounding compliance, moving completely away from legacy systems isn’t always an option. This is no different for the Bank of England which is looking to move to a public cloud platform in order to enhance the overall performance of customer payment systems in the new digital age.

Legacy IT systems can prove to be a challenge for financial organisations looking to move applications to the cloud. Outdated processes often lead to system failures, leaving customers unable to access services and ultimately driving them away. With public cloud, it is therefore crucial to find the right combination of cloud services by defining the proper metrics for application performance and for the storage of critical data.

Legacy IT systems will need to co-exist with new or refactored cloud-based applications. Because of this, the bank will need to consider different strategies using hybrid cloud and multi-cloud architectures to align performance and cost. And when it comes to time-to-revenue or time-to-value, the bank will be looking at traditional IT methodologies while leveraging cloud native approaches. The cloud native approach will lead to adopting DevOps as a new culture and Continuous Integration and Continuous Delivery or Deployment (CI/CD) as a process. These practices automate the handover between software development and operational teams, allowing the bank to deliver new features to customers more quickly and efficiently.
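
To illustrate the principle rather than the bank’s actual pipeline, the sketch below shows the gating logic at the heart of CI/CD: a change is only promoted when automated checks pass. The commands and script names (‘pytest’, ‘./deploy.sh’) are placeholders; in practice this logic would live in a CI service’s pipeline configuration.

```python
# A conceptual sketch of a CI/CD gate: test first, promote only on success.
# The commands invoked here are placeholders, not a real deployment setup.
import subprocess
import sys

def run_stage(name: str, command: list[str]) -> bool:
    """Run one pipeline stage and report whether it succeeded."""
    print(f"Running stage: {name}")
    return subprocess.run(command).returncode == 0

if __name__ == "__main__":
    # Continuous integration: every change is built and tested automatically
    if not run_stage("unit tests", ["pytest", "-q"]):
        sys.exit("Tests failed - the change is not promoted")

    # Continuous delivery/deployment: only verified changes are shipped
    if not run_stage("deploy to staging", ["./deploy.sh", "staging"]):
        sys.exit("Deployment failed")

    print("Change delivered to staging")
```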

Depending on the hybrid IT architecture being used and whether the approach is traditional IT or cloud native, there will be different ways to ensure the best application and data lake or data warehouse performance. In order to do this, the bank will need to partner with a technology expert who will be able to offer guidance on the different levels of technology stacks required during the cloud migration.


Security

Central banks have traditionally kept close control of their IT systems and long expressed concern over the security of their customers’ information and financial transactions. As such, migrating to a public cloud platform and handing over to a cloud partner could heighten these worries. Global banks are expected to adhere to strict regulations to reduce the number of security issues within the financial sector and all new technology implementations must be compliant.

As complex regulatory requirements – such as the Markets in Financial Instruments Directive (MiFID) and Anti-Money Laundering (AML) rules – continue to pose a barrier to cloud adoption in the financial sector, the Bank of England should consider a partner that is able to adapt to high regulatory demands. As such, a three-way partnership should form between the Bank of England, cloud consultants and cloud service providers. This particularly applies if the UK central bank were to take a multi-cloud approach – leveraging Amazon, Azure or both. This way, all three can be aligned and acknowledge the journey the bank has taken so far as well as the future of the financial organisation from a regulatory standpoint.

Adopting a partnership approach decreases the risk of security breaches, which often cause client relationships to disintegrate. In the past, security was treated like a vendor-customer relationship rather than an important partnership from day one. Data is a major focal point in this discussion – how the bank protects customer data and how it manages financial data. Cooperation between partners ensures that the configuration of every cloud service being used has the right security measures integrated into it from the start, observing compliance requirements such as GDPR, data sovereignty and data loss prevention.


Scalability and Resiliency

With a growing abundance of data, the Bank of England will need a cloud platform that will allow it to scale up or down accordingly. Fuelling the growth of the bank’s data are its applications, which need special scaling and resiliency considerations just like the data itself.

Keep in mind, cloud is not an all-or-nothing discussion. Not every application the Bank of England runs needs to go to the hyperscale public cloud. For example, it may start with a progression to private cloud and then to a vendor-agnostic public cloud framework based on its scaling and resiliency needs. The financial institution should understand which applications are best suited for the cloud now and which will be migrated at a future point, ensuring that cloud is an enabler and not a detractor. It’s important to understand that the cloud journey is an ever-changing process of evaluating business goals and operational efficiencies, and adopting the right technologies to meet these outcomes at the right point in time based on ROI.

The UK central bank should consider moving to a container-based environment and cloud platform services (but as mentioned, in a hybrid cloud architecture), technologies that will enable an efficient process of building and releasing complex applications with the right scale in/out and uptime capabilities. The bank may incorporate Site Reliability Engineering (SRE). SRE is a discipline that leverages aspects of software engineering and applies them to infrastructure and operations challenges. The key goals of SRE are to create scalable and highly reliable software systems.
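
One concrete SRE practice worth noting here is the error budget: translating an availability target (a service level objective, or SLO) into an allowance of downtime. The Python sketch below shows the arithmetic for a few illustrative targets; the figures are examples, not the bank’s actual objectives.

```python
# A minimal sketch of turning an availability SLO into a monthly error budget.
# The SLO values below are illustrative examples only.
MINUTES_PER_MONTH = 30 * 24 * 60  # roughly 43,200 minutes

def error_budget_minutes(slo: float, minutes: int = MINUTES_PER_MONTH) -> float:
    """Minutes of downtime permitted per month at a given availability target."""
    return minutes * (1 - slo)

for slo in (0.999, 0.9995, 0.9999):
    print(f"{slo:.2%} availability -> {error_budget_minutes(slo):.1f} minutes of budget per month")
```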


The Bank of England has come to recognise the significant impact cloud can have on the business and the benefits cloud technology will bring to its customers. Banks will set the bar for other organisations and industries when it comes to moving to the cloud. However, when it comes to choosing the right collaborator, the Bank of England should seek a cloud partner who is able to meet its business objectives and understands both traditional IT and cloud native approaches, along with hybrid multi-cloud and the data challenge, which includes performance, security, scalability and resiliency. Working with the right Managed Service Provider (MSP) partner can provide the necessary expertise and develop solutions that bridge the gap from where the bank is today to where it wants to go.

Below, Danny Phillips, Zscaler's Senior Manager of Systems Engineers, discusses the importance of cloud technology and its implications for older entities in finance.

In the financial sector, bigger has always meant better. In fact, if a financial institution is suitably large, it is deemed so vital that, as the popular term suggests, it is “too big to fail”. This situation has provided two key benefits for large players in the financial sector: the potential cushioning from government when things go wrong, and a tacit understanding that they’re essentially untouchable when smaller competitors enter the marketplace.

This all held true for some time, and newcomers in the space have generally carved themselves a niche, remained relatively small, and never really threatened the incumbents. As such, it can be argued that complacency has permeated the halls and boardrooms of some of the biggest players in the finance sector.

There are, however, signs of change across the sector. Whilst the financial institutions of old aren’t likely to shutter their doors anytime soon, recent technological innovation is shifting the balance of power considerably. It’s not the only technology playing its part in this equation, but cloud IT is the driving force behind disruptive industry newcomers, providing the agility they need to launch products and services faster, push forward better customer experiences and target new markets more precisely.

Lowering the Barriers to the Finance Sector

There have always been barriers to entry to the financial services sector. Chief amongst them has been the tight regulatory landscape, which ensures that smaller players can’t operate without first having the required licences (PSD2, PCI etc.). Although regulation is a necessity, a potential drawback is that it’s yet another barrier for smaller outfits to traverse.

There are efforts, however, to clear a path for smaller businesses. The success of Open Banking has allowed third parties to develop services around financial institutions, with plans to extend this beyond the retail banking sector and encompass products in the general insurance, cash savings and mortgage markets, under a new model called 'open finance'.

This has opened the door to smaller fintech companies to create a host of consumer-focused products, including money saving and credit-building apps. This has been significant because, when considered in conjunction with cloud, it’s arguably putting larger financial institutions, particularly banks, at a competitive disadvantage.


Cloud-Enabled Agility

Cloud is enabling new businesses to be set up without the burden of the physical infrastructure so intertwined with the older, larger industry players. For new entrants, any part of the business can be picked off the shelf: Salesforce for your CRM system or Workday for your HR, all paid for monthly or quarterly. Computing functionality can be purchased as-a-service and can even be paid for in increments as small as a CPU cycle. All that’s needed to enter a market is a great idea, a laptop, an internet connection and a credit card.

Bringing the conversation back to banking in particular, we’re already seeing the benefits disruptive new players are reaping. The likes of Monzo, Revolut and Starling Bank are releasing new features and functions nearly every month (often based on customer feedback), enriching the customer experience and generating positive word of mouth. They’re able to do so because of their lean and agile structures, unburdened by a reliance on physical hardware, paperwork or branches. Currently, high-street banks are struggling to compete at this level.

Why the Bigger Financial Institutions Aren’t Embracing Cloud

Imitation is the sincerest form of flattery, so why aren’t larger financial institutions simply doing the same as these smaller competitors to ward off the competition before it gets too big? A 2019 research paper from 451 Research revealed that financial services companies are behind other businesses in deploying cloud as a central part of IT operations. Around 70 per cent said their cloud projects were only at the initial, or trial and testing, stage.

There are a number of reasons why this might be the case, namely legacy IT debt, unfinished upgrades and compliance. Obviously, like any business, financial institutions are under pressure to use what they have already paid for before moving on to the next generation of infrastructure. Established businesses have to plan migrations, with business as usual taking priority.

In my experience, what we see instead is a one foot in, one foot out hybridised attempt at cloud adoption from larger financial institutions. Newer applications will be running in the cloud, but the majority will remain housed within the main datacentre. The end result is virtual machines running in the cloud with permanent connections between the corporate network and the cloud provider. So, when someone wants to access this application remotely, they have to dial back into the office on a VPN to get to the cloud instead of connecting to the cloud directly.

This dilutes the benefits of the cloud, and isn’t really a true step forward.


Thinking to the Future

If plans to open the financial services sector up to new businesses continue along this trajectory, we’re going to be seeing a far more varied industry landscape than we’re seeing today. If larger institutions are still playing catch-up on cloud, by the time they make it there the conversation will have moved on.

Financial services institutions therefore have a doubly difficult route ahead of them. As well as accepting the sunk costs of now outdated infrastructure and upgrading to a cloud-first mindset, they also have to keep an eye on the future. For my money, what they need to be thinking about is how to leverage 5G for competitive advantage, and how to do so in step with their smaller competitors.

As the use of 5G becomes more widespread in the 2020s, local area networks (LANs) are set to disappear. We currently look to Wi-Fi to access the internet, but when every PC or mobile phone is equipped with ultrafast 5G, the use of Wi-Fi becomes an outdated notion. The traffic from 5G devices will connect the right people to the right applications—through a digital services exchange—and this will deliver faster, more secure, and more reliable access to apps and services.

The promised low latency, high data capacity and reliability of 5G networks has a host of applications in financial services, and creates a new platform for the delivery of services on mobile. For banking, reliable video conferencing sessions with mortgage brokers, or financial advisors, without having to travel to their nearest branch could be commonplace, rather than a seldom used novelty. Real-time data streams from customers, perhaps in conjunction with a machine learning platform, could aggregate a customer’s behavioural data in real time, enabling contextual financial recommendations.


5G will not solely be a benefit to a bank’s customers, though. Its impact will be felt so broadly that banks also need to think about how their own employees will utilise 5G. It’s highly likely that, as 5G becomes the norm, expectations for quick and hassle-free access to applications will climb. When applications are hosted in the cloud, fast internet access is more important than ever. If a user’s device has faster internet access than the corporate network, those users are likely to continue using their superior mobile access, as opposed to accessing the internet via the corporate network. It’s a matter of human nature to take the path of least resistance.

However, security may not be top of mind for these users as they access work applications while away from the corporate headquarters. Protecting an on-premises network infrastructure will become less relevant and financial organisations will have to adapt to secure the “edge” once more: in this case, the individual user on their mobile device. Banks and other financial institutions will have to be able to respond to the effects of evolving user behaviour introduced by 5G.

In many ways, financial institutions are facing an uphill struggle to make back the ground they’ve already lost by dragging their feet on cloud adoption, both in terms of fulfilling the needs and expectations of their customers as well as their staff.

Ultimately, for end consumers, these newer, faster and more convenient financial services are inevitably coming our way. Whether they’re brought to us by one of the big four, by a challenger like Monzo or Starling, or even by Google or Apple, is still to be decided.

Here Andy Barratt, UK managing director at international cybersecurity specialist Coalfire, explores how the financial services sector can turn the tide on costly, high-profile cyber missteps.

It’s fair to say that the financial services sector has struggled to secure positive consumer sentiment for itself recently – particularly in relation to cybersecurity. At the end of October, the government’s Treasury Select Committee (TSC) went so far as to say that the number of IT failures at banks and other financial services firms has reached a level it deems “unacceptable”.

The criticism, which highlighted poor IT performance within financial firms and a lack of decisive action from their regulators, comes in the wake of a string of high-profile and costly cyber glitches in recent years. Most notable among those is TSB’s unsuccessful attempt to migrate its systems over to new parent company Banco Sabadell.

The migration left customer details easily accessible and vulnerable to fraud attacks, as well as leaving thousands of customers unable to access their accounts. But TSB are not the only culprits: Barclays, RBS and VISA are among a raft of other major financial service providers to have suffered serious technical glitches in the past few years.

Why then, with so much at stake, are financial firms lagging behind when it comes to their cyber strategy?

Complex legacy tech infrastructure

The first aspect that makes large firms so susceptible to attacks is that their IT systems are often complex and, significantly, outdated. Hackers can easily find weak spots in the system or, as in TSB’s case, vital information can slip through the cracks.


Our inaugural Penetration Risk Report, published around the time of TSB’s issues, found that the largest firms are less likely to be prepared to face up to cybercrime than their mid-sized equivalents – despite greater budgets and resources – due to their cumbersome and slow-moving infrastructure.

More recently, we’ve seen those larger businesses close the gap, mostly through the support of in-built cloud security services, but the risks still remain for many. In the financial services sector specifically, this year’s study indicated that the level of external threat has actually increased.

The rush to implement services under a new ‘Digital’ initiative sometimes comes at the cost of addressing the underlying legacy issues too. Whilst the big banks rush to keep up with the online-only challenger banks, they re-allocate budget to the new apps and forget the underlying infrastructure those apps depend on.

‘Yes’ culture

One of the key risks boosting that threat is a habit within large corporate cultures of IT teams or risk managers consistently ‘downgrading’ risks, through lack of understanding or complacency, when reporting to those further up the pecking order. This is dangerous and can lead senior figures to the conclusion that everything is ‘ok’ within their organisation when, in reality, an IT crisis is just around the corner. This is particularly true when organised crime groups are targeting financial services with highly sophisticated attacks that are often discounted by management with a throwaway ‘nobody would do that’ comment.

Companies should attempt to foster a ‘safe’ environment where staff feel comfortable raising problems they encounter so that solutions can be found before disaster strikes. They should also remain current with intelligence from their incident response and forensic partners, who will see the sophisticated threats when they do cause a breach.

An enhanced understanding of the issues facing the business is less likely to leave senior spokespeople up a creek without a paddle when facing the media. No one would expect a CEO to know all the ins-and-outs of their IT infrastructure, but basic comprehension can go a long way. Knowledge is power.


Weak links in the chain

Due to the nature of the industry and the services they provide, banks and large financial firms are required to interact with third parties on a massive scale. Unfortunately, this isn’t without its drawbacks.

Many third parties – and, by extension, their own supply chains – lack the sophistication and/or the wherewithal to deal with cyberattacks. As such, they are often the first port of call for a hacker looking to worm their way into a major system.

One example is the British Airways data breach in the summer of 2018, when hackers were able to take information directly from the airline’s website thanks to access gained via a third party.

Often, being subject to this form of intrusion is pure bad luck rather than bad planning. However, large firms must ensure that they’re sufficiently protected and that access for third parties is limited. It’s a simple case of making sure that your back’s covered wherever possible.

Human error

Perhaps the most common error (and the most tangibly addressable) is the human risk inherent within any business. Naturally, the larger your workforce, the greater the risk you face, which is a major issue within the financial services sector.

Phishing, a scam that prompts staff to provide their username and password, is still one of the simplest but most successful ways potential attackers get their foot in the door.

The key to combatting the danger is providing constant training to employees so that they’re fully aware of the threat and the responsibility that they have towards protecting the business.

What’s more, the high-profile cases mentioned above are dangers in themselves: when a glitch or failure makes the news, a signpost is placed for hackers looking to break in. Each headline is an ‘x-marks-the-spot’ for a company’s weak spot, as well as its competitors’.

It’s a brutal world that financial services businesses face as technology advances but, with such large amounts of money at stake, they must be up to the challenge.

Which? recently published a study [1] stating that the UK banking sector was hit by IT outages on a daily basis in the last nine months of 2018, with 302 reported failures. The major banks each suffered at least one incident every two weeks. This is a highly concerning statistic that exposes the fact that bank outages and IT issues occur much more often than was previously thought [2]. And the impact can cause significant setbacks, financial and otherwise. In one recent example [3], a major bank suffered an outage with costs amounting to over £330 million.

Nick Coleman, Channel Director EMEA of Virtual Instruments, explains why this is happening and what the issues surrounding this trend are.

Regulatory pressure

Firstly, banks are now obliged to report any IT issues to the Financial Conduct Authority (FCA) – and, as in the case of Which?, this data can be used to compile reports – so IT problems are much more visible. There is greater recognition than ever before of how much serious disruption IT outages at financial services institutions can cause for people and businesses. The FCA now regards IT system performance as more important than staff performance: if a member of staff is signed off work, it is considered normal business, but if an IT system fails to deliver, it is viewed as a violation. Under regulations enforced by the FCA in August 2018, banks and financial services firms have to report on how they recover from outages within three months and have been mandated with a maximum acceptable time for systems to be down. They are so reliant on IT systems that it is critical they take the necessary steps to ensure the business can get back up and running as soon as possible after an outage.

Infrastructure complexity

Secondly, knowing the true root cause of problems before taking any action is key, but a lack of proper infrastructure visibility is preventing banks from effectively managing the situation. With the inherent complexity of today’s hybrid infrastructure brought about by new procurements layered over legacy systems that are not necessarily cohesive, interoperability issues often ensue. The knock-on effects of systems fighting for resources during busy periods can cause latency issues, in turn seriously affecting the performance of business-critical applications. Here, it is not a matter of if, but when.


With digital transformation, IT systems are now beyond human comprehension and require automation and AI-powered IT operations management (AIOps [4]), also known as ‘algorithmic IT operations’, to run efficiently. Unfortunately, IT doesn’t have the investment or influence at board level that it needs to put the proper performance safeguards and assurances into place. The business insists that its customer-facing applications run as planned, but doesn’t really care who runs the IT infrastructure behind them. It sees the infrastructure as an overhead rather than a vital, profit-generating differentiator that gives a competitive advantage.

Lack of performance benchmarks in the cloud

Thirdly, the banking sector has been advised to embrace the cloud and is struggling to migrate applications, often written in the 1980s and 1990s, to a new platform. Cloud suppliers are reluctant to provide a service level agreement (SLA) on application performance, as they do not know the quality of the application code they will be hosting, so there is effectively no one fully accountable if problems occur. This means that, at present, a bank can have its customer-facing applications slow down for an hour and, as the cloud provider is not accountable, it is not in breach of contract.
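
For illustration, measuring such a slowdown against a performance SLA is straightforward once response times are recorded. The Python sketch below uses invented latency samples and an assumed target of ‘99% of requests within 500 ms’ to show the kind of check a performance-based SLA would rest on.

```python
# A minimal sketch of checking recorded response times against a performance SLA.
# The latency samples, threshold and target are invented for illustration.
latencies_ms = [120, 140, 95, 3200, 150, 2800, 110, 130, 4100, 125]
threshold_ms = 500
target = 0.99   # assumed SLA: 99% of requests complete within 500 ms

within = sum(1 for latency in latencies_ms if latency <= threshold_ms) / len(latencies_ms)
print(f"{within:.0%} of requests met the {threshold_ms} ms threshold")
print("SLA breached" if within < target else "SLA met")
```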

For example, performance issues can impact banking applications for customer transactions, and if that capability goes down, not only will it be difficult for the IT team to locate the issue and get systems back up and running quickly, there are also reputational implications for the business to deal with following such an incident.

How can this imbalance be addressed?

Bringing insights and the value of IT back to the business

Over the years, people’s perception of the value of the IT infrastructure has eroded in the eyes of the business. This type of thinking is not unique to IT. For example, years ago people used to care about the engine in their cars, but now they just expect it to work and are really irritated if it fails. The same goes for domestic automation: washing machines, dishwashers, stereo systems and so on. These are now just viewed as commodity items to be used and replaced with no emotion. The value of IT in general needs to be recognised, and the way to do this is to report on exactly how it is helping the business, in language the business understands.

So how can IT be of more value to the business? Organisations must recognise that a shift in the perception of the importance of the application (which relates to the customer) is needed. As the organisation cares about application performance, and as IT supports the applications, IT should logically also show how well they are running, how cost-efficient they are compared to other suppliers, and how in-house IT has a better understanding of the company direction than any outsourced partner.


Outmoded infrastructure monitoring methods

In terms of tackling the issue, traditional monitoring capabilities are falling short. The tools are commonly proprietary and simply not able to keep pace with digital transformation occurring today. The core of this recurring outage problem in the financial services industry is that IT teams are simply unable to holistically ‘see’ or create a map of their entire systems environment. Greater infrastructure transparency is required.

Currently, the applications themselves can be monitored using application performance monitoring (APM) tools – but these only show the application performance outside the data centre, with perhaps a bit of hypervisor information. It is a similar story with the switch providers and network monitors, as they really only look at their own devices and lack context on other devices and on the applications themselves. The entire hybrid IT infrastructure supporting the application processing needs to be viewed live across the hypervisor, VM, server, network fabric and storage together.

The AIOps solution

AIOps-driven app-centric infrastructure management will be a significant part of the solution. Artificial intelligence applied to IT operations (AIOps) utilises AI and ML (machine learning) to help ensure application and infrastructure performance.

With this holistic approach, AI-based analytics are app-centric, with correlation capabilities that provide highly insightful and integrated views across siloes and end-to-end across the entire infrastructure. In this way, a shared context can be seen across all infrastructure management tools, so that the trends and behaviour of resources can be easily read and understood. With a visual representation of the current infrastructure, IT teams can be certain as to all of the dependencies and exactly which applications are utilising or competing for different infrastructure resources. Thus, potential problems can be avoided in advance, making a meaningful shift from reactive to proactive troubleshooting, saving millions in time, money, loss of business revenue and customer loyalty.
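
As a simplified illustration of the underlying idea, the Python sketch below flags metrics whose latest reading deviates sharply from their own baseline, so that anomalies appearing at the same time in different silos can be correlated. Real AIOps platforms use far richer machine-learning models; the metric names and values here are invented.

```python
# A toy version of cross-silo anomaly detection: flag metrics whose latest
# reading sits far outside their own recent baseline. All data is invented.
import statistics

metrics = {
    "storage_latency_ms": [4, 5, 4, 6, 5, 5, 4, 38],        # spikes in the last interval
    "app_response_ms":    [210, 220, 205, 215, 200, 225, 210, 980],
    "cpu_utilisation_pc": [55, 60, 58, 57, 61, 59, 60, 62],  # stays normal
}

def is_anomalous(series: list[float], z_threshold: float = 3.0) -> bool:
    baseline, latest = series[:-1], series[-1]
    mean = statistics.mean(baseline)
    spread = statistics.pstdev(baseline) or 1.0   # avoid division by zero
    return abs(latest - mean) / spread > z_threshold

anomalies = [name for name, series in metrics.items() if is_anomalous(series)]
print("Correlated anomalies this interval:", anomalies)
```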

And this is not solely for on-premises; the cloud-based outsourced applications also need this level of scrutiny to ensure performance-based SLAs can be set and then met.

With the ability to assure the performance of their mission-critical applications, banks and financial services organisations place themselves in a position to successfully manage their digital transformation journey, whilst ensuring they meet business goals and most importantly, keep their customers happy.

 

[1] https://www.which.co.uk/news/2019/03/revealed-uk-banks-hit-by-major-it-glitches-every-day/

[2] https://www.parliament.uk/business/committees/committees-a-z/commons-select/treasury-committee/

[3] https://www.theguardian.com/business/2019/feb/01/tsb-computer-meltdown-bill-rises-to-330m

[4] “Artificial intelligence for IT operations (AIOps) platforms are software systems that combine big data and AI or machine learning functionality to enhance and partially replace a broad range of IT operations processes and tasks, including availability and performance monitoring, event correlation and analysis, IT service management, and automation.” – Gartner, Market Guide for AIOps Platforms (published August 2017)

Catastrophic impact

The cost and impact of IT system downtime has never been greater due to businesses’ increasing dependence on IT systems and infrastructure across all areas of their operations. Any system outage can have a catastrophic impact on an organisation in terms of costs, lost trade and reputation. Gartner estimates the average cost of network downtime at a staggering $5,600 per minute. This figure is even more startling when you consider that British businesses are reported to suffer at least three days of downtime per year.
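
A quick back-of-the-envelope calculation shows how those two figures combine; the Python snippet below simply multiplies them out.

```python
# Roughly what three days of downtime costs at Gartner's $5,600-per-minute estimate.
cost_per_minute = 5_600       # USD, Gartner estimate cited above
downtime_days = 3             # reported annual downtime for British businesses

downtime_minutes = downtime_days * 24 * 60
annual_cost = downtime_minutes * cost_per_minute
print(f"{downtime_minutes:,} minutes of downtime -> ${annual_cost:,} per year")
```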

Downtime is a major problem for any industry but is particularly damaging for the finance sector, where consumer trust is paramount. Many high street banks have made the headlines in the last year after suffering system outages that exposed customers’ data and affected their access to accounts. Beyond these high-profile cases, the true scale of the problem is evidenced by a report from Which? Money revealing that between 1 April and 31 December 2018 there were 302 reports of IT systems failure affecting customer transactions – equivalent to an incident each day. Which? Money said that six of the major banks had suffered at least one incident apiece every two weeks.


Increased pressure

Banks are, therefore, under increasing pressure from politicians and regulators to improve their response to IT problems. In November last year, the Financial Conduct Authority said it was “deeply concerned” after finding that technology outages had more than doubled over the preceding 12 months, while the Treasury Select Committee launched an inquiry into the issue. The Bank of England has also threatened banks with higher capital charges if they do not do enough to deal with technical problems.

There is a common misconception that IT outages are an unavoidable part of business operations; however, a large percentage of all downtime is not related to a failure in the technology itself but to how that technology is being used, configured and administered. The failure is usually down to a combination of a lack of training and planning.

So how can financial organisations minimise the risk of IT failure causing them to become the next unwanted headline?

Prevention is better than a cure

The best way to avoid losing revenue, reputation and customers is to prevent outages, especially the type of routine failures that can’t be blamed on a major disaster. Adopting best practice processes – such as running regular threat and vulnerability assessments, conducting configuration reviews and including operation process validation checkpoints – can significantly reduce your chances of suffering from a systems failure.


Testing is crucial

Testing of different systems requires time and resources that can sometimes be difficult to justify. However, it’s important to remember that thorough, targeted, real-life testing can reveal incompatibilities, glitches and capacity issues unforeseen at the planning stage. It was reported that one of the key causes of the Lloyds Banking Group outage, which left customers unable to access their online banking services, was that various systems were not as thoroughly tested as they should have been when accounts were migrated to the Group’s new core banking platform.

Regular training for employees required

According to a report by the Ponemon Institute, human error is the second most common cause of system failure, accounting for 22% of all incidents. Employees must be regularly trained on how to avoid an outage as well as how to mitigate the damage and impact should one occur. Within financial organisations, staff will be using a myriad of complex systems and technologies, and it’s important to remember these technologies are only ever as good as the people using them. Clear, precise and regular usage guidance is imperative to minimise the chances of human error.

Remain vigilant

Vigilance should be an essential part of any financial organisation’s IT strategy. Organisations should be working with an IT managed service provider to ensure that they are always following up-to-date best practice guidelines and proactively questioning their IT set-up and the associated risks.

Have a well-rehearsed recovery plan in place

Although an IT outage is sometimes unavoidable, prolonged downtime does not have to be. Having a well-rehearsed business continuity plan in place can help to mitigate the impact of any system failures.

Any business continuity plan needs an executive owner or sponsor who has the experience and authority to get things done in a timely and orderly manner. All action plans should be regularly reviewed at board level and shared with all stakeholders across the organisation, so that all risks and organisational implications are planned for and implementation is not hampered by budget or knowledge constraints.

About Park Place Technologies

Since 1991, Park Place Technologies has provided an alternative to post-warranty storage, server and networking hardware maintenance for IT data centres. As the world’s largest pure-play post-warranty data centre maintenance organisation, Park Place supports tens of thousands of client organisations around the globe. Headquartered in Cleveland, Ohio, Park Place maintains offices across the globe, including in San Diego, Denver, Boston, Toronto, London, Wiesbaden and Singapore.

Website: https://www.parkplacetechnologies.com/

On one hand there are the established, incumbent banks, including the UK’s four financial heavyweights – Lloyds, Barclays, HSBC, and RBS. On the other hand, there are the younger, more agile challenger banks: Monzo, Starling, Revolut and others. Needless to say, competition is fierce. Below Barney Taylor, Europe MD at Ensono, digs deeper.

Challengers have arrived quickly on the scene, specialising in areas not well-served by bigger banks at the time. Boasting speed, convenience, and excellent levels of customer satisfaction, challengers have seen particular success in the mobile banking market, with data from Fintech company Crealogix showing that 14% of UK bank customers now use at least one mobile-only challenger app.

How the incumbents are challenging the challengers

IT has been the linchpin of the challenger bank success story. Customers increasingly expect a seamless and ‘always on’ relationship with their banks, and challengers, built almost exclusively on digital foundations, have been able to deliver. Unsurprisingly, it is these digital foundations which traditional banks need to improve if they are to keep up with the shifting market.

Retail banks are generally attempting this by putting greater investment and development into mobile and online banking capabilities. HSBC, for example, recently launched its Connected Money app, allowing customers to easily access their account information from multiple providers within one central hub. RBS is even set to release its own digital lender called Bo in the near future.

This is a strategy that’s likely to pay off for many. However, retail banks have a larger asset right under their noses that’s typically overlooked and underestimated. It’s an asset that banks have been sitting on for decades: mainframe computers.

Mainframe: the trick up retail banks’ sleeves

Mainframe has been around since the late 1950s, when systems only had rudimentary interactive interfaces and used punch cards and paper to transfer data. Usage in the financial sector rapidly picked up in the 1960s, with Barclays among the first banks in the UK to adopt it, initially for account and card processing. In a world in which, arguably, the only constant is change, 50 years on the mainframe has adapted and thrived to become the most powerful computing platform on the market, handling over 30 billion transactions per day (even more than Google).

In fact, IDC reports spending on mainframes reached $3.57 billion in 2017, with expectations that the market will still command $2.8 billion in spending annually by 2022. In particular, financial sector businesses have been noteworthy champions of the technology, with 92 of the world’s top 100 banks relying on mainframes today. And for good reason.

Firstly, mainframes, if properly modernised and maintained, provide the same fast and reliable banking experiences that have made challenger banks so successful.

Unlike server farms, mainframes can process thousands of transactions per second, and can support thousands of users and application programs concurrently accessing numerous resources. Today’s mainframes process a colossal 90% of the world’s credit card payments, with credit card giant Visa running 145,000 transactions every second on its mainframe infrastructure.

In the financial industry, where trust is everything, mainframe technology also reigns supreme with its air-tight data security. Mainframes have always been considered a secure form of storage, but new models of mainframe have gone one step further, introducing something called ‘pervasive encryption’. This allows users to encrypt data at the database, data set or disk level. If they so choose, users can encrypt all of their data.
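
As a conceptual sketch only, the snippet below shows the general principle of encrypting data at rest, using Python’s third-party ‘cryptography’ package. It is not the mainframe’s pervasive encryption feature, which operates transparently at the data set or disk level without application changes.

```python
# A conceptual sketch of encrypting data at rest; not mainframe pervasive
# encryption, just the general idea that stored records are ciphertext.
from cryptography.fernet import Fernet  # requires: pip install cryptography

key = Fernet.generate_key()        # in practice, keys live in a managed key store
cipher = Fernet(key)

record = b"account=12345678;balance=1042.17"
stored = cipher.encrypt(record)    # what lands on disk is unreadable without the key
recovered = cipher.decrypt(stored)

assert recovered == record
print("ciphertext prefix:", stored[:24])
```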

While challenger banks have benefited from an inherently component-based technology infrastructure – which makes them agile, flexible, and fundamentally able to connect to mobile apps and other external ecosystems – new open source frameworks mean that the mainframe can achieve much the same, and can easily interact with cloud, mobile apps, and Internet of Things (IoT) devices.

Final thoughts

Challenger banks have benefited from simple, cloud-first infrastructures that provide speed and convenience, which has won them millions of customers as a result. However, traditional banks shouldn’t fall into the trap of simply mimicking the industry newcomers. Cloud has a lot to offer, but traditional banks shouldn’t disregard the mainframe computing power that they have at their disposal.

A modernised mainframe is a cost-effective workhorse and, far from dying out, it allows incumbent banks to compete toe to toe in the areas that have thus far made challenger banks so successful. Modernisation allows workloads to be centralised and streamlined, enabling even more agility.

The mainframe has a long history, but for enterprise, and for retail banks most of all, it’s still a technology of the here and now.

However, previous deals show that the process has hardly been plain sailing – 40% of those surveyed by Deloitte claim half of their deals over the past two years have failed to generate expected value or ROI. From eBay and Skype to Microsoft and Nokia, the past 20 years have been littered with multi-billion-dollar mistakes. Here, Mike Walton, CEO and Founder of Opsview, delves into the important role that IT operations have when it comes to planning your M&A strategy.

IT Operations fuels business success

An increasing number of business execs cite ‘gaps in integration execution’ as the reason behind M&A failures. The process of combining two businesses – their operations, staff and culture – is extremely difficult in both principle and practice and, whilst it is certainly not a silver bullet, involving IT as early as possible in M&A proceedings would certainly make integration a smoother process. At the end of the day, IT sits at the very centre of any organisation, supporting all aspects of day-to-day operations and the innovation-driven services fuelled by digital transformation projects. That makes centralised IT operations management and monitoring critical to any M&A process, and involvement needs to start at the discussion stage so that experts can provide visibility into core systems to support successful M&A planning and integration.

According to Ernst & Young, the role of IT fundamentally underpins the strategic objectives of M&A activity, whether that’s increasing market share, entering new markets, gaining new customers or consolidating product ranges. Acquiring firms therefore need clear visibility into their own and the target firm’s IT assets and initiatives, to drive fast, effective integration at this level and to reduce time-to-innovation post-acquisition. Without that visibility, they risk eroding value and producing inaccurate timelines and cost estimates.

Yet, given the importance of IT visibility to M&A success, it’s disappointing that just half of the respondents to the European E&Y report said they typically involve IT in the transaction process, compared to almost 80% for finance. Even fewer – 38% of corporate execs and 22% of PE execs – said they put ‘significant emphasis on IT’ in M&A. It’s perhaps no surprise that almost half (47%) said that, in hindsight, more rigorous IT due diligence could have prevented value erosion.

Centralised monitoring minimises downtime

Currently, many financial organisations lack the centralised view across their entire infrastructure needed to deliver the IT due diligence a successful M&A deal requires, and such a view can only effectively be delivered through centralised IT monitoring. Research from analyst firm Enterprise Management Associates indicates that a vast number of organisations run more than ten different monitoring tools and can take between three and six hours to find the source of an IT performance issue. This approach is clearly unsustainable, especially when companies have the added complication of merging two businesses. The true impact of downtime during M&A can easily be seen in the catastrophic IT outage suffered by UK bank TSB in 2018, where its migration from IT systems operated by its former owner to its new owner’s platform resulted in weeks of disruption for millions of customers. At the other end of the scale is technology giant EMC, which proudly publicises its dedicated IT M&A integration team, brought in as soon as a letter of intent is signed with a potential acquisition.

Visibility through a single pane of glass  

At its heart, effective IT monitoring is, therefore, a key component of IT operations which are designed to “manage the provisioning, capacity, performance and availability of the computing, networking and application environment” (Gartner). As such, they can be used to audit and analyse the critical IT assets of firms on both sides of the M&A deal. This data can then be employed to ensure the company is accurately valued, and integration roadmaps and timelines are realistic. Post-deal, best practice IT operations can also help manage IT performance to ensure the customers of both companies involved suffer no adverse impact as a result of key staff being diverted to focus on the merger. They play a central part in identifying under-utilised assets for optimisation, reconfiguring systems and stripping away duplicate technologies once a deal has completed.

However, it is important to remember that modern IT operations are becoming increasingly complex, as they are now usually comprised of a mixture of dynamic, cloud and virtual-based systems, often operated by third-party providers. On top of this, many organisations still operate legacy IT monitoring tools that are ill-suited to providing visibility into these hybrid systems. As a consequence, tool sprawl is prolific among businesses that operate with this outdated, reactive and siloed approach to IT monitoring.

In order to combat this, financial organisations must centralise and consolidate these tools to rid themselves of data islands, improve decision-making, and proactively enhance IT performance and strategic advantage. From this single pane of glass, IT and operations managers can then accurately plan M&A due diligence and post-acquisition integration. However, it is critical that senior business leaders understand the strategic importance of bringing IT into the M&A process as early as possible.
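
As a loose sketch of what consolidating into a single pane of glass can mean in practice, the fragment below (hypothetical endpoints and service names, no particular monitoring product) polls health checks from systems on both sides of a deal and rolls them into one status view, rather than leaving each team to watch its own tool:

```python
import requests  # assumes the 'requests' package is installed

# Hypothetical health-check endpoints across both merging organisations.
SERVICES = {
    "payments-api (acquirer)": "https://acquirer.example.com/health",
    "core-banking (target)":   "https://target.example.com/health",
    "mobile-gateway (target)": "https://target.example.com/mobile/health",
}

def collect_status(timeout: float = 2.0) -> dict:
    """Poll every service once and return a single consolidated view."""
    status = {}
    for name, url in SERVICES.items():
        try:
            resp = requests.get(url, timeout=timeout)
            status[name] = "UP" if resp.ok else f"DEGRADED ({resp.status_code})"
        except requests.RequestException as exc:
            status[name] = f"DOWN ({exc.__class__.__name__})"
    return status

if __name__ == "__main__":
    for service, state in collect_status().items():
        print(f"{service:28} {state}")
```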

 

Website: https://www.opsview.com/

Derick Fiebiger from 0chain explains blockchain’s key benefits for your business.

Irrespective of your opinion of it, business executives have a duty to their organisations to assess relevant new technologies. Blockchain is an exciting new technology, and companies the world over are evaluating whether it offers a dependable, effective and valuable solution to their current challenges.

Seeing leading tech giants like IBM, AWS, Oracle and Accenture already on board and heavily invested in this new technology helps validate that blockchain is indeed more than hype and will transform many industries and systems in the years to come.

So what does this mean for me and my enterprise, you may ask?

What are blockchain’s benefits for my business now and how will it help me innovate and stay ahead of the competition?

Blockchain’s advantages are many and as the underlying technology, applications and protocols evolve, more and more use cases emerge. At this stage though, the most important business benefits focus on increasing efficiency, agility, ROI, security, privacy and transparency.

  1. Transparency and Traceability 

Lack of transparency leads to delayed transactions, financial losses and situations that could compromise important commercial relationships.

Blockchain plays a critical role in tracing transactions and operations. The ability to easily access historical transactional data is particularly important for companies that have complex supply chains. It also helps with confirming transaction authenticity and preventing fraud.

As each transaction is recorded sequentially and indefinitely, you can easily provide an indelible audit trail for each transaction, operation or asset.

This accelerates reporting dramatically and enables you to access data regarding any potential issues in real time so you can fix problems as soon as they arise.

Furthermore, the audit process becomes much more efficient, faster and non-disruptive for the business.
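
To make the audit-trail point concrete, here is a minimal, illustrative sketch (not any particular blockchain platform) of how hash-chaining makes a ledger tamper-evident: each record embeds the hash of the previous one, so altering any historical entry breaks every hash that follows.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(ledger: list, event: str) -> None:
    """Append an event, chaining it to the hash of the previous record."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"timestamp": time.time(), "event": event, "prev_hash": prev}
    record["hash"] = record_hash(record)
    ledger.append(record)

def verify(ledger: list) -> bool:
    """Recompute every hash; any tampered record breaks the chain."""
    for i, rec in enumerate(ledger):
        expected_prev = ledger[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev_hash"] != expected_prev or rec["hash"] != record_hash(body):
            return False
    return True

ledger = []
append(ledger, "goods received at warehouse A")
append(ledger, "invoice 4711 approved")
print(verify(ledger))          # True
ledger[0]["event"] = "edited"  # simulate tampering with history
print(verify(ledger))          # False
```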

  2. Security and Privacy

Security has become a massive issue for all enterprises and senior tech leaders are investing significant resources to prevent malicious attacks, stop data leakage and increase auditability and accountability.

Despite this investment, many companies install only low-level security measures and hope their solutions hold against malicious attacks. But considering how many reputable global corporations have recently fallen victim to malicious parties, it’s becoming very clear that IT security not only has to protect confidential, sensitive data; there also need to be immutable records showing who did what, when and where, in case something does go wrong.

Independently verified complex cryptography, definitive unchangeable records and decentralisation unite to make it far more difficult for hackers to compromise data. All these factors could revolutionise how critical information is shared, preventing fraud and loss of data.
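
As an equally simplified illustration of the “immutable record of who did what, when” point, here is a hedged sketch using the widely available cryptography package: each entry is signed with the author’s private key, so anyone holding the matching public key can later verify both the author and the integrity of the record. The names and record contents are invented for the example.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Alice signs an action record with her private key.
alice_key = Ed25519PrivateKey.generate()
record = b"2024-05-01T10:02:17Z alice.admin APPROVED payment/4711"
signature = alice_key.sign(record)

# Anyone with Alice's public key can verify author and integrity later.
alice_pub = alice_key.public_key()
try:
    alice_pub.verify(signature, record)
    print("record authentic and untampered")
except InvalidSignature:
    print("record has been altered or was not signed by Alice")
```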

  3. Efficiency and Agility

In order to navigate an increasingly complex business environment and fully leverage blockchain’s benefits, businesses need services with ample transaction capacity, near-instant finality and the ability to scale, all without sacrificing blockchain’s core benefits.

Think how much data your company generates and what’s managed on a daily basis. Countless transactions and operations happen every day inside and outside the company. Data flows to and from different parties.

With blockchain and tokenisation, you can reduce costs by storing and verifying all of this data in a more efficient, secure way, and transactions and data queries can be validated and completed far faster than with traditional methods.

Furthermore, many companies still use paper-heavy processes which are time-consuming, prone to human error and offer little transparency. Blockchain streamlines and automates these processes, enabling organisations to become more efficient and agile.

  4. Lower Costs

Reducing costs is a critical priority for many enterprises. With blockchain you can reduce data storage costs, store data in a more cost-effective way and eliminate many of the third parties currently used for various transactions and trading processes.

This is increasingly important for companies with large IoT networks or business functions generating huge volumes of data every day.

Taking Control of Your Destiny 

Security, agility and efficiency are powerful blockchain benefits that businesses should be exploring. At the same time, there is a vast range of tools, applications and ideas that can be delivered through blockchains, and it’s up to each enterprise to investigate how it can use the technology.

One thing to keep in mind if you’re considering implementing blockchain in your business is that this is not just an IT or R&D project. Blockchain, in many cases, is a fundamental business transformation operation which, if deployed and used properly, will significantly improve revenue and cost management. It will also cut across organisational silos and provide unique abilities for increased competitiveness and overall performance.

Regardless of whether you’re still on the fence about blockchain adoption or a passionate ambassador, one thing is clear: blockchain is here to stay, and the sky is the limit for the companies that are ready to take this new technology on board and leverage its full potential.

 


LinkedIn: https://www.linkedin.com/in/derick-fiebiger-4605a040/

Website: https://0chain.net/

Following recent incidents such as TSB's systems failure and Visa's service outage, operational resilience is increasingly vital. The Bank of England and the FCA recently published a report stressing the importance of business continuity during a disaster. Below, Finance Monthly hears from Peter Groucutt, Managing Director at Databarracks, who discusses what businesses can do to strengthen their operational resilience and absorb any shock they may experience during a disaster.

In July 2018, the Bank of England, Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA) published a joint discussion paper aimed at engaging with the financial services industry to improve the operational resilience of firms and financial market infrastructures (FMIs).

At the time it was issued, banks and FMIs were capturing media attention, following several high-profile incidents.

TSB’s failed IT migration has been well publicised, costing the firm £176.4m in various fees and leading to the departure of its chief executive, Paul Pester. In June 2018, shortly before the release of this paper, millions of people and businesses were unable to pay for shopping due to a sudden failure of Visa’s card payment system.

Financial services lead in business continuity

The financial services industry is a leader in business continuity and operational resilience. It requires high levels of system uptime and is well regulated, and the best practices it introduces are often adopted more widely by other industries. Our own research supports this. Our annual Data Health Check survey provides a snapshot of the IT industry from the perspective of over 400 IT decision-makers, and the findings from this year’s survey provided some revealing insights.

64% of financial institutions had a business continuity plan in place, compared to an industry average of 53%. Of the financial sector firms with a specific IT disaster recovery process within their business continuity plan, 64% had tested this in the past 12 months – compared to 47% across other industries. Finally, 81% of financial firms had tested their IT disaster recovery plans against cyber threats, versus 68% of firms in other sectors.

While these findings reinforce the strength of the industry’s operational resilience, incidents like TSB and Visa prove it is not immune to failures.

The regulators want to “commence a dialogue that achieves a step-change in the operational resilience of firms and FMIs”. The report takes a mature view of the kinds of incidents firms may face and accepts that some disruptions are inevitable. It provides useful advice that can be applied not only to the financial services community but to other industries too.

Leveraging advice to improve operational resilience

So, what can be learned from this report? Firstly, setting board-approved impact tolerances is an excellent suggestion. An impact tolerance describes the amount of disruption a firm can tolerate, and helps senior management prioritise their investment decisions in preparation for incidents. This is fundamental to all good continuity planning, particularly as new technologies emerge and customer demand for instant access to information intensifies. These tolerances are essential for defining how a business builds its operational practices.
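
To illustrate the idea (the figures and service names below are purely hypothetical), board-approved impact tolerances can be captured as explicit, testable limits per business service, making it easy to check a proposed recovery plan or an actual incident against what the board has agreed to tolerate:

```python
from datetime import timedelta

# Hypothetical board-approved impact tolerances per business service.
IMPACT_TOLERANCES = {
    "card payments":      {"max_outage": timedelta(hours=2), "max_affected_customers": 50_000},
    "online banking":     {"max_outage": timedelta(hours=6), "max_affected_customers": 200_000},
    "monthly statements": {"max_outage": timedelta(days=2),  "max_affected_customers": 1_000_000},
}

def breaches_tolerance(service: str, outage: timedelta, affected: int) -> bool:
    """Return True if an actual or simulated incident exceeds the agreed tolerance."""
    tol = IMPACT_TOLERANCES[service]
    return outage > tol["max_outage"] or affected > tol["max_affected_customers"]

# Example: a simulated 3-hour card-payments outage affecting 10,000 customers.
print(breaches_tolerance("card payments", timedelta(hours=3), 10_000))  # True
```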

Additionally, focusing on business services rather than systems is another important recommendation. The key is to design your systems and processes on the assumption that there will be disruptions, while ensuring you can continue to deliver business services.

It’s also pleasing to see the report highlight the increased concentration of risk caused by a limited number of technology providers. This is particularly prevalent in the financial sector for payment systems, but again there are parallels with other industries and technologies. Cloud computing, for example, is reaching a state of oligopoly, with the market dominated by a small number of key players. For customers of those cloud services, this can lead to a heavy reliance on a single company, which poses a significant supplier risk.

Next steps

Looking ahead, the BoE, PRA and FCA have set a deadline of Friday 5th October for interested parties and stakeholders to share their observations. The supervisory authorities will use these responses to inform current supervisory activity and to shape future policy-making. They will then share relevant information with the Financial Policy Committee (FPC), supporting its efforts to build resilience in the financial system.

Firms looking to improve their operational resilience should take advantage of this excellent resource – whether in financial services or not.

Last week TSB lost around 16,000 customers following a serious IT meltdown. The episode is a stark demonstration of how important customer service and customer experience are in the commercial banking sector.

In light of TSB’s recent customer service blunder, Jonny Davis, vice-president of global client management partnerships at Fraedom, comments on how banks can enhance their solutions and services delivery.

The TSB story should serve as a reminder of the importance of customer service and the customer experience. Times have changed – businesses have more choice in who they bank with and can switch banks relatively easily, as we have seen from TSB’s customer losses. In this day and age, it’s unacceptable for banks to have faults on this scale.

Over the last decade, customers have come to expect more from their banks, largely thanks to technological innovation that provides seamless mobile transactions, generally responsive customer service and fast transaction times. These services are now seen as a given, and any bank, whether consumer or commercial, that falls short of these expectations is seen as failing. With ever-growing customer expectations, banks must adapt and innovate in these changing times.

A recent survey conducted by Fraedom found that account management and customer service are priorities for 71% of commercial clients. Ultimately, people want more from their banks, and this often means more automation, a focus on online banking and a more personalised service. Customers are looking for the banking system to change and up its game when it comes to customer service. In fact, we discovered that 95% of commercial banking clients want their providers to supply the same aggregated account views and real-time transactional information that their personal apps do. This is one area where commercial banks must innovate to keep up with customer expectations.

The recent development and adoption of technology within the banking sector has certainly raised our expectations as consumers, in both the personal and commercial spheres. We have come to realise that we can do more and more without ever having to set foot inside a bank or even talk to another human being, and we now expect it. With more than 70% of consumers willing to receive computer-generated banking advice, according to Accenture, this is a great way for banks to offer the 24/7 service customers have come to expect. Nowadays, customers see no reason for an adherence to ‘office hours’ when chatbots can provide a solution thanks to their 24/7 availability and intelligent access to customer information.

Chatbots are just one area in which banks can innovate beyond the basic banking apps to provide a better customer experience, with other areas including biometrics, security and AI. For instance, banks can provide an added value service by incorporating AI into their existing services for spend analysis or risk identification. This would raise banking services above the level of a commodity, improving brand consideration and customer loyalty and cementing their relationships with clients.

TSB’s experience should be a lesson to its peers about the power of their customers. If customers aren’t happy with the service they are being provided, then it is highly likely they will take their banking elsewhere. It’s therefore up to banks to innovate and use technology to provide faster, safer and more intuitive solutions for their customers.
