Most sectors must comply with data regulations and conform to industry trends, evolving within the limits regulation imposes on them. According to Aravind Srimoolanathan, Senior Research Analyst - Aerospace, Defence & Security at Frost & Sullivan, this is particularly true of the biometrics sector, which is progressing in line with regulation and finding growing opportunities to excel in a security-driven data world.

The Swedish data protection authority (DPA) recently levied its first fine, of approximately $20,000, on a high school that ran trials of facial recognition technology on a group of students to monitor their attendance. The school authorities argued that the programme had the students’ consent, but that did not soften the regulator’s stance, with the European Data Protection Board citing the ‘imbalance’ between the data subject and the data controller. Canvassing the multiple opinions circulating on the web1, Frost & Sullivan notes several violations reported in Bulgaria and Austria following the incident in Sweden. These regulatory breaches have led to similar fines levied by the respective local data protection agencies tasked with enforcing GDPR. Have the floodgates opened? Will this drown the biometrics market? Probably not, but it does raise significant concerns that need to be assessed and addressed if biometric technologies are to continue delivering their benefits to business and security operations.

The General Data Protection Regulation (GDPR) is designed to protect personal data. It emphasises a person’s right to protect their personal data, irrespective of whether the data are processed within or outside the EU. Any data that can be linked to a person falls within the definition of “personal data”. The regulation comprises several articles and clauses that require compliance by every form of agency - public, private or individual - that processes the personal and sensitive data of clients, companies or other individuals. It addresses not only the data protection and privacy of citizens of the European Union (EU) and European Economic Area (EEA) but also the transfer of data outside the EU and EEA.


In summary, data is expected to be stored, managed, and shared in an individual-centric approach rather than a collateral approach.

Conventional methods of managing identity in the modern world, such as ID cards and PINs/passwords, are failing to meet efficiency, accuracy and security requirements. The need to overcome these challenges is driving exponential demand for biometric-based ID management and access control systems. Biometric technologies (yes, facial recognition is one of them) curtail unauthorised physical and cyber access, prevent identity fraud, enhance public safety, and drive seamless and efficient processes, delivering greater safety, convenience, and profit.

The Swedish high school case shows that the reach of GDPR is not limited to giant corporations such as British Airways; smaller public and private entities that ‘mishandle’ data also violate its dictates.

Frost & Sullivan’s collation of perspectives and insights from across the industry indicates that biometric technologies will replace conventional methods of identity and access management in the years to come - a case of when, not if. Continued enforcement of data regulations will drive proper use-case definition and regulatory compliance, but for this the suppliers and operators of these technologies need to create compliant, secure-by-design solutions and processes. The first step is ensuring secure operation of the systems; the second is designing robust and verifiable processes for the associated data generated; the third is defining the application of harvested data within the ethos of GDPR and related governance.

In the short term, though, with a surge in the adoption of biometric technologies, Frost & Sullivan anticipates an uptick in the number of GDPR violation cases due to partial and/or improper understanding of data privacy regulations. Though there is a risk that hefty fines may slow the pace of widespread adoption, Frost & Sullivan’s proposed three-step strategy will drive healthy demand. Organisations digitally transforming their businesses for enhanced process efficiencies will need to realign their strategies to comply with general data protection regulations.

Biometric technologies are gaining unwelcome notoriety amid data breaches, privacy concerns and the unethical commercialisation of the associated data. GDPR, the Achilles’ heel it may prove to be for the biometrics market, does not necessarily need to be – instead, the principles of GDPR can themselves become the value proposition of future biometric technologies.

1 http://www.enforcementtracker.com/

2 https://www.infosecurity-magazine.com/news/gdpr-spurs-700-increase-data/

New research has revealed that, based on the average number of banknotes required by an individual adult each year, new polymer £10 notes release 8.77kg of CO2 compared with 2.92kg for their cotton-paper predecessors - three times as much.

For £5 notes the figures are 4.97kg for polymer against 1.8kg for paper, or 2.76 times as much - and that is just the manufacture of the required number of notes.

The research combines data from the Bank of England’s own reports with information on cash manufacture and usage from sources such as the British Retail Consortium, to give a more realistic comparison.

The plastic notes were initially introduced in 2016 on the basis of their greater security features, higher resistance to dirt and longer life.

This extended lifespan was cited as the main reason for the new notes having a lower environmental impact. However, the bank’s data is based on what it calls functional units - the circulation of 1,000 banknotes over 10 years - rather than the number actually used by an individual, their manufacture and the number of exchanges they go through.

When it comes to disposal at the end of their lives, paper notes are returned to the Bank of England, where they are granulated and composted in a process similar to that used for food waste. Meanwhile, polymer notes are granulated, melted and mechanically recycled into other objects.

The greenhouse gas production of each method for both the paper and plastic £5 notes is essentially the same. The slightly larger and thicker £10 notes, though, mean that the polymer versions create slightly more CO2 in their end-of-life process than their paper counterparts.

Not every alternative method of payment avoids the problem, either. The increasingly popular Apple Pay, for example, comes with the considerable environmental cost of manufacturing an iPhone, which will typically only be kept for two years.

According to Apple’s own reports, a 64GB iPhone XS represents lifetime emissions of 70kg of CO2, with 53.9kg coming from the unit’s production - almost eight times more than polymer £10 notes, the next most damaging option.

The most environmentally-friendly payment method is a bank card, despite being made from PVC. Over its three-year life, a standard card represents just 20.8g of CO2 production. Even when the technology for wireless payments is added, it increases to just 40g of CO2 - a fraction of that from banknotes.
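To put those figures side by side, here is a minimal Python sketch that ranks the payment methods above by annual CO2. The per-note figures are the annual numbers quoted in the research; annualising the card and iPhone figures by dividing lifetime emissions by typical lifespan is a simplifying assumption for illustration only.

```python
# Minimal sketch comparing the CO2 figures quoted above on a per-year basis.
# Per-note figures are annual numbers from the research; the card and iPhone
# figures are annualised here by dividing lifetime emissions by lifespan
# (an assumption, and the iPhone is of course not solely a payment device).

EMISSIONS_KG_PER_YEAR = {
    "polymer £10 notes": 8.77,
    "paper £10 notes": 2.92,
    "polymer £5 notes": 4.97,
    "paper £5 notes": 1.80,
    "contactless bank card": 0.040 / 3,  # 40g over a three-year card life
    "iPhone XS (Apple Pay)": 70.0 / 2,   # 70kg lifetime, typically kept two years
}

for method, kg in sorted(EMISSIONS_KG_PER_YEAR.items(), key=lambda kv: kv[1]):
    print(f"{method:<25} {kg:7.3f} kg CO2 per year")
```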

(Source: Moneyboat)

44% of incident response requests were processed after an attack was detected at an early stage, saving the client from potentially severe consequences. These are among the main findings of Kaspersky’s latest Incident Response Analytics Report.

It is often assumed that incident response is only needed when damage from a cyberattack has already occurred and further investigation is required. However, analysis of the multiple incident response cases in which Kaspersky security specialists participated during 2018 shows that incident response can serve not only as an investigative tool, but also as a way of catching an attack at an earlier stage and preventing damage.

In 2018, 22% of IR cases were initiated after potential malicious activity was detected in the network, and a further 22% after a malicious file was found in the network. Without any other signs of a breach, both situations may suggest an ongoing attack. However, not every corporate security team can tell whether automated security tools have already detected and stopped the malicious activity, or whether this was just the beginning of a larger, invisible operation in the network requiring external specialists. As a result of incorrect assessment, malicious activity can evolve into a serious cyberattack with real consequences. In 2018, 26% of investigated “late” cases were caused by infection with encryption malware, while 11% of attacks resulted in monetary theft. A further 19% of “late” cases followed detection of spam from a corporate email account, detection of service unavailability or detection of a successful breach.

“This situation indicates that in many companies there is certainly room for improvement of detection methods and incident response procedures. The earlier an organisation catches an attack, the smaller the consequences will be. But based on our experience, companies often do not pay proper attention to artifacts of serious attacks, and our incident response team is often called when it is already too late to prevent damage. On the other hand, we see that many companies have learned how to assess signs of a serious cyberattack in their network, and we were able to prevent what could have been more severe incidents. We call on other organisations to consider this as a successful case study,” said Ayman Shaaban, security expert at Kaspersky.

Additional findings of the report include:

To effectively respond to incidents, Kaspersky recommends:

 

You visit your local bank branch’s ATM to withdraw cash or to print out a mini statement and you are met with a message informing you that the ATM is out of service. That is frustrating at all times but can be especially aggravating when there is no other cash machine available nearby. On the theme of banking resilience, here Alan Stewart-Brown, VP EMEA at Opengear, discusses with Finance Monthly the network issues banks are currently dealing with.

For retail banks, the issues and challenges presented by ATM network downtime are likely to be high on the agenda. Financial institutions are reliant upon a resilient network to ensure unique compliance requirements are met, address customer needs and adapt to evolving industry trends. ATM resilience is an important element of this.

Many banks have extensive ATM networks across the UK and often further afield. They may have an ATM in every town or city across the country, and in some places they may run multiple ATMs. They are also likely to have machines in many more remote sites. If they have network issues or outages, a large number of ATMs could suddenly be out of commission, presenting a huge range of issues and challenges to the bank.

Whenever ATMs go down, the result is inevitably a loss of revenue and customers for the bank, as customers switch to other providers. It is also likely to have a negative impact on the bank’s reputation and brand image. Less well understood, but equally important, it presents a security issue, as an engineer will have to open the ATM up while on site.

In the past, when an ATM went down, an engineer visit would be scheduled. Depending on availability, how remote the ATM was geographically and the severity of the problem, that could mean hours or even days of downtime.

Even when the engineer arrived on site after a potentially long journey, fixing the problem might not necessarily be straightforward. The ATM may be owned by a third party organisation, not necessarily the bank itself. It may therefore be difficult to access because it is located in a building or facility belonging to another organisation and/or because the engineer’s visit happens out of normal working hours.

Finding a Solution

Banks with ATM networks need something that allows them to get these remote units fixed without having to waste engineering time travelling to the site and dealing with the security issues of opening the box up and the logistical issues that may be involved in gaining access to the ATM itself. They need a solution that can give them remote access when the network is up and running and also when it is down. And they need one that can allow them to power cycle the equipment within the ATM when the router hangs - a common problem in these environments.

These networks also need a solution that is vendor neutral on the equipment it connects to but also on the power equipment it can manage. An out-of-band management unit can be added to each ATM to reduce downtime to just a few minutes and bring them back up very quickly. It also negates the need for someone to physically go to the site, and most importantly removes the necessity for the secure opening up of the ATM.
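As an illustration of the kind of remediation an out-of-band unit enables, here is a minimal Python sketch that probes the ATM router over the primary network and, if it is unreachable, asks the out-of-band gateway (reached over its cellular path) to power cycle the router’s outlet. The hostnames and REST endpoints are hypothetical, purely for illustration, and are not Opengear’s actual interface.

```python
import requests  # standard HTTP client; the gateway URL and API paths below are hypothetical

OOB_CONSOLE = "https://oob-gateway.example-bank.internal"
ATM_ROUTER_OUTLET = "outlet-3"

def router_is_reachable(host: str = "atm-router.example-bank.internal") -> bool:
    """Very rough health probe over the primary network (placeholder logic)."""
    try:
        requests.get(f"https://{host}/status", timeout=5)
        return True
    except requests.RequestException:
        return False

def power_cycle_via_oob() -> None:
    """Ask the out-of-band unit to cycle the power outlet feeding the hung router."""
    requests.post(f"{OOB_CONSOLE}/api/pdu/{ATM_ROUTER_OUTLET}/cycle", timeout=30)

if not router_is_reachable():
    power_cycle_via_oob()
```

The point of the sketch is simply that the remediation path does not depend on the primary network being up, and no one has to travel to the site or open the ATM.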

Keeping Branches Up and Running

ATM failures are of course one key aspect of a broader requirement facing banks to keep their retail branches up and running at all times. At Opengear, we are seeing a growing demand for solutions that deliver network resilience from core to edge in financial networks. One of the top performing banks in the US recently needed an out-of-band solution for its multiple locations across the country. With the challenge it faced highlighted by a recent outage at a remote location, the bank wanted to reduce the burden of travelling to geographically-distributed sites, decrease downtime and ensure compliance requirements were met. It chose to deploy ACM7000 Resilience Gateways from Opengear at each branch location, paired with the Lighthouse Central Management System (CMS), also from Opengear.

Failover to Cellular (F2C) and Smart Out-of-Band (OOB) technology ensure security requirements are met while also providing access to infrastructure during a disruption, with an alternate path to the primary network using 4G LTE. In addition, the bank is able to deploy and provision new sites remotely.  It is a great example of the benefits of resilient access to networks in financial services when an outage occurs.

In summary, outages are bad news for banks and other financial institutions. ATM outages are arguably especially bad because they are particularly visible to customers; cause immediate loss of revenue and customer churn; as well as negatively impacting reputation and presenting a security risk. But they are inevitable because of human error, cyberattack, and the ever-increasing complexity of network devices, modern software stacks, and hardware devices. To keep consumers happy and the institution’s reputation intact, financial services must be prepared for outages. Smart OOB with Failover to Cellular can keep services running even when part of the network is down.

As reported in the Financial Conduct Authority survey by Which?, the UK banking sector was hit by IT outages on a daily basis in the last nine months of 2018, demonstrating a higher frequency of major banking glitches than previously thought. Barclays alone reported 41 major incidents during those months, followed by Lloyds Bank with 37 IT failures and Halifax/Bank of Scotland with 31. Whilst TSB only reported 16 incidents, their week-long outage last year cost them around £330m as well as the longer-term impact of the clients lost.

Just minutes of downtime can significantly impact the financial sector, which holds the data and funds of millions of customers who are reliant on having access to these services and trust that their assets will be kept safe. To minimise the effects of a disaster and ensure business continuity in case of an IT failure or ransomware attack, businesses must invest in customised disaster recovery services which allow data to be brought back as quickly as possible in case of an outage. Diverting just a small proportion of the cybersecurity budget towards routine IT operations can deliver significant ROI in terms of increased operational resilience. Regular testing and optimisation of backup and recovery systems can deliver big rewards in terms of preventing issues and getting back up and running quickly should disaster strike.


Safeguarding your data 

In the event of an IT failure or a ransomware attack, IT operators need a way to get systems back online, and to do so fast. As noted by Gartner, the average cost of IT downtime is £4,400 per minute. The implications of IT failures go far beyond financial losses, however: they also damage the reputation of the business and lead to massive amounts of operational time lost. When a cyberattack or an IT outage takes place, it is not the failure or attack itself that causes the most harm but the resulting downtime of operations, affecting the productivity and credibility of the organisation. To avoid such losses, organisations must put appropriate recovery systems in place. But to do so, they must first understand the IT systems they run and know what data they hold.

To stop the nightmare scenario from becoming reality, a solution is needed that can recover business-essential data and get the most crucial systems back online in minutes. A zero-day approach to IT architecture can do just that, as it allows organisations to prioritise workloads, with a planned recovery strategy ensuring the most important systems are brought back first in case of an outage.

A zero-day recovery architecture is a service that enables operators to quickly bring workloads or data back into operation in the event of an IT failure or cyberattack, without having to worry about whether the workload is compromised. With the so-called 3-2-1 backup rule – meaning three copies of data stored on two different media and one backup kept offsite – zero-day recovery enables an IT department to partner with the cyber team and create a set of policies which define the architecture for what they want to do with data backups being stored offsite, normally in the cloud. This system assigns an appropriate storage cost and therefore recovery time to each workload according to their strategic value to the business, as all data is not created equal in terms of business continuity.
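As a rough illustration of the policy-driven, tiered approach described above, here is a minimal Python sketch that checks each workload’s backup policy against the 3-2-1 rule and orders workloads by their recovery-time target. The workload names, tiers and targets are illustrative assumptions, not any particular vendor’s schema.

```python
# A minimal sketch of a tiered recovery policy built around the 3-2-1 rule:
# three copies of the data, on two different media, with one copy offsite.

from dataclasses import dataclass

@dataclass
class BackupPolicy:
    workload: str
    copies: int              # total copies of the data
    media_types: int         # distinct storage media holding those copies
    offsite_copies: int      # copies held offsite (e.g. in the cloud)
    recovery_time_mins: int  # target time to bring the workload back

    def satisfies_3_2_1(self) -> bool:
        return self.copies >= 3 and self.media_types >= 2 and self.offsite_copies >= 1

policies = [
    BackupPolicy("payments-core", copies=3, media_types=2, offsite_copies=1, recovery_time_mins=15),
    BackupPolicy("marketing-archive", copies=2, media_types=1, offsite_copies=0, recovery_time_mins=1440),
]

# Most strategically valuable (tightest recovery target) first.
for p in sorted(policies, key=lambda p: p.recovery_time_mins):
    status = "ok" if p.satisfies_3_2_1() else "VIOLATES 3-2-1"
    print(f"{p.workload:<18} restore within {p.recovery_time_mins:>5} min  [{status}]")
```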


This recovery system will only prove useful, however, when set up properly and tested thoroughly and frequently. Approximately 25% of organisations’ nightly backups fail – yet few will be aware of this due to a lack of recovery testing, meaning most businesses will have no idea what data has been lost in the process. With this in mind, operators need to perform disaster recovery testing on their data. Without testing in a controlled and simulated environment, it is impossible for IT and security teams to fully understand their systems’ integrity. Discovering that backup and recovery systems have failed after an IT outage has already taken place is of no value – this testing needs to happen before the worst has a chance to occur.

IT outages in the financial sector are becoming more frequent. In fact, the number of such incidents reported to the Financial Conduct Authority increased by 138% in the first nine months of 2018, and they show no signs of slowing down, making them a question of when, not if. With a large portion of the infrastructure in the financial sector relying on IT, minimising outages and limiting threats to this infrastructure should be the number one priority for systems operators.

This week Finance Monthly hears from Simon Rodway, a solutions architect at Entersekt, on the potential and realistic impacts of Libra on the traditional banking system.

The social media giant Facebook announced in June that it has developed a cryptocurrency dubbed Libra and plans to launch it early next year. While some may dismiss it as just more hype, the sheer dominance of Facebook in people’s social lives gives it huge potential to disrupt banking and payments as we know it today.

The company claims that Libra will improve the way we send money online, making it faster and cheaper, as well as improving access to financial services – even for those without bank accounts or with limited access to traditional banking. It will be based on a blockchain platform called the Libra Network, and Facebook says that it will run faster than other cryptocurrencies, making it ideal for purchasing and sending money quickly. Importantly, Libra will not be managed by Facebook itself; rather, by the Libra Association – a not-for-profit organisation comprising 28 companies (so far) from around the world, such as PayPal, Lyft and Coinbase. It aims to sign up 100 companies by the time the cryptocurrency is launched.

One thing’s for sure: it’s going to be an interesting development to watch, especially in the wake of Facebook’s cryptocurrency wallet company Calibra’s David Marcus presenting his testimony to the United States Congress banking committee. The result was that Facebook would “take time to get this right” and there would be no launch until all concerns could be fully addressed.

So, even though it’s still early days, Libra has given us a lot to think about. Ill-informed speculation and clickbait aside, there are legitimate concerns around fraud – with reports of over one hundred fake domains already set up relating to Libra. There are also money laundering and financial risk concerns.

In terms of the impact and financial risk, most of what we’re hearing is coming from within the more established financial sectors. They’re either dismissing Libra as noise or decrying it as a vehicle for potential terrorist activities – something, they say, that regulators won’t allow to happen, despite Calibra openly reporting its intention to work with said regulators and policymakers to ensure the platform is secure, auditable and resilient.

At the same time, of course, they’re defending the current system, claiming that it works well, is safe and secure, and doesn’t support terrorism. But, if we’re honest, Anti-Money Laundering (AML) systems have, to date, been largely unable to stop the vast amounts of laundered funds from moving around. In addition, our Know Your Customer (KYC) and Know Your Business (KYB) processes use data from the likes of Companies House, which has been heavily criticised for their own lack of data validation and governance.

All that aside, what’s become quite clear is that the existing system presents too many blockers for the poorer, under-banked members of our society. Those working in the UK, for example, and legitimately wanting to transfer their wages to their families in other countries, end up paying exorbitant banking fees, only to wait days for their funds to clear.

This is where Libra, with its vision for financial inclusion, could make a difference. And if Libra doesn’t make it happen this time around, the technology and conceptual design are essentially open source, so someone else will. The wheels are in motion, and financial institutions that ignore the trend do so at their peril.

Two thirds (66%) of people rate safe and secure payments as most important in the online checkout process, with only one in ten being most concerned about speed or simplicity. Security ranked highest across all age groups, and was a particular concern for over 55s (75%) compared to just over half of 18-24 and 25-34 year olds (52% and 53% respectively).

The survey, conducted online with YouGov, also revealed a further 76% of Brits would be willing to accept a slower or less convenient checkout experience in return for greater payment security. Meanwhile, almost half (45%) said security concerns about online payment processes were the reason most likely to put them off using a particular online retailer, more so than having to create an account (14%), a confusing process (8%), or too many steps during checkout (6%).

Keith McGill, head of ID and fraud at Equifax, said: “With more than 20% of retail revenues coming from online sales*, it’s positive to see so many consumers have security front of mind when they’re at the online checkout. The latest stats from Cifas do however show an increase in identity fraud** so it’s important shoppers remain vigilant. If you have any doubts about the professionalism of a website you should always think very carefully before entering your personal or payment details.

“New European wide regulations are on the horizon which will require two stage verification for any online purchase for more than 30 euros, similar to the security checks used for online banking. While this might feel like an extra hoop to jump through, it’s an important step forward in the ongoing battle to fight fraud.”

(Source: Equifax)

This is according to Aaron Lint, Chief Scientist and VP of Research at Arxan Technologies, who discusses with Finance Monthly below, touching on the key elements of tech security and the use of financial applications across devices.

There’s a systemic problem across the financial services industry with financial institutions failing to secure their mobile apps. With mobile banking becoming the primary user experience and open banking standards looming, mobile security must become a more integral part of the institution’s overall security strategy, and fast.

When a company fails to consider a proper application security technology strategy for its front line apps, the app can be easily reverse-engineered. This sets the stage for potential account takeovers, data leaks, and fraud. As a result, the company may experience significant financial losses and damage to brand, customer loyalty, and shareholder confidence as well as significant government penalties.

Where’s the proof?

A recent in-depth analysis conducted by Aite Group of financial institutions’ mobile applications highlighted major vulnerabilities including easily reverse-engineered application code. Each app was very readily reversible, only requiring an average of 8.5 minutes per application analysed. Some of the serious vulnerabilities exposed included insecure in-app data storage, compromised data transmission due to weak cryptography, insufficient transport layer protection, and potential malware injection points due to insufficient integrity protection.

For example, of the apps tested, 97% lacked binary code protection, meaning the majority of apps can be trivially reverse engineered. Of equal concern was the finding that 90% of the apps shared services with other applications on the same device, leaving the data from the financial institution’s app accessible to any other application on the device.

This metadata is built by default into every single unprotected mobile application in the world. It provides not only an instruction manual for the APIs which are used to interact with the data center, but also the location of authorization keys and authentication tokens which control access to those APIs. Even if the applications are implemented without a single runtime code-based vulnerability, this statically available information can provide an attacker with the blueprints they are seeking when performing reconnaissance.

There is no shortage of anecdotal evidence which shows that hackers are actively seeking to take advantage of vulnerabilities like the ones identified in the research. For example, recently mobile malware was uncovered that leveraged Android’s accessibility features to copy the finger taps required to send money out of an individual’s PayPal account. The malware was posted on a third-party app store disguised as a battery optimisation app. This mobile banking trojan was designed to wire just under £800 out of an individual’s PayPal account within three seconds, despite PayPal’s additional layer of security using multifactor authentication.

So, what’s the solution?

To minimise the risk of all of the vulnerabilities being identified and ultimately exploited, it is essential that financial institutions adopt a comprehensive approach to application security that includes app shielding, encryption, threat detection and response; and ensure their developers receive adequate secure coding training.

App shielding is a process in which the source code of an application is augmented with additional security controls and obfuscation, deterring hackers from analysing and decompiling it. This significantly raises the level of effort necessary to exploit vulnerabilities in the mobile app or repackage it to redistribute it with malware inside. In addition, app-level threat detection should be implemented to identify and alert IT teams on exactly how and when apps are attacked at the endpoint. This opens a new avenue of response for an organisation’s SOC (Security Operations Center) Playbook, allowing immediate actions such as shutting down the application, or sandboxing a user – essentially isolating them from critical system resources and assets, revising business logic, and repairing code.
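To make the detection-and-response loop concrete, here is a minimal Python sketch of how app-level threat telemetry might be mapped to playbook actions such as shutting down the app or sandboxing a user. The event names, user identifiers and actions are illustrative assumptions, not any particular app-shielding vendor’s API.

```python
# A minimal sketch of app-level threat events feeding a SOC playbook:
# each detection type from the shielded app maps to a response action.

RESPONSE_PLAYBOOK = {
    "debugger_attached": "shut_down_app",
    "code_tampered":     "shut_down_app",
    "repackaged_binary": "sandbox_user",
    "rooted_device":     "step_up_authentication",
}

def handle_app_threat_event(event: dict) -> str:
    """Map a telemetry event from the mobile app to a playbook action."""
    action = RESPONSE_PLAYBOOK.get(event.get("type"), "log_and_monitor")
    print(f"user={event.get('user_id')} event={event.get('type')} -> {action}")
    return action

handle_app_threat_event({"user_id": "u-1024", "type": "code_tampered"})
```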

App shielding and the other types of application security solutions mentioned above should be incorporated directly into the DevOps and DevSecOps methodologies so that the security of the application is deployed and updated along with the normal SDLC (Software Development Life Cycle). App Shielding is available post-coding, so as not to disrupt rapid app development and deployment processes by requiring retraining of developers. This combination of best practices increases an organisation's ability to deliver safe, reliable applications and services at high velocity.

Conclusion

It’s no secret that the finance industry is a lucrative target because the direct payoff is cold, hard cash. Research is showing that virtually none of the finance apps have holistic app security measures in place that could detect if an app is being reverse-engineered, let alone actively defend against any malicious activity originating from code level tampering.

We would reasonably expect our fundamental financial institutions to be leaders in security, but unfortunately, the lack of app protection is a disturbing industry trend in the face of a significant shift into reliance on mobility. Organisations need to take a fresh look at their mobile strategy and the related threat modeling, and realise how significant the attack surface really is.

Refinitiv, one of the world’s largest providers of financial markets data and infrastructure, has published its second annual financial crime report today. Innovation and the fight against financial crime: How data and technology can turn the tide highlights that almost three-quarters (72%) of organisations have been victims of financial crime over the past 12 months with a lax approach to due diligence checks when onboarding new customers, suppliers and partners cited as creating an environment in which criminal activity can thrive. This wake-up call has led to 59% of companies adopting new technologies to plug compliance gaps.

In its 2018 report, Refinitiv outlined that $1.45 trillion of aggregate turnover is lost as a result of financial crime. This year’s report shows that the cost could indeed be much greater. Only 62% of the 3,000 compliance managers Refinitiv surveyed across 24 geographies claimed that financial crimes were reported internally, and just 60% said that they were reported to the relevant external organization.

Over the next year, companies are intending to spend on average 51% more to mitigate the crisis. The increased investment emphasises the priority placed on fighting financial crime in 2019 and reflects the amount of pressure respondents are under to be more innovative to both reduce risk and costs.

According to the report, an overwhelming majority of respondents (97%) believe that technology can significantly help with financial crime prevention with cloud-based data and technology the top choice, followed by AI and Machine Learning tools. Technology-driven solutions, such as Artificial Intelligence and Machine Learning, are already allowing businesses to implement processes and check up to millions of customer and third-party relationships, more quickly and efficiently.

Phil Cotter, Managing Director of the Risk business at Refinitiv, said the results showed that businesses need to do more to invest in technology to address the problem: “It is clear from the results of this report that businesses exposed to financial crime threats need to maximize their use of technology and future collaboration could prove key to realising the potential of innovation, particularly between tech companies, governments and financial institutions.

“Significant advancements in technology, facilitated by innovations such as AI, ML and cloud computing, are already under way. These technologies are enabling intelligence to be gathered from vast and often disparate data sets which together with rapid advances in data science, are transforming the approach to compliance, streamlining processes such as Know Your Customer (KYC) and helping to uncover previously hidden patterns and networks of potential financial crime activity.”

While the report focuses on the many emerging technologies coming on stream in the fight against financial crime, it also urges organisations not to overlook another vital form of innovation – collaboration. Just over eight in 10 (81%) respondents said that there is some sort of existing partnership or taskforce  in their country to combat financial crime. 86% believe that the benefits of sharing information within such a partnership organization outweighs any possible risks.

In 2018, Refinitiv partnered with the World Economic Forum and Europol to form a global Coalition to Fight Financial Crime. The Coalition is working with law enforcement agencies, advocacy groups, and NGOs to address the societal costs and risks that financial crime poses to the integrity of the global financial system.

At stake are our personal data, as well as our monetary possessions. While the concern for the former is a rather new phenomenon, the latter have been guarded by a multi-layered web of intermediaries. And still banks and other financial institutions regularly witness the weaknesses of this set-up. Below Igor Pejic, author of new book ‘Blockchain Babel: The Crypto-Craze and the Challenge to Business’, confronts the question: Is the Blockchain Really Unsinkable?

In recent years a technology hailed for immutability entered the stage: the blockchain. This cryptographically secured, distributed ledger technology was initially designed to bypass the financial system by enabling digital currencies, yet today banks are the most active in blockchain research, trying to reap the benefits of this supposedly tamper-proof ledger. But is the blockchain really unhackable?

In many a head there are probably stories whizzing around about stolen bitcoins and hacked exchanges. Mt. Gox is such a story. In 2014, Mt. Gox was the world’s largest crypto-exchange, processing around 70% of the world’s bitcoin transactions. 850,000 bitcoins were lost (of which around 200,000 were recovered). Further hacks, such as that of the Slovenian exchange Bitstamp, followed. Most recently Quadriga, a Canadian exchange, made headlines because its founder Gerald Cotten reportedly passed away on a trip to India. He was the only one who knew the private keys to the wallets of 115,000 customers with funds worth $143m. Those funds are thus inaccessible and lost.

Yet when commentators use these examples to sow doubt about blockchain security, they mix up different dimensions of data security, in particular data’s integrity during a transaction with its integrity before or after a transaction. The aforementioned hacks can be attributed to lax security standards outside the transactions themselves, such as the storage of private access keys. While parts of the crypto-sphere are reacting – Bitstamp has introduced two-factor authentication to access funds – many wallets and exchanges continue to operate with hair-raising security standards.

But what about the mechanism itself? Can attackers inject bogus transactions or rewrite past ones? The answer depends on the validation mechanism each particular blockchain uses. Let us illustrate this with bitcoin and other chains that work with so-called proof-of-work validation. In this set-up, validator nodes, also known as miners, invest massive computing power to solve a mathematical puzzle by trial and error. They are interested in the “right” solution because only the first to find it is rewarded with freshly minted coins. Once found, the correct value can be verified quickly by the network. The major danger here is that a possible attacker gains control over more than 50% of the hashing power in a network and can vote a wrong truth into reality. The attacker could then submit a transaction to the network and, after receiving the good or service he paid for, simply use his computing majority to fork the network at a point in time before he sent the money.
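To illustrate the asymmetry that makes this work – mining is expensive trial and error, while verification takes a single hash – here is a minimal Python toy sketch of proof-of-work. It deliberately omits real consensus details such as block rewards, difficulty adjustment and fork resolution.

```python
# A toy proof-of-work sketch: vary a nonce until the block hash falls below a
# difficulty target (here: a number of leading hex zeros). Any node can then
# verify the winning nonce with one hash. Illustration only, not real bitcoin.

import hashlib

DIFFICULTY = 4  # required leading hex zeros; higher means exponentially more work

def block_hash(block_data: str, nonce: int) -> str:
    return hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()

def mine(block_data: str) -> int:
    """Trial-and-error search for a nonce satisfying the difficulty target."""
    nonce = 0
    while not block_hash(block_data, nonce).startswith("0" * DIFFICULTY):
        nonce += 1
    return nonce

def verify(block_data: str, nonce: int) -> bool:
    """Verification is cheap: one hash, regardless of how hard mining was."""
    return block_hash(block_data, nonce).startswith("0" * DIFFICULTY)

nonce = mine("alice->bob:5BTC")
print(nonce, verify("alice->bob:5BTC", nonce))
```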

Critics will point to the infamous DAO hack. The DAO (Decentralized Autonomous Organization) was a leaderless organisation that issued a token built on Ethereum’s smart contract code. A hacker exploited a vulnerability in that code to capture $50m. An ideological conflict within the Ethereum community prevented a soft fork that would have reversed the hack; instead, a hard fork split the chain into Ethereum (the version without the hack) and Ethereum Classic (the version including it). But even this example was not a hack of the blockchain itself, rather a bug that plagued the DAO code sitting on top of the Ethereum blockchain. Despite many problematic constellations – e.g. a high concentration of mining pools, as well as a limited number of ISPs hosting large parts of prominent blockchains – the mechanism as such has never been hacked. Attacks are very expensive and the advantages for the most part short-lived.

Does this mean the blockchain is immutable? No. We have to get the fairy tale out of our heads that there is such a thing as absolute security. There is always a way to trick the system, even if it is highly unlikely, as with the aforementioned 51% attack. The question we should ask instead is whether blockchain is more secure than current systems. What most critics of new payment technology do not know is that even the SWIFT network, which enables monetary transactions between 11,000 financial institutions worldwide, has been subject to hacking in the past. In one heist, banks in Bangladesh and Ecuador lost millions. Blockchain technology has proven to be less susceptible to several attack vectors while doing away with intermediaries. This should render the discussion about absolute immutability superfluous.

In the UK, 88% of data breaches reported to the Information Commissioner’s Office (ICO) are caused by human error. The most common mistake is sending information to the wrong person. The number one culprit? Email. So what do you do? Peter Matthews, CEO of Metro Communications, knows what to do.

CFOs should not ignore the potential impact of such breaches on a company’s finances and reputation. Research for IBM suggests that the average cost of a data breach in the UK rose to £2.7m in 2018, with health, financial and service sectors most likely to experience breaches.

Few FDs would claim to be immune to accidental data transfer via email. So, what can you do if you inadvertently send a confidential message to the wrong person?

1. Recall or ‘unsend’ it

Email services offer different ways to cancel sent messages. In Outlook it is possible to recall and then delete an email provided it hasn’t been opened by the recipient. Gmail allows you to delay messages leaving your outbox. If a sensitive email has been sent to a fellow employee, your IT department should be able to delete it, provided they are informed quickly enough.

2. Contact the recipient

Get in touch with the recipient as soon as you notice the mistake and ask them to delete the email without reading or sharing it. Request that they email you to confirm they’ve done so. Log the incident in a ‘cyber accident book’.

3. Report and act quickly

Report the incident internally and ensure it’s followed through to its conclusion. An employee of SSE Energy who sent a sensitive email in error promptly reported it in accordance with the company’s policies and procedures. However, SSE’s failure to notify the commissioner in a timely manner led to a £1,000 fine and negative publicity. The regulations have since been amended so that directors, managers and company secretaries can be fined up to £500,000.

4. Inform and advise customers

Good customer service goes a long way. Boeing was mocked for failing to use its own data protection software to prevent an accidental breach which compromised the personal data of 36,000 customers. But it was applauded for informing customers about the nature of the incident, taking action to ensure files were deleted, and giving detailed advice about how customers could check their personal data wasn’t being misused.

5. Notify the regulator, if necessary

Inform the regulator within 72 hours if you believe there’s a risk to customers. Even where you don’t feel an incident is notifiable, it is still worth recording, internally. This will help you review incidents as part of a health check and if you ever have to demonstrate regulatory compliance it could prove invaluable.

Once you’ve contained the incident, revisit your strategy and consider the need for other forms of action such as staff training, policy reviews, access rights, restrictive covenants and encryption. Data classification that ‘weights’ the sensitivity of each file and document on your company’s drive and then links highly confidential information to a closed group of authorised recipients, with blocks on copying such information onto memory sticks, can be helpful. Preventative tools like this make it difficult to email the wrong data to the wrong person and they also log user behaviour, flagging up employees who try to reclassify data so they can send it out of the business.
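As a sketch of the kind of preventative control described above, the following minimal Python example weights each document with a sensitivity label and blocks an outbound email when a highly confidential file is addressed to someone outside its authorised group. The labels, file names, addresses and rules are illustrative assumptions, not a specific data-classification product.

```python
# A minimal sketch of classification-driven email blocking: documents carry a
# sensitivity label, and highly confidential labels are tied to a closed group
# of authorised recipients.

CLASSIFICATION = {
    "board_minutes.docx": "highly_confidential",
    "q3_forecast.xlsx": "confidential",
    "press_release.pdf": "public",
}

AUTHORISED_RECIPIENTS = {
    "highly_confidential": {"cfo@bank.example", "ciso@bank.example"},
}

def outbound_email_allowed(attachment: str, recipient: str) -> bool:
    label = CLASSIFICATION.get(attachment, "unclassified")
    allowed = AUTHORISED_RECIPIENTS.get(label)
    if allowed is None:          # no closed group defined for this label
        return True
    return recipient in allowed  # block anything outside the closed group

print(outbound_email_allowed("board_minutes.docx", "journalist@news.example"))  # False: blocked
print(outbound_email_allowed("press_release.pdf", "journalist@news.example"))   # True: allowed
```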

The law doesn’t distinguish between deliberate and accidental breaches, so don’t expect a discount on fines for damaging disclosures caused by an honest mistake, and don’t be surprised to find lawyers queuing up to help those whose financial, personal or health data has been incorrectly transferred.

But let’s look at it positively. Employee error is a significant contributor to data loss, but it is easier to prevent and generally takes less time to control than a malicious hack. Indeed, many accidental incidents can be contained or even prevented by steps so simple that everyone should be taking them. However, if you’ve decided you want to take a ‘belt and breaches’ approach then it’s time to trust yourself less. Preventative measures such as data classification will ensure you send that sinking feeling to your deleted folder once and for all.

Automated fund management is becoming a daily reality for many retail investors as advanced financial technology becomes miniaturised - companies like Nutmeg have built their business model on mobile-based automatic investment. Here, Adam Vincent, CEO at ThreatConnect, answers the question: brave new world or house of cards?

Even for larger, more traditional investment houses, essential market and risk analysis is shifting towards digital - as machine learning becomes more advanced, software is increasingly able to perform critical judgements that were previously the preserve of humans.

With that shift comes a heavy reliance on technology in frontline business as well as back-end processes. As such, the security of these applications is paramount. Banks and other financial institutions need to ensure they have full visibility of their systems and are able to detect potential threats to their customer-facing systems. A compromised investment app could lead to serious losses and, if the firm in question is influential enough, have a significant impact on wider markets.

Security’s weight problem

To add to that problem, the cyber security that guards those banks is often huge, unwieldy and poorly linked up. For years, the young cybersecurity market has been about specialism: laser-focused companies designing highly-adapted solutions to solve a particular problem – malware, say, or phishing – as well as possible. That’s all well and good in the sense that each platform does the best job for its users, but over time it’s led to a highly expensive and unwieldy situation for buyers and security analysts who have to assemble a defence from multiple vendors.

Think of it this way: imagine you need a new car. But instead of going to the local dealership and buying a shiny Ford, you have to ring up the door manufacturer and ask them to bring you four doors. Then you call the seat company, and they deliver five seats. The engine makers, the boot shapers, the hubcap painters. All of them craft a quality product, but you’re left with an enormous bill and you still have to put the thing together and make sure it actually works.

That’s essentially the problem facing large banks in the current culture. They purchase a firewall, an email filter, a threat intelligence database, an antivirus software, and whatever else they need, and each of them does a great job – but overall, they’re a burden to run. They don’t talk to each other, and each has its own dashboard. Security analysts have to spend hours sifting through alerts to find the truly crucial issues, and valuable time is lost tending to individual systems.

That’s the CISO’s problem. But for the CEO, there’s a bigger issue – running multiple security systems is expensive. Really expensive. The more systems you have, the more highly-skilled staff you need, and they’re few and far between. Where cybersecurity used to be a classic back-office concern, like air conditioning or heating, it’s now a central part of strategy and a key pillar of both reputation and customer retention - financial legislation leaves no room for failure. Above all, though, at present, it’s a cost centre.

Send an algorithm to do a human’s job

So how do financial institutions maintain the benefits of digitisation whilst reducing the weight of security? In a word: orchestration. As cybersecurity has grown and developed, so has computer automation. Companies can now link their key systems together under a single automated management tool (often referred to as a security orchestration, automation and response or SOAR platform) to reduce the weight on their staff. Orchestrating your security landscape essentially means integrating systems so that their alerts and data flows are monitored by the SOAR, which then automatically resolves low-level alerts and flags up high-priority issues that need human review.
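To show the shape of that triage loop, here is a minimal Python sketch in which alerts from several point tools flow through one place, low-severity items are auto-resolved and the rest are escalated for human review. The tool names, severities and threshold are illustrative assumptions, not a specific SOAR product’s API.

```python
# A minimal sketch of SOAR-style triage: one automated layer consumes alerts
# from multiple security tools, auto-resolves the low-level noise and flags
# high-priority issues for a human analyst.

ALERTS = [
    {"source": "email_filter", "severity": 2, "summary": "bulk phishing blocked"},
    {"source": "firewall",     "severity": 3, "summary": "port scan from known scanner"},
    {"source": "threat_intel", "severity": 8, "summary": "C2 domain contacted from trading VLAN"},
]

ESCALATION_THRESHOLD = 7  # anything at or above this needs human review

def triage(alert: dict) -> str:
    if alert["severity"] >= ESCALATION_THRESHOLD:
        return "escalate_to_analyst"
    return "auto_resolve"

for alert in ALERTS:
    print(f"[{alert['source']}] {alert['summary']} -> {triage(alert)}")
```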

The upshot of that is that security resources can then be spent more profitably on strategic initiatives like system reviews and regulatory compliance. The CISO is happy because their security systems are preventing attacks and the team is more available for new projects, and the CEO is happy because costs can be streamlined by removing unnecessary admin tasks and slimming down software spend.

More importantly, an effectively orchestrated security system can be easily amended to accommodate new elements of the organisation’s digital landscape – meaning that financial organisations are freed up to innovate in the age of PSD2 and open banking without fear that every new application will come with a six-figure security cost.

Digital banking is the future – there’s no question about that. But financial organisations will have to change the way they approach security system management if they’re to keep up with and support innovation. Orchestration is one way to lighten the load – without compromising on quality.

 
