
The latest research from national audit, tax and advisory firm Crowe Clark Whitehill, together with the University of Portsmouth’s Centre for Counter Fraud Studies (CCFS), reveals a national fraud pandemic totalling £110 billion a year. For context, that figure would build more than 110 Wembley Stadiums, or cover the annual budget for every single local authority in England combined. Put differently, the figure would cover the UK’s Brexit divorce bill almost three times over, or cover the salaries of 4.8 million nurses for a year.

‘The Financial Cost of Fraud 2018’ estimates that the UK economy could be boosted by £44 billion annually if organisations step up efforts to tackle fraud and error.

Globally, fraud is costing £3.24 trillion each year, a sum equal to the combined GDP of the UK and Italy, or enough to build more than 3,000 Wembley Stadiums.

The report, which is the only one of its kind, draws on 20 years of extensive global research from 40 sectors, where the total cost of fraud has been accurately measured across expenditure totalling £15.6 trillion.

Since 2008, there has been a startling 49.5% increase in average losses, with businesses losing an average of 6.8% of total expenditure. Driven by technological advances and increasing digitisation, businesses now face a threat which is growing in scale and mutating in complexity.

Fraud is the last great unreduced business cost. Included in the report are examples where fraud has been accurately measured, managed and losses minimised, including a major mining company which reduced losses due to procurement fraud by over 51% within a two-year period, equating to USD 20 million at a time when commodity prices were falling.

Insurance fraud is another sector worth examining. It often involves fraudulent changes to policy beneficiaries, and proper investigation can greatly limit the damage. When an individual over the age of 75 takes out life insurance, the insurer must check all the original documentation through the proper channels.

Jim Gee, Head of Forensic & Counter Fraud at Crowe Clark Whitehill, comments: “The threat of fraud is becoming increasingly like a clinical virus – it is ever-present and ever-evolving. The bad news is that digitalisation of information storage, and process complexity, coupled with the pace of business change, have created an environment where fraud has thrived, grown and continued to mutate. The better news is that there are examples where organisations have measured and minimised fraud like any other business cost and greatly strengthened their finances.”

“In the current climate, to not consider the financial benefits of making relatively painless reductions in losses to fraud and error is foolhardy. The message to all organisations is measure, mitigate and manage fraud, or your bottom line will continue to suffer.”

Mark Button, Director of the Centre for Counter Fraud Studies at the University of Portsmouth, adds: “This research shows that the most accurate measurement of fraud in organisations continues to show an upward trend. Many organisations are losing significant amounts to fraud and much more can be done to reduce losses.”

“Organisations could do much more to enhance prevention through a number of measures such as effective vetting of new staff, investing in data analytics and developing an anti-fraud culture.”

(Source: Crowe Clark Whitehill)

Much that has been written about the General Data Protection Regulation (GDPR) relates to the burden of obtaining proper consents in order to process data. This general theme has provoked questions about whether and how financial institutions can process data to fight financial crime if they need consent of the data subject. While there are certainly valid questions, GDPR is much more permissive to the extent data is used to prevent or monitor for financial crime. Richard Malish, General Counsel at Nice Actimize, explains.

Clients and counterparties will oftentimes be more than happy to consent to data processing in order to participate in financial services. But consent can be withdrawn, so offering individuals the right to consent will give the impression that they can exercise data privacy rights which are not appropriate for highly-regulated activities.

Rather than relying on consent, the GDPR also permits (1) processing which is necessary for compliance with a legal obligation to which the controller is subject, and (2) processing which is necessary for the purposes of the legitimate interests pursued by the controller or by a third party.

Some areas of financial crime prevention are clearly for the purpose of complying with a legal obligation. For example, in most countries there are clear legal obligations for monitoring financial transactions for suspicious activity to fight money laundering. The European Data Protection Supervisor stated in 2013 that anti-money laundering laws should specify that "the relevant legitimate ground for the processing of personal data should… be the necessity to comply with a legal obligation by the obliged entities…." The 4th EU Anti-Money Laundering Directive requires that obliged entities provide notice to customers concerning this legal obligation, but does not require consent be received. And the UK Information Commissioner's Office gave the example of submitting a Suspicious Activity Report to the National Crime Agency under PoCA as a legal obligation which constitutes a lawful basis.

Very few commentators have attempted to cite a legal authority for anti-fraud legal obligations. The Payment Services Directive 2 (PSD2) requires that EU member states permit personal data processing by payment systems and that payment service providers prevent, investigate and detect payment fraud. But PSD2 has its own requirement for consent and this protection may fail without adequate implementing legislation in the relevant jurisdiction. Another possible angle is that fraud is a predicate offense for money laundering, and therefore the bank has an obligation to investigate fraud in order to avoid facilitating money laundering.

"Legitimate interests" are also permitted as a basis for processing. However, this basis can be challenged where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data. Financial institutions may not feel comfortable threading the needle between these ambiguous competing interests.

However, the GDPR makes clear that several purposes related to financial crime should be considered legitimate interests. For example, "the processing of personal data strictly necessary for the purposes of preventing fraud also constitutes a legitimate interest" and profiling for the purposes of fraud prevention may also be allowed under certain circumstances. It is also worth recognizing that many financial market crimes such as insider trading, spoofing and layering are oftentimes prosecuted under anti-fraud statutes.

Compliance with a foreign legal obligation, such as a whistle-blowing scheme required by the US Sarbanes-Oxley Act, is not considered a "legal obligation" under the GDPR, but it should qualify as a legitimate interest.

While legal obligations and legitimate interests do not cover all potential use cases, they should cover most traditional financial crime processing. Some banks have been informing their clients that a legal obligation justifies their processing for AML and anti-fraud. Others have included legal obligations and/or legitimate interests as potential justifications for a laundry list of potential processing activities.

Financial institutions should use the remaining days before GDPR's effective date to provide the correct notifications to data subjects and confirm that their processing adequately falls under a defensible basis for processing. And with this basic housekeeping performed there is hopefully little disruption to their financial crime and compliance operations.

The financial services industry has witnessed considerable hype around artificial intelligence (AI) in recent months. We’re all seeing a slew of articles in the media, at conference keynote presentations and think-tanks tasked with leading the revolution. Below Sundeep Tengur, Senior Business Solutions Manager at SAS, explains how in the fight against fraud, AI is taking over as a dominant strategy, fuelled primarily by data.

AI indeed appears to be the new gold rush for large organisations and FinTech companies alike. However, with little common understanding of what AI really entails, there is growing fear of missing the boat on a technology hailed as the ‘holy grail of the data age.’ Devising an AI strategy has therefore become a boardroom conundrum for many business leaders.

How did it come to this – especially since, less than two decades ago, the most popular references to artificial intelligence were in sci-fi movies? Will AI revolutionise the world of financial services? And more specifically, what does it bring to the party with regards to fraud detection? Let’s separate fact from fiction and explore what lies beyond the inflated expectations.

Why now?

Many practical ideas involving AI have been developed since the late 90s and early 00s, but we’re only now seeing a surge in the implementation of AI-driven use cases. There are two main drivers behind this: new data assets and increased computational power. As the industry embraced big data, the breadth and depth of data within financial institutions has grown exponentially, powered by low-cost, distributed systems such as Hadoop. Computing power has also been heavily commoditised, as evidenced by modern smartphones now as powerful as many legacy business servers. The time for AI has arrived, but reaching operational maturity will be a journey for organisations rather than a binary switch.

Don’t run before you can walk

The Gartner Hype Cycle for Emerging Technologies indicates that there is a disconnect between the reality today and the vision for AI, an observation shared by many industry analysts. The research suggests that machine learning and deep learning could take between two and five years to meet market expectations, while artificial general intelligence (commonly referred to as strong AI, i.e. automation that could successfully perform any intellectual task in the same capacity as a human) could take up to 10 years for mainstream adoption.

Other publications predict that the pace could be much faster. The IDC FutureScape report suggests that “cognitive computing, artificial intelligence and machine learning will become the fastest growing segments of software development by the end of 2018; by 2021, 90% of organizations will be incorporating cognitive/AI and machine learning into new enterprise apps.”

AI adoption may still be in its infancy, but new implementations have gained significant momentum and early results show huge promise. For most financial organisations faced with rising fraud losses and the prohibitive costs linked to investigations, AI is increasingly positioned as a key technology to help automate instant fraud decisions, maximise the detection performance as well as streamlining alert volumes in the near future.

Data is the rocket fuel

Whilst AI certainly has the potential to add significant value in the detection of fraud, deploying a successful model is no simple feat. For every successful AI model, there are many more failed attempts than most would care to admit, and the root cause is often data. Data is the fuel for an operational risk engine: poor input will lead to sub-optimal results, no matter how good the detection algorithms are. This means more noise in fraud alerts, with both false positives and undetected cases.

On top of generic data concerns, there are additional, often overlooked factors which directly impact the effectiveness of data used for fraud management:

Ensuring that data meets minimum benchmarks is therefore critical, especially with ongoing digitalisation programmes which will subject banks to an avalanche of new data assets. These can certainly help augment fraud detection capabilities but need to be balanced with increased data protection and privacy regulations.

A hybrid ecosystem for fraud detection

Techniques available under the banner of artificial intelligence such as machine learning, deep learning, etc. are powerful assets but all seasoned counter-fraud professionals know the adage: Don’t put all your eggs in one basket.

Relying solely on predictive analytics to guard against fraud would be a naïve decision. In the context of the PSD2 (payment services directive) regulation in EU member states, a new payment channel is being introduced along with new payments actors and services, which will in turn drive new customer behaviour. Without historical data, predictive techniques such as AI will be starved of a valid training sample and therefore be rendered ineffective in the short term. Instead, the new risk factors can be mitigated through business scenarios and anomaly detection using peer group analysis, as part of a hybrid detection approach.
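The peer group analysis mentioned above can be sketched in a few lines: compare each account's behaviour against the statistics of its peer group and flag those that deviate sharply. This is an illustrative sketch only, not any vendor's implementation; the function name and the z-score threshold are assumptions.

```python
import statistics

def peer_group_outliers(amounts_by_account, threshold=3.0):
    """Flag accounts whose average transaction amount deviates from the
    peer-group mean by more than `threshold` standard deviations."""
    averages = {acct: statistics.mean(txns)
                for acct, txns in amounts_by_account.items()}
    mu = statistics.mean(averages.values())
    sigma = statistics.pstdev(averages.values())
    if sigma == 0:
        # all peers behave identically; nothing stands out
        return []
    return [acct for acct, avg in averages.items()
            if abs(avg - mu) / sigma > threshold]
```

In practice peer groups would be segmented far more finely (by customer type, merchant category, channel and so on), but the principle, flagging deviation from comparable peers rather than from a historical training sample, is what makes the technique usable on a brand-new payment channel.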

Yet another challenge is the ability to digest the output of some AI models into meaningful outcomes. Techniques such as neural networks or deep learning offer great accuracy and statistical fit but can also be opaque, delivering limited insight for interpretability and tuning. A “computer says no” response with no alternative workflows or complementary investigation tools creates friction in the transactional journey in cases of false positives, and may lead to customer attrition and reputational damage - a costly outcome in a digital era where customers can easily switch banks from the comfort of their homes.

Holistic view

For effective detection and deterrence, fraud strategists must gain a holistic view over their threat landscape. To achieve this, financial organisations should adopt multi-layered defences - but to ensure success, they need to aim for balance in their strategy. Balance between robust counter-fraud measures and positive customer experience. Balance between rigid internal controls and customer-centricity. And balance between curbing fraud losses and meeting revenue targets. Analytics is the fulcrum that can provide this necessary balance.

AI is a huge cog in the fraud operations machinery but one must not lose sight of the bigger picture. Real value lies in translating ‘artificial intelligence’ into ‘actionable intelligence’. In doing so, remember that your organisation does not need an AI strategy; instead let AI help drive your business strategy.

Andrius Sutas, CEO and Co-founder of AimBrain looks at the limitations of secrets-based authentication and the three simple steps that banks can take to enhance security and facilitate innovation.

In this digital world, security is more challenging and demands more resources than ever before. Customer centricity – remote onboarding and eKYC, faster payments, greater interconnectivity between FS providers and any other customer-first initiative – offers unprecedented convenience for the consumer, but places immense pressure on banks and FS providers to offer such services quickly, cost-effectively and, most importantly, securely.

Mobile banking, for example, is undoubtedly one of the greatest things to have happened to the sector. Reducing branch spend, rapidly enabling new products and greater segmentation, remote onboarding…it has been a pivotal step for the industry. But never missing an opportunity are the criminals that seek to dupe, coerce and attack. Mobile banking is particularly susceptible to fraud; Trojan attacks doubled in volume last year against 2016 and increased 17-fold compared to 2015. McAfee also said that it had detected 16 million mobile malware infestations in Q3 2017; double the number of the same period in 2016. Supplement these attacks with omnipresent, large-scale data breaches and you’ve got one marathon migraine coming on.

So, it is no wonder that banks now find themselves in a position of having to pool resources just to defend against mobile account fraud; and that is a single channel in the customer engagement journey. On-device biometric authentication is a patch fix for a problem that is only going to grow; the fact is that the only way to be utterly certain of an individual’s authenticity is by verifying the person, not the device.

Passwords don’t work. It’s not rocket science. Anything that can be intercepted, guessed, hacked, teased out – does not work, and the more enterprises continue to rely on passwords and secrets, the more resources they will find themselves throwing at the problem. What’s left? Hardware is antiquated, OTPs via SMS have proven themselves to be dangerously easy to intercept, and push notifications rely on the physical proximity of a device.

So how can banks truly secure customer data, act compliantly and have the freedom and flexibility to innovate? We believe that the strength lies in layering on security, in a simple and easy-to-configure model that is fit for both today’s fraud and the challenges of tomorrow.

Biometrics (how someone behaves, looks or sounds) can fulfil these requirements, and more. Unlike securing the authenticity of a device, biometrics assure the authenticity of the person themselves. And better still – unlike passwords – they are not secrets. They are everywhere! We leave fingerprints wherever we go, our faces are on show, we talk into devices all day long.

This might seem counterintuitive, but it’s not the data, but the way in which biometric data is treated that creates the security. We’re not just talking about templating it using algorithms – pretty standard methodology across the industry – but about how to keep it secure.

If someone has your password, they have your password. It’s black and white. If they have a video of you, or a recording of your voice, this might be enough to beat some authentication gateways. So, the key is to continually add challenges to beat the fraudsters and make it impossible for someone to pretend to be the customer, whilst keeping it simple for the customer.

 

How? We think it boils down to three steps.

 

These steps will keep banks ahead of the capabilities of even the most sophisticated presentation attacks. We recently launched AimFace//LipSync, which combines facial authentication with a voice challenge and lip synchronisation analysis. A customer can enrol or access simply by taking a selfie and simultaneously reading a randomised number. Nothing exertive. Pretty simple really. But – we think – impossible to spoof by any method available today. It’s about staying one step ahead of fraud, in a way that minimises inconvenience to the user, and your biometrics partner should have a solid roadmap in place that demonstrates consideration for the fraud we haven’t yet seen.
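The randomised-number challenge described above can be sketched server-side: issue an unpredictable number, give it a short lifetime, and accept only an exact match on the transcribed speech. This is a hypothetical sketch under assumed names (`issue_challenge`, `verify_challenge` are invented here, not AimBrain's API); the face and lip-sync matching itself would be a separate biometric step.

```python
import secrets
import time

CHALLENGE_TTL_SECONDS = 30  # short lifetime blocks pre-recorded replays

def issue_challenge(digits=6):
    """Issue a cryptographically random number for the user to read aloud."""
    number = "".join(secrets.choice("0123456789") for _ in range(digits))
    return {"number": number, "issued_at": time.time()}

def verify_challenge(challenge, transcribed_digits, now=None):
    """Accept only if the transcribed speech matches the issued number
    and the challenge has not expired."""
    now = now or time.time()
    if now - challenge["issued_at"] > CHALLENGE_TTL_SECONDS:
        return False
    # constant-time comparison avoids leaking how many digits matched
    return secrets.compare_digest(challenge["number"], transcribed_digits)
```

The security property comes from the randomness and the expiry: a fraudster with a stolen video cannot know in advance which number will be requested, and a captured response is worthless seconds later.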

The password is not fit for purpose. Secrets are dangerous. Biometrics are a simple yet secure way of authenticating the person and keeping their valuable data and assets safe.

AimBrain is a BIDaaS (Biometric Identity as-a-Service) platform for global B2C and B2B2C organisations that need to be sure that their users are who they say they are.

New research by BAE Systems has found that 74% of business customers think banks use machine learning and artificial intelligence to spot money laundering. In reality, banks rely on human investigators to manually sift through alerts – a fact correctly identified by only 31% of respondents. This lack of automation and modern processes is having a major impact on efficiency and expense when it comes to the fight against money laundering.

Brian Ferro, Global Compliance Solutions Product Manager at BAE Systems Applied Intelligence, said: “Compliance investigators at banks can spend up to three days of their working week dealing with alerts – which most of the time are false positives.  By occupying key personnel with these manual tasks, banks are limiting the investigators’ role, impacting on their ability to stop criminal activity.”

Money laundering is known to fund and enable slavery, drug trafficking, terrorism, corruption and organised crime.  Three quarters (75%) of business customers surveyed see banks as central actors in the fight against money laundering. The penalty for failing to stop money laundering can be high for banks – and is not restricted to significant fines. When questioned, 26% of survey respondents said they would move their business’ banking away from a bank that had been found guilty and fined for serious and sustained money laundering that it had not identified.

Ferro continued: “For banks to be on the front foot against money laundering, their investigators need to be supported by machine intelligence. Simplifying, optimising and automating the sorting of these alerts to give human investigators more time is the single most valuable thing banks and the compliance industry can do in the fight against money launderers. Right now, small improvements in efficiency of the systems banks use to find laundering can yield huge results.

“At BAE Systems we use a combination of intelligence-led advanced analytics to track criminals through the world’s financial networks. By putting machine learning and artificial intelligence systems to work to narrow down the number of alerts, human investigators can concentrate on tasks more suited to their talents and insight.”
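The alert-narrowing idea can be illustrated with a simple triage sketch: score each alert on a handful of risk factors and present investigators with the riskiest first. The factor names and weights below are invented for illustration; they are not BAE Systems' model.

```python
def triage_alerts(alerts, weights=None):
    """Rank AML alerts so the riskiest reach a human investigator first.
    Each alert is a dict of boolean risk-factor flags."""
    weights = weights or {
        "high_risk_jurisdiction": 3.0,
        "structuring_pattern": 2.5,  # amounts just under reporting thresholds
        "rapid_movement": 2.0,       # funds in and out within hours
        "new_counterparty": 1.0,
    }

    def score(alert):
        return sum(w for factor, w in weights.items() if alert.get(factor))

    return sorted(alerts, key=score, reverse=True)
```

Even a crude ranking like this changes the economics of investigation: instead of working a queue in arrival order, an investigator spends their three days a week on the alerts most likely to be genuine.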

(Source: BAE Systems)

Fraud is an intricate practice. The methods of criminals are creative and meticulous, and the cost to companies and consumers staggering. In the UK, the fraud economy is thriving. It’s a growth industry. And it’s showing no signs of slowing down.

Last year, the Annual Fraud Indicator report revealed the total cost of losses to the UK economy to be a colossal £190 billion. To put that huge number into context: it represents more than the government’s combined spend on health and defence.

The best way to describe the current fraud problem: pervasive. A recent survey by professional services firm PricewaterhouseCoopers (PwC) highlighted that half of UK companies had fallen victim to fraud or economic crime in the past two years. Today, businesses are finding themselves fighting a surge of sophisticated attacks.

At the centre of fraud is technology. As technology advances, new forms of fraud emerge, and more robust security solutions are developed. It’s a double-edged sword. But businesses need to be aware of trends and predictions that will allow them to offer the best possible protection to their customers.

In the financial services sector, the struggle has been striking a balance between innovation and protection. So far, it’s something that many in the sector have failed to get right. A large part of this is due to increasing market and consumer pressures. In an age of hyper-globalisation, with industries undergoing rapid digital transformation, financial institutions are facing demands to increase the pace of delivery and provide an omnichannel experience.

Due to the rise of digital commerce and the proliferation of multiple channels and payment types, there are more data transactions taking place than ever before. While this is a big benefit to businesses, it brings with it greater risk. An omnichannel environment creates a number of challenges when it comes to fraud management. The sheer number of avenues exposed at any one time can stretch security thin. For fraudsters, this makes it ripe for exploitation.

Yet, many institutions still rely upon disparate services and products that act in isolation of one another. This piecemeal approach is a hindrance. It makes it much more difficult to recognise certain types of fraud and leads to delays in decision-making.

The truth is that most legacy security systems and anti-fraud measures simply aren’t able to keep up with modern fraud attacks. They’re too wide in scope, complex in execution and high in velocity. So, the sector is now turning to technology in a bid to strengthen its efforts to fight fraud.

Automation has been the most widely adopted, so far. It’s able to reduce the burden on finance professionals, particularly when it comes to back-office processes, such as transaction and application processing, and audit compliance. It’s also a viable solution for assessing risk and limiting exposure to fraud. As a result, institutions are introducing everything from machine-learning platforms to robotic process automation (RPA), network analysis and artificial intelligence (AI).

The common theme among automation technologies is that they use algorithms to spot suspicious activity, detect patterns and predict outcomes in large data pools. Some of the more advanced platforms are even capable of assessing the anatomy of a fraudulent transaction. These solutions can draw inferences based on the information available, raise questions where the data is incomplete and produce audit trails (vital in such a heavily regulated industry).
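A minimal sketch of such a rules-plus-audit-trail engine, with rule names and thresholds invented here for illustration, might look like this:

```python
from datetime import datetime, timezone

# Illustrative rules only; real engines combine hundreds of signals
RULES = [
    ("amount_over_10k", lambda t: t["amount"] > 10_000),
    ("foreign_beneficiary",
     lambda t: t["beneficiary_country"] != t["home_country"]),
    ("odd_hours", lambda t: t["hour"] < 6 or t["hour"] > 22),
]

def assess(transaction):
    """Score a transaction and record exactly which rules fired --
    the explainable, auditable output regulators expect."""
    fired = [name for name, check in RULES if check(transaction)]
    return {
        "suspicious": len(fired) >= 2,
        "rules_fired": fired,
        "audit": {
            "checked_at": datetime.now(timezone.utc).isoformat(),
            "rules_evaluated": [name for name, _ in RULES],
        },
    }
```

The point of the `audit` record is the one made in the text: in a heavily regulated industry, a system that cannot show *why* it flagged (or cleared) a transaction is of limited use, however accurate it is.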

But while we’ve seen a greater uptake of automation technology within financial services, questions remain. The sector has a history of being risk-averse and sceptical of new technologies. And industry experts have queried whether institutions are using technology on the same scale to prevent fraud as fraudsters are to perpetrate it.

One of the biggest concerns to businesses, especially banks, has been the up-front cost of investing in these technologies – as well as how fast they can be implemented and how well they integrate with the existing infrastructure.

It’s fair to say that it’s a large-scale change for such a traditional industry. But to hesitate to modernise anti-fraud measures – and to defer investment in technology that’s designed to combat this problem – based on whether or not it complements the current system is short-sighted. When it comes to fighting fraud, the financial services sector must analyse the impact of technology trends and invest accordingly.

The default position from those within the sector should be: Sooner or later, we will succumb to a fraud attack. And businesses need solutions that are intelligent, efficient and provide actionable insights.

Fighting fraud across the omnichannel is a difficult task. In the digital era, automation technology is vital. If the financial services sector is to lessen its exposure to fraudulent practices, and provide greater protection to its customers, then it must think strategically. At present, the sector finds itself locked in a technological arms race with fraudsters. Institutions need fast-acting, agile solutions – not quick fixes or outdated legacy security systems. The sector needs to invest in, and place its trust in, technology.

By increasing its reliance on automation, the sector will be better positioned to keep pace with and protect against the frenetic nature of modern fraud attacks.

Banks and card companies prevented £1,458.6 million in unauthorised financial fraud last year, equivalent to £2 in every £3 of attempted unauthorised fraud being stopped, the latest data from UK Finance shows.

In 2017, fraud losses on payment cards fell 8% year-on-year to £566.0 million. At the same time, card spending increased by 7%, meaning card fraud as a proportion of spending equates to 7.0p for every £100 spent – the lowest level since 2012. In 2016 the figure stood at 8.3p.
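As a quick back-of-the-envelope check, the UK Finance figures above are internally consistent and imply some totals the report does not state directly. These derived numbers are estimates from the quoted figures, not quoted by UK Finance themselves:

```python
# Figures quoted in the text (2017, pounds)
losses = 566.0e6            # card fraud losses
rate = 7.0 / 100 / 100      # 7.0p lost per 100 pounds spent
prevented = 1458.6e6        # unauthorised fraud stopped

# Implied total card spending consistent with losses and loss rate
implied_spending = losses / rate            # roughly 809 billion pounds

# "£2 in every £3 of attempted fraud stopped" implies total attempted fraud
implied_attempted = prevented / (2 / 3)     # roughly 2.19 billion pounds
```

The arithmetic also explains why a falling loss rate (8.3p down to 7.0p) coexists with still-substantial absolute losses: card spending grew 7% over the same period.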

For the first time, annual data on losses due to authorised push payment scams (also known as APP or authorised bank transfer scams) has also been collated. A total of £236.0 million was lost through such scams in 2017.

The unauthorised fraud data on payment cards, remote banking and cheques for 2017 shows:

The new authorised push payment scams data, collected for the first time in 2017, shows:

Katy Worobec, Managing Director of Economic Crime at UK Finance, said: “Fraud is an issue that affects the whole of society, and one which everyone must come together to tackle. The finance industry is committed to playing its part – investing in advanced security systems to protect customers, introducing new standards on how banks respond to scam victims, and working with the Joint Fraud Taskforce to deter and disrupt criminals and better trace, freeze and return stolen funds.

“We are also supporting the Payment Systems Regulator on its complex work on authorised push payment scams, providing the secretariat for its new steering group. It’s a challenging timetable, but it is important that we get it right to stop financial crime and for the benefit of customers.”

The finance industry is responding to the ongoing threat of all types of fraud and scams by:

To help everyone stay safe from fraud and scams, Take Five to Stop Fraud urges customers to follow the campaign advice:

Tony Blake, Senior Fraud Prevention Officer at the Dedicated Card and Payment Crime Unit, said: “With criminals using social engineering to target people and businesses directly, it’s vital that everyone follows the advice of the Take Five campaign. Always stop and think if you are ever asked for your personal or financial details. Remember, no bank or genuine organisation will ever contact you out of the blue and ask you to transfer money to another account.”

Unauthorised fraud

In an unauthorised fraudulent transaction, the account holder does not provide authorisation for the payment to proceed and the transaction is carried out by a third party.

Authorised fraud

In an authorised push payment (APP) scam, the account holder themselves authorises the payment to be made to another account. If a customer authorises the payment themselves, current legislation means that they have no legal protection to cover them for losses – which is different for an unauthorised transaction.

Banks will always endeavour to help customers recover money stolen through an authorised push payment scam but customers typically only approach their bank after the payment has been processed, once they realise they have been duped. By this time the criminal has often withdrawn the stolen funds and the customer’s money has gone. Alongside the extensive work already underway through the Joint Fraud Taskforce, UK Finance is also currently working with the Payment Systems Regulator on its proposals to tackle these scams.

Behind the data

Fraud intelligence points towards criminals’ use of social engineering tactics as a key driver of both unauthorised and authorised fraud losses. Social engineering is a method through which criminals manipulate people into divulging personal or financial details, or into transferring money directly to them, for example through impersonation scams and deception.

In an impersonation scam, a fraudster contacts a customer by phone, text message or email pretending to represent a trusted organisation, such as a bank, the police, a utility company or a government department. Under this guise, the criminal then convinces their victim into following their demands, sometimes making several separate approaches as part of one scam.

Data breaches also continue to be a major contributor to fraud losses. Criminals use stolen data to commit fraud directly, for example card details are used to make unauthorised purchases online or personal details used to apply for credit cards. Stolen personal and financial information is also used by criminals to target individuals in impersonation and deception scams, and can add apparent authenticity to their approach.

(Source: UK Finance)

Sharing confidential information is a data protection issue bound by more red tape every day, and with more apps adopting different encryption methods, it is becoming ever harder for the authorities to manage. Below Finance Monthly hears from Neil Swift, Partner, and Nicholas Querée, Associate, at Peters & Peters LLP, about the potential for banking fraud via apps such as WhatsApp.

As ever greater quantities of sensitive personal data are shared electronically, software developers have been quick to capitalise on concerns about how susceptible confidential information may be to interference by hackers, internet service providers and, in some cases, governmental agencies. The result has been an explosion in messaging apps with sophisticated end-to-end encryption functionality. Although ostensibly designed for day-to-day personal interactions, commonplace services such as WhatsApp and Apple’s iMessage use end-to-end encryption to transmit data, and more specialised apps offer their users even greater protection. Signal, for example, allows its already highly encrypted messages to self-destruct from the user’s phone after they have been read.

The widespread availability of sophisticated and largely impregnable messaging services has led to a raft of novel challenges for law enforcement. The UK government, in particular, has been outspoken in its criticism of the way in which end-to-end encryption offers “safe spaces” for the dissemination of terrorist ideology.

Financial regulators are becoming increasingly conscious of the opportunity that these messaging services present to those minded to circumvent applicable rules and avoid compliance oversight. 2017 saw Christopher Niehaus, a former managing director at Jefferies, fined £37,198 by the Financial Conduct Authority for sharing confidential client information with friends and colleagues via WhatsApp. Whilst the FCA accepted that none of the recipients needed or used the information, and that the disclosure was simply boasting on Niehaus’ part, it was only his cooperation with the regulator that saved him from an even more substantial fine.

That same year, Daniel Rivas, an IT worker for Bank of America, was investigated by the US Securities and Exchange Commission and pleaded guilty to disclosing price-sensitive non-public information to friends and relatives who traded on it. One of his means of communication was Signal’s self-destructing messaging service. Rivas’ prosecution has parallels with the 2016 conviction of Australian equities dealer Oliver Curtis for trading on non-public information that he received from an insider via encrypted BlackBerry messages.

These examples are likely to prove only the tip of the iceberg; given that encrypted exchanges are by definition clandestine, understanding the true scale of the issue, short of resorting to anecdote, is itself an unenviable task for regulators and compliance departments. Whilst those responsible for economic wrongdoing have often been at pains to cover their tracks – perhaps by using ‘pay as you go’ mobile phones and internet drop boxes to communicate – access to untraceable and secure communication is now ubiquitous. It is difficult to imagine that future regulatory agencies will have access to material of the same volume and colour as that obtained during the worldwide investigations into alleged LIBOR and FX manipulation.

How then can regulators respond? And how are firms to discharge their obligations both to record staff business communications and to monitor those communications for signs of possible misconduct? Many firms already ban the use of mobile phones on the trading floor, but such edicts – even where rigorously enforced – will only go so far. Neither Mr Rivas nor Mr Niehaus would have been caught by such a prohibition.

There may be technological solutions to technological problems. Analysing what unencrypted messaging data exists to see which traders are notably absent from regulated systems, or looking for perhaps tell-tale references to other means of communication (“check your mobile”), may present both investigators and firms with vital intelligence. Existing analysis of suspicious trading data may assist in identifying prospective leads, although prosecutors may need to become more comfortable in building inferential cases.
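A crude illustration of the first idea – scanning what monitored, unencrypted traffic does exist for tell-tale references to off-channel communication – might look like the following sketch. The phrases, message data and function names are hypothetical, not any firm’s actual surveillance system:

```python
import re

# Hypothetical tell-tale phrases suggesting a move to unmonitored channels
TELL_TALE_PATTERNS = [
    r"\bcheck your (mobile|phone|cell)\b",
    r"\bwhatsapp\b",
    r"\bsignal\b",
    r"\btext me\b",
]

def flag_messages(messages):
    """Return the (sender, text) pairs that match any tell-tale pattern."""
    compiled = [re.compile(p, re.IGNORECASE) for p in TELL_TALE_PATTERNS]
    return [
        (sender, text)
        for sender, text in messages
        if any(p.search(text) for p in compiled)
    ]

chat_log = [
    ("trader_a", "Price confirmed at 101.3, booking now"),
    ("trader_b", "Can't discuss here - check your mobile"),
]
print(flag_messages(chat_log))
```

In practice such keyword lists generate false positives, so flagged messages would feed a human review queue rather than trigger action directly.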

Fundamentally, however, such responses are likely to be both reactive, and piecemeal. Unless the ongoing wider debate as to the social utility of freely available end-to-end encryption prompts some fundamental rethink, the need to effectively regulate those who participate in financial markets – and thus the regulation of those markets themselves – may prove increasingly challenging.

The international community’s anti-money laundering watchdog is on UK soil putting the country through its paces.

Inspections of Britain’s defences against terrorists and money launderers by the Financial Action Task Force (FATF) are relatively rare but hugely important. The last evaluation was in June 2007 and negative findings can severely impact the country’s reputation in the war on terrorist financing and the laundering of criminal proceeds.

During the two-week visit, the UK has to prove to officials from some of the other 36 participating FATF countries that it has a framework in place to protect the financial system from abuse. The inspectors follow an “elaborate assessment methodology”, but those involved are not allowed to talk publicly about the visit. The results of the inspection will be presented at an FATF plenary session in October.

Julian Dixon, CEO of specialist Anti-Money Laundering (AML) and Big Data firm Fortytwo Data, comments: “AML supervisors are going to be on high alert this week because it’s not just public sector bodies who are inspected, but private organisations too.

“It’s also extremely timely, given the recent poisoning of ex-Russian spy Sergei Skripal, his daughter Yulia and a policeman who came to their aid.


“The UK has been accused of being a soft touch for gangsters, politically exposed persons (PEPs) and criminal gangs, a theme that recently entered the popular imagination because of the TV series McMafia, written by journalist Misha Glenny.

“It’s unclear if this still holds true in the UK today, and that’s what the FATF are here to find out.

“It is up to the country being inspected to prove they have the right laws, systems and enforcement in place and the potential for reputational damage is high.

“After a recent inspection of Pakistan, FATF gave the country three months to prove it is doing enough to stay off an international watch list of those failing to curb the financing of terror groups.”

(Source: Fortytwo Data)

In the past week, India’s news has been dominated by billionaire jeweller Nirav Modi (no relation to the Indian Prime Minister), who has been accused of defrauding India’s second largest government-owned bank of $1.8 billion in the biggest banking fraud the country has ever witnessed. Mr. Modi, who calls himself a “haute diamantaire”, has been the preferred jeweller of both Hollywood and Bollywood celebrities, including actress Priyanka Chopra, who was appointed brand ambassador by the company last year. Earlier this month, Punjab National Bank (PNB) filed a criminal complaint against the billionaire for causing the bank a “wrongful loss” of an estimated $40 million. However, as soon as investigations began, it was discovered that the actual figure was $1.77 billion – allegedly the result of a series of fraudulent transactions carried out over the past 7 years. Analysts suggest that Nirav Modi and his uncle Mehul Choksi connived with PNB employees to create fake letters of undertaking (LoUs) – guarantees that a bank is obliged to repay a loan if the actual borrower fails to do so – and used them to secure loans from overseas branches of other, predominantly Indian, banks. Every time a loan fell due, Mr. Modi and his uncle would allegedly ask bank employees to open another LoU equivalent to the loan amount plus the interest due on it; the money from the new LoU would then be used to pay off the previous LoU and its interest.
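The rollover mechanics described above mean the exposure compounds with every cycle: each new LoU must cover the previous principal plus the interest due on it. A minimal Python sketch (the figures are hypothetical illustrations, not the actual PNB amounts) shows how quickly such a scheme balloons:

```python
def lou_exposure(principal, rate_per_cycle, rollovers):
    """Outstanding exposure after repeatedly retiring each LoU with a new,
    larger one covering the old loan amount plus accrued interest."""
    exposure = principal
    for _ in range(rollovers):
        exposure *= 1 + rate_per_cycle  # new LoU = old loan + interest due
    return exposure

# Hypothetical example: a $40m exposure rolled over 14 times
# at 5% interest per cycle roughly doubles
print(round(lou_exposure(40_000_000, 0.05, 14)))
```

Because nothing is ever actually repaid from the borrower’s own funds, the exposure grows geometrically until a rollover is refused or discovered.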

 

What we know so far

Conveniently, it has been reported that Nirav Modi and his entire family left India during the first week of the year. He was seen at the World Economic Forum in Davos – only six days before PNB first filed a criminal complaint against Modi and his associates - where he even posed for a group photo with India’s Prime Minister Narendra Modi. The alleged scam has prompted a number of protests against the jeweller and the Indian Government, as well as a heated debate and hundreds of memes on social media.

Thus far, the government has made some key arrests, including two bank officials and a business associate of Mr. Modi's on suspicion of helping him, whilst a global manhunt has been launched by India's Central Bureau of Investigation (CBI) for the billionaire and his uncle, whose passports have been revoked. Mr. Modi's lawyer has claimed that his client ‘went out of India for business purpose’ and that his family members normally ‘stay abroad most of the time’. Mr. Choksi's firm, Gitanjali Gems, has denied any involvement in the fraud scandal.

The Income Tax Department’s investigations have also shed light on the Nirav Modi Group’s unaccounted and unexplained funds, major discrepancies in stock valuations and instances of suspicious foreign funding. It has also been revealed that the jewellery brand accepted payments in cash on a regular basis, without accounting for all of them.

 

The Bigger Picture: India’s Fraud Problem

But how does the alleged scam look when compared to other frauds that Indian banks are faced with routinely?

Typical fraud cases in India involve a borrower who intentionally deceives the lending bank and never repays the loan.

In an article for BBC News, Vivek Kaul, author of India's Big Government – The Intrusive State and How It is Hurting Us, points out that in July 2017 India’s Finance Ministry shared data showing that PNB’s controls were in bad shape: compared with 77 other banks, the country’s second largest government-owned bank faced the highest fraud losses between 2012-13 and 2016-17. Over the same period, Indian banks saw total fraud losses of $10.8bn, with PNB’s losses amounting to $1.4bn.

More generally, over the last few years India’s government-owned banks have been struggling with corporate loan defaults, with their bad loans ratio standing at 13.5% as of September 2017. They have been forced to write off loans worth an estimated $38.8bn over the five years ending 31 March 2017. The Times of India has estimated that in the past 11 years the government has injected around $40.3bn into the banking sector. It is little wonder the Indian public is angry: the government has been spending capital that could have gone to healthcare, education and agriculture on propping up the banks it owns.

What makes all of this truly worrying is that it might be only the tip of the iceberg – the visible part of a problem that runs much deeper. Who knows what tomorrow may bring for Nirav Modi’s case, PNB and the Indian banking system at large.

Stephen Ufford, Founder and CEO of Trulioo, discusses how mobile can offer increasing protection against modern fraud.

In a world where interaction is increasingly made through screens rather than face-to-face, it is often difficult for companies to tell exactly who their customers are online, which poses a serious risk to security and compliance.

This threat is compounded by increasing legislative pressure. A host of new regulations passed at the end of 2017 mean that companies have to focus more and more on knowing exactly who their customers are.

The end of January was the final deadline for financial services firms to register ‘ultimate beneficial owners’ so that the individuals behind every account, and those who benefit from it, are clearer. The Fourth Anti-Money Laundering Directive (4AMLD) stipulates that companies need to be aware of the ultimate identity of business entities. This prevents the development of shell companies for tax evasion and money laundering, among other financial crimes.

Under the Second Payment Services Directive (PSD2), which also took effect in January, any transaction above €30 needs to be subject to a two-factor authentication process, which verifies the identity of the customer through two separate pieces of information.

This can be based on something they know, such as a password; something intrinsic about them, such as biometric data like fingerprints or facial appearance; or something they possess, such as specific documentation.

In a digital age, this is easier said than done. Gone are the days when customers walk into a branch to set up their bank account in person. The vast majority of financial interactions nowadays are carried out simply through the click of a mouse or, more recently, the swipe of a phone. The number of mobile phone users in the world is expected to surpass the 5 billion mark by next year.[1] Last year, mobile transactions overtook those made online and in branches, according to data from Visa.[2]

But this increasing shift to mobile devices can provide a KYC opportunity, offering another item that customers possess, and can use to identify themselves. With access to Mobile Network Operators (MNOs), financial services firms can access another form of identification – possession of a specific handheld device.

This usually involves an SMS text message being sent with a verification code to the user’s mobile. The code can then be used to authenticate that the account is being accessed by the owner of the phone, verifying identity through possession of the device. MNOs already have access to extensive identity information on their subscription holders, as they are also expected to meet stringent KYC requirements. Financial services firms can use this vital layer of identification and compare it against other pieces of evidence, such as documents and passwords, for the benefit of all parties.
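As a rough sketch of how such an SMS-based possession check works – a simplified, hypothetical flow, since real deployments add rate limiting, an SMS gateway and persistent server-side storage:

```python
import hmac
import secrets
import time

# Hypothetical in-memory store of pending codes, keyed by phone number
_pending = {}

OTP_TTL_SECONDS = 300  # code valid for five minutes

def issue_code(phone_number):
    """Generate a random 6-digit code to be sent to the device by SMS."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone_number] = (code, time.time() + OTP_TTL_SECONDS)
    return code  # handed to the SMS gateway, never shown to the web client

def verify_code(phone_number, submitted):
    """Check the submitted code against the one sent to the device."""
    entry = _pending.pop(phone_number, None)  # one-shot: prevents replay
    if entry is None:
        return False
    code, expires = entry
    # compare_digest avoids leaking information via timing differences
    return time.time() < expires and hmac.compare_digest(code, submitted)
```

Using `secrets` rather than `random` keeps the codes unpredictable, and popping the entry on first use means a stolen code cannot be replayed.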

Another useful function of handheld devices is their capacity to record biometric data. The majority of smartphones include a front-facing camera that can be used to take a photo, capturing inherent data about a person’s appearance.

As technology on phones improves, this opens up opportunities for further layers of authentication. Many iPhones have the capacity to register fingerprints, as well as the facial recognition capacity extensively advertised in the iPhone X.

At the moment, these innovations are limited to higher-end devices. However, as this capability becomes more widespread amongst devices, using further biometric data proofs for customers will become increasingly feasible.

Additionally, the ability of mobile devices to verify identity has a wider potential for citizens of the world. Vast swathes of the global population are unbanked – excluded from the financial system and without a financial identity. But the extensive reach of mobile technology could change this.

In Mexico, for instance, only 40 percent of adults have a bank account, yet there are 80 phone subscriptions for every 100 people. Being unconnected to any formal bank can leave many people financially disempowered, unable to access any kind of financial services, which leaves their funds insecure and without growth potential. The ability to verify identity through mobiles means that previously unbanked individuals can be provided with access to financial services in the future.

In an increasingly globalised world, borders are becoming more fluid. The global population is more mobile than ever, with many people moving between borders for work or shopping in foreign countries over the internet. Cross-border e-commerce, for instance, is growing at 25 percent annually.[3] As individuals and money routinely travel increasing distances between geographical and legislative areas, this makes securing identity and tracing transactions more difficult than ever.

But mobile devices can be taken across borders and connected to their original MNO via other local networks. In an increasingly interconnected world, as fraud threats become more sophisticated and regulation more stringent, mobiles and their networks can provide a consistent proof of identity that brings security and increased access to financial services for everyone.

[1] https://www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/

[2] https://www.visaeurope.com/media/pdf/40172.pdf

[3] http://www.dhl.com/en/press/releases/releases_2017/all/express/cross_border_ecommerce_is_one_of_the_fastest_growth_opportunities_in_retail_according_to_dhl_report.html

Anomali recently released a new report that identifies major security trends threatening the FTSE 100. The volume of credential exposures has dramatically increased to 16,583 from April to July 2017, compared with 5,275 in last year’s analysis. 77% of the FTSE 100 were exposed, with an average of 218 usernames and passwords stolen, published or sold per company. In most cases the loss of credentials occurred on third-party, non-work websites where employees reuse corporate credentials.

In May 2017, more than 560 million login credentials were found on an anonymous online database, including roughly 243.6 million unique email addresses and passwords. The report shows that a significant number of credentials linked to FTSE 100 organisations were still left compromised over the three months following the discovery. This failure to remediate and secure employee accounts means that critical business content and personal consumer information held by the UK’s biggest businesses has been left open to cyber-attacks.

The report, The FTSE 100: Targeted Brand Attacks and Mass Credential Exposures, conducted by Anomali Labs, also reveals that:

“Our research has uncovered a staggering increase in compromised credentials linked to the FTSE 100 companies. Security issues are exacerbated by employees using their work credentials for less secure non-work purposes. Employees should be reminded of the dangers of logging into non-corporate websites with work email addresses and passwords. While companies should invest in cyber security tools that monitor and collect IDs and passwords on the Dark Web, so that staff and customers can be notified immediately and instructed to reset accounts,” said Colby DeRodeff, Chief Strategy Officer and Co-Founder at Anomali.

The Anomali research team also analysed suspicious domain registrations, finding that 82% of the FTSE 100 have at least one catalogued against them, and 13% more than ten. In a change from last year, the majority were registered in the United States (38%), followed by China (23%). Most cyber attackers used gmail.com and qq.com (a free Chinese email service) addresses to register these domains and mask themselves. With a deceptive domain, malicious actors can orchestrate phishing schemes, install malware, redirect traffic to malicious sites, or display inappropriate messaging.
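One simple way a firm can screen newly registered domains against its own brands is to flag names within a small edit distance of a protected domain. The following sketch uses the standard Levenshtein distance and hypothetical brand names for illustration:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,         # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def suspicious(candidate, brand_domains, max_distance=2):
    """Flag a registration within a small edit distance of a protected brand.
    Distance 0 is excluded: an exact match is the brand itself."""
    name = candidate.split(".")[0]
    return any(
        0 < edit_distance(name, brand.split(".")[0]) <= max_distance
        for brand in brand_domains
    )

brands = ["barclays.com", "hsbc.com"]  # hypothetical watch list
print(suspicious("barc1ays.com", brands))  # one-character swap is flagged
```

Production systems combine this with homoglyph normalisation (e.g. treating `1` and `l` as equivalent) and WHOIS registrant data to prioritise alerts.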

For the second year, the vertical hit hardest by malicious domain registrations was banking with 83, which accounted for 23%. This is double that of any other industry. To avoid a breach, organisations have to be more accountable and adopt a stronger cyber security posture, for themselves and to protect the partners and customers they directly impact.

“Monitoring domain registrations is a critical practice for businesses to understand how they might be targeted and by whom. A threat intelligence platform can aid companies with identifying what other domains the registrant might have created and all the IPs associated with each domain. This information can then be routed to network security gateways to keep inbound and outbound communication to these domains from occurring. No one is 100% secure against actors with the intent and the right level of capabilities. It is essential to invest in the right tools to help secure every asset, as well as to collaborate with and support peers in order to reduce the risk of a similar attack,” continued Mr. DeRodeff.

(Source: Anomali)
