By Robert Welbourn, 2 September 2024

An interview with Greg Hancell, fraud expert at Lynx Tech

The fight to protect people and companies from fraudsters is an ongoing battle, with criminals treating every new intervention as a challenge to overcome.

Our reporter Robert Welbourn spoke to Greg Hancell, a fraud expert working at Lynx – a company utilising AI to combat fraud – about how AI can revolutionise the fight.

Please give me a brief background. 

I’ve been involved in fraud prevention all my life. I actually started working in household insurance fraud at HBOS, where I gradually learned the techniques to identify suspicious and fraudulent patterns and problems. Eventually, I headed up a large loss team. 

I realised that those techniques were transferable to any kind of fraud, so I joined a two-person startup. One of the two was Richard Churchman, the smartest person I’d met up to that point in my life. We were very successful; in two years we grew to around 21 customers and were recognised by some authentication and security companies. They focused on providing one-time passwords for digital banking users, but they didn’t know when they should authenticate a user or why. Our technology was server-side analytics, so we could support them with that.

After our startup was acquired, I helped build a product which offered continuous and transparent adaptive authentication, as well as some concepts around trusted devices. We focused on working with banks, specifically around securing their user journeys and making sure that there was less friction, but also identifying fraud. 

I was introduced to Lynx and the chief technology officer and founder Carlos Santa Cruz; he’s a professor in theoretical physics, artificial intelligence, and machine learning. Again, the smartest person I’ve ever met in my life. He started talking about Lynx and said that they’d built machine learning models that automatically update daily. I couldn’t believe him! I asked him, “What happened during Covid? What did you do?” He said: nothing. And it’s true, because they didn’t need to do anything; the models learned automatically. At that point I knew I had to get on board, so I joined Lynx as the head of product for fraud prevention.

AI must be a game changer in fraud prevention; because it learns from every piece of fraud it detects, it becomes better at detecting fraud.

Exactly! Most people who aren’t specifically working with AI don’t know that there are different levels of machine learning being applied. For fraud prevention, if you’re using rules, they’re only going to find a small percentage of fraud. If you then move to unsupervised machine learning models, you’re going to find a bit more, but you’re going to generate a lot of false positives. Not only does that mean friction for genuine customers, but it also means a lot of operational costs for financial institutions as well. 

So then you move to static, supervised machine learning models. Those models have a label for what fraud or a money mule is, so they can significantly improve the performance of identifying them. However, the problem is that over time the model’s performance drifts.

The best way to think about that is ChatGPT-3 versus ChatGPT-4. ChatGPT-3 was only trained with data up to a certain point in time; it’s very, very powerful, but it doesn’t know anything that’s happening now. If, a few weeks ago, I’d asked it who’d win Euro 2024, it wouldn’t have had a clue that the finalists were Spain and England. It might have made a prediction from historical information, but it couldn’t have known. The same is true of static fraud or money mule models.

Customer behaviour changes, financial behaviour changes. There are new solutions over time and the tactics of fraudsters change because they get a response as to whether it’s worked or not, whether they got money or not. What’s really important is that you have machine learning models that are constantly updated every day so that they can learn the new behaviours and attacks and they can stop more fraud. 
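As a rough illustration of that daily-update idea, here is a toy sketch in Python: per-customer behaviour is refreshed every day from a rolling window of recent transactions, so the baseline tracks changing behaviour instead of going stale. The window size and the amount-only statistics are assumptions for illustration, not Lynx’s actual models.

```python
from collections import defaultdict
from statistics import mean, stdev

WINDOW = 90  # assumed number of recent observations to keep per customer

class DailyUpdatedBaseline:
    """Toy behavioural baseline that is re-learned from fresh data each day."""

    def __init__(self):
        self.history = defaultdict(list)  # customer_id -> recent amounts

    def daily_update(self, todays_transactions):
        """Fold in today's observed transactions and drop the oldest ones."""
        for customer_id, amount in todays_transactions:
            amounts = self.history[customer_id]
            amounts.append(amount)
            del amounts[:-WINDOW]  # keep only the most recent observations

    def anomaly_score(self, customer_id, amount):
        """Z-score of a new amount against the customer's recent behaviour."""
        amounts = self.history[customer_id]
        if len(amounts) < 2:
            return 0.0  # not enough history to judge
        spread = stdev(amounts) or 1.0
        return abs(amount - mean(amounts)) / spread

baseline = DailyUpdatedBaseline()
baseline.daily_update([("c1", 20.0), ("c1", 25.0), ("c1", 22.0)])
print(baseline.anomaly_score("c1", 5000.0))  # very large z-score: flag for review
```

Because the window slides forward every day, a shift in a customer’s genuine behaviour is absorbed within weeks rather than requiring a manual model rebuild.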

Fraud is as old as time; even with AI, computers, and all the new innovations and technologies, fraud essentially remains the same at its heart.

When we’re searching forensically, we don’t reach through the dark; we build very strong financial behavioural models so we fully understand the financial patterns of each and every customer of a financial institution. We know when an individual opened the account, their age, how long before they put money into the account, what their income is, all kinds of information. 

We also know that holistically for the financial institution and their customers. So, regardless of what the attack is, we know if it’s atypical financial behaviour; either somebody has potentially been socially engineered and is part of an authorised push payment fraud scam, or they are potentially a complicit or deliberate mule. We have this knowledge because of the way that we look at data; we don’t just look at the transaction that’s incoming, let’s say in a mule scenario. We also look at the information that customers provide when they join the financial institution and when they apply for the products.

An interesting stat is that 65% of money mules in the UK are people under the age of 30. Typically, people start to bank when they’re around 16, so that 14-year window accounts for 65% of all mules in the UK; that’s a significant correlation. Then you tie that into unusual activity and the device that was used, and you find the account was created in one location but is logged into from another. Then you see multiple devices, and a large amount of money coming in from a beneficiary unknown to the financial institution before being sent on to another unknown beneficiary; that’s significant. Leveraging these insights, we’re able to understand the financial behaviours and patterns of each and every customer inside the financial institution.
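The kinds of signals described above could be combined, in a deliberately simplified sketch, into an additive risk score. The feature names and weights below are hypothetical illustrations, not Lynx’s actual model.

```python
# Hypothetical mule-risk signals and weights, based on the indicators
# discussed in the interview; real systems learn these from labelled data.
MULE_SIGNALS = {
    "under_30": 2,                           # 65% of UK mules are under 30
    "login_location_differs": 3,             # created in one place, used in another
    "multiple_devices": 2,
    "inflow_from_unknown_beneficiary": 3,
    "rapid_outflow_to_unknown_beneficiary": 4,
}

def mule_risk_score(account: dict) -> int:
    """Sum the weights of every risk signal present on the account."""
    return sum(w for signal, w in MULE_SIGNALS.items() if account.get(signal))

suspicious = {
    "under_30": True,
    "login_location_differs": True,
    "multiple_devices": True,
    "inflow_from_unknown_beneficiary": True,
    "rapid_outflow_to_unknown_beneficiary": True,
}
print(mule_risk_score(suspicious))  # 14: well above a review threshold
```

In practice a supervised model replaces the hand-set weights, but the intuition is the same: each behavioural signal contributes evidence, and the combination is what makes the pattern significant.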

The sheer volume of transactions money mules undertake is staggering; having this technology look at it, where a human doesn’t have the time, must be priceless.

What we’re finding is that there’s a convergence between fraud and AML. And where does it converge? In money mules. 

For years, the UK has stood apart from the rest of the world with real-time rails allowing for instant money transfer. Now, the rest of the world is catching up digitally, with the ability to quickly onboard new accounts that can receive money. Digital banking and digital-only financial institutions are great instruments for financial inclusion but are also used by criminals.

Unfortunately, the mass adoption of different tools in the market, such as generative AI, to fake and falsify documents and bypass identity verification controls means that criminals can automate the mass onboarding of mule accounts. By doing so, they have a network of accounts that they can push money through to try to legitimise it. What that means is that financial institutions now need not only to recognise that money is being laundered, but to identify it in real time.

AML is typically a reactive solution, not a proactive one, because AML solutions are not that dissimilar from the police. They’re looking over a long period of time to identify patterns and criminal organisations and criminal rings without tipping off the criminals, which is quite a fine art. They now also need to acknowledge that a criminal can mass-onboard thousands of accounts, send money to those accounts, and offboard them in a day.

A lot of time and money has been spent improving the payments process, simplifying it and making it as quick as possible. But that can’t come at the detriment of proper AML and KYC checks; it’s a tough balance. 

We’ve perhaps misused a technology created for banking in other areas. When people provide one-time passwords to an application, it might be just to authenticate an email account or to get access to Office 365; unfortunately, that means people don’t necessarily associate a one-time password with a significant sum of money potentially leaving their account.

We might want to consider a new banking technology, used only for that purpose, for authenticating large sums of money. So, when someone sends a large sum, they know they are going through a specific authentication process, and the amount is dynamically linked to the one-time password.
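A minimal sketch of that dynamic-linking idea, assuming an HMAC-based scheme in the spirit of HOTP: the one-time code is derived from the payment amount and payee as well as a shared secret, so a code phished for one payment cannot authorise a different one. All names, parameters, and the truncation scheme here are illustrative, not a real banking API.

```python
import hmac
import hashlib

def payment_otp(secret: bytes, amount_pence: int, payee_iban: str, nonce: str) -> str:
    """Derive a 6-digit code bound to one specific payment (toy sketch)."""
    # Bind amount, payee and a per-transaction nonce into the MAC input.
    msg = f"{amount_pence}|{payee_iban}|{nonce}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    # Truncate to 6 digits, HOTP-style dynamic truncation.
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

secret = b"shared-device-secret"  # hypothetical secret held by the customer's device
otp = payment_otp(secret, 500_000, "GB33BUKB20201555555555", "nonce-1")
print(otp)  # a 6-digit code valid only for this exact amount and payee
```

Changing the amount or the payee changes the MAC input, so the code a victim reads out to a fraudster is useless for any other transfer; this is essentially the "dynamic linking" requirement of PSD2 strong customer authentication.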

Lynx operates across 10 quite disparate countries; do you see the same types of fraud across these areas? 

Each is different, and that’s why it is so important that we build a machine-learning model specific to each financial institution. There’s another way of doing that through a consortium, whereby you collect all your data and then try to build a global model to address the global problems. 

The problem with building a consortium is that you don’t acknowledge local and regional products, risks and attacks. You also restrict the amount of data that you have available because you can only use data that’s available for all financial institutions, that’s interoperable and consistent. So, what we do instead is build machine learning models specific to the financial institution using their onboarding and application data as well as transactional data. 

These models learn from the fraud labels we receive every day, so we learn about new types of fraud attacks and new types of customer behaviour. It’s really important that you build models that have that local behavioural understanding and local fraud and risk understanding. 

Fraud is ever changing, but fraud stays the same. When you think about money laundering in terms of layering and placement, isn’t it a risk that actually the more global the financial system becomes, the easier it might become to hide that money? 

Yes, exactly. And that’s the significant challenge that financial institutions are facing. If they don’t have technology that can, in real time, identify illicit sources of funds entering an account, then, unfortunately, the funds will flow out of that account. 

In the US, for example, they’ve recently gone to real-time rails, and there is that interoperability in Europe with the IBAN and Payment Services Directive. Indeed, the movement of that money is a challenge. 

In addition, the way that you can quickly spin up businesses online is also a problem, because you can somewhat legitimise the source of funds through a shadow business. The accessibility of real-time rails and these different types of products means that, now more than ever, criminals have more tools available to launder money. Luckily, we’ve come up with an answer to that: supervised machine learning models.


Robert Welbourn
Robert Welbourn is an experienced financial writer. He has worked for a number of high street banks and trading platforms. He's also a published author and freelance writer and editor.