By Robert Welbourn, 9 September 2024

We spoke to DataStax’s AI expert to get the lowdown on banks’ generative AI journeys

AI is here and it isn’t going anywhere any time soon. But how are companies actually using it in their day-to-day operations?

Dom Couldwell is head of field engineering for EMEA at DataStax, a leading GenAI platform that supports banks and other financial institutions on their transformation journeys.

Our reporter spoke to Dom about how banks are progressing on their AI journeys, the untapped potential hidden in their data, and how AI could permanently change the consumer banking landscape.

Tell me a bit about yourself please, Dom!

I’ve been with DataStax a little under three years now. Before DataStax I worked for too many different organisations to list, but most recently Google, and before that the Apigee team that was acquired by Google.

My first job was writing code for submarine command systems, so scarily, somewhere in the North Atlantic, there’s probably a submarine with my code controlling nuclear missiles! 

How are banks getting on with their GenAI journey? 

I think they’re getting there. The challenge with financial services is trust. The potential benefits of GenAI are bigger than anything we’ve seen before. 

The problem is it’s hard. The key balance for many banks is leveraging the value of AI without letting go of the data. Organisations like Bud are interesting because they’ve dealt with machine learning before; they’ve processed a huge amount of banking data, which hasn’t always been the highest quality. 

There’s no GenAI without data, so financial institutions (FIs) need to be pretty hot on that, because you can’t have poor-quality data flowing around your systems. They’re dealing with numerous data sources, and once the foundation is set, the focus shifts to identifying where the deeper value lies.

This is a broad industry problem; we’re past the initial hype and people are wondering where the value is. For FIs, it’s a case of finding a meaningful use case that can actually drive value. We started with chatbots, which is fine because we made them a little bit better. But even if we make them 1000% better, are we getting a return on the investment? That’s what I find the institutions are looking for.

Banks are investing in things like chatbots because they’re quick, easy and visible to customers; do you think there might be issues of short-termism?

If your chief executive is demanding to see something coming out of GenAI, chatbots are an obvious place to start. People have to play the game to a certain extent to justify the investments, but in the medium to long term I don’t think chatbots are where the biggest returns will come from.

For many institutions, FIs included, a copilot will provide the biggest return on investment. That’s the initiative I’m most excited about. Copilots are AI agents that recommend actions to users to help them be more productive and efficient. Copilots will extend beyond being code assistants to being able to advise humans across a broad spectrum of roles. This ‘human in the loop’ approach will build trust, add value and allow us to validate results while the technology matures. At the moment copilots are very much focused on developers, but when there are copilots for all kinds of roles, this approach will be much more powerful.

It must be frustrating watching companies struggle to manually review their data when you know there’s a GenAI alternative. 

I’m very empathetic towards any large organisation looking at this, because it’s almost too much data. Where do you start? How do you get the real benefits of it?

We’ve gone through various cycles of progression: prompt engineering, retrieval-augmented generation (RAG), vector search. They each take us a little further forward, and every iteration improves, but there are always limitations.

We now have knowledge graphs emerging as a better solution for complex problems where vector search alone can’t help.
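To make that progression concrete, here is a minimal, hypothetical sketch of the retrieval step that sits behind RAG and vector search: documents and the user’s question are turned into embedding vectors, and the closest matches are pulled back as context for the model. The embed function below is a toy stand-in that only counts characters so the example runs on its own; a real system would call an embedding model and a vector database, and none of the names here come from DataStax’s products.

# Minimal sketch of the retrieval step in retrieval-augmented generation (RAG).
# Assumption: embed() is a toy stand-in for a real embedding model.
import math

def embed(text: str) -> list[float]:
    # Toy "embedding": a normalised character-frequency vector.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(ch) for ch in alphabet]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalised, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

documents = [
    "Overdraft fees and how to avoid them",
    "Fixed-rate mortgage products for first-time buyers",
    "How to report a suspicious transaction",
]

# Index each document alongside its embedding.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the question and keep the top k.
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("I think there is a fraudulent payment on my account"))

The retrieved passages would then be stitched into the prompt the LLM sees, which is where the ‘augmented generation’ part comes in.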

People might not like the idea that their bank is looking at every single one of their transactions; do you think there could be consumer pushback?

I think there’s a balance, like the cookie debate in browsers. I don’t always want to have cookies tracking what I do, but if I do I’ll get more relevant adverts. So choice comes into it: I can choose to use cookies or I can choose to say no to the pop-ups I get. Or I can go further; I can use a browser that blocks all the adverts. 

Financial institutions have the same dilemma. We want them to use our data for tackling fraud, for example. If suddenly there are anomalous transactions in my account, I want them to look at that.

It comes back to what we said before: there needs to be trust and value. If I believe it’s valuable for me to accept the cookies, I’ll accept them. If I think it’s valuable for FIs to use insights from my data to recommend products, I’ll be more comfortable with them doing so.

Do you think there might be an opportunity in the market for a bank to come out and say, ‘We will never use AI’? 

I don’t think it’ll ever be quite that extreme. I think there will definitely be people questioning trust; you see things like the DPD chatbot swearing at people because it had been manipulated through prompt injection. Every time something like that happens, it worries people.

Customers will buy into certain things, and they’ll be scared by other things. The focus here is on how it’s pitched to customers, because it’s not going to go away, particularly for the larger institutions. The idea that data is the new oil holds true, and companies are not going to ignore it.

It seems like a million years ago in IT terms, but I worked at UBS, and we did algorithmic trading in foreign exchange. We used to look at the patterns of what people did, and we thought, “Oh, that guy always buys in the morning and sells in the afternoon,” so we used to change his transfer rate based on that. That was 20 years ago. These kinds of things have been around for a long time, they’re not new in that sense, but the potential is even bigger.

The term ‘AI’ is being used a lot for things that really aren’t AI. Being an expert, I’d love to get your thoughts on that. 

It’s been a problem for the longest time: what is machine learning and what is AI? Part of me thinks it doesn’t matter. Machine learning experts will have a very opinionated view on this, as people always do when it’s something close to their hearts.

The interesting nuance to me is predictive AI versus GenAI. We’ve had predictive AI for a while, but what I’m seeing now is a little bit more of the predictive side getting tagged as generative. That’s the one that’s going to be a bit more confusing. Predictive AI takes your historical data and looks for patterns that you can then apply to the business. Generative AI uses training data to create new images or text in response to a user’s query. They have very different use cases.
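A rough illustration of that distinction, using made-up numbers rather than anything from the interview: the predictive side learns a pattern from historical transactions and scores new ones against it, while the generative side assembles a prompt for a model that writes new text (the actual model call is left as a placeholder).

# Predictive side: learn a simple pattern from past transactions and flag
# anything that falls well outside it. A real system would use a trained model.
past_amounts = [12.50, 30.00, 8.99, 22.40, 18.75, 25.00]
mean = sum(past_amounts) / len(past_amounts)
std = (sum((x - mean) ** 2 for x in past_amounts) / len(past_amounts)) ** 0.5

def looks_anomalous(amount: float) -> bool:
    # Flag transactions more than three standard deviations above the mean.
    return amount > mean + 3 * std

print(looks_anomalous(24.00))   # False: in line with past behaviour
print(looks_anomalous(900.00))  # True: worth a closer look

# Generative side: build a prompt and hand it to an LLM, which writes a new
# message in response. The model call itself is a placeholder, not a real API.
prompt = (
    "A customer has an unusually large card transaction of £900. "
    "Draft a short, friendly message asking them to confirm it."
)
# response = some_llm.generate(prompt)  # hypothetical call to a generative model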

But when you’ve got a buzzword, when you’ve got people focusing on it, you’re going to have this confusion sometimes. I think some techies get very het up about it, but I always go back to the value. That’s the key with any technology transformation: it’s hot air until it’s really adding value.

I think you hit the nail on the head there. If the average person on the street is getting a reduced mortgage – whether it was done by GenAI, an LLM, predictive AI, whatever – then they’re happy, aren’t they? 

In pharmaceuticals, the drugs which get the most funding are the ones that can make the most money, not necessarily the ones that offer the most benefit. I think we’re going to have a similar thing with GenAI.

But if we do this right, it has the potential to affect everybody. I was talking to my 70-year-old father-in-law a couple of weeks ago; he used to be a bank manager, he handles the household finances, and he’s also secretary of the bowls club. Would someone invest the time to build him a copilot for his lifestyle? 

You’ve got to be super careful with the community in terms of their comfort with technology, but there’s potential for enormous benefit. For banks, copilots that help customers fulfil specific roles or deliver advice about their finances – where a better rate is available, or where a mix of financial products could make their money go further – are where the GenAI opportunity will exist. Turning this potential into real services is where the hard work is taking place.

Image: DataStax

Robert Welbourn
Robert Welbourn is an experienced financial writer. He has worked for a number of high street banks and trading platforms. He’s also a published author, freelance writer and editor.