I spoke to the co-founder of Numra about their new finance assistant

David Kearney on artificial intelligence

David Kearney is co-founder of Numra, a startup aiming to streamline workloads for finance teams through AI. They’ve recently raised €1.5m to launch their AI-powered finance assistant Mary.

I spoke to David about Mary and how she, and AI in general, can change the financial world. 

Thanks for joining me, David. To kick off, could you give me a little background, please? 

My personal background is in accounting; I used to work with PwC here in Dublin where I qualified as a chartered accountant. I then spent about three or four years living in Melbourne, where I worked in corporate accounting roles with large companies, mainly in banking.

I came home to Dublin in 2021 and co-founded a company called Peblo, an invoice factoring solution for content creators and their talent agencies. We scaled Peblo to about $250 million in financing and sold it to a company called Wayflyer. That was in March 2022, and I hung around in Wayflyer for about 16 months before getting the itch and deciding to back myself again and start a new business. And the new business is Numra!

Thank you for that. So why Numra? 

Modern chief financial officers and their teams are very much expected now to be strategic business partners, but at the same time they’re still mired in manual transactional work. That is, invoices still need to be recorded, revenue still needs to be reconciled, data still needs to be cleansed and consolidated before it can be analysed.

So you have a lot of these manual, low-value tasks diverting the finance team’s attention away from what everyone – including themselves – expects them to be working on. This leads to a cycle in the finance department of wasted time, costly hours, and really stressed team members. It also leads to frustrated stakeholders within the business who expect a certain experience from the finance team that they’re not getting.

With the arrival of large language models (LLMs), we can now automate a lot of this work. So you can have AI agents who can perform tasks that were previously just too complex for automation. These agents can understand context, they can interpret user requests, and they can think through problems. You can even have them interact with different digital tools and move data between systems. At Numra, we’ve developed an AI-powered accounts assistant whom we’ve christened Mary.

Please tell me more about Mary.

Mary is just like a real-life team member, except she can manage significantly more work for just a fraction of the cost. Users can train Mary on their internal processes; they give her access to their systems – CRM, bank accounts, email inboxes, that kind of thing – and they can communicate with Mary via chat or email.

The real magic here is that Mary can access data from one system, read it, understand it, reformat it, and re-enter it into a different system. That makes her really good at all of those low-value tasks that take up so much of the finance team’s time: things like monitoring the inbox for an invoice and putting it into SAP, reconciling credit card sale transactions, or maybe just cleaning up a data set before you do your analysis on it.

Could you just expand for me on what exactly you mean when you say you can train Mary? What are the actual mechanics of that?

During onboarding and implementation, we actually do process walkthroughs with our customers where we sit down with, say, the owner of the accounts payable process, and we document everything that they do. We then tweak the process so that it fits better with our automated solution, and we feed those operating procedures into the language model so that it can understand everything. It then knows what the process is within the company.

Have you noticed any pushback from people who don’t want to trust financial matters to AI?

A little bit. I think it’s important to distinguish between what we are and what we’re not doing. LLMs are really bad at doing calculations; we’re aware of this, and so we’re more focused on the physical workflows. LLMs are really good at extracting unstructured data and making sense of it, moving it around and then doing something else with it.

The best example is pulling information from an invoice and putting it into the system. Once this has been done, if it’s got a due date, Mary will trigger the payment on that date.

When it comes to more complex analysis and calculations, we don’t advise using LLMs because they have a very high propensity to hallucinate. We do have a feature where you can query Mary, but the data has to exist in the database and be directly callable. We make sure all of our customers are aware of that as well.

Financial services is probably one of the most highly regulated industries; how does Mary fit in around those regulations?

In terms of financial regulations and the FCA, Mary’s not affected by them at all. We’re not making decisions for the user, we’re just giving them a tool to make it quicker and easier for them to process their data, so we’re excluded.

If you zoom out and look at regulation, the EU recently released the EU AI Act. There was a lot of moaning and groaning about it, but myself and my co-founder (Conor Digan) had a read and we thought it was actually OK.

They split everything out by different levels of risk, and Numra isn’t considered high risk so the rules don’t really impact us. But I do think there could be a world in a few years where a language model is capable of causing harm at scale. So for those kinds of systems, regulation is definitely really important. But on the other side, you also don’t want to stifle innovation. It’s a balance you have to find. 

The EU has passed that legislation, and you’d assume the UK will at some point. Do you have any concerns about what the UK legislation may end up looking like?

If anything, I think they’ll be less stringent. I think they’re taking a bit more of a ‘no regulation’ approach. I know a committee has been formed and they’re talking about it, but they’re not sure what they’re going to do yet.

I’d like to see certain things addressed, for example discrimination in hiring processes caused by AI and facial recognition. But equally, more legislation will always be a potential risk to the business, and something we’ll have to keep an eye on.

You mentioned that in the future AI may potentially cause harm. I think we’re a long way away from that, but I’d be really interested in your opinion as a founder who utilises AI. There’s a famous line from an IBM presentation in the 1970s about the risk around computers because they can’t be held accountable. I’d love to get your opinion on that with something like Mary, who potentially can’t be held accountable if she makes a mistake.

Mary can make mistakes; she can extract data incorrectly. She’s more than 99% accurate, so there is a margin for error, but she’s still more accurate than a human.

The thing is, when it’s a machine, people seem to be more sensitive to the mistake. It’s like self-driving cars: self-driving cars are actually safer, but if someone is killed by one, there’s uproar about it. In terms of accountability, we’re definitely responsible for putting the right guardrails in place so that nothing can go wrong within people’s finance function. And we are liable within our contracts as well.

Image: Numra

Robert Welbourn
Robert Welbourn is an experienced financial writer. He has worked for a number of high street banks and trading platforms. He's also a published author and freelance writer and editor.