A guest editorial by Brian Wagner, chief technology officer, Revenir AI

AI has experienced a meteoric rise, and organisations across the globe are finding new ways to use it every day. ChatGPT launched in late 2022, and since then we’ve seen it transform the ways in which we work, learn, live and play. Whilst the promise of AI is limited only by the imagination and its potential is still being explored, this technical opinion article by Brian Wagner, chief technology officer of Revenir AI, underscores the urgent need to be inherently sceptical of AI influence, balancing its output with technical controls to ensure its safety.

OpenAI was founded in December 2015 as a not-for-profit committed to advancing AI ‘in a way that benefits everyone’ and in a ‘safe and friendly manner.’ Its switch to a capped-profit structure in 2019 drew criticism that it went against the company’s claim of democratising AI. The ChatGPT launch preview was nonetheless a huge success, with over a million sign-ups in the first five days, according to OpenAI. Unnamed sources cited by Reuters shortly after launch reported that OpenAI was projecting revenues of $200m in 2023 and $1bn in 2024. Figures like these have fuelled the AI hype cycle.

Nevertheless, according to a popular generative AI search engine, over 82% of companies globally are either using or exploring AI today. Businesses are using chatbots as customer service engines and AI to develop website content, while employees are using AI personal assistants for administrative tasks such as data entry and email management. Almost every business sector is harnessing AI to speed up workflows, save time and work more efficiently. Lawyers, for example, are saving hours every day by using AI for time management and for organising documents and data, as well as using agentic chatbots to widen access to justice. The most fundamental benefit of AI across business sectors is its ability to free up human resources and allow more focus on driving innovation.

As OpenAI approaches its ten-year anniversary, not every aspect of AI today is as rosy as predicted, and we see examples every day of ‘AI gone wrong’. Just last month, in a highly publicised case, the British Broadcasting Corporation (BBC) complained to Apple after its Apple Intelligence notification summaries produced a false version of one of its stories about the shooting of UnitedHealthcare CEO Brian Thompson, allegedly by Luigi Mangione. The article was incorrectly summarised in Apple’s news feed roundup as “Luigi Mangione shoots himself”.

Another example is the rogue chatbot. One of the world’s largest airlines was ordered to compensate a passenger who received incorrect refund information from its chatbot, which had invented a non-existent policy. On a more threatening scale, we are seeing the potential for global crises when AI is used by malicious actors. A Meta security report generated fear and mistrust when it revealed how Russia is using generative AI to lead ‘online deception campaigns’ and potentially interfere in the US Presidential election.

Additionally, with recent reports suggesting that almost half of US companies using ChatGPT have already replaced staff with AI, people are expressing understandable concerns over future job security.  

AI is clearly becoming divisive. Tech companies such as Apple and Samsung have taken a stand by banning generative AI use in their businesses over concerns about leaking confidential data, and major banks including Citigroup, Deutsche Bank and Bank of America are following suit.

In the fintech sector, safeguarding financial data, mitigating fraud and maintaining trust are crucial. The industry’s reliance on artificial intelligence for mission-critical applications such as fraud detection, credit scoring and risk assessment has been a driver of technological progress and offers immense potential for innovation and optimisation. However, as access to AI has become a commodity, it also puts the industry at risk of bad actors influencing its results, and this needs to be kept in check. Without a steadfast commitment to AI security, the fintech sector risks becoming a vector for sophisticated cyber threats.

Fintech companies investing in and deploying GenAI need to be mindful that the quality of AI output is directly related to the quality of its input, and they must understand the source of the data and the training methodology. The information we give AI programmes is the only way they can learn: if a programme is given faulty or untrustworthy data, its results can be inaccurate or biased. The intelligence, or effectiveness, of AI is therefore only as good as the data you provide, and data consistency is one of the key obstacles to implementation. Businesses trying to benefit from AI at scale face difficulties because data is frequently fragmented, inconsistent and of poor quality. This can lead to big issues.

For instance, when Amazon experimented with AI software to evaluate job candidates, its model was trained on CVs submitted over the previous ten years, the bulk of which had been supplied by men. The algorithm therefore learned that being male was the favoured attribute for new hires and began excluding female candidates from the recruitment process.
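
One practical first line of defence is to profile training data before it is ever used. The short Python sketch below is purely illustrative (the column names, the pandas-based checks and the toy recruitment dataframe are assumptions for this example, not Amazon’s or Revenir AI’s actual tooling), but it shows the kind of basic balance and quality check that would flag a skew like this:

```python
import pandas as pd

def profile_training_data(df: pd.DataFrame, protected: str) -> dict:
    """Basic quality and balance checks to run before any model training."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values": int(df.isna().sum().sum()),
        # Share of each group in a protected attribute, e.g. gender in CV data.
        "group_balance": df[protected].value_counts(normalize=True).round(2).to_dict(),
    }

# Hypothetical recruitment dataset, used purely for illustration.
cvs = pd.DataFrame({
    "years_experience": [5, 3, 8, 2, 6, 4],
    "gender": ["M", "M", "M", "M", "M", "F"],
    "hired": [1, 0, 1, 0, 1, 0],
})

print(profile_training_data(cvs, protected="gender"))
# A heavily skewed group_balance (here roughly 83% "M") is a warning sign that a
# model trained on this data may simply learn the historical bias.
```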

To avoid this, companies in the fintech space should have a well-defined plan in place from the beginning for gathering the data that AI will need. Whilst widely available, LLMs are language models and not necessarily fit for industry-specific tasks such as fraud detection and credit scoring. Specialist models require specialist training, and companies must be aware of the costs in advance, because training is often expensive. Companies can optimise training efforts by measuring the effectiveness of AI output, training only when they know they need to, and treating source data and training lifecycles like production code: version-controlled and documented, as the sketch below illustrates.
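
One way to make that concrete is the following minimal Python sketch, which assumes a file-based training set and a JSON-lines audit log (the file names, the accuracy metric and the threshold are hypothetical). The training data is fingerprinted, every run is logged, and retraining is only triggered when measured output quality slips:

```python
import hashlib
import json
from datetime import datetime, timezone

ACCURACY_FLOOR = 0.90  # illustrative quality threshold; set per use case

def dataset_fingerprint(path: str) -> str:
    """Hash the raw training file so every run is traceable to the exact data used."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def should_retrain(latest_metrics: dict) -> bool:
    """Trigger an expensive retraining cycle only when measured quality has slipped."""
    return latest_metrics.get("accuracy", 0.0) < ACCURACY_FLOOR

def record_training_run(data_path: str, metrics: dict,
                        registry: str = "training_log.jsonl") -> None:
    """Append an auditable record of what was trained, on which data, and how it scored."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_sha256": dataset_fingerprint(data_path),
        "metrics": metrics,
    }
    with open(registry, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Illustrative usage:
#   if should_retrain({"accuracy": 0.87}):
#       ...retrain on "fraud_training_v3.csv", then...
#       record_training_run("fraud_training_v3.csv", {"accuracy": 0.93})
```

Keeping records like these under version control alongside the model code gives engineers and auditors the same traceability for data and training as they already expect for software.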

Companies investing in AI must also be mindful of the cost of mining, storing and analysing data in terms of hardware and energy use. Businesses that lack in-house expertise or are unaccustomed to AI frequently have to outsource, which brings its own problems of cost and upkeep. Smart technologies can be expensive due to their complexity, and they incur additional fees for continuous maintenance, repairs and the computational costs associated with building data models.

The majority of companies today have moved past the trial stage when it comes to deploying AI and are experiencing good results and a positive impact on their bottom lines.

The McKinsey Global Institute estimates that generative AI could generate value equivalent to $2.6tn to $4.4tn in global corporate profits annually, with its biggest impact in banking, high tech and life sciences.

However, there is still work to be done in determining the boundaries of AI use, and given current constraints, safety in AI is crucial. With AI expanding quickly and unpredictably across every industry, and particularly in banking and financial services, immediate action is required by every fintech using AI.

Image: Revenir AI
