The EU’s landmark Artificial Intelligence (AI) Act passed through the European Parliament last week, cementing the bloc as a world leader in regulating the rapidly emerging industry. Although the biggest impacts are likely to be felt in the generative AI development sector itself, finance’s increasing embrace of AI means that it too will have to reckon with the changes.
Of all the global fintech hubs, the Latvian capital of Riga, a city of just over half a million people located in the northeast of Europe, is perhaps the most surprising. Latvia’s heavy investment in the industry means that businesses were anxiously watching for the result of the vote. This does not mean that they hoped it would fail, however.
As Ryta Zasiekina, founder of Latvian fintech company Concryt, explains: “Courts and regulators are dispelling the myth of ‘tech exceptionalism,’ which suggests technology companies are somehow exempt from legal scrutiny. Instead, they are enforcing existing legal frameworks more rigorously to address new challenges posed by technological innovation. This shift signals a move towards holding tech companies accountable within the established legal framework.
“The European Parliament’s ratification of the AI Act marks a pivotal moment for many banking and financial services offerings, not least the e-commerce and cross-border payments industry, but it also raises important questions about the future of AI regulation.
“While the legislation aims to set guardrails for AI technology, particularly in industries like banking and electronic products, there are concerns it may stifle innovation and hinder the development of cutting-edge AI solutions. AI-powered systems have the potential to enhance customer experiences, streamline operations, and improve security, but these benefits may be curtailed by overly restrictive regulations.
“Despite these concerns, the AI Act does provide a much-needed framework for the ethical and responsible deployment of AI technology.”
How the bill will affect existing AIs
Iain Swaine, director of EMEA global advisory at fraud detection company BioCatch, points out that financial services providers have long been pioneers in AI.
“We know that legislation often lags innovation, and any legislation or techniques put in place to comply should not be too prescriptive for that reason,” he says. “However, banking is probably the sector with the most experience in the use of AI already – it is used in credit scoring and risk, and especially in banking fraud detection.
“With the extremely large data sets banks have, they need some form of AI to be able to parse them effectively, and to remove repetitive and manual elements from existing processes. This has already led to better accuracy with lower false positives.”
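BioCatch does not publish the internals of its detection systems, but the trade-off Swaine describes – better accuracy with fewer false positives – is easy to illustrate. Below is a minimal, hypothetical Python sketch of how a bank might measure a fraud model’s false-positive rate against labelled historical transactions; all data and labels are invented for illustration.

```python
# Minimal sketch: measuring the false-positive rate of a fraud model's
# alerts against labelled historical transactions. All values here are
# invented; real systems score far richer behavioural features.

def false_positive_rate(y_true: list[int], y_pred: list[int]) -> float:
    """Share of legitimate transactions (label 0) wrongly flagged as fraud."""
    false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    legitimate = sum(1 for t in y_true if t == 0)
    return false_positives / legitimate if legitimate else 0.0

# 1 = confirmed fraud, 0 = legitimate; predictions come from the model.
labels      = [0, 0, 1, 0, 1, 0, 0, 0]
predictions = [0, 1, 1, 0, 1, 0, 0, 0]

print(f"False-positive rate: {false_positive_rate(labels, predictions):.1%}")
# -> False-positive rate: 16.7% (one of six legitimate payments flagged)
```

In practice the same measurement runs over millions of transactions, and a model’s alerting threshold is tuned until the false-positive rate is operationally acceptable.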
As might be expected, this reliance will be impacted by the bill to some extent. John Byrne, CEO of regulatory risk intelligence firm Corlytics, explains: “[The bill] will impact how [banks] deploy AI in such high-risk, critical areas as credit scoring. The banks have already been very focused on AI bias. When it comes to developing chatbots that offer financial advice and algorithms for groups of people experiencing financial challenges, the developers are typically rather similar individuals with a specific bias.
“Now, with the introduction of the AI Act, there will have to be a rigorous testing approach: before they put anything into production, banks will have to test the models to see whether they are transparent and whether there will be traceability on the decisions.
“They will [also] need to evaluate their vendors in order to ensure that their AI technology systems are transparent, equitable, and compliant with ethical standards. This will have a much bigger impact on the retail banking sector and the wholesale banking sector. These sectors have already been a lot riskier from the GDPR point of view, and those risks will only increase with the use of AI.
“The FCA/BOE 2022 paper is an excellent blueprint for best practice in data, model risk and governance for AI & ML in the financial services sector, and it offers very practical guidance ahead of the forthcoming EU regulation. Everyone who follows this guidance will be in good shape.”
Byrne also notes that there are problems with relying too heavily on AI models for the banking sector. Outlining how the bill addresses them, he adds: “While encouraging responsible and transparent use of AI, the EU AI Act in fact emphasises how crucial it is to have high-quality accurate data in AI systems. Of course, implementing the requirements of the EU AI Act will involve additional compliance costs for banks.
“Quality data for the learning, training and decision-making of AI models is vital. Banks in the EU are already subject to the very strict General Data Protection Regulation (GDPR), which sets stringent standards for the collection, processing, and storage of personal data – and that’s a challenge.
“The major risk with AI, and specifically generative AI, is that it can make things up, and it does not necessarily see the difference between reality and fiction.
“In some areas, such as art, music and filmmaking, the creativity of AI models may work well. But in sectors such as financial services, it is a major risk when AI models are overly ‘creative’. Banks relying on AI in any of their processes, especially decision-making processes, will have to be much more attentive to data quality.
“If a bank uses AI in customer support, it needs to be able to tell the regulator how those models are trained and how they work, and be able to trace why certain advice was given in a particular situation. If a bank employs AI algorithms for ‘know your client’ processes and rejects a customer account application, the client has the right to know the reasoning involved in the decision process.
“That’s why, under the AI Act, the bank must document the decision-making process – including details on the data inputs, features and factors considered by the AI model – and be ready to sufficiently explain why the AI system arrived at that decision. Robust AI compliance policies, procedures and controls are a must-have.”
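The Act does not prescribe a logging schema, but the kind of decision record Byrne describes – capturing inputs, factors and a traceable explanation – might look something like the following sketch. All field names, the model identifier and the threshold are hypothetical.

```python
# Hypothetical sketch of an auditable record for an AI-assisted KYC
# decision. The AI Act does not prescribe a schema; these fields simply
# illustrate the inputs, factors and reasoning a regulator could ask for.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    model_id: str            # which model version made the call
    decision: str            # e.g. "reject_account_application"
    inputs: dict             # the data points the model received
    factor_weights: dict     # how much each factor contributed
    explanation: str         # human-readable reasoning for the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    model_id="kyc-risk-v3.2",                       # invented identifier
    decision="reject_account_application",
    inputs={"id_document_match": 0.41, "address_verified": False},
    factor_weights={"id_document_match": 0.7, "address_verified": 0.3},
    explanation="ID document similarity below the 0.6 review threshold "
                "and address could not be independently verified.",
)
print(record)  # in practice, persisted to an immutable audit store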
What does it mean for the future?
Greg Hanson, group vice president and head of sales for EMEA north at Informatica, believes that this requirement for transparency will also affect the way banks build future technologies, no matter where they are in the world.
“Final approval of the EU’s AI Act will resonate far beyond the region’s borders,” Hanson asserts. “We will likely see divergence in how countries across the globe regulate AI. This could look like different variants of the same policy that are loosely based on the EU’s AI Act.
“The challenge for local regulators will be striking a balance between regulation and innovation. What’s clear is that large, multinational organisations will not be able to afford to approach AI regulation on a siloed, project-by-project, country-by-country basis. It is too complex.
“Instead, organisations will need to consider how AI regulation translates into policy and put solid foundations in place that can be easily adapted for individual regions. For example, regulators across countries are showing an appetite for transparency. To demonstrate that they are using AI safely and responsibly, banks must be able to show why their AI model has made a certain decision.
“The onus is on banks to understand what the heritage and lineage of data inputs were, ensure that the data was high quality and illustrate how AI acted to generate a particular outcome.”
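Informatica sells data-management tooling for exactly this problem, but the underlying idea of recording data heritage is simple to sketch. The structure below, with invented source systems and transformation steps, shows one minimal way a bank might carry lineage metadata alongside a training dataset; it is an illustration, not Informatica’s product or any mandated format.

```python
# Hypothetical sketch: carrying lineage metadata alongside a dataset so a
# bank can answer "where did this training data come from, and how was it
# changed?" Source names and transformation steps are invented.

from dataclasses import dataclass, field

@dataclass
class DatasetLineage:
    dataset_id: str
    source_systems: list[str]           # upstream systems of record
    transformations: list[str] = field(default_factory=list)

    def record_step(self, step: str) -> None:
        """Append each cleaning or feature step so the history is replayable."""
        self.transformations.append(step)

lineage = DatasetLineage(
    dataset_id="credit-scoring-train-2024q1",
    source_systems=["core-banking-ledger", "credit-bureau-feed"],
)
lineage.record_step("dropped rows with missing income field")
lineage.record_step("capped outlier incomes at the 99th percentile")
print(lineage)  # the full history travels with the dataset
```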
However, Prathiba Krishna, AI and ethics lead at SAS UK & Ireland, points to some concerns about whether banks will be able to make these demonstrations in time.
“Businesses must take note to avoid unintentionally falling foul of these new requirements. Those found to be using AI systems that violate the Act face substantial fines of several million euros once it becomes fully applicable, expected around mid-2027. Interestingly, SAS research from 2023 found that respondents had concerns about their ability to comply.
“A lack of third-party support (69%) and a need for greater clarity around regulation guidelines (59%) were two of the main issues, followed by a lack of internal AI expertise (34%), a lack of information available about AI legislation (31%) and a lack of understanding of the AI legislation (24%).
“For example, there always needs to be a plan in place to govern trustworthy AI – something that those in the early stages of their AI deployment journey need to consider. Put simply, data fuels all AI. If the data going into an AI model is of poor quality, the output will be too.
“These new requirements shouldn’t be seen as a negative: it’s the organisations that can demonstrate responsible and ethical use of AI that are more likely to be commercially successful. Investing in solid governance frameworks and establishing a culture of trustworthy AI will pay dividends.”
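Krishna’s point that poor-quality data in means poor-quality output is often operationalised as a quality gate that data must pass before model training. The sketch below is a deliberately simple illustration; the threshold, field names and example rows are all hypothetical.

```python
# Hypothetical sketch of a pre-training data-quality gate: reject a
# dataset when too many rows are missing required fields. The threshold,
# field names and example rows are all invented for illustration.

def quality_gate(rows: list[dict], required: list[str],
                 max_missing_ratio: float = 0.05) -> bool:
    """Return True only if the share of incomplete rows is acceptable."""
    missing = sum(
        1 for row in rows
        if any(row.get(f) is None for f in required)
    )
    ratio = missing / len(rows) if rows else 1.0
    print(f"{missing}/{len(rows)} rows incomplete ({ratio:.0%})")
    return ratio <= max_missing_ratio

rows = [
    {"income": 42000, "age": 31},
    {"income": None,  "age": 55},   # incomplete row
    {"income": 58000, "age": 47},
]
if not quality_gate(rows, required=["income", "age"]):
    print("Dataset rejected: fix the upstream data before training.")
```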