
The EU AI Act Is Here: What Financial Institutions Must Do Next and How to Get Ahead

Artificial intelligence has rapidly moved from the periphery of business operations to the center of innovation. In doing so, it is reshaping business models and product development across industries, including financial services, and introducing new opportunities, risks, and regulatory responsibilities.

With the introduction of the EU Artificial Intelligence Act (the EU AI Act), AI governance has entered a new era. The Act moves AI governance from voluntary guidance to enforceable law, establishing a global benchmark for the design, deployment, and control of high-risk AI. A fragmented landscape of best-practice frameworks is being replaced by a binding regulatory regime with defined risk categories, clear obligations, and meaningful penalties, notwithstanding ongoing national and regional implementation.


For financial institutions using AI across know your customer (KYC), anti-money laundering (AML), and client lifecycle management (CLM), the message is clear: AI governance is no longer a future concern. It is now an operational imperative.


Why the EU AI Act Matters for Financial Services

The EU AI Act is the world’s first comprehensive attempt to regulate AI systems at scale. Its defining feature is a risk-based classification model that aims to differentiate between prohibited (unacceptable-risk) practices, high-risk AI systems subject to strict regulatory requirements, and lower-risk AI systems subject to transparency or voluntary obligations. For financial services, correct risk classification is a critical determinant of compliance and operating model impact.
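To make the classification concrete, here is a minimal sketch in Python of how an institution might triage its own AI use cases against the Act's tiers. The use-case names and tier assignments are hypothetical illustrations, not determinations under the Act; real classification requires legal and compliance review.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified tiers loosely mirroring the EU AI Act's categories."""
    PROHIBITED = "prohibited"  # unacceptable-risk practices (banned)
    HIGH = "high"              # strict obligations: oversight, logging, etc.
    LIMITED = "limited"        # transparency obligations
    MINIMAL = "minimal"        # voluntary codes of conduct

# Hypothetical mapping of internal use cases to tiers; a real inventory
# would be maintained with legal sign-off, not hard-coded like this.
USE_CASE_TIERS = {
    "credit_scoring": RiskTier.HIGH,
    "aml_transaction_monitoring": RiskTier.HIGH,
    "kyc_document_extraction": RiskTier.LIMITED,
    "internal_search_assistant": RiskTier.MINIMAL,
}

def classify_use_case(name: str) -> RiskTier:
    """Default unknown use cases to HIGH, forcing an explicit review."""
    return USE_CASE_TIERS.get(name, RiskTier.HIGH)
```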


The EU AI Act also applies extraterritorially: non-EU companies must evaluate and potentially adjust their AI strategies wherever their AI systems are placed on, or affect outcomes in, the EU market, including where services are offered into the EU. Because the Act may become a de facto global standard, organizations that align early on compliance and vendor relationships can gain a competitive advantage as regulations harmonize across regions.


What the EU AI Act Requires in Practice

While the EU AI Act is extensive, several core requirements are especially relevant for financial institutions, particularly for higher-risk AI use cases:

Transparency and explainability

Organizations must be able to explain how their AI systems function and why specific outcomes occur, and to provide clear explanations to internal teams, auditors, and regulators. Even where AI systems are provided by third parties, deploying organizations retain full accountability for governance, oversight, and regulatory compliance.
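As a rough illustration of what explainability at the record level can mean, the sketch below shows a hypothetical decision-rationale record in Python. All field and model names are invented for illustration; they do not describe any particular vendor's implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRationale:
    """Illustrative record tying an AI-driven outcome to its inputs and reasons.

    The point is that every automated outcome carries a human-readable
    explanation that can be shown to reviewers, auditors, or regulators.
    """
    decision_id: str
    model_version: str
    outcome: str                                       # e.g. "escalate_for_review"
    reasons: list[str] = field(default_factory=list)   # plain-language factors
    input_features: dict = field(default_factory=dict)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical usage with invented values
rationale = DecisionRationale(
    decision_id="dec-001",
    model_version="screening-model-2.3",
    outcome="escalate_for_review",
    reasons=["name similarity to sanctions-list entry",
             "incomplete address history"],
    input_features={"name_match_score": 0.91},
)
```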


Human oversight and accountability

AI cannot operate unchecked. Institutions must demonstrate that humans can review, intervene in, override, or escalate AI-driven actions where appropriate, and that this oversight operates in practice through continuous monitoring and review.
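One common pattern for demonstrating such oversight is a human-in-the-loop gate. The Python sketch below is a simplified illustration, assuming a hypothetical confidence threshold and in-memory review queue; a production system would need durable queues, entitlements, and full audit trails.

```python
# Minimal human-in-the-loop gate: AI output above a confidence threshold may
# proceed automatically; everything else is queued for human review, and a
# reviewer can always override. Threshold and queue mechanics are assumptions.

REVIEW_QUEUE: list[dict] = []

def apply_with_oversight(recommendation: dict, confidence: float,
                         auto_threshold: float = 0.95) -> str:
    if confidence < auto_threshold:
        REVIEW_QUEUE.append(recommendation)  # a human must approve or reject
        return "pending_human_review"
    return "auto_applied"                    # still logged and overridable

def human_override(recommendation: dict, reviewer: str, decision: str) -> dict:
    """Record a human decision that supersedes the AI recommendation."""
    return {**recommendation, "final_decision": decision, "reviewer": reviewer}
```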


Auditability and record-keeping

AI systems must maintain detailed logs of decisions, actions, and system behavior. This enables traceability during audits or regulatory investigations.
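A minimal sketch of what such record-keeping can look like is an append-only, structured event log. The Python example below is illustrative only; the schema, field names, and storage choice are assumptions, not requirements prescribed by the Act.

```python
import json
from datetime import datetime, timezone

def log_ai_event(path: str, actor: str, action: str, detail: dict) -> None:
    """Append one structured, timestamped record per AI action (JSON Lines).

    An append-only, machine-readable log is one simple way to make AI
    behavior traceable during audits or regulatory investigations.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,    # model or system identifier
        "action": action,  # e.g. "classification", "escalation"
        "detail": detail,  # inputs, outputs, decision path
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Hypothetical usage with invented identifiers
log_ai_event("ai_audit.jsonl", "screening-model-2.3", "escalation",
             {"case_id": "case-42", "reason": "sanctions name match"})
```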


Data governance and privacy

Training data quality, bias mitigation, data minimization, strict controls around personal data, and well-maintained audit trails are foundational obligations.


Risk management and governance integration

In practice, the EU AI Act requires a structured, risk-based approach, with AI governance embedded within existing risk and compliance frameworks rather than managed in isolation.

The Governance Gap Many Firms Face

The EU AI Act brings greater regulatory focus to a data governance gap for financial institutions, as existing GDPR and Model Risk Management frameworks only partially address how training and validation data influence automated customer decisions. While GDPR focuses on the lawful use and protection of personal data, the AI Act requires firms to demonstrate that data used in AI systems is fit for purpose, sufficiently representative, and free from avoidable bias. As a result, data can be GDPR-compliant and models statistically robust, yet still non-compliant if datasets systematically disadvantage certain groups, rely on biased proxies, or contain undocumented limitations.

To address this gap, institutions must adopt AI-grade data governance with explicit accountability at the dataset level. In practice, this involves treating training data as a regulated asset: formally approving datasets through model governance processes, conducting documented assessments of bias and representativeness, recording known data limitations, and applying enhanced scrutiny to vendor-provided data. The AI Act therefore elevates data governance from a supporting discipline to a core risk and compliance requirement, tightly integrated with model, product, and lifecycle decision-making.
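As a sketch of what treating training data as a regulated asset might look like operationally, the Python example below models a hypothetical dataset-approval record capturing the artifacts described above: formal sign-off, a documented bias assessment, known limitations, and enhanced scrutiny for vendor data. All names and fields are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetApproval:
    """Illustrative dataset-level governance record (all fields hypothetical)."""
    dataset_id: str
    owner: str
    approved_by: str                  # model-governance sign-off
    bias_assessment: str              # reference to the documented review
    representativeness_notes: str
    known_limitations: list[str] = field(default_factory=list)
    vendor_supplied: bool = False
    enhanced_vendor_review: bool = False  # expected when vendor_supplied is True

# Hypothetical usage with invented values
training_set = DatasetApproval(
    dataset_id="kyc-training-2025-q1",
    owner="data-governance-team",
    approved_by="model-risk-committee",
    bias_assessment="2025-02 review: no material group-level disparity found",
    representativeness_notes="covers all onboarding regions in scope",
    known_limitations=["sparse coverage of newly added jurisdictions"],
    vendor_supplied=True,
    enhanced_vendor_review=True,
)
```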


How Fenergo Is Prepared for the EU AI Act

Fenergo has been preparing for this new era of AI regulation for some time. As detailed in its new AI Governance whitepaper, Fenergo’s approach is built around a SaaS AI Control Framework comprising over 30 distinct controls, each mapped to relevant jurisdiction-specific regulatory requirements, including those arising under the EU AI Act. Rather than treating governance as an afterthought or an add-on, these controls are embedded throughout the AI lifecycle. Key capabilities include:

  • Explainability by design, with AI system rationale functionality that clearly shows why actions were taken.
  • Human-in-the-loop controls, allowing policy-aligned approvals, review points, and escalation paths.
  • Full auditability, with systematic logging of every AI action, decision path, and condition.
  • Strong data governance, ensuring client data is never used to train or fine-tune models and remains regionally isolated.
  • Bias and fairness safeguards, delivering consistent, objective outcomes aligned with regulatory expectations.

All AI features are opt-in and/or configurable, ensuring clients retain control over how and when AI is used within their environment.


Turning Compliance Into a Business Advantage

The EU AI Act undoubtedly increases regulatory pressure. For institutions that prepare early, it can also create opportunities to strengthen governance, resilience, and operational readiness.

PwC’s 2025 Responsible AI survey, From policy to practice, found that 58% of executives report an association between responsible, governed AI practices and improved ROI and operational efficiency, while 55% say these practices improve customer experience and innovation. This suggests that AI governance is increasingly associated with value creation, not just compliance control.

  • Nearly 6 in 10 executives report that Responsible AI practices improve return on investment and organizational efficiency, highlighting the business value of governance.
  • Half of organizations view operationalizing Responsible AI principles into repeatable processes, not just policies, as critical to scaling AI responsibly, suggesting that governance embedded early can streamline deployments.
  • Executives cite reduced regulatory risk and improved compliance outcomes as benefits of Responsible AI programs, indicating that governance can help avoid downstream regulatory and remediation costs.
  • More than half of respondents say Responsible AI practices enhance customer experience and innovation, reflecting the role of transparency, explainability, and trust in AI adoption.


Financial institutions that act early, embedding AI governance into platforms, processes, and vendor relationships, will be far better positioned than those forced into remediation later. Competitive advantage comes from how well governance enables scale, not just how it satisfies regulation.


Fenergo’s AI Governance Framework is designed to support our customers’ ambition by translating complex regulatory expectations into operational reality. By embedding governance directly into the platform, Fenergo enables financial institutions to innovate responsibly without sacrificing transparency, accountability, or trust.


Download the Fenergo AI Governance whitepaper to explore how to prepare for the EU AI Act and future-proof your AI strategy.


Also, read Fenergo’s latest report, Global FinCrime Operations Trends in 2025, for insights into attitudes towards AI and financial crime compliance.

About the Author

Mark Kettles is a global marketing and product leader passionate about helping organizations turn complex technology into clear customer value. At Fenergo, he is shaping a modern product marketing function, driving positioning, messaging, and launch excellence across a world-class fintech platform.
