From Black Box to Glass Box: Inside Fenergo’s AI Governance Framework

This blog distills the key insights from Fenergo’s AI Governance white paper, explaining our practical approach to governing AI and why it matters for financial institutions seeking to scale AI responsibly.

Artificial intelligence and Generative AI systems are no longer on the periphery of financial services. They are rapidly becoming central to how organizations onboard clients, detect financial crime, and scale compliance operations, as adoption grows across many market sectors. In January 2026, Nvidia released research finding that 65% of financial institutions are actively using AI, up from 45% in the previous year’s survey.

Moreover, industry surveys indicate that 92% of global banks reported active AI deployment in at least one core banking function as of early 2025. This is especially visible in Know-Your-Customer (KYC) and Anti-Money Laundering (AML), with investment in AI tools rising: nearly 100% of respondents to Nvidia’s survey said their AI budgets would increase or stay the same in the coming year.

But, as adoption accelerates, so too does regulatory scrutiny.

Regulators across the globe are making it clear that how AI is governed matters just as much as what it delivers. For financial institutions, the real challenge is no longer deciding whether to use AI but rather how to do so safely, transparently, and responsibly.


Regulation is Catching Up - Fast

For years, the AI governance landscape was fragmented, voluntary, and principles-based. In practice, governance relied far more on institutional self-regulation and soft law than on enforceable regulation. That is now changing rapidly.

The recent EU Artificial Intelligence Act introduced a risk-based classification of AI systems, with conformity assessments and post-market monitoring obligations for in-scope, high-risk AI systems. In the US, voluntary frameworks like the NIST AI Risk Management Framework and enforceable state-level regulations such as the California Consumer Privacy Act (CCPA) establish standards around the use of personal data in AI systems, including Generative AI. The UK, Singapore, Hong Kong, and Australia, along with many other countries, are instead embedding AI oversight in existing financial regulation, data protection law, and supervisory guidance.


The common thread is clear: AI governance is increasingly treated by regulators as a mandatory supervisory expectation for higher-risk and regulated use cases. Financial organizations must demonstrate not only that controls exist, but that they are operational, auditable, and embedded across AI systems.

In other words, regulators are asking organizations to prove that they are using AI systems responsibly.

The Growing Risk of “AI-Washing”

As regulatory pressure mounts, so does a new risk: AI-washing, where organizations overstate or misrepresent how advanced technologies like AI are actually used in their products.

Many organizations are quick to claim responsible or ethical AI, but far fewer can evidence it. High-level policies, ethics statements, or vendor assurances may look reassuring on paper, yet they often collapse under regulatory examination.

In regulated environments such as financial services, this is a serious challenge: automated decisions may be made without explainability, and Generative AI models may lack clear accountability or audit traceability.

The consequences can be far-reaching: eroded stakeholder trust, reputational damage, and regulatory non-compliance.


From Principles to Practice: Why Governance Must Be Operational

While not all automation constitutes artificial intelligence, AI-enabled and model-driven systems introduce distinct governance challenges due to their adaptive, probabilistic, and data-driven nature.

In practice, enhanced governance, oversight, and assurance obligations apply primarily to higher-risk AI systems, with proportionate governance and transparency requirements for lower-risk and limited-risk use cases.

As a result, AI governance cannot be addressed through policy statements alone. It must encompass clear ownership, defined processes, effective controls, and ongoing oversight to ensure AI systems are designed, deployed, and operated responsibly in line with an organization’s business objectives and legal obligations.

Critically, these governance requirements must be embedded directly into AI systems and workflows, and applied across key areas of control:

1. Accountability

  • Clear ownership of AI systems and their outcomes: who is responsible, and who manages them?

2. Risk management

  • Identifying, assessing, and mitigating risks such as bias, explainability gaps, model drift, security, and misuse.

3. Compliance

  • Ensuring AI use aligns with guidelines, regulations, and supervisory expectations (e.g. data protection and privacy, non-discrimination, emerging AI regulation).

4. Transparency and explainability

  • Making AI decisions understandable to users, regulators, and stakeholders at the right level for the context.

5. Lifecycle control

  • Governing AI from design and data sourcing through deployment, monitoring, change management, continuous improvement, and retirement.

6. Human oversight

  • Ensuring human judgment can review, escalate, intervene, and control AI across automated processes.

These guardrails are essential to ensure that Generative AI systems deliver organizational value without introducing undue regulatory, ethical, or operational risk.
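To make these control areas concrete, here is a minimal sketch in Python of how a single governance control might be recorded and checked for audit readiness. The GovernanceControl class, its fields, and the audit_ready check are hypothetical illustrations, not Fenergo’s actual schema.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical record for one governance control in one of the six areas
# above. Field names are illustrative only.
@dataclass
class GovernanceControl:
    control_id: str                        # e.g. "HO-03"
    area: str                              # "accountability", "risk management", ...
    owner: str                             # named role accountable for the control
    description: str
    evidence_required: List[str] = field(default_factory=list)  # artifacts an auditor would ask for
    last_reviewed: Optional[date] = None

    def audit_ready(self) -> bool:
        # A control can only be evidenced if ownership, required evidence,
        # and a review date are all in place.
        return bool(self.owner) and bool(self.evidence_required) and self.last_reviewed is not None

control = GovernanceControl(
    control_id="HO-03",
    area="human oversight",
    owner="Head of Compliance Operations",
    description="Agent-escalated alerts require human sign-off before action",
    evidence_required=["approval logs", "escalation policy"],
    last_reviewed=date(2025, 6, 1),
)
assert control.audit_ready()

A registry of records like this gives each area a named owner and a defined evidence trail, which is what supervisors increasingly expect organizations to produce on demand.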

Inside Fenergo’s SaaS AI Control Framework

Fenergo takes a comprehensive approach to AI governance, starting with AI systems that are trustworthy and responsibly used, with defined access controls and security by design.

As outlined in the AI Governance whitepaper, Fenergo has embedded governance directly into its platform through a SaaS AI Control Framework comprising over 30 distinct controls mapped to jurisdiction-specific regulatory requirements and global best practices. Clients retain full accountability for governance, oversight, and regulatory compliance, with the platform designed to support and evidence these responsibilities rather than replace them.

Rather than being layered on as an overlay, these controls are built in and applied across the entire AI lifecycle, from design and deployment to monitoring and retirement.


The framework is built around the regulatory concerns that matter most to financial institutions:

  • Explainability – ensuring users can understand why an AI agent acted
  • Bias and fairness mitigation – delivering consistent, objective outcomes
  • Privacy compliance – strict data isolation and no training on client data
  • Auditability – complete traceability of actions and decisions
  • Human oversight and accountability – configurable controls and approvals

Crucially, all AI capabilities are opt-in and/or configurable by clients and are designed to operate within clearly defined boundaries.
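As an illustration of what opt-in, bounded AI capabilities can look like in practice, the sketch below shows a hypothetical client configuration in which every capability defaults to off and declares its own operating boundaries. The capability names, fields, and the capability_permitted helper are assumptions for illustration, not Fenergo’s actual configuration surface.

# Hypothetical client configuration: every AI capability is off by default
# (opt-in) and carries explicit boundaries. All names are illustrative.
AI_CAPABILITIES = {
    "document_summarization": {
        "enabled": False,                       # opt-in: client must switch on
        "requires_human_approval": True,        # output reviewed before use
        "allowed_data_scopes": ["kyc_documents"],
    },
    "screening_triage_agent": {
        "enabled": False,
        "requires_human_approval": True,
        "allowed_data_scopes": ["screening_alerts"],
        "max_autonomy": "recommend_only",       # agent proposes, human decides
    },
}

def capability_permitted(name: str, data_scope: str) -> bool:
    # A capability may act only if the client enabled it and the request
    # falls within its declared data scope.
    cap = AI_CAPABILITIES.get(name)
    return bool(cap) and cap["enabled"] and data_scope in cap["allowed_data_scopes"]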


Human Oversight and Auditability: The Cornerstones of Responsible AI

One of the clearest signals from regulators is the expectation that humans remain accountable for AI-driven outcomes. Fenergo’s AI capabilities are designed with this principle at their core. While AI agents can perform tasks independently, every action is logged, visible, and subject to review. Clients can easily drill into:

  • What action was taken
  • When it occurred
  • Which agent initiated it
  • The rationale behind the decision

This “human-in-the-loop” approach ensures that automation enhances efficiency without removing control. It also enables organizations to respond confidently to regulatory queries, audits, or internal risk reviews.
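As a rough illustration of how those four questions map onto an audit trail, the sketch below records the what, when, who, and why of a single agent action. The log_agent_action function and its schema are hypothetical, not Fenergo’s actual log format.

import json
from datetime import datetime, timezone

def log_agent_action(agent_id: str, action: str, rationale: str) -> str:
    # Hypothetical audit record covering the four review questions.
    entry = {
        "action": action,                                      # what action was taken
        "timestamp": datetime.now(timezone.utc).isoformat(),   # when it occurred
        "agent_id": agent_id,                                  # which agent initiated it
        "rationale": rationale,                                # the rationale behind the decision
    }
    return json.dumps(entry)

# A reviewer or auditor can reconstruct the decision from the record alone.
print(log_agent_action(
    agent_id="kyc-screening-agent",
    action="flagged_possible_sanctions_match",
    rationale="name similarity exceeded the client-configured threshold",
))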


From Black Box to Glass Box AI

For a long time, AI has been seen as a black box: powerful, but not fully understood or explainable. Fenergo’s approach shifts AI governance from a black box to a glass box: transparent by design, explainable in operation, and accountable in outcome. This is not just about compliance. It is about enabling responsible innovation at scale.


Responsible AI Governance as a Business Value Driver

The organizations that succeed with AI in financial services will not be those that move fastest but those that move most responsibly.

A 2025 McKinsey report highlights that poor AI governance materially increases legal, reputational, fraud, and cybersecurity exposure, directly eroding AI-derived value in banks. Furthermore, in its European Financial Services Responsible AI Pulse Survey 2025, EY observes that Responsible AI enables both innovation velocity and sustained growth, not just risk reduction.

This is the philosophy behind Fenergo’s AI Governance Framework. Governance is not a brake on innovation; it is the foundation that makes innovation sustainable and enables clients to unlock business value.


Learn More

AI regulation is here. Expectations are rising. And the cost of getting it wrong has never been higher. To explore how Fenergo operationalizes trustworthy, regulator-ready AI, and how financial institutions can move from reactive compliance to proactive confidence, download the Fenergo AI Governance Whitepaper.

About the Author

Emmett Carolan, VP of Privacy and Risk, joined Fenergo in 2022 and specializes in data privacy. He brings a breadth of experience from his background in law and from more than 10 years working across the technology sector at multinational companies such as DocuSign and Verizon.
