From Semantics to Agents: The New Operating Model for Banks

Etienne Oosthuysen

Chief Technology Officer


Summary

  • Inconsistent definitions of core business concepts are manageable at small scale, but become a material operational, regulatory and reputational risk once AI agents reason and act across the enterprise.
  • Leading banks are investing in Enterprise Semantic Data Models: not a glossary exercise, but a governed control plane that makes meaning explicit, reusable and enforceable.
  • Aligned with BIAN principles, a well-governed semantic layer removes the reconciliation tax on cross-domain delivery and makes APIs composable, data products reusable and agents safe to deploy.
  • Agentic ways of working transform the engineer's role from writing code toward directing, supervising and quality-assuring agents.
  • Agents performing organizational work need the same accountability structures as people: a human owner responsible for their scope, behavior and outcomes.
  • Both shifts are already converging in pockets inside Tier-1 banks; closing the gap to enterprise-wide capability is a strategy, people and governance challenge, not primarily a technology one.

Introduction

There is a transformation underway in financial institutions.

While we debate AI regulation, model risk and the ethics of autonomous decision-making, another foundational shift is taking place. The banks that will lead the next decade are not simply adopting AI; they are rebuilding the semantic foundation that makes AI trustworthy. And while they are at it, rewiring how engineers work.

While this article focuses on banks, the principles apply equally across financial services and across any industry operating at enterprise scale with complex data and AI ambitions.

Inconsistent Meaning is a Risk at Scale

Ask any senior data leader at a Tier-1 bank what "customer" means across their organization. The answer often depends on the system, the domain, the team and sometimes the individual who built the data product, API or integration. At small scale, that ambiguity is manageable. At enterprise scale, it becomes a material risk.

When AI agents begin operating across that landscape, reasoning over data and taking actions based on what they find, inconsistent meaning is no longer just a data quality problem. It is a significant operational, regulatory and reputational risk. An agent that resolves "customer" differently across two systems does not just produce a wrong answer. It produces a wrong answer that it’s confident is correct. And it does this at scale.

This is why the most forward-thinking institutions are investing in Enterprise Semantic Data Models. This is not a documentation, glossary or cataloging exercise; it is a governed control plane that makes meaning explicit, reusable and enforceable across every service domain in the bank. The institutions doing this well are doing it in production, at scale, with real delivery teams and with real organizational resistance to navigate: cultural inertia, competing priorities, scarcity of subject-matter experts, uneven availability of knowledge and so on.
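The difference between a glossary and a control plane is enforceability: a governed definition can be checked by machines, not just read by people. A minimal sketch of that idea follows; every name here (`SemanticTerm`, the `customer` definition, its attribute set) is an illustrative assumption, not any bank's actual model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticTerm:
    """A governed definition: one meaning, one owner, machine-checkable."""
    name: str
    definition: str
    owner_domain: str
    valid_attributes: frozenset

# A single governed definition of "customer", reused by every domain.
CUSTOMER = SemanticTerm(
    name="customer",
    definition="A party holding at least one active product with the bank.",
    owner_domain="Party Management",
    valid_attributes=frozenset({"party_id", "legal_name", "residency", "status"}),
)

def validate_payload(term: SemanticTerm, payload: dict) -> list:
    """Return the attribute names that fall outside the governed definition."""
    return sorted(set(payload) - term.valid_attributes)

print(validate_payload(CUSTOMER, {"party_id": "P-1", "nickname": "Al"}))
# a nonconforming attribute is flagged: ['nickname']
```

Because the definition is data rather than prose, the same check can run in an API gateway, a data-product pipeline or an agent's pre-flight validation, which is what makes meaning enforceable rather than merely documented.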

Meaning as Infrastructure

The Banking Industry Architecture Network (BIAN) has long argued that meaning must be explicit and reusable across standardized service boundaries. For years, that principle lived in architecture documents and designs rather than actual delivery. Now, driven by the demands of AI and agentic solutions, it is becoming an operational necessity.

Banks aligned to BIAN are finding that a well-governed semantic layer does something unexpected. It does not slow things down. When done well, it removes the constant reconciliation tax that delivery teams pay every time they build something that crosses a domain boundary. It makes APIs composable, data products reusable and agents safe to deploy at scale. That said, the investment is front-loaded. The returns are real, but they compound over time rather than land immediately.

Getting there requires navigating genuine organizational challenge. Federated ownership of semantics is harder than centralized control, and convincing delivery teams that modelling meaning is worth their time, when they are under pressure to ship, requires both top-down mandate and demonstrated value close to where the work happens.

These are not theoretical challenges. We are working through some of them on the ground right now, inside some of the world's largest banks, and the lessons are consistent. The mandate must come from the top. Adoption friction is front-loaded, and the cultural impact must be acknowledged long before technical solutions are introduced. Federated ownership requires continuous reinforcement, not a one-time rollout, and at enterprise scale, human-led governance alone cannot keep pace. Injecting automation at key intersections of the semantic modelling lifecycle is the mechanism by which semantic discipline becomes sustainable.

The Other Half of the Equation

Semantic foundations alone are not enough. The second shift happening inside leading banks is equally important, and far less discussed.

“Agentic ways of working” is not a technology trend. It is a fundamental change in how engineering and data teams operate. When agents become active participants in the software delivery lifecycle: writing code, running tests, investigating data quality issues, raising tickets and iterating on hypotheses autonomously, the role of the human engineer does not disappear. It transforms.

The engineers who will thrive in this environment are not necessarily the ones who write the best code. They are the ones who know how to direct, supervise and quality-assure agents. Who understand how to set context, define boundaries and interrogate outputs. Who can operate with breadth across adjacent disciplines while maintaining depth in their primary craft. They are T-shaped, agent-augmented practitioners.

Progressive institutions are beginning to think about this more formally. Just as every human performing a task within an organization has a responsible line manager, accountable for their output, their development and their conduct, agents operating within the enterprise deserve the same structural consideration. An agent that investigates data, raises tickets or executes workflows is performing organizational work. That work should sit within a clear accountability structure, with a human owner responsible for its scope, behavior and outcomes. The organizations that get this right are not simply deploying agents. They are redesigning how work is owned, governed and sustained at scale, and that requires organizational structure adjustments to account for agents as part of the workforce.
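The accountability structure described above, every agent with a human "line manager" and an explicit scope, can be sketched as a simple registry. All identifiers here (`dq-investigator-01`, the owner address, the action names) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    purpose: str      # the organizational work this agent performs
    scope: set        # the actions the agent is permitted to take
    human_owner: str  # the accountable "line manager" for this agent

REGISTRY: dict = {}

def register_agent(record: AgentRecord) -> None:
    """No agent enters the workforce without an accountable human owner."""
    if not record.human_owner:
        raise ValueError("every agent must have an accountable human owner")
    REGISTRY[record.agent_id] = record

def is_permitted(agent_id: str, action: str) -> bool:
    """Actions outside the registered scope are denied, not improvised."""
    record = REGISTRY.get(agent_id)
    return record is not None and action in record.scope

register_agent(AgentRecord(
    agent_id="dq-investigator-01",
    purpose="Investigate data quality issues and raise tickets",
    scope={"profile_dataset", "raise_ticket"},
    human_owner="head.of.data.quality@example.bank",
))
print(is_permitted("dq-investigator-01", "raise_ticket"))   # True
print(is_permitted("dq-investigator-01", "execute_trade"))  # False
```

The point of the sketch is structural: scope and ownership are declared before deployment, so when an agent's behavior is questioned there is always a named human answerable for it.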

The banks getting ahead of this are not waiting for the tooling to mature before building that capability. They are investing now in structured enablement that goes beyond technology training, focusing on the mindset, methodology and operating model shifts that determine whether agentic delivery lands as a genuine productivity multiplier or simply accelerates existing inconsistency. Working through this with institutions makes one thing increasingly clear: the cultural and organizational dimensions of this change are consistently underestimated.

Where it Comes Together

The most compelling signal of where this is heading is what happens when both shifts converge. When semantic governance meets agentic delivery, something qualitatively different becomes possible.

An agent that can resolve business meaning through a governed semantic layer before querying data, that knows what customer means, what attributes are valid, what data products represent that entity and what rules govern it, is not just a faster query tool. It is a fundamentally more trustworthy system. One that can investigate a data quality issue, profile a dataset, form and test hypotheses, validate findings against enterprise semantic rules and raise a governed ticket for human review, all without a human translating between business language and technical execution at every step.
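One way to picture that workflow, resolving business meaning before touching data and escalating to a human rather than self-remediating, is the sketch below. The semantic-layer contents, rule names and record values are invented for illustration:

```python
# Hypothetical governed semantic layer: meaning and rules live here, not in the agent.
SEMANTIC_LAYER = {
    "customer": {
        "definition": "A party holding at least one active product.",
        "valid_attributes": {"party_id", "legal_name", "status"},
        "rules": {"status": {"active", "dormant", "closed"}},
    }
}

def investigate(term: str, records: list) -> dict:
    """Resolve the term's meaning first, then test the data against governed rules."""
    spec = SEMANTIC_LAYER[term]  # step 1: resolve business meaning
    findings = []
    for i, rec in enumerate(records):  # step 2: profile data against the definition
        extra = set(rec) - spec["valid_attributes"]
        if extra:
            findings.append((i, f"unexpected attributes: {sorted(extra)}"))
        for attr, allowed in spec["rules"].items():
            if attr in rec and rec[attr] not in allowed:
                findings.append((i, f"{attr}={rec[attr]!r} violates governed rule"))
    # step 3: raise a governed ticket for human review; never auto-remediate
    return {"term": term, "findings": findings,
            "action": "raise_ticket" if findings else "none"}

result = investigate("customer", [
    {"party_id": "P-1", "status": "active"},
    {"party_id": "P-2", "status": "unknown"},
])
print(result["action"])  # raise_ticket
```

The agent never invents its own definition of "customer": it reads meaning from the governed layer, which is what makes its findings auditable and its escalations trustworthy.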

This is not hypothetical. It is already happening inside Tier-1 banks, albeit in pockets. The difference between those pockets and enterprise-wide capability is not primarily a technology gap. It is a gap in strategy, people, ways of working and governance.

The Window is Open, but Not Indefinitely

For banks that have not yet started this journey, the window remains open. The tooling is maturing, the patterns are becoming clearer and the lessons from early movers are available to those willing to engage with them seriously.

But the compounding nature of both semantic governance and workforce capability means that delay has a cost. Every year without a governed semantic layer is another year of meaning diverging quietly across domains. Every year without deliberate investment in agentic ways of working is another year of engineers unprepared for the shift that is already underway.

The institutions that will lead are not the ones waiting for certainty. They are the ones building the foundation now, learning as they go and adjusting based on what the work reveals.

The Author

Etienne Oosthuysen

Chief Technology Officer

Etienne Oosthuysen is a Chief Technology Officer with expertise across data, AI, cloud and digital engineering. He operates at the intersection of technology strategy, client engagement and commercial outcomes, connecting capabilities across enterprise platforms to address complex challenges and shape cohesive solutions.

He focuses on helping organizations rethink how technology is applied, aligning technology, people and modern ways of working to move beyond siloed delivery toward integrated, outcome-driven models. He combines strong technical foundations with a pragmatic approach to deliver measurable business outcomes.