Organizing GenAI within a big corporation

With the rise of Generative AI, the democratization of AI has begun, and with it a transformation of the AI landscape within big companies. In this Techblog, we explore how our company is scaling Generative AI across the organization, from experimentation to deployment. By implementing a GenAI building block strategy, we can accelerate and scale use cases while keeping control and oversight over our GenAI solutions.

When I was introduced to the world of data science almost a decade ago, building machine learning models was a highly specialized craft. Only a small group of people with deep expertise in statistics, algorithms, and engineering were working on these projects. It could take months—often longer—just to get a model to the point where it was ready for production.

And it wasn’t just about writing the code; it was a long, drawn-out process of endless data pre-processing, feature engineering, hyperparameter tuning, and carefully validating the model’s performance with rigorous statistical tests, all while you documented every step, insight and decision made. Once we had a model that seemed to work, we had to engineer it for deployment, which involved setting up data pipelines, managing infrastructure, and dealing with scaling issues. These scarce talents were typically centralized in one department, which made it easier to share knowledge, reuse tools, and work with unified platforms.

Enter Generative AI: the democratization of AI


Fast forward to today, and (some) things are very different. With the rise of Generative AI, the playing field has been leveled. AI is no longer confined to a small group of experts. Now, anyone—whether you’re in marketing, HR, or customer support—can leverage powerful AI tools to augment your work. Think of it like having a co-pilot for your job: Whether you’re drafting content, writing code, or answering customer queries, Generative AI tools can help you work faster, more efficiently, and sometimes even with better quality than doing it all manually.

For individuals, it’s like a superpower. You don’t need to be a data scientist to harness the power of AI anymore. Just grab a pre-built model or use a no-code tool to build your own, and you’re good to go. That may seem like an amazing shift, but it comes with its own set of challenges, especially when you try to scale it across an entire organization like a bank.

“If you don’t have the right foundations in place, Generative AI is just another tool that isn’t performing as expected.”
Veerle van den Akker


The double-edged sword: AI for everyone… But at what cost?


Now, here’s where things start to get tricky. While Generative AI is accessible to everyone in the organization, that also means everyone wants to build their own AI solution. The temptation is real. Within hours, you can have a chatbot or content generator up and running, testing ideas and proving that “AI works.” But as with all shiny new toys, there’s a catch: Most people want to experiment (and develop) with Generative AI, but they don’t want to maintain it.

And that’s where the 80/20 rule becomes painfully relevant. In the world of Generative AI, it’s super easy to launch an MVP or proof of concept. But that only covers the first 20% of the work. The next 80%—the part that actually makes AI scalable, reliable, and useful for (business) processes—is where the real work begins. That’s where AI expertise is required (again), and where you will probably realize that much more is needed to make your AI solution perform in production and to monitor it at all times.

Everyone will see the potential of the technology, but getting it to deliver actual business value, consistently and reliably, is a whole different ballgame.


Beyond the tool: the need for solid foundations


If you don’t have the right foundations in place, Generative AI is just another tool that isn’t performing as expected. And this is where the real heavy lifting comes in.

Recall that I mentioned that some things are very different? Well, some things are still the same.

Data quality is paramount. The garbage-in-garbage-out rule hasn’t changed—if your data isn’t clean, consistent, and accurate, your AI will be no better than a broken compass. Similarly, your infrastructure needs to be solid. If you don’t have the right data platforms and tech stack to support AI at scale, everything falls apart.



But it’s not just about the infrastructure and the data—it’s also about having the right monitoring, logging, and evaluation systems in place. In a world where AI models can drift, hallucinate, or otherwise behave unpredictably, you need to be in control. Monitoring and logging ensure that you can track the model’s performance, understand where things are going wrong, and quickly address issues as they arise. It’s about setting up guardrails that keep you in control of your AI.
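To make this concrete, here is a minimal sketch of what such monitoring and logging could look like in practice. This is an illustration, not our actual tooling: the `monitored_completion` wrapper and the stand-in model function are hypothetical, and a real setup would ship these metrics to a proper observability platform rather than a local logger.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-monitor")


def monitored_completion(prompt: str, call_model) -> str:
    """Call a GenAI model and log latency plus basic output checks."""
    start = time.perf_counter()
    answer = call_model(prompt)
    latency = time.perf_counter() - start
    # Log the signals you would track over time to spot drift or regressions.
    log.info("prompt_chars=%d answer_chars=%d latency_s=%.3f",
             len(prompt), len(answer), latency)
    # Guardrail: flag empty answers for human review instead of passing them on.
    if not answer.strip():
        log.warning("empty answer for prompt: %r", prompt[:80])
    return answer


# Usage with a stand-in model function (a real one would call your LLM endpoint):
fake_model = lambda p: "Generative AI answer"
result = monitored_completion("What is a building block?", fake_model)
```

The point is not the specific metrics but the pattern: every model call goes through one instrumented path, so issues surface in your logs instead of in your users' hands.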

From experimentation to real-world deployment

Over the past year, we’ve been experimenting with Generative AI across different departments in our organization. We’ve learned a ton: which use cases work, what tools perform best, and where we need to tighten up our processes. But now it’s time to stop experimenting and start scaling.

This is where the transition from experimentation to real-world deployment becomes critical. The real challenge is taking the knowledge and insights we’ve gained through experimentation and building out a framework that allows us to deploy AI responsibly, consistently, and at scale across the organization.


Building a centralized team to accelerate


We do this by having a centralized team of talented experts in place. This team doesn’t just build random AI solutions for different departments; instead, it helps accelerate the organization’s ability to leverage GenAI in a scalable, sustainable, and responsible way.

One of the key strategies we’re using to manage this complexity is the creation of GenAI Building Blocks: standardized, reusable components that can be leveraged across various use cases.

By co-creating specific GenAI use cases with business Tribes, we ensure that the GenAI Building Blocks work in practice. The blocks are made accessible to the entire organization and designed to be plug-and-play, so teams can apply best practices without having to reinvent the wheel. Most importantly, we want to prevent duplicate work: we have learned that developing GenAI applications requires time, focus, and deep (Gen)AI expertise. By centralizing this knowledge and standardizing as much as possible, we take this burden away from the business, which enables them to focus on creating value with their product delivery.
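The plug-and-play idea can be sketched as a central registry of reusable components. The names here (`BuildingBlock`, `BlockRegistry`, the "summarizer" block) are hypothetical illustrations of the pattern, not our internal API; the point is that teams pull a vetted block from one catalogue instead of each building their own.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class BuildingBlock:
    """A reusable GenAI component: a name plus a function that runs it."""
    name: str
    run: Callable[[str], str]


@dataclass
class BlockRegistry:
    """Central catalogue so teams reuse blocks instead of rebuilding them."""
    blocks: Dict[str, BuildingBlock] = field(default_factory=dict)

    def register(self, block: BuildingBlock) -> None:
        self.blocks[block.name] = block

    def get(self, name: str) -> BuildingBlock:
        return self.blocks[name]


# The central team registers a vetted block once...
registry = BlockRegistry()
registry.register(BuildingBlock("summarizer", lambda text: text[:50] + "..."))

# ...and a business team plugs it in instead of building its own:
summary = registry.get("summarizer").run("A long quarterly report " * 20)
```

In a real platform the registry would sit behind an internal API with versioning and access control, but the design choice is the same: one shared catalogue, many consumers.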

And it’s not just about standardization for convenience. We also develop Responsible GenAI Building Blocks, designed to provide control and oversight over our GenAI solutions, adhere to organizational standards, and mitigate risks like hallucination. They include mechanisms for monitoring and oversight. By putting these frameworks in place, we can scale AI responsibly, minimizing risks while maximizing impact.



About the author

  • Veerle van den Akker, Product Manager
Veerle van den Akker started her career as a data scientist and joined Rabobank in 2022 as a Product Manager within the Data & Analytics organization. Since the beginning of Generative AI at Rabobank, she has been involved in various roles and is currently road manager Generative AI.