Expert Mode: Predictable AI Agents Are Coming—Here’s How to Stay in Control

This article is based on an interview with Peter van der Putten, Director of the AI Lab and Lead Scientist at Pega, conducted by Greg Kihlström, AI and marketing technology keynote speaker, for The Agile Brand with Greg Kihlström podcast. Listen to the original episode here:

The era of generative AI has been explosive—but unpredictable. That’s fine for brainstorming or content creation. But when lives, money, or compliance are on the line, unpredictability becomes a liability. That’s why Peter van der Putten, Director of the AI Lab and Lead Scientist at Pega, is focused on a new frontier: predictable, governed AI agents that can innovate and stay within the lines.

In this conversation from PegaWorld 2025, van der Putten explores how agentic AI can move from flashy demos to real enterprise applications. The vision? Autonomous systems that are capable, compliant, and fully auditable—bringing AI out of the sandbox and into the workflows of industries that don’t tolerate surprises.


From Prompts to Plans: Agentic AI with Guardrails

Most generative AI systems today are reactive: you ask, they respond. But agentic AI introduces a shift—toward autonomous systems that can sense context, plan steps, execute actions, and achieve goals.
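
To make that contrast concrete, here is a minimal sense-plan-act loop in Python. It is purely illustrative and not Pega-specific; the function name and the toy counting goal are assumptions for the sketch.

```python
# A minimal, generic sense-plan-act loop (illustrative, not Pega-specific).
# Unlike a reactive prompt/response model, the agent observes state, plans
# the next step toward a goal, acts, and repeats until done or out of budget.

def run_agent(goal: int, state: dict, max_steps: int = 10) -> bool:
    for _ in range(max_steps):
        # Sense: observe the current state of the environment.
        current = state["count"]
        if current >= goal:          # goal reached, stop acting
            return True
        # Plan: a trivial policy here; a real agent would reason over context.
        step = 1
        # Act: execute the planned step against the environment.
        state["count"] += step
    return False                     # stop safely if the goal isn't reached

state = {"count": 0}
print(run_agent(goal=3, state=state))  # True: goal met within the step budget
```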

“Generative AI is quite passive. You prompt it, it responds. But agentic AI has agency. It can act to achieve specific outcomes,” says van der Putten.

This opens new possibilities—but also new risks. As van der Putten jokes, “You can’t just unleash a mob of agents and hope for the best.” That’s where predictability comes in. Pega’s platform is designed to ensure that agents aren’t just capable—they’re governed. They can act, but only within clearly defined parameters.

For example, underwriting a life insurance policy might include a rule: if a medical check is required, the decision must follow a pre-approved process. The AI can help identify the requirement—but it can’t make the medical call. That kind of boundary-setting is critical.
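
As a rough illustration of that boundary, the sketch below gates the agent's output so the medical decision is routed to a pre-approved process rather than made by the model. The class and function names are hypothetical, not Pega's API.

```python
from dataclasses import dataclass

@dataclass
class UnderwritingCase:
    applicant_id: str
    medical_check_required: bool = False

def route_to_medical_workflow(case: UnderwritingCase) -> str:
    # Stand-in for the pre-approved, human- and rule-driven process.
    return f"case {case.applicant_id}: escalated to governed medical review"

def agent_review(case: UnderwritingCase) -> str:
    """The agent may flag the requirement, but never makes the medical call."""
    if case.medical_check_required:
        # Boundary: hand off to the pre-approved workflow, not the model.
        return route_to_medical_workflow(case)
    return "proceed with standard underwriting"

print(agent_review(UnderwritingCase("A-102", medical_check_required=True)))
```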


Predictability Over Creativity—Especially in Regulated Industries

AI creativity is valuable—in branding, storytelling, and design. But in healthcare, finance, or government, creativity isn’t always welcome. These sectors demand reliability and transparency.

“You can’t be creative in insurance if it means inconsistent decisions,” van der Putten notes. “You need tools that behave the same way every time.”

In Pega’s model, agents don’t replace workflows. They enhance them. They can assist with tasks like data gathering, form generation, or identifying next steps—but final decisions still rely on structured rules and processes. It’s a layered system where autonomy is earned, not assumed.

And everything is auditable. Whether an outcome was triggered by a human, an agent, or a traditional rule engine, Pega logs every decision. That means leaders can trace outcomes back to specific logic or actions—critical for trust, governance, and compliance.

“Agents are just another actor in the system. And they’re audited like any other actor,” says van der Putten.
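
Here is a minimal sketch of what that uniform auditing might look like, assuming every actor writes the same kind of record to a shared log. The field names and helper are illustrative, not Pega's logging schema.

```python
import json
from datetime import datetime, timezone

def log_decision(actor: str, actor_type: str, action: str, rationale: str) -> str:
    """Write one uniform audit record; actor_type is 'human', 'agent', or 'rule_engine'."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "actor_type": actor_type,
        "action": action,
        "rationale": rationale,
    }
    entry = json.dumps(record)
    print(entry)  # in practice, append to an immutable audit store
    return entry

log_decision("claims_agent_7", "agent", "request_documents",
             "missing proof of loss on claim 1123")
```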


Types of Agents—and How They Work Together

Pega’s framework supports multiple types of AI agents, each with a specific function:

  • Design agents: Analyze documentation, legacy systems, or process mining data to help design workflows.
  • Automation agents: Execute tasks within applications, such as verifying data or orchestrating steps.
  • Coach agents: Guide human users through decisions—like whether to approve or escalate a claim.
  • Knowledge agents: Pull relevant information from documents, rules, or policies to support real-time questions.

“One agent might even act as a tool for another agent,” van der Putten explains. “You can create multi-agent systems with layered interactions, all governed and observable.”
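
A toy sketch of that layering in Python: a knowledge agent exposed as a tool that an automation agent calls. The classes and the keyword lookup are assumptions for illustration, not Pega's framework.

```python
class KnowledgeAgent:
    """Answers questions from a small policy store (keyword lookup stands in for retrieval)."""

    def __init__(self, policies: dict[str, str]):
        self.policies = policies

    def answer(self, question: str) -> str:
        for topic, text in self.policies.items():
            if topic in question.lower():
                return text
        return "no matching policy found"

class AutomationAgent:
    """Executes tasks, using the knowledge agent as one of its tools."""

    def __init__(self, tools: dict[str, KnowledgeAgent]):
        self.tools = tools  # one agent acting as a tool for another

    def handle(self, task: str) -> str:
        policy = self.tools["knowledge"].answer(task)
        return f"executing '{task}' under policy guidance: {policy}"

kb = KnowledgeAgent({"refund": "Refunds over $500 require supervisor sign-off."})
bot = AutomationAgent({"knowledge": kb})
print(bot.handle("process refund for order 881"))
```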

This modular approach allows companies to roll out agentic AI gradually. Instead of a massive system overhaul, organizations can start by applying the right kind of agent to the right kind of problem—and scale from there.


What This Looks Like in Practice

Pega has already deployed internal agents like “Iris,” a knowledge assistant that fields thousands of internal requests daily. It’s low-risk—no direct changes to systems—but high-value in terms of time saved and context provided.

And clients are beginning to follow suit. Rabobank, for example, has been experimenting with agentic AI in financial crime operations. Their “knowledge buddy” helps 3,000 analysts interpret complex work instructions and regulations. The next step? Using agents to build full customer risk profiles during onboarding or investigations.

“It started with answering questions, and now it’s growing into more complex decision support,” van der Putten says.

These experiments are tightly scoped, audited, and iterated based on feedback. That’s the Pega approach: try it, test it, then scale what works.


Conclusion

Agentic AI isn’t science fiction anymore. But as Peter van der Putten makes clear, its real promise lies not in wild autonomy—but in governed innovation. When AI agents have defined roles, clear boundaries, and full transparency, they don’t replace human judgment—they reinforce it.

The future of enterprise AI isn’t about letting machines run wild. It’s about deploying predictable agents that accelerate transformation without sacrificing control.

Because in regulated industries—and in any business that values trust—predictability isn’t a constraint.
It’s the foundation for doing AI right.
