What if your agentic AI could innovate autonomously—and still follow your business rules?
Agility in the age of AI doesn’t just mean speed. It means predictability, accountability, and the ability to innovate autonomously without businesses losing control of what is important and what their customers value.
Today we are here at PegaWorld 2025 at the MGM Grand in Las Vegas, and we’re going to talk about how enterprises are starting to move beyond prompt-based, freewheeling AI models and toward something more mature, governed, and scalable: Predictable AI Agents. And we’ll explore what that means for the future of autonomous enterprise decisioning and innovation.
To help me dig into this topic, I’d like to welcome Peter van der Putten, Director of the AI Lab and Lead Scientist at Pega.
About Peter van der Putten
Peter van der Putten is an assistant professor of AI at Leiden University and Director of the AI Lab at Pegasystems. Through his expertise in artificial intelligence and machine learning, Peter helps leading brands become more ‘human’ by transforming into customer-centric organizations.
In addition to his role at Pegasystems, Peter is an assistant professor and creative researcher at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands.
Peter is particularly interested in how intelligence can evolve through learning, in man or machine. He has an MSc in Cognitive Artificial Intelligence from Utrecht University and a PhD in data mining from Leiden University, and combines academic research with applying these technologies in business. He teaches New Media New Technology and supervises MSc thesis projects.
Resources
Pega: https://www.pega.com
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow
Catch the future of e-commerce at eTail Boston, August 11-14, 2025. Register now: https://bit.ly/etailboston and use code PARTNER20 for 20% off for retailers and brands
Online Scrum Master Summit is happening June 17-19. This 3-day virtual event is open for registration. Visit http://www.osms25.com and get a 25% discount off Premium All-Access Passes with the code osms25agilebrand
Don’t Miss MAICON 2025, October 14-16 in Cleveland – the event bringing together the brightest minds and leading voices in AI. Use Code AGILE150 for $150 off registration. Go here to register: https://bit.ly/agile150
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don’t miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand is produced by Missing Link, a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
Transcript
Greg Kihlstrom (00:00)
What if your agentic AI could innovate autonomously and still follow your business rules? Agility in the age of AI doesn’t just mean speed. It means predictability, accountability, and the ability to innovate autonomously without businesses losing control of what’s important and what their customers value. Today, we’re here at PegaWorld 2025 at the MGM Grand in Las Vegas, and we’re going to talk about how enterprises are starting to move beyond prompt-based, freewheeling AI models and toward something more mature, governed, and scalable: predictable AI agents. And we’re going to explore what that means for the future of autonomous enterprise decisioning and innovation. To help me dig into this topic, I’d like to welcome Peter van der Putten, Director of the AI Lab and Lead Scientist at Pega. Peter, welcome to the show.
Peter van der Putten (00:49)
Yeah, thanks for having me.
Greg Kihlstrom (00:50)
Yeah, welcome and welcome back. I think this is appearance number three, so you’re going for a record. I love that. So, for those that didn’t catch you before, why don’t you give a little background on what you do at Pega.
Peter van der Putten (01:05)
Yeah, awesome. So my formal title is Director of the AI Lab and Lead Scientist for Pega. So I’m responsible for AI innovation. I report into our CTO, Don Schuerman, and with Pega being a platform for AI and automation, I’m pretty much his AI guy. That’s the shorter version of it. I really look at innovating, helping our clients understand how they can innovate with AI. But we also need to kind of, not eat our own dog food, but drink our own champagne. So how can we also, you know, renew Pega’s brand from an AI point of view, come up with new go-to-markets for AI, like agentic AI. But also, more on, let’s say, the technical side or even the research side: what are some of the latest AI innovations outside of Pega, and how do we bring them to Pega, right? So I’ve also got one foot in university as an assistant professor. There I need to keep up with the cool kids, you know, who are way smarter than I am. But that’s a good way to get that external lens, in a way, on what’s happening in AI.
Greg Kihlstrom (02:13)
Well, yeah, let’s dive in here. We’re going to talk about this from a few different perspectives. But first, I mentioned the topic of predictable AI at the top of the show. I’d love for you to unpack that: what do we mean when we talk about it? I think that’s something that certainly was already introduced this morning in the keynote at PegaWorld.
Peter van der Putten (02:31)
Absolutely. When we talk about these, particularly in the context of agentic AI, we talk about predictable AI agents. So to define a little bit what it is, let me maybe peel the onion a little bit. First, agentic AI. What’s the idea there? In general, agentic AI is AI that has more agency, right?

Well, when we look at generative AI so far, the likes of ChatGPT and whatnot, they’re amazing in what they can do, but they’re also quite passive. We need to give them a prompt and then they will give an answer, and that’s about it. When you think about agentic AI, we want to have, let’s say, AI systems that can operate in a world where they can sense the environment, they can understand what certain goals are to be achieved, and they can do things like planning, figuring out how they can use different tools and take actions to get to a particular outcome. So that’s kind of flipping around this passive role of gen AI into a more active type of AI, and that ultimately unlocks, as we call it, the autonomous enterprise.

Now, predictable AI agents: the idea there is, we’re kind of playing into this point that you can’t just say, I’ll unleash a mob of agents and they will kind of magically sort of jump in. What could possibly go wrong, right? Well, what could go wrong is that this stampede of agents is going to do all kinds of irresponsible stuff. So predictable AI agents, for us, is really about addressing this concern that enterprises have: how can we, on one side, empower these agents so that they have the right tools, et cetera, but also govern them so that they are doing the right things?

And how do we do that? There’s a mix of approaches. One is that we say: when you hear people in the industry talk about agents, they very much talk about agents at runtime, you know, like when you apply for a loan or you file an insurance claim, agents can actually help reach particular goals. But you could also use agents at design time, when you’re developing these apps. These agents could take in all kinds of requirements, walkthroughs, demo videos, you name it, and then help generate the right application. And that could be a very predictable application, because there are particular workflows in there. If I take that claims example, there would be a workflow that investigates the claim, that checks whether there’s a fraud issue, or sees what kind of in-network services we could offer to the customer that’s filing the claim, you name it. So we’re also using this agent idea not just at runtime but at design time, to figure out what the application should be.
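To make that sense-plan-act loop concrete, here is a minimal sketch in Python of the general pattern Peter describes. It is an illustration only, not Pega’s implementation: the tool names, the hard-coded planner, and the claim data are hypothetical stand-ins for what a real agent would delegate to an LLM and to enterprise services.

```python
# Minimal sketch of an agentic sense-plan-act loop (hypothetical, not Pega's API).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    goal: str
    tools: dict[str, Callable[[dict], dict]]    # named tools the agent may invoke
    memory: list = field(default_factory=list)  # everything observed and done so far

    def plan(self, observation: dict) -> tuple[str, dict]:
        # Pick the next tool and its arguments. A real system would ask an LLM;
        # here a trivial hard-coded policy keeps the sketch self-contained.
        if "claim" not in observation:
            return "fetch_claim", {"claim_id": observation["claim_id"]}
        return "summarize", {"claim": observation["claim"]}

    def run(self, observation: dict, max_steps: int = 5) -> dict:
        for _ in range(max_steps):        # bounded loop, so behavior stays predictable
            tool_name, args = self.plan(observation)
            result = self.tools[tool_name](args)
            self.memory.append((tool_name, args, result))
            observation |= result         # fold the tool's output into what we know
            if "summary" in observation:  # goal reached, stop acting
                break
        return observation

# Hypothetical tools standing in for real enterprise integrations.
tools = {
    "fetch_claim": lambda a: {"claim": {"id": a["claim_id"], "amount": 1200}},
    "summarize": lambda a: {"summary": f"Claim {a['claim']['id']} for ${a['claim']['amount']}"},
}

agent = Agent(goal="triage claim", tools=tools)
print(agent.run({"claim_id": "C-42"})["summary"])
```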
Greg Kihlstrom (05:29)
Well, that design component, and, you know, Pega Blueprint, there’s a lot to that. I think that seems to be a differentiator, so to speak. There’s lots of talk, first it was generative, now it’s agentic AI, there’s lots of talk about this. But to your point, without guardrails, those agents can, at best, just do less-than-optimal things, and at worst, maybe much worse.
Peter van der Putten (05:55)
It’s both in terms of empowering them and keeping them safe, right? I’ll be doing a presentation here at PegaWorld where I give these examples of, I don’t know, underwriting a life insurance policy. There are some elements like a medical check, and the decision whether a medical check needs to be done.

That’s not something you want to leave to the agent. That’s something where you have governed business rules that you want to execute. If the decision is that that medical review needs to happen, that’s then a process that gets executed with certain governed steps, and maybe some people need to come into the loop to actually give the final call. So it’s really about blending: giving the agents the right tools that they’re entitled to use, but also the very predictable tools, the business rules, the more traditional AI, workflows, you name it, and connecting them to those tools.
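A hedged sketch of what that blend can look like in code: the medical-check decision lives in a deterministic, governed business rule that the agent may invoke but never override. The rule, thresholds, and registry below are invented for illustration and are not Pega’s rule engine.

```python
# Sketch: a governed business rule exposed as a tool (illustrative, thresholds invented).

def medical_check_required(applicant: dict) -> bool:
    # Governed underwriting rule: versioned, testable, auditable deterministic code.
    # The agent can call it, but the decision logic is never left to the LLM.
    return applicant["age"] > 50 or applicant["coverage"] > 500_000

# The agent's tool registry can mix governed rules with softer, generative tools.
TOOLS = {
    "medical_check_required": {"fn": medical_check_required, "kind": "business_rule"},
    # "draft_customer_email": {...}  # a generative tool would sit alongside it
}

applicant = {"age": 58, "coverage": 250_000}
if TOOLS["medical_check_required"]["fn"](applicant):
    # From here a governed workflow takes over, possibly with a human approver.
    print("Route to medical-review workflow")
else:
    print("No medical check needed")
```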
Greg Kihlstrom (06:55)
And it’s also about using different types of agents, right? You mentioned design agents, conversation agents, automation agents, and so on. I know you described it a little bit, but can you talk about how these kind of work together?
Peter van der Putten (07:11)
Yeah. Let’s talk through a use case. I’ll stick with this one: let’s file an insurance claim, because that’s the most exciting problem in the world. Maybe, maybe not. I don’t know. Of course we can use these design agents when, let’s say, there’s some legacy application from 40 years ago and, God knows, no one actually knows what it is and how it works.
Greg Kihlstrom (07:24)
Yeah.
Peter van der Putten (07:40)
But we do have some training videos available that we could use. There are some old documents from ages ago. So we can use these agents to figure out, based on all that kind of rudimentary information, and maybe some information we got out of process mining as well, what’s really happening when we look across, you know, the history of the last 10,000 claims, and use it as inspiration for these design agents to come up with not just a like-for-like translation of that old application, but a redesign of it for the modern world. So no use doing that old lift and shift. So those are the design agents. And then we actually have an application in Pega. We can import it into the platform and then maybe further connect it to real data sources. That’s where agents can come into play again, because they can help to identify:
Greg Kihlstrom (08:20)
or whatever kind of.
Peter van der Putten (08:37)
This is, ideally, the data model you would like to have in your application here for first notice of loss. But what kind of physical data sources do we actually have access to? And there are further types of agents that can help to then further develop that application, test automation or whatever it is. So then we have a running application. But maybe we want to coach a claims agent through some claims adjudication process. So then it’s more of a coach agent that would figure out, I mean, whether this is a claim that we can probably just readily approve or not, or whether we need to forward it on to be investigated, or whether we can recover the claim, et cetera. And we also connect the claims agent to different forms of knowledge, so-called knowledge agents, where we can ask all kinds of questions, and based on the policy and all kinds of rules and regulations and whatnot, we’ll get answers to our questions. So these are examples of different types of agents that we can use when we develop these apps, but also when we run these apps.
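As a rough sketch of that coach-agent idea, assuming made-up claim fields and thresholds: the agent proposes a disposition with its reasons, and the human claims handler keeps the final call.

```python
# Sketch: a coach agent recommends a claim disposition; a human decides.
# Claim fields, thresholds, and actions are hypothetical.

def coach_recommendation(claim: dict) -> dict:
    # Return the recommendation together with the reason, so the human can judge it.
    if claim["fraud_score"] > 0.8:
        return {"action": "investigate", "reason": "high fraud score"}
    if claim["amount"] < 1_000:
        return {"action": "approve", "reason": "low amount, low risk"}
    return {"action": "review", "reason": "needs manual adjudication"}

claim = {"id": "C-42", "amount": 12_000, "fraud_score": 0.2}
rec = coach_recommendation(claim)
print(f"Recommendation: {rec['action']} ({rec['reason']})")
human_decision = "approved"  # stand-in for the handler's explicit sign-off
```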
Greg Kihlstrom (09:42)
And so maybe to keep on the insurance, or the regulated industry, theme here: I think any brand wants to stay on brand and has certain guardrails, but in those highly regulated industries, it’s easy to talk about examples. One of the phrases in one of the announcements from Pega at the conference is predictability over creativity. It reminds me, creativity is an amazing thing, but you don’t necessarily want it in some things, like accounting, or in insurance in all aspects, or whatever. So why is that trade-off so crucial for industries like finance, health care, or even government?
Peter van der Putten (10:28)
I think that’s a great question. I’m maybe also of the creative sort, but you cannot be creative unless you also have some predictability. To belabor that metaphor a little bit: there are awesome musicians out there, but the instruments that they play actually need to be quite predictable. If I hit a chord or whatever, I don’t want it to sound different if I play it exactly the same way. Otherwise I can’t be…
Greg Kihlstrom (10:54)
Timing. Yeah.
Peter van der Putten (10:55)
So you need to have predictable tools, even if you want to be creative. And in these regulated industries, take this let’s-file-an-insurance-claim example: where the agent maybe shines is in trying to figure out, for this particular claim, what kind of different data sources or knowledge sources or policy information is out there, and how to combine all that context into figuring out how to proceed with this claim. But at some point you’re hitting parts where you don’t want to leave it to generative AI, where you want to go through a fixed process of particular steps, or you want to indeed invoke very bespoke, specific business rules, because they encode your policy or whatever it is.

So you can see that it’s not so much a case of either/or: “we can get rid of all our workflows because we have agents now”, that’s never going to work. We really need to empower these agents and give them very predictable tools, like workflows or business rules or the more traditional machine learning AI, to be used as part of that process, so that you get these predictable outcomes. A more extreme variant of this is, if we would only use agents at design time to actually create these workflows, then you could even completely lock them down and say: we shall always follow this particular process.
Greg Kihlstrom (12:19)
Yeah, because at the end of the day, you know, anyone that’s used ChatGPT even knows that if you ask it the same thing, it’s going to give you different answers depending on I don’t know what, but it’s going to give you different answers each time. We’ve heard of hallucinations and things like that. So with all of these things, it’s not just as simple as: let’s throw AI at the problem, whatever that blanket AI term actually means. I think a lot of times it means generative AI in people’s minds, just based on the last couple of years. So this is where, how does a platform like Pega’s ensure that transparency? I mean, transparency seems key, as well as visibility and control, in these, again, admittedly complex multi-agent scenarios.
Peter van der Putten (13:06)
Yeah, so it works at different levels. When we design these agents, we can very tightly control what kind of tools, data, and information sources, what kind of knowledge, a particular agent has access to. And we could even have things like: when a particular agent, as part of the interaction, wants to use a particular tool, permission needs to be granted by a human to be able to actually do so, based on what the type of tool is.

If it’s just getting some information from a particular source, then it’s maybe okay. But if we want a particular workflow process to kick in that’s going to make changes to, whatever, the customer contract or whatever it is, we probably want to force the human back into the loop to explicitly say: yes, go ahead and do this. So this is at the level of configuration of those agents.

But there’s also when we’re actually using these agents. Forget agents for a moment: any work that happens in Pega, we audit to the nth degree, right? What part of the process are you in? What kind of data are you using? What kind of actions have you taken? So for us, agents are nothing new, because they’re just another actor in the system, and we’ll audit to the nth degree what it is they’re doing. If I get a complaint three months down the line, I can go back and see what happened in this particular insurance claim. Who did what, the human, the agent, what kind of information was used? Yeah.
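Here is a small sketch of those two mechanisms, permission gating and auditing, under invented names; it shows the shape of the design Peter describes, not Pega’s actual audit machinery. Read-only tools run freely, state-changing tools require a named human approver, and every call lands in an audit log either way.

```python
# Sketch: permission-gated tool calls with a full audit trail (names invented).
import datetime

AUDIT_LOG: list[dict] = []

def call_tool(actor, tool, args, *, mutates, approver=None):
    # Gate: anything that changes state needs an explicit human approval first.
    if mutates and approver is None:
        raise PermissionError(f"{tool.__name__} changes state; human approval required")
    result = tool(**args)
    # Audit: humans and agents are logged identically, as "just another actor".
    AUDIT_LOG.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "tool": tool.__name__,
        "args": args,
        "approved_by": approver,
    })
    return result

def lookup_policy(policy_id):     # read-only: no approval needed
    return {"policy_id": policy_id, "status": "active"}

def amend_contract(contract_id):  # state-changing: gated behind a human
    return {"contract_id": contract_id, "amended": True}

call_tool("claims-agent-7", lookup_policy, {"policy_id": "P-1"}, mutates=False)
call_tool("claims-agent-7", amend_contract, {"contract_id": "K-9"},
          mutates=True, approver="jane.doe")
print(f"{len(AUDIT_LOG)} audited actions")  # reviewable months later, per claim
```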
Greg Kihlstrom (14:40)
I mean, it sounds a lot like, and I think this should make companies feel a little better about it, what you would do with employees, right? Exactly. Yeah.
Peter van der Putten (14:54)
Yeah, yeah. I don’t want to make it sound like we were super smart. It’s more that we had to build it anyway. We first started with straight-through workflows, in very regulated environments. Then it was: we also want to orchestrate not just straight-through workflows but people doing stuff, and make sure that we audit all of this and that we keep a context of what’s going on with this particular case. And then: we have real-time decisioning, we need to be able to make real-time decisions, and again control what kind of data is being used, and be able to audit and trace back all these decisions. So for us, agents are, in that sense, just another actor that gets involved in the mix, but they will be controlled and audited at least at the same level, if not more.
Greg Kihlstrom (15:41)
So yeah, it’s not like the machines are running amok or whatever. It’s like an employee that doesn’t take vacation or whatever.
Peter van der Putten (15:49)
Now I make it sound like it’s super easy, but actually, without having that environment where you’re doing all of that, then you’re…
Greg Kihlstrom (15:59)
Without the guardrails in place, you’re kind of just hoping. But with a system like this, and I guess the other parallel is just with the different types of agents, you are using the right tool at the right time, and only within those parameters.
Peter van der Putten (16:15)
Exactly. And agents could also be, one agent could become, a tool for another agent, right? So you can get quite complex interactions, where you still want to have all the instrumentation in place so that you can actually see, you know, when I’m in the middle of the claims process, how we got to a certain decision outcome or recommendation, or maybe three months down the line, if you’re in an audit situation or whatever. Yeah.
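To illustrate that composition, here is a toy sketch where a claims agent calls a knowledge agent as if it were just another tool, with a single trace ID threading through the chain so the interaction can be reconstructed later. The agents, the canned answer, and the trace format are all hypothetical.

```python
# Sketch: one agent used as a tool by another, tied together by a trace ID.
import uuid

TRACE: list[tuple[str, str, str]] = []  # (trace_id, actor, event)

def knowledge_agent(question: str, trace_id: str) -> str:
    TRACE.append((trace_id, "knowledge-agent", f"answered: {question}"))
    return "Policy covers water damage up to $10,000."  # canned answer for the sketch

def claims_agent(claim_id: str) -> str:
    trace_id = str(uuid.uuid4())  # one ID threads through every nested call
    TRACE.append((trace_id, "claims-agent", f"handling claim {claim_id}"))
    # From the claims agent's point of view, the knowledge agent is just a tool.
    answer = knowledge_agent("Is water damage covered?", trace_id)
    TRACE.append((trace_id, "claims-agent", f"used answer: {answer}"))
    return answer

claims_agent("C-42")
for entry in TRACE:  # the audit view: how did we get to this recommendation?
    print(entry)
```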
Greg Kihlstrom (16:40)
So I know we’re in relatively early days here with some of this. Have you seen any early examples or feedback from clients using these predictable AI agents, and how they’re being used?
Peter van der Putten (16:54)
Yeah. So, what we always like to do, you know, we’re not the kind of company that, like sometimes people maybe say, well, you guys should be a little bit more, let’s say, willing to first make some grand claims and then figure out later how it’s going to work. We’re quite down to earth. We like to really get hands-on and technical and try things out. So this agentic service, it was a bit of a skunkworks: we built it into the platform already, I think a year and a half to maybe even two years ago. And that allowed us to get experience with different types of applications internally as well, right?

So one thing that we prototyped in parallel to that was an intern, Intern Iris, we gave her a name, and you can ask her all kinds of different types of questions. She has access to product documentation, customer information, whatever it is, and then she will address your question. It’s an internal application, but it’s being used by a company of roughly 5,000 to 6,000 people, and there are 1,000 to 2,000 inbound requests made to Iris every single day. So for us it was interesting to see that that’s something that caught on really early. Also because it’s lower risk, in the sense that the actions it takes are gathering information from different sources and then addressing the question. The worst thing it can do is maybe provide the wrong answer; it’s not going to say, well, we’re sending an invoice to one of our partners or whatever. So that’s an example of an early thing. But also, tomorrow Rabobank will be presenting as one of the keynotes, right? They’re going to talk about some of the hackathons and early implementations they’ve done with agentic technology
in the financial economic crime area.
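An Iris-style knowledge agent boils down to retrieve-then-answer over internal sources. The sketch below uses crude keyword overlap in place of real embeddings and an LLM, and invented documents, just to show why this class of agent is lower risk: it only reads, so the worst failure is a wrong answer.

```python
# Sketch of an internal Q&A (knowledge) agent: retrieve snippets, answer from them.
# Keyword overlap stands in for embeddings + an LLM; the documents are invented.

DOCS = {
    "product": "Pega Blueprint turns requirements into workflow applications.",
    "support": "Customers can raise tickets through the support portal.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank documents by how many words they share with the question.
    words = set(question.lower().split())
    ranked = sorted(DOCS.values(),
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    context = " ".join(retrieve(question))
    # A real agent would hand the context to an LLM here. Either way the agent
    # only gathers and summarizes information; it takes no state-changing action.
    return f"Based on our docs: {context}"

print(answer("What does Pega Blueprint do?"))
```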
Greg Kihlstrom (18:41)
And that’s certainly an area where you want to have those guardrails in place.
Peter van der Putten (18:45)
It’s a highly regulated and confidential type of area. But they’re heavily experimenting with all kinds of forms of generative and agentic AI. It started a little bit with this Knowledge Buddy with all the work instructions for the financial economic crime analysts. They have 3,000 of them, right?

And you can imagine it’s an environment where unstructured data plays a very big role. It started with being able to answer questions on the basis of these work instructions. So that’s more of a knowledge agent. But they’ve also expanded into different areas. The latest thing they’re experimenting with, and they went public about it, so I can say it, is building up the full picture of a particular client that has raised some form of, well, either you’re onboarding the client, or there’s some form of alert that came out of some system that triggers them to really have a look at whether everything is still ‘okay’, in brackets. And then building up that full picture, that first full picture, is also something where they were experimenting with agentic technology. That’s also the way to get into it: just try it out.
Greg Kihlstrom (19:59)
Yeah, I love it. Well, a couple of last questions for you here. We’re here at PegaWorld in Las Vegas, where attendees can test drive some of this stuff as well. I’m looking forward to doing that in a few minutes myself. What do you hope attendees walk away with after experiencing some of this predictable AI firsthand?
Peter van der Putten (20:18)
Yeah, I think it’s… I always love to demystify AI a bit. When people say this agentic AI is just a fad, it will pass, and it’s way too dangerous to use anyway, I don’t believe that. And on the flip side, when the tech bros are running around and shouting, just throw as many agents as you want into your company and magically, you know, they will solve all problems, I get the shivers as well. So I think it’s really important to demystify it and just show how these things work, right? We’re very experience-based here at PegaWorld. So all the customers can just get hands-on in their particular area, whether it’s customer service or intelligent automation or one-to-one marketing, and experience, you know, how do these agents actually work? What can they do? What can’t they do? And they can take that home, and then they can get into hackathons or other ways to get hands-on with the technology, demystify it, and start to understand what the strengths are, how to implement it in their own organizations, and what things are maybe still too complex for these agentic systems.
Greg Kihlstrom (21:25)
Love it. Yeah, like I said, I’ll get my hands on it in a few minutes here, so I’m looking forward to that. Well, Peter, thanks so much for joining again and coming back to the show. One last question for you: what do you do to stay agile in your role, and how do you find a way to do it consistently?
Peter van der Putten (21:40)
Yeah, I’m a little bit of an AI nerd, right? So it’s not too hard for me to keep being interested in it. But in general: stay curious, and get real, right? So I also go beyond just the hype around the topic and play around with the technology. That’s one way, getting hands-on with it. Getting hands-on with these things is, for me, always a good way to do a sanity check, you know, on what’s really behind it. And like I said, I’m also at the university, so keeping up with the cool kids is another way to remain agile. Well, that’s hard enough in itself.