What if every AI interaction with a customer built upon the last, instead of starting from scratch every single time, or at least having it feel that way?
Agility requires not just reacting quickly to customer needs, but learning continuously from every interaction to anticipate the next one. This means our technology, especially our AI, can’t operate with amnesia; it must have a persistent, shared memory.
Today, we’re going to talk about breaking down the silos between our AI systems. We’ll explore a concept that promises to give our AI a persistent memory, allowing different models and platforms to share context and build a truly continuous, intelligent customer experience.
To help me discuss this topic, I’d like to welcome, David Funck, Chief Technology Officer at Avaya.
About David Funck
David Funck is the Chief Technology Officer at Avaya, bringing more than 30 years of experience in enterprise communications, cloud transformation, and contact center innovation. David has held senior technology leadership roles at Edify, Aspect Software, and Alvaria, where he served as CTO and led the transition of legacy platforms to modern, cloud-based architectures.
Before becoming CTO at Avaya, David served as the company’s Chief Architect, where he was responsible for advancing Avaya’s technology strategy and leading the Innovation Incubator and AI/ML initiatives. David joined Avaya through the acquisition of Edify, where he was CTO and played a key role in developing AI-native contact center solutions.
David’s expertise spans full-stack architecture, multi-cloud deployments across leading hyperscalers, and leading global development teams to deliver enterprise-scale solutions. He is known for driving high-impact product innovation, closing strategic customer contracts, and guiding companies through complex technical transformations.
David Funck on LinkedIn: https://www.linkedin.com/in/david-funck
Resources
Avaya: https://www.avaya.com
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow
Catch the future of e-commerce at eTail Palm Springs, Feb 23-26 in Palm Springs, CA. Go here for more details: https://etailwest.wbresearch.com/
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don’t miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand is produced by Missing Link—a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
Transcript
Greg Kihlstrom (00:00)
What if every AI interaction with a customer built upon the last instead of starting from scratch every single time or at least having it feel that way? Agility requires not just reacting quickly to customer needs, but learning continuously from every interaction to anticipate the next one. This means our technology, especially our AI, can’t operate with amnesia. It needs to have persistent shared memory. Today, we’re going to talk about breaking down the silos between our AI systems.
We’ll explore a concept that promises to give our AI a persistent memory, allowing different models and platforms to share context and build a truly continuous, intelligent customer experience. To help me discuss this topic, I’d like to welcome David Funck, Chief Technology Officer at Avaya. David, welcome to the show.
David Funck (00:45)
Thanks, Greg. It’s great to be here. I really appreciate it.
Greg Kihlstrom (00:48)
Yeah, looking forward to talking about this. It’s a topic top of many people’s minds, myself included. Before we dive in though, why don’t you give a little background on yourself and your role at Avaya.
David Funck (00:59)
Sure, so I’ve been with Avaya about a year and a half now. Before becoming CTO at Avaya, I was CTO at Aspect and Alvaria, and then moved to a smaller kind of startup, Edify. Edify was acquired by Avaya, and that’s how I came into the business. I took over as CTO about half a year ago. So it’s been a great ride. You know, I’ve been developing enterprise software in the contact center space for most of my career. I led the transition to cloud at Aspect, and I’m really happy and excited about our new product, Avaya Infinity, which I think is a great solution for the contact center. So I’m really happy to be here.
Greg Kihlstrom (01:46)
Yeah, and I guess to give a little context, and I know you briefly mentioned it, but could you tell us a little bit about Avaya? You know, who are your primary customers, and what are some of the challenges that you solve?
David Funck (01:57)
Sure. So ⁓ Avaya is a company that has been around for a while. It has a storied past. It grew out of, you know, the Bell companies, spun out of Lucent a while back and has been an industry leader in mostly voice communications and then subsequently contact center. And we still are industry leaders in the contact center and in critical communications infrastructure in really reliable audio and voice.
Our customers are the largest of the large, big airlines, large banks, financial services companies, governments. We service the US federal government and other governments across the globe. So we’re very much a global company and really cater to those large customers that have really specific needs.
Greg Kihlstrom (02:48)
Yeah, great, great. So yeah, let’s dive in. And I want to start with the strategy and the why of all this. So, you know, we’re going to talk about Model Context Protocol, or MCP for those familiar with it already. But just starting at the top, you know, it would be hard to escape that AI is everywhere and in every conversation and so on and so forth. And, you know, the recent boom kind of started with Gen AI and the ChatGPTs and all that. Agentic AI is certainly taking over a lot of the conversations recently, but what I’ve certainly seen, and I think what many that are paying attention are seeing, is let’s call it AI sprawl, or whatever label you want to give it. Everybody’s got their own AI features. Now most have their own agentic components. So we’re kind of at this place where, you know, what do you choose? How do you choose? And kind of what I touched on in the beginning: does my AI hook up with this other platform’s AI, and how do all these things work together? Because we may suffer internally in trying to get our work done, but the ones who suffer the most are the customers, right? So maybe, given that kind of context, can you talk a little bit about:
What is MCP and why should we care about it?
David Funck (04:17)
Sure. Well, you you’re absolutely right about the AI landscape right now being kind of dizzying. It’s changing so fast and you never know when a new industry leader is going to emerge. And it’s really hard to keep up. I was just at a conference a couple of weeks ago with one of our partners, Databricks. And, you know, that was a real focus of what we talked about there, how quickly things are changing and,
making sure that you prepare your company for a rapidly changing future. And I think MCP is an approach that really does that for agentic AI. So MCP stands for Model Context Protocol. This is a protocol that was established by Anthropic, one of the industry leaders in creating AI, and it standardizes the way that large language models can interact with the real world. If you think about large language models, they’re trained on a huge, vast array of different language sources. That’s why they’re called large language models. The model is built up by consuming just about every bit of text that humans have created on the planet. But they’re built on things that have happened before. And all they can do is predict the next token, the next word that logically might follow based on what has happened in the past. So you can get a large language model to very accurately predict what the weather in Tallahassee might be based on history, but you can’t ask it what that weather is right now, unless you use the Model Context Protocol. It gives context to the model. It gives the model the ability to reach out and ask a question about the real world or take action in the real world. And this is really critical with agentic AI. Agentic AI means AI that has agency. So instead of a large language model being something that could help with your homework, which was mind-blowing enough, now we can have large language models telling us what’s really going on and then acting on that based on instruction. It takes large language model capabilities completely to the next level. And the other great thing about it is it standardizes the way those things happen, so it really kind of democratizes that in a large way. You don’t need nearly as much technology input to make these kinds of connections.
With MCP, you can pick a large language model from any of the ones that are out there and connect it to what are essentially APIs that are available either in your enterprise back office or on the internet at large. All of that information then becomes available; all the things that those APIs can do become available to that large language model to be effectual. I think it’s a real game changer.
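To make David’s description concrete, here is a minimal, hypothetical sketch of the pattern MCP standardizes: a server advertises tools that a model can discover and invoke to reach the real world (the Tallahassee weather example above). The class, tool names, and schema below are invented for illustration; this is not the actual MCP SDK API, and a real server would answer over JSON-RPC and call a live data source.

```python
import json

class WeatherServer:
    """Toy MCP-style server exposing one tool: a current-weather lookup."""

    def list_tools(self):
        # The server describes its capabilities, so any LLM client can
        # discover them without custom integration code per model.
        return [{
            "name": "get_current_weather",
            "description": "Return current conditions for a city",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        }]

    def call_tool(self, name, arguments):
        if name == "get_current_weather":
            # A real server would query a live weather API here;
            # this stub returns fixed data for illustration.
            return {"city": arguments["city"], "temp_f": 78, "sky": "sunny"}
        raise ValueError(f"unknown tool: {name}")

# Client side: the model discovers the tools, then emits a tool call
# when it decides it needs live data rather than training-set history.
server = WeatherServer()
tools = server.list_tools()
result = server.call_tool("get_current_weather", {"city": "Tallahassee"})
print(json.dumps(result))
```

The key idea is the standardized discovery step: because every server describes its tools the same way, swapping the language model or adding a new server requires no bespoke glue code.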
Greg Kihlstrom (07:31)
Yeah, well, and building on that, to take this from the customer lens, you know, I think AI certainly is able to do a lot of things, and when trained on the right data and everything like that, do them well. But I think we’ve found ourselves getting back into that phone tree doom loop kind of thing, where the chat is trained on this one thing, but to get from here to there... So in other words, customers are dealing with very fragmented journeys again, in many cases, because they’re hopping from AI to AI. How does something like MCP help to connect those journeys from that customer perspective?
David Funck (08:15)
Those journeys that you’re talking about, they feel that way because they are very prescriptive. They are established based on what some contact center administrator thought you were going to do. You might be doing something totally different as a customer. The great thing about agentic AI and a large language model empowered by MCP is that it can adapt to your particular needs, and you can interact with it. It doesn’t follow that prescriptive path. It can react and tailor its response to your specific needs and your subsequent needs. It becomes a dialogue that’s happening with the artificial intelligence. Now, that’s a tall order for the contact center, right? And enterprises are very concerned about large language models hallucinating and things like that. And they’re expensive to run too, right? So the Avaya Infinity product is designed to help our customers try these things out, maybe by helping agents first, human agents in the contact center. Then you can use the same tools that we empower the human agent with, and once you’ve monitored and tested to make sure that the agentic AI is performing well and has the right boundaries around it, you can switch and make it available to end users. That kind of power, and the flexibility of all the different tools you can give AI, those are the things that I think are really changing the game for contact centers especially. And another thing I want to make clear about contact centers is that they are a fantastic information domain for artificial intelligence to really make a difference, because a lot of the things that human agents are doing there are repetitive. They’re relatively straightforward in many ways and kind of routine, right? AI is perfectly suited to handling those kinds of routine things. And if you can help a human agent with those routine things with the agentic AI, it leaves time for that human agent to do the things that only a human can do: to have empathy, to really listen, and to give your end customer a really excellent experience. So that’s the vision that we think is unlocked with agentic AI powered by MCP.
Greg Kihlstrom (10:47)
Yeah, so I mean, in that approach, you know, I would say it’s elevating the role of the humans that are there from having to do, to your point, the busy work or the repetitive stuff that’s time consuming but doesn’t require a lot of strategic thought or critical thinking. But then on the things where, to your point, there’s a challenge that has no script for it, or there’s no way to automate it, the humans can focus their time there, and then everybody kind of gets what they need, right? I mean, is it overly optimistic to say that AI could also play a strategic partnership role as well? So not only doing the repetitive work, but also kind of forming a partnership with the humans to augment some of that higher thinking.
David Funck (11:34)
I think that’s definitely not overly optimistic. We at Avaya have started talking about this concept, and I think my CEO, Patrick Dennis, who’s, you know, a brilliant thinker, actually coined this term: we’re talking about this idea of tandem care, right? Of AI and humans working together to improve the care that customers get in the contact center. That can be taking care of the routine things that I referenced before, but I do believe, like you said, some of the higher order things can be empowered too. Now, those are the kinds of things that I think you want to definitely constrain to your back office and to your human agents, helping them out, because you don’t want to necessarily expose that to your end customers right off the bat. But if you think about the way that AI is helping businesses across the globe right now, with knowledge workers really being empowered and able to do much, much more by connecting all these things together, the same thing is true for the human agent in the contact center. I definitely agree with you on that. That’s a possibility of a really great outcome.
Greg Kihlstrom (12:54)
As far as measuring success, I’m sure there are the traditional metrics, you know, average handle time, first contact resolution... there’s a million acronyms and names for a lot of these metrics. I’m sure those don’t go away, but are there other things that get unlocked, relating to customer lifetime value and loyalty and other things, when you’re able to connect the dots so much better?
David Funck (13:22)
Well, I guess what I would say to that is the contact center is a great place to put AI out there and really measure how well it performs, because the contact center is a very constrained environment that already has very sophisticated measurement tools, including some of those metrics you mentioned. We can apply those same tools to AI, and you can really measure the effectiveness of AI very specifically with the same set of tools you’re using, the same set of analytics capabilities. And we’re doing that with Avaya Infinity. All of the capabilities that we use to measure the human agent, we’re also using our same analytics package to measure the effectiveness of AI. The other thing that is very true is that you have a more constrained domain for cost, so you can understand the cost of the AI and really measure the cost effectiveness of both. And that’s important, because large language models, depending on which ones you use, can be very expensive to run. This is a place where I think there’s going to be a lot of improvement in the next couple of years. If you take one of the big, generic large language models, like OpenAI’s latest one that’s available on the internet, that’s designed to answer anything that anybody asks about anything. It has to be huge, and it’s going to be super expensive to run. But an enterprise can create much more focused large language models about a much more specific information domain, train those on that domain, and those are much more cost-effective to run. You can very effectively measure both the effectiveness in terms of customer satisfaction and the cost to deliver that, compare that to the cost to deliver it with a human person, and just make your decisions that way. From my perspective, it’s all about return on investment in the enterprise context.
Greg Kihlstrom (15:28)
Yeah, yeah. And so for leaders planning out there, you know, what’s the timeline, for instance, for Avaya adopting MCP? And what does that look like as far as implementing it and being able to connect it with other offerings?
David Funck (15:45)
We’re demoing it right now, so you can see demos of it with our Avaya sales engineers. It’ll be available in pilot at the end of Q1, and then it’ll be available in production for our end customers in calendar Q2 of this year, of 2026.
Greg Kihlstrom (16:03)
Yeah, yeah, nice. And so, looking beyond the contact center, I know that’s been the focus of our conversation here. If MCP is successful in being able to create this kind of shared brain of AI, where else do you see potential in the enterprise for it to have a great impact, you know, in the next few years?
David Funck (16:24)
So the horizon is really pretty wide open there. I’m seeing it really helping my teams and our product organization. We just used a large language model to document an API of ours, and it was a product manager that did it. It wasn’t even an engineer, right? So think about the things that get unlocked for just about any knowledge worker when you connect all of the back office tools that you use. Anything that is a piece of software running out there, if it can offer up its capabilities as an MCP server, then you can have the large language model interacting with these disparate tools. In my case, as a development leader, it’s code repositories and ticketing systems. You can have the LLM working across them to unlock information about how those two systems relate, which typically requires a human going back and forth. When you connect two, three, four of those back office systems together, you can get really fantastic results. So I think it really is a wide open horizon. And what intrigues me is to think about what we are going to do as a society with that additional productivity. It gets back to this tandem care idea: we believe that the human and AI working together can create much, much better experiences, much, much better outcomes for everybody. That’s my hope. That’s kind of the dream.
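The pattern David describes, one LLM client drawing on several MCP servers (here a code repository and a ticketing system), might be sketched like this. Every server name and tool name below is hypothetical and invented for illustration; the point is only that merging self-describing tool catalogs is what lets one model work across disparate back-office systems.

```python
class RepoServer:
    """Toy MCP-style server for a code repository (hypothetical tools)."""
    def list_tools(self):
        return [{"name": "search_commits", "server": "repo"}]

class TicketServer:
    """Toy MCP-style server for a ticketing system (hypothetical tools)."""
    def list_tools(self):
        return [{"name": "lookup_ticket", "server": "tickets"}]

def aggregate_tools(servers):
    """Merge every server's self-described tool catalog into one list
    that a single LLM client can discover and choose from."""
    catalog = []
    for s in servers:
        catalog.extend(s.list_tools())
    return catalog

# One client, multiple back-office systems: the model can now reason
# across commits and tickets without bespoke per-system integration.
catalog = aggregate_tools([RepoServer(), TicketServer()])
print([t["name"] for t in catalog])
```

Because each server describes its own tools, adding a third or fourth system is just another entry in the list, which is the "two, three, four back office systems" effect David mentions.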
Greg Kihlstrom (18:04)
Yeah, yeah, I love that. Well, David, as we wrap up here, a couple last questions for you. So thinking a year ahead, if we were having this interview one year from now, what is one thing that we would definitely be talking about?
David Funck (18:18)
I really think I’ll go back to that concept of more fit-for-purpose, more focused AI models covering more concrete, defined areas for cost-effective delivery. There’s a question out there whether large language models are really going to get much better. I know the big AI companies are working on general intelligence. Who knows if that’s going to come.
What I see happening all the time right now, across a variety of industries, is specific models getting really good, with very focused constraints so that they don’t hallucinate, getting very good at specific tasks. More and more, the big large language model providers have these tools out there, and now all these different smaller companies are filling in all these different niches with super effective AI. And it’s much more cost effective because the models are smaller. I think we’ll see more and more of that. More and more of: what a great use case, that company is going to be successful. I mean, my team right now is using a software development tool that leverages AI with tremendous effectiveness. We’re seeing so much improvement in the throughput of my developers. So we’ll see more and more of those kinds of things happening in the next year. That’s what we’ll be talking about.
Greg Kihlstrom (19:44)
Yeah, love it. Well, David, thanks so much for joining today. Last question for you before we wrap up. What do you do to stay agile in your role and how do you find a way to do it consistently?
David Funck (19:54)
Well, before we wrap up, I want to say thanks again, Greg, for having me on. I really enjoyed our conversation. For me to stay agile, I would say two things. I like to surround myself with really cool and effective people. I am only as good as my team. And I also really like to have a good time, and I like my team to have a good time. We spend a lot of time at work, and if we’re miserable at work, we’re wasting a lot of our life. I would much rather have a good time and laugh. I try to make every meeting as fun as possible. That keeps me on my toes, and I think it makes my team happier to be working, and I think we’re all more productive because of that. So that’s what I try to do.