Expert Mode: Curing AI Amnesia with a Shared Context

This article is based on an interview with David Funck, Chief Technology Officer at Avaya, by Greg Kihlström, AI and Marketing Technology thought leader, for The Agile Brand with Greg Kihlström podcast. Listen to the original episode here:

The promise of artificial intelligence in customer experience has always been one of seamless, personalized, and efficient interaction. The reality, for many of us, has been something quite different. In the race to embed AI into every conceivable touchpoint, we have inadvertently created a new kind of silo. Our chatbot has one brain, our agent-assist tool has another, and the AI personalizing our website has a third. The result is a customer journey that feels less like a continuous conversation and more like a series of disjointed interrogations. The customer is forced to repeat themselves, context is lost at every handoff, and the experience is often a high-tech version of the dreaded phone tree doom loop we thought we had left behind.

This “AI sprawl,” as some have begun to call it, presents a fundamental challenge for marketing leaders. We invest in sophisticated technology to build relationships and foster loyalty, yet our own systems often operate with a debilitating case of amnesia, starting fresh with every new interaction. To deliver on the true potential of AI, we need to move beyond isolated applications and toward an interconnected ecosystem. What if our AIs could share a memory? What if the context gained in a chat interaction could seamlessly inform the next conversation with a human agent, or even the next personalized offer on our website? This is the strategic imperative we now face, and it requires a new way of thinking about the architecture of our AI stack. It requires a common language, a protocol for sharing context, and that is precisely what the Model Context Protocol (MCP) provides.

The Protocol for Persistent Memory

The core problem is that Large Language Models (LLMs), for all their linguistic prowess, are fundamentally backward-looking. They are trained on vast datasets of what has already been written, which makes them excellent at predicting the next logical word in a sequence. What they lack is an inherent connection to the real world in real time. To bridge this gap, a new standard is emerging: the Model Context Protocol (MCP). Introduced by Anthropic, it is designed to give LLMs the one thing they desperately need to be truly useful in an enterprise setting: agency. David Funck explains the foundational role MCP plays in transforming LLMs from clever predictors into active participants in the customer journey.

“MCP stands for Model Context Protocol… What it does is a protocol that standardizes the way that large language models can interact with the real world… You can get a large language model to very accurately predict what the weather in Tallahassee might be based on history, but you can’t ask it what that weather is right now, unless you use the model context protocol. It gives context to the model. It gives the model the ability to reach out and ask a question about the real world or take action with the real world. And this is really critical with agentic AI.”
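
To make that concrete, below is a minimal sketch of what an MCP tool server can look like, assuming the FastMCP helper from the official MCP Python SDK. The tool mirrors Funck's weather example; fetch_current_conditions is a hypothetical stand-in for whatever real-time data source you already trust.

```python
# Minimal MCP tool server sketch (assumes the official MCP Python SDK,
# installed with `pip install mcp`); the data lookup is a placeholder.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")  # the name this server advertises to MCP clients


def fetch_current_conditions(city: str) -> dict:
    # Hypothetical stand-in so the sketch runs end to end; a real server
    # would call a weather API or an internal data platform here.
    return {"temp_f": 91, "sky": "sunny"}


@mcp.tool()
def get_current_weather(city: str) -> str:
    """Return the current conditions for a city, not a historical guess."""
    conditions = fetch_current_conditions(city)
    return f"{city}: {conditions['temp_f']}°F and {conditions['sky']}"


if __name__ == "__main__":
    # stdio transport lets the MCP host (the application running the LLM)
    # launch this process and call its tools mid-conversation.
    mcp.run(transport="stdio")
```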

For marketing and CX leaders, this is more than a technical specification; it’s a strategic enabler. MCP effectively acts as a universal translator and connector for your AI tools. It allows an LLM to query an internal API for a customer’s order status, check inventory levels from your supply chain system, or access real-time flight information—all within the flow of a single conversation. This standardization democratizes the ability to connect disparate systems to your AI, meaning you’re not locked into a single vendor’s ecosystem. You can select the best LLM for a specific task and connect it to the tools and data sources that matter most, creating an AI fabric that is both powerful and flexible. It’s the architectural underpinning that finally allows us to build an AI that doesn’t just respond, but acts.
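
Under the hood, that universal translation is a standardized request-and-response exchange built on JSON-RPC. Roughly, and with a hypothetical get_order_status tool and order number purely for illustration, the messages a client and server trade look like this:

```python
# Approximate shape of MCP's tools/call exchange (JSON-RPC 2.0); the tool
# name, arguments, and reply text are illustrative, not a real integration.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "get_order_status",            # hypothetical order-management tool
        "arguments": {"order_id": "A-10492"},  # filled in by the LLM from the conversation
    },
}

tool_call_response = {
    "jsonrpc": "2.0",
    "id": 42,
    "result": {
        "content": [
            {"type": "text", "text": "Order A-10492 shipped; arriving Friday."}
        ]
    },
}
```

Because every tool you expose speaks this same shape, adding a new back-end system, or swapping the LLM in front of it, does not mean rewriting the integration.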

Elevating the Human: The “Tandem Care” Model

One of the persistent anxieties surrounding AI is that it will replace human workers. A more pragmatic and, frankly, more interesting view is one of augmentation. The goal isn’t to replace your expert human agents but to unburden them from the repetitive, low-value tasks that consume their time and energy. This is where an MCP-powered agentic AI can fundamentally shift the dynamics of customer service and engagement. By handling routine queries with intelligence and context, the AI frees up human experts to focus on what they do best: empathy, complex problem-solving, and building genuine relationships. Funck refers to this symbiotic relationship as “tandem care.”

“Those journeys that you’re talking about, they feel that way because they are very prescriptive. They are established based on what some contact center administrator thought you were going to do… The great thing about agentic AI and large language model empowered by MCP is it can adapt to your particular needs… It becomes then a dialogue that’s happening with the artificial intelligence… AI is perfectly suited to handling those kinds of routine things. And then if you can help a human agent with those routine things with the agentic AI, then it leaves time for that human agent to do things that only a human can do—to have empathy, to really listen and give your end customer a really excellent experience.”

This “tandem care” model has profound implications. First, it directly improves the customer experience by providing instant, accurate resolution for common issues while ensuring a seamless and informed transition to a human for more complex needs. The customer no longer feels like they are being passed around; they feel like they are being helped by a cohesive team. Second, it elevates the role of your customer-facing employees. They become high-value consultants and problem-solvers rather than script-readers. This not only improves employee satisfaction and retention but also transforms your contact center from a cost center into a powerful engine for building brand loyalty and uncovering customer insights.
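
None of this prescribes a particular implementation, but a short sketch makes the handoff mechanics concrete: the AI resolves routine intents itself and, when it escalates, forwards the context it has already gathered rather than a blank slate. Every name below is hypothetical and illustrative only.

```python
# Illustrative-only sketch of a "tandem care" handoff; not any vendor's product.
from dataclasses import dataclass, field


@dataclass
class Interaction:
    customer_id: str
    transcript: list[str] = field(default_factory=list)
    facts: dict = field(default_factory=dict)  # order IDs, verified identity, etc.


ROUTINE_INTENTS = {"order_status", "reset_password", "update_address"}


def resolve_with_ai(interaction: Interaction, intent: str) -> str:
    # This is where the agentic AI would call MCP tools to act on the request.
    return f"AI resolved '{intent}' for customer {interaction.customer_id}"


def hand_off_to_human(interaction: Interaction) -> str:
    # The human agent receives the transcript and gathered facts, not a blank slate.
    summary = " | ".join(interaction.transcript[-3:]) or "no prior turns"
    return f"Escalated with context: {summary}; facts={interaction.facts}"


def handle(interaction: Interaction, intent: str) -> str:
    return (resolve_with_ai(interaction, intent)
            if intent in ROUTINE_INTENTS
            else hand_off_to_human(interaction))


if __name__ == "__main__":
    session = Interaction("C-314",
                          transcript=["Where is my order?", "It ships Friday."],
                          facts={"order_id": "A-10492"})
    print(handle(session, "order_status"))     # routine: AI handles it
    print(handle(session, "billing_dispute"))  # complex: human gets the context
```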

The Contact Center: A Laboratory for ROI

For any enterprise leader, the question of return on investment is paramount. The AI landscape is littered with fascinating technologies that have yet to prove their business value. One of the most compelling aspects of implementing these advanced AI strategies within the contact center is that it is an environment built for measurement. For decades, we have honed the science of measuring contact center performance through metrics like Average Handle Time (AHT), First Contact Resolution (FCR), and Customer Satisfaction (CSAT). This existing framework provides the perfect laboratory to test, refine, and prove the ROI of agentic AI.
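
As a sketch of how directly those yardsticks carry over, the same calculation works whether the rows describe AI-handled or human-handled contacts. The field names and sample records below are hypothetical, purely for illustration.

```python
# Hypothetical interaction records; in practice these come from the contact
# center's existing reporting pipeline.
records = [
    # channel, handle time in seconds, first-contact resolution, CSAT (1-5)
    {"channel": "ai",    "aht_sec": 95,  "fcr": True,  "csat": 4},
    {"channel": "ai",    "aht_sec": 120, "fcr": False, "csat": 3},
    {"channel": "human", "aht_sec": 410, "fcr": True,  "csat": 5},
]


def metrics(rows: list[dict]) -> dict:
    # Average Handle Time, First Contact Resolution rate, and mean CSAT.
    n = len(rows)
    return {
        "AHT_sec": sum(r["aht_sec"] for r in rows) / n,
        "FCR_pct": 100 * sum(r["fcr"] for r in rows) / n,
        "CSAT":    sum(r["csat"] for r in rows) / n,
    }


for channel in ("ai", "human"):
    print(channel, metrics([r for r in records if r["channel"] == channel]))
```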

“The contact center is a great place to put AI out there and to really measure how well it performs because the contact center is a very constrained environment that already has very sophisticated measurement tools… We can apply those same tools to AI and you can really measure the effectiveness of AI very specifically with the same set of tools… An enterprise can create very much more focused large language models about a much more specific information domain, train those on that domain, and those are much more cost-effective to run. You can really very effectively measure both the effectiveness in terms of customer satisfaction and the cost to deliver that… It’s all about return on investment for the enterprise context.”

This approach provides a clear, defensible path for scaling your AI initiatives. By starting in a highly instrumented environment like the contact center, you can gather concrete data on both performance and cost. Funck’s point about using more focused, domain-specific models is crucial here. The tendency is to think we need one massive, general-purpose AI to solve every problem. However, a more cost-effective and often more accurate approach is to deploy smaller, specialized models trained on your specific business domain. You can directly compare the cost and effectiveness of an AI-handled interaction versus a human-handled one, allowing you to make data-driven decisions about where to automate and where to augment. This turns the AI conversation from one of speculative potential to one of measurable business impact.
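
A back-of-the-envelope version of that comparison might look like the sketch below. The unit costs and resolution rates are placeholders, not benchmarks; the point is simply that cost per resolved contact, not cost per contact, is the number worth comparing.

```python
# Placeholder figures purely for illustration; substitute your own
# instrumented numbers from the contact center.
channels = {
    # cost per interaction (USD), share resolved without escalation
    "focused_llm_agent": {"cost_per_contact": 0.40, "resolution_rate": 0.72},
    "human_agent":       {"cost_per_contact": 6.50, "resolution_rate": 0.90},
}

for name, c in channels.items():
    # A cheap channel that rarely resolves anything is not actually cheap.
    cost_per_resolution = c["cost_per_contact"] / c["resolution_rate"]
    print(f"{name}: ${cost_per_resolution:.2f} per resolved contact")
```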

The path to a truly intelligent customer experience is not paved with more standalone AI tools, but with a more thoughtful, integrated architecture. The fragmentation of our current AI landscape is a self-inflicted wound, one that undermines both customer satisfaction and internal efficiency. Concepts like the Model Context Protocol represent a foundational shift, moving us from a collection of isolated brains to a collaborative, intelligent network. It’s the enabling technology that allows our AI to remember, to learn from one interaction to the next, and to act on behalf of the customer in a meaningful way.

For marketing leaders, the mandate is clear. We must look beyond the features of individual AI applications and begin architecting an interconnected AI fabric. The journey can begin in the measurable and high-impact environment of the contact center, proving the value of a “tandem care” model that elevates both the customer and the employee experience. The future is not about one giant, all-knowing AI. It is about a symphony of specialized, cost-effective models working in concert, sharing context, and building a continuous, intelligent dialogue with our customers. The era of AI amnesia is ending, and the era of persistent, shared intelligence is just beginning.
