Recorded live at Forrester CX Summit North America 2025 in Nashville, TN, where we are hearing all about the latest insights and ideas for brands to create better experiences for their customers.
If your AI roadmap doesn’t include your customer, is it really a roadmap—or just a bridge to nowhere?
Agility requires remembering who pays the bills: the customer. When shiny new tech shows up, it’s tempting to sprint after it—often leaving the actual human in our dust.
Today we’re talking about staying customer-first in the race to adopt agentic AI. To help me dig in, please welcome Stephanie Liu, Senior Analyst at Forrester. Stephanie—welcome to the show!
About Stephanie Liu
Steph focuses on the intersection of marketing and privacy. She guides clients on how to strike a delicate balance between privacy, trust, and consumer expectations, all while navigating a rapidly shifting data deprecation landscape that spans consumers’ privacy-protecting behaviors, regulation, tech limitations, and walled gardens. She examines topics like zero-party data, preference centers, data clean rooms, the customer data ecosystem, and how to deliver experiences that are personalized without being creepy.
Steph has been quoted in publications such as the New York Times, CNBC, The Markup, Marketplace, and AdWeek; her work has been featured in AdExchanger, Forbes, and elsewhere.
Resources
Forrester: https://www.forrester.com
Catch the future of e-commerce at eTail Boston, August 11-14, 2025. Register now: https://bit.ly/etailboston and use code PARTNER20 for 20% off for retailers and brands
Don’t Miss MAICON 2025, October 14-16 in Cleveland – the event bringing together the brightest minds and leading voices in AI. Use code AGILE150 for $150 off registration. Go here to register: https://bit.ly/agile150
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don’t miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand is produced by Missing Link—a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging, and informative content. https://www.missinglink.company
Transcript
Greg Kihlstrom (00:00)
If your roadmap doesn’t include your customer, is it really a roadmap or just a bridge to nowhere? We’re here at Forrester CX in Nashville, Tennessee, and hearing all about the latest insights and ideas for brands to create better experiences for their customers. Agility requires remembering who pays the bills, the customer. When shiny new tech shows up, it’s tempting to sprint after it, often leaving the actual human in our dust. Today we’re talking about staying customer first in the race to adopt agentic AI.
To help me dig in, please welcome Stephanie Liu, Senior Analyst at Forrester. Stephanie, welcome to the show. Yeah, looking forward to talking about this with you. Before we dive in though, why don’t we start with you giving a little background on yourself and the research you’re leading at Forrester.
Stephanie Liu (00:31)
Thanks for having me.
Sure, so I’m a senior analyst. I’m actually on Forrester’s B2C marketing team. And I cover the intersection of privacy and marketing. And that has expanded over the years to include the customer data ecosystem, customer data strategy, and now AI agents in terms of the data pieces and the privacy pieces.
Greg Kihlstrom (00:58)
Okay, well, yeah, you’re the right person to be talking about this topic with here, and I can definitely see why it might have expanded over the years as well. But yeah, let’s get started with talking about agentic AI. Certainly, I mean, first it was AI: there was an unwritten rule that AI had to be mentioned in every single episode of my show. Now agentic is certainly coming along, and it certainly seems to be everywhere already, but a lot of consumers have never heard of the term. So what’s the very first thing that brands need to make sure that they get right to stay genuinely customer first while they experiment with something like agentic AI?
Stephanie Liu (01:38)
The annoying answer is it’s two things. One is to really define what you’re trying to do with AI, peel back the hype. What is the use case? What are the outcomes you’re working toward? And the second is, as you said in the upfront, don’t leave your customer behind. I think there’s a lot of emphasis right now on how AI can benefit the business. There’s a lot of noise around productivity and efficiency. Don’t leave your customers out of those conversations. Think about the value exchange for them. How will AI benefit the customer in terms of the experience, the interaction, the personalization? Whatever it is, there needs to be an upside for the customer as well.
Greg Kihlstrom (02:14)
Yeah, yeah. A lot of times, though, the upside for customers, you know, getting more personalized experiences and everything, involves having more data from that customer. And so, you know, that value exchange has got to be balanced with some of the privacy issues and things that you work on as well.
Only 23% of US online adults are comfortable sharing personal data with genAI tools. How can marketers narrow that trust gap without stalling their innovation?
Stephanie Liu (02:47)
Yeah, I think the most important thing to remember is that most people don’t understand AI. I have to remind clients of this a lot because it’s a bit of a self-selection issue. You’re at the Forrester CX conference, you are coming to my track sessions on AI. You’re clearly more invested in this, just by being here, than a lot of consumers are. And so, if you are going to introduce an AI-driven experience to a customer, you have to explain what it is, why they should use it, and what’s in it for them, right? So that’s the value exchange piece. But then the second piece of that is to really emphasize the data piece: only collecting what you need to deliver that value. Once we get into “we want to know everything about you” for some vague promise of personalization, that’s really recreating the mistakes we made 10 years ago, when we were just collecting everything and seeing what stuck. And, you know, that’s why data deprecation and privacy regulations rolled in. So it’s really a matter of, again, emphasizing that use case and mapping out what data you need to ask from the customer to deliver on that use case.
Greg Kihlstrom (03:54)
Yeah, yeah. So Forrester has something called privacy personas; I want to get a definition of that from you as well. But these can help brands tailor their AI rollouts. Can you talk a little bit about what we mean by a privacy persona, and how can brands work with these so that they can tailor their AI rollouts and really help streamline things?
Stephanie Liu (04:19)
Sure, so I’ll give you the backstory on why we built these personas in the first place. We used Forrester’s proprietary data on consumers, and what we were trying to do was bust the myth that people either care about privacy or they don’t. The reality is it’s a lot more nuanced. There are folks who share data freely, but there are also folks we call conditional consumerists, who will share data if you incentivize them. So for example, they love loyalty programs because there’s a clear exchange: sign up for the program, share some information, get points, right? And then at the far end of the spectrum are the folks we call skeptical protectionists. They’re the “we do not want to share our data” group: very tech savvy and very, very protective. And so when it comes to designing for these personas, you have to remember that all of them are represented in your customer base. That group that shares data freely, we call them reckless rebels, and they’re only about one third of US online adults. For the other two thirds, you have to think about the value proposition to pull in those conditional consumerists, and about how to treat the folks who are very averse to AI and don’t want to use it. And so one of the challenges there is you can’t make AI mandatory, because you will end up pushing some customers away from you, right? And it may actually end up with them trying to escalate through human channels and getting more aggressive about it because there’s more AI in their way. So I do think it’s important to understand the balance there: AI can help you, but if you want the human touch, if you really don’t like AI, you don’t understand it, or you’re just not tech savvy, there’s still a way for you to get what you need, whether that’s information or customer support.
Greg Kihlstrom (06:03)
Yeah,
I mean, it sounds like by making AI mandatory, you’re potentially alienating a third of your audience, right? Yeah, yeah. So then, to take things even further, there have been a few announcements recently about things like online shopping agents that are acting on consumers’ behalf, or will be soon enough. Most of us are not quite there yet, you know, sending the AIs out to do our shopping for us and having that level of trust, a little bit to your last point. So before AI is fully acting on a customer’s behalf, we’re kind of in this intermediary phase: context-aware, you know, we’ve been talking about personalization and things like that for years, but now AI is potentially taking things further, yet still not fully autonomous. What does a human-centric experience look like in this kind of middle ground that we’re in?
Stephanie Liu (06:57)
Yeah, so this middle ground is actually really interesting, because I think this is our chance to build trust without going full agentic, full autonomy, because the risks of full autonomy are much greater, right? So thinking about this relatively confined use case and capability, what can we do with it today to improve the experience? And again, it’s about thinking about this from the customer’s perspective, not the brand’s, to get them to grow a little bit more comfortable with AI so that when it does become more autonomous, when there is more capability behind it, the consumer is brought along on that journey. Realistically, this isn’t going to be one day there’s a flip of a switch and ta-da, agentic is here. It’s going to creep up on us through these incremental improvements in the underlying agentic AI capabilities. And again, consumers are not tracking these developments as closely as tech vendors and agencies and brands.
So it’s really about bringing them along the journey, getting them comfortable with where we are today so that they’re not shocked by what’s coming tomorrow.
Greg Kihlstrom (07:56)
Yeah, and what about transparency about when AI is being used? I mean, is that part of this as well?
Stephanie Liu (08:02)
Definitely, yes. People don’t like to be deceived. We actually did market research on a community panel about this, asking how you would feel if a brand was using AI and didn’t disclose it, and the number one response was deceived, or angry, or upset. And then there were a couple who were very confident: oh, that would never happen, I always know when it’s AI. I think they’re in for a rude awakening.
Greg Kihlstrom (08:24)
So from the brand perspective then, when rolling these things out, I mean, there are good ways to do this and there are bad ways to do this, but regardless of the method, what KPIs, or maybe even leading signals, should CX leaders and others watch to know when something that might be an experiment or a pilot project is really ready to roll out to a larger audience?
Stephanie Liu (08:47)
Yeah, so this is where defining your use case very narrowly is really, really important. And when I say narrowly, I mean, right now, sometimes it is as specific as one step in a broader workflow. If we think about the way genAI rolled out, it initially was just producing content, right? Given these instructions, go produce 800 variants of whatever. As we’ve grown more comfortable with genAI, as it’s gotten better, it’s not creating people with seven-fingered hands anymore, it’s moving upstream, and now it’s part of the creative brief process, right? Like, do some research on the opportunity, the audience, et cetera, et cetera. It’s going to be something like that with these AI agent pilots. Can you do this one step in the workflow, and can you do it well? And by that I mean, is it behaving as expected? Is it producing accurate outputs? Is it pulling from up-to-date data, or is it pulling from stale, outdated data? If it can prove that, then it’s not like, all right, we’re going to launch this to the broader world. I think it’s more, can we move it upstream in the process? And I do think we’ll start with internal use cases because the risk is a lot lower, and once we master those, then I think brands will start to experiment with the customer-facing ones. If you look at customer service, which is where a lot of AI hype is happening right now, it’s very similar. The early use cases are about automating the customer service agent’s work, not necessarily interfacing with the customer to resolve their conflict, because there are just too many unknowns, too many variables there.
Greg Kihlstrom (10:16)
Yeah, I mean, it reminds me of the early days of ChatGPT and genAI. If you asked a very, very broad question or gave it a very, very broad prompt, you’d get what we all now expect: something that’s not very good, and it could go in many different directions, right? So agentic seems to call for a similar approach, where you need to be very specific. In my experience, it’s actually helped me think through processes in different and more thorough ways than I did before. There’s actually been some personal growth there, but that’s probably a topic for a different show. But I mean, it sounds like you’re saying something similar, in that it’s take it one use case at a time, build it almost like Lego blocks, one piece at a time. And you’re building not only trust, but you’re building according to the KPIs, kind of one step at a time. Does that sound right?
Stephanie Liu (11:11)
That’s exactly right. And when I talk to clients, I mean, this was the whole theme of my track session on AI agents: they are not real today. They are not autonomous. They’re not wonderfully doing all this, you know, decisioning and taking actions, but they’re coming. And so what we can do today is peel back the hype on all the wonderful things they’ll do and just think about how we get ready for that future. And that readiness stage is really going through the nuts and bolts of what data we need, what tools the agent needs access to, and what teams we need to work with, because realistically, there is not going to be a single department that can run the whole thing or build the whole thing. There won’t be one data set that is all the AI agent needs. And I think identifying those pieces, again, it’s not fun work, it’s not glamorous, but it will put you in a much stronger position to be ready when, you know, agentic AI is fully agentic.
Greg Kihlstrom (12:02)
Yeah, and to talk about the team and team structures and things like that, what do you recommend there? What have you seen that’s worked, even in early tests or something? When we’re talking about these broad-reaching things, silos don’t work; we know that already from journey orchestration or other things like that. But what does work, and what should leaders be thinking about?
Stephanie Liu (12:26)
The frustrating thing is there’s no single answer. A lot of it depends on the organization, their risk tolerance, and so on. But one approach I’ve seen that I really do like comes from a highly regulated industry. They have a centralized, it’s like an AI governance council, and if you want to build an AI agent, however you’re defining that, fully agentic or not, you have to pitch your idea to this council. And what I really like about it is it forces you to go through that critical thinking of how are we going to explain the use case and the KPIs and what we’re trying to do here, right? It peels back, again, it peels back the hype of, like, efficiency and gets into the nuts and bolts. And what’s also nice about this council is they can flag, hey, that sounds a lot like this AI agent that Greg built. You should go talk to him and see if you can copy what he’s done, or if your use case is actually so unique that you need your own. And what I think is nice about that is there are companies, some really big tech companies right now, that are hyping up how easy it is to build an AI agent. And it unlocks a lot of opportunity, but I also think there’s some risk there of waste and of duplication, right? If you have folks who are building AI agents for the same workflow, for example, or the same process, and they are doing that independently, then you’re wasting a lot of resources, both financially and computationally, on running AI agents in parallel. So that centralized governance structure is nice, but ultimately it’s still on the team pitching the AI agent to deploy it. And so they still have control over the spec and what it’s trying to do, but that governance council adds that layer of accountability and of cross-functional collaboration, to try and combat some of those duplication issues as well as security and privacy issues, of course.
Greg Kihlstrom (14:18)
Yeah, yeah, I love that. And also it kind of eliminates some of the hype: just because we can doesn’t mean we should. But I also love the idea of just being able to share, because I’ve worked in those large organizations where there are two people working on almost exactly the same thing who have never even met. And yeah, at best it’s a waste, and at worst it’s a real missed opportunity to learn and grow and innovate. So yeah, I love that.
Well, a couple last things here. We’re here on the ground at Forrester CX in Nashville. I know we’re only on day two here, but what’s the single most insightful thing that you’ve heard or seen at the conference so far, and why did it stick with you?
Stephanie Liu (14:59)
I’d have to say it was Kelsey Chickering’s keynote, where she compared consumers to toddlers. I have a toddler, so it’s very accurate. It was like, consumers are impulsive, we want instant gratification, and there were a couple of others in there. But the whole point was that when you’re delivering an experience, it’s not just the customer experience, it’s also the brand experience and really what the brand perception is, and how both sides of that equation will impact the customer and how they perceive your brand and how they experience the brand. And so I thought it was a really well-done analogy, and it’s one that gave me a good chuckle.
Greg Kihlstrom (15:38)
Yeah, yeah, that was great. Yeah, I actually interviewed her this morning, so she’s going to be on the show as well. Stephanie, thanks so much for sharing your ideas and insights. One last question for you before we wrap up: what do you do to stay agile in your role, and how do you find a way to do it consistently?
Stephanie Liu (15:52)
I don’t know if this is a good lead to follow, but I try to say yes to as many things as I can. And I find that being outside of my comfort zone is actually where agility is the most important. And so I try to push past what I’m comfortable doing, like a podcast interview at the Forrester conference. But I think it’s good to challenge yourself and really make sure you’re not…
Greg Kihlstrom (16:11)
You’re doing great, by the way.
Stephanie Liu (16:17)
getting too comfortable, because I do think that’s where innovation and agility happen organically.