What if the most honest and insightful feedback you could get about your customers didn’t come from an actual customer at all?
Agility requires not just the ability to pivot quickly, but the foresight to know which way to turn. That foresight is built on a foundation of deep, unbiased insights, which are becoming increasingly difficult to get through traditional means.
We are in Seattle at the Qualtrics X4 Summit, and today, we’re going to talk about a fundamental shift in how we gather customer insights. We’ll explore the diminishing returns of traditional research and dive into the potential of synthetic panels—AI models trained to represent audiences without the fatigue, bias, or social desirability that can skew human responses. It’s a move from merely confirming what we think we know to discovering what’s truly possible.
To help me discuss this topic, I’d like to welcome Jordan Harper, Principal AI Thought Leader, Edge Center of Excellence at Qualtrics.
About Jordan Harper
With a career spanning nuclear physics, software engineering, digital transformation, and customer experience strategy, Jordan brings a fresh perspective to technology’s role in understanding people and markets. He has advised global brands, led innovation programs, and built products across industries from financial services to fast-casual dining.
Drawing on this diverse background, Jordan excels at working with clients to help them realize how AI-powered synthetic research can not only deliver faster and more cost-effective results, but also introduce surprising benefits and fresh perspectives into a market researcher’s toolkit.
Jordan Harper on LinkedIn: https://www.linkedin.com/in/jordanharper/
Resources
Qualtrics: https://www.qualtrics.com
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://aglbrnd.co/r/2868abd8085a9703
Drive your customers to new horizons at the premier retail event of the year for Retail and Brand marketers. Learn more at CRMC 2026, June 1-3. https://aglbrnd.co/r/d15ec37a537c0d74
Enjoyed the show? Tell us more and give us a rating so others can find the show at: https://aglbrnd.co/r/faaed112fc9887f3
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don’t miss a thing: get the latest episodes, sign up for our newsletter and more: https://aglbrnd.co/r/35ded3ccfb6716ba
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand is produced by Missing Link, a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging, and informative content. https://www.missinglink.company
Transcript
Greg Kihlström: What if the most honest and insightful feedback you could get about your customers didn’t come from an actual customer at all? We’re here in Seattle at Qualtrics X4 Summit, and today we’re going to talk about a fundamental shift in how we gather customer insights. We’re going to explore the diminishing returns of traditional research and dive into the potential of synthetic panels, AI models trained to represent audiences without the fatigue, bias, or social desirability that can skew human responses. It’s a move from merely confirming what we think we know to discovering what’s truly possible. To help me discuss this topic, I’d like to welcome Jordan Harper, Principal AI Thought Leader, Edge Center of Excellence at Qualtrics. Jordan, welcome to the show.
Jordan Harper: Thanks for having me.
Greg Kihlström: Thanks for being here. Yeah, looking forward to talking about this with you. Before we dive in, though, why don’t you give a little background on yourself and your role at Qualtrics?
Jordan Harper: Sure. My deep background is actually in science: I studied Physics at university, did a master’s in Astrophysics, and worked for a couple of years in nuclear engineering before making a pretty big pivot into marketing and technology agencies. I worked for a new media agency as a developer back in the early 2000s, building websites and apps and things like that, and worked my way through to becoming tech director, then CTO, spending most of my time advising senior clients and C-suite leaders on technology strategy and making smart decisions about new and emerging technologies. And obviously the big thing now is AI. My role at Qualtrics is working inside the Center of Excellence for our Edge team, which is basically our AI synthetic research product, sitting between engineering, sales, and customers to translate the technology hype into something that means something real to customers and consumers of the product.
Greg Kihlström: Got it. All right. And I guess one more thing. A lot of our listeners know Qualtrics as a leader in experience management and survey platforms. I know you touched on it a little bit, but maybe set the stage on how Qualtrics sees its role evolving in this new era of AI-driven insights, and how you’re helping to lead those capabilities.
Jordan Harper: Yeah, I think, like every business, Qualtrics recognizes that AI is going to transform not just the way we do business, but the way all of our customers do business, the kinds of questions they ask, what they expect from us, and what their customers expect from them as well. One of the things that really excited me about joining Qualtrics last year was the forward-thinking approach to AI, really leaning into this technology rather than, like some companies do, leaning away from it and becoming a little scared and threatened by it. What was really clear to me from speaking to everyone at Qualtrics was that this was being fully embraced and leant into, but in the right way. So we’re integrating AI technology and tooling into pretty much all of our platforms, one way or another, to help support customers, make their lives easier, make their experiences using our tools and software better, and help them get better insights and action from the data they’ve created. The synthetic research work we’re doing specifically looks at the question I think a lot of people are talking about, which is the idea of synthetic research: using AIs to simulate humans in answering questions, or providing opinions on what humans might think about a particular scenario. There are lots of different approaches people are taking to that. Most of them, I think to their deficit, are essentially wrapping things around general LLM models and creating very stereotype-driven personas that revert to the mean.
What was really interesting to me about Qualtrics’s approach is that it starts at a really fundamental level: creating its own LLM from real survey data, one that understands humans are a little messy when they answer questions and don’t always respond the obvious way. And that’s what we see in our model: it’s a pretty good reflection of how customers respond to surveys and answer questions. Now we’re working on making that practically useful and available for our customers to test.
Greg Kihlström: Nice. So let’s dive in here. We’re going to talk about a few things, but you’ve used a great metaphor describing human feedback as a mirror that reflects what people say, and synthetic panels as a lens that reveals deeper systems. Maybe unpack that for us a little bit and explain what you mean by that?
Jordan Harper: Yeah, it came from talking to the team and actually thinking about the bigger picture of research: why we do it, what we’re doing, and how it’s evolved. In the presentation I gave, I talk about when I was a newborn. When my brother and I were born, my mother took time off work to look after us and took some part-time work as a market researcher. I remember her standing on the high street interviewing people, collecting all these piles of paper, and sending them off to research companies. She did it for a number of years, and it always used to baffle me why so much effort was being put into just collecting what people thought about things. Then, later down the line in my professional career, I realized how valuable research and insight really is. But thinking about how the discipline had evolved from 1980 through to a couple of years ago, all we’ve essentially done is use technology to make it easier to access people. Whether that’s email forms, smartphone apps, or telephones, everything that’s come along to make research easier has just made people more accessible. And I think we all know what that’s really led to for people: fatigue and frustration with the number of questions you get asked, the interrogation. So a lot of the evolution of the research discipline has been about alleviating that fatigue and frustration: making it easier for people to answer questions, reducing the number of questions, making it simpler, compacting the information into matrices so you’re ticking boxes on big grids of things. What you’re doing there is essentially reducing the cognitive load on the customer answering those surveys, but really the cognitive load is kind of the point of asking people questions.
And when I started to think about it, building all of these pictures of customers from asking more and more questions, it felt to me like building this big array of mirrors of our customers inside organizations. What’s interesting about AI, I think, is that it’s not just a new technology for making it easier to access people. It’s actually a new way of leveraging the data and the insight and the intelligence that we’ve gathered over years and years with humans, and saying: can we leverage that in a way that takes the load off humans, so we only ask them the important questions, or the more detailed questions, or the things that only they’re going to know, and infer useful insight from previous data with a lens that looks into that data? That’s what AI and LLMs, and our Edge Audiences product in particular, really represent to me. It’s that lens to peer into that big data set and extract the insight and useful things from it.
Greg Kihlström: Yeah. Well, I also want to go back to another thing you touched on in the intro, this idea that synthetic models can be, quote-unquote, more honest than humans. I know with LLMs there’s stochasticity, so they never answer the same exact way twice. We talk a lot about the quirks of AI, but we don’t talk quite as much about how humans may be biased, or swayed by recency, or things like that. They also just don’t answer consistently the way an LLM might, and that other component might answer more consistently, or even too consistently, right? So how do you look at synthetic as being that more honest option?
Jordan Harper: Yeah. Whenever I say honest, I always put air quotes around it, because how can an LLM or an AI be honest or dishonest? It’s just saying what it’s saying, right? But it’s a useful framing, I think, because we know as researchers that humans do not always tell the truth. They think about the way they’re going to be perceived by the people reading the survey. I’m sure we’ve all done it. I know I have. I’ve exaggerated or played down my thoughts on something, or thought, I really should tick that one, but I’m going to tick two boxes down from it, because I don’t want to think of myself as someone who would tick that box. So we all control for that, and researchers have decades’ worth of tooling and algorithms and processes for coping with it and mitigating it. What we’ve seen with synthetic a lot of the time is that it doesn’t fall for those same kinds of self-reflection tests. It’s not interested in being perceived as a good person or not. One of the other interesting things connected to this is priming: asking certain questions earlier in a survey can influence the answer to a question later in the survey, because you’ve planted some seeds in people’s minds. We did an experiment where the ultimate question at the end of the survey was: do you think smartphones are good or bad for the world? More harm than good, or more good than harm, on a scale of one to five. For some of the cohorts, we didn’t ask any questions in the rest of the survey that might have influenced it.
For another cohort, we asked some questions earlier on that touched on the positives: are smartphones really good for keeping in touch with your family, for helping you out with daily chores, or for providing maps that let you navigate the world? And we had another cohort where we set a bunch of questions like, do you find that social media is distracting, or are you worried about the impact it has on children, that kind of thing. What we found with humans is exactly what you would expect: humans who’d answered questions about the negative parts of smartphones were more likely to answer negatively to that final question than the unprimed baseline, and the ones who’d seen the positive ones skewed positive. What we saw with synthetic was almost no variance. I mean, there was some variance, but nothing that couldn’t be explained by natural noise, and there certainly wasn’t a clear pattern moving one way or the other. And that’s really interesting, because we know it remembers the answers to previous questions as it goes through a survey. That’s part of the fundamental design of the system we built: obviously, if you answered at the start of the survey that you live in a rural area, then later in the survey you need to answer like someone who lives in a rural area. So it has to remember and use that as it goes through, but it doesn’t let that control its emotions, so to speak.
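The cohort comparison Jordan describes can be sketched as a tiny analysis. This is a hypothetical illustration only: the ratings are made up, and the comparison (each primed cohort’s mean on the final one-to-five question versus the unprimed baseline) is not Qualtrics’ data or tooling.

```python
from statistics import mean

# Hypothetical 1-5 ratings on the final question ("are smartphones more
# harm than good, or more good than harm?") for each cohort. Real data
# would come from actual survey responses.
cohorts = {
    "unprimed": [3, 4, 3, 2, 4, 3, 3, 4, 2, 3],
    "positive_primed": [4, 4, 3, 5, 4, 3, 4, 4, 3, 4],
    "negative_primed": [2, 3, 2, 1, 3, 2, 2, 3, 2, 2],
}

baseline = mean(cohorts["unprimed"])
for name, ratings in cohorts.items():
    # A shift above zero means the cohort skews positive relative to the
    # unprimed baseline; below zero means it skews negative.
    shift = mean(ratings) - baseline
    print(f"{name}: mean={mean(ratings):.2f}, shift vs baseline={shift:+.2f}")
```

With human respondents you would expect the primed cohorts to show clear opposite shifts; the interesting result Jordan reports is that for the synthetic panel both shifts stayed within noise.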
Greg Kihlström: Yeah. So like the the bias is at the training level in that case versus with humans it’s in the moment, right?
Jordan Harper: Exactly, yeah. And one of the questions we get a lot is: if it’s been trained on human survey data, how come it doesn’t just exhibit all the same biases that humans exhibit? It’s because the data it’s been trained on has been processed to mitigate all of that. It’s gone through that process of scrubbing, stripping, and validating, making sure it reflects what people actually think rather than necessarily what they say they think.
Greg Kihlström: You know that moment when marketing wants a landing page, design marks it up, and engineering says, yeah, we’ll get to it. Thousands of businesses from early-stage startups to Fortune 500s are choosing to build their websites in Framer, where changes take minutes instead of days. Framer is a website builder that works like your team’s favorite design tool, with real-time collaboration, a robust CMS with everything you need for great SEO, and advanced analytics that include integrated A/B testing. Your designers and marketers are empowered to build and maximize your .com from day one. Changes to your Framer site go live to the web in seconds with one click, without help from engineering. Framer is also an enterprise solution, giving brands like Perplexity, Miro, and Mixpanel the confidence they need to build their websites in Framer. Learn how you can get more out of your .com from a Framer specialist or get started building for free today at framer.com/agile for 30% off a Framer Pro annual plan. That’s framer.com/agile for 30% off. framer.com/agile. Rules and restrictions may apply.
Greg Kihlström: So how does all of this change the role of traditional insights? I talked with Ellie Enriques a little earlier about this too. What does the traditional market researcher get to do more of, and what do they do less of, in this kind of scenario?
Jordan Harper: Yeah, I think it’s really exciting, because it’s easy to see it as a threat. But you can instead see it as unlocking a skill that all market researchers have but rarely get to flex, which is that experimental, scientific mindset: actually, I would like to ask this question in 15 different ways and see how our customers’ responses differ. You can’t do that in the real world, because you’ve got question-weary customers, so you’ve got to really value every question you ask them. Or you have a department in your organization that wants to add some questions to your survey, and you have to say, I’m sorry, we can’t ask any more than the 15 questions we’ve got in there now, so maybe next time we’ll include your questions. And then you never do that research for that part of your business. I think it expands the ability to do more research, to become more experimental, to test a survey design before you put it out to customers. If you don’t know whether asking a question this way or that way will get a better response, you can throw both at synthetic and see if it shows you a pattern that makes you go, oh, that’s interesting. A good example is one we did with a travel brand that runs a trend survey every year, and there was a question in there that touched on solo travel. They were looking for solo travelers: have you traveled solo in the last year? When we ran the same survey through synthetic, it was very similar to human for most of the answers, but the solo travel one was split in a really strange way. Without getting too much into the specifics, what they essentially found was that humans were interpreting that question in two different ways.
Did I fly solo to X4 to meet up with thousands of people? Or am I flying on a solo backpacking trip to the Andes, where I’m going to walk through the mountains on my own and meditate? What the travel company was really interested in was the latter type of traveler, but a lot of the humans ticking that box were interpreting solo travel as literally traveling solo. The synthetic was very one-or-the-other about it, which made it really easy to spot that there’s probably a bit of a kink in the way the question is framed for humans. So now we’re going to rephrase that question going forward so it actually captures the data they’re looking for.
Greg Kihlström: Well, yeah, and finding that out after you run a traditional survey is helpful but expensive, and are you actually going to be able to rerun that survey?
Jordan Harper: They’ve been asking that question for years.
Greg Kihlström: Right, right.
Jordan Harper: And it had never been considered something that might have been misinterpreted before.
Greg Kihlström: Yeah, yeah.
Jordan Harper: And it was the synthetic experiment that highlighted it. It didn’t necessarily tell you what was wrong, and that’s where the role of the researcher is really important: it took the researcher to look at it, interpret it, and understand where the problem might actually be and what might need to change to fix it. What I find synthetic does a lot, when you experiment with it, is throw that signal up. It says: look here, there’s something interesting over here that you might want to take a look at. I’m not going to tell you any more than that, but here’s an area for you to focus your attention on. And that’s where the skills of a researcher really come in, interpreting that and understanding where the problem might be.
Greg Kihlström: Yeah, yeah. I talk about AI a lot on the show in a lot of different contexts, but there is often skepticism, at least in some corners, whether from a branding perspective or other areas as well. How do you build organizational trust in findings that aren’t coming from real humans? You’ve given a lot of reasons why it’s important and valuable, but how do you make that case internally in an organization to kind of get the ball rolling with this?
Jordan Harper: Yeah, I think it’s a matter of, again, going back to experimentation. It’s validation. It’s taking the surveys you’ve run before, running them through synthetic, and going: look, 99% of the time, synthetic is giving us exactly the same results as human, and this 1% is where the interesting bit lies. That might be because the model is just not very good at answering that question, which definitely happens sometimes, and again, it takes the skill of the researcher to spot it. Other times it might be like the solo travel thing, where you think there’s something more going on here. But the other 90% that is in line with humans, where the distributions are similar, is your evidence. It’s not just about whether it agrees top-two-box; it’s whether it shows the same kind of distribution as humans. If you ask survey questions to ChatGPT or Gemini, quite often you might get the top answer correct, but it’s as if 100% of respondents answered the top answer, and we know humans are a little messier than that. Even if the answer is obvious and objectively correct, there can be a spread. We did an experiment asking a question about climate change: is it an important thing that humans should be worried and concerned about? Humans did not answer 100% yes, because it’s an interesting question that stirs up lots of different perspectives and thoughts, so there was a distribution with humans. You ask that question to ChatGPT or Gemini a thousand times, and it was literally a thousand yeses: 100% of respondents. You run it through our model, and we had a very similar distribution to humans. It was able to grasp the nuance of human messiness.
Greg Kihlström: Yeah, yeah. There’s a lot that goes into questions like that.
Jordan Harper: And when you show that, it’s not coincidence, right? When you’re trying to prove to your internal stakeholders that this could be valuable, if you’re just showing them that 100% of people said yes, they’ll go, well, that’s obviously nothing like what we actually see in the real world. But if you show that the distribution is very similar, that we’re getting a similar result for most of these questions, but we think there’s some signal over here where there is a difference, that’s where you get buy-in: 90% aligned, 10% different. If it was 90% different and 10% aligned, you’re not going to get that. So you’ve got to do the testing, you’ve got to do the work to match your human results to synthetic results, and keep doing the human surveys as well. You need to keep that baseline up to date and make sure it still matches the way people are changing in the real world.
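The validation loop Jordan describes (run the same survey through humans and synthetic, check that the answer distributions match, and flag the questions that diverge) can be sketched with a simple distribution distance. This is a hypothetical illustration: the numbers, the choice of total variation distance, and the threshold are all assumptions, not Qualtrics’ actual data or methodology.

```python
def total_variation(p, q):
    """Half the L1 distance between two answer distributions.

    Returns 0.0 for identical distributions and 1.0 for fully disjoint ones.
    """
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical answer-share distributions over a 5-point scale, per question:
# (human baseline, synthetic panel). Real data would come from matched surveys.
surveys = {
    "brand_awareness": ([0.05, 0.15, 0.30, 0.35, 0.15],
                        [0.06, 0.14, 0.31, 0.33, 0.16]),
    "solo_travel":     ([0.10, 0.15, 0.30, 0.25, 0.20],
                        [0.40, 0.05, 0.05, 0.05, 0.45]),
}

THRESHOLD = 0.10  # arbitrary cutoff for a "look here" signal
for question, (human, synthetic) in surveys.items():
    tv = total_variation(human, synthetic)
    flag = "INVESTIGATE" if tv > THRESHOLD else "aligned"
    print(f"{question}: TV={tv:.2f} -> {flag}")
```

The flagged questions are exactly the signals Jordan mentions: they do not say what is wrong, only where a researcher should look, as in the solo-travel example where the divergence turned out to be an ambiguous question.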
Greg Kihlström: Yeah, yeah. So as we think a little more into the future, as AI models, LLMs included, become more sophisticated, do you see the future of market research being less about asking questions after the fact? Where do predictive simulations really come into the mix? Because right now we’re probably much more heavily on the reactive side, taking actions post-survey and so on. Are we moving into a more predictive realm?
Jordan Harper: Yeah, I think we probably should be doing more predictive than we are: being able to survey customers in advance, before we develop a new product feature or advance something. But it’s that fear of fatigue and frustration in customers that I think stops a lot of organizations from doing it. You go, well, the signal we can’t let go of is the NPS after the fact, or the rate-your-experience after the fact, so maybe we’ll have to sacrifice asking them quite so many questions about new features as we’re developing them. Another thing is the fear of asking people about new things. If Apple had sent out a survey a year ago asking, would you buy a foldable phone from Apple, that leaks instantly, and everybody goes, look, Apple is asking questions about foldable phones, I wonder what they’re working on. But you could use synthetic to ask those kinds of questions. You can do more predictive research that enables you to make decisions driven by data, in advance. I think you should still be asking humans those questions as well, but you can tailor the questions you ask humans and make sure you’re making the most of their valuable time. Get them to think about the questions and be a bit more detailed. Ask fewer people more detailed questions, rather than asking lots of people superficial questions that you end up getting uninteresting answers from.
Greg Kihlström: Yeah, yeah. Well, Jordan, thanks so much for joining. Um, I got two questions for you as we wrap up here. First one, uh what’s been a highlight of X4 for you so far?
Jordan Harper: So, a few years ago I saw Priya Parker talk at South by Southwest. It was just after the pandemic, and she talked about the art of gathering. It was really fun to see her speak again yesterday and to reflect again on how amazing it is to have so many people in one place. So seeing Priya, and catching up with all of my colleagues from around the country and around the world, has been amazing.
Greg Kihlström: Nice, nice, love it. And last question for you. What do you do to stay agile in your role and how do you find a way to do it consistently?
Jordan Harper: Yeah, I think my job history probably speaks to this a little, but it’s always staying curious, always trying to keep an eye on what’s next and what’s moving, and walking towards it rather than away from it. Ask what it can do to make things better, rather than what we can do to avoid it being a problem. I think that’s really important.