Expert Mode from The Agile Brand Guide®

Expert Mode: Moving Beyond the Mirror of Customer Feedback

This article is based on the interview with Jordan Harper of Qualtrics on using synthetic panels to get real insight, conducted by Greg Kihlström, host of The Agile Brand with Greg Kihlström podcast. Listen to the original episode here:

For decades, the core challenge in market research has been one of access and efficiency. We moved from clipboards on the high street to phone banks, then to email surveys and mobile apps. Each technological leap was celebrated for its ability to reach more people, more quickly, and more cheaply. We built sprawling Voice of the Customer programs and sophisticated feedback loops, all in service of creating a perfect reflection of our customer inside the walls of the enterprise. But as marketing leaders, we know the truth: the reflection in that mirror is becoming increasingly distorted. Survey fatigue is real, panel quality is a constant concern, and the very act of asking a question can color the response. We’ve become masters at optimizing the collection of feedback, but are we still getting the truth?

What if the next evolution isn’t about finding a better way to ask people questions, but about creating a new way to find answers? This isn’t another hyperbolic tale of AI replacing human insight. Instead, it’s a fundamental shift in our approach—from using technology as a mirror that reflects what customers say, to using it as a lens that reveals the deeper systems of what they truly think and what’s actually possible. By training AI on decades of real human responses, we can now create synthetic audiences that are immune to fatigue, social desirability bias, and the other human quirks that muddy our data. This new capability allows us to move from merely confirming our assumptions to exploring the vast, un-asked territory of customer sentiment, freeing our teams to focus on the strategic questions that only humans can answer.

The Problem with Human “Honesty”

We’ve all done it. We’ve ticked a box on a survey that was a slightly more aspirational version of ourselves. We’ve downplayed a negative feeling or exaggerated a positive one because we’re subconsciously aware that another human will be reading the results. This isn’t dishonesty in a malicious sense; it’s just human nature. Researchers have spent their careers developing complex methodologies to account for these biases, from carefully wording questions to structuring surveys to mitigate priming effects. But these are workarounds, not solutions.

Synthetic panels, when properly designed, don’t have an ego. They aren’t concerned with how they will be perceived. They aren’t trying to be “good” respondents. This lack of self-consciousness allows them to provide a signal that can be, in a sense, more honest than a human panel. They can reveal how a slight change in wording might be interpreted or how the order of questions could be subtly skewing the results—insights that are prohibitively expensive and time-consuming to test with real people. Jordan Harper explains how this plays out in practice, noting how synthetic models react differently to priming, a common challenge in survey design.

“Asking certain questions earlier in a survey can influence the answer to a question later in the survey because you’ve planted some seeds in people’s mind… What we found with humans is exactly what you would expect… The ones who’d seen the positive ones skewed positive. What we saw with synthetic was almost no variance… it doesn’t let it kind of control its emotions.”

For a marketing leader, the implications are significant. Imagine being able to test five different value propositions without worrying that the first one is unduly influencing the perception of the fifth. This allows for a purer read on messaging and concepts. It provides a stable baseline for understanding sentiment, free from the emotional and cognitive baggage that a human respondent brings to every interaction. It’s not about replacing human feedback, but about creating a sterile environment to test our hypotheses before we take them into the messy, complicated, and beautifully human real world.
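The priming effect Harper describes can be made concrete with a simple check: compare average ratings for the same concept under two different question orders. The sketch below is purely illustrative, assuming made-up 1–5 ratings; none of the numbers come from the interview.

```python
# Hypothetical sketch: checking whether question order shifts responses.
# All ratings are invented 1-5 scores for illustration only.
from statistics import mean

# Same concept rated after positively-primed vs neutrally-ordered questions.
primed_first  = [4, 5, 4, 4, 5, 4, 3, 5, 4, 4]   # concept shown after positive questions
neutral_first = [3, 4, 3, 3, 4, 3, 3, 4, 3, 3]   # concept shown with no prior priming

def order_effect(a, b):
    """Difference in mean rating between two question orders.

    A large gap suggests priming is skewing the read; Harper's
    observation is that synthetic panels show almost no such gap."""
    return mean(a) - mean(b)

print(f"human-style gap: {order_effect(primed_first, neutral_first):.2f}")

# A synthetic panel, per the quote above, produces near-identical
# answers regardless of order, so the gap collapses toward zero.
synthetic_a = [4, 4, 4, 3, 4, 4, 4, 3, 4, 4]
synthetic_b = [4, 4, 3, 4, 4, 4, 4, 4, 3, 4]
print(f"synthetic gap:   {order_effect(synthetic_a, synthetic_b):.2f}")
```

In practice the comparison would use a proper significance test across real panel data; the point here is only the shape of the experiment: identical instrument, varied order, measure the drift.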

The Researcher as Scientist, Not Just Surveyor

The immediate, and understandable, reaction to the idea of AI-generated survey responses is to see it as a threat to the traditional market research function. But that’s a failure of imagination. This technology doesn’t replace the researcher; it elevates them. For too long, our insights teams have been bogged down in the mechanics of research: managing panels, optimizing question counts to avoid fatigue, and cleaning imperfect data. Synthetic research automates much of the drudgery, freeing up brilliant minds to do what they do best: think.

This shift allows the researcher to move from being a surveyor to being a true scientist. They can now run dozens of experiments, testing not just customer sentiment but the very instruments used to measure it. They can ask, “What is the best way to frame this question to avoid ambiguity?” and get an immediate answer. This capability can uncover blind spots in research methodologies that have existed for years, as Harper discovered when working with a travel brand.

“They’ve been asking that question for years… and it’s never been considered something that might have been misinterpreted before… it was the synthetic experiment that highlighted, didn’t necessarily tell you what was wrong and that’s where the kind of role of the research is really important… But what I find like when you experiment with synthetic, it does a lot is it throws that signal up. So it says, ‘Look here. There’s something interesting over here that you might want to take a look at.’”

This is the new superpower for an insights team. Instead of spending 80% of their time on data collection and 20% on analysis, they can flip the script. They become internal consultants who can pressure-test any new product idea, campaign message, or survey design. They can explore edge cases and run “what-if” scenarios at scale, bringing a new level of rigor and creativity to the organization. For leaders, this means your insights function transforms from a cost center that validates past performance into a strategic engine that de-risks future innovation.

Building Trust in a Non-Human Voice

Of course, none of this matters if the organization doesn’t trust the data. Bringing a spreadsheet of AI-generated responses to your CMO is likely to be met with healthy, and necessary, skepticism. The path to building organizational trust in synthetic findings is not to present them as the output of a mysterious black box, but to ground them in the data you already have. The goal is to demonstrate validity by showing where the synthetic results align with years of human-led research, which then gives you the credibility to explore where they diverge.

The key is not to look for perfect agreement, but for distributional similarity. A generic LLM might correctly identify the most popular answer to a question, but it will often do so with 100% confidence, a result that looks nothing like a real-world human response. A sophisticated synthetic model, trained on nuanced survey data, will replicate the messy, human distribution of answers. The validation process involves running past surveys through the synthetic panel and comparing the outputs. Harper advises focusing on the alignment first to build a foundation of trust.

“It’s taking the surveys that you’ve asked before, running them through synthetic and going, ‘Look, 99% of the time, synthetic is giving us exactly the same results as human. This 1% is where the interesting bit lies’… The 90% aligned, 10% different is where you get that buy-in. If it was 90% different, 10% aligned, you’re not going to get that.”

This approach provides a practical roadmap for implementation. Start with a pilot project. Re-run a well-understood historical survey. Present the vast areas of alignment to stakeholders to prove the model’s grasp of your customer base. Then, and only then, introduce the small percentage of divergence as a signal—an area where the model has spotted an anomaly worth investigating with human-led qualitative or quantitative work. In this way, synthetic research is not positioned as a replacement for human insight, but as an incredibly powerful diagnostic tool that makes your entire research practice more focused and effective.
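The validation loop described above — re-run a historical survey, measure where the synthetic distribution matches the human one, and flag the divergent minority for human follow-up — can be sketched in a few lines. This is a minimal illustration, not a Qualtrics workflow; the question names, answer counts, and the 0.15 threshold are all invented for the example.

```python
# Hypothetical validation sketch: compare synthetic vs historical human answer
# distributions per question and flag the divergent minority for follow-up.
# Question names, counts, and the threshold are illustrative assumptions.

def distribution(counts):
    """Normalize raw answer counts into a probability distribution."""
    total = sum(counts.values())
    return {opt: n / total for opt, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two distributions: 0 = identical, 1 = disjoint."""
    options = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0) - q.get(o, 0)) for o in options)

human = {
    "q1_brand_trust": {"low": 10, "mid": 30, "high": 60},
    "q2_price_fair":  {"low": 25, "mid": 50, "high": 25},
    "q3_ambiguous":   {"low": 40, "mid": 20, "high": 40},
}
synthetic = {
    "q1_brand_trust": {"low": 12, "mid": 28, "high": 60},
    "q2_price_fair":  {"low": 24, "mid": 52, "high": 24},
    "q3_ambiguous":   {"low": 10, "mid": 70, "high": 20},  # the "look here" signal
}

THRESHOLD = 0.15  # divergence above this warrants human-led follow-up research

for q in human:
    tv = total_variation(distribution(human[q]), distribution(synthetic[q]))
    status = "ALIGNED" if tv <= THRESHOLD else "INVESTIGATE"
    print(f"{q}: TV={tv:.2f} -> {status}")
```

Note that the comparison is over whole distributions, not top answers: as the article argues, a model that picks the right answer with 100% confidence still fails this check, because it doesn't reproduce the messy human spread.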

From Reaction to Prediction

For all our talk of being customer-centric, much of enterprise research remains reactive. We measure satisfaction after a purchase, gauge brand sentiment after a campaign, and ask for feedback after an experience. While essential, this is like driving by looking only in the rearview mirror. The true promise of synthetic research is its ability to shift our focus to the road ahead. It allows us to move from a reactive posture to a predictive one, asking questions about products that don’t exist yet and campaigns that haven’t launched. You can test sensitive ideas without fear of leaks and explore a dozen potential futures without fatiguing a single customer.

The shift from the mirror to the lens is more than a technological upgrade; it’s a change in mindset. It forces us to acknowledge the limitations of our current methods and embrace a new way of generating insight. The goal is not to stop listening to our customers. On the contrary, the goal is to make the conversations we have with them infinitely more valuable. By using synthetic panels to refine our hypotheses, eliminate ambiguity from our questions, and explore the outer edges of possibility, we can ensure that when we do ask for our customers’ valuable time, we are asking the right questions, in the right way, to uncover the truths that truly matter. The leaders who master this new capability will gain more than just efficiency; they will gain a strategic advantage, moving beyond what their customers say to understand what they will do next.
