This article is based on an interview with Ali Henriques, Executive Director of Market Research at Qualtrics, conducted by Greg Kihlström, AI and MarTech keynote speaker, for The Agile Brand with Greg Kihlström podcast. Listen to the original episode here:
We’ve all been in that meeting. The one where a nine-figure campaign is on the line, the creative is sharp, the media plan is ambitious, and the only thing standing between you and a launch is a six-week research cycle. In the time it takes to field a traditional study, a new competitor could emerge, a cultural trend could fizzle, or the entire market sentiment could shift on a dime. This “speed to insights” bottleneck isn’t just an operational headache; it’s a strategic liability. The cost of waiting is often eclipsed only by the risk of not waiting and making a decision based on little more than institutional memory and a well-argued opinion.
For decades, we’ve accepted this trade-off as a fundamental constant of marketing. Rigor takes time. But what if it didn’t? The conversation around artificial intelligence (AI) in marketing has been dominated by content generation and personalization, but its most profound impact may be in how we understand our customers before we ever speak to them. We’re moving toward a world where synthetic data and finely tuned AI models can simulate customer behavior, allowing us to test hypotheses, explore curiosities, and de-risk major decisions in minutes, not months. This isn’t about replacing the deep, strategic work that defines our craft. It’s about augmenting our intuition and filling the vast gaps where, until now, we’ve been forced to simply guess.
The True Cost of a Slow Answer
The most significant friction point in a large marketing organization isn’t always a failed campaign; it’s the dozens of potentially brilliant ideas that never even get tested. When the process for validating an idea is slow, expensive, and resource-intensive, only the “safest” or most politically backed concepts ever see the light of day. This forces teams to make countless smaller decisions based on gut feel, which can accumulate into significant strategic drift. Ali Henriques frames this not as a failure of research, but as an opportunity that has been historically underserved.
“You know, what immediately popped to mind… is actually the cost of doing nothing. Think about all the decisions that are made based on gut or, you know, that meeting with that team where we just feel like this is the right thing to do. To me, there’s incredible opportunity there because it’s just not really been cared for, because research is a bottleneck.”
This is the crux of the issue. The scenario we all dread is launching a feature or a message only to have the research land on our desks six weeks later confirming it was the wrong call. By then, the assets are in production, the media is booked, and the cost to pivot is astronomical. Henriques suggests we shouldn’t view AI-powered solutions as a replacement for traditional methods, but as a powerful tool to service the countless decisions that currently receive no data-driven validation at all. It’s about creating a two-track system: deep, multi-modal human research for the most strategic, weighty decisions, and AI-led, quick-turn insights for everything else. This allows us to be both rigorous and responsive, applying the right tool to the right problem.
Not All AI Is Created Equal: Beyond the Generalist LLM
For the discerning marketing leader, the immediate question is one of validity. After all, you can ask a general Large Language Model (LLM) to “pretend to be a millennial mom” and generate a response. What makes a specialized synthetic data model any different? The answer lies in the training data and the resulting nuance. A generalist model is trained on the vast, chaotic expanse of the public internet. A specialized model for market research, however, is trained on something far more specific: millions of structured survey responses. This distinction is critical, as it shapes the model’s ability to replicate the beautiful, and often frustrating, irrationality of human behavior.
“…what you see with the out-of-the-box models… Those models tend to be a lot more similar in pattern and behavior, and so you don’t see the variability. You see them cluster around ‘I either love it or hate it, I’m agreeing.’ And they move together… Whereas our model, again, that’s been studying kind of human behavior and how we react and respond to survey questions, almost perfectly matched that of the human responses.”
Human opinion isn’t a monolith; it’s a distribution. When you survey a thousand people, you don’t get a single answer—you get a spread, with clusters of agreement, pockets of dissent, and a healthy dose of “it depends.” As Henriques points out, generalist LLMs struggle with this, tending to regress to a confident mean. A properly tuned synthetic model, steeped in actual survey data, can replicate that variability. It understands that people hold conflicting views and that their preferences are not always logical. For a marketer, this is everything. We don’t market to an average; we market to segments, to personas, to the varied and diverse collection of humans that make up our audience. A tool that can accurately simulate that diversity is infinitely more valuable than one that just gives you the most probable answer.
Putting It to the Test: From Impractical to Actionable
The true test of any new technology is its practical application. In a collaboration with Booking.com, Henriques and her team moved beyond theory to demonstrate how synthetic data can unlock entirely new avenues of research. They undertook a psychographic segmentation—a complex study of consumer attitudes, interests, and lifestyles—that the company had never attempted before. A project of this nature would have been prohibitively difficult with human respondents, requiring a questionnaire so long and detailed that respondent fatigue would render the data useless.
“Her hobbies and interests question would have been rejected by me if we were running this with human responses. It was like 75 things on this list. But Synthetic takes the time to go through those and select all of the hobbies that… that respondent engages in… They’re now taking some of the learnings here about social media behaviors and they’re going to pilot some YouTube Reels based on what we found with our synthetic segmentation.”
This example highlights two key benefits. First, synthetic respondents don’t get tired or bored. You can present them with a 30-minute, 75-option questionnaire and receive considered responses for every single item. This allows for an unprecedented level of depth and exploration, uncovering nuanced segments like the “always-on trendsetter” and the “independent traditionalist.” Second, and more importantly, the insights were directly actionable. The segmentation revealed specific social media behaviors that led Booking.com to pilot a new content strategy with YouTube Reels. This is the holy grail for any insights function: a direct, unbroken line from a research finding to a tangible marketing action. It proves that speed doesn’t have to come at the expense of depth or utility.
The Evolving Role of the Insights Team
So, what does this mean for the structure and function of our teams? If an AI can generate survey responses in 15 minutes, what is the role of the highly skilled, often highly paid, research professional? The fear of automation is perennial, but this shift points not toward obsolescence, but elevation. By automating the most time-consuming and mechanical parts of the research process—data collection and basic analysis—we free up our best minds to focus on higher-order tasks: strategy, synthesis, and translating insight into business impact.
“I’ve got PhDs sitting here, right? Building charts and PowerPoint… they should be driving strategy and product investment… decisions, not producing charts in PowerPoint.”
This sentiment resonates with any leader who feels their most valuable talent is buried in execution. The future role of the insights professional is less about being a data producer and more about being a strategic advisor. They will be the ones stitching together disparate data sources—synthetic survey data, operational data, behavioral analytics—into a coherent narrative. They will be the ones sitting with product and marketing leaders, bridging the critical gap between understanding and action. This technological shift allows us to re-center the human expert on the part of the process where they add the most value: thinking.
The integration of synthetic data into our research toolkit represents a fundamental change in our relationship with time. For years, we’ve operated under a paradigm where our curiosity was constrained by our capacity. We could only ask as many questions as we had the time and budget to answer. This new approach effectively reduces the cost and time of asking a question to near zero, fostering a culture of continuous learning and experimentation. We can now test the follow-up questions that arise in a debrief, validate a sudden “what if” idea from a creative session, and provide data where previously there was only a void.
This is not a future where human expertise is diminished. It is one where it is amplified. By automating the menial, we elevate the strategic. We empower our teams to move beyond the “what” and focus on the “so what” and the “what next.” For marketing leaders, this means faster, more confident decision-making and a greater ability to connect with customers in a world that refuses to stand still. The bottleneck is breaking, and the opportunity lies in what we choose to do with our newfound speed.