Is the most effective way to understand real human behavior to simulate it first?
Agility requires a willingness to test ideas that sound strange at first—like asking bots to act more human by narrowing their point of view, or treating synthetic personas as real sources of insight. But when applied correctly, this thinking unlocks entirely new ways to scale customer understanding.
Today we’re going to talk about how synthetic research is reshaping how we understand audiences—and how asking the right questions can make these insights feel far less synthetic and far more human.
To help me discuss this topic, I’d like to welcome Mike Taylor, Founder & CEO of Ask Rally.
Mike – welcome to the show!
About Mike Taylor
Mike Taylor is the CEO & Co-Founder of Ask Rally. He built a 50-person marketing agency in the US and EU, has taught 300,000 students in online courses, and wrote a prompt engineering book for O’Reilly.
Mike Taylor on LinkedIn: https://www.linkedin.com/in/mjt145/
Resources
Ask Rally: https://www.askrally.com
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow
Don’t Miss MAICON 2025, October 14-16 in Cleveland – the event bringing together the brightest minds and leading voices in AI. Use Code AGILE150 for $150 off registration. Go here to register: https://bit.ly/agile150
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don’t miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand is produced by Missing Link—a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
Transcript
Greg Kihlstrom (00:00)
Is the most effective way to understand real human behavior to simulate it first? Agility requires a willingness to test ideas that sound strange at first, like asking bots to act more human by narrowing their point of view or treating synthetic personas as real sources of insight. But when applied correctly, this thinking unlocks entirely new ways to scale customer understanding. Today, we’re going to talk about how synthetic research is reshaping how we understand audiences and how asking the right questions can make these insights feel far less synthetic and far more human. To help me discuss this topic, I’d like to welcome Mike Taylor, founder and CEO of Ask Rally. Mike, welcome to the show.
Mike Taylor (00:40)
Yeah, thanks, Greg. Good to be here and happy to dive into this. There are a lot of opinions online about this topic, so hopefully we can wade through them.
Greg Kihlstrom (00:48)
Yeah, definitely, definitely. It’s something of personal interest to me as well; I’m definitely fascinated by it. Before we dive in, though, why don’t we start with you giving a little background on yourself and your role at Ask Rally?
Mike Taylor (01:02)
Yeah, so prior to this, I created a marketing agency. It was a growth hacking agency back when growth hacking was cool, and I guess you can kind of think about it as agile applied to marketing in some ways. But yeah, I did that for six years, grew it to 50 people in New York, London, and Europe. And then I left in 2020. I was
tired of managing people who managed other people who managed other managers. I wanted to get back to technical stuff. So I was learning how to code at that time, got access to GPT-3, and was blown away, like, wow, this is amazing. So I dug really deep into that, consulted for a while, became a prompt engineer. I’ve called myself that for a few years, and I created a book on prompt engineering that was published by O’Reilly last year. And then, thanks to all of that, I ended up starting my own tech business, Ask Rally, in the synthetic research space at the beginning of the year. So that’s what I’m working on full time now.
Greg Kihlstrom (02:05)
Wonderful, wonderful. Yeah, I can empathize with the marketing agency life; I sold mine in 2017. It’s definitely fun times, but not always fun times, I guess.
Mike Taylor (02:19)
Yeah, we can definitely share some more stories.
Greg Kihlstrom (02:22)
That’s the subject of another podcast, probably. But let’s dive in here, and I definitely want to talk about synthetic personas and synthetic research. You’ve used the term “persona of thought” prompting and have scaled synthetic personas into real decision tools for brands. I know we’ve talked about this on the show before a couple of times, but for those that are less familiar,
why don’t we start with you defining what synthetic research is for someone who is not quite familiar with it?
Mike Taylor (02:57)
Yeah. So the way I describe it to my six-year-old daughter is: when you talk to the computer, the computer pretends to be real people. And then you can use that to ask questions that you otherwise wouldn’t be able to ask that many people. So she hopefully gets that, and hopefully it resonates with other people too. But yeah, that’s fundamentally what’s happening here. You’ll talk to one, five, 10, a hundred personas (five thousand is the most I’ve had), and you’ll have them role-play as potential customers, potential users of your product, potential people who you’re trying to reach. And you’ll do traditional market research: surveys, MaxDiff analysis, usability studies, message testing. Really, anything that you would normally do with real people, you can do with synthetics. And the benefit is that it’s like a thousand times cheaper and faster than doing it the traditional way. So even if you can’t always get the exact same results you would get if you went out and did a study, we’re using this a lot in areas where you just could never possibly do market research at that scale: testing every single ad that you run, or exploring 100 product ideas. Things that would just be ridiculously slow and expensive to do normally.
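The pattern Mike sketches here, asking a model to role-play as specific respondents, can be illustrated with a minimal Python sketch. Everything in it is hypothetical: the persona fields, the prompt wording, and the `ask_model` stub that stands in for a real LLM API call.

```python
# Minimal sketch of synthetic-respondent prompting.
# Persona fields and the ask_model stub are illustrative only;
# in practice the prompt would be sent to an LLM API.

PERSONAS = [
    {"name": "Dana", "age": 34, "role": "busy parent", "priority": "convenience"},
    {"name": "Raj", "age": 52, "role": "CFO at a mid-size firm", "priority": "cost control"},
]

def build_prompt(persona: dict, question: str) -> str:
    """Ask the model to answer in character as one respondent."""
    return (
        f"You are {persona['name']}, a {persona['age']}-year-old {persona['role']} "
        f"who cares most about {persona['priority']}. "
        f"Answer in character, in one or two sentences.\n\n"
        f"Question: {question}"
    )

def ask_model(prompt: str) -> str:
    # Stub standing in for a real LLM call.
    return "(model response here)"

def run_survey(question: str) -> list[tuple[str, str]]:
    """Poll every persona and collect (name, answer) pairs."""
    return [(p["name"], ask_model(build_prompt(p, question))) for p in PERSONAS]

answers = run_survey("Would you pay for same-day delivery?")
```

The same loop scales from two personas to thousands: only the persona list grows, which is where the cost and speed advantage Mike describes comes from.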
Greg Kihlstrom (04:21)
So yeah, it’s speed at scale, and I would assume it’s cost too, right? Because it’s expensive to do focus group research and the like. So it’s really the antithesis of the old good, fast, cheap trade-off, right?
Mike Taylor (04:35)
Yeah,
exactly. And in some cases, actually, it’s not even really comparable to market research because you might not even be able to do market research in that space. If your product is mainstream consumer, then you can go and give out $100 Amazon gift cards and get people to join a focus group. But they might not actually even respond in a way that’s predictive of their consumption habits. So there’s all sorts of issues with traditional market research as well.
And quite often the people you’re trying to reach just aren’t going to be swayed by that Amazon gift card. If your audience is CEOs of 5,000-person companies, how do you get those people into a focus group? So yeah, I think people are using it to fill in the gaps: scenarios where there is no real-world alternative.
Greg Kihlstrom (05:27)
Yeah. So for those who are skeptical (and I know a lot of people in market research, super smart people who are great at what they do) who are asking whether a synthetic persona can ever really represent niche human behavior, what would you say to that? What are the cases where it can work really well?
Mike Taylor (05:52)
Yeah, so just out of the box with the state-of-the-art models today, when we compare the AI results to real-world studies, asking the same questions with the same definitions of who should participate, it’s something like 50 to 60% accurate. And with some testing and calibration, as we call it, where we optimize the responses until they get more realistic, we’re seeing 70 to 80% accuracy. So definitely good enough to be directionally correct. It’s never going to be the only thing that you do; in fact, I would say that quite often it’s itself making the business case to do more traditional research. We worked with a big holding company agency on a credit card project, and the reason they were doing the MaxDiff analysis with us was that they were using it for a pitch. When you’re doing a pitch (I mean, you ran a marketing agency), it’s always this trade-off: you can’t really just do all the work, because it costs too much money, but the client wants to at least see what type of work you could be doing for them in order to make the decision. So you end up having to do a bunch of work on spec. And in this case, they’re doing the AI version on spec and then pitching for the real study that they’re going to scale up.
But the synthetic study can then also inform the parameters of the human study. Some of the questions we revised after seeing that the AIs stumbled on them or didn’t give us the type of responses we wanted. So I think there’s a nice symbiosis between them. I don’t think it’s either-or.
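The accuracy figures Mike cites imply some way of scoring a synthetic study against a human one. A toy version of that comparison, with entirely invented numbers, might compare the share of respondents choosing each option and report a simple agreement score:

```python
# Toy comparison of a synthetic study against a human benchmark.
# The option shares are invented; the idea is scoring how closely
# the two answer distributions match.

human =     {"cashback": 0.42, "travel points": 0.33, "no annual fee": 0.25}
synthetic = {"cashback": 0.35, "travel points": 0.40, "no annual fee": 0.25}

def agreement(a: dict, b: dict) -> float:
    """1.0 means identical distributions, 0.0 means no overlap.
    This is 1 minus the total variation distance."""
    return 1.0 - 0.5 * sum(abs(a[k] - b[k]) for k in a)

score = agreement(human, synthetic)  # 0.93 for these numbers
```

Note that a high overall score can still hide a rank flip (here the top choice differs), which is one reason synthetic results are treated as directional rather than final.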
Greg Kihlstrom (07:35)
Yeah, yeah, that makes sense. And just as there’s skepticism about how accurate the AI is, I think the thing that’s underreported, or not talked about as much, is how unpredictable humans are and how biased humans can be as well. So you mentioned the incentive.
Again, I’m not a professional market researcher, but I’m going to make an assumption that certain people take incentives to answer a survey and certain people don’t. And it’s going to completely vary based on what the incentive is, and so on and so forth. So it’s not like there’s no bias when you ask humans. But I guess that throws another wrinkle into it as well, because how do you account for human bias in something like synthetic research and AI?
Mike Taylor (08:36)
Yeah, exactly. I mean, it’s always a difficult thing. And funny enough, in this specific AI niche, we have the opposite problem of most AI tools: they want to try and remove the bias, and we actually want to introduce it. Right? Because we want our AIs to be biased in the same ways that humans are biased, because we want to predict how they will act in certain situations.
A really good example is that everyone says they want an eco-friendly car, and then when it comes down to it, they buy the SUV, right? There are all sorts of scenarios like that where there’s a gap between the intentions people have, or what they would say in a focus group, versus what they actually do when they’re in the market. And if you just query the LLMs, they actually share that same bias: they say the eco-friendly car is what they would buy. So a lot of the work we’ve been doing is to calibrate toward the point where the AIs are speaking more truthfully about what people will actually do. And that’s something that’s quite nice, because you can’t do that with real people very easily. You can’t just find the right prompt to get them to tell the truth, whereas with AI you actually can keep testing until you find the truth-seeking prompt.
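The "keep testing until you find the truth-seeking prompt" loop can be sketched roughly as a search: score candidate prompt variants against a known real-world benchmark and keep the one that minimizes the gap. The prompts, numbers, and `simulate` stub below are all invented for illustration.

```python
# Sketch of a calibration loop: try prompt variants, score each
# against a known real-world benchmark, keep the best one.
# simulate() is a stand-in for polling synthetic personas via an LLM.

BENCHMARK = {"eco hatchback": 0.3, "SUV": 0.7}  # invented real-world purchase shares

CANDIDATE_PROMPTS = {
    "naive": "Which car would you buy?",
    "grounded": ("Think about your last real purchase and your actual budget. "
                 "Which car would you actually buy?"),
}

def simulate(prompt_name: str) -> dict:
    # Stub results: the naive prompt reproduces stated-intention bias,
    # the grounded one lands closer to real behavior.
    fake_results = {
        "naive": {"eco hatchback": 0.8, "SUV": 0.2},
        "grounded": {"eco hatchback": 0.35, "SUV": 0.65},
    }
    return fake_results[prompt_name]

def error(pred: dict, truth: dict) -> float:
    """Total absolute gap between predicted and benchmark shares."""
    return sum(abs(pred[k] - truth[k]) for k in truth)

# Pick the prompt whose simulated answers best match reality.
best = min(CANDIDATE_PROMPTS, key=lambda name: error(simulate(name), BENCHMARK))
```

With real data, `BENCHMARK` would come from a past study and `simulate` from actual model calls, but the selection logic is the same.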
Greg Kihlstrom (09:56)
Yeah, yeah. Well, I think that also makes the case for what you were saying earlier, which is a mix of synthetic and real-world research. And I would add to that the actual behavioral data they generate, right? It all kind of needs to get reconciled. Because at the end of the day, how do you uncover the biases that help point the synthetic research in the right direction? How does that work? Is it kind of observing after the fact?
Mike Taylor (10:27)
Yeah. What we try to do is take various studies that we know are problem studies; our customers come to us and say, hey, I don’t think this is quite right, or whatever. And we’re reading a lot of the research papers. There’s actually a really deep body of research here from social scientists, because this is like a playground for them, right? It’s pretty amazing to be able to do this. And there are fewer ethical quandaries than there would be if you were trying to, I don’t know, sway real people into saying things one way or another. So yeah, that’s where a lot of our focus is: trying to figure out which models are better at predicting real-world behavior, which prompts, how many examples we have to give them of what realistic responses look like, that sort of thing.
Greg Kihlstrom (11:18)
Let’s go back to that symbiosis between synthetic and real world. You mentioned that you might do synthetic research first to get some directional guidance on what to do, and then take it into the real world. Is that generally how it works best? Can you talk through that?
Mike Taylor (11:39)
Yeah. So you never have the right answer, the complete answer. Nobody’s ever going to come to you and say, if you launch the product in exactly this way, it will work. Right? So you’re only trying to improve the odds. And there are two major things you can do to improve the odds: one is you can avoid costly mistakes, avoid doing something that is predictably bad, and the other is you can identify untapped opportunities. And so I would say that
typically, you start with synthetic research just because it’s cheaper and faster. In the early days of an idea, you’re just looking for a steer. You’re like: there are all these opportunities, which one should I pursue? I have all these names, which name is interesting? I have all these potential ideas to test. Especially if you’re using AI to generate ideas as well, you could have hundreds or thousands of potential paths.
So a lot of the time it’s just narrowing it down, and you don’t actually know. You can’t go out and test a thousand different names for your brand, right? You’re just never going to have the budget to do that. But you might be able to test 10. So if AI can bring you down from a thousand to 10, then you can build the business case to say, okay, we’re going to go after these 10. Or, if you disagree with the AI, it gives you something interesting to explore, because you go: hold on, why did I disagree with that? What is it that I like about that name?
And then it gives you a bit more of a chance to react against what the AI is telling you. Sometimes our customers get better results just by having a reaction and going: okay, why do I want to fight for this name? What is it the AI has said here?
Greg Kihlstrom (13:18)
Right. So how does this compare to using predictive analytics? You know, just looking at available current-customer data, let’s say, and running predictive models. What are maybe the pros and cons of doing something like this versus predictive?
Mike Taylor (13:40)
Yeah, well, GPTs are predictive models themselves, but they’ve been trained on every single variable, right? There are billions or trillions of parameters for GPT-4 or that class of model. So you can simplify it and say, I’m going to build a predictive model. Say I’m trying to predict what movie is going to be a big hit this summer. I can go, okay, well, movies about wealthy vigilante orphans seem to do really well: that’s Batman, Iron Man, The Count of Monte Cristo, Zorro. There’s a whole deep bench of movies like that. But then, okay, how come Iron Man was so much more successful than the Batman franchise? It’s not just because of the underlying character.
In fact, for the longest time, Batman was way more successful than Iron Man, until Marvel got their act together. And now they’re trying to revive Batman and make that big again. So there are all these smaller variables: timing, current culture. Even within the franchises, some of the Batman movies, like the Christopher Nolan ones, have done way better than the others. So at some point you just can’t really understand all of the variables. But you have access, for a couple of dollars, to this incredibly predictive model that is, to some degree, a human brain simulator. It has been trained on every human brain that’s been on the internet. So they’re just a very, very good baseline to start with. And you can add traditional predictive models into this as well and weight your decision in some way. This is
the type of model that you could only dream of having access to as a researcher previously. And now you have it available in an API. That’s pretty powerful.
Greg Kihlstrom (15:37)
Yeah, yeah, definitely. So it sounds like, and in my experience too, with predictive models built on just a narrow set of data, you’re not taking into account all of the other factors. Because as humans, we’re influenced by, if not everything, then lots of things, right? So it’s not just a narrow set of characteristics in a spreadsheet or something like that, right?
Mike Taylor (16:04)
Exactly, yeah. And at the end of the day, these models have seen how people interact in different scenarios. So especially if you don’t know that much about the customer, you can get a pretty good sense of who that customer is and get out of your own head, I think. Would I listen to this model instead of
Steve Jobs, if he were giving me advice, or one of the great product thinkers or marketing thinkers of the world? No, I’d probably weight their advice higher. But do I have access to that? No. So especially if I’m on my own building my business, or I’m preparing a presentation and I don’t have any resource where I can go talk to someone who has deep expertise in that field, this is a really good alternative.
Greg Kihlstrom (16:56)
Yeah. And even if you could talk to Steve Jobs, it might be interesting to see what this also says, right? Because everyone is fallible. I think what I’m hearing you say is that it’s not that there is one single source of truth; all of these things are beneficial to look at and compare, and even to prompt someone to ask deeper questions. Is that correct?
Mike Taylor (17:27)
Yeah, exactly. You can iterate, whereas you can’t really do that very easily with a focus group. You have to schedule it and talk to people; or with surveys, wait till all the responses come in, and then you’re like, oh, I wish I had asked this question, and you have to follow up. One of the things we find is people will take the results of the research they’ve already done and then create personas based off that. So we did 126 customer interviews when we launched,
and then we have that as an audience in Rally that we ask questions of when we’re deciding what to build. So it’s kind of a way of continuing that focus group.
Greg Kihlstrom (18:07)
Yeah, yeah. So can you talk a little bit about Ask Rally and how does it work and things like that?
Mike Taylor (18:15)
Yeah, so at its most basic level, it’s similar to ChatGPT, except you’re chatting with many GPTs at once. You put out a question and you get 100 responses, for example. And you can create those personas: you can go in and define your audience and then generate the personas. We also have a bunch of pre-created personas for you to try in different fields. We have one that matches US census data, for example, if you’re going after the US market. And then you can just ask questions, ask follow-up questions, or you can also do voting. So you can do polling and say, okay, which of these options would you choose? You can upload images and videos, specifically with the Google models. So we have Google Gemini, we have OpenAI’s models, we have some open-source models, and the Anthropic models as well. So yeah, people can switch between them.
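The voting mode Mike describes, where many personas each pick an option and the tally gives a distribution rather than a single answer, can be sketched in a few lines. The `vote` stub below just picks randomly; a real system would prompt an LLM in character for each persona.

```python
# Sketch of persona polling: many simulated voters, one tally.
# vote() is a random stub; a real implementation would ask an
# LLM to choose in character as each persona.

import random
from collections import Counter

random.seed(7)  # deterministic for the sketch

def vote(persona_id: int, options: list[str]) -> str:
    # Stub: a real system would build a persona prompt and query a model.
    return random.choice(options)

def poll(n_personas: int, options: list[str]) -> Counter:
    """Collect one vote per persona and tally the results."""
    return Counter(vote(i, options) for i in range(n_personas))

tally = poll(100, ["Option A", "Option B", "Option C"])
```

The output is a distribution over options, which matches how the results get used: as relative preference shares, not a single verdict.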
Greg Kihlstrom (19:08)
That’s great. So, as we wrap up here, a couple of things for you. What do you think is the biggest misconception or misunderstanding about synthetic research and synthetic personas, good or bad?
Mike Taylor (19:22)
Yeah, I would say that a lot of the misconceptions about synthetic personas are not specific to this industry. Either you believe in AI and you use it for everything, or you see it as a threat or a scam, and there’s no way I’m going to convince you. It’s not my job to convince people of that; they’ll figure it out eventually.
But I think you have some people in the industry who have taken a stance against AI generally, because they’re annoyed that people’s jobs might be at risk, which is, I think, a perfectly valid response. Or they don’t like the copyright angle, the training on copyrighted material. Or they made an offhand comment at a conference, people responded well, and they thought,
I’m going to make this my whole personality, hating on AI. Right? That’s fine. At the end of the day, people have to make their bets. But I think if you already believe in the power of AI and you’re using ChatGPT or Claude or whatever to get feedback, then you just understand this intuitively and you’re going to use it. And almost everything people complain about with AI is also a problem with humans. That’s what I find. Whenever I see someone go, what about the hallucination? I’m like, have you talked to a consumer? Because they make stuff up all the time.
Greg Kihlstrom (20:57)
Right. Yeah. Love it. Well, last question for you; I like to ask this of everybody on the show. What do you do to stay agile in your role, and how do you find a way to do it consistently?
Mike Taylor (21:09)
Yeah, good question. I came from an economics background, and when I was studying in school, we were learning about Japanese manufacturing; that was the cool thing at the time to learn. So that was ingrained in me, kanban boards and all that stuff, from the very beginning. I would say I’m an agile native in that respect. That’s always just been the way I operate, and I don’t know if I’d be able to operate in any other way, which is probably why I’ve gravitated towards startups rather than waterfall processes. It’s almost like I’m a fish in water and I can’t describe the color of water; it’s just everything I see. But very specifically, I would say one of the things I like to do is think about how my beliefs have changed with new information. A Bayesian way of thinking is how you might describe it. I never say yes or no; I always adjust mentally. I think about it as adjusting my probabilities. So if I run a test and something fails, I don’t say I’m never going to do that again. I just say I’m less likely to try that next. The other things that I could have done just got a little bit higher on the list, and that one dropped a little bit lower. I think if you think that way, it gives you the mental agility to contradict yourself, and you need to be able to do that when you’re operating under uncertainty, which is the case when you’re building an AI startup.
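Mike's habit of nudging probabilities rather than ruling options out can be written down as a small multiplicative update over a set of options, loosely in the Bayesian spirit he mentions. The option names and the 0.5 down-weighting factor here are invented for illustration.

```python
# Sketch of "adjust probabilities, don't say never": a failed test
# down-weights an option multiplicatively instead of eliminating it,
# and the remaining weights are renormalized to sum to 1.

def update(weights: dict, option: str, factor: float) -> dict:
    """Scale one option's weight by factor, then renormalize."""
    new = dict(weights)
    new[option] *= factor
    total = sum(new.values())
    return {k: v / total for k, v in new.items()}

# Three candidate marketing channels, initially equally likely.
beliefs = {"channel A": 1 / 3, "channel B": 1 / 3, "channel C": 1 / 3}

# A test of channel B fails: make it half as likely, not impossible.
beliefs = update(beliefs, "channel B", 0.5)
# channel B drops to 0.2 while A and C each rise to 0.4.
```

Because nothing ever hits zero, a later positive result can still pull an option back up the list, which is exactly the "able to contradict yourself" property he describes.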







