Imagine your biggest AI-powered marketing decision is challenged by your CEO or legal team. Could you, right now, defend both the outcome and how the AI arrived at it?
Agility requires more than just speed; it demands the confidence to act on insights decisively. That confidence is impossible if the tools making the recommendations operate like an unexplainable black box.
Today, we’re going to talk about the growing trust deficit in enterprise AI. While we’re all chasing the speed and convenience automation promises, many leaders are realizing that fast answers are useless—and even dangerous—if you can’t verify or defend the logic behind them. We’ll explore how to move beyond ‘black-box’ systems to build an AI strategy grounded in transparency, credibility, and true, defensible decision-making.
To help me discuss this topic, I’d like to welcome Chris Willis, Chief Design Officer & Futurist at Domo.
About Chris Willis
Chris Willis is Chief Design Officer at Domo, where he brings more than 25 years of design and product leadership to the company’s data, analytics, and AI platform. Since joining Domo early in its history, he has played a key role in shaping the platform’s design strategy, helping make complex data more accessible and useful for customers across industries. Before Domo, Chris co-founded HOUR Detroit magazine and Footnote.com (now Fold3.com), which was acquired by Ancestry.com, and worked as an award-winning illustrator, journalist, and author. His experience blends design thinking, technology, and emerging trends to drive innovation and build tools that solve real business problems.
Chris Willis on LinkedIn: https://www.linkedin.com/in/cwillis
Resources
Domo: https://www.domo.com/
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://aglbrnd.co/r/2868abd8085a9703
Drive your customers to new horizons at the premier retail event of the year for Retail and Brand marketers. Learn more at CRMC 2026, June 1-3. https://aglbrnd.co/r/d15ec37a537c0d74
Enjoyed the show? Tell us more and give us a rating so others can find the show at: https://aglbrnd.co/r/faaed112fc9887f3
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don’t miss a thing: get the latest episodes, sign up for our newsletter and more: https://aglbrnd.co/r/35ded3ccfb6716ba
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand is produced by Missing Link—a Latina-owned strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
Transcript
Greg Kihlström: 00:00:09 Imagine your biggest AI-powered marketing decision is challenged by your CEO or your legal team. Could you right now defend both the outcome as well as how the AI arrived at that conclusion? Agility requires more than just speed. It demands the confidence to act on insights decisively. That confidence is impossible if the tools making the recommendations operate like an unexplainable black box. Today we’re going to talk about the growing trust deficit in enterprise AI. While we’re all chasing the speed and convenience automation promises, many leaders are realizing that fast answers are useless and even dangerous if you can’t verify or defend the logic behind them. We’re going to explore how to move beyond the black box systems to build an AI strategy grounded in transparency, credibility, and true defensible decision-making.
To help me discuss this topic, I’d like to welcome Chris Willis, Chief Design Officer and Futurist at Domo. Chris, welcome to the show.
Chris Willis: 01:37:924 Oh, thank you, Greg. I’m really happy to be here.
Greg Kihlström: 01:40:178 Yeah, looking forward to diving in here. Before we do, though, why don’t you give a little background on yourself and your role at Domo?
Chris Willis: 01:47:336 Yeah, so I’m the Chief Design Officer and Futurist, which is a kind of unique role. I started with Domo 15 years ago; I was one of the founding members. We started off building an AI and data products platform to help organizations gather their data and turn it into new kinds of automations, assets, and knowledge. We’ve been working on that problem for about 15 years — I think we were a bit naive thinking it would be a lot simpler. But now, in the AI age, having a proper data foundation and a platform for it takes on new importance. As Chief Design Officer, I also get a front-row seat to some really interesting behind-the-scenes action happening at some of the biggest companies on the planet. So I really cherish my position. I really love it there.
Greg Kihlström: 02:39:076 Yeah, love it. That’s great, and I know you’ve got a lot of great insights to share. Where I want to start, just to set the context, is at the high level. Today we’re talking about AI and this trust gap. But honestly, I’ve been working with marketers for quite a while — I’ve been a marketer for quite a while — and there’s always been what I would call a trust gap: okay, we put some numbers and some charts on a PowerPoint, a web page, whatever the case may be. How do we back that up? So there’s always been a little skepticism, I would say, from the C-suite about what marketing and sales and other teams may be saying. But now we’ve got AI doing this. It’s no longer some human in the office you can ask, how did you make these calculations? Now we’ve got AI — and as we’ll discuss, we’ve had algorithms and predictive models for a long time — but in the last couple of years, how has AI changed this paradigm? Not only are we asking humans anymore, so what do you do to address this trust gap when AI is telling us things that might sound great, or not?
Chris Willis: 04:07:375 Yeah, that’s quite a question.
Greg Kihlström: 04:08:706 That’s a very long question, but yeah.
Chris Willis: 04:10:776 It’s a very long question. I was gonna say, by the way, you beat me to the trust gap in marketing. I worked with a lot of marketing groups, so Right. yeah, appreciate that openness. I, I think, I think it helps, especially in times of of real innovative or technological disruption, to kind of provide a little bit of a historical perspective and go back to first principles, because I feel like, you know, this is definitely marketing-related. There’s a lot of hype out there and it’s very difficult to orient yourself towards where things are going and what is really possible, right? What kinds of capabilities are these tools truly unlocking? And I think that’s that’s a challenge. I would say just from a very, very practical standpoint. You know, I work in software development, if I were a product manager and I was writing a PRD for describing a large language model, I don’t know what I would leave out, right? So a large language model isn’t really designed for any one person. It’s not really designed to do any one thing. It does lots of different things in lots of different ways. And because it’s, you know, probabilistic, it has a random component. It doesn’t even do those things in the same way. That’s a very different kind of technology than I think we’re used to. And as well, you know, if you look at just AI in general, because we’ve used AI in many different aspects of business for for decades, right? you know, linear regression is is a form of machine learning, which is under an AI category. And you know, it helps you make predictions. That’s great. Frankly, until this moment, all that AI has ever done is really classified or made a prediction, right? So that’s helping you figure out, okay, you put something in your shopping cart, what should you put next in that shopping cart? Or, you know, even weather forecasts and things like that. One of the fundamental differences is that these new language models started to generate, and even like, you know, the image generation models, these diffusion models, right? Where you you you sort of build things out of just random noise. And I think that is such a paradigm shift, especially when it comes to marketing, right? Cause in, in marketing you want to you’d be using those tools to predict how well maybe a campaign’s going to do, but now you’re actually using to create the campaign. So that’s a that’s a big shift. I would also say that there’s people are starting to have a very personal relationship with some of this technology in a way that they didn’t have before. So previously, big technological shifts came out of big institutions or governments. The internet, for example, that was a big government project, right? GPS, these are, you know, the the tech used in smartphones, these were things that were big projects. They were kind of, you know, in some ways kind of trickle down consumer products after a while. This is a product that’s started with consumers. And so we’re all, I think, in many ways part of this experiment. And I think as, you know, leaders, you know, practitioners, et cetera, are starting to play with some of this, it’s it’s getting a little bit more difficult, frankly, to understand where that what researchers would call the jagged edge of knowledge and applicability lies. So, you know, for example, you might find yourself saying, I want to use ChatGPT to make a a funny poem for my kid for their birthday. 
And it’ll come up with something, and maybe it’s good enough; maybe you tweak it a little. But then you say, okay, now I want to write a whole marketing campaign. At first you think, oh, that looks good. Then you look at it through the eyes of your experience and domain expertise and say, wait a minute, something’s a little off. Maybe it’s a good start; there’s an outline. So I think we’re moving from that learning and experimentation phase to companies trying to figure out, well, how do we actually get performance out of this? And what is it good at? I don’t think that frontier, that jagged edge, is very clear in many ways. I think there are some ways to investigate it, and some good strategies for starting, if you want to talk about some of those.
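For readers who want the “all AI ever did was classify or predict” era Chris mentions made concrete, here is a minimal sketch of linear regression fit by ordinary least squares. The ad-spend numbers are invented purely for illustration.

```python
# Toy ordinary-least-squares linear regression: the "prediction era" of AI.
# Data (ad spend in $k -> conversions) is invented for illustration only.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # (slope, intercept)

spend = [10, 20, 30, 40, 50]
conversions = [120, 210, 290, 410, 480]
m, b = fit_line(spend, conversions)
print(f"predicted conversions at $60k spend: {m * 60 + b:.0f}")
```

Unlike the generative models discussed next, this kind of model only maps inputs to a predicted number; it never creates new content.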
Greg Kihlström: 08:14:141 You know that moment when marketing wants a landing page, design mucks it up, and engineering says, yeah, we’ll get to it. Thousands of businesses from early stage startups to Fortune 500s are choosing to build their websites in Framer, where changes take minutes instead of days. Framer’s a website builder that works like your team’s favorite design tool. With real-time collaboration, a robust CMS with everything you need for great SEO, and advanced analytics that include integrated A/B testing, your designers and marketers are empowered to build and maximize your dot com from day one. Changes to your Framer site go live to the web in seconds with one click, without help from engineering. Framer is also an enterprise solution, giving brands like Perplexity, Miro, and Mixpanel the confidence they need to build their websites in Framer. Learn how you can get more out of your dot com from a Framer specialist or get started building for free today at framer.com/agile for 30% off a Framer Pro annual plan. That’s framer.com/agile for 30% off. framer.com/agile. Rules and restrictions may apply.
Greg Kihlström: 09:17:154 I think part of this is that idea — even to use the ChatGPT example that you used. I’ve done that, I’m sure many have done it, and gotten similar-enough results. If you squint, it’s fine. And then you start: oh, that’s not what we sell, or that’s not what I wanted, things like that. And there are several parts of this. One: any LLM, if you feed something into it, is not going to give you the exact same response twice, right? There’s a stochasticity to it, or something — somebody look that up. So there’s that. But it also speaks to this idea of the black box: how is it making the decision? To your point, you can’t document an LLM to the extent that you might document other types of software, because it’s kind of everything, built over a long period of time. So when humans are relying more and more on AI for decision-making — humans want things quickly and conveniently, and the enterprise needs things that have rigor behind them. How do you deal with that tension in a way that meets in the middle? What’s the right way to think through that?
Chris Willis: 10:52:160 Well, you’ve already seen some companies go through an AI hangover where they misunderstood what the technology could do. They thought, oh, this will replace human intelligence, human judgment, et cetera. And maybe there were other factors in why they made those moves. But they’ve quickly realized that’s not necessarily the case, to the point you made: the random or stochastic aspects of these models mean you don’t necessarily get the same thing out every single time. And if you’re trying to run an enterprise where you’re putting data in in 15 different ways and getting 15-plus different results, it’s going to be really hard to figure out how to run that enterprise. No one can replicate or explain why they made that particular decision. That’s not to say there isn’t usefulness in any of this; I think we’re still in the process of figuring that out. But from a very tactical perspective, there are sort of two camps of people, and researchers have studied some of these groups. There are the people who use AI models completely — and by completely I mean without a lot of judgment or review — and there are those who don’t use them at all. I would put them in the same camp: either all in or all out. Those people are going to be challenged in redefining their work in the right kind of way, compared to the people who are figuring out how to create a hybrid work environment with these tools. I know it sometimes sounds cliché that it’s not person versus AI, it’s person using AI to their best advantage, but you’re starting to see some of that. I haven’t seen the signs that AI is replacing everybody. There are a lot of apocalyptic stories floating around, but I don’t feel they’re grounded in any real, straightforward, understandable principles. And the reason is, to your point, you can’t explain these things. At some point, someone in your organization is going to have to own that decision. That’s the way organizations are set up. AI makes it a little trickier, because those organizations are also set up to keep people in their lanes — and AI doesn’t care about your lanes. Now you have marketers coding. You have product managers and developers creating marketing ideas. So one of the most overlooked impacts of AI is that in many ways it’s coming after your organization and your culture. You have to figure out not how to turn the people formerly known as doers into automators, but how doers turn into deciders. That means the work shifts from mostly doing all the tasks — and some of those tasks might have to stay human for quite a while — to: how do I leverage the tool and apply my judgment, my taste, my curative and sense-making abilities? Because the models, to be frank, do not learn. They’ve been trained on everything, but they’re kind of the equivalent of Groundhog Day: they wake up with every new prompt as if it’s the same day.
And so there’s a lot of infrastructure that frankly needs to be built, because — not to make it sound too technical — these models are driven by and focused on syntactic information: words, prompts. They don’t have judgment, and they don’t have semantic understanding. They sometimes act like they do, but it’s very easy to disprove: put in one question, then slightly tune it, ask the same question, and you get a very different answer. So this is part of the learning process. And I do feel there are definitely places where there are performance gains and efficiencies to be had, but I don’t think that’s necessarily obvious for everyone. So the strategy here — and I think you’ve alluded to this — is that you have to engage with this technology a little more deeply than you did before. When the cloud data warehouse wave came and everyone said, yeah, we’ve got to put all our data in the cloud, you frankly didn’t have to code anything to understand the concept of cloud data. You use Dropbox? It’s like that: I put the thing in there and then I can access it anywhere. This is different. Understanding it from a personal perspective will help roll ideas up to a more organizational perspective, but you have to be cautious about doing that too quickly. One of the traps I’ve seen in many organizations — and I’ve been sitting in these meetings with C-suite executives — is that they used the model and said, that was fun, why can’t it do this for my entire business? Or they’ll just assume it will. It answered this question about an Olympian I just watched on TV; why can’t it answer why we didn’t make our Q4 numbers? And the reason is it has no idea about your business, because it’s never seen your business. But consumers don’t see that. It feels like magic. And we have to, in some ways, vaccinate ourselves against some of that magical thinking.
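To make the stochasticity Greg and Chris keep returning to concrete, here is a minimal toy sketch of temperature-based next-token sampling. The prompt, vocabulary, and probabilities are all invented for illustration and are not drawn from any real model.

```python
import random

# A toy next-token sampler showing why the same prompt can yield
# different outputs: generation samples from a probability distribution.
# The prompt, vocabulary, and probabilities are invented for illustration.
NEXT_TOKEN_PROBS = {
    "the campaign should": [
        ("target", 0.40), ("launch", 0.30), ("focus", 0.20), ("pivot", 0.10),
    ],
}

def sample_next(prompt: str, temperature: float = 1.0) -> str:
    tokens, probs = zip(*NEXT_TOKEN_PROBS[prompt])
    # Temperature reshapes the distribution: values near 0 sharpen it
    # (nearly deterministic), values above 1 flatten it (more varied).
    weights = [p ** (1.0 / temperature) for p in probs]
    total = sum(weights)
    return random.choices(tokens, weights=[w / total for w in weights])[0]

# Same input, two runs -- the outputs can and often do differ.
print(sample_next("the campaign should"))
print(sample_next("the campaign should"))
```

Running this twice with the same prompt can produce different words, which is the “15 different results” problem Chris describes at enterprise scale.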
Greg Kihlström: 16:41:406 Yeah, I think it’s the magic plus — and you touched on this a little earlier, too — the conversational nature of something like ChatGPT makes it feel like you’re having a conversation, makes it feel personal and emotional. There’s a psychological aspect to this also. We’re human; our judgment can get clouded by affinity for things. Probably a topic for someone’s research paper at some point, if not already. But the other thing I wonder, from you as a Chief Design Officer: how do you design the interface when, to your point, a marketer, a software engineer, an executive — it’s democratized to the point where anyone can do anything, and yet we still need transparency on how things work? Sorry, I don’t quite know how to phrase this. I’m glad I’m the one asking and you’re the one answering.
Chris Willis: 17:53:236 There are a lot of questions in there. Let me pass it back to you a little bit.
Greg Kihlström: 17:54:196 I guess from a design perspective, it’s like: democratization of all this stuff is great, but how do you also design an interface that’s still usable, where a human can focus on what they need?
Greg Kihlström: 18:10:489 Does that make sense?
Chris Willis: 18:11:066 Okay, so this is definitely in my wheelhouse. I’ve thought about this a lot, and it comes up every single day. Taking a step back: if I’m really honest with myself — and I am, and with you too — the usability of enterprise software has been a pretty low bar, right? We’ve all had to use those apps where you have to put in your expense reports and the experience is so bad you wish you’d never gone on the trip in the first place. There is, I think, a certain necessity in organizations for everyone to work off the same database, and maybe we have to push some of the same buttons, because otherwise we fall into the trap we had before, where everything is so bespoke and individual and idiosyncratic that it doesn’t come together in any useful, organized way. That makes sense. However — and I’ve designed a lot of applications and a lot of different kinds of software — I’ve always designed a compromise. I’ve always had to design basically one thing that works for many different people. This is actually one of the things I get really excited about with large language models. We tend to look at software as being customizable or personalizable, and we think that’s the epitome of what we can do from a user experience standpoint. But LLMs, at least in what we’ve been working on, open up the next level, which is an individualized experience. The gap you have to fill is the context. Context is a big, broad term, but it needs to be defined, created, and supported as a new kind of context infrastructure. If you can do that, you open up a tremendous amount of potential: taking software with a UI that kind of worked well for everybody to — wait a minute, I only need one little aspect of that software. Use context and AI to reduce the findability and discoverability problem. Many enterprise software products have mega menus. That shouldn’t be a thing, but it is because the product has to work for everybody. All those things have to be accessible, and you have no way of inferring what anyone would need. But if I know you work in marketing, and let’s say you’re using a data platform like Domo, then I know there’s probably 99% of the stuff in there that you do not need to see. There’s your stuff. There’s the next ring outside of that, where we can infer context: if you liked this, you’ll probably like that. That’s a model we understand pretty well. But on top of it, there’s other context you can leverage using these models that was very difficult, if not impossible, to use before: what are some of the other things we can know about you? We have a term for this. We call it proprietary exhaust.
It’s kind of a strange term, but it’s the idea that when you’re interacting with things, or in other aspects of your work, you create new pieces of data and information that we used to just throw away — they evaporated into your organizational sphere. Those could be captured and then applied so that, for example, the tools have short-term versus long-term memory. You’re having a conversation — conversational interfaces make a lot of sense with these models because they deal with language — and you can then create actual, tangible digital artifacts. But say you’re asking about a bunch of things. Those things might be really important now, but in a week or two they might be less important. If you’re not good about architecting memory — and by the way, the models do not have memory; some kind of have it as a feature, but it’s not built in, it’s not learning — you get the problem I have, and maybe you don’t, around Christmas. I get on Amazon and order a bunch of things that are perfect for my family but that I have no interest in, and for the next six months it’s constantly telling me I should buy more of those — a crocheting kit and a gardening book — and I don’t do either of those things. So I think that gives us an idea of the gap. Yes, there’s a trust gap, which we can talk more about. But there’s also a context gap, which is really important. And until we start figuring out some of those solutions, it’s going to be limited.
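As a rough illustration of the short-term-versus-long-term memory architecture Chris sketches — recent interactions mattering now but fading in a week or two, layered over durable facts — here is a minimal sketch. Every name here (ContextStore, half_life_days, and so on) is invented for illustration and is not part of any Domo product or LLM API.

```python
import time

# A minimal sketch of "context infrastructure": decaying short-term memory
# layered over durable long-term facts. All names are illustrative only.
class ContextStore:
    def __init__(self, half_life_days: float = 7.0):
        self.half_life = half_life_days * 86400  # seconds
        self.short_term = []   # (timestamp, note) pairs from recent chats
        self.long_term = {}    # durable facts, e.g. {"role": "marketing"}

    def remember(self, note: str) -> None:
        self.short_term.append((time.time(), note))

    def relevance(self, timestamp: float) -> float:
        # Exponential decay: a note's weight halves every half-life.
        age = time.time() - timestamp
        return 0.5 ** (age / self.half_life)

    def build_prompt_context(self, min_weight: float = 0.25) -> str:
        # Keep durable facts always; keep recent notes only while relevant.
        facts = [f"{k}: {v}" for k, v in self.long_term.items()]
        recent = [note for ts, note in self.short_term
                  if self.relevance(ts) >= min_weight]
        return "\n".join(facts + recent)

store = ContextStore()
store.long_term["role"] = "marketing"
store.remember("User is comparing Q4 campaign channels.")
print(store.build_prompt_context())
```

The decay function is what keeps last month’s Christmas purchases from haunting this month’s recommendations in Chris’s Amazon example.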
Greg Kihlström: 22:52:629 Yeah, I love that concept of context. In my head, at least, it flips, to your point, the bloated software that has a million options. It’s nice that it has those options, but they get in the way and they often confuse. To be able to customize the interface like that — it keeps all of the stuff and yet makes it relevant. That’s great.
Greg Kihlström: 23:22:201 Let’s talk a little more about the credibility gap, too. Again, it’s not a new issue that marketing throws some numbers in front of somebody and they’re skeptical. But how can AI help here? There are plenty of doomsday scenarios and negative ways of talking about it, but how can AI be explainable and actually make this challenge better and easier for marketers?
Chris Willis: 23:51:251 I think the explainability problem is going to be a difficult one to solve, to be frank. For marketers, at least from what I’ve observed, there’s definitely a lot of potential — and the models are already being used — for things like content generation. But that still requires real human judgment, if you’re going to do it right. The challenge, from a human behavior standpoint, is that oftentimes convenience wins out over everything: over credibility, over cost. If it’s easy, we tend to fall into those traps. That’s a real problem, because say you’re writing content. Not all content has to have the effort you’d put into writing some sort of masterpiece. However, the nature of the models is such that, as you mentioned, the output looks human. It looks real enough. But there is something happening under the hood that matters if you care about your brand voice and your tone. You can get some pretty good ideas and content generated, and in many cases you could potentially automate some of that. But the models themselves look at a distribution of language. I think we all know this, but just to level set: the way these models work is what we call next-word or next-token prediction. They’ve read everything on the internet and figured out the probabilities that this word will follow that word. Do that across trillions of examples and it starts to look a lot like human language — it can fool us, but it can also, if steered correctly, be super valuable. However, the models tend to chop off the tails of those distributions. So if you rework your content over and over in a model, it’s actually going to get simpler, more obvious, and more generic. This is something you can actually see happen. We’ve been running some tests to see whether it does the same thing with code, for example — do we want that to happen? So what does that mean for a marketer? It means your unique voice is at risk of being lost. That’s an example of where you need humans in the loop to say, you know what, this is more of our voice and tone, and maybe it’s not perfect — maybe it’s a little strange and a little crazy and a little interesting. That’s exactly what you want. Extrapolated, people call this the dead internet theory: these models generate all of this content and then learn from all of that content, and as they average out over time, knowledge just sort of disappears, and all of what’s very human disappears. That’s a big response, but from a practical standpoint, I’d say you have to be very resolute and protective of what makes you human: your judgment, your creativity, your taste, your ability to curate. And because of this convenience, I think there’s going to be a deskilling problem, where people offload some of their creative thinking — what makes them human — and become observers of creativity rather than creators.
So I think this also leads into a management, leadership, and cultural issue: what do we value? How do we do our work? How is work going to shift? Where is it okay to automate, where is good enough good enough, and where do we need to lean in? Those are the conversations that should be happening at many of these organizations.
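Chris doesn’t name the decoding mechanism behind “chopping off the tails,” but one common one is nucleus (top-p) truncation. Below is a toy sketch of that effect applied repeatedly to an invented word distribution; the vocabulary, probabilities, and cutoff are illustrative only, and real decoders vary.

```python
# Toy illustration of "chopping off the tails": nucleus (top-p) truncation
# keeps only the most probable tokens, so rare, distinctive word choices
# are pruned first. Iterating narrows the distribution toward the generic
# middle. All words and probabilities here are invented.
def truncate_top_p(dist: dict, top_p: float = 0.8) -> dict:
    ranked = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        mass += p
        if mass >= top_p:
            break  # smallest set of tokens whose mass reaches top_p
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}  # renormalize

dist = {"said": 0.50, "noted": 0.25, "remarked": 0.15,
        "quipped": 0.07, "thundered": 0.03}  # the tails carry the voice
for step in range(3):
    dist = truncate_top_p(dist)
    print(step, dist)  # the distinctive words vanish first
```

Run it and the rare, voice-carrying words (“quipped,” “thundered”) disappear first; iterate and the distribution keeps collapsing toward the most generic choice — a small model of the flattening of voice he warns about.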
Greg Kihlström: 27:46:757 Yeah. It’s kind of like: do we want AI working for us, or do we want to be working for AI? To your point, I’d much rather be driving, even if my design or my writing or whatever is inferior in some way. I’d rather be the one in charge than in, let’s call it, the Terminator scenario. But yeah.
Chris Willis: 28:11:038 To your point, writing is a great example — I know you’ve done a lot of it. Writing is more than just the act of putting words down on a computer or on paper. Writing is thinking, because writing that makes sense and has value has structure to it. It communicates some kind of knowledge that might otherwise just be floating around your head in a form we might call common sense. That’s the difference, and that’s why writing is hard. And by the way, even when it’s hard and maybe it’s not the best writing you’ve ever done, it’s made you a better writer. It’s made you a better thinker. So I worry, especially for a younger generation, that it’s going to be a lot easier to say, you know what, I don’t want to do that hard work — and not realize what you’re sacrificing when you do.
Greg Kihlström: 29:00:845 Yeah. Yeah.
Chris Willis: 29:01:456 As an aside, I actually have prompts in ChatGPT that, when I’m doing stuff, push me in a Socratic sort of way. They push me to recall information and they challenge what I’m saying. And I think that’s a good thing — I’ve never been able to just cut and paste anything out of a conversation. But it has been helpful for researching. Humans are great at judgment and pattern matching and intuition; we’re not great hard drives. We’re not great at storing and retrieving tons of information. The models, I think, are a great collaborator in that regard. So I find that super helpful. But yeah, I would say: make yourself work harder.
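Chris doesn’t share his exact wording, but a custom instruction in the spirit he describes might look something like the sketch below. Every line of it is a hypothetical illustration, not his actual prompt.

```python
# A hypothetical Socratic custom instruction in the spirit Chris describes.
# The wording is invented here; paste something like it into ChatGPT's
# custom instructions or a system prompt to get similar pushback.
SOCRATIC_INSTRUCTIONS = """You are a Socratic research partner, not a ghostwriter.
- Do not draft finished prose for me; respond with questions and critiques.
- Before answering a factual question, ask me to recall what I already know.
- Challenge my claims: ask for evidence, counterexamples, and hidden assumptions.
- If I ask you to rewrite my text, critique it instead and make me revise it.
"""
```

The point of a prompt like this is exactly what he notes: it makes the output impossible to cut and paste, so the thinking stays with the human.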
Greg Kihlström: 29:47:890 Yeah, I love that. Well, hey, as we wrap up here, a couple of things for you. One, just to get a sense of it: you’ve talked about Domo making investments in AI. What are you focused on right now, let’s say from a design perspective, in the months ahead? What are some of the challenges you’re trying to solve or think through?
Chris Willis: 30:18:551 Yeah, so the biggest challenges are some of the things we talked about: what is the infrastructure we need to fill the context gap? AI, for the foreseeable future, is probably not going to get much smarter, and I think it would be okay if it didn’t. You’ve already seen some of these companies pivot and say, okay, we’re changing our models a little bit. You can see all the benchmarks; they’re useful. But again, the models don’t know your business. They don’t know the knowledge you’ve already collected, or how you think about that business. What’s the language of your business? When you say revenue, what do you mean? There’s a lot there — that’s the semantic understanding. And the second part is really about how we can use these tools not just to improve findability and repeatability, but to make the right kinds of data find you, so that people can make better decisions. That’s always what we’ve been about. I don’t think you want to give away decision-making to your machines too soon. I know there’s obviously a big push for automation, but automation using AI without real oversight and monitoring is just an invitation for risky outcomes. People are starting to learn there’s more that’s needed. But if you can unlock that, there are going to be a lot of companies that grow and learn and maybe even shift what they do in very profound ways.
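One way to read the “when you say revenue, what do you mean?” point is as an argument for a semantic layer: pinning down business terms before a model answers questions about them. Here is a minimal sketch under invented definitions — none of these terms, owners, or functions come from Domo’s product.

```python
# A minimal sketch of a semantic layer: the "language of your business,"
# written down so a model answers in the company's terms instead of
# guessing. Every term and definition here is invented for illustration.
SEMANTIC_LAYER = {
    "revenue": {
        "definition": "Recognized revenue net of refunds, in USD, per fiscal month.",
        "owner": "finance",
    },
    "active customer": {
        "definition": "An account with at least one paid order in the last 90 days.",
        "owner": "marketing",
    },
}

def ground_question(question: str) -> str:
    """Prepend the agreed definitions of any business terms the question
    uses, so a downstream model works from shared meaning."""
    used = [term for term in SEMANTIC_LAYER if term in question.lower()]
    definitions = "\n".join(
        f"- {term} ({SEMANTIC_LAYER[term]['owner']}): "
        f"{SEMANTIC_LAYER[term]['definition']}"
        for term in used
    )
    return f"Definitions:\n{definitions}\n\nQuestion: {question}"

print(ground_question("Why did revenue miss the Q4 plan?"))
```

A grounding step like this is also auditable: when the CEO or legal team asks how the AI arrived at an answer, the definitions it worked from are on record.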
Greg Kihlström: 31:51:75 I love it. Well, Chris, thanks so much for joining today. One last question for you before we wrap up: what do you do to stay agile in your role, and how do you find a way to do it consistently?
Chris Willis: 32:02:831 I love to experiment. I love to try on different ideas and push them as far as I can until they break. And I’m lucky to work with really, really smart people who challenge me every single day. But yeah, I love this Zen idea of beginner’s mind: set aside everything you know, just observe, and see what it’s really telling you.