Expert Mode: Navigating the AI Trust Deficit Beyond the Black Box

This article is based on an interview with Domo Chief Design Officer Chris Willis on what to do about the trust deficit in enterprise AI, conducted by Greg Kihlström, AI and MarTech keynote speaker and host of The Agile Brand with Greg Kihlström podcast. Listen to the original episode here:

We find ourselves at a peculiar intersection. The generative AI wave has crashed onto the shores of the enterprise, and as marketing leaders, we are tasked with harnessing its power. The promise is immense: unprecedented speed in content creation, hyper-personalized campaign ideation, and data analysis at a scale previously unimaginable. Yet, alongside this excitement, a quiet anxiety is taking root. We are handing over critical functions to systems that are, by their very nature, probabilistic and opaque. The “black box” is no longer a niche concern for data scientists; it’s a boardroom-level issue. When your AI-driven campaign recommendation is challenged, the ability to merely point to a successful outcome is insufficient. You must be able to defend the logic, and that’s precisely where the trust deficit emerges.

This isn’t an entirely new problem, of course. Marketers have long fought for credibility, backing up strategies with charts and data that are often met with healthy C-suite skepticism. But the challenge is now amplified. The source of our insights is no longer just a human analyst who can be questioned, but a complex model trained on the entirety of the internet. To navigate this new landscape, we need to move beyond the surface-level hype. We need a framework for building an AI strategy grounded not in blind faith, but in transparency, defensibility, and a clear-eyed understanding of both the technology’s power and its profound limitations. Chris Willis, a founding member and the current Chief Design Officer and Futurist at Domo, has spent fifteen years at the forefront of this evolution, watching organizations grapple with turning data into action. His perspective offers a much-needed dose of realism and strategic guidance for leaders striving to build a truly intelligent, and trustworthy, marketing function.

The Paradigm Shift: From Prediction to Generation

For decades, AI in business was a relatively straightforward affair. It excelled at two primary tasks: classification and prediction. It could predict which customer was likely to churn or classify an inbound lead as hot or cold. These were powerful capabilities, to be sure, but they operated within defined parameters, augmenting existing workflows. The advent of Large Language Models (LLMs) represents a fundamental change in this relationship. We’ve moved from using AI as a predictive tool to employing it as a generative partner.

“Frankly, until this moment, all that AI has ever done is really classified or made a prediction… One of the fundamental differences is that these new language models started to generate… now you’re actually using [it] to create the campaign. So that’s a big shift.”

This is more than a simple feature update; it’s a redefinition of the creative process. As leaders, we must recognize that this shift introduces a new set of risks. A predictive model that is slightly off might lead to a suboptimal media spend; a generative model that is slightly off can create content that is factually incorrect, off-brand, or legally problematic. Willis points to what researchers call the “jagged edge of knowledge and applicability”—the unclear frontier where AI’s utility ends and human expertise must take over. An LLM can generate a serviceable outline for a marketing campaign in seconds. But it lacks the domain expertise, the taste, and the deep understanding of your brand’s unique voice to know that the outline, while coherent, is also profoundly generic and misses the mark. The challenge for us is not to simply adopt these tools, but to institutionalize the critical judgment required to know when a generated output is a brilliant starting point versus a dangerous misstep.

The Real Target: Your Organization and Your Culture

While much of the discourse around AI focuses on task automation and efficiency gains, its most profound impact may be on the very structure of our organizations. Technology, particularly one as versatile as generative AI, does not respect the neat lines of an org chart. When marketers can write Python scripts with a simple prompt and engineers can ideate on campaign slogans, traditional roles begin to blur. This isn’t a threat to be managed, but an opportunity to be seized—if we’re prepared for the cultural shift it demands.

“I think one of the most overlooked aspects and impacts of AI is that in many ways it’s coming after your organization and it’s coming after your culture… these are going to become doers turn into deciders. And that means the work is going to shift from… mostly doing all the tasks… to how do I leverage it and apply my judgment, my taste, my curative abilities and sense-making abilities to leverage that?”

Willis’s framing of the shift from “doers” to “deciders” is critical for every marketing leader. Our teams’ value will increasingly be defined not by their ability to execute a series of tasks, but by their ability to direct, critique, and refine the output of AI systems. The work becomes less about the manual labor of creation and more about the high-level cognitive tasks of strategy, curation, and quality control. This has massive implications for hiring, training, and performance management. Are we equipping our teams with the skills to be effective “deciders”? Are we creating a culture that rewards critical thinking and judgment over mere output and speed? The convenience of AI creates a gravitational pull towards what Willis calls a “deskilling problem,” where we offload not just tedious work, but the very thinking and practice that builds our expertise. The most effective leaders will be those who actively design new workflows that integrate AI as a collaborator, not a replacement, ensuring that human judgment remains firmly in the driver’s seat.

Beyond Personalization: The Untapped Potential of Context

For years, the holy grail of enterprise software has been personalization—the ability to customize an interface or workflow to a user’s needs. Yet, in practice, this has often resulted in bloated platforms with “mega menus” that attempt to be everything to everyone, ultimately serving no one perfectly. We’ve been forced to adapt to the software’s logic. LLMs, however, present an opportunity to invert this dynamic, moving from mere personalization to true individualization, where the software adapts in real-time to the user. The key to unlocking this potential is context.

“I think LLMs… open up the next level, which is an individualized experience… And so the gap that’s missing here that you have to provide is the context… if you can do that, then I think you do open up a tremendous amount of potential, just by taking, you know, maybe what was, you know, sort of a software with the UI that kind of worked well for everybody to, wait a minute, I only need one little aspect of that software.”

This is a forward-thinking but immensely practical concept for any leader managing a complex MarTech stack. An AI that understands your role, your current projects, the language of your business (“When you say revenue, what do you mean?”), and your recent activity can radically simplify your team’s interaction with technology. Willis refers to the data generated by our work as “proprietary exhaust”—the digital trails of our projects, queries, and collaborations. By harnessing this context, AI can proactively surface the specific data visualization, content template, or analytics report a team member needs at that exact moment, hiding the 99% of the platform that is irrelevant to their task. Filling this “context gap” is the next great infrastructure challenge. It’s about building systems with memory and semantic understanding of your business, transforming our tools from passive repositories into active, intelligent partners that reduce friction and accelerate decision-making.

Winning the War Against Convenience

Ultimately, the greatest threat to leveraging AI effectively may not be technical, but human. The sheer convenience of generative AI is its most seductive and dangerous quality. In a world of perpetual deadlines and resource constraints, the temptation to accept “good enough” output from a machine is immense. But these models, by design, gravitate toward the mean: they are engineered to predict the most probable next word, a process that inherently sands off the interesting, quirky, and unique edges of language and ideas.

“The challenge here from I would say a human behavior standpoint is that often times convenience wins out over everything… the models tend to chop off the tails of those distributions. So if you rework your content over and over in a model, it’s actually going to get simpler… your unique voice is at risk of being lost in that. So I think that’s an example of where you need to have humans in there and say, well, you know what, this is more of our voice and tone…”

This is a stark warning. Your brand’s voice is a priceless, hard-won asset. Unchecked reliance on AI for content generation risks diluting it into a sea of sameness, an enterprise version of what some call the “dead internet theory.” Leadership requires establishing a clear philosophy and guardrails for AI use. This isn’t about prohibition; it’s about discipline. It’s about defining which tasks are suitable for automation and where human creativity and strategic intent are non-negotiable. The goal should be to use AI as a Socratic partner—a tool that challenges our thinking and expands our research capabilities, not one that does our thinking for us. As Willis notes, writing isn’t just about putting words on a page; it’s the very act of structuring thought. Sacrificing that hard work on the altar of convenience is a trade we cannot afford to make.

The trust deficit in AI is not a problem that will be solved by a more advanced algorithm or a slicker user interface. It is an organizational and cultural challenge that demands our full attention as leaders. The path forward requires us to build a smarter partnership with these powerful tools, focusing our efforts on filling the “context gap” to make them truly useful, redesigning our teams to elevate human judgment, and maintaining the discipline to resist the siren song of convenience. The goal is not simply to do things faster, but to build a more intelligent, adaptable, and ultimately more human-centric marketing organization.

The leaders who thrive in this new era will be those who see AI not as a magical solution, but as a powerful catalyst for rethinking how we work, what we value, and how we create. They will build the necessary infrastructure—both technical and cultural—to ensure that as our tools become more powerful, our own judgment becomes sharper. The challenge is not to simply adopt AI, but to do so with the foresight and rigor required to build unshakable trust—in the technology, in the outcomes it produces, and, most importantly, in our own capacity to lead through profound change.

Posted by Agile Brand Guide

Spreading knowledge, one marketing acronym at a time. Content dedicated to all things marketing technology and CX.