What if the multi-million dollar AI initiative you’re championing is being silently sabotaged, not by a competitor, but by your own data infrastructure?
Agility requires more than just fast decision-making; it requires a data foundation that can deliver insights at the speed of business, without the traditional delays of moving and duplicating information. This ability to access and act on real-time, comprehensive data is what separates brands that lead from those that follow.
Today, we’re going to talk about the silent killer of AI projects.
While everyone focuses on the glamour of AI models and algorithms, the reality is that most enterprise initiatives stall or fail at the data layer. We’ll be discussing why even modern data warehouses can create new roadblocks, and how a different approach to data management can make your enterprise data truly AI-ready.
To help me discuss this topic, I’d like to welcome Ravi Shankar, Senior Vice President and Chief Marketing Officer at Denodo.
About Ravi Shankar
Ravi Shankar is the Senior Vice President and Chief Marketing Officer at Denodo. He is responsible for Denodo’s global marketing efforts, including product marketing, demand generation, field marketing, communications, social marketing, customer advocacy, partner marketing, branding, and solutions marketing.
Ravi Shankar on LinkedIn: https://www.linkedin.com/in/ravishankardevaraj/
Resources
Denodo lakehouse ROI analyst report: https://www.denodo.com/en/document/analyst-report/denodo-lakehouse-roi
This episode is brought to you by Denodo. Powered by logical data management, the Denodo Platform accelerates data integration, management, and delivery for all your business’s data needs.
Catch the future of e-commerce at eTail Palm Springs, Feb 23-26 in Palm Springs, CA. Go here for more details: https://etailwest.wbresearch.com/
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don’t miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand is produced by Missing Link—a Latina-owned strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
Transcript
Greg Kihlstrom (00:00)
This episode is brought to you by Denodo. Powered by logical data management, the Denodo platform accelerates data integration, management, and delivery for all your business’s data needs. What if the multi-million dollar AI initiative you’re championing is being silently sabotaged? Not by a competitor, but by your own data infrastructure. Agility requires more than just fast decision-making. It requires a data foundation that can deliver insights at the speed of business without the traditional delays of moving and duplicating information. This ability to access and act on real-time, comprehensive data is what separates brands that lead from those that follow. Today, we’re going to talk about the silent killer of AI projects. While everyone focuses on the glamour of AI models and algorithms, the reality is that most enterprise initiatives stall or fail at the data layer. We’re going to be discussing why even modern data warehouses can create new roadblocks and how a different approach to data management can make your enterprise data truly AI-ready. To help me discuss this topic, I’d like to welcome Ravi Shankar, Senior Vice President and Chief Marketing Officer at Denodo. Ravi, welcome to the show.
Ravi Shankar (01:09)
Greg, thanks for having me on.
Greg Kihlstrom (01:11)
Yeah, absolutely. Looking forward to talking; definitely a timely topic here, so definitely looking forward to diving in. But before we do, why don’t you give a little background on yourself and your role at Denodo.
Ravi Shankar (01:21)
Sure. So I have been the CMO for Denodo for the last 10-plus years. And my experience has been all in the data field. Prior to Denodo, I was with Informatica handling master data management. And prior to that, I was with Oracle in a number of different capacities, starting on the technical side as a developer, and then handling all the marketing and the product management functions as well. So overall, 30-plus years just in the data industry.
Greg Kihlstrom (01:49)
Yeah, yeah. And so I know I gave a brief intro and at the very beginning, but for the listeners that might not be as familiar, could you tell us a little bit about Denodo and what’s the core problems that you solve and who are the typical customers that you’re helping?
Ravi Shankar (02:04)
So Denodo is a global provider of data integration and data management. We are actually recognized as a leader in the space by analysts like Gartner, Forrester, and so on. We are a mid-sized firm and we have been in business for the last 25 years. We have customers across 40-plus countries, and many of them are very large companies, primarily in the Fortune 500, Global 2000, and so on. And the company itself is present in over 20 different countries, and we have hundreds of employees.
And many of these companies use our technology, which is a logical approach to data integration, to gain a unified view of their information without having to physically move it all into a single repository. I guess we will be talking about that a little bit more. And this enables our customers to get the data to their consumers much faster, with fewer resources and at a lower cost. So Denodo has really made a mark in the market with this unique technology.
Greg Kihlstrom (03:06)
Great, great. So let’s dive in here. We’re talking about a few things today, but I want to start with this concept of the strategic data disconnect. Many leaders that I talk to have invested heavily in modern data platforms like Snowflake or Databricks, believing that they’ve solved their data centralization problem. Yet you argue that this often creates new silos and even delays. Can you unpack that paradox for us?
Ravi Shankar (03:35)
Sure. So centralization is very important from a business user perspective. If I’m a business user, I would like to go to one place to get all my data; that makes it easier for me to find the information. Now, the data technology companies have been solving this problem for the last 40-plus years. If you go back to the 1980s, databases from the likes of IBM and Oracle started coming up. They provided a repository to store this information centrally and make it available. But the success of that technology grew the number of databases. Then in the 90s came data warehousing, from companies like Teradata and so on, to pull the data from multiple different databases into a data warehouse. But that was only structured data. In the millennium, unstructured data started coming in, and then we needed to have a data lake.
And that’s where companies like Cloudera, Hortonworks, and MapR came into the picture. And soon, you know what happened to those companies. Eventually, that evolved into the lakehouse in the 2010s with Snowflake, Databricks, and so on. So you can see that centralization is key, but its success each time has started a sprawling effect; centralization and decentralization go back and forth. Now, in all these cases, these repositories are all very purpose-built, for analytical operations and so on. So let me tell you the paradox that you asked about. If there is only one platform that you need, as advertised by these vendors, to store all the data, why is it that companies have Snowflake, Databricks, Oracle, and Teradata? They have multiple of these technologies. That’s the paradox. In fact, one of my friends who works on the IT side at JP Morgan Chase used to say they have a ‘no vendor left behind’ policy.
So they would actually buy all the technologies under the sun. That is the state of the market. Now, coming to where the problem lies: in terms of consumption, whether it’s marketing, sales, and so on, IT provisions a repository just for that group, for that particular purpose, and makes it available to them. When they do that for marketing, sales, finance, and so on, they create multiple silos. So to sum it up, centralization is key from a consumption perspective, but the problem is that in the way IT makes the data available, they create these purpose-built repositories, and hence these silos across the board.
Greg Kihlstrom (06:05)
Yeah, yeah. And it sounds like they’re just bigger silos. It’s not really centralized; they’re just bigger, bigger ones, basically.
Ravi Shankar (06:13)
Yeah, for a specific purpose. There are operational ones, there are analytical ones. So there are multiple different purposes. You cannot use the same thing for everything. That’s why you create something that is very specific for that use case.
Greg Kihlstrom (06:26)
Yeah, yeah. And so from a chief marketing officer’s perspective, you know, some may see this as just an IT issue, but it’s not, right? How does this data disconnect manifest in day-to-day marketing and CX initiatives? Where do marketing leaders maybe feel the pain most acutely?
Ravi Shankar (06:47)
Well, the pain is actually felt at the point of business use. So as a business user in marketing, I would like to understand campaign performance, so that I can see how a campaign I’ve launched is performing and fine-tune it. Okay, so we have a campaign that is running. We just launched a new book with O’Reilly on the topic of logical data management, which is topical for this podcast. And I need to aggregate the information across the CRM, which is where all my leads and contacts are loaded; my marketing automation system, which is the one that executes the campaigns and sends the emails; an account-based marketing system, because we are targeting very specific accounts; and then my website, where people land to download this book, and so on. So I need to integrate the information across all these different sources and analyze it to understand the efficacy across multiple different spectrums.
I need to know the company sizes. I want to know the verticals that they are in, the persona who’s actually downloading the book, or even the regions, since I’m executing a worldwide campaign. So all this data is extracted from all these systems, and it needs to be transformed into the target format and then loaded into a lakehouse, which all takes too much time. So by the time all this data is loaded and the analysis is done and I gain the insights,
the source systems, where people are still downloading, have moved on. So this lakehouse where I store this information has already become out of sync with the source, and the information that I get is not relevant anymore. So consider this one example I will give you. Let’s say I’m a retailer running a day-long promotion, kind of like Macy’s. I want to understand whether this campaign is working in the first half of the day, so that I can fine-tune it and adjust it for the second half of the day to maximize my results. But the problem is that when you use this approach of moving the data to a central place and analyzing it, it’s not going to be ready in time for me to do that real-time campaign check-in, and ultimately both the retailer and the consumers lose. So that’s the problem.
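To make the lag Ravi describes concrete, here is a minimal sketch of that extract-and-load pattern in Python; the source systems, table names, and rows are all hypothetical stand-ins, not Denodo’s API or any real schema.

```python
# A minimal sketch of the extract-and-load pattern, and why the central
# copy goes stale. All system names, tables, and rows are hypothetical.
import sqlite3
import time

def extract_and_load(sources, warehouse):
    """Copy every row from every source into one central table."""
    warehouse.execute(
        "CREATE TABLE IF NOT EXISTS downloads (source TEXT, lead TEXT, region TEXT)")
    for name, rows in sources.items():
        warehouse.executemany(
            "INSERT INTO downloads VALUES (?, ?, ?)",
            [(name, lead, region) for lead, region in rows])
    warehouse.commit()
    return time.time()  # downloads after this moment are invisible to analysts

# Hypothetical snapshots of the four systems at load time.
sources = {
    "crm":            [("alice", "NA")],
    "marketing_auto": [("bob", "EMEA")],
    "abm":            [("carol", "APAC")],
    "website":        [("dave", "NA")],
}
warehouse = sqlite3.connect(":memory:")
loaded_at = extract_and_load(sources, warehouse)

# Meanwhile the website keeps collecting downloads; the copy is now stale.
sources["website"].append(("erin", "EMEA"))
print(warehouse.execute("SELECT COUNT(*) FROM downloads").fetchone()[0])  # 4, not 5
```

The mid-day campaign check-in Ravi mentions would run against the four-row copy, not the five-row reality.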
Greg Kihlstrom (09:07)
So, you know, we’ve talked about some of the challenges and the needs. Let’s talk about solutions here as well. A solution that you propose is a logical data layer, and that might be a new term for some of the listeners out there. So why don’t we start with: what is it, and how does it differ from the traditional approach of ETL and physical data consolidation, some of the things that you referred to before?
Ravi Shankar (09:34)
Right. Let’s start with a simple analogy to understand the difference between physical and logical. What we are doing today is logical, in the sense that I’m sitting here in Palo Alto, you’re sitting somewhere else in the United States, and we have a technology that brings us both together to record this podcast. Physical is where you and I have to fly to a central place. Either I fly to your place or you fly to mine, we sit in a room, and then we do this work. So the logical approach allows the data to reside wherever it is, in the databases, data warehouses, lakehouses, and so on. And it provides a unified view of that information across these sources without having to pull all the data into yet another repository. We kind of say it is better to connect to the data wherever it is than to collect the data into a central repository.
The way it differs from the traditional approach is that the traditional is what we have been doing for the last 30-plus years: extracting the data from the source, moving it into a destination system that has a different format, and applying some transformation in the process. With data virtualization technology and logical data management, you leave the data where it is. You have this unified view, and the consumers can come to this one place to get the data they need without having to wait for the data to be loaded, transformed, and consumed from the central place.
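By contrast, here is a sketch of the logical approach under the same hypothetical setup as the earlier example: sources are registered once and queried live, so every read reflects their current state. The connector interface is illustrative, not Denodo’s actual platform.

```python
# A minimal sketch of a logical data layer: connect to sources, don't
# collect their rows. Illustrative only; not Denodo's actual interface.
from typing import Callable

class LogicalView:
    """A unified view that queries each registered source live at read time."""
    def __init__(self) -> None:
        self.connectors: dict[str, Callable[[], list]] = {}

    def register(self, name: str, fetch: Callable[[], list]) -> None:
        self.connectors[name] = fetch  # store a connection, not a copy of the data

    def query(self) -> list:
        # Each read hits the sources as they are right now: no staleness window.
        return [(name, *row) for name, fetch in self.connectors.items()
                for row in fetch()]

crm_rows = [("alice", "NA")]
web_rows = [("dave", "NA")]
view = LogicalView()
view.register("crm", lambda: crm_rows)
view.register("website", lambda: web_rows)

web_rows.append(("erin", "EMEA"))  # a new download lands in the source system
print(view.query())                # the new row shows up immediately
```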
Greg Kihlstrom (11:05)
Yeah, yeah. And to your earlier point, this is the challenge however small or big the silos are: enterprises are not going to consolidate all of their marketing, financial, and operational data into one single place. That’s simply not going to happen. And so this solves for that. Some things may get consolidated, like marketing data all landing on a single platform, but not necessarily across the org. So, at least from my standpoint, this solves for that ongoing challenge that businesses are really never going to get away from, right?
Ravi Shankar (11:44)
Correct. So you have an analytical use case, for which you need to analyze the data. That is one purpose, and you move the data into a repository to do that. And yet a lot of it is operational: I’m conducting all these different processes, in pharmaceutical manufacturing and so on. For those, a separate kind of data and repository is needed. So it is never going to be one for all.
Greg Kihlstrom (12:09)
Yeah, yeah. And so let’s talk about some results then as well, measuring the impact and return on investment here. A recent white paper showed companies using this approach achieved a 10-times acceleration in AI rollouts and a 75% reduction in integration time. Those are pretty compelling numbers. What are the one or two key changes in process or capability that unlock such dramatic improvements?
Ravi Shankar (12:39)
So in fact, we just rolled out a paper, done by an analyst, in which he uncovered this particular information after talking to many of our customers and also to companies that were not using Denodo but were using lakehouses, to understand the difference between using Denodo and not using Denodo. What he found is that with this logical integration, as opposed to physical integration, customers were able to save the time and the effort involved. So let’s consider this. When you actually move the data, it takes time to write the scripts and load it in there. You need engineers to do that particular work, and it takes time to write it, test it, deploy it, and maintain it. The logical approach, by contrast, goes to the sources, runs the queries in those systems, and brings back just the results.
So I will give you this analogy. Let’s say that you say, Ravi, please get me a cup of water. Okay. And there’s a pitcher right about like 10 feet away. So I have two options. Either I can take a cup, go to the pitcher, pour the water in it and just bring back that cup and give it to you. Or I can carry this pitcher, which is maybe say 10, 20 gallons. And then I bring the entire thing to you and then I pour the water into your cup.
Now, logical data integration is like taking your cup to the source, because all you need is just that one cup of water. You’re not going to drink 20 gallons of water. So it is much faster for me to go, get that one cup, and come back and give it to you, rather than going there, lifting this 20-gallon pitcher, and bringing it all the way back to you. That is the difference here. By bringing only the data that is needed, we are able to do it much faster.
And that actually helps these companies unlock, first of all, the acceleration that you talked about, and also the reduced integration time. So data virtualization enables that, as opposed to physical data integration.
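A rough sketch of the ‘one cup, not the pitcher’ idea, commonly called query pushdown: the aggregation runs inside each source database and only the small result set travels back. The schemas are hypothetical, and in-memory sqlite stands in for the real source systems.

```python
# A rough sketch of query pushdown: run the GROUP BY inside each source
# and move only the per-region totals, never the raw tables. Schemas are
# hypothetical; in-memory sqlite stands in for the real source systems.
import sqlite3

def make_source(rows):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE downloads (region TEXT, n INTEGER)")
    db.executemany("INSERT INTO downloads VALUES (?, ?)", rows)
    return db

crm = make_source([("NA", 120), ("EMEA", 80)])
web = make_source([("NA", 300), ("APAC", 50)])

totals = {}
for source in (crm, web):
    # The aggregation executes inside the source; only small results return.
    for region, n in source.execute(
            "SELECT region, SUM(n) FROM downloads GROUP BY region"):
        totals[region] = totals.get(region, 0) + n

print(totals)  # {'NA': 420, 'EMEA': 80, 'APAC': 50} -- the cup, not the pitcher
```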
Greg Kihlstrom (14:47)
Yeah, and some other things from a cost perspective. You know, the study mentioned $3.6 million in savings, with an ROI in under seven months, and certainly impressive cost savings there. I think there’s another component to this as well, which is, from a marketing perspective, what does getting those campaign-ready insights in days instead of months, in some cases, actually unlock? How does that translate into business value and revenue gains as well?
Ravi Shankar (15:20)
Right. So the business value, first of all, is tangible in the sense that if you’re getting the data to the consumer much faster, then they can make faster decisions. That results in increased revenue and customer satisfaction, it decreases costs, and so on. So in the retailer example that I talked about, let’s say you’re able to do a mid-day analysis and find that not that many people are actually coming into the store. You can increase the discount and have more people come in and avail themselves of it in the second half of the day. That way the consumers benefit from it, and the retailer benefits from it as well. I’ll give you another analogy. Let’s say that some person is having a heart attack or something and you need to quickly administer medication. There’s a pharmacy down the road, maybe a couple of miles away.
You have two options. Either you can take a bicycle, bike down there, pick up the medication, and come back; God forbid, the patient needs to still be alive to be able to receive that medication. Or you can take a car, quickly go down in a minute, pick up the medication, come back in a minute, and administer it. So logical data management, or integration, is like using the car. It’s much faster at going and getting the information and coming back and delivering it.
That’s why we see the ROI being delivered in a very short amount of time. And it is much less costly, because think about the number of people you need, the storage you need, the amount of time it takes; all these things multiply. Without all that, this is the savings: about $3.6 million that they were able to achieve, with the ROI realized in under seven months.
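As a back-of-the-envelope check on those figures: the $3.6 million savings number comes from the study, but treating it as an annual figure and assuming a platform cost are purely illustrative assumptions, not numbers from the report.

```python
# Back-of-the-envelope payback math. The $3.6M figure is from the study;
# treating it as annual savings and the $2M platform cost are hypothetical
# assumptions for illustration only.
annual_savings = 3_600_000
monthly_savings = annual_savings / 12        # $300,000 per month
assumed_platform_cost = 2_000_000            # hypothetical
payback_months = assumed_platform_cost / monthly_savings
print(f"Payback in about {payback_months:.1f} months")  # ~6.7, i.e. under seven
```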
Greg Kihlstrom (17:05)
Yeah, yeah. And certainly another component to all of this is that, you know, change is the only constant, and so things continue to evolve. AI continues to evolve, with new models and new data sources emerging. How does a logical data management approach help to, you know, future-proof an organization’s data strategy against the next wave of disruption, which we may not even know what it is yet?
Ravi Shankar (17:34)
Yeah, so to me, this concept of data abstraction is key, in the sense that you have two entities. You have the business users that are consuming the information, and then you have IT, which is provisioning the information to the business users to consume. Now, both of them move at different speeds. The business moves much faster: there is an immediate market shift, they want to react to it, they want the data immediately. IT cannot react that fast. It takes time to provision new systems, put the data in, transform it, and then provide it. So they both operate at a different level. A data abstraction layer is a middle layer in between that disintermediates the business users from the IT. That way, the business users come to this data abstraction layer and consume the data from there, and at the same time, IT can move at the speed they need to under the cover of the data abstraction, slowly changing out the systems, modernizing, and so on. So this logical data management is that middle data abstraction layer. This is the one technology that I’ve seen in my 30-plus years of experience that provides true data abstraction; there’s no other technology that actually does that. So this enables IT to take the time to refresh and modernize the systems, yet it enables the business users to move much faster, and hence it future-proofs. Business users don’t care where the data comes from, and IT can take the time to modernize, to the cloud, to the lakehouse, or anything like that.
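Here is a minimal sketch of that disintermediation: consumers query a stable logical name while IT re-points the physical backend underneath, and the consumer’s call never changes. All view and backend names are hypothetical.

```python
# A minimal sketch of a data abstraction layer: business users query a
# stable view name; IT swaps the backend without consumers changing code.
# Names and backends are hypothetical.
from typing import Callable

class AbstractionLayer:
    def __init__(self) -> None:
        self._backends: dict[str, Callable[[], list]] = {}

    def publish(self, view_name: str, backend: Callable[[], list]) -> None:
        self._backends[view_name] = backend  # IT side: re-point at any time

    def get(self, view_name: str) -> list:
        return self._backends[view_name]()   # business side: name never changes

layer = AbstractionLayer()
layer.publish("campaign_performance",
              lambda: [{"region": "NA", "leads": 120}])  # old warehouse
print(layer.get("campaign_performance"))                 # business user's call

# IT migrates to a lakehouse over the weekend; the consumer's call is untouched.
layer.publish("campaign_performance",
              lambda: [{"region": "NA", "leads": 125}])  # new lakehouse
print(layer.get("campaign_performance"))                 # same call, new backend
```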
Greg Kihlstrom (19:04)
Yeah. So for those marketing and CX leaders listening to this show for whom this resonates, and it certainly resonates with a lot of what I hear in my consulting work, what’s a practical first step that they should take to start a conversation about making their data truly AI-ready, as well as enabling that logical data management?
Ravi Shankar (19:30)
Right. So first of all, this AI is a big kind of storm that is upending everything in the industry. So I will start with this particular story that I heard. I was talking to the CIO of an automotive manufacturing company. He was saying that right now there are 50 different teams within the company knocking on the doors to get the data they need to provision AI for productivity improvement, all the way across the shop floor,
finance, marketing, sales, and so on. So you need to be able to provision the data very quickly to these people, but you cannot lose control of the security of the data. You need to provision it in a governed way. That’s why this logical approach makes it very easy and flexible for companies like this to very quickly provision the data and make it available to the business users in a governed manner, because it has its own security built in. The flexibility it gives you is like carrying a cell phone. You know, many years ago, before the cell phone became possible, we all had wired phones. We had to go to a certain place to pick up the phone, and if you had to provide a line, you needed to draw it all the way from the street into the house and set the phone in a particular place. Now we carry our phones everywhere. So that’s the flexibility it provides. So that’s why,
with logical data management, this data is made available to the AI teams much faster, so that they can start realizing the productivity gains without having to be constrained waiting for IT to provide the data to them.
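And a rough sketch of the governed-provisioning point: the logical layer applies one access policy before any team sees a row, so data moves fast without losing control. The roles, fields, and records here are invented for illustration.

```python
# A rough sketch of governed provisioning: one policy in the logical layer
# filters what each AI team can see before any row leaves it. Roles,
# fields, and records are invented for illustration.
RECORDS = [
    {"customer": "Acme", "ssn": "123-45-6789", "revenue": 10_000},
    {"customer": "Brio", "ssn": "987-65-4321", "revenue": 5_000},
]

POLICY = {  # which columns each role may see
    "finance_ai":   {"customer", "revenue", "ssn"},
    "marketing_ai": {"customer", "revenue"},  # PII never leaves the layer
}

def provision(role):
    allowed = POLICY.get(role, set())
    return [{k: v for k, v in row.items() if k in allowed} for row in RECORDS]

print(provision("marketing_ai"))  # no 'ssn' field in the result
```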
Greg Kihlstrom (21:05)
Yeah, well, this has been great. Couple last things as we wrap up here. So, you know, if we were having this interview a year from now, what’s one thing that we’d definitely be talking about?
Ravi Shankar (21:17)
We will definitely be talking about the companies that are basically wasting money by not using the logical approach. You will see that this approach really catches on. We have a number of customers that are actually using it for multiple different purposes, and now with AI coming on board, they are able to turn those projects around much faster. So that is what we will be talking about.
Greg Kihlstrom (21:39)
Yeah, love it. Well, Ravi, thanks so much. Really, really appreciate you joining. Last question for you, I like to ask everybody here, what do you do to stay agile in your role? And how do you find a way to do it consistently?
Ravi Shankar (21:51)
That’s a good question. It goes right along with your brand. So I’m a news junkie. I read a lot, and I listen to a lot of news stories, which are flooded with AI these days. I also read and listen to analyst reports. I have a long commute, so I kind of turn that on. And nowadays, AI can take a text and read it, so it reads out to me whatever article they write. I watch some YouTube videos on this and listen to podcasts like these.