When an agentic AI makes a decision that costs your company millions in a lawsuit, who do you fire?
Agility requires both the speed to adopt new technologies like AI agents and the foresight to build the guardrails that prevent that speed from driving your brand off a cliff.
Today, we’re going to talk about the hidden crisis brewing behind the AI revolution: the accountability gap. As companies race to replace roles with autonomous AI agents, a critical question is being ignored: when an agent makes a biased, unethical, or simply wrong decision that harms a customer or an employee, who is actually responsible? This isn’t a future problem; it’s happening right now, and it poses a massive threat to brand trust, customer relationships, and legal standing.
To help me discuss this topic, I’d like to welcome Albert Castellana, Co-Founder & CEO at GenLayer.
About Albert Castellana
Albert Castellana is Co-Founder & CEO at GenLayer. A serial crypto entrepreneur since 2013, Albert has co-founded and led major blockchain projects including Radix DLT, NEM.io, BadgerDAO, and StakeHound, reaching over $25B in combined market value. Albert brings extensive experience in decentralized finance and governance. Albert’s leadership is driven by firsthand insight into how existing legal systems fall short for digital assets, fueling his passion to create a trustless, global arbitration layer.
Albert Castellana on LinkedIn: https://www.linkedin.com/in/acastellana/

Resources
GenLayer: https://www.genlayer.com
Take your personal data back with Incogni! Use code AGILE at the link below and get 60% off an annual plan: https://incogni.com/agile
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow
Drive your customers to new horizons at the premier retail event of the year for Retail and Brand marketers. Learn more at CRMC 2026, June 1-3. https://www.thecrmc.com/
Enjoyed the show? Tell us more and give us a rating so others can find the show: https://ratethispodcast.com/agile
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don’t miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand is produced by Missing Link, a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
Transcript
[00:50:07] Greg Kihlstrom: When an agent makes a decision that costs your company millions in a lawsuit, who do you fire? Agility requires both the speed to adopt new technologies like AI agents and the foresight to build the guardrails that prevent that speed from driving your brand off a cliff. Today, we’re going to talk about the hidden crisis brewing behind the AI revolution: the accountability gap. As companies race to replace roles with autonomous AI agents, a critical question is being ignored. When an agent makes a biased, unethical, or simply wrong decision that harms a customer or an employee, who’s actually responsible? This isn’t a future problem. It’s happening right now, and it poses a massive threat to brand trust, customer relationships, and legal standing. To help me discuss this topic, I’d like to welcome Albert Castellana, co-founder and CEO at GenLayer. Albert, welcome to the show.
[01:41:26] Albert Castellana: Hey, Greg, thanks for having me here.
[01:43:08] Greg Kihlstrom: Yeah, I’m looking forward to talking about this. I think it’s an often overlooked topic, but definitely a timely and important one. Before we dive in, though, why don’t you give a little background on yourself and your role at GenLayer?
[01:56:49] Albert Castellana: Yeah, so my background is in computer science. I’ve been in crypto and AI for about a decade now; I actually started in crypto in 2013, and before that I managed an algorithmic trading fund. Then, essentially since the rise of ChatGPT in November 2022, I was fascinated by the idea of these agents being able to write code and start to interact with each other. Both things essentially turned into the question: how do we build a trustless decision-making system? Because right now we’re so used to having these AIs and these algorithms just make decisions on our behalf. You don’t know what appears on YouTube; you don’t know what appears on Instagram. The feed is absolutely up to the AIs. Whether you get hired or not, whether you get an insurance payment or not, everything is managed by the algorithm, and it’s only going to get worse. So we’re trying to figure out how to make it a more trustless system, so you don’t just have to rely on what the AIs are doing.
[02:56:22] Greg Kihlstrom: Yeah, love it. Well, let’s dive in then. I want to start by really defining this accountability gap. You’ve talked about the need for systemic trust in the agentic economy. For a marketing leader at a Fortune 500 company, what does the lack of this trust look like in practice? Going beyond HR examples, what does this look like for accountability that could impact marketing, sales, and customer service operations?
[03:29:18] Albert Castellana: Right. So we’re all racing toward using AI. We’re all still, I think, baffled by how powerful these things are, and we’re trying to really understand how to use this technology. I think that’s the mandate right now for many, many brands. But the real question is: what does it mean? How do you implement it? How do you really make it a reality? And the problem with this that few people are talking about is, well, who’s in control? Who’s making the decisions? You’re used to tools that just do your bidding, but this is a bit of a different system. You’re basically not in control anymore. You’re saying, okay, I want this agent to represent my brand in front of the public, but I don’t even know how it’s thinking, what it’s doing, what promises it’s making. Did it hallucinate something? Did it deny a refund that really should have been granted? Can I get sued because of that? Am I responsible for that? And we’re already in a system that’s struggling from a legal perspective. The whole legal system is struggling with the sheer number of people in the world; there are some 5 billion people who don’t have access to justice. The problem is, when these agents come online, they’re going to be suing each other just for fun. They’re going to be patent trolling. They’re going to be probing the edges of any contract you put in front of them to see how they can benefit, sometimes breaking the law, sometimes not breaking the law, but for sure stressing the system.

A system that is already under stress. So to me, that’s the systemic trust issue. Trust is kind of breaking apart. It’s been breaking apart for quite a while, and it will only get worse the moment you have, whatever, 10 billion agents out there, 100 billion agents out there, transacting with each other at light speed for bigger or smaller amounts. At the beginning it will be smaller amounts; eventually it will be huge amounts. It will be very difficult to understand what is actually happening with your business. You’ve got these 1,000 agents dealing with other people outside, and it’s going to be quite tricky. So, yeah, that’s what we’re looking at: how do you reduce this risk? How do you improve churn? How do you avoid the brand damage that can come just from hallucinating things?
[05:51:79] Greg Kihlstrom: Yeah, yeah. Well, for the humans who are overseeing these things: you mentioned this briefly, but I want to dive into it a little bit more. We’re all used to using software and tools, as you mentioned. But beyond the legal and other implications you raised, this shift is also a mindset shift for the humans involved. Instead of “I open this application and I do this thing, whatever it is,” it’s now “I’m overseeing autonomous or semi-autonomous agents doing this thing.” How does a leader start to teach their team to think in those terms? Before, I performed a defined and limited action; now it could be unbounded, or at least relatively unbounded.
[06:50:90] Albert Castellana: I would start with: what’s the difference between a leader and, let’s say, the people being led?
[06:57:82] Greg Kihlstrom: Yeah.
[06:58:39] Albert Castellana: I think humans are going to have a job long-term, which is liability, responsibility. You are using an agent, yes? You spin up this agent that’s going to do something for you. It’s not like a tool; tools just execute, and you can oversee a tool because you’re actually running it. It’s doing something you asked it to do. Now, these agents are going to do their decision-making on their own, and they’re going to do some things wrong. So if you’re the person who puts that system in place, that agent in place, and you don’t put up the borders so that it doesn’t create complexities and problems, well, you’re going to be the one responsible. Long-term, say 10 years down the line, I think humans won’t really have a baseline job anymore, but they will have their liability, their responsibility that things go well. That’s what leaders normally do: take on responsibility and manage a lot of people. I think what’s going to happen is that everybody becomes a leader. Everybody’s main job will be: I’m responsible for this thing happening, and I’ve got this team of agents doing it. And then the question for me would be: okay, how do you create the railway? How do you create the borders, the constraints, so that things work well? How are the agents accountable? How can you track what they’re doing? How do you make sure they don’t do something off the rails? I think that’s what most people should be thinking about right now as well.

We’re moving really quickly toward this adoption. I’m the number one in doing that, but at the same time, what type of limitations am I putting into the system to make sure it doesn’t break apart?
[08:40:87] Greg Kihlstrom: Yeah, yeah. Well, I think that’s a good segue. I want to talk a bit about your solution to at least some of this, with GenLayer and something you call a global synthetic jurisdiction. I’d love for you to unpack what that means and break it down in practical terms.
[09:02:44] Albert Castellana: So, you can think of GenLayer as a digital judge, only it’s not just one entity, one AI, but rather many different AIs that anybody can be part of, together reaching a consensus on how to resolve a contract or how to resolve a dispute. For those who are into crypto: Bitcoin is basically trustless money. You don’t need a bank in order to transact; you don’t need a third party to move money around. Then there’s Ethereum, this world computer that can run applications in a trustless manner. And what GenLayer is, basically, is trustless decision-making. How do you enable decision-making? Normally you would go to a judge: you and I have a dispute, we go to a judge. We expect the judge to be fair, and essentially we’re trusting him or her to be fair between us. What Bitcoin and Ethereum do is replace that trust with technology. GenLayer does the same: we’re removing the trust in making decisions and instead putting technology in the mix. What that means is that you can then have disputes and contracts resolved at, well, AI speed, because it’s a bunch of different AI models, say 1,000 of them, reaching an agreement by majority vote on whether some dispute or some agreement should be resolved one way or the other. What that means, remarkably, is that now you have this railway, these constraints.

You have a system that allows you to create a contract that will be self-enforced and decided by this consensus, this group of AIs that will look at the evidence, the contract, the asset, and decide what should happen. And if you think about it, most contracts in the real world are not really black and white. Most of them are not something you can codify as a smart contract. Normally it’s: okay, you have this creator campaign. Did the creator do what we asked? Was the content high quality? Did it actually reach the audience? How many impressions did it get? Some of those things are objective, but some are subjective. So what GenLayer does is enable these fuzzy contracts, not just very deterministic, black-and-white contracts, but fuzzy contracts that you can deploy and that then get executed across the world.
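The majority-vote consensus Albert describes can be sketched in a few lines. This is a minimal illustration of the idea, not GenLayer's actual protocol; the validator class, method names, and the "escalate" fallback are all hypothetical stand-ins.

```python
from collections import Counter

def resolve_dispute(validators, contract, evidence):
    """Ask several independent AI validators for a verdict and settle
    on the majority outcome, as in a trustless 'digital judge'."""
    votes = [v.judge(contract, evidence) for v in validators]
    verdict, count = Counter(votes).most_common(1)[0]
    # Only enforce an outcome backed by a strict majority of validators
    if count > len(validators) / 2:
        return verdict
    return "escalate"  # no consensus: fall back to further review

# Stand-in validators for illustration; a real system would query
# independently operated AI models, not fixed answers.
class FixedValidator:
    def __init__(self, verdict):
        self.verdict = verdict
    def judge(self, contract, evidence):
        return self.verdict

panel = [FixedValidator("refund"), FixedValidator("refund"), FixedValidator("deny")]
print(resolve_dispute(panel, contract="creator campaign", evidence="post metrics"))
# → refund
```

The point of the sketch is the shape of the mechanism: no single model's bias decides the outcome, and a split panel produces an escalation rather than a verdict.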
[11:35:36] Greg Kihlstrom: So, I think you touched on some of this already, but when a lot of people think about blockchain, they think about the things you mentioned: Bitcoin, Ethereum, NFTs, all sorts of things like that. So why blockchain for this, as opposed to some regulatory body, government entity, or tech company? What are some of the unique aspects of this that blockchain can solve?
[12:03:32] Albert Castellana: I think what we’ve seen over the last decade or so is that trust is breaking apart. There are a lot of countries where governments are falling apart, and companies that use their power to harm their users, from making you addicted to their platforms to presenting you with things you shouldn’t even see. That power has been concentrating in the hands of very few. The same thing happened with money: very few had power over money, and that meant you had to pay multiple percentage points to make a single transaction, or wait for days for it to go through. By decentralizing, by making it so there isn’t just one entity with the power to be the gatekeeper, operating with opacity... I mean, we’ve seen with Bernie Madoff how lack of transparency creates issues across the system. What we’re doing is basically trying to make the future of decision-making a traceable, fair, open, and accessible system that everybody can rely on, rather than one AI, or effectively nobody, making the decisions. You don’t need to just trust a single model with a specific bias, a specific point of view, and, actually, a specific owner. We think decentralization enables new types of use cases; that’s been proven by Bitcoin and Ethereum, and I think it will be equally proven through GenLayer. We’re already being decided upon by many different AIs we don’t know anything about. We just want to move a little bit away from that if we can.
[13:44:27] Greg Kihlstrom: Yeah, yeah. So let’s talk a little bit about the customer experience and the customer dynamic here. When a human agent, an employee at a company, makes a mistake, there’s a clear path for escalation, apology, whether that person continues to work at the organization or not, all kinds of things like that. Obviously, things get different when we start talking about autonomous agents. So what does a process of, let’s call it CX recovery, look like in a world run by autonomous or semi-autonomous agents? How can brands still manage risk and protect themselves against, let’s say, AI failure?
[14:33:43] Albert Castellana: To me the question is quite interesting, because customer experience exists, at the end of the day, because the customer is the one you’re interfacing with. And I wonder what it will mean, and I’m not a CX expert at all, when the customer is really not the customer, but rather an agent operating on behalf of the customer.
[14:52:90] Greg Kihlstrom: Right.
[14:53:50] Albert Castellana: My perception is that these agents are going to be looking at every angle and every margin. They will be looking for the objectively best product, not just responding to marketing that works well on humans; they will see through that. So I think marketing will fundamentally change the moment you have these agents working on our behalf. And I think this year is the year we’re going to see that, as we start to see things like shopping from OpenAI. I think many people watching or listening are going to be thinking, yes, actually, this Christmas, for example, I asked the AI for recommendations on what I could give, even having it look at the products for me. I think that’s going to change things fundamentally.
[15:41:40] Greg Kihlstrom: Yeah.
[15:42:07] Albert Castellana: At the end of the day, from the legal perspective, you will want escalation levels. First a system that’s automated giving you a response, then a human giving you that response, and you will want safeguards so it doesn’t potentially escalate into the legal world. What GenLayer does is essentially offer that type of framework too. But really, I think what you need is a system that can recognize: okay, I’m actually dealing with an agent, so my agent will be able to deal with it, negotiate, and try to find common ground. Things are going to change very quickly on this front. I’m not an expert on CX, so, yeah.
[16:24:26] Greg Kihlstrom: Yeah, yeah, no, of course. But to escalate it up to the legal level, there’s risk here. There’s brand risk, the brand perception part, which we could talk about, but I’d rather focus on risk mitigation. In other words, if things go so wrong with a customer experience that it becomes a legal issue, how should brands start thinking about measuring and managing that risk?
[16:57:11] Albert Castellana: I think that at the end of the day, measuring will be the way, yes? In order to measure, first you need to capture that data and analyze it, so making things explainable and understandable. Say I’ve got this customer service agent. One option is to just not track anything it’s doing. The other option is to give it very enforceable rules it needs to act upon. You want it to be as deterministic as possible. These agents are non-deterministic in general; you can let them do anything they want. What you want is to reduce the degrees of freedom so that they create fewer issues. So: reduce the degrees of freedom, make sure there are clear escalation processes, put as many constraints on them as you can, and those constraints enable completely new types of opportunities. Ultimately, I think the customer’s agent will try to find a way to game your agent. You’re a good brand, and your naive agent is just sitting there for other agents to attack: okay, what’s the best way? I can search the whole internet for ways to profit from this offer. Yes? And the whole thing will change.
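One way to picture "reducing the degrees of freedom" is a hard-coded policy layer that every agent decision passes through before it takes effect, with an audit log so that behavior stays measurable. This is a hypothetical sketch; the rule names, the action allowlist, and the dollar threshold are illustrative, not from any real system.

```python
def apply_guardrails(action, amount, audit_log):
    """Validate an agent's proposed action against enforceable rules,
    log it for traceability, and escalate anything out of bounds."""
    ALLOWED_ACTIONS = {"answer", "refund", "discount"}
    MAX_AUTONOMOUS_REFUND = 100  # dollars; above this, a human decides

    audit_log.append((action, amount))  # every decision is recorded

    if action not in ALLOWED_ACTIONS:
        return "escalate_to_human"
    if action == "refund" and amount > MAX_AUTONOMOUS_REFUND:
        return "escalate_to_human"
    return "approved"

log = []
print(apply_guardrails("refund", 50, log))          # → approved
print(apply_guardrails("refund", 500, log))         # → escalate_to_human
print(apply_guardrails("delete_account", 0, log))   # → escalate_to_human
```

The design choice mirrors the point in the conversation: the non-deterministic model proposes, but a deterministic, auditable layer disposes, and anything outside its narrow envelope goes to a clear escalation path.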
[18:14:14] Greg Kihlstrom: Yeah, yeah, agreed. So, looking out at future states: you’ve said AI won’t wait for lawyers. Looking out over the next couple of years, what’s the biggest mistake brands are currently making as they rush to adopt AI agents without considering some of these things we’ve talked about?
[18:38:43] Albert Castellana: I think the next two or three years is when we’re going to see this. Companies are going to be deploying agents at scale, at a scale that will be hard to control, because your competitor is deploying all these new agentic systems, so why aren’t you? And it’s so easy, you can vibe code, so you can move really, really quickly. What I think will happen, and people aren’t really seeing it yet, is that the legal system is going to come under real stress. These agents will be able to sue each other just for fun, do anything they want, and they don’t sleep. You don’t know who is behind them, and they’re crazy smart. The infrastructure is just not ready. You don’t know if the agent buying your product is actually from North Korea. What are you going to do? Okay, now you need KYC. Are you used to having KYC on your ChatGPT? What happens if one party doesn’t have it? Do we have this infrastructure? We don’t, right now. So I think you will see the emergence of all these adversarial agents, you will see the legal system start to struggle, and probably in a couple of years it will be obvious that you really need a layer, a legal system, for this future of AI. This AI commerce layer needs to exist, because otherwise the human legal system just cannot scale to that level. I don’t think it’s going to replace the legal system. I think it’s going to work like this: I want to offer you this product.

I put up an escrow, and the escrow is smart enough to determine whether I actually fulfilled my promise, and then your agent can say, yes, I want to enter into this contract. Maybe it’s a $1,000 contract; you don’t need to go through hundreds of thousands of dollars of litigation over a year or more, if not worse, to get something resolved, just for $1,000. That’s where we are right now. Customers feel like they’re not in power; the brands are in power, and customers can’t really do much about many of the things that are happening. I think that’s going to shift a little bit as things become trustless. That’s what we’re working on.
[20:46:44] Greg Kihlstrom: Yeah, yeah. Well, thanks so much for joining. I’ve got a couple of last questions for you before we wrap up. First: if we were having this interview one year from now, what is one thing we would definitely be talking about?
[20:59:43] Albert Castellana: I think we’ll see how agentic commerce is really kicking off. You will see people just asking their agents to make money. It’s so easy to say: make money for me. And then the agent will have to find a way to make money. So you can think about it: what would you do if you were that agent?
[21:16:32] Greg Kihlstrom: Yeah, yeah, definitely. Well, last question for you: what do you do to stay agile in your role, and how do you find a way to do it consistently?
[21:24:67] Albert Castellana: I’m always trying to be innovative and understand where we’re heading. And on top of that, I don’t really have a sunk cost mindset; I stay very adaptive. You always need to be at the tip of the spear, understanding that things are moving really quickly, and ready to change focus if you need to.