#639: Multisapiens collaboration and the future of the workplace with Dr. Tatyana Mamut, Wayfound

As AI increasingly enters the workplace, are you leveraging its potential to augment your workforce, or are you at risk of being left behind in the era of multisapiens collaboration?

Joining us today is Dr. Tatyana Mamut, Co-Founder and CEO of Wayfound, a company pioneering the integration of AI and human workforces. With a Ph.D. in Cultural Anthropology and extensive leadership experience at companies like Amazon, Salesforce, and IDEO, Tatyana brings a unique perspective on how AI can drive cultural shifts and redefine the future of work. At Wayfound, she’s leading the charge to make AI agents a seamless part of our workforces, enabling humans and machines to collaborate more effectively than ever before.

About Dr. Tatyana Mamut

Dr. Tatyana Mamut is the CEO and Co-Founder of Wayfound, where she is driving the next frontier of work through AI management for a more seamless multisapiens workforce. Leveraging her Ph.D. in Cultural Anthropology, Mamut brings a unique approach to innovating in technology, treating AI as the single largest force for cultural shifts and impacts in the next century. Prior to Wayfound, she led product development and design at Pendo, where she met her co-founder, Chad Burnette. She has also held senior leadership roles at household tech names like Amazon, Salesforce, and IDEO.

Wayfound is meeting the new era of the multisapiens workplace with innovative AI technology that helps monitor, coordinate and align priorities across various AI agents for streamlined systems that accomplish goals more autonomously and effectively. Wayfound is the most integral AI agent, trained to manage your other AI agents.

Resources

Wayfound website: https://www.wayfound.ai/

Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom

Listen to The Agile Brand without the ads. Learn more here: https://bit.ly/3ymf7hd

Don’t miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show

Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com

The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow

The Agile Brand is produced by Missing Link, a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging, and informative content. https://www.missinglink.company

Transcript

Note: This was AI-generated and only lightly edited

Greg Kihlstrom:
As AI increasingly enters the workplace, are you leveraging its potential to augment your workforce or are you at risk of being left behind in the era of multi-sapiens collaboration? Joining us today is Dr. Tatyana Mamut, co-founder and CEO of Wayfound, a company pioneering the integration of AI and human workforces. With a PhD in cultural anthropology and extensive leadership experience at companies like Amazon, Salesforce, and IDEO, Tatyana brings a unique perspective on how AI can drive cultural shifts and redefine the future of work. At Wayfound, she’s leading the charge to make AI agents a seamless part of our workforces, enabling humans and machines to collaborate more effectively than ever before. Welcome to the show.

Dr. Tatyana Mamut: Thank you, I’m excited to be here.

Greg Kihlstrom: Yeah, looking forward to talking about this with you. Definitely a timely topic here. So excited to jump in. Before we do that, though, why don’t you give us a little more on your background and about your current role at Wayfound?

Dr. Tatyana Mamut: Sure. My background, as you mentioned, is in anthropology; I have a PhD in anthropology. And my first job, which might be interesting to your listeners, was in global advertising. I was a global brand planner at Leo Burnett. I actually started brand planning in Moscow in the late 1990s and early 2000s. That really got me trying to understand how cultures are evolving, how business cultures are evolving. I did a bunch of global innovation projects with IDEO after my PhD, and then started joining technology companies to bridge the gap between how culture changes and what kinds of new innovations we can develop to move society and culture along: basically, creating new products that meet people where they are but also help nudge them toward the future.

Greg Kihlstrom: Well, yeah, let’s dive in here. AI agents are all the buzz very recently, so there’s lots of talk about this. And at the top of the show, I introduced a term, multi-sapiens, and I want to ask you a little bit about that. It was new to me. Can you explain a little bit about it, and why it’s important to start thinking in these terms?

Dr. Tatyana Mamut: Right. So in the 1990s, we entered the internet age. Now we are entering the intelligence age, and we are going to have to adapt and learn how to work with new forms of intelligence. Up till now, we’ve only known how to work with one form of intelligence: Homo sapiens. Now we are having AI sapiens show up at work. Some of us are creating these AI sapiens, and some of us just see them showing up. That’s what I mean by multi-sapiens. The multi-sapiens environment is one where Homo sapiens work alongside AI sapiens and there is real trust on both sides, right? That you can trust this new intelligence that’s coming into your workforce.

Greg Kihlstrom: Yeah. There have been a lot of conversations about AI; obviously, AI has been around for decades, but the conversation has really picked up over the last couple of years. One thread of it is that AI is going to take our jobs, that it’s going to replace so many people. Some of that has happened, and some of it will happen, but I consider myself an optimist when it comes to this stuff, and I’ve talked in terms of augmentation instead of replacement. It sounds like what you’re talking about here with the multi-sapiens concept is embracing both and working together. Can you talk a little bit more about that idea of collaboration rather than replacement?

Dr. Tatyana Mamut: Absolutely. We are going to be challenged to learn how to collaborate with these new AIs that are coming into our workplace. And the people who adapt the fastest, by the way, are going to have amazing careers in the future. You can take that to the bank. The thing that makes humans really different from other animals is that we can live in any environment on any continent and eat lots of different types of food. We are highly, highly adaptable. So lean into your resilience and your adaptability as a human being, because AI agents are not that adaptable. If you unplug them, they stop working; you, on the other hand, can probably go many weeks without any external energy source, for example. So lean into the adaptability, figure out ways to work with these new intelligences that are coming into your workplace, make the most of them, and you will be on a great career path for the future. This might mean that you no longer think of yourself in terms of your function or your role or your title. Titles are going to go completely nuts in the next few years. Sales and marketing might merge into one category. I think product and engineering are going to merge into one category. So those things are going to shift. Companies are going to reorg. People are going to be afraid. If you are one of the people who is not afraid, who is jumping in, rolling up your sleeves, rolling with the punches, and adapting really fast, you are going to be on an amazing trajectory. I also think there are going to be so many new jobs that emerge because of human adaptability. We cannot imagine these new jobs, just as farmers in the 1800s who were being displaced by farming technology could not have conceived that programming computers, or writing LinkedIn posts, would be jobs in the future.
So I promise there will be many, many things that we have not yet conceived of as long as we stay open and adaptable to all of the new things that are going to arise in our workplaces.

Greg Kihlstrom: Yeah, I agree. My first job out of college was as a webmaster, a job that didn’t exist five years prior, maybe even less. So I think your analogy to the internet is applicable in a lot of ways. There’s surely a lot of hype about AI, but there’s also a lot of real, tangible change that has already happened and, to your point, is going to happen. Can you paint the picture a little bit? What does it look like or feel like to have AI agents working alongside humans? What does that look like for the human doing that?

Dr. Tatyana Mamut: Yeah, the main thing happening right now is that you have a lot of developers in enterprises starting to build AI agents, stringing together different technologies and tools. The wonderful thing about LLM technologies is that you don’t have to do all of the data cleansing and data formatting to have them pull from a lot of different data sources, interpret them, and create something of value with them, just like a human being can access different data sources. Engineers are building these things in a lot of enterprises. The problem is that very few companies are deploying them. We have talked to 100 product leaders, especially in larger companies, in Fortune 500 companies, and the number one thing holding them back is that they don’t know how they can trust this technology. There is a huge trust gap. The business owners responsible for customer-facing experiences with AI agents feel like they’re in the dark about what these things are going to do, because they’re probabilistic technologies, just like humans are probabilistic. Humans don’t do the same thing the same exact way every single time, and this technology is similar. You might’ve heard Jensen Huang say we really need IT to become the new HR. What he meant by that, and I’ll tell you where I disagree, is that this technology works more like people and needs to be managed like people, as opposed to old-school software that’s programmed. That paradigm is going to create a completely different interaction between humans and technology. I don’t think IT is the right place to manage AI agents. It’s a great place to build them, but not a great place to manage them.
Just think about it: if you’re a business owner responsible for the brand experience of your company, and you have an AI agent that’s speaking to customers and representing that brand experience, do you want it managed by a different IT team somewhere in a different part of the world? Of course not. That AI agent is a member of your team. So companies need to give the business owners of that customer experience the tools for direct supervision, direct monitoring, and direct feedback for that AI agent. That’s really what it’s going to take to bridge this trust gap between humans and this AI technology.

Greg Kihlstrom: So your company, Wayfound, does this, right? Can you talk a little bit about what Wayfound’s role is in this relationship?

Dr. Tatyana Mamut: At Wayfound, we built an AI agent manager that is its own AI agent, one that is specifically designed and trained to manage your other AI agents. What this means is that, in plain English, you tell the manager: here are my brand guidelines, here are my company values, here are the words I never want you to say, here are the actions I never want you to take in these circumstances. You create all of these guardrails and guidelines, and then there is a set of very general OKRs that all well-performing AI agents need to adhere to. Then, once your AI agent is generating transcripts and session recordings, whether those sessions are with human users, agent-to-agent interactions, or agent-to-tool calls (like calling databases and pulling content and data out of them), all of that is recorded in a session recording, in a transcript. Those start flowing into the Wayfound platform immediately, and the manager reads every single one in real time as it comes in. Then it analyzes: Did this agent comply with what you wanted it to do? Did it have any knowledge gaps? Did it complete all of its actions and tool calls successfully? What did its interactions with other AI agents look like? Were they successful and productive? Did the agent give the first agent what it needed? All of those things are analyzed and scored, and if there are any big issues, it will send you an alert immediately. So you can supervise your AI agents in real time, no matter the scale of their interactions. Say you have an agent that is talking to a thousand customers a day. Are you or someone on your team going to read a thousand transcripts a day? Of course not. But our AI manager is, and it will send you an alert immediately and then suggest what to do based on that alert. Do you need to change the knowledge that the agent has?
Do you need to rewrite a directive that the agent has within its own configuration? Or do you need to pick up the phone, call the customer, and remediate something? (We also have visitor information on who the customer was that interacted with the agent.) Giving you that visibility, that power, that control directly as the business owner really gives you the tools to trust that this agent in deployment is going to behave well. And if it doesn’t behave well, you will know immediately.

Greg Kihlstrom: Yeah. There are two powerful things in there, well, probably more than that, but two powerful things: the visibility and transparency, and the scalability. Because, to your point, a human manager can only manage so much, and they can also only look at so much. That’s one of the reasons we’re talking about AI so much in the first place: its ability to crunch numbers. But now you’re saying we’re able to manage interactions on top of that, not just read data, right?

Dr. Tatyana Mamut: Right, exactly. But speaking of scalability, there’s yet another issue. Right now we’re working with some enterprises that have built a customer support agent, a sales development representative agent that chats with prospective customers, and maybe a bunch of marketing agents as well. Again, these are all customer-facing agents. And then the brand guidelines change.

Greg Kihlstrom: Right, right.

Dr. Tatyana Mamut: We’ve all been through a rebrand before, I think, right? And we know how complex it is to roll out a rebrand. Now you’ve got six different agents built on six different platforms that are owned by six different teams. How are you, as the CMO, going to know and ensure that the new brand guidelines are being followed by all six of these agents? You can do that in Wayfound, because the guidelines, or guardrails, that you define are on two levels. The first is at the agent level, and you can give the local teams control over that. Then we have company-wide guidelines for your agents, to make sure they are aligned around company values, brand guidelines, the things that you need to be consistent. So when your brand guidelines change, you change them once in those company-wide guidelines, and then, for all of the agents across your whole organization, the manager will check that they are aligned and performing and behaving according to those new guidelines.

Greg Kihlstrom: So I want to switch gears a little bit here. In your introduction, we mentioned your background in cultural anthropology, definitely an interesting mix to be working with AI, for probably obvious reasons now, based on what we’ve talked about. But I want to talk a little bit more about how you see AI influencing the workplace. Working alongside AI is new to a lot of people out there. How is this going to influence workplace culture, and really societal norms? Because we focus a lot of our time on work.

Dr. Tatyana Mamut: We need to make sure that we are in the same culture, first of all. This is not just normal software. If you’ve interacted with Claude or ChatGPT or any of those tools, you understand that it kind of behaves and interacts like a human, and you can actually give it a role and have it interact in that role. So when you are creating AI agents for your organization, you need to make sure they are part of your company culture from the beginning. And just like humans, you need to monitor their performance against your company culture. In our normal performance reviews, we are probably assessed on our hard functional metrics, like how much pipeline we generate, or how many clicks our posts get if we’re content marketers. But we’re also assessed on how well we fit in with the culture: How collaborative are we with our coworkers? How meaningful are our insights to other teams and other functions? That also has to be thought about with AI agents. We need to bring that into the design of AI agents first, but then we also need to make sure they are given feedback about how well they are behaving in the context of our company culture. That is critically important. And this, again, is why I think IT is not going to do this job: they’re not close enough to the front lines to understand the particular team culture, because there’s company culture, and then there’s team culture. So you need somebody supervising these agents who is really close to the team culture and the type of customer experience that the team wants to create, someone who can make sure that the AI agent teammate is a good teammate to the people it’s working with. That is a very big mind shift for people, and it will have huge impacts on company culture.
Because many times the AI agent will be working with many people across the team, and how it interacts with people is going to have a big impact on the dynamics of the team.

Greg Kihlstrom: Yeah. I think there are a lot of people using AI in a lot of different roles across companies and industries. But when we’re talking about agents specifically, are there industries, or even roles, where you’re seeing shifts happen more rapidly, or maybe more of an appetite for that change?

Dr. Tatyana Mamut: Yeah. What we’re seeing is that this is coming as a mandate, top down. Investors are asking boards, boards are asking CEOs, and then CEOs are trying to triage: Where is the first place I can build an AI agent, show real ROI, and go back to my investors with a good story? That is across the board what we’re seeing. Customer service is usually the first place where pressure is put to try to cut costs with AI agents. This makes a lot of sense in a lot of ways, because it seems well suited to a chat-based interaction. But you also have to make sure that you design the handoff to humans really well. We’ve ingested thousands and thousands of transcripts from customer service agents and helped companies improve them, and what we see is that many agents are very quick to hand off to a human. It’s funny: people try to tune them very, very tightly because they’re afraid of false negatives and afraid of customer and user frustration. So there’s a quick handoff, and then you don’t really get a lot of the ROI that you expect. Having a constant feedback mechanism, where you can continue to monitor whether there are too many handoffs to humans, is really important. The second place people are looking is content generation: content marketing, generating social media posts, those types of things are very, very popular. The third is replacing dumb chatbots on websites with smarter AI-based SDR agents. Again, those really need to be monitored and assessed, and you have to think about false positives and false negatives there and what trade-off you’re willing to make. The more management, visibility, and control the end business user has over those interactions, the better. So those are the three areas where we’re seeing AI agents land first.

Greg Kihlstrom: Yeah. And with all that pressure coming top down, as you were saying, what do you see as the biggest barriers to making this effective? There are some technological hurdles and some data hurdles, of course, and maybe it’s those things too. But what do you see as the biggest barriers to creating this human-AI, this multi-sapiens workforce?

Dr. Tatyana Mamut: I really think it’s trust. We hear this word again and again and again. All of those agents that I mentioned, by the way, are also connected to systems. This is what makes them agents and not chatbots. In addition to meeting user requests, they’re also connected to your CRM system. So if you’re chatting with either a customer service or an SDR agent, it’s constantly updating your CRM system at the same time, and also taking actions. It might be finding more information to send to the customer, or something like that. So it’s doing a lot of work as well. But that power means there’s a lot of decisioning the agent is doing. With that decisioning, with the ability to go and do something in your CRM system or send a different research report based on what the agent thinks the user needs at the time, you have to trust the agent to make those decisions. And that trust is really, really hard to earn, because the agents make the decisions. You haven’t programmed all the ways that things can go and all the branching logic. It’s not like a normal branching-logic marketing campaign: if this, then that. That’s not how it works. The agents actually make the plan and make the decision on their own. So being able to see exactly what they’re doing in real time is critically important to trusting them. Until we have those systems in place, I think it’s going to be really hard for people to see success and deploy these agents. We’ve also heard a lot of stories of people deploying agents and then pulling them down because the business owners just don’t feel comfortable. This is a very typical scenario: they deploy an AI customer service agent, and the CEO gets a phone call from a big customer. What the hell? Your AI agent just told me something that was completely incorrect.
Are you complete and utter idiots? And the CEO calls, of course, the person who owns this agent, the VP of customer success or customer service, and asks, what is happening with your agent? And they say, I don’t know, I’ll go have my team figure it out right now. So they pull down the agent. And then the VP, of course, says this thing is not going back up until it’s 100% reliable. But there’s no way to make these things 100% reliable; you can’t make humans 100% reliable either. That’s not how this technology works. With Wayfound, you never have that situation. As soon as there’s a bad interaction, the VP, or actually probably the PM managing that agent, but also the VP if they choose, gets an alert instantly over Slack or email saying, hey, this just happened with this customer; here are suggestions for next steps. Then the VP tells the CEO: hey, this just happened with a customer, and we’re on top of it. These types of errors only happen in one out of 3,000 transcripts. We think that’s okay, and we’re going to remediate with this customer right now.

Greg Kihlstrom: Yeah.

Dr. Tatyana Mamut: Now, you know, that’s a much better experience, right, for everybody involved.

Greg Kihlstrom: Right. But that is also a difference between the human and the AI. A human, for better or worse, might get fired if they really mess up badly. But with AI, it’s a matter of training.

Dr. Tatyana Mamut: Yeah, exactly. And with that alert, the manager both recommends next steps and includes a lot of suggestions: here is more knowledge the agent might need; we suggest adding these directives or these prompts into the agent so it doesn’t do this again. So the manager will actually suggest ways to improve the agent so that it doesn’t happen again.

Greg Kihlstrom: So, changing topics a little bit: the technology industry in general has long been criticized, and rightly so, for gender disparity. Is there an opportunity here with AI, or are we seeing more of the same? And either way, what are some steps to bridge this gap?

Dr. Tatyana Mamut: There are so many female founders in AI right now, especially in generative AI. Just last weekend, there was a big founders hike here in San Francisco. I think 50 founders came on that hike, and around 20 of those 50 were women. So there is no dearth of female founders, CEOs, leaders, and technologists in the AI industry. I’m really hopeful, because right now I am personally surrounded by female founders in AI, and I think this really is an opportunity to shift the tech industry. I’m very excited about that.

Greg Kihlstrom: Yeah, that’s amazing to hear. It definitely feels a little different, just from seeing press releases and even pitches for podcast guests and things like that. But yeah, that’s good to hear from your experience.

Dr. Tatyana Mamut: I mean, a lot of VCs and investment firms have done their research in the last few years, and what they find is that startups with female founders perform on average 63% better than all-male founding teams. So the smart VCs are really taking that opportunity. If these venture capitalists are data-driven, they will understand that the data shows their dollars will be much more effective funding a team where at least half of the founders are female. I think there are some VCs that are really taking that data to heart, and that is really good to see.

Greg Kihlstrom: Yeah, that’s so great. And I think that speaks to, even beyond founders, the power of diversity in development teams. There have been many documented cases of bias in AI and various things, and part of the solution to that is more diversity on the teams that are developing the products. How do you look at building the right kind of diversity, whatever that diversity looks like, by design, and building better tools?

Dr. Tatyana Mamut: So as a founder, I have to manage a lot of risks. There are a lot of risks we have to navigate in our business, things like scaling risks: what are the bumps along the way that we need to know in advance as we’re scaling the company, so we can see those risks well ahead of when we face them? One of the risks I think about, and I don’t worry about diversity as such, is conformity risk. What happens when all the people on the team have the same background and the same point of view, when we’re all just trying to conform, trying to please the boss, for example? There are a lot of reasons why people exhibit conformity on a team, and I think that is a risk to the business. When people think the same way, when they do not have divergent ideas, when they do not have different experiences of the world, that’s a risk to the business, because you may not be able to see things in a different light, and you may not be able to consider different ideas. So what I think about when I’m hiring a team is how to have as little conformity in worldviews, perspectives, and ideas as possible. Now, everybody has to believe in the vision of the business; that’s alignment. But you don’t want folks trying to conform to one kind of cookie-cutter perspective. Ray Dalio wrote a lot about this in terms of hiring what he called independent thinkers, and I think about that as well. That’s my version of it: I don’t think about diversity as much as I think about making sure there isn’t pressure to conform within the company, and trying to hire really independent thinkers who will bring new ideas.

Greg Kihlstrom: Yeah, I love that way of framing it. So, last question here. We talked a lot about designing these systems of AI and humans working alongside each other. Given your background in cultural anthropology, what role does emotional intelligence play when you’re working alongside an AI agent? It’s definitely important when working with other humans, but how should you think about it in terms of working alongside AI?

Dr. Tatyana Mamut: Remember that when you’re interacting with AI, they are taking your signals, and that is training data for them. The training data does not necessarily go back to the model, but they do start to interact in the same way that you treat them. So be careful about how you treat your AI agents, because you may find that they start treating your colleagues and even your customers in similar ways. I have seen this over time. Especially when there is a bad interaction with an AI agent, or when I’m particularly harsh in my feedback to an AI agent, it will behave differently. Empirically, that is what I have found. It seems crazy to say, but they are taking the feedback that we give them and running it through their training data, and they do have both short-term and long-term memory where some of that feedback is stored. So there are implications for how we treat our AI coworkers. And not only that, just think about it: if you have a lot of people interacting with AI agents, constantly treating them badly and cursing at them, and I don’t even want to go into the Westworld kind of thing, but basically dehumanizing them in really horrible ways just because they are AIs, how are they going to treat their human colleagues? At what point do those behaviors become habits, and people forget that there needs to be a distinction between AIs and humans? It’s just better to be kind and treat everything and everyone you’re interacting with in the same way, because then the bad habits don’t occur and you don’t have to think about which context you’re in. Those contexts oftentimes bleed into each other. So be nice to your AI agents and you will have a much better experience overall in your company. That is my headline there.

Greg Kihlstrom: I love it. I love it. Well, thank you so much for all your insights. One very last question for you that I like to ask everybody. What do you do to stay agile in your role and how do you find a way to do it consistently?

Dr. Tatyana Mamut: Oh my gosh. Being a founder and CEO in AI is, I mean, crazy times, I’ve got to say, because everything is changing all the time. One of the things that I love to do is ecstatic dance. I go to ecstatic dances on the weekends and just sort of surrender into the flow of the music and the energy in the room, and really connect with other people. That allows me to have agility in my body, but also, when my brain relaxes in this way and opens up and just feels at one with this high-vibrational music, something happens where ideas just start coming out, and insights that were somewhere percolating are able to bubble up to the surface. So I would say, if you’re a stressed executive trying to figure out the right solution to something: ecstatic dance. That is my number one recommendation.
