
Gabrielle Kohlmeier is a lawyer, tech whisperer, and transformation executive in a lifelong love affair with growth mindset and sustainable innovation. From building a Fortune 30 legal and policy approach to antitrust, to navigating retail risk, to leading global legal AI adoption and outperforming teams, she helps organizations rightsize risk and turn disruption into strategic value.
Here’s a glimpse of what you’ll learn:
- Gabrielle Kohlmeier shares her career journey from Big Law to Legal and Innovation Executive
- Practical tips for building AI fluency
- Common pitfalls companies face when operationalizing AI governance
- How companies can experiment with AI while right-sizing risk through privacy and security guardrails
- Strategies for keeping pace with rapid AI developments
- Recommended resources and approaches for building AI literacy and staying informed
- Gabrielle’s personal AI tip
In this episode…
Many companies are rushing to adopt AI tools and publish AI policies, yet far fewer are investing in AI fluency across their workforce. Knowing how to use an AI tool is not the same as understanding what it is doing, what data it collects and uses, and the privacy, security, and compliance obligations that come with using it. Without that level of understanding, organizations risk using AI without fully grasping its impact. So, what does true AI fluency look like in practice?
Organizations spend time creating AI governance policies, and sometimes those policies are not operationalized. Governance then becomes “precious” when it is documented and published but not embedded into how teams actually work. That gap becomes more pronounced when teams lack the AI fluency needed to apply governance to their day-to-day use of AI tools. To be effective, governance needs to be lived, with clear accountability, ongoing feedback loops, and policies and processes regularly revisited as AI use cases evolve. It also requires establishing privacy and security guardrails that allow teams to experiment with AI responsibly, while right-sizing risks.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Gabrielle Kohlmeier, Legal and Innovation Executive, about building AI fluency and operationalizing responsible AI use. Gabrielle explains why AI fluency goes beyond simply using AI tools and requires a deeper understanding of the ethical and legal obligations that come with them. She shares how AI governance often breaks down in practice and what it takes to truly operationalize it, while enabling responsible AI experimentation with clear guardrails. Gabrielle also highlights numerous curated resources to help companies stay grounded as AI evolves and offers a practical privacy tip that applies to everyday internet and AI use.
Resources Mentioned in this episode
- Jodi Daniels on LinkedIn
- Justin Daniels on LinkedIn
- Red Clover Advisors’ website
- Red Clover Advisors on LinkedIn
- Red Clover Advisors on Facebook
- Red Clover Advisors’ email: info@redcloveradvisors.com
- Data Reimagined: Building Trust One Byte at a Time by Jodi and Justin Daniels
- Gabrielle Kohlmeier on LinkedIn
- Hard Fork
- The AI Daily Brief
- Waking Up With AI
- “21 Days of AI: A Grit and Growth Mindset Challenge”
Sponsor for this episode…
This episode is brought to you by Red Clover Advisors.
Red Clover Advisors uses data privacy to transform the way that companies do business together and create a future where there is greater trust between companies and consumers.
Founded by Jodi Daniels, Red Clover Advisors helps companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. They work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media.
To learn more, and to check out their Wall Street Journal best-selling book, Data Reimagined: Building Trust One Byte At a Time, visit www.redcloveradvisors.com.
Powered by Rise25 Podcast Production Company
Intro 0:00
Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.
Jodi Daniels 0:21
Hi, Jodi Daniels here. I'm the founder and CEO of Red Clover Advisors, a certified women's privacy consultancy. I'm a privacy consultant and Certified Information Privacy Professional providing practical privacy advice to overwhelmed companies.
Justin Daniels 0:35
Hi, I am Justin Daniels. I am a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.
Jodi Daniels 0:59
And this episode is brought to you by Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we're creating a future where there is greater trust between companies and consumers. To learn more and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. So typically, Justin, you are the one who makes some kind of funny joke. But today I'm not going to make a funny joke. I'm going to start our transition. Okay, take it away. So by the time this airs, this is going to be our first podcast since we had some pretty big losses in our family, and I wanted to take this time to say thank you to everyone who has reached out. You might know I lost my dad in early March, and 24 hours later, we lost Red Clover Advisors' Chief Canine Officer, as someone once coined him, which is Basil. And if you've been listening to this podcast for a long time, you have potentially seen Basil. He often made cameos because he wanted pets while we were recording, or he barked to let us know that there was a leaf or a garbage truck, or he just wanted some attention. I'm still moving my chair as if I'm going to hit him, but he's not here. So we just wanted to say, or I at least wanted to say, thank you for all of your support, for welcoming Basil to our podcast and to all of our calls, and for all the outreach that people have given me. Well said. Any special memories that you'd like to share about Basil?
Justin Daniels 2:52
I guess my special memory is, wow. I don’t even know where to begin. I don’t know he was just a very loving, caring dog who was pretty happy, other than maybe a couple dogs he didn’t like.
Jodi Daniels 3:09
One, yeah, two dogs in the neighborhood are probably happy. But in all seriousness, I believe dogs are people too. So thank you. And if you want to read more about my dad, I wrote something pretty special and meaningful, and that would have been released, I guess, on March 17, so feel free to go back. But let's bring it back to privacy and AI. Okay, and Justin, you have a really cool guest that you have brought on, and we're gonna have a fun topic. This is a Justin topic. I mean, I guess it's a Jodi job. This is like a,
Justin Daniels 3:45
Really? You're the one speaking about AI governance, and this is a Justin topic?
Jodi Daniels 3:49
Because you talk about it, like, just all day long. Okay, 28 hours a day. Well, I'm only maybe 22 hours a day.
Justin Daniels 3:57
So today I'm excited to bring on the show my friend Gabrielle Kohlmeier, who is a lawyer, tech whisperer, and transformation executive in a lifelong love affair with growth mindset and sustainable innovation. From building a Fortune 30 legal and policy approach to antitrust, to navigating retail risk, to leading global legal AI adoption and outperforming teams, she helps organizations rightsize risk and turn disruption into strategic value. Gabrielle, welcome to the show.
Gabrielle Kohlmeier 4:30
Thank you. Justin and Jodi, what a special day to be on here. And I read the beautiful piece that you wrote about your dad and some of the pictures of your chief canine officer, Jodi, and just grateful for you sharing all of those memories, and really grateful to be here with you both.
Jodi Daniels 4:51
Thank you. I love this tech whisperer and lifelong love affair. This is one of the best intro sentences I've ever seen. So tell us all about your career journey and how you became the tech whisperer and how you have a lifelong love affair with growth and sustainable innovation.
Gabrielle Kohlmeier 5:12
Well, thanks. So I have to say, I don't know that I had the full-on love affair that I have now lifelong, because there really were kind of these pivot points throughout my career. So I started law school feeling kind of like an outsider, because I didn't grow up in the US, and so everything was new. So I was just trying to take in as much information as possible to understand things, and loved having this place where I could learn a whole lot of different things and incentives and how people are thinking, and then being creative and innovative around coming to solutions, legal solutions, for that. And I took that into Big Law and jumped into the world of antitrust, which I was surprised to find myself loving. But I think it was partially the ability, again, to learn lots of different things. I was working on so many different industries, from air cargo to fiberglass insulation to baby products to golf clubs. Like, it was just, learn about this industry and then apply the law to these different facts and the economics around it as well. And then I went from Big Law, where I, you know, did a lot of complex litigation and then a lot of mergers and acquisitions, into my first in-house gig, and was in a Fortune 30 telecom company.
And right when it had acquired two tech companies, AOL and Yahoo — so it was Verizon. And so I had the huge pleasure, right off the bat, my first week in, of getting to not only represent a highly regulated, conservative company, and see how that all works, and how do you approach and build governance, in that case around antitrust, in that space, but then the very different world of these tech acquisitions that we had just brought in. And I became their main lawyer for all of these digital advertising investigations that were taking place around the world, and that really got me into this world of technology, emerging technologies, big data, that then led me to AI. And I kind of built this reputation of being relentlessly curious, earnestly nerdy about learning all of the things, and then wrangling that to both ask, why are we doing things the way that we're doing it? How can we do it differently? And being able to bring those things to life. And that just put me into positions where I was asked to take on different challenges: be market general counsel and build teams while the markets were being built, and then jumping over and becoming the Global Head of Legal Innovation and Emerging Solutions, building large teams in India while acquiring and deploying AI for our 1,000-person department, and then also building our first long-term technology roadmap and actually implementing it. And so it's really been that tech force for a part of getting people to understand the value and how to approach it and the mindset behind it: that it's really not "this technology is going to fix everything," but we're going to work together to shape it, and it's going to give you more agency, as opposed to take away your freedoms.
Jodi Daniels 8:43
That is a very fascinating journey. And I like how you were just saying how the tech is going to help and is going to enable. I think that's a really important piece, especially where we are right now.
Justin Daniels 8:57
So many organizations are rushing to publish AI policies, but far fewer are investing in AI fluency across their workforce. So from your perspective, what does true AI fluency inside a company actually look like?
Gabrielle Kohlmeier 9:12
I love this question, Justin, because I think, you know, everybody's saying we need AI, we need to adopt AI. I got a lot of these, you know, urgent requests, demands about, you know, what is the AI tool that we're going to use? What is the technology tool that's going to fix everything? And to me, you know, such a big part was: that is not going to be enough. It's not going to be enough to have a tool. It's not going to be enough to know how to use a tool, because that's not AI fluency. What AI fluency really is, I think, is knowing what the tool is doing, why it does what it does, what data it's touching, what your responsibilities are when something goes wrong, around outputs, inputs, and also kind of the ethical weight of, what are we giving up when we use the tool? Are we okay with that loss? How do we mitigate it so that we stay well-rounded professionals that are adding the value that we're trying to add? So to me, it's really understanding the broader picture and taking the time and having a deliberate framework to not just know how to use AI generally, but all of those broader parameters around it.
Jodi Daniels 10:33
I'm curious if there's an example or a way that maybe someone listening could say, gosh, I really want to do the same thing in my company. Is there something that you saw that worked really well that someone else listening might be able to apply?
Gabrielle Kohlmeier 10:52
So, you know, I got a lot of questions around this, in terms of, like, what's the playbook? What works really well? I think that this is where the innovation and the creativity comes in. And so, you know, I'll go back just really quickly to talk about kind of these different levels of fluency that I think about, because I think it's helpful to think about, you know, what are you trying to achieve, to figure out what's going to get me there. So there's kind of this level one of awareness: I know what AI tools exist, and kind of what they do. A lot of people, I found, just didn't even know what was available, both in terms of, you know, here's a very specific AI tool, but here are also technology tools that we have that have AI built into them. And so, you know, a lot of what I was doing leading innovation was not just focused on AI, but more broadly: what work are we doing? Why are we doing it? What's the best way to do it, and do we need to do it at all? And so kind of starting with that as, I guess, even level zero, and then level one of, what are the tools we have? Then, how do I apply the tools effectively? And then the judgment around when not to use it: even though I can use it, is it appropriate to use it here? And then applying that judgment to kind of building these gritty AI-and-human teams, you know, how are we using AI and other technologies with teams in this coordinated way? And I think, to your question of what has worked effectively, one of the ways that I approached this was trying to get as many people doing this together. So I called my team the change agency, because I really wanted it not just to be, hey, tell us what to do, tell us what the tool is and how to use it, but that we are an agency that has these resources, but we need to do this together, because you have agency to be part of this change.
And so we had a lot of very deliberate touch points, both in terms of launching the tools and doing hackathons together, and then having twice-a-month lab hours, having office hours in between, creating monthly reports that we would send out around, you know, which tools are being used and how much — not to, you know, shame or pressure people, but to create a way to have conversations around, you know, why are people using this? Why is this not working as well? Is it, you know, because it's the tool? Is it because really this is not appropriate for that? So much of it was just, how do we launch conversations? And I think what works well in your organization needs to be tailored to, what are your goals? And being really honest about that, because sometimes we say, oh, this is the goal, but that's not really the goal. So I think, you know, being clear and honest around what you're trying to achieve, and then bringing people into those conversations, and, like, you know, getting into places where there can be exchanges and shared learning. Because I think that's where things really scale. That's where you don't lose all of the things that are happening in these silos, but people build on each other, and they're really excited and get the energy from other people using tools
Jodi Daniels 14:10
as well. I love how you emphasized right-sizing, because I'm always saying that. I think it's really important that companies pick what works for them in their culture. And I actually was talking a lot about just that in a recent webinar, as Justin pointed out, that I did on AI governance, where we were talking about AI governance frameworks and principles and guidelines. And I'm curious, in your point of view, where do you think companies tend to get AI governance wrong when they try and operationalize it?
Gabrielle Kohlmeier 14:43
So I think that one of the most common failures that I see is something that I kind of think of as precious governance. So I'll just share a quick story, if that's okay. I spent the last six months in Berlin, Germany, on a lovely sabbatical with my family, and one of the things that really struck me was how common history is presented there. It's just part of life. You're doing yoga in old churches, you, you know, are, like, having meetings in museums. It's just very part of things, as opposed to how I experience history in my US hometown of DC, where it feels kind of precious. It feels separate. It feels like we go to these places that hold historical meaning and have a lot of reverence for them, but it's just not really part of our day to day. And I think the problem that a lot of organizations have with governance is that it's the same kind of thing. You spend a lot of time and energy forming committees, creating the perfect policies, putting things together, if there is a policy to begin with. But, you know, for the ones that do, a lot of energy goes into creating that, and then it's published on an intranet site, and nobody ever looks at it, and it's not actually operationalized and part of things. And so I think creating the frameworks and the accountability and the revisiting and feedback loops, so that it really becomes something common and not something precious, is something that would make governance much more meaningful and robust and really achieve what we're trying to achieve: not just a beautiful, papered governance, but an actual, lived governance.
Jodi Daniels 16:36
I say that quite often: policy doesn't work without process, and process doesn't work without training. You need all of it together.
Justin Daniels 16:43
What topic are you speaking on at Atlanta AI Week? Remind me. That's your job. What is it? So Gabrielle, one tension executives face is encouraging experimentation with AI while also managing risk. How can companies create space for innovation without losing control of privacy, security, and compliance issues?
Gabrielle Kohlmeier 17:10
So I guess maybe I would challenge the framing of that question a little bit, because I think the idea that we can control this is maybe part of the hurdle. When you're looking at it as innovation versus control, you're kind of setting up a tension that, you know, I think leads to frictions that can be an impediment to actual innovation. So I don't think that it's as much, how do you control the innovation? It's, how do you create those conditions? How do you create the trust scaffolding, which includes the governance, where innovation and responsibility are really reinforcing each other, as opposed to being in this tension space? And I think right now, so much of it comes down to really grappling with risk and prioritization, and being really clear around that. I think in a lot of spaces, compliance professionals and lawyers are often extremely risk averse: how do we eliminate risk? How do we control risk? And I think it's more, you know, going back to exactly what you said before, Jodi: how do we rightsize the risk? How are we really clear around what the risk is, as clear as we can be, but thinking more broadly? And then, you know, saying, okay, this is your sandbox, these are the guardrails. We don't jump in without any of those guardrails, but within that, go forth and, you know, make smart experimentations. Do try things; we want to see, you know, where things go here, and build on that. And then I think the other thing is, you know, beyond just the risk part, creating time and space for it. I think that that's even more of an issue than, you know, the tension of, is this going to create huge privacy, security, and compliance concerns?
It's easier to say, well, we can't do it because of privacy, security, and compliance concerns. And so I think sometimes that's the scapegoat, as opposed to, okay, I'm going to have to spend some time and be thoughtful, but also kind of attack some of the defaults and how we've been doing things all along, to think about, how can we be doing things in a different way and in more innovative ways?
Jodi Daniels 19:46
Well, it's interesting how you just mentioned you need to find the time, because it also seems like people need to find time to be able to keep up with what appears to be a weekly announcement of this LLM or this new tool with their big latest changes, and how it's going to be this great new model and solve all these really interesting new use cases. How are you finding others are keeping up, and what would you recommend to keep up with that type of pace?
Gabrielle Kohlmeier 20:17
Jodi, I wish that it was every week. At this point, it feels like every day I go to bed and then I wake up and I'm like, I feel like I'm a year behind. And so I think even for those of us that are constantly trying to stay up on things, it's really hard. I have found it extremely helpful to be around people who are, well, are experts in this, you know, and have been throughout their careers, who say, I'm no longer an expert, because things are changing so quickly that, you know, I don't have the new expertise that is developing daily. So I think one thing is just giving myself a lot of grace, that it's okay, I have to have realistic expectations, but then also not keeping up with every single announcement. You know, I think that that is a losing game. It produces a lot of anxiety. It is counterproductive if that's where all of your energy is going. So I have a very curated set of trusted resources. Some are people that I really trust and agree with. Some of them are people where I'm like, I think that it's a little bit hype, but I want to know what's going on, and I want to know what is being said in those spaces, and there is valuable information that I get there. And so, you know, just having those different frames and different perspectives and different lenses, but not all of it, you know, just picking, like, a couple, so that I have a good spectrum of views. And then, you know, what are the big headlines? Not, you know, like, what is every little iteration, but what are the big changes that we're seeing, and how much further are we going, and kind of having that realistic sense. And I think this is one of the hardest parts: where are we, versus where do people say we are, and where do people claim to be, you know, with a lot of the hype?
And then, what do we need to prepare for? You know, like, when I was building our long-term technology roadmap, one of the things that I really wanted to ground people towards was: I'm not building this for the pressures of today, but for optionality for three years down the line. And what are all of the things that we need to put in place today to get there? And they have to have a certain amount of flexibility. So staying aware of, okay, how well are different tools working now, and what exposures are they creating, and where is the potential retraction going to be taking place because of potential privacy violations and security violations that then diminish the trust in the tools and make us use them less. And so, you know, I think it's a constant calibration, but I try to go easy on myself and be as realistic as possible, and spend more time talking to really smart people that are actually doing things, that are actually building and implementing, as opposed to telling us what we should be doing. So the people that I love following the most are not the ones that preach a gospel, but are there to work together and learn together as we're figuring all of these things out.
Jodi Daniels 23:31
I often get questions about resources, and you shared that you have a couple different trusted people and resources. Are there any that are maybe, you know, organizations or people that you'd feel comfortable sharing, so someone listening might be able to also expand their learning?
Gabrielle Kohlmeier 23:52
Um, sure. Well, first, I'm going to plug one thing that is a huge passion project of mine that focuses on AI fluency, which is the ABA Center for Innovation, that I know, Justin, I think we've gotten you roped into, and we'll continue to get you roped into more. And the Commission on Women, actually, which I'm a Commissioner on — we partnered up to create the 21 Days of AI: A Grit and Growth Mindset Challenge. And the idea behind this was really, you know, how do we get as many people as possible really thinking about the tools in this holistic AI fluency way? What are the things that I should be aware of and thinking about? I think, you know, there certainly are the enthusiasts that will just, you know, throw all of their data in, and I'm guessing, Jodi, you and I probably both feel like, help, think about what you're doing before, and, you know, like, where is this data actually going, and what is the trail that you're creating? But I also want to reach the people that are overly hesitant and not using this at all, and, you know, kind of like, I don't like it, I don't want to be part of it, I don't have time for it — to be thinking about, where are the good uses? How are other people using it? And how can I know enough about what is going on and what the tools do and what their capabilities are, so that I can be part of the bigger conversations around how they should be used, whether at work, in my communities, in my schools, you know, wherever your spaces are? I think that it's so important to create that fluency. So the ABA 21 Days of AI: A Grit and Growth Mindset Challenge is one source for building the fluency. And then for getting the information, you know, I've got a lot of podcasts that I follow. I think that there are so many, it's hard to say. I like Hard Fork. I like The AI Daily Brief. I like Waking Up With AI for my legal friends. And then, you know, I have a lot of different people that I follow on LinkedIn.
And if you want to follow me, you'll see, you know, all the different things that I like, because I do a lot of saving and liking. I think it kind of just depends what space you're in, because I see a big difference between, you know, what law firms are interested in, and lawyers in law firms, versus smaller-company GCs and legal professionals, versus people that are in larger corporations. You know, there are some similarities. But I think if you follow, you know, people that don't have the same environment as you, it can be really stressful, because they're just playing in a different field, and they have different restrictions. And you can feel like, oh my gosh, I should be doing so much, I should be, you know, leading a team of AI agents at this point. And that just might not be the reality that is possible in your day to day, and it might not at all be necessary. So I think, you know, finding different voices and just seeing what they're posting, what they're liking, and exploring things that way has been really helpful to me.
Jodi Daniels 27:21
Very, very helpful. Thank you so much for sharing.
Justin Daniels 27:25
So Gabrielle, what is the best practical AI, privacy, or security tip you would give to our audience around responsible AI use?
Gabrielle Kohlmeier 27:38
I mean, I think it is thinking, you know, broadly. I guess I would start with what I tell my kids: just know that everything you're putting into the internet, into an AI prompt, could potentially come back. You know, assume everybody can see it. Don't put things in there that you don't necessarily want other people to know, because there is no guarantee of privacy once you put things online. And I think that also, you know, goes for a lot of AI tools, unless you have a very secure, enterprise-grade environment. So I think being really thoughtful about what you're putting into your prompts, what you're putting into your AI tools, matters. But then I do really think that it's so important to get in there and try different things, you know, try the low-risk things, see what does work for you. I think being part of it and exploring is just critical to be part of the conversation, and I think it also really mitigates some of the anxiety around it. So that might be less of, you know, specific privacy or security advice, and more around building your AI fluency. And then, you know, while you're doing that, have those thoughts around, you know, where is this data going? What are the potential exposures that I should be thinking about, and not just using it?
Jodi Daniels 29:15
And when you are not thinking about AI governance and AI innovation, what do you like to do for fun?
Gabrielle Kohlmeier 29:24
So I feel like AI governance — and how do we really get people excited about transformation and doing things differently and creating value from that in ways that are sustainable and benefiting — is constantly on my mind. So I run, I take hikes, but I'm always thinking about it. Except I recently started doing ballet, and I'd never done ballet before. So, starting ballet in my 40s, one of the things that I love about it is that I feel like it demands my attention in a way that I cannot be thinking about AI governance. So I think it is the one thing where I am purely focused on my very strict French ballet teacher that likes to yell at us. And there's something so delightful about that, also that, you know, I'm being bossed around, as opposed to having to be the one that's bringing all the energy. I only need to focus on the millions of different things that you need to focus on while doing ballet. Plus, you get to wear very cute pink shoes, which is also fun.
Jodi Daniels 30:34
I love that. I don't think we've had a ballet one yet. We have lots of hikes, and I do outdoor activities, too many racket sports. I love that, so that is awesome. Well, Gabrielle, thank you so much for joining us today. If people would like to connect, where should they go?
Gabrielle Kohlmeier 30:50
I am pretty active on LinkedIn, so definitely send me a request. Would love to be connected.
Jodi Daniels 30:58
Amazing. Well, thank you again for joining us. We really appreciate it. Thank you, and this was so great.
Outro 31:08
Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.
Privacy doesn’t have to be complicated.
As privacy experts passionate about trust, we help you define your goals and achieve them. We consider every factor of privacy that impacts your business so you can focus on what you do best.

The post Advancing AI Fluency With Grit and Growth Mindset appeared first on Red Clover Advisors.






