


Sandy Dunn
Sandy Dunn is an accomplished Chief Information Security Officer (CISO) with over two decades of experience in cybersecurity. Currently serving as the CISO at BENInc.ai, she specializes in AI security, threat assessment, and team leadership. Sandy is actively involved in industry initiatives, including her work with the OWASP Top 10 for LLMs Core Team, where she led the development of the OWASP Top 10 for LLMs Applications, Cybersecurity & Governance Checklist.
This checklist cover critical vulnerabilities such as prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities, as well as guidance on defending and protecting LLMs.
As an adjunct professor of cybersecurity at Boise State University and a sought-after speaker, Sandy leverages her expertise in AI ethics and cybersecurity strategies to address modern information security challenges, including those related to LLM applications.
Throughout her career, Sandy has demonstrated a unique ability to spearhead innovative solutions in AI security and cybersecurity. Her professional journey began in software and hardware sales, where she worked with high-profile clients such as NASA, JPL, Secret Service, and the IRS. She later transitioned into the security field while working at HP, focusing on digital sending and security analysis for HP multifunction printers.
https://www.linkedin.com/in/sandydunnciso/
OWASP Top 10 for LLMs Applications, Cybersecurity, and Governance Checklist
https://github.com/subzer0girl2

On todays episode:
Main Discussion Topics
- The Role of AI in Cybersecurity: The discussion focuses on the significance of AI in transforming cybersecurity practices, including the development of the OWASP Top 10 for LLMs and the potential challenges and benefits AI brings to the industry.
- The Intersection of AI and Human Behavior: The podcast explores how AI interacts with human behavior, emphasizing the importance of understanding psychology, biases, and human fallibility when developing and deploying AI systems.
- Risks and Opportunities with AI Adoption: The conversation delves into the risks associated with AI, including data privacy concerns, manipulation potential, and the need for responsible AI use. Conversely, it highlights the immense opportunities AI offers in enhancing productivity, creativity, and problem-solving.
- Human-AI Collaboration: A key point is the idea of AI as an assistant that can augment human capabilities rather than replace them. The discussion covers how AI can be integrated into workflows to improve outcomes without losing the human element.
- Ethical Considerations in AI Development: Ethical issues in AI, including the dangers of anthropomorphism and the need for transparent, accountable AI systems, are thoroughly discussed.
Below is the transcript of the podcast and some key points related to what was referenced.
[00:00:00]
Daemon: Today I am joined by Sandy Dunn, a seasoned CISO with over 20 years of experience in both healthcare and startup enterprises. She is also the creator and project leader of the OWASP Top 10 for LLM Application Cybersecurity and Governance Checklist and a core member of the OWASP Top 10 for LLMs. Sandy, it’s great to have you on the show.
I was wondering if you can give me a bit of a background on how you got to where you are right now in your career.
OWASP Top 10 for LLM Applications – A cybersecurity guide for large language models
[00:00:27]
Sandy: Sure. And thank you for inviting me, Daemon. I think for any of us who’ve been doing this for a while, our beginning story is interesting.
I actually started out in sales, selling PCs at Micron PCs. When I was upgrading people from 16 to 32 megabytes of RAM, I’ve really been part of this from the very beginning—the real explosion—and driven by curiosity, I started doing cybersecurity before we even called it cybersecurity.
So, I was in the basement of Building Six at HP Labs, breaking into their printers. I was doing competitive intelligence on printers and just got caught by the security bug. And so, that led to many different roles at HP. Then, I was a CISO at Blue Cross of Idaho. I’m just driven by curiosity and fascination around how do we protect ourselves from these devices.
[00:01:14]
Daemon: Yeah. Thank you very much for that. So, what initially inspired you to co-create the OWASP Top 10 for LLMs, and what gap did you aim to fill in the cybersecurity landscape?
[00:01:25]
Sandy: So, when ChatGPT was first released, I would say that I was one of those people who, at that time, was not impressed with AI. All of us had heard about it for years and years. Every vendor had it on a spec sheet. We’d been underwhelmed by behavior analytics. Then, as soon as I downloaded and started playing with ChatGPT, I had those two extreme feelings. I was completely amazed—this was unlike anything I had ever used before. I immediately recognized that this was a moment when things changed. But I was also terrorized. You and I have been trying to protect people for a really long time, and I thought, “Oh my gosh, we just lost the war.” Like, how do we stay in front of this? So, at that time, I was trying to assess the threats. As a CISO, the thing that keeps you up at night is, “How am I going to protect my organization?” And so, I kept trying to determine what the threat was and what I should be afraid of. I created that checklist because it was what I wanted and I couldn’t find it, so I built it.
[00:02:33]
Daemon: And so, what would you say were the key challenges in the development process of that checklist and getting it out to the greater public? What did that process actually look like?
[00:02:47]
Sandy: That’s a great question. And OWASP, which I absolutely love, is unique. As I was so amazed by ChatGPT—and I think many of us have the same story—I started watching every YouTube video, jumped into every Discord, jumped into every Slack. I found the OWASP LLM Top 10. They were just getting started. I lurked for a while, listening to the conversations going back and forth. OWASP is unique in that we’re nonprofit, vendor-neutral, and global. I say we’re nondenominational. But everyone’s a volunteer—nobody gets paid. There are a few corporate people just to keep things running, but the vast majority of work produced by OWASP is from passionate volunteers. And I think the biggest challenge with volunteer-driven projects is that people drift in and out because it is volunteer. So, there isn’t a boss. Everyone’s committed to doing it, and any of us who have run a BSides or done a volunteer deal know it’s not always easy to get everyone on the same page and get everyone’s time and stuff. So, I think that was the biggest challenge with it—just coordinating volunteer time. But really, it is the cheat sheet that I wanted. So, I am working on version two, and my goal is to keep it simple. There’s so much documentation out there about AI and what the threats are and different things. I want a cheat sheet that tells me what I need to be afraid of tomorrow.
[00:04:09]
Daemon: Yeah, there’s a lot of great stuff that’s been provided by OWASP for the Top 10 for LLMs. The frameworks that are governing AI are rapidly changing, and there’s no definitive standard at this point. I think the EU is a little bit farther along than North America. They’ve implemented the—you can argue about that. I’d love to.
EU AI Regulations – Early adoption of AI frameworks
[00:04:31]
Sandy: Yeah. No, I agree. I’m in awe of different projects. I do try to jump into other people’s projects once in a while. And if you ever want to be inspired or just get your battery filled, just go hang out in an OWASP group. There are so many incredible people who do amazing things, and they’re doing it just because they want to make the whole world a better and safer place. We do it for our families; we do it for our businesses. That’s why people are committed to OWASP—because we’re all trying to protect the things that we care about.
[00:05:04]
Daemon: So, for organizations that are starting to go down the path of adopting LLMs, where should they really start? I know that some organizations would start with a general policy that they would apply as a start—at the beginning of a center of excellence for AI. And then they’d say, “These are the things that an organization or the people within the organization can do.” But beyond just setting those base rules, where should they go next?
[00:05:29]
Sandy: It’s a really good question. I actually took Andrew Ng’s AI for Everyone class this week, and I would actually say the first step is education because what I see consistently is a lack of understanding about this technology, what makes it different, and why natural language processing is so incredible. Why it is vastly different than any technology that we’ve ever actually used is that we get to talk to it in our language. So, instead of me having to go to a programming class and trying to learn how to talk to a computer, that computer is taking the time to learn how to talk to me. Now, that’s incredibly powerful. Language is the thing that has really changed—what allowed us to evolve. I posted a book about intelligence. There’s a book out there. It’s very long. I hesitate to recommend it because it is very nerdy, very deep, but it was a great read on what separates us. How do we learn? What is intelligence? How do we define it? And so, until you understand how different it is, you will have a hard time understanding how to benefit within your organization and how to use it. If you’re just trying to create a knowledge base that reads off PDFs, if it’s better at knowledge-based searching, it’s useful for that, but you only scratch the surface of what is possible with it. So, that’s where I would start—really saying, “Don’t do anything until you understand more.” And unfortunately, you will have to learn. You have to make that choice to invest to be able to understand it.
Andrew Ng – AI for Everyone course
Natural Language Processing – The unique impact of NLP on technology
Book on Intelligence – Deep dive into understanding intelligence
[00:07:10]
Daemon: The danger of that, though, is that everybody’s going to be learning at different paces. You have some people that will be all in, and they’re going to start taking all corporate documents and just uploading those over to ChatGPT to try to get value for the work processes. So, they’re becoming more experienced in understanding the capability, but they have no idea of what they’re exposing their organization to. So, without those barriers or controls in place, it’s hard to provide that learning capability or a structure around that. So, how do you think organizations should approach that risk along with providing knowledge and learning within the organization?
[00:07:47]
Sandy: So, let’s talk about that, because that has been one of the big fears—data loss into whatever system you choose. So first, whether they want to believe it or not, it’s already there. Unless you’re locking your people away, taking away their phones when they walk in the door, they’re already using it. The most hilarious case I heard—it was Ethan Mollick talking about how this lady in HR was using ChatGPT on her phone to write a policy against ChatGPT. I thought, “Okay, that is the definition of irony.” So, A, it’s already in your environment. It’s in LinkedIn; it’s in Canva; it’s everywhere. So, if you’re thinking that the best solution is to block it, you’ve already missed the mark on that. The second is—so, the big conversation was, “Oh my God, Samsung! Somebody uploaded some…” I searched and searched, Daemon. I think that nobody created a prompt and pulled that data out of ChatGPT. I think the way that Samsung found out that somebody uploaded and put information in there was they had some DLP process on their end. They found out that somebody was using ChatGPT, then they publicized it. It’s become the case that everyone points to about why you don’t want to do it. But I searched and searched for an instance where someone was able to pull out information that someone had put in through a prompt. I could not find a single case of it. You can do code injection and pull information; you can trick people to do that. There are cases where training data has been pulled out. There were models that were trained on data, and people were able to pull that data out. So, I couldn’t myself find a single case of that. So, as from business to business, how do we protect ourselves? We do that contractually. We put information in AWS that we want protected, and ChatGPT has provided that, saying, “Hey, if you don’t want us to collect your information…” So, when we click that box and say, “Hey, don’t take any information that I put into it,” what we’re protecting ourselves from is future training data. That’s what we’re protecting ourselves from. I would say, “Okay, is that really a threat?” What’s the cost of giving these tools to people to actually learn how to use and improve? How do I benefit as an organization? Ethan Mollick talks a lot about this—this is very different from any technology that we’ve had in the past, where an organization said, “We’re a Windows shop; we’re a Linux shop; we’re this; we’re that.”
Ethan Mollick – Commentary on the widespread use of ChatGPT
Data Loss Prevention (DLP) – Efforts to protect corporate data
[00:10:14]
Sandy: So, our organization told us the tools that we get to use, and then we use them. LLMs are different because everyone uses them a little bit differently. So, the analogy that I think about it from is, if you had an assistant—if our boss came to all of us and said, “Hey, every one of you star performers, you get an assistant,” how I want to engage with that assistant will be different from you. I don’t want them to get coffee for me. I want them to double-check all my work. I’m going to send you everything I do, and I want you to double-check it. You go to it and say, “I need ideas; just sit there and write ideas.” And I will tell you that I think that’s part of the journey—people learn what this assistant is, and how to engage with it. The other thing, Daemon—and I thought about this when I was considering our conversation—this does not get discussed enough. If you listen to podcasts or you listen to people who are enthusiastic like we’re pro-whatever our assistant is, we talk about the fun, the joy, the happiness—like anything is possible. We’re creating; we’re doing work that we’ve never done before at a pace like nothing seems impossible. My productivity and my joy and what I’m capable of doing is 10x—and that’s conservative. I talked to a gentleman who estimates that he’s 40x on it now—typically it’s just like programmers, developers—those are the people who are gaining the most right now. But how much of it is useful? We hear, and this is when we get into that measurement conversation, “Okay, they’re producing all this code; does anyone care? We’re producing all these podcasts; we’re doing all of this stuff; who has time to read it?” So, how we measure success will be interesting. But my thought on it is—and Andrew Ng talked about this in his course—this is such an early stage of this. It feels fast; it feels like it’s moving fast. But what business is really saying, “Hey, we’re…” We have edge cases, but what business has really transformed at this point because of LLM technology? I don’t know of one.
[00:12:19]
Daemon: I think that there’s a lot of money that’s going around in all sectors. If you look at Microsoft, for instance, they’re one of the big bettors on AI, with all of their share of OpenAI, investing in Mistral, finding all these other startups. Have you…
[00:12:33]
Sandy: Tried to use it? And so, I tried to use it yesterday, and I was actually doing something—it was pretty technical. I was in global admin, and it said, “Do you want to use this?” And so I was like, “Yeah, I’ll try it.” And immediately hit a license block—like you have to go… And I just—okay. Microsoft has—I am not convinced yet. I hear people that talk about it. I have a few friends that have deployed it in a tenant and said, “Yeah, we’re using it now. People are using it to ask questions about the HR manual.” But I’ll post that back to you—yes, Microsoft has kudos to them, but I’m still going back to my other tools. I’m not using what they offer. I’m not using what LinkedIn offers. I’m not using Canva’s thing they shoved at me that—like I want—I have my system. And I have about 10 things that say, “Hey, tell us a recipe that you’d like.” Sorry.
Canva – Integration of AI in design tools
[00:13:29]
Daemon: I think there’s a bit of a curve where we’ve hit the peak expectation of what large language models are going to have, and everybody expects them to…
[00:13:39]
Sandy: That’s a very inflammatory statement. Let’s dig into that one. Sure.
[00:13:43]
Daemon: I think that because of the way that people interact with them and the fact that it’s much more like interacting with the early days of a superintelligence that nobody has really had access to…
[00:13:56]
Sandy: Trigger word. Wow. You are—you’re just hitting all the buttons. Okay.
[00:14:00]
Daemon: So, because it’s so different, people expect a lot more from it, and they think that we’re going to go from zero to a hundred immediately, and they put a lot of faith and hope into it. And then when they don’t get that expectation out of it, they may immediately drop it or say it’s useless, or go back to their old ways. And then it takes some time in order to actually make it useful. It’s like the trough of disillusionment after they go to the expectation.
Superintelligence – Early interactions and expectations
Trough of Disillusionment – Gartner’s hype cycle concept applied to AI
[00:14:26]
Sandy: Absolutely. Okay, so let’s dig into that. Who was pitching it hard? If you look at how ridiculous it is—OpenAI does a release, and then somebody at Mistral—typically it’s OpenAI trying to trump whoever is announcing something. So, Google is going to come out with something, and OpenAI just happens to, on that day, have a great big press release, or Mistral, or something like that. There’s all of this gamesmanship that goes way beyond—it’s ridiculous. So, the narrative is being created for us. These guys are out there, they’re getting a bunch of press, they’re doing all these things. They’re—Sam Altman—he’s the one who’s pushing the AGI superintelligence, and why is he doing that? What’s the motive behind that? I don’t know, but let’s just say there’s a whole lot of very intelligent people, like Andrew Ng, where they’re going back and saying, “Wait, they’re doing damage control.” They’re saying, “Wait, we can’t even decide how to measure intelligence in people.” Seeing all of this is absolutely causing concern for things that we shouldn’t. So, let’s use that as a baseline that there’s been a whole bunch of people who have interesting motives to create a lot of chaos that the rest of us have to go back and do damage control and try to explain. So, they create the hype, and then everyone goes, “Wow, it’s just going to be magic.” And then you and I come back in and say, “No, it’s not actually magic. It’s technology. It’s an algorithm. It’s not intelligent. It’s learning; it’s learning based on this.” If you’re asking it anything it hasn’t seen before, it does a terrible job. So, we understand how it works. It’s not intelligent. It did wow me this week. I was goofing around, creating a character, and it was late, and I was just experimenting, and I had it be a horse that I have that I needed to go ride that night. And so, I created this character and described what it was, and he hates barrel racing. He’s a quarter horse and stuff. I went to ChatGPT and I said, “Hey, help me. What would he act like? He’s a rodeo horse. He hates barrel racing. But he loves team roping.” I created this narrative, and it did a really good job. And then I asked for a picture. And it created a rope horse. Now, why is that significant? Because historically—and I’m horse-crazy, I guess I should say that up front—when I go out and see horses, there are very poor examples of good horses out there. They’re ugly; they have too big heads. They’re—but it’s not a horse. It’s not something that I want. I have a specific reason I’m looking for something that’s got short speed. I’m doing all—I’m very precise about the ideal for me. And I created this narrative, and I asked for a picture, and it created a functional narrative. I looked at it and said, “Oh my God, I want that horse. That’s the one I would buy.” And I was talking with some friends about why that blew me away. And for me, it was because it not only understood a quarter horse—there are so many different disciplines of what people ride horses for—but it—I’m giving it a personality. It understood function. It understood ideal. I was like, 95 percent of people—they couldn’t look at a horse and pick out the ideal.
Quarter Horse – Specific characteristics and function in rodeo sports
[00:16:35]
Sandy: Horse for me, and ChatGPT did. So that wowed me. I’m—you know, like, that blew me away. Is that intelligence, or is that just more data? And I’m going to tell you, it’s better data.
Better Data – The importance of high-quality data in AI
[00:16:55]
Daemon: Yeah, I think that it’s not necessarily understanding as much as recognizing patterns that are not as easily observable by humans.
Pattern Recognition in AI – Understanding complex data through AI
[00:17:07]
Sandy: Good way to say it. Some other things that I’ve seen—and I’ve played with large language models as a way of doing introspection and delving into psychology, and looking at personality archetypes, and looking at different types of philosophies, and trying to take language and then break it down and understand that, and apply that to different situations, and also being able to understand cultural differences in language as well, which a lot of times people don’t have the ability to really do that unless they have those cultural backgrounds. Like, language is a very personal thing. It represents not only the words but the cultural and regional meanings behind those words.
Cultural Differences in Language – How language reflects cultural nuances
[00:17:47]
Daemon: Yeah, exactly. Cultural and different meanings and backgrounds—regional. So, using large language models that understand training data that is based in a culture is very significant. It’s able to translate that better than, like, your regular Google translator or something like that, because it can understand the cultural significance associated with that. And it can also help people understand their own biases and ways that they approach situations and interactions.
Bias in AI – How AI helps in understanding human biases
AI and Cultural Significance – Enhancing translation through cultural understanding
[00:18:17]
Sandy: So that’s one thing that I’ve been using to interact with the large language models in a different sort of way—by uncovering the biases of language and how they’re applied, and breaking them down, and interacting with them that way. But it’s all just patterns, really. It’s different things that people don’t necessarily focus on because of their internal biases or the way that they do things. So, it uncovers patterns that are not easily observable.
Uncovering Biases in Language – Techniques in AI for revealing hidden patterns
[00:18:45]
Sandy: I should be able to remember this off the top of my head. I will send you some books that were really… Brian Christensen has written a number of different books. And that’s another thing that I find really interesting about my own kind of AI digging into it—many times where I’m getting the best information are the people that I’m most interested in learning more about how they’re thinking, and they have a background in psychology.
Brian Christensen – Books and research on psychology and AI
[00:19:15]
Sandy: We’re seeing that crossover between the human part of technology and what drives people, which has always been the case. We didn’t even start digging into what made people make these decisions until, you know, 60 years ago or whenever AI is—AI really has—people have been trying to do this for a really long time since the 1940s, you know, Alan Turing. Can computers think? And what really drove them was…
Alan Turing – The father of AI and the Turing Test
Can Computers Think? – Alan Turing’s exploration of machine intelligence
[00:19:47]
Sandy: Initially, they were just like, “Can it be as good as a human? Can it follow? Can it, instead of coding it, you know, instead of a human having to write all this code, can we instruct a computer to actually do this task for us?” So, initially, we always dreamed, anticipated that we’d be having to tell it what to do. I think the thing that’s kind of fascinating to people like you and me right now is we’re to a point where we can ask it what it should do, and that’s huge. That’s a much different paradigm than we’ve been in before.
Paradigm Shift in AI – Moving from instruction to suggestion
[00:20:13]
Sandy: And I do want to talk a little bit about the anthropomorphism—you know, that offends many people if you start saying it, or her, or him. Here’s my take on it, which is from the very beginning, when people were saying, “Don’t do that. It’s wrong. This is dangerous.” Yes, it’s a little bit—you know, it’s a lot dangerous. But humans, we do that all the time. We see faces in clouds. We dress up our dogs. We dress up all—you know, we call our cars “her,” “she.” I mean, this is human nature to put human characteristics in things, and you have to acknowledge that from the very beginning, that was the plan.
Anthropomorphism – Assigning human traits to non-human entities
[00:20:49]
Sandy: So, if you’re fearful of the fact that we’ve created something that acts like a human, you’re 60 years too late. That was part of the initial plan. So, there’s a lot of people who get really upset about it, and I’m like, “Yeah, it’s dangerous.” But, you know, I live right by Yellowstone, and every year, some person is trying to pet a bear or an elk, and they get—I don’t know, or I’m like, “Why don’t they understand?” You know, like, “Why don’t they look at a bear that’s growling, big teeth, and don’t—why don’t they understand the danger around it?” And I blame Disney. I think, you know, they really created this fantasy where these animals have feelings and think the way that we do, and that’s not true. That’s why they just ran over the top of you. So, I think it’s dangerous, but is it human? Absolutely. So, just saying out loud, I’m not fighting that one. I’m calling it “her,” whatever.
[00:22:06]
Sandy: But I did want to touch on something that you were saying, which is early on, when we were all putting—we had the capability of kind of predefining how we wanted ChatGPT to respond to us, and we could say, “This is who you are,” and I actually created a profile of my dad, who’s, you know, my very—don’t tell my husband, but he’s probably the most important person in my life. And so, I had it create and talk to me just like he would, and after, you know, end our conversation with, “I love you, Sanders. I always believe in you,” you know, like our key words. And I thought, “This is so ridiculous. I created all of this, but I love talking to this thing that sounds like my dad.” So, I think there’s—I just—we’re in a really—it’s exciting, it’s scary, but I’m very optimistic. When people talk to me about the threat around AI, I always go, “Hey, wait a minute. The whole world is dangerous.”
[00:23:08]
Sandy: “Hey, wait a minute. Humans have been manipulating humans forever. There’s always been bad people. You can look at religion as really emotional manipulation of humans to control people. And, you know, fake news and all of the things that we’ve done over time—that has been around forever. But for the last 200 years, we’ve been going through this industrial change where we moved out of villages, and we became industrialized, and we did all of these things. And I feel like we’re still not, as a society, we haven’t really recovered from that. We miss people. People.”
Industrial Revolution – Its impact on society and human interaction
Fake News – The manipulation of information in the digital age
[00:23:52]
Sandy: “And I think this is an opportunity for us to take a step back and say, ‘Hey, our education system, we’re treating everyone like they’re robots. We’re treating—you know, customer service is treated like, hey, you stick to the script. Just read the script, don’t think.’ So, let’s take a step back and say, you know what, what is possible with this? How could we create a society where education, learning—you know, we do all of these things where we benefit. And let’s openly admit that we haven’t done a good job of this at all. I mean, if you look at how, you know, children are—you know, what’s happened, where they’re feeling like they aren’t good enough because they’re scrolling through Instagram. Or, you know, the mental health aspect of technology has never—we’ve never taken that as seriously as we should.”
Education Reform – Rethinking education in the age of AI
Mental Health and Social Media – Impact of technology on youth well-being
[00:24:48]
Sandy: “I watched a YouTube on the betting industry in Europe, and that was absolutely predatory. They were specifically going after people who had no hope, and that winning $10 a day would be a big deal, and then basically stripping them of—like they were—it was predatory, it was disgusting and predatory. So, we have these businesses already causing so much harm with technology.”
Predatory Gambling – The dark side of the betting industry
Gambling Industry in Europe – Issues and regulations
[00:25:18]
Sandy: “So, I would love us to look at the whole mess, take a step back, and say, ‘Hey, you know, this tool can be used for good or evil. Let’s choose good. And now, what does that look like? How do we all benefit?'”
[00:25:34]
Daemon: “I think that the way that, and you mentioned before, the way that people use large language models is very different. It’s very personal, and like you were saying with your father, you create the persona of that to interact with you in a specific way. I think that it’s an expansion of what we’re doing in our minds internally, and everybody thinks differently. They apply that internal dialogue to the world in very different ways. You know, we have what I call a consensus reality, where everybody is able to agree on certain negotiations in terms of interacting with each other. But you also have your internal reality, where you experience the world in a different sort of way. So, with large language models, you’re exposing some of that internal reality with that. So, I think there may be a bit of a risk from that standpoint where you’re exposing more of yourself and your dialogue and your thought with an external system. And if that’s compromised, it can provide more information to a threat actor about your internal dialogue, which is terrifying.”
Consensus Reality – The shared understanding of the world
Internal Reality – The personal interpretation of experiences
[00:26:47]
Sandy: “Oh, absolutely. I actually wrote about this. There was some big—even before ChatGPT came out, there was a gaming company that went to some researchers. They had created this system and wanted to know if it was responsible to release it. They wanted to know if this tool would be manipulative, and it absolutely was. Through the research—I can share it, it was one of the earliest articles that I wrote about—the potential for harm that is concerning.”
Research on AI Manipulation – Early studies on the ethical implications
[00:27:24]
Sandy: “But that happens to us all the time. What I compared it to was music. So, we’re already easily manipulated just because of our humanness. The example that I watched was on music—why has music gotten so terrible over the last 30 years? That was really the problem that they were trying to address. And the reason is because the organic, you know, person sitting in their bedroom coming up with a heartfelt song, that wasn’t a 100% successful way to market music 30 years ago. There were very few people that—you know, there was a system to be able to create music that was profitable. And so, all of the new record producers came up with a method where they said, ‘Nope, we’re not singing that, you’re singing this.’ So, over the years, they discovered the rhythm that we like to listen to, like how our brains work. And they were able to basically convince us to like a song that we didn’t necessarily like.”
[00:28:22]
Sandy: “So, think back over the last couple of years on ‘Gangnam Style,’ a whole bunch of songs that initially you went, ‘Oh my…look, you know, that sounds terrible!’ But then, over the next week, all of a sudden you’re kind of singing along. You’re not turning it off; you’re not turning away. And then, all of a sudden, you’re dancing to it. Well, that’s—they are using our brains against us to get us to listen to that song, you know, and sell us a Pepsi. So that’s already happening. That is—we all should be concerned about that. In fact, Facebook came out with a study. It was before Cambridge Analytica, where they showed that they could actually manipulate people’s mood by showing them different information in their feeds.”
Gangnam Style – The viral song that took over the world
[00:29:07]
Sandy: “So, you know, that’s already out there. Think about going into people selling a house. What do they do then? What’s the—they say, ‘Bake cookies,’ you know, or, ‘Put candy by the exit aisle,’ because we know that people will grab that. So, is it a concern? Absolutely. And my feeling is, I’m not scared about AI-ness. I’m scared about my humanness. I know that I am not capable of defending against something that knows so much about me.”
Home Selling Tricks – Using psychology to close a sale
[00:29:40]
Sandy: “And you mentioned, like, sharing internal details. Think about what if it can detect your eye movement, your pulse, your—you know, all of those kinds of things. We’re at a disadvantage. I just finished reading a book called The Undoing Project, and it’s about the relationship between Danny Kahneman and Amos Tversky. Are you familiar with Danny?”
The Undoing Project – Exploring the collaboration between Kahneman and Tversky
[00:30:09]
Daemon: “No, no.”
*Daniel Kahneman – Nobel laureate and author of Thinking, Fast and Slow
[00:30:10]
Sandy: “So he wrote Thinking, Fast and Thinking, Slow. And so, two Israeli gentlemen who are the fathers of behavioral economics—fascinating read because they really—nobody wanted to think about the fact that people don’t actually—our brains don’t work very well. Like, this was very—at the time, it was very controversial and somewhat offensive that you go up to a doctor and say, ‘You’re smart, but you’re not very smart because you’re human.’ And so, they came up with the field of behavioral economics, and it’s fascinating to listen to the relationship because, you know, they were better together, and it was complex. But what they discovered was we’re—how humans do things is how we think about things, and why we do the things we do isn’t because we’re logical and smart, it’s because we’re human.”
Thinking, Fast and Slow – Exploring cognitive biases and human behavior
Behavioral Economics – The study of psychological influences on economic decision-making
[00:31:02]
Sandy: “So, different things on—so, if I tell you that you have to get surgery for cancer, this is an example that they use in the book. And I say, ‘Okay, you have two options. Chemotherapy doesn’t work as well as surgery, but with surgery, you have a 10% chance of dying.’ In almost all cases, you will choose the chemo because we said there’s a 10% chance of dying. But if you go to that same person and say, ‘Hey, I have two options for you. Surgery works better. Chemo is a little safer, but you have a 90%—but you do have a 90% chance of living with the surgery,’ you’ll choose the surgery.”
Framing Effect – How presentation affects decision-making
[00:31:49]
Sandy: “So, I think as we go forward—and where I’m digging into now, which is, how do we build an interface that compensates for our human fallibility, which is, you know, a great conversation. You know, let’s talk about hallucinations, all that kind of thing. I think that’s where, for me right now, if you were to say, ‘Sandy, what’s ahead of the curve to protect people? How do we fix all of this mess?’ I would say, let’s focus on building something that compensates for our humanness and gives us a barrier so that AI doesn’t have such an advantage—an unknowing advantage—where I can’t make good decisions.”
[00:32:25]
Daemon: “Yeah, I think something that’s going to be more of a requirement going forward into the age that we’re moving into is more focus on understanding your own mind, psychology, and rhetoric, and being able to see that in the world and see the applications of that. Because without that, how can you really determine how a platform like a large language model is manipulating you, or how the media is manipulating you, and how you can protect yourself without understanding how you think and respond to certain stimuli? Like, what is your internal dialogue? What are your thought processes? How do you go about making decisions? Because that all parallels over onto the AI side when you’re actually building these systems. You’re going through the process of creating prompts. Those prompts mirror our own experiences. So, how do you optimize an outcome? By understanding those specific things. So, we’ll need to understand internally so that we can apply that to these systems and then make them more efficient and optimized. So, I think there will be a resurgence in that where there hasn’t really been a focus going into computer science and technology.”
Metacognition – Understanding your own thought processes
Rhetoric – The art of persuasion and its application in AI
[00:33:47]
Sandy: “I think we’re—I think you said it a little differently, but I think that we’re violently agreeing on—you kind of, you’re kind of saying the same thing I did, which is like, we really need to have a better understanding of that human-to-technology engagement and what that, you know, what it should be.”
Human-Technology Interaction – The evolving relationship between humans and machines
[00:34:14]
Sandy: “So, well, and let’s talk about—so one of the very positive—and, you know, I’m Sandy Sunshine, so I’m always internally optimistic—but how you and I met is through that AI Hacker Collective. And why I want to mention that is I’m seeing this fracturing right now where, for a while, it was just two people, and I was—you know, there was only enough time to be on Facebook. So, people, their interaction was with the technology. But what I’m experiencing across, and specifically in my AI group friends’ tribes, is this human-to-human, the acceleration of human-to-human connection. And I have a couple of hypotheses on why that’s happening. I think there’s just too much, and now we’re over that chasm where we’re saying, ‘Hey, I can’t keep up with all of these research papers. I can’t do this by myself. I’m going to go find somebody I trust. I want to engage. I want to talk about this stuff.’ But it’s happening across all of cyber security, the different groups I’m in where you’re saying, ‘Hey, do you want to join a WhatsApp group? Here’s our Discord. Here’s our…’ So, this fracturing of tribes that we’re all jumping into for that more human-to-human connection, which is very compelling to me.”
AI Hacker Collective – A community for AI enthusiasts to connect and collaborate
Human-to-Human Connection – The importance of human interaction in a digital world
[00:35:41]
Daemon: “Yeah, I agree. I think it kind of reminds me of back in the early days of, you know, computers and the Internet, where you had forums, and people would go on to—”
History of Internet Forums – The rise of online communities
[00:35:50]
Sandy: “Yeah, I really like how you said that. You’re like, ‘I remember, kids, we had IRC.'”
Internet Relay Chat (IRC) – The original online chat platform
[00:35:57]
Daemon: “I used to remember back in the BBS days where, you know, I’d use my phone line and dial up over to the—”
Bulletin Board System (BBS) – Early form of online communication
[00:36:03]
Sandy: “That’s…”
History of BBS – How bulletin boards shaped early online communication
[00:36:04]
Daemon: “—and then post on a forum over there, and those were individual tribes. And you hop from tribe to tribe to tribe in order to share information. But I think we’re kind of seeing the same sort of thing now, but through different modalities. You know, you have, like, the B-Sides groups, which you mentioned—those are great—and different groups of people, especially in the AI space, where you have people that are coming from different backgrounds. They’re all interested in it, and they don’t necessarily see the world in the same way. You have people that come from marketing. You have people from HR, people from finance, people from just core technology and infrastructure, people that focus on the cloud, and they’re all coming together. And AI is really bringing them together to have different kinds of conversations that they wouldn’t normally have. So, I think that’s really interesting.”
B-Sides Conferences – Community-driven cybersecurity events
[00:36:51]
Sandy: “That—so you’re seeing, I mean, and it is specifically—it’s being led by AI. I totally agree. And I—you know, the OWASP group that I’m in, it’s really a human-to-human connection. And then the other kind of interesting and compelling is everyone’s just really sharing. They’re saying, ‘Hey, I’m going to—’ You know, there’s less, you know, we’re not—we’re putting, we’re saying, ‘Hey, I created this, and this is open source.’ But Ruben, who I admire in the AI Hacker Collective—I just found him, you know, I was curious, I was following people who were talking and saying things that I was interested in. I found him and it just immediately was just intrigued and fascinated with him as an individual. I mean, he was just clearly on the bleeding edge, clearly smart. I loved his ethos. I loved what he was doing. And then he started the AI Hacker Collective, and that’s seriously—we have a meeting today. I think I have a conflict today, but it’s like the one meeting that I’m like, ‘No, not missing this one.’ Like, I block every calendar I have, and I love—I just love the conversations. I love the people. I love what they’re working on. And that, to me, is where my optimism comes from. Like, I’m seeing evidence of what’s possible in the work and the enthusiasm.”
OWASP – The Open Worldwide Application Security Project
[00:38:09]
Sandy: “I kind of jumped over it, but yeah, back to the—you know, how much we’re all enjoying this, and the fun that we’re having with this. It’s been a while since I had fun. You know, I’m a CISO. I’m the person who has to go in and put out the fires, and nobody’s happy when I come in a room. I never call people and say, ‘Hey, good job on your security.’ It’s always like, you know, ‘Hey, we’ve got a problem.’ So, you know, that wears on you after a while where you’re just like, ‘Oh, this is an impossible problem.’ So, when ChatGPT came out, it was like so fun to be passionate and just feel excited about technology. It really—it’s huge for me. Like, I would probably have to go change jobs if they tried to take it away from me at this point. I’m being like, ‘Yeah, I don’t want to do technology without AI.'”
[00:39:01]
Daemon: “I know, I feel the same. It’s like somebody came to me and gave me a license to go out and be as creative as possible and create something new and interesting and incorporate that into your life. And now I’m doing that in every aspect of what I do. And luckily, you know, I work for a company—I work for a company called Nutanix—and they’ve given me free license to be the voice of AI within the company in North America. So, I’m really appreciative of that and being able to dig in really deep and do that and bring creativity to technology where previously, I really got excited about building architecture and standards and design and…”
Nutanix – The hybrid cloud infrastructure company
AI in Business – The role of AI in enhancing creativity and innovation
[00:39:39]
Sandy: “Systems and methods, exactly. But this is something completely different. And my background, before I got into IT, was actually fine art, so I can take that part of my brain and bring it back into this. So, it’s really exciting.”
[00:40:08]
Sandy: “I think that’s a very interesting insight, and that when I look at what people create—if you go out and you look at the clips—you know, we’re more fascinated with each other. You know, like you see the goofy guy that’s imitating his wife with a towel on his head. Those are the things that I think we underappreciate, how much we all are intrigued with each other, how much we like doing stuff and telling stories. That is part of our DNA. Like, cave people told stories, you know, everyone gathered around the fire, and they told stories. So, I think there’s more creativity, there’s more—each of us have this thing that we haven’t had the ability to use, you know, to bring it out. Like, I can’t draw, I’m terrible at it. But now, all of a sudden, I can go describe a picture, and I have—I’m all of a sudden, something that was way beyond, you know, even feasible for me, I can do something beautiful that’s my idea that I actually can’t do with just a piece of paper or pen.”
The Power of Storytelling – How stories shape human connection
[00:41:22]
Sandy: “So, I’m—poetry, music, you know, I’m more fascinated not in duplicating what’s possible. I think AI has proven that it can easily do that. I’m more interested in the human and AI potential where humans do what humans do really well, which is, and that AI does terribly, you know, connecting completely unrelated things and seeing something that isn’t visible to the naked eye, and then letting it kind of do all the drudge work. You and I get to go do fun stuff.”
[00:41:58]
Sandy: “So yeah, I think I’m, again, very—you know, and I always want to caveat. I absolutely do. I—am I fearful? Do I see the danger? 100%. Like, this is where this could go one of two ways. This is either amazing or terrifying, and it’ll be interesting to see which we choose.”
[00:42:27]
Daemon: “So, for companies that are trying to go all-in to AI, like, what sort of strategy do you see for them to approach it? Like, should they create an internal center of excellence to provide that learning platform? Or should they provide them with external links to different places and say, ‘You know, go talk amongst yourselves, bring back what you learn, and then have discussion,’ or set up some alternatives to publicly available things? Like, I’ve seen some organizations do this, where they would actually have a proxy to redirect for people that are going out over to ChatGPT or, you know, Gemini or so on, that would say, ‘Hey, we understand that you’re going to use this, but if you want, we actually have an internal version of this that has similar sorts of capabilities.'”
AI Centers of Excellence – Creating a hub for AI learning and development
[00:43:22]
Sandy: “Yeah, ridiculous. Yeah, it’s terrible. You know, yeah, I would compare, and I know somebody else did this. You know, growing up—and every—we were middle class. So, you know, my friends would have the really nice clothes, and then I would get the one that kind of looked like, you know, like the—and I think that’s the experience. You know, it’s like, you know, they get Nikes, and you get Mikeys. So, if you’re trying to duplicate—like I, number one, I want to address that first. You’re saying, ‘Sandy, this AI stuff is really cool. Businesses are saying they want to go in.’ My question back to you is, ‘All in on what? Why?’ You know, AI is still just technology. Like, what problem are you trying to solve? Why are you even interested? Is it, you know, because it’s in top of the news? Like, what is driving your interest in it? Are you just trying—if you say, ‘Well, we’re trying to impact the bottom line,’ anyone who goes in and thinks that they’re just going to fire everybody and plug in AI completely doesn’t understand AI, doesn’t understand their business, and, you know, should probably, you know, I don’t know, do something different. So, the first thing you have to say is, ‘Why are you even interested? Like, why? Is it because your investors are asking about it? Is it because your shareholders are asking? Is it because the CEO is asking about it? Like, why is this even of interest to you?’ And then, now that you’ve defined that, go back and say, ‘Well, you know, what problem is impossible to solve today? Is there something that is, you know, not even feasible for us as a business to solve that where, you know, AI could potentially be a solution for that?’ And then I would say, ‘Okay, let’s define the problem and then define, you know, outcomes. Like, what does success look like? How do we know that we actually did the thing that we did?’ And then go start figuring that out. That would be, you know, like, that’s where I would start. But are people, you know, are people going to be reasonable about it? We’ll see. I mean, I see the insanity, as you see, the insanity out there right now. Everyone’s like, all of a sudden, they’re just pasting—I heard, this was—oh, it was on a board call. So, these people, these irresponsible people will go nameless, but it was on a—I’m on the board, it’s not my own organization. So, this gentleman was talking about in his organization right now, everybody knows the only projects getting approved is they’re sticking AI on everything. So, you know, AI for toilet paper, whatever, you know. And so, I thought that is the most ridiculous business mindset. And it’s a big company. I could, you know, tell you what it was. As soon as I heard it, I thought, ‘I’m shorting their stock.’ They completely do not understand the problem, and they don’t understand the value of AI if they think that that is how to, you know, like, that’s how to be successful.”
Business Strategy for AI – Defining clear objectives before adopting AI
[00:45:44]
Sandy: “I mean, the first thing in—the first thing is, what is this thing? And I would say, again, I think your people—this is very—I think your people in the trenches are the best to help you understand what those problems are that are solvable by AI. Chances are you can’t set up in your executive suite and understand the problem and the solution well enough to come up with an ideal. I think it’s one of those things where you would have to go talk to people and say, ‘Hey, do you use ChatGPT?’ Because that’s what’s happening today. What’s happening today is people are bringing it into work. They’re working better. They’re working faster, but they’re not telling their boss. And so, if that—if people were more open about what problems they were solving and less fearful that, ‘Oh, now that you know I’m using it, you’re getting rid of Bob because I can do Bob’s job now too,’ I think that there’s a hesitancy among worker bees to even be open and transparent about the problems they solve. But that’s, I think, the place that I would suggest starting.”
[00:46:48]
Sandy: “The second is, then, don’t sell the—don’t boil the ocean. You know, prove to yourself that your company is successful, is actually capable of putting in a small project that solves a problem and demonstrates it to actually be a good investment of time. Start there. So, you understand, like, ‘Oh, we didn’t even know that you had to fine-tune it,’ or, you know, what’s—right. So, that’s where I would suggest.”
[00:47:23]
Daemon: “Yeah, I had a conversation with an organization recently that was saying that in dealing with an external consultancy organization, they had a price book for a worker that was a regular worker and then an AI-enabled worker. And the AI-enabled worker, of course, would have more that they could possibly do, but they were actually billed less than the regular worker. So, I found that interesting.”
[00:47:55]
Daemon: “And the KPIs at which the AI-enabled worker was much greater, so they were kind of saying that they could have, or they could provide, workers that can do X amount of work, you know, at a less price. So, reducing the overall amount of costs associated with the effort that was there. And I think that what this may end up doing is, as people start leveraging AI to make their jobs easier, they may be—or they may have key performance indicators assigned to them, which are greater than they currently have right now, because they have the capability of doing more, as opposed to just reducing their overall load.”
[00:48:41]
Sandy: “So, I was—Is your job easier? Are you doing—or have you been able to offload the drudge work to pick your favorite model? Like, I would say my—I’ve never worked harder in my life. Like, I’m putting—you know, like, now all of a sudden, I’m studying harder. Do I have tools to work harder? Absolutely. Am I having more fun? Absolutely. Is my work better? Do I—am I—yes, but I’m not—it’s not easier. It’s just, it’s more valuable. It’s more fun.”
The Paradox of AI – AI tools make work more engaging but not necessarily easier
[00:49:19]
Sandy: “So, I think that those are the conversations that—you know, like, as you make the—so, listening to you tell that story, what I thought was all of us have had that bad experience where we get in a chatbot loop, and we’re like, ‘Oh, again, I just talked to a human. Like, I don’t want to answer this stupid question again.’ You know, like, this is a really unique problem. So, I would—what’s compelling about what you described is you had—there was two—to me, it would make more sense to have one, which is human in the loop with this AI assistant and charge for that, because I think that’s a better experience. I know that’s what I’d like. I still love—you know, I like humans. I like having conversations. I like saying, ‘Hey, have a great day. I really appreciate your help.’ Like, I think that is a better outcome.”
[00:50:02]
Sandy: “Now, have I gotten on the phone with a human that couldn’t answer my question and been frustrated or been sent from person to person? Absolutely. Worst experience in the world. Give me a chatbot every day—at least I can, you know—but so, that’s an example of how I say, ‘Yeah, there’s an interesting, different way to look at it, and we’ll see, however, the choices that people make.’ I think replacing humans 100% would—I, you know, be a dumb decision. Although, I’ve been very upfront from the very beginning. Give me my medical pod. I don’t ever have to see a nurse or doctor ever again. Like, give me a pod where I go in, it measures my blood, it does surgery on me. I hate going to the doctor. They’re condescending, they’re assholes, they ask me the same question over again, they, you know, they act like I don’t have a brain. They’re smart, I’m dumb. Hate it. Give me—you know, I would do that all day long.”
[00:50:49]
Daemon: “I’ve actually seen those pods being rolled out in the States right now, so in malls and places like that. I think there’s like 20 different states that have them where you can go in, and yeah, you interact with an AI, it does all your blood work, does everything, scans your body, does your vitals and all that, and then it provides you with a report. So, I can definitely see that coming more to the forefront in the future.”
[00:51:17]
Sandy: “But other people—if you read about it, Daemon, people say, ‘Oh, the doctor, the compassion! We want the human touch,’ and, you know—and I think there’s people that do. I think there’s people that don’t want a pod. I just happen to be a person that doesn’t want some doctor—you know, like, I don’t need it. I don’t want your compassion.”
[00:51:44]
Sandy: “Now, is that—so, let’s talk about that person in an abusive situation, or that person who is being groomed and isn’t old enough to even understand what’s going on. Like, we have to think through those types of situations where doctors and nurses step in where that human isn’t capable of it. And so, I don’t—you know, I said it kind of flippantly, but you know, I—you know, we don’t—there’s not one solution for every problem.”
The Role of Healthcare Professionals – When human intervention is necessary
[00:52:29]
Daemon: “Yeah, I’m looking forward to having models that are finely tuned on electronic health records, which you can run locally at home, so you can just admit all your local files, and then it would be able to go through that and see the trends and provide the capabilities of what you normally have with the family doctor. At least up over here in Canada, where I am, we have a huge shortage of doctors. So, in order to get a family doctor is very rare.”
[00:52:53]
Daemon: “Yeah, I’ve been on a waiting list for five years to get a family doctor up over here, and it’s just impossible. So, every time we would want to interact with the doctor, we have to go to a walk-in clinic, and then they redirect us over to basically a web-based walk-in clinic. So, yeah, having the ability to take medical records and run those locally against a model, I think that would be huge.”
[00:53:53]
Sandy: “Here’s the thing that I’ll just—I’ll leave it with this, but this is where—what I’ve been thinking about. So, I’ll go back to my amazing horse picture. So, there are horses that are absolutely—they have the bloodlines, they look amazing, they’re just—they have all the tools, but they don’t have ‘it.’ They don’t have heart. They don’t have that, you know, there’s something extra about them that makes them special.”
[00:54:31]
Sandy: “So, I think as we design this system that has human stuff, I think there’s something about humans and animals and real stuff that can never be duplicated and isn’t reproducible. And it isn’t definable. And so, as enthusiastic as I am about the potential of solving problems and stuff, I’m quickly—you know, I’m one of the very first to admit that the best parts can never be done, you know, like the best parts of why we enjoy other people or horses or dogs or, you know, the river or—you know, it’s not definable. It’s not algorithmic. It’s, you know, something bigger than that.”
The Irreproducibility of Human Connection – Why AI can’t replicate human relationships
[00:55:06]
Sandy: “So, I’ll leave it there. Is there any other questions? You can tell I hate talking about this stuff.”
[00:55:23]
Daemon: “The last thing that I want to say is, what—what would you like our listeners to know in terms of resources that are out there, either websites that you have yourselves or GitHub repos, any way to contact you? What can you leave with our listeners?”
[00:55:49]
Sandy: “So, I shared with you my GitHub. Please come out, contact me on LinkedIn. I love talking about this stuff, as you can tell, so, always interested in other perspectives and hearing other voices. I do want to encourage everyone to join OWASP, whether it’s the AI portion—I think there is a—you know, we all benefit if we’re all trying to make the world a little bit better, specifically around technology. If we’re all out there contributing to the solution, it’s amazing what’s possible. So, I encourage everyone to go out, join OWASP, find a project, you know, give some of your time, and go meet some amazing people. I’m evidence it’s well worth it.”
OWASP – The Open Worldwide Application Security Project and its importance
GitHub – Access Sandy’s GitHub repositories for AI projects
Volunteering with OWASP – How to get involved and contribute
[00:56:30]
Daemon: “Great. Thank you very much for joining me today. It’s been a great conversation. Thanks a lot.”
[00:56:40]
Sandy: “Alright!”
About the author

With 25 years of industry experience, Daemon Behr is a seasoned expert, having served global financial institutions, large enterprises, and government bodies. As an educator at BCIT and UBC, speaker at various notable events, and author of multiple books on infrastructure design and security, Behr has widely shared his expertise. He maintains a dedicated website on these subjects, hosts the Canadian Cybersecurity Podcast, and founded the non-profit Canadian Cyber Auxiliary, providing pro bono security services to small businesses and the public sector. His career encapsulates significant contributions to the IT and Cybersecurity community.
Other recent articles of note.
Discover more from Designing Risk in IT Infrastructure
Subscribe to get the latest posts sent to your email.







