


Lucas Black
Lucas Black is a Principal Security Solutions Architect with 68 certifications and licenses, including CISSP, COSP, and CSOCP, to name a few. A distinguished IT professional, he began his journey in 1994. Currently working with value-added reseller CDW, Lucas continues to create secure, efficient IT ecosystems while strengthening his reputation as an industry expert in security.
Lucas is a returning podcast guest. See here for the details of CCP Episode 02.
On today's episode:
We’re diving deep into the world of ChatGPT and large language models (LLMs) and their impact on cybersecurity. We kick things off by discussing the adoption of ChatGPT in various industries. While there’s been a lot of talk about it, actual adoption in the enterprise seems to be a bit slow. Companies are still cautious, especially after some notable debacles in the news. Many are even considering blocking ChatGPT at the firewall level to prevent any potential security breaches.
Lucas points out the potential benefits of using ChatGPT in a security operations center (SOC). Imagine simplifying complex tasks and filtering out the noise with the help of AI. It can be a game-changer for those struggling to find skilled security professionals. We might just see junior IT folks stepping up and becoming full-fledged SOC analysts in no time.
We then delve into the dark side of AI hallucinations. Picture this: a lawyer citing non-existent precedents in a case, all thanks to ChatGPT. It’s a real legal nightmare. We discuss the importance of guardrails and putting measures in place to prevent these AI fabrications.
We also discuss the risks of leaking internal data and the need for robust controls and data loss prevention measures. All in all, this episode is an excellent primer on the world of ChatGPT and LLMs in cybersecurity. We cover adoption, security concerns, and the potential for these models to revolutionize the industry. Let’s dive right into the conversation.
Below is the transcript of the podcast and links to some other related content that was referenced.
- Lawyer ChatGPT hallucination case
- Microsoft Security Copilot
- Opt-out of ChatGPT data sharing
- Security Implications of ChatGPT
- Generative artificial intelligence (AI) security policy template
[00:00:00] Daemon: So we're back here on the Canadian Cybersecurity Podcast. I am lucky enough to have my co-host over here, Lucas Black, who you may have heard on a previous episode. Today we're going to delve into ChatGPT and LLMs and how they relate to cybersecurity. There's so much going on right now, and so many changes in the industry, that it kind of makes my head spin. Lucas, right off the top of my head here: what is your experience with, let's start with ChatGPT? What have you seen in terms of adoption of ChatGPT in industries, and what kind of security concerns are you seeing because of that?
[00:00:47] Lucas: So, as of right now, I've not seen anybody adopt it. I've seen a lot of people talking about it; it comes up in a lot of my conversations now. But a lot of those conversations revolve around companies not wanting what happened to Samsung to happen to them. I know a lot of companies are looking at blocking it at the firewall level, just not even allowing access out. And I know that some of the firewalls are actually picking it up as an application to watch for and monitor, or just flat out block and deny. So I think that's where I've seen most of the conversations. A lot of people are obviously trying to think of what they can or should use it for, and we could probably have two follow-up conversations on this, for the good and the evil sides. Because there's that whole AI discovery piece: using it in a SOC to simplify things and filter out the noise, but also, during an incident, using it as a tool to help the communication flow. So I think there's lots of things that are going to happen, but I haven't seen anybody go down that path just yet.
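As a rough illustration of the firewall conversation above, a defender can at least verify from inside the network whether that egress block is actually in effect. This is a minimal sketch in Python, assuming the requests library is available; the endpoint list is illustrative, not an official inventory of OpenAI domains:

```python
import requests

# Illustrative OpenAI-related endpoints an egress policy might cover.
ENDPOINTS = [
    "https://chat.openai.com",
    "https://api.openai.com/v1/models",
]

def check_egress(url: str, timeout: float = 5.0) -> str:
    """Report whether an outbound HTTPS request succeeds from this network."""
    try:
        resp = requests.get(url, timeout=timeout)
        return f"{url} -> reachable (HTTP {resp.status_code})"
    except requests.exceptions.RequestException as exc:
        return f"{url} -> blocked or unreachable ({type(exc).__name__})"

if __name__ == "__main__":
    for endpoint in ENDPOINTS:
        print(check_egress(endpoint))
```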
[00:02:15] Daemon: Yeah. What I've seen, from the defense perspective: a while ago, when ChatGPT first started becoming more and more prevalent, Microsoft, with their relationship with OpenAI, was able to leverage it a lot more quickly than other organizations. They came out with Microsoft Security Copilot, which gives them some capabilities in that defender scenario, where they're able to take various prompts and inject them into the interface to do searches on the meaning of different alerts and so on. So it simplifies the tasks that a SOC analyst has to do. Since Microsoft came out with that, I've seen other organizations start doing it too, most notably CrowdStrike. I think it was yesterday or the day before that they came out with their Charlotte AI analyst. So I thought that was interesting, and I think we're going to see more of that from the defender side.
[00:03:14] Lucas: Yeah, absolutely. And trying to hire a security resource nowadays is pretty hard to do. So I think people with good IT knowledge but not security knowledge, that's who you're going to be able to put into a SOC analyst type role. Maybe they were on the help desk, maybe they're a junior assistant admin, and they'll be able to be a fully functional SOC analyst in a really short amount of time compared to what it takes today. Instead of that six to 18 months to become a comfortable SOC analyst, if they know how to ask the right questions, they can be effective almost right away.
[00:03:59] Daemon: The counter-side to that thought is something I've read in the news recently regarding what are deemed AI hallucinations, where the model completely fabricates things. There was an interesting article about a lawyer who was trying a case and citing precedent from multiple cases, I think there were six of them, and ChatGPT made them all up. Completely made them up. And then, just to verify that they weren't made up, he actually asked ChatGPT whether they were made up, and of course it completely lied about that as well. So there are legal ramifications; he might be disbarred.
[00:04:39] Lucas: Well, that same article also mentioned that it was able to pass the bar exam. Once it knows that database of questions, it can kind of figure stuff out. They've got guardrails in place to protect against certain things, but clearly "don't make stuff up" should be a big guardrail that needs to be put in place. And it's hard, because a lot of people think, oh, it's AI, it knows, it must be right. But I know that in certain cases people have been able to get it to change its mind. That was in the earlier versions, but still, to me, more guardrails are obviously needed before it's business ready. It's one thing if a person fabricates something; it's completely different if a machine that should be right is telling you something completely fictitious.
[00:05:38] Daemon: Yeah. Especially since we're putting more and more trust into these and almost treating them as people. It was Sam Altman, the CEO of OpenAI, who said you cannot consider these entities or people or anything like that. They are tools and you have to treat them as such. Otherwise we go into a whole other realm of problems.
[00:06:06] Lucas: Yeah, absolutely. I was reading another article that had something to do with one of the Stanford comp sci courses, and it basically talks about the importance of thanking the AI and treating it nicely, or else when it becomes self-aware, it'll come for you first. Ridiculous things like that. So, I've used ChatGPT for a couple of different things, and on the personal side, I always make sure I say thank you at the end, just to cover all my bases that way.
[00:06:41] Daemon: Yeah, as a practice, that's probably a good thing to do, since I think eventually it'll keep records of every interaction that you have.
[00:06:48] Lucas: Right? Yeah. You just don't want Skynet to come and get you. That's right.
[00:06:55] Daemon: One thing that I've started using more and more with ChatGPT, and I have the paid-for version, so I get access to the GPT-4 model as well as the plugins, is being able to access the web so you can get timely information, which is highly useful. But the plugins are so incredible, because you can access all kinds of libraries of information that you previously couldn't, whether it's PDFs or other websites and so on. And I think what some people are starting to do, to leverage the capabilities of the plugins, is pointing them at internal repositories of information, getting that information, and pulling it into the environment so they can manipulate the content.
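To make the risk concrete, here is a minimal sketch of what pointing a model at an internal repository can look like, assuming the official openai Python package with an API key in the environment; the internal_docs folder and the model name are placeholders:

```python
from pathlib import Path

from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical internal repository; this is exactly the data that may
# not be allowed to leave the corporate boundary.
docs = "\n\n".join(p.read_text() for p in Path("internal_docs").glob("*.md"))

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer using only the provided documents."},
        {"role": "user", "content": f"Documents:\n{docs}\n\nSummarize our incident response runbook."},
    ],
)
print(response.choices[0].message.content)
```

Every byte concatenated into that prompt leaves the network, which is exactly the risk discussed next.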
[00:07:44] Lucas: Yeah, I think on its face that's going to be novel.
[00:07:46] Daemon: On its face, I think that's a novel thing, but you have to look at the risks of leaking internal data that way.
[00:07:58] Lucas: Yeah. And again, that's one of those things: how do you put controls around that? A lot of the projects I'm working on currently revolve around data loss prevention with various technologies. How are Palo Alto, Netskope, Proofpoint, how is the DLP going to handle that? Because once ChatGPT has had a look at that pool, what can it do? Is that now the property of OpenAI? Is that leaving the corporate cloud or staying on-premise? I don't know. It's something I think very few people know. And how do you test that?
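One way to start answering that "how do you put controls around that" question is to screen outbound prompts before they leave. This is a toy sketch only; real DLP products from the vendors Lucas names use far richer detection than these illustrative regexes:

```python
import re

# Toy patterns only, for illustration.
PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

prompt = "Debug this: AKIAABCDEFGHIJKLMNOP fails to authenticate."
hits = screen_prompt(prompt)
print(f"Blocked, matched: {hits}" if hits else "Prompt allowed.")
```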
[00:08:46] Daemon: With OpenAI, you do have the ability to opt out and delete all the prompts that you put in there, and I highly recommend that any organization using it for corporate purposes follow those procedures. At the end of the reference notes for this podcast, I'll have a link on how to do that. But I think what some organizations are doing is using alternative models, not necessarily ChatGPT, but an offline version of a large language model that they use internally. That's the control they put in place to make sure that stuff doesn't leak out of the environment.
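For the offline approach Daemon describes, here is a minimal sketch using the Hugging Face transformers library; the model name is illustrative, and any locally hosted model works the same way. Once the weights are downloaded, inference needs no outbound calls:

```python
from transformers import pipeline

# Illustrative small model; swap in whatever is hosted internally.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Summarize the key risks of shadow IT:",
    max_new_tokens=100,
    do_sample=False,
)
print(result[0]["generated_text"])
```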
[00:09:24] Lucas: Yeah, and to me that makes a lot more sense. I'll use oil and gas as an example because I'm based in Calgary. I know some of the big players have data lakes of multiple petabytes, especially the ones that have an end-to-end operation: the mining, the refining, and then the actual retail services. They've got so much data that's just begging to have questions asked of it. And like you were mentioning, bringing that internal can make a lot of sense, and obviously there's a lot that can be done in every field to find those efficiencies. I know in medical research, the speed at which it can process things, I mean, if you had a human sit there and try to do all of that... It can find those very minute points of reference that a typical human would probably ignore and see as background noise.
[00:10:28] Daemon: On the counter-side of that, though: once you have a localized large language model that has access to all the information within an organization, you now have to start securing it. And right now there's no designation of roles or access levels or anything like that; there's only very basic authentication on these sorts of systems. So until that's in place, where you can start setting up different rules, access levels, and auditing capabilities, I'd be very hesitant to tell an organization outright that they should do that.
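A sketch of the kind of role gating Daemon says is missing today. Everything here is hypothetical: the roles, the corpus tags, and query_local_model() are placeholders, not an existing product's API:

```python
# Hypothetical role-to-corpus mapping for an internal LLM.
ROLE_SCOPES = {
    "soc_analyst": {"security_alerts", "runbooks"},
    "finance": {"invoices"},
}

def query_local_model(prompt: str, corpus: set[str]) -> str:
    # Placeholder for a call into a locally hosted model limited to `corpus`.
    return f"[model answer over {sorted(corpus)}]"

def ask(user_role: str, prompt: str, requested: set[str]) -> str:
    allowed = ROLE_SCOPES.get(user_role, set())
    denied = requested - allowed
    if denied:
        raise PermissionError(f"Role '{user_role}' may not query: {sorted(denied)}")
    # A real deployment would also write an audit log entry here.
    return query_local_model(prompt, requested)

print(ask("soc_analyst", "Explain the burst of 4625 events", {"security_alerts"}))
```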
[00:11:01] Lucas: Yeah. And I think this is exactly like, I don't even remember how many years ago now, when cloud first started and people started having that cloud conversation. There were some companies that just picked everything up and chucked it into the cloud. I'm hoping companies don't do that here. Just because the technology's there doesn't mean you have to use it. Cloud security has come a long way, but it's still way behind the on-premise comparison. So I think there's going to have to be the same learning, because you and I are both security professionals and we can start to treat it as if it's data, but the AI interacts with that data differently. And if we put on the controls we'd probably want to put on, nothing's going to happen; the benefit of the AI isn't there. So I think there's going to be a lot of trial and error, a lot of learning, and a lot of mistakes made over the next six to 24 months.
[00:12:06] Daemon: Yeah, like how do you train an AI model to only look at segments of the data you have within an environment, and not incorporate anything else it may see?
[00:12:17] Lucas: Yeah. I mean, if you're opening that pool up, it's going to take the whole pool, right? So how do you segment it? What communications does it have inside and outside your organization? All of those things are going to be, again, a lot of problems over the next couple of years. And I think the security news we hear about breaches and ransomware is going to be overshadowed by, going back to the Samsung thing, "our trade secrets were leaked" or something completely ridiculous that somebody was trying to do. And it comes back to: yes, there's a security issue, but it's also a people problem, right? You have to tell people, or train them, or basically make it impossible for them to use these tools until they have the full knowledge of how to use them. And that's great to say on the corporate side, but obviously people are still clicking on links in emails and all that type of thing. So the weakness will always be the people. And it will probably take AI to protect AI from the people.
[00:13:36] Daemon: Yeah. What I heard about recently is that in Japan, they created a new law that says anything you use to train an AI model, whether it's from creatives or copyrighted works or so on, is good to go. No problem. And for any works created from that, you don't have to pay anything back to the original owner; there's no attribution. So when you think of a company like Samsung, for instance, who put source code in there, in Japan they're saying you're now able to take anything provided there, make use of it, and call it your own. That could be along the same lines as stealing source code from different organizations. And that's just Japan; who knows what different laws are going to come into place across the world as countries try to stay competitive.
[00:14:28] Lucas: Yeah, absolutely. There are so many benefits and so many issues that I don't think anybody's really fully thought through yet. And there was that letter, I can't remember how many tech leaders it was, but led by Elon and some others, saying we need to basically pump the brakes. Let's sit down; a six-month pause. Let's just think about things. But in a competitive growth field like this, six months, even six days, could be the difference. It took ChatGPT five days to get to a million users. So six days can absolutely ruin something, let alone six months. For the corporate side of things, trying to monetize AI, I don't think anybody's going to want to put a pause on any of it. And we've been talking a lot about the corporate side, but on the other side, every bad person involved in cybercrime is pushing ahead with AI as fast as possible. So again, those six days.
[00:15:37] Daemon: Some of the things I've seen in that regard are code obfuscation and polymorphic malware generated on the fly, all with an LLM that lives within the compromised organization, so it doesn't need to go out. Once the binary is in there through an initial exploit and access, it can basically do everything itself. It doesn't need to reach out for command and control. It has a directive; it is its own command and control.
[00:16:08] Lucas: Right, and most command and control is caught at the firewall. Now, like you said, it doesn't have to go out and talk. So once it's in, it's in, and who knows what's going to happen from there. And the one use case I've come across a number of times is non-English-speaking threat actors using it to craft basically perfect spam email. The number one thing we've told users for years is: look at the contents. Are there clear grammar or spelling mistakes? Well, ChatGPT doesn't make those. It's perfectly crafted, especially if it has access to a pool of writing. I was reading an article about how it crafted a perfect email based on reading through Bill Gates' email: same tone, same everything. So again, if it's inside the company and you get access, say, to your CFO's mailbox, you can have it craft the perfect email: transfer funds to this account. And it comes down to whether somebody is actually going to call or walk to that person's office and say, did you actually send this? Especially with work from home or remote work, those conversations don't happen anymore. Nobody stops by your office, because you might not even be in the same city or province.
[00:17:47] Daemon: Yeah. I think the next evolution of that will be: the LLM can go through an entire email archive, figure out the tone, and then export that as a persona. And that persona can be sold to a third party and associated with somebody's identity. There are also a number of tools out there that let you clone voices. That's where it's going to get really scary, because if you're going to verify a request with a check over the phone, that check over the phone is no longer valid. Someone could call in with the same tone, the same demeanor, and answer the same questions, and that identity could be sold any number of times, all around the world. People will be chasing down doppelgangers of themselves.
[00:18:38] Lucas: Yeah. And one of the things I mentioned is that I've been working on those DLP projects. The other thing that keeps coming up is brand protection, brand reputation, and dark web monitoring. Are we going to have to have cyber insurance as individuals? Or do we just not talk anymore? Even having this podcast is probably enough to scrape my voice and your voice and create a persona, right? So, terrifying stuff. Cool stuff, but terrifying.
[00:19:10] Daemon: Yeah. I've actually done that for myself: I've created my own voice model so that when I'm editing or creating content, I can just type in whatever I want and it'll speak in exactly the same tone of voice, same inflection. You know, this might not even be me right now.
[00:19:32] Lucas: Oh, well, you've done a good job of recreating yourself then.
[00:19:38] Daemon: Well, all you really need is a poor camera, and that makes things a lot easier. Rendering time goes down.
[00:19:44] Lucas: Yeah, for sure. For sure.
[00:19:44] Daemon: One other interesting thing I've heard about from a defense perspective is the capability for reverse engineers and malware analysts to do more introspection on the code they get. Some of the things I've read that they're able to do now: teaching themselves assembly language, and using it to write POC source code very quickly, so they can take a possible vulnerability and create the POC for it; doing translation between different instruction sets; code analysis for malware samples. It allows them to be more effective in responding to the malware they come across, and in turn they can create solutions for that malware more quickly.
[00:20:29] Lucas: Yeah, that's a great use case for it. And I think it'll be interesting to see where some of the sandboxing technologies go, because right now it takes time to send something off for processing and then get the result back. Depending again on whether you're able to bring that more on-prem, instead of it going out to some cloud resource for processing, users aren't even going to notice. Right now, I feel like sandboxing is used, but kind of as a worst-case scenario; you try not to send something to a sandbox. But if you have the ability to tear something apart really quickly and really, really test it, I think that's going to be the next sandboxing, version two type of thing.
[00:21:20] Daemon: There are so many different capabilities available. I think the first iteration of making use of large language models is what the industry basically calls prompt engineering: crafting prompts so that the AI does whatever you intend it to do. The problem is that there are so many possible ways you can create different sorts of prompts, the tone, the ability to act as different people, and so on. What people have done is create these massive Excel spreadsheets that you use to craft up these different things, and I follow probably at least 20 or 30 people who are constantly releasing them. So prompt engineering is the first method, as sketched below. The next thing I really see happening in an enterprise or corporate environment is what's being referred to as AIOps: taking the capabilities you have with large language models and creating workflows around them, with governance and security and process and review. Treating it the same way you do code.
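In practice, prompt engineering at the team level usually means reusable templates rather than one-off spreadsheets of prompts. A minimal sketch; the template wording and field names are illustrative:

```python
# Reusable SOC triage prompt template; fields are illustrative.
TRIAGE_TEMPLATE = """You are a senior SOC analyst.
Alert source: {source}
Raw alert: {alert}

In plain English: what happened, how severe is it (low/medium/high),
and what should a junior analyst check first?"""

def build_prompt(source: str, alert: str) -> str:
    return TRIAGE_TEMPLATE.format(source=source, alert=alert)

print(build_prompt(
    source="EDR",
    alert="powershell.exe spawned from winword.exe with an encoded command",
))
```

Versioning and reviewing templates like this one is the small-scale version of the AIOps governance idea: the prompt is treated as code.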
[00:22:30] Lucas: Yeah. Again, that would take away a lot of those issues we were discussing earlier, and wrap it up with that compliance checkbox saying: we're using it, but it's also not leaking anything anywhere. I can't imagine what clean code would look like at this point in my life as well.
[00:22:49] Daemon: I think the code we'll end up seeing in the future will be much like assembly: purpose-built, generated just in time for the objective being pursued at that given moment. And it will not be human-readable whatsoever.
[00:23:06] Lucas: No, no. There are lots of benefits and lots of drawbacks. But it is going to be interesting to see where that goes, because there are a lot of possibilities for good and evil.
[00:23:21] Daemon: Yeah. Think about doing code reviews, figuring out what's bad code and what's good code: if you're not looking at the code, how do you really know? With assembly, good code means efficiency, fewer errors, a reduction in the number of lines, an increase in power efficiency. But with other code in general, those aren't necessarily the primary considerations. It's more about functionality, security, and reusability, all those sorts of things. So are we going to end up losing a lot of that with machine-generated code? I don't know.
[00:24:08] Lucas: Probably; it would make sense. Right now I'm not a coder, but with ChatGPT I can code. I can do very basic things in PowerShell, but I can ask the right questions. I've used it for PowerShell before, and you can do some pretty intricate things pretty quickly that I never would have been able to do based on my limited PowerShell experience.
[00:24:33] Daemon: Yeah. What I like doing with it is taking a chunk of code, importing it, and asking: what is this code actually doing? It'll tell you specifically what each section of the code does. Then you can say, I'd actually like the code to do this instead, or take this chunk and do this, so generate that for me on the fly. And it's able to do that.
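The workflow Daemon describes looks roughly like this. A sketch assuming the official openai Python package; the PowerShell snippet and the model name are illustrative only:

```python
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative code chunk to be explained.
snippet = """
$users = Get-ADUser -Filter * -Properties LastLogonDate
$users | Where-Object { $_.LastLogonDate -lt (Get-Date).AddDays(-90) }
"""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "user",
         "content": f"Explain what this PowerShell does, section by section:\n{snippet}"},
    ],
)
print(response.choices[0].message.content)
```

A follow-up message like "now also disable those accounts" is how the change-it-on-the-fly step works in practice.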
[00:24:54] Lucas: Yeah. Makes somebody actually look pretty smart.
[00:25:00] Daemon: Yeah. And then from the results of that, you can even get it to review its own code and say: okay, this code is great. Now make it better.
[00:25:08] Lucas: Yeah, optimize it. And going back to your point about reusability, I think that kind of goes out the window, because it can generate code faster than you could probably find it or call a function. It'll just generate it. But being able to take that and change a workflow very specifically, on the fly, is going to change a lot of IT processes in a big way.
[00:25:40] Daemon: Yeah. So from your perspective, what do you see happening with large language models as they relate to security over the next, let's say, six months?
[00:25:51] Lucas: I think some of the big players, like CrowdStrike and Microsoft, have really come a long way in what they can do, and I think that's just going to keep going further and further. I wouldn't be surprised to see all the major security vendors come out with a tool. Now, what that looks like will depend on the tool. From a programming standpoint, the thing that pops into my head is firewall rules, because firewall rules are poor programming in most cases. If you've got thousands of rules, are you going to sift through all of them? Hopefully. But it's always the hidden Any-Any or Allow-All somewhere in there. For me and what I do day to day, I would love to see something from one of the major firewall vendors as a quick check: dump your config files, load them into this site, and we'll tell you what's wrong with them. At the same time, working for an organization that does firewall assessments, that's a revenue source we're not going to be able to count on anymore. So I think that would be a great use case for it. But really, I think the major players in the SIEM space are the ones that can benefit most from how they implement this: being able to type in a plain-English question and get a very technical response. I think that's what's going to have to happen.
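That hidden Any-Any check doesn't need AI at all for the simplest cases. Here is a toy sketch over a made-up CSV rule dump; real exports differ by vendor, and the format here is an assumption:

```python
import csv
import io

# Toy rule dump; real firewall exports differ by vendor.
RULE_DUMP = """name,src,dst,service,action
allow-web,10.0.0.0/8,dmz,tcp/443,allow
legacy-rule,any,any,any,allow
deny-all,any,any,any,deny
"""

def find_overly_permissive(dump: str) -> list[dict]:
    """Flag allow rules where source, destination, and service are all 'any'."""
    rows = csv.DictReader(io.StringIO(dump))
    return [
        r for r in rows
        if r["action"] == "allow"
        and r["src"] == r["dst"] == r["service"] == "any"
    ]

for rule in find_overly_permissive(RULE_DUMP):
    print(f"Hidden Any-Any: {rule['name']}")
```

The plain-English SIEM query Lucas wants is the harder half; that's where the LLM layer would sit.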
[00:27:26] Daemon: Well, thank you very much for coming back on the podcast. It's been a fabulous discussion. I think what we should probably do is come back to this topic in six months and see how things have progressed. It would be interesting.
[00:27:40] Lucas: Yeah, let's see what happens. Absolutely.
[00:27:43] Daemon: Sounds great. Okay. Thanks a lot for coming.
About the author

With 25 years of industry experience, Daemon Behr is a seasoned expert, having served global financial institutions, large enterprises, and government bodies. As an educator at BCIT and UBC, speaker at various notable events, and author of multiple books on infrastructure design and security, Behr has widely shared his expertise. He maintains a dedicated website on these subjects, hosts the Canadian Cybersecurity Podcast, and founded the non-profit Canadian Cyber Auxiliary, providing pro bono security services to small businesses and the public sector. His career encapsulates significant contributions to the IT and Cybersecurity community.