Episode 08 – Protecting Critical Infrastructure: A Guide to Operational Technology Security in Modern Environments – With Andrew Ginter

Andrew Ginter

Andrew Ginter is renowned for his expertise in control systems and industrial cybersecurity. With a foundation laid at Hewlett Packard, he pioneered high-end control system products for the world's largest pipelines and power grids. Transitioning into IT-OT middleware, Andrew established connections between control systems and business automation, revealing the cybersecurity implications that would later drive his career.

Notably, Andrew served as the Chief Technology Officer at Industrial Defender, where he spearheaded the creation of pioneering industrial security solutions. Presently at Waterfall Security Solutions as VP of Industrial Security, he continues his impactful journey.

With a legacy of safeguarding critical infrastructure and advocating awareness, Andrew’s influence persists as he bolsters industrial security globally.

Industrial Security Podcast

2023 OT threat report

https://www.linkedin.com/in/andrewginter/


On today's episode:



Here are the key topics discussed:

  • Convergence of IT and OT security
  • Evolution of industrial cybersecurity post-9/11
  • Introduction to CCE and CIE, and their focus on risk assessment and engineering mitigations
  • Lack of personnel in the industrial cybersecurity field
  • Similarities and differences in managing IT and OT systems
  • Challenges and methods of improving critical infrastructure cybersecurity
  • Government regulation and international cooperation in ICS security

Below is the transcript of the podcast and some key points related to what was referenced.


[00:00:00] Daemon: Today I’m joined by Andrew Ginter, VP of Industrial Security at Waterfall Security Solutions. Andrew, could you provide me with a little bit of background on yourself and Waterfall?


[00:00:12] Andrew: Sure. Myself, I got started many, many years ago developing control system products — high-end products used in the world’s largest pipelines and largest power grids — at Hewlett Packard.
I drifted into IT/OT middleware, connecting control systems to mostly SAP, for business automation and business efficiencies — thereby connecting a lot of IT and OT networks and contributing, in a sense, to the cybersecurity problems that now plague many industries.
I got religion. I wound up the Chief Technology Officer at Industrial Defender, building the world’s first industrial SIEM. Now I’m with Waterfall Security Solutions. Waterfall is a technology provider; we work with the world’s most secure industrial sites. Our flagship is the unidirectional gateway, and I’ll probably say more about that later in the show.
But it was designed back in 2007, when advanced persistent threats were the big news, and it was designed to defeat them utterly. And that need has not gone away. So, we’re still here, we’re still protecting thousands of industrial sites worldwide. Thank you for that.


[00:01:28] Daemon: So, there’s been a lot of merging of what’s deemed IT versus OT, as more IoT devices come into play and as the criticality of IT systems becomes more and more apparent. How do you see these two different strategies or methods of managing separate networks merging, and what are some of the challenges associated with that?


[00:02:00] Andrew: Yeah, it’s a complicated space. In the early days, industrial cybersecurity — SCADA security, OT security, whatever you want to call it — really only got started after 9/11, after the physical assault on the World Trade Center and the Pentagon. That was not a cyber-attack, do not get me wrong. But in the aftermath, a lot of authorities said that was a failure of imagination.
Where else have we failed? And they identified critical infrastructure cybersecurity, and more broadly industrial cybersecurity. Before 9/11, in the late nineties, there were maybe a dozen academics — university professors — involved in the space. After the World Trade Center attack there were hundreds, and thousands of people are in the field now.
And where did the field look to for inspiration? The first generation of advice, the first generation of standards, was written largely by IT cybersecurity experts — because that’s all the world had — in consultation with engineers. Basically, they said: “Do security the IT way as much as you can. I know it’s hard — do security the IT way on your industrial networks.” That was the first generation, and it was subtly wrong. But don’t get me wrong, I’m grateful to those practitioners, security experts from the IT world, who came and bootstrapped this industry with that first generation of advice.
But today, things are different. The latest thinking is: look, from the very beginning we said, “Patching is hard to do on industrial networks, because the cure can be as bad as the disease — you shut the plant down if there’s a blue screen.” “Encryption is just hard on these networks. I mean, where’s the certificate authority? How do you verify the certificate? Do you go out to the internet for that? No, thank you.” Even passwords are hard on these networks. Science has demonstrated that every one of us becomes measurably stupider in an emergency.
If I need to remember a password to trigger the safety shutdown to save my life, I’m sorry, I’m dead. So, for a long time the advice said: OT is different, try to do things the IT way, we recognize it’s hard, try harder. Today, we recognize that OT networks are different.
Now, all these differences that have been documented forever are, in a sense, superficial. The fundamental difference is consequence. What’s the worst that can happen on most business networks? You leak PII into the ecosystem, into the internet. You get hit with lawsuits from your customers. You suffer downtime. You’ve got to bring in expensive consultants to clean out your system on an emergency basis. You pay some ransom. It’s millions, tens of millions of dollars in damages. You can buy insurance for all of that.
What’s the worst that can happen on, OT systems? It depends on the system, but often it’s explosions and fires and environmental disasters and, damaged turbines. A lot of this equipment, costs hundreds of millions of dollars and takes months or years to produce and install, so you suffer, enormous downtime. Here’s the thing. We cannot restore lives, nor environmental disasters, nor damaged equipment from backups. And even if we had a magic wand, we could wave the magic wand patch everything, put antivirus everywhere, passwords everywhere, encrypt everything.
Even if we had that magic wand on the OT side, we would still have to manage the networks differently because of the consequence. The consequence of compromise is unacceptable in many, but not all, of these networks, and this is part of the confusion. A small shoe factory is an OT system, and it demands a very different kind of security management system than does a railway switching system, where the worst-case consequence of compromise is a mass casualty event.


[00:06:43] Daemon: So, what I’ve seen from several attacks over the past few years is that industrial systems that may not be directly hit by a cyber-attack are affected by attacks that happen on the IT side. If, for instance, there is an attack on management systems or backend systems, then as a means of preventing a possible lateral breach into the OT systems, organizations may shut down that environment pre-emptively until they get their house in order on the IT side.
Do you believe that that is because the systems that are managing both are merging or that the staff that are managing both are the same?
Or what do you think the reasoning is behind that? Is it a lack of visibility across the systems?


[00:07:45] Andrew: It’s a little complicated. In our latest threat report, we identify three ways that a ransomware attack can impact physical operations. The first one is obvious: the ransomware gets into the computers that are automating the physical process and wreaks havoc.
And so, you have to shut down. The Snake ransomware had that kind of code built into it — it hit, I think, Honda a number of times. It goes right into the control system and wreaks havoc. That’s obvious.
So, what you’ve mentioned is sort of less obvious. The catchphrase is “abundance of caution”. Ransomware gets into the IT network, and we look over at the OT network and say, we don’t trust the security system on the OT network, so we must shut this critical safety system down, because we don’t know what the ransomware’s going to do, if it’s going to leak into that system. It may not have leaked into there yet, but we just don’t know. So, we have to shut it down due to an abundance of caution.
And the third way is a dependency. The classic example was Maersk, the container shipping company, with NotPetya in 2017. It wasn’t ransomware, but it behaved kind of like it — it pretended to be ransomware. Maersk shut down for six days because their IT network was crippled, not because the stuff got into their shipboard computers. Their ships still moved. Nor was it because they were afraid of the ransomware getting into the shipboard computers — they were confident of their security in the ports and on the ships. The reason they shut down is that the system that tracked the EDI files — the files that said where each of the thousands of containers on every ship went — was crippled. So, you could take a crane that still worked, you could lift the containers off the ships in the port, you could set them down on a truck, but you couldn’t tell the truck where to go. And so, shipping stopped. This is an example of a dependency, where physical operations depend on systems on the IT network.
And so, you can have the strongest OT security in the world, but if you depend on IT systems and those IT systems are crippled, your physical operations are crippled. This is something people have been looking at. The latest directives out of the TSA for pipelines and for rail systems address these dependencies — and these are security directives; they’re the law, not advice. But yeah, talking about the Canadian context, the number one type of attack that we see impairing physical operations is ransomware.


[00:11:00] Daemon: Some initiatives to change the way that security is approached for ICS systems have come to light over the past few years — namely the adoption of Cyber-Informed Engineering and Consequence-Driven, Cyber-Informed Engineering. Can you talk a little bit about those, and the adoption of these frameworks that you’ve seen in the industry?


[00:11:31] Andrew: Sure. Where to begin? The one that came out first was CCE, Consequence-Driven, Cyber-Informed Engineering. There’s a textbook on it that Bochman and Freeman published out of Idaho National Labs, and it’s been around for a couple of years now.
The book is primarily about an OT risk assessment methodology, but there are four chapters in the book on mitigations, and it’s those four chapters that people think of as the engineering end of it. The mitigations they talk about are, in a sense, unhackable.
They’re not really cybersecurity mitigations at all. They are engineering designs to address cyber risk. I mean, the engineering profession — where did the word “engineer” come from? It was the people who drove the trains, who drove the locomotives. What was their number one priority?
Well, there was generally only one track across the continent, because track was really expensive to lay. And so, the number one priority was making sure “I’m the only train on the track”. This is why the telegraph was invented — so that every time a train went past, the crew could reach out to an arm, grab a scrap of paper on the way by, and figure out whether they had to pull to an emergency stop or keep going.
Protecting safe operation, protecting their own lives, protecting public safety — if there were passengers on the train — this was the number one goal. And so, the engineering profession from the very beginning has been tasked, in part — I mean, there are many things engineers do — but in part with protecting public safety.
Consequence-driven means recognizing that the difference between the IT and the OT network is consequence. Even if we had the magic wand, the consequences are still different; you have to manage to the consequences. The engineering profession has designed systems to address physical risk for centuries, and it has very powerful tools at its disposal to address physical risk.
The classic example in the modern day: let’s say you’re the technician managing a four-story boiler in a coal-fired power plant. There’s a furnace under the boiler. It produces a lot of steam, and the steam goes into a turbine. You’re calibrating the instrumentation; you’re checking the ultrasound to see what the corrosion levels are. This boiler, and four others like it in the plant, are your job. You spend eight hours a day inside the kill radius of a worst-case boiler explosion.
If one of those boilers explodes, you never see your kids again. How would you like to be protected from a cyber-attack that over-pressurizes the boiler? Would you prefer a spring-loaded valve, so that if the pressure in the boiler becomes too great, it forces a steel plate against the spring, the spring deforms, the steam escapes, and there’s no explosion?
Would you prefer a spring-loaded valve — or three, because these things do wear out — or would you prefer a longer password on the computer controlling the furnace? Most of us would like the valve, thank you, because it’s unhackable.
Safety systems like this — unhackable safeties — have been used to prevent explosions. Engineers design their systems to address risks like the risk of an earthquake, so that the boiler doesn’t blow up during an earthquake and kill everybody nearby; the risk of a fire; the risk of explosion; the risk of toxic release.
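Andrew’s valve-versus-password point can be made concrete with a toy model. The sketch below is an illustration added for these notes, not something discussed on the show: all numbers (the lift pressure, the firing rates, the pressure model) are invented. The key property it demonstrates is that the mechanical relief is a fixed physical constraint — no amount of controller compromise changes it.

```python
# Toy model (illustrative only): a software-driven furnace controller vs.
# a mechanical spring-loaded relief valve. The valve's lift pressure is set
# by the spring, not by code, so a compromised controller cannot change it.

RELIEF_LIFT_PRESSURE = 110.0   # hypothetical spring rating, in bar

def step_pressure(pressure: float, firing_rate: float) -> float:
    """One time-step of a crude boiler model: firing adds pressure."""
    return pressure + 0.5 * firing_rate

def mechanical_relief(pressure: float) -> float:
    """The spring-loaded valve vents anything above its lift pressure.
    Nothing on the network can alter RELIEF_LIFT_PRESSURE."""
    return min(pressure, RELIEF_LIFT_PRESSURE)

def simulate(compromised: bool, steps: int = 100) -> float:
    pressure = 100.0
    for _ in range(steps):
        # A compromised controller drives the furnace flat out and ignores
        # its software high-pressure alarm; the valve doesn't care.
        firing = 1.0 if compromised else 0.1
        pressure = mechanical_relief(step_pressure(pressure, firing))
    return pressure

print(simulate(compromised=False))  # stays near normal operating pressure
print(simulate(compromised=True))   # capped at the valve's lift pressure
```

The contrast with a password is exactly the one Andrew draws: the password guards the code path that sets `firing`, while the valve bounds the physical outcome regardless of what the code does.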
They’re addressing all these physical risks in their physical designs. And what people, what an engineer, what the profession has started to do, (but not systematically has started to do), is to apply these very powerful tools for managing physical risk. Apply them to cyber risk as well, apply them to prevent physical consequences of cyber-attacks.
And so, there are examples of this in those four chapters of the book. And I say it’s engineering, not cybersecurity, because if you look at the NIST Cybersecurity Framework — where is an overpressure valve? Where is manual operations as a fallback if a control system is crippled?
None of these engineering mitigations are in the NIST framework. None of these engineering mitigations are even in the industrial cybersecurity bible, which is IEC 62443. They talk about cybersecurity mitigations, not physical designs. And so, what CCE introduced was the concept of systematically applying physical designs, in engineering, to the task of addressing cyber risk as well as other kinds of risk. And the new Cyber-Informed Engineering initiative is taking that to the next step. Again, CCE was a lot about risk assessment, whereas Cyber-Informed Engineering is looking at the bigger picture.
And basically, CIE is saying there are two sides to the coin. One side is the obvious one that we’ve been talking about for 20 years, which is that we need to teach more cybersecurity to engineers. The other side of the coin doesn’t even have a name — we’re arguing over what the name should be on the calls that are trying to define this body of knowledge. That other side of the coin is: take safety engineering, automation engineering, protection engineering, network engineering, and apply them systematically to the task of addressing cyber risk as well as physical risk.
This activity has no well-understood name, but it says: apply, systematically, the very powerful tools that the engineering profession developed over the last two centuries to the task of cyber risk, in addition to cybersecurity. We need both sides of the coin, but to me, what’s exciting about CIE — the new part of CIE — is the engineering mitigations.


[00:17:44] Daemon: Now, in the cyber industry itself, there’s a severe lack of personnel to fill all of the open vacancies, especially as threats keep evolving and emerging. I imagine that’s even more the case when we talk about managing industrial control systems, because you have to have that fundamental cybersecurity knowledge plus the additional industrial control system knowledge.
How are organizations addressing this lack of resources, staff or capability in the industry?


[00:18:22] Andrew: It’s complicated. I mean, there are brand-new apprenticeship programs, and there are universities involved with these. The organization that I remember being involved is EnergySec.
It’s an apprenticeship program for cybersecurity in the power sector, the electric power sector. And part of the mandate of the CIE initiative — this is a US federally funded initiative — is to engage with post-secondary institutions and help them put a framework together for curricula, so that we can produce more trained people.
But literally just two days ago, I was at a conference and heard some folks from the Department of Energy speaking, saying: people are coming out of these programs with OT cybersecurity credentials and they’re not getting hired. What’s going on here?
And part of the problem they identified was that they can’t find the job postings, because there’s no consistent language. You can’t go to Monster and search for “OT security” — you won’t find anything. People are calling the position something else, and there aren’t a lot of entry-level positions.
A lot of practitioners enter the space with an engineering degree and five years’ experience, taking the next step to become involved in cybersecurity; or they come into the space with an IT credential and five years’ experience on the IT side, and they learn about the OT side.
It happens during their career, not at the beginning of the career. And so, the situation seems very confused. I’m not an expert on this; I’m just giving you a couple of inputs that I’ve heard recently. And there isn’t one OT cybersecurity job. It’s just like the IT security space.
There’s help desk, there’s patching, there’s policy, there’s, designing the systems. There’s, security analysts in the SOC that are looking at alarms all day long. There’s incident response there. There’s the whole, the whole gamut. And some of them need a lot of knowledge and some of them don’t.
And like I said, it just seems confused, but I’m sorry I’m, I’m not an expert on that end of the space.


[00:20:56] Daemon: I definitely appreciate your insight, as you have a lot of experience in seeing what challenges come up in the industry. Now, on a different topic, in terms of managing ICS infrastructures: how does OT diverge from the way that IT infrastructure is managed, in terms of dashboards, management, real-time eyes on glass, having a NOC or a SOC in place? Are those managed in the same sort of way? Do you have operators 24/7 managing the environment, or is that becoming more automated through leveraging AI and machine learning?


[00:22:00] Andrew: For ICS systems, generally speaking, it’s a lot of the same technology. There are exceptions — there’s stuff that’s unique to the OT space, and there’s stuff in the IT space that just doesn’t apply to the OT space — but most of the technology is the same. IT folks would come into an OT security environment and recognize the technology.
In fact, they’d say, “it’s a little old — we’ve been using this stuff for five, ten years and you’re just starting to use it”. And that’s by design, and it very much depends on where you go.
So again, let me work the example of a small shoe factory a bit more. A small shoe factory has, what, 30 machines in it — robotic machines — and they all have moving parts. Friction is the enemy of moving parts. What is the symptom of friction? Things wear out, and when they wear out, you must crawl into these machines, find the piece that’s worn out, and replace it. So how do you protect that, and what are the safety risks in that environment?
The main safety risk is the risk to the technicians who are repairing the equipment. So, when they must crawl into a machine, they take a lever, they move it, the lever forces a pin into the gears, and the gears can’t move anymore. They put a lock on the lever, they put the key in their pocket, and now they can safely crawl into the machine. This is procedure.
What’s the worst-case consequence of cyber consequence, cyber compromise? Well, you might have to erase your computers and rebuild them. You might have to call the emergency people in and pay through the nose to bring a guru in. you might lose a few shifts of production and have to schedule extra shifts.
These are business consequences. You can buy insurance for this.
Tools that you use to protect that network because they’re business consequences. The same as the kinds of business consequences you see on IT networks. It makes sense to use exactly the same tools and exactly the same approaches. There isn’t a difference between small shoe factories and IT networks.
You may as well manage the whole thing as an IT network. But again, look at rail switching systems and large power plants. Fundamentally, all powerful tools are also weapons, and the more powerful the tool, the more powerful the weapon. Many of the physical processes on the high end of the industrial base are very powerful tools, and so we have to manage them more aggressively.
The engineering community calls it engineering change control. Before you apply a patch, you have to answer the question: how likely is this patch to kill anyone? And the answer is never zero. Nothing is ever perfectly safe. And so, the engineering team studies the patch.
I mean, what does that mean? They don’t have the source code to it. They test it for a long time, in all the possible combinations they can think of. It’s a very expensive process. So change control is one of the ways you manage this. This is why things are slower in the OT space — because you just don’t trust the new stuff.
For safety-critical applications, you may never trust some of it. So that’s a general principle. But let me give you a more concrete example. One of the concepts emerging in the CIE space — and I’m on these calls — and especially in the latest TSA regulations for pipelines and rail that I mentioned, is one the TSA did not use a term for, but which I call a consequence boundary. The TSA had a bunch of brand-new, never-before-seen rules for the IT/OT interface — the boundary between a network with business consequences and a network with physical consequences. And I’ve seen this over and over again in other standards, both older standards and the newer ones coming out.
That idea of a consequence boundary is something that almost doesn’t exist in the IT space. Military classified networks kind of have the concept, but it’s a different application there: there, you’re protecting information; here, we’re protecting life and limb.
At a consequence boundary, the TSA recently gave a concrete example: “if the IT network is compromised, the goal is to keep the pipeline running at necessary capacity”. They didn’t define “necessary”; I assume it means the capacity society deems necessary.
You will keep the pipeline running at necessary capacity in spite of the IT network being crippled. To do this, you will design and implement technology and procedures to physically separate these networks in an emergency. If the IT network goes down, you will exercise that option, separate the networks, and restart the pipeline.
Remember the ransomware dependencies I talked about earlier? That means there cannot be any dependencies on the IT network in physical operations — you must be able to run physical operations completely independently of the IT network. Then a second generation of regulation came down and said: okay, we recognize that some of these dependencies are hard to get rid of. So, if you cannot get rid of a dependency, you must document it to the TSA, the Transportation Security Administration. And — they didn’t say this, I’m paraphrasing — you must explain in one-syllable words how you’re going to keep the pipeline running in spite of this dependency during an emergency.
They also said shared trusts are a particularly dangerous kind of dependency: get rid of them. And again, if you can’t get rid of them, then document the residuals and explain in one-syllable words, et cetera. So, they’re putting new rules in place at this consequence boundary, because they’re recognizing that these are two different kinds of networks.
You can buy insurance for your IT network going down and having to clean it out. You generally cannot buy insurance for the damage to society — there’s no insurance company that will sell you insurance against a cyber-attack taking the pipeline down.
So, it’s at this consequence boundary that we start seeing people deploying these new approaches. They are deploying unidirectional gateways — this is the most common example of what gets deployed at a consequence boundary. But there’s also a whole field of network engineering growing up, including a methodology from EPRI, the Electric Power Research Institute, on how to do the industrial internet safely.
A consequence boundary between a control network and the internet is an even bigger consequence boundary. That growing field of network engineering includes old-school air gaps, unidirectional gateways, the EPRI methodology, and the stuff that’s coming out of the TSA, as tools for managing that very important interface between networks that have different worst-case consequences of compromise.
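For readers unfamiliar with unidirectional gateways: the hardware enforces one-way flow physically (for example, a transmit-only fiber link), and software on the OT side replicates data — typically historian readings — out to an IT-side replica that IT users query instead of the OT network. The toy sketch below is our illustration of that replication pattern, not Waterfall’s implementation; all class and tag names are invented, and an in-process queue stands in for the one-way hardware link.

```python
# Conceptual sketch only: models the unidirectional replication pattern.
# In real deployments the one-way property is enforced by hardware;
# here a queue stands in for the transmit-only link.
import queue

one_way_link: queue.Queue = queue.Queue()  # stands in for the TX-only fiber

class OTSender:
    """OT-side agent: pushes historian readings out. It exposes no
    receive path at all, so nothing can be sent back toward OT."""
    def __init__(self, link: queue.Queue):
        self._link = link

    def publish(self, tag: str, value: float) -> None:
        self._link.put((tag, value))

class ITReplica:
    """IT-side replica historian: consumes readings and serves IT users,
    who query the replica instead of touching the OT network."""
    def __init__(self, link: queue.Queue):
        self._link = link
        self.data: dict[str, float] = {}

    def poll(self) -> None:
        while not self._link.empty():
            tag, value = self._link.get()
            self.data[tag] = value

sender = OTSender(one_way_link)
replica = ITReplica(one_way_link)
sender.publish("boiler1.pressure", 101.3)  # hypothetical tag names
sender.publish("boiler1.temp", 540.0)
replica.poll()
print(replica.data)  # IT users see current values; OT sees nothing inbound
```

The design choice to illustrate is that the boundary property lives in the topology, not in a rule set: there is simply no code path (and in hardware, no physical path) from IT back to OT, which is what distinguishes this from a firewall.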


[00:29:21] Daemon: What do you feel is the largest challenge that critical infrastructure faces right now, and what are some methods of meeting those challenges or mitigating the risks associated with them?


[00:29:39] Andrew: Biggest challenge? There are so many, but let me pick a couple. One is — I’m going to use the word awareness, though that’s the wrong word — awareness. The second is that the space is still evolving; the understanding is still evolving. So let me start with awareness. Again, on the IT side, the goal is to protect the information.
On the OT side, the goal is to prevent disaster. Colloquially, it’s “stay left of boom”. And IT experts assume that you do the two the same way. A lot of the technology is the same, yes, but you apply it differently. So, there’s a lack of understanding on the IT side.
There’s a lack of understanding. On the OT side, the engineering teams, have sort of thrown security over the fence for a generation now, almost a generation. And, said, IT can handle that and it’s not happening. It’s different and as I said, the engineering profession has very powerful tools in this space.
They’re not being applied systematically, mostly because a lack of awareness. And part of the problem is that the threat is growing rapidly. It’s getting steadily worse. I mean, let me back off on threats for a second. For 40 years, we’ve been deploying more and more automation in the industrial space.
This is why we have cheap cell phones. This is why we have cheap drinking water. This is why we have cheap electricity. Automation makes everything cheaper, makes it more affordable for all of us. This is a good thing, but, in the modern world, all automation is computers and software. And so, the more automation we deploy, the more targets we’re deploying for industrial cybersecurity sabotage.
And data in motion is the lifeblood of modern automation, and all cyber-sabotage attacks are information. The only way a control system can change from a normal state to a sabotaged state is if attack information somehow enters the system. It might come in through a firewall. It might come in through a USB key.
It might come in because a contractor carried an infected laptop in. It might come in in my head, as I walk past physical security with a password and malicious intent. But for 40 years we’ve been deploying more and more targets of cyber-attack, and we’ve been connecting those targets, creating more and more opportunities for attack information to enter the system.
Are either of these trends disappearing in the next 40 years? I don’t think so. This problem has been getting steadily worse. You could ignore it 25 years ago — it wasn’t a thing then. You can’t ignore it anymore. The threat environment has changed. Ransomware has changed the equation.
We’re seeing physical outages more than doubling every year. We’re on an exponential path. and business decision makers, boards of directors, engineering teams are still wrapping their heads around this going, “Whoa, what happened? The world changed. Oh no, we have to do something differently. What do we do differently?”
It’s not like safety engineering, that we’ve been doing it for 30, 40 years now. This is new stuff. And so, awareness not just among the engineering teams, but across the entire business, across, the ecosystem, regulators. Awareness is a big thing, and the other one is that the techniques are still evolving.
Nobody’s ever before gathered engineering techniques into a body of knowledge that can be systematically applied to the task of dealing with cyber risk. And even the ideas and the terminology, it was less than a decade ago that, we looked around at the first generation of advice and said, Thanks for that.
But our goal is not to protect the information. Our goal is to prevent unacceptable physical consequences. Our first goal is safety. Our second goal is to keep the lights on reliability. Our third goal is efficient operations. These are the goals for industrial automation. These are the goals that a cybersecurity program must support.
The space is still evolving, and in part it’s evolving comparatively slowly because of engineering change control — everything is slow in this space; it has to be. But there also just aren’t that many of us in the space. I’m sorry, but if you put more brains on something, it moves faster, and cybersecurity is a niche within IT.
OT is a niche within IT, and OT security is a niche within OT. We’re talking about a niche within a niche. And on the low end of the consequence spectrum, it’s the same as IT — it’s only on the high end of the consequence spectrum that we have to do things differently.
So, we’re talking about a niche within a niche within a niche, and it all gets confused because people say “industrial cybersecurity” and they mean a small shoe factory as well as a railway switching system. It’s just taking time.


[00:35:22] Daemon: How important do you think government regulation is, and international cooperation, in securing ICS? And what role should these entities play in the fight against cyber threats?


[00:35:36] Andrew: Regulation is a controversial thing, and industry by and large hates it, though there are exceptions. Let me talk about an exception for a moment, and then I’ll come back to regulation and cooperation. In most industries, people don’t want regulation, because regulation introduces a new kind of risk.
It introduces compliance risk, and it can be very expensive proving that you comply with things. There’s a lot of paperwork, and the paperwork doesn’t really buy you anything. It seems to be a pure expense.
There’s resistance to regulation, though I was surprised recently when I learned that big actors, big businesses in the shipping industry, are interested in cybersecurity regulation. Why? Because they recognize that they have to do cybersecurity, but they can’t, because the shipping industry is extremely competitive. Profit margins are razor thin; businesses go out of business because they miscalculated the profit margin
and it went negative. So the big shipping companies cannot afford to do any more cybersecurity than their competitors do. They know they should be doing much more of it, and they’re worried about the consequences, but they can’t afford it. So they’re all saying: we’d really like regulation, so that we all have to do what we know we should be doing anyway.
So that’s unusual. Most industries say, “Forget that, I don’t want any regulation.” Is it necessary? Well, it’s complicated, because the space is evolving rapidly. Maybe it’s needed now; I don’t know. The problem with regulation is that if you get it wrong, then everybody has focused their attention on stuff that’s not addressing the risk.
[00:38:00] On the other hand, if you have nothing, a lot of entities do nothing. So I give the shipping industry as one data point. Another data point, one that was something of a surprise recently, was the Colonial Pipeline incident. Colonial is massive in the oil and gas industry, and when the pipeline went down because of a cyber attack, government authorities looked at this and said, that should never have happened. We thought you guys had this under control. After all, who invented the very first industrial cybersecurity standard or guidance, back in the day,
2003, 2004? The first one published was, I think, 2006, because it takes a couple of years to put these together. It was the American Petroleum Institute’s API 1164 standard. ISA 62443 was not the first. ISO 27001:2013, or whichever ISO standard it is that talks about industry, was not first. The petroleum industry was first. Government authorities thought that industry had this nailed and were shocked to discover it didn’t. And bang, nine weeks later, there’s regulation. It was a knee-jerk reaction. Are they the right regulations? Well, there was some interesting stuff in there.
It is a very modern regulation, but it’s still a regulation. I would rather that people and businesses just do the right thing because it’s the right thing to do, like they figured out 30 years ago for safety. Back then they said the threat is just too big, we’re having too many explosions, and whole industries, plural, looked around and fixed the safety problem.
And now safety is so thoroughly ingrained that when people ask, “When’s the right time to involve cybersecurity in your engineering designs?” I come back and say: when’s the right time to involve safety in your engineering designs? From the very beginning, of course. What kind of a stupid question is that?
[00:40:00] When people ask when the time is to involve cybersecurity, we need that same culture shift, and it’s only happening right now. Ask me the question in 10 or 15 years, and the answer will be: nobody cares about regulations, we’re all doing cybersecurity, and it’s designed in from the very beginning, because you must. For now, maybe regulation will serve a transitional role.
But again, regulation generally sets the minimum everyone has to do, and it has a hard time distinguishing between small sites and much more consequential sites. The regulation says you have to do this stuff and you have to prove that you do this stuff. Proving it is expensive, and is the stuff they’re asking us to do even the right stuff?
There are three people at the regulator who put this together. Are they the smartest people in the world? There are issues with regulation, but it might be that, for now, we need some of it. It’s heavily debated in the industry.


[00:40:51] Daemon: Well, thank you very much for that, Andrew. This has been a fantastic conversation.
I’ve learned a lot in the process. Before we go, is there anything you’d want to leave with our listeners, perhaps more information resources, or how to reach you or Waterfall online?


[00:41:12] Andrew: Sure, sure. Where to begin? First, thank you for having me on your podcast.
I would mention to listeners that I’m also a host on the Industrial Security Podcast. You can go to your favorite podcasting app, search for “industrial security,” and find us. It’s not a podcast about Waterfall; Waterfall sponsors it as a public service. It’s a podcast about industrial security.
Like you, we have a different guest on every episode, showing us a different piece of the elephant. And I talked earlier about ransomware and the threat environment; if you’re interested in the threat report, it’s, I think, 20 pages long and half of it is appendix.
[00:42:00] So it’s an easy read. It’s on the Waterfall website; look under the resources menu, under white papers and eBooks, and you’ll find it there. And I would encourage you, if you’re interested in the space: Waterfall puts out a lot of stuff.
I speak at events like this. I do webinars. I’m writing a new book that’s coming out in mid-October. If you get on the Waterfall mailing list, we will inflict upon you a newsletter, typically every two to three weeks, and it’ll have all the reports, and it’ll have the announcement of the book when it’s available.
The easy way to do that is to go to the podcast page, under the Waterfall website’s resources menu, podcasts, and sign up for podcast notifications. The newsletter is the podcast notification: you’ll get a mention of each new episode in the newsletter as they come out every two weeks.
So that’s what springs to mind. Thank you very much, thank you so much, and I wish you the best. This is a field that, as I said, I don’t think is going to go away anytime soon.
The advice I give to young people is this: yes, the economy goes up and down, and it can be hard to find jobs in OT security, but over the next 40 years the security problem is going to get worse before it gets any better. So there are opportunities here. There’s an opportunity to build a career, and an opportunity to contribute in very important ways to a very important field.


About the author

With 25 years of industry experience, Daemon Behr is a seasoned expert, having served global financial institutions, large enterprises, and government bodies. As an educator at BCIT and UBC, speaker at various notable events, and author of multiple books on infrastructure design and security, Behr has widely shared his expertise. He maintains a dedicated website on these subjects, hosts the Canadian Cybersecurity Podcast, and founded the non-profit Canadian Cyber Auxiliary, providing pro bono security services to small businesses and the public sector. His career encapsulates significant contributions to the IT and Cybersecurity community.



