The False Promise of IT Security in OT Environments

The assumption that OT security is just IT security in a different building is not only wrong — it is dangerous. Applying IT controls to OT environments does not reduce risk. It introduces a new category of it.

Part of the Phase I — Observation series

By Michael E. Ruiz

The instinct makes sense on the surface. You have a mature cybersecurity program on the IT side — endpoint detection, vulnerability scanning, patch management, zero-trust segmentation. Someone asks why the same controls are not running on the plant floor. The answer comes back as some version of: we are working on it. The assumption underneath that answer is that OT security is just IT security in a different building.

It is not. And the gap between those two things is not a matter of degree. It is a matter of kind.

IT security is built around a specific priority ordering. Confidentiality first: protect sensitive data from disclosure. Integrity second: prevent unauthorized modification. Availability third: keep systems running, but accept that downtime is recoverable. This ordering reflects the nature of information systems, where breaches carry the most immediate legal, financial, and reputational consequence. You can restore a server. You cannot un-disclose customer records.

In operational technology environments, that hierarchy does not bend. It inverts. Availability is not a secondary concern — it is often a physical constraint. A natural gas compressor station, a pharmaceutical batch reactor, a water treatment plant: these systems run continuously because stopping them has consequences that extend well beyond a service outage. Integrity matters, but differently — the integrity of process state, not data records. Confidentiality, while relevant for intellectual property and configuration data, is rarely the dominant risk.

In IT, a security failure is a data event. In OT, it can be a physical one.

The dominant risk in OT is harm — to people, to process, to physical infrastructure. Which means the highest-order constraint is not confidentiality, and not even availability in the IT sense. It is safety. Every security decision in an OT environment must first answer a question that most IT security tools were never designed to ask: does this action create any condition under which the physical system could behave unsafely?

This is not theoretical. Endpoint agents have crashed PLCs by consuming the processor cycles a controller needed to complete its scan cycle. Vulnerability scanners have sent malformed packets that dropped time-critical traffic between a distributed control system and its field devices. Active Directory changes have broken trust relationships that SCADA systems depended on for operator access. None of these failures required a threat actor. The security tool itself was the disruption.
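The safety-first question can be made concrete as a pre-flight gate that runs before any security action touches a device. The sketch below is illustrative, not a real tool: the asset fields, zone names, and action categories are invented for the example, and a production gate would be driven by the site's own process hazard analysis rather than a hardcoded table.

```python
# Hypothetical pre-flight gate: before an active security action touches a
# device, check process context first. Asset records, zone labels, and the
# action categories are invented for illustration.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    zone: str            # e.g. "safety", "control", "dmz" (illustrative labels)
    in_production: bool  # is the physical process currently running?

# Actions grouped by how intrusive they are to a live control system.
PASSIVE = {"span_port_capture", "read_historian"}
ACTIVE = {"vuln_scan", "agent_install", "patch"}

def gate(asset: Asset, action: str) -> str:
    """Return 'allow', 'defer', or 'deny' for a proposed security action."""
    if action in PASSIVE:
        return "allow"   # observation without perturbation
    if asset.zone == "safety":
        return "deny"    # never touch safety instrumented systems ad hoc
    if asset.in_production:
        return "defer"   # active actions wait for a planned outage window
    return "allow"

plc = Asset("compressor_plc_01", zone="control", in_production=True)
print(gate(plc, "vuln_scan"))          # active action on a running process
print(gate(plc, "span_port_capture"))  # passive monitoring
```

The design choice worth noticing is the default: active actions against a running process defer rather than proceed, which is the inverse of how most IT patch pipelines are tuned.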

Applying IT security controls to OT environments without understanding process context does not reduce risk. It introduces a new category of it.

The engineering community has understood this for decades. IEC 61511 and IEC 61508 define systematic approaches to ensuring process systems fail into safe states. IEC 62443, the most comprehensive OT cybersecurity framework, builds on that foundation — not by porting IT controls into industrial environments, but by defining protection levels tied to consequence, accounting for the operational constraints of legacy systems, and distinguishing between what must be protected and what must remain available for safe operation. These distinctions get lost when organizations treat OT security as an IT problem with a different network prefix.
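One way to see the difference in IEC 62443's framing is that target security levels are assigned per zone based on the consequence of compromise, not on data sensitivity. The mapping below is a deliberately simplified illustration, not the standard's actual procedure — IEC 62443-3-2 defines a full zone-and-conduit risk assessment — and the consequence categories are invented for the sketch; only the four security levels and their attacker-capability descriptions come from the standard.

```python
# Illustrative only: a toy consequence-to-target-security-level mapping.
# IEC 62443-3-2 prescribes a full zone-and-conduit risk assessment; the
# consequence categories and this direct lookup are invented for the sketch.

CONSEQUENCE_TO_SL_T = {
    "nuisance": 1,          # SL 1: protect against casual or coincidental violation
    "production_loss": 2,   # SL 2: intentional attacker, simple means, low resources
    "equipment_damage": 3,  # SL 3: sophisticated means, moderate resources
    "loss_of_life": 4,      # SL 4: sophisticated means, extended resources
}

def target_security_level(worst_case_consequence: str) -> int:
    """Pick a zone's target security level from its worst-case consequence."""
    return CONSEQUENCE_TO_SL_T[worst_case_consequence]

print(target_security_level("equipment_damage"))
```

The point of the sketch is the axis, not the numbers: protection level is a function of physical consequence, which is exactly the question the CIA triad never asks.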

None of this makes IT security principles irrelevant in OT. Segmentation matters. Identity matters. Logging matters. But the implementation must be derived from the operational context — not imposed from the IT playbook. An air-gapped historian is not a security failure. It may be a deliberate architectural choice that protects process availability. A VLAN that would be inadequate in a data center might be exactly right in a remote terminal unit network where the alternative is no segmentation at all. Maturity in OT is calibrated against operational reality, not against the IT standard.

The more consequential problem is structural. When IT security owns the OT security mandate — which happens more often than it should — the bias toward familiar tools and familiar frameworks creates blind spots that are predictable and dangerous. IT practitioners see an unpatched endpoint and reach for the patch. OT practitioners see a running process and ask what happens if we touch it. Both instincts are correct in their own domain. The failure occurs when only one of them is in the room.

The CIA triad was designed for systems where data is the asset. In OT, the asset is the physical process — and the triad does not survive the crossing.

The right model is a shared accountability structure: IT security provides governance expertise and tooling discipline, operational engineering provides process knowledge and safety constraints, and someone owns the synthesis of both. That person has to be fluent in both languages — the language of threat models and the language of scan cycles. Without that synthesis, the organization will keep applying the wrong framework to the right problem, and the consequences will eventually be physical.

The monitoring question — how you observe OT environments without perturbing them — follows directly from the constraints described here. That is where the next failure mode lives.
