The Shape of Cybersecurity
Understanding cybersecurity can seem like a daunting task. It’s often treated as a field reserved for the deeply technical - a realm of jargon, acronyms, and invisible digital machinery. But the real barrier isn’t the complexity of cybersecurity itself. It’s the way we approach it.
If we change how we think - if we start from intuition instead of technology - we can build a complete and lasting understanding of cybersecurity without using a single technical term.
To do that, we start not with “cyber,” but with models.
Models – The Foundations of Understanding
A model is an abstract representation of something in the world. Humans create models constantly - mental, linguistic, mathematical, visual. A blueprint of a building is a model. A sketch of a car is a model. Even language itself is a model - a symbolic system that represents our thoughts, which in turn represent the world.
We model because we cannot hold all of reality in our minds at once. So, we simplify. We capture what matters and ignore what doesn’t.
Imagine a single line. That line could represent anything - a road, a beam of light, the number one. The point is not what it represents, but that it is a model - a simplified abstraction of something larger.
Now add another line. Then another. Soon you have a square. Add more lines again, and the square becomes a cube. With each step, the model gains complexity - more dimensions, more detail, more realism.
Models are useful because they simplify, yet the natural tendency is to make them progressively more complicated in an attempt to reflect the complexity of reality.
You can imagine this in real-world terms: at first, the line is a road. Then you realize the road has a driver and a car - so your model expands to a square. Add the car’s condition, the driver’s alertness, the weather, and the road’s quality - now your model becomes a cube. Each new line or surface represents an added dimension of understanding.
But here’s the catch: no model is ever perfect.
Reality always bends, warps, and surprises. The “straight line” of the road might actually curve. The cube might not have perfect edges. Our driver might behave unpredictably. So while models can become more complicated, they never perfectly match the complex world they represent.
Reality continues to diverge from even our most complicated models.
And this difference – this gap between what we imagine and what exists – is where risk begins.
Risk – The Space Between Models and Reality
Risk is, at its core, the measurement of difference - the distance between our model of reality and reality itself.
When we expect the road to be straight but it curves unexpectedly, that’s risk. When our model of how something should behave doesn’t line up with how it actually behaves, that’s risk.
By looking at risk this way, it becomes a much less intimidating concept. All risk is really saying (or asking) is: “in what way or ways do we think reality will look different from our planned model?”
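To make that concrete, here is a minimal sketch in Python. The function name and the numbers are invented for illustration - a toy, not a standard formula:

```python
# A toy illustration (names and numbers invented): risk as the
# distance between what the model expects and what reality delivers.

def risk(expected: float, observed: float) -> float:
    """The differential between our model and reality."""
    return abs(observed - expected)

# The model says the road is straight (0 degrees of curvature);
# reality curves by 12 degrees. That 12-degree gap is the risk.
print(risk(expected=0.0, observed=12.0))  # -> 12.0
```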
We can get more sophisticated with how we ask that question by adding in some qualifiers, like:
- “How regularly do we think these differences will occur?” That is, the likelihood of the risk.
- “When differences occur, how large do we think they will be?” That is, the magnitude of the risk.
- “If and when they occur, what things do those differences really impact?” That is, the impact of the risk.
- “Based on how regularly and how large the differences are, and what those differences impact, how big of a deal is it really?” That is, the overall severity of the risk.
And while these qualifiers are important for getting into more advanced understandings of risk, it’s important to see that they all derive from the fact that risk itself is the differential between model and reality. Also, for those keeping score, these qualifying questions are themselves further, more advanced models - with more surfaces and lines where distortion can occur.
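If it helps to see those qualifiers in one place, here is a hedged sketch - the field names and the simple multiplication are a toy expected-loss scoring scheme, not an industry formula:

```python
from dataclasses import dataclass

@dataclass
class Differential:
    """One way reality might diverge from our model. Fields are illustrative."""
    likelihood: float  # how regularly we expect the divergence (0 to 1)
    magnitude: float   # how large we expect it to be when it occurs
    impact: float      # how much the affected things matter to us

    def severity(self) -> float:
        """'How big of a deal is it really?' as a toy expected-loss score."""
        return self.likelihood * self.magnitude * self.impact

# A frequent but small, low-impact divergence...
pothole = Differential(likelihood=0.9, magnitude=0.1, impact=0.2)
# ...versus a rare but large, high-impact one.
flood = Differential(likelihood=0.01, magnitude=0.9, impact=1.0)
print(pothole.severity(), flood.severity())  # 0.018 vs 0.009
```

Notice that the toy score can rank a frequent nuisance above a rare catastrophe, or the reverse, depending entirely on the assumptions we feed it - which is exactly the point: the score is only as good as the model behind it.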
As our models evolve, so does the nature of that difference. The more complicated our systems become, the more places there are for drift, misalignment, and error to occur. And this drift doesn’t just happen in space - it happens in time.
Our models are always time-bound snapshots. What’s true at one moment might shift in the next. So risk isn’t static. It moves. It fluctuates. It grows and contracts as our understanding changes and as the world itself changes around us.
When we think in models, we implicitly see the world as static; in reality, the world is constantly reshaping itself across space and time.
In essence, risk is the dynamic gap between how we think the world works and how it actually behaves - across space and time.
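A small, invented simulation can show that movement: a world that keeps changing, and a model that only gets refreshed occasionally.

```python
import random

random.seed(7)  # fixed seed so the illustration is repeatable

world = 0.0  # what reality actually is
model = 0.0  # our time-bound snapshot of it

for t in range(6):
    world += random.uniform(-1.0, 1.0)  # the world keeps reshaping itself
    if t % 3 == 0:
        model = world                   # we only re-model occasionally
    gap = abs(world - model)            # the risk at this moment
    print(f"t={t}  world={world:+.2f}  model={model:+.2f}  risk={gap:.2f}")
```

The gap collapses each time we re-model and grows again as the world moves on - risk fluctuating across time, exactly as described above.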
Once you understand that, you’ve already internalized the foundation of cybersecurity.
The Sources of Differential
So what causes these differentials? And why can’t we just build ever more sophisticated models that perfectly match reality?
Differentials - the gaps between our models and the real world - arise in three fundamental ways:
Acts of Nature - Unmodeled Chaos. Sometimes the world simply refuses to behave according to pattern. We build a house on land that’s always been safe, and then a hundred-year flood hits. Nature introduces randomness that no model can fully capture.
Accidents - Unintentional Deviations. These come from human error or misunderstanding. A new user presses the wrong button, an engineer misreads a diagram, or someone forgets a step in a procedure. These aren’t malicious - they’re just the ordinary friction of human operation.
Intentional Manipulation - Exploiting the Gap. This is when someone deliberately takes advantage of the space between how a system is modeled and how it actually behaves.
For this last scenario, think of a person who pretends to be from IT security support - the very team meant to protect the organization - and calls employees asking for their passwords under the guise of “system testing.”
That’s a “hack.” A “hacker” is someone who discovers and manipulates the differential between model and reality.
And not all hacks involve code. Culture, trust, and social norms are also models.
When someone convinces an employee to hand over credentials, the attack isn’t on the machine - it’s on the social trust model. It’s the same kind of exploit as a technical intrusion in a computer network.
In both cases, the vulnerability lies not in the metal, but in the model.
Cybersecurity – Three Models in Motion
Cybersecurity, at its most fundamental level, is the discipline of managing these differentials through models. It lives entirely within the space between what we think a system is, what it actually is, and how we try to keep those two aligned.
We work with three interrelated models:
Architectural & Operating Models - how we believe our systems, data, and users are structured and interact.
Risk Models - how we believe things might deviate from that structure or fail.
Control Models - how we plan to prevent or mitigate those deviations.
Cybersecurity, then, is the art of continuously tracking how these three models relate to one another - and how they drift apart - over space and time. Interwoven among these models is a hidden fourth model, which is usually cultural: how we expect humans to behave.
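As a rough sketch - every system property below is invented - you can picture the discipline as holding several descriptions of one system side by side and watching where they disagree:

```python
# Three descriptions of one system, plus reality itself. In practice each
# is a rich document or dataset; here they are just sets of assumptions,
# all invented for illustration.

reality       = {"users authenticate", "backups run nightly", "vendor has admin access"}
architecture  = {"users authenticate", "data is encrypted", "backups run nightly"}
risk_model    = {"users authenticate", "data is encrypted"}   # lags the architecture
control_model = {"users authenticate"}                        # lags further still

# Drift is whatever one layer assumes that the next layer doesn't reflect.
print("architecture vs reality: ", architecture ^ reality)
print("risks not yet modeled:   ", architecture - risk_model)
print("risks not yet controlled:", risk_model - control_model)
```

In this toy, the architecture both assumes something false (“data is encrypted”) and misses something true (“vendor has admin access”) - and each downstream model inherits its own lag on top of that.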
But these models are never perfectly aligned. Our understanding of the architecture lags behind reality. Our understanding of risk lags behind that. And the controls we design to mitigate risk lag behind both.
Hackers operate within the gaps between reality and our many layers of models.
This creates a kind of staggered choreography - a dance between systems, risks, and responses, all slightly out of sync with one another. The goal is to narrow those gaps, but perfect synchronization - zero risk - is almost impossible.
The Drift We Can Never Eliminate
Why can’t we eliminate risk completely, including cybersecurity risks?
Because our understanding of every system - and the risks within it - is always historical. It’s built on what we already know, not on what will happen next. The future, by its nature, contains uncertainty. Unexpected events - “black swans” - will always appear.
Even if we could perfectly predict those risks, the cost and complexity of designing a flawless model - one that exactly mirrors reality - would likely exceed its benefits. The effort to create perfection would cost more than the imperfections themselves.
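One way to sketch that trade-off, with entirely invented figures: keep refining the model only while a refinement removes more expected loss than it costs to build.

```python
# A toy cost-benefit loop (all figures invented): each refinement halves
# the remaining model/reality gap, but costs more to engineer each time.

gap = 100.0             # expected loss from the remaining differential
refinement_cost = 10.0  # cost of the next round of modeling work
total_spent = 0.0

while gap / 2 > refinement_cost:  # refine only while it pays for itself
    gap /= 2                      # the model edges closer to reality...
    total_spent += refinement_cost
    refinement_cost *= 2          # ...but perfection gets pricier

print(f"residual risk: {gap:.1f}, spent on modeling: {total_spent:.1f}")
# The loop stops well short of zero - managed imperfection.
```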
So we live in a world of managed imperfection. We continuously update our models, refine our controls, and try to shrink the differential - but the drift never disappears entirely. It’s part of the system.
The Pursuit of Alignment
Modern automation and artificial intelligence offer the promise of reducing this gap - of aligning our models more closely with reality. They can process change faster, adapt to data more dynamically, and respond to threats with greater precision.
And yet, even with these tools, the asymmetry remains. The model still trails reality, just by a smaller margin.
That’s not failure - it’s the nature of the universe. Every act of prediction, control, or protection is an act of modeling. Every model contains uncertainty. And every instance of cybersecurity is an attempt to reconcile that uncertainty, to bring our abstractions closer to truth.
Closing Reflection
In the end, cybersecurity isn’t merely about defending computers or networks. It’s about defending alignment - keeping the models we hold in our minds, our code, and our organizations as close to reality as possible.
It is the study of how our mental lines, squares, and cubes bend and twist against the real world - and the practice of reshaping them again and again to fit.
We began with a simple line, an abstract model of something in the world. Cybersecurity, at its essence, is the art of noticing where that line curves - and learning to bend with it.