A common maxim of computer security is “the attacker knows the system”. In other
words, the system should still be secure even given a hypothetical attacker who
knows exactly how it works. It’s the flip side of relying on “security by
obscurity”.
In cryptography, the same concept is known as Kerckhoffs’s principle, but it
can be applied to systems in general.
Most engineers are aware of this concept, but it’s also common to find a vague
attitude that you can still “get away with” relying on little details that a
hypothetical attacker won’t actually know. For example:
- “We don’t need authentication on this webhook URL containing a UUID – no
attacker could guess it.” (See the sketch after this list.)
- “We can put in a little authentication backdoor to make integration testing
easier – no attacker would know it’s there to exploit it.”
- “We don’t need to log details of what triggered this action – we know that
only this other service can trigger it.”
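
Taking the first example: rather than hoping the UUID stays secret, the usual
fix is to authenticate each webhook request. Here is a minimal sketch in
Python, assuming the sender and receiver share a secret and the sender signs
each request body with HMAC-SHA256 (the secret value and the header handling
are hypothetical):

```python
import hashlib
import hmac

# Hypothetical shared secret, provisioned out of band – not part of the URL.
WEBHOOK_SECRET = b"replace-with-a-real-secret"

def verify_webhook(body: bytes, signature_header: str) -> bool:
    """Return True only if the body was signed with our shared secret."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, so the comparison itself
    # doesn't leak anything about the expected signature.
    return hmac.compare_digest(expected, signature_header)
```

With a check like this in place, merely knowing the URL – whether by guessing
the UUID or by remembering it from a previous job – is no longer enough to
trigger the action.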
These are obviously bad ideas when made explicit like this, but they can sneak
in under the radar due to a subconscious belief that an attacker won’t really
know the system in that much detail.
You can immediately put a stop to this belief, subconscious or otherwise, with
the example of a disgruntled former employee. They remember how these obscure
details work, might even have implemented them, and might still have a copy of
the source code from their time at the company.
This hypothetical employee doesn’t have to be disgruntled, either. Maybe
they’re simply unscrupulous, and someone offers them money for the source code.
Bearing this kind of attacker in mind, the security risks of the above ideas
seem quite severe, and you won’t be inclined to let them into the codebase.