are more likely to contain vulnerabilities that attackers can exploit. Where appropriate, I’ll
share anecdotes to provide examples of the mindset issue at hand.
My focus for the past several years has been on large-scale environments such as major
corporations, government agencies and their various enclaves, and even nation states. While
many of the elements are applicable to smaller environments, and even to individuals, I like
to show the issues in larger terms to offer a broader social picture. Of course, painting with
such a broad brush requires generalizations, and you may be able to find instances that
contradict the examples. I won’t cite counterexamples, given the short space allotted to the
chapter.
The goal here is not to highlight particular technologies, but rather to talk about some
environmental and psychological situations that caused weak security to come into being. It is
important to consider the external influences and restrictions placed on the implementers of
a technology, in order to best understand where weaknesses will logically be introduced. While
this is an enjoyable mental game to play on the offensive side of the coin, it takes on new
dimensions when the defenders also play the game and a) prevent errors that would otherwise
lead to attacks or b) use these same techniques to game the attackers and how they operate.
At this point, the security game becomes what I consider beautiful.
The mindsets I’ll cover fall into the categories of learned helplessness and naïveté, confirmation
traps, and functional fixation. This is not an exhaustive list of influencing factors in security
design and implementation, but a starting point to encourage further awareness of the
potential security dangers in systems that you create or depend on.
Learned Helplessness and Naïveté
Sociologists and psychologists have discovered a phenomenon in both humans and other
animals that they call learned helplessness. It springs from repeated frustration when trying to
achieve one’s goals or rescue oneself from a bad situation. Ultimately, the animal subjected to
this extremely destructive treatment stops trying. Even when chances to do well or escape
come along, the animal remains passive and fails to take advantage of them.
To illustrate that even sophisticated and rational software engineers are subject to this
debilitating flaw, I’ll use an example where poor security can be traced back to the roots of
backward compatibility.
Backward compatibility is a perennial problem for existing technology deployments. New
technologies emerge that must be deployed even though they are incompatible with, or at the
very least substantially different from, existing solutions.
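To make the danger concrete, consider a minimal sketch of how keeping a legacy mode alive can undermine a system. All names here are hypothetical, not drawn from any real protocol: a server retains a weak "legacy-v1" mode for old clients, and negotiation picks the best version both sides claim to support.

```python
# Hypothetical version negotiation: the server keeps a weak legacy mode
# alive for backward compatibility, and picks the strongest version
# that both sides claim to support.

# Ordered strongest-first; "legacy-v1" survives only for old clients.
SERVER_SUPPORTED = ["secure-v3", "secure-v2", "legacy-v1"]

def negotiate(client_offer):
    """Return the strongest version present in both lists, or None."""
    for version in SERVER_SUPPORTED:
        if version in client_offer:
            return version
    return None

# A modern client negotiates the strong version...
print(negotiate(["secure-v3", "legacy-v1"]))  # -> secure-v3

# ...but an active attacker who strips the strong versions from the
# client's offer silently downgrades the session to the weak mode.
print(negotiate(["legacy-v1"]))               # -> legacy-v1
```

The weakness lies not in any single mode but in the negotiation itself: as long as the legacy mode remains reachable, an attacker can force even fully modern endpoints down to it.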
At each point in a system’s evolution, vendors need to determine whether they will forcibly
end-of-life the existing solutions, provide a migration path, or devise a way to allow both the
legacy and modern solutions to interact in perpetuity. All of these decisions have numerous
ramifications from both business and technology perspectives. But the decision is usually