That seems to be the conclusion in the (fascinating) article below, and it makes frightening sense. The reasons being:
1. We can identify and eliminate many single-point failures, but in doing so we make systems more complex and thus more prone to multi-point failures.
2. Pilots are human. Humans make mistakes, including mistaken assumptions about which failures are and aren't possible; under stress, they can ignore warnings about failures they "know" can't happen.
3. The world's a complicated place.
Where this gets interesting is when you start talking about very dangerous technologies, where you can minimize the likelihood of a catastrophe, but where the cost of a catastrophe would be huge. Nuclear plants are an obvious example. Lots we can do to make them extremely safe. But … #ddtb
Disaster book club: What you need to read to understand the crash of Air France 447 – Boing Boing
