Ah, I see. You're considering both inadvertent user error due to inadequate understanding, and conscious acts resulting from complete failure to think through one's actions. When you say "the result of someone else's folly is your pain", I suspect it would be true to say that this is not so much a case of direct stupidity as of negligence on the other person's part. (Granted, that negligence may be stupid in its magnitude.)
There are situations and applications where a "zero defects" standard is critical. They are surprisingly rare. There are very few situations in which a potential defect, if the possibility is anticipated, cannot be provided for via a safety device, an adequate engineering margin of safety, a failover device, or some other technological precaution. The deadly defects, in this case, are the unanticipated ones — perhaps because a system is too complex to fully predict its behavior in edge cases, perhaps because the properties of a material or device are not fully understood. (For example, Kapton was used extensively as lightweight insulation in aerospace applications on the strength of its known excellent resistance to high temperatures, except that nobody knew that above a certain critical temperature it becomes a conductor, since — having never suspected the possibility — no one ever thought to test for it.) The problem is that, by definition, unanticipated failure modes cannot be predicted, so it's impossible to guarantee that you're prepared for them all. (Example: United Airlines Flight 232. The DC-10 had three completely separate, fully redundant hydraulic control systems. McDonnell Douglas apparently never considered the possibility that a catastrophic failure of the tail engine could simultaneously disable all three.)
The thing is, these cases really resolve not to simple stupidity, but to failure to manage (and/or perhaps to fully grasp) overwhelming complexity.
[I think this is the sort of thing you mean. I'm not 100% certain.]