Phil003 t1_iyovzno wrote

Well, there actually are objective mathematical answers to what is "safe enough" in use in safety engineering (at least in theory... see my remarks at the end).

At the academic level there are basically two generally accepted methods to determine what is "safe enough":

(Remark: To handle this question, the concept of risk is used. In this terminology, risk is basically the combination of the magnitude of a potential harm and the probability of that harm occurring. So if there is a 1% probability that 1000 people will die, the risk is 10, and likewise if there is a 10% chance that 100 people will die, the risk is again 10.)
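The risk concept above is just a product, which a few lines of Python can make concrete (the numbers mirror the examples in the remark; the function name is mine, not standard terminology):

```python
def risk(probability: float, magnitude: float) -> float:
    """Risk = probability of a potential harm x magnitude of that harm."""
    return probability * magnitude

# 1% probability that 1000 people die -> risk 10
print(risk(0.01, 1000))  # 10.0
# 10% chance that 100 people die -> the same risk, 10
print(risk(0.10, 100))   # 10.0
```

Note that two very different scenarios (rare catastrophe vs. more likely smaller accident) can carry the same risk value, which is exactly the point of the definition.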


  1. One is the ALARP principle ("as low as reasonably practicable"). This is basically a cost-benefit analysis. In a very simplified way, what you do is determine the current risk of the system. Let's say there is a 10% probability that a tank in a chemical plant will explode before the planned closure date of the plant, and if this happens, on average 100 people would die in the huge explosion and the resulting fire; then the risk is 0.1*100 = 10. Then you assign a monetary value to this: let's say you assume that one human life is worth 10 million € (this is just a random number, see the end of my post), so risk*human_life_cost = 100 million €. Now let's say you can decrease the risk to 5 (e.g. instead of 10%, there will be only a 5% probability that 100 people will die) by implementing a technical measure, e.g. you install automatic fire extinguishers everywhere in the chemical plant, or something like that. If you do this, you reduce risk*human_life_cost to 50 million €, so you have a 50 million € benefit. So how do you decide whether to do this according to the ALARP principle? Easy: you consider the cost of implementing the technical measure (buying, installing, maintaining etc. all the automatic fire extinguishers). If it costs less than the benefit (50 million €), you should do it; if it would cost more than the benefit, then it would not be "reasonably practicable" and therefore you should not do it.
  2. The other approach is basically to use the concept of acceptable risk. In this case you first determine the acceptable risk (e.g. a worker in a chemical plant shall have a lower probability than 1 in a million of dying in an accident per year, i.e. out of one million workers, at most one shall die each year) and then you reduce the risk posed by the system until you reach this level. In this model the cost of reducing the risk is irrelevant; you must do whatever is necessary to reach the level of acceptable risk.
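The two decision rules above can be sketched in a few lines of Python. This is only an illustration: the function names and the 10 million € value-of-life figure come from the worked example, not from any standard.

```python
# Assumed monetary value of one human life, from the ALARP example above.
VALUE_OF_LIFE_EUR = 10_000_000

def risk(probability: float, fatalities: float) -> float:
    """Risk = probability of the accident x expected number of deaths."""
    return probability * fatalities

def alarp_says_implement(risk_before: float, risk_after: float,
                         measure_cost_eur: float) -> bool:
    """ALARP: implement a measure iff its cost is below the monetized risk reduction."""
    benefit_eur = (risk_before - risk_after) * VALUE_OF_LIFE_EUR
    return measure_cost_eur < benefit_eur

def acceptable_risk_met(current_risk: float, acceptable_risk: float) -> bool:
    """Acceptable-risk criterion: cost is irrelevant, only the threshold matters."""
    return current_risk <= acceptable_risk

# The tank example: 10% chance of 100 deaths (risk 10), halved to 5
# by fire extinguishers -> 50 million € benefit.
print(alarp_says_implement(risk(0.10, 100), risk(0.05, 100), 30_000_000))  # True: 30 M < 50 M
print(alarp_says_implement(risk(0.10, 100), risk(0.05, 100), 80_000_000))  # False: 80 M > 50 M

# Acceptable-risk example: target of at most 1 fatality per million workers per year.
print(acceptable_risk_met(current_risk=3e-6, acceptable_risk=1e-6))  # False: keep reducing
```

Notice that under the acceptable-risk rule the cost argument simply does not appear, which is exactly the difference between the two models described above.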

I am a functional safety engineer working in the automotive industry, so I don't claim to have a general overview of every domain of safety engineering, but let me add some remarks to these academic models based on the literature and discussions with other experts in my field:

  • ALARP: sounds very nice in theory, but I think the main problem is that pretty much no regulatory body or company would publish (or even write down! too much risk of leaked documents) their assumption about the worth of a human life expressed in money; otherwise the witch hunt would immediately start...
  • Concept of acceptable risk:
    • Here it is important to highlight that what can be considered an acceptable risk is decided by society, and it can change significantly depending on the system in question. This also pretty much means that the decision is not necessarily rational. E.g. people accept a higher risk while driving a car than when they fly as passengers. (My understanding is that this is because people feel "in control" while driving, but feel helpless on board a plane. So this is not a rational decision.)
    • Perhaps this acceptable risk concept looks strange, but it really makes sense. Consider car driving. Every year over 1 million people die in traffic-related accidents worldwide, and people are fully aware that the same can happen to them any day they drive a car. Still, they choose to take the risk and sit in their cars every morning. Society has basically decided that 1 million people dying every year in car accidents is an acceptable risk.
    • Publishing acceptable risk values poses similar challenges to publishing the worth of a human life in money, but the situation is a bit better: there are actually some numbers available in the literature for certain cases (though not everywhere; e.g. in my domain, the automotive industry, we kind of avoid writing down a number).
  • In my field of expertise (developing safety-critical systems including complex electronics and software), estimating the probability that the system will fail and cause an accident is just impossible (describing the reasons would take too long here). Therefore there is no really reliable way to estimate the probability of an accident, and so it is not possible to quantify the risk with reasonable precision. As a result, neither of the above two methods is really applicable in practice in its "pure" form. (And I am quite sure the situation is pretty similar in many other fields of safety engineering.)

So my summary is that there exist generally accepted academic models to answer the question of what is "safe enough". These models are, in theory, the basis of the safety engineering methods followed across industries, so applying mathematics to make moral decisions (e.g. to determine what is an acceptable probability of somebody dying in an accident) is kinda happening all the time. In practice the whole story is much more complicated, e.g. for the reasons mentioned above, so what really happens is that we use these models as "guidance" and basically try to figure out what is safe enough based mostly on experience. I would be very surprised if these academic models were used anywhere in significant numbers in a "clean" and "direct" way.
