
Tinac4 t1_iym8rv6 wrote

How does the driver decide that one situation is “safe enough” while the other one isn’t? What’s the right choice if the odds of an accident were somewhere in the middle like 0.01%?

I’m not saying that there’s an objective mathematical answer to what “safe enough” means. There isn’t one—it’s a sort-of-arbitrary threshold that’s going to depend on your own values and theory of ethics. However, these situations do exist in real life, and if your theory of ethics can’t work with math and probabilities to at least some extent, you’re going to get stuck when you run into them.

6

chrispd01 t1_iymqet8 wrote

I think in reality it comes down to intuition. You have an experiential sense of what is a reasonable course of action to take. To the extent a mathematical decision gets made, it's at the level of "I'll probably be ok".

Thinking about it, there is a good analogy in the world of sports: look at the change in basketball shot patterns. The change is traceable to applying an economic/statistical approach to those decisions.

But my point is that people are more like players before the analytical approach took over. They tend to use intuition and "feel" more than the sort of evaluation you are talking about.

In fact it's really interesting how wrong people's intuitions are in those situations … making the less efficient choice, choosing the wrong strategy, etc.

That to me shows that in practice people do not ordinarily make the sort of calculations you were describing. It doesn't mean that they shouldn't make them, just that they don't.

9

[deleted] t1_iymxq1u wrote

[deleted]

0

chrispd01 t1_iyn2qwr wrote

It looks like that, except in practice it's not. There isn't a real analysis going on in terms of real data, etc. Hence the basketball model: once people start actually applying analysis, the behavior markedly changes.

That means that people aren't doing that, because once they start doing it, their behavior changes.

The counter to that, I think, is that people believe they are doing that but are doing a bad job of it. But in general I don't think they really are: they don't make a conscious evaluation of the steps to solve the problem; they just intuit it. They may think they exercised judgment, but in practice they did not.

5

Phil003 t1_iyovzno wrote

Well, there are actually objective mathematical answers to what "safe enough" means that are used in safety engineering (at least in theory… see my remarks at the end).

At the academic level there are basically two commonly cited methods for determining what is "safe enough":

(Remark: To handle this question, the concept of risk is used. In this terminology, risk is basically the combination of the magnitude of a potential harm and the probability of that harm occurring. So if there is a 1% probability that 1000 people will die, the risk is 10, and likewise if there is a 10% chance that 100 people will die, the risk is again 10.)
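The risk definition in the remark above amounts to an expected-value calculation. A minimal sketch (the function name and the two scenarios are just the illustrative figures from this thread, not a real safety-engineering API):

```python
def risk(probability: float, deaths: float) -> float:
    """Risk as defined above: probability of harm times magnitude
    of harm (here measured in expected deaths)."""
    return probability * deaths

# Both scenarios from the remark come out to the same risk value:
print(risk(0.01, 1000))  # 1% chance that 1000 people die
print(risk(0.10, 100))   # 10% chance that 100 people die
```

Both calls evaluate to a risk of 10, which is why the two scenarios are treated as equivalent under this definition.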


  1. One is the ALARP principle ("as low as reasonably practicable"). This is basically a cost-benefit analysis. In a very simplified form: first, determine the current risk of the system. Say there is a 10% probability that a tank in a chemical plant will explode before the plant's planned closure date, and if it does, on average 100 people would die in the explosion and the resulting fire; the risk is then 0.1 × 100 = 10. Next, assign a monetary value to this. If you assume one human life is worth 10 million € (just an illustrative number, see the end of my post), then risk × human_life_cost = 100 million €. Now suppose you can reduce the risk to 5 (i.e. only a 5% probability that 100 people will die) by implementing a technical measure, e.g. installing automatic fire extinguishers throughout the plant. Doing so reduces risk × human_life_cost to 50 million €, so the measure has a 50 million € benefit. How do you decide whether to implement it under the ALARP principle? Easy: compare the cost of the measure (buying, installing, maintaining, etc. all the automatic fire extinguishers) against the benefit. If it costs less than 50 million €, you should do it; if it costs more, the measure is not "reasonably practicable" and you should not.
  2. The other approach is to use the concept of acceptable risk. Here you first determine the acceptable risk (e.g. a worker in a chemical plant shall have a lower probability than 1 in a million of dying in an accident per year, i.e. out of one million workers, no more than one shall die each year) and then reduce the risk posed by the system until you reach this level. In this model the cost of reducing the risk is irrelevant: you must do whatever is necessary to reach the acceptable level.
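The two decision rules above can be sketched in a few lines. This is only an illustration of the logic described in the comment; the function names, the 10 M€ value of a life, and the 1-in-a-million threshold are the thread's illustrative figures, not real regulatory values:

```python
VALUE_OF_LIFE_EUR = 10_000_000  # assumed figure from the example above

def alarp_should_implement(risk_before: float, risk_after: float,
                           measure_cost_eur: float) -> bool:
    """ALARP rule: implement a safety measure iff its cost is no more
    than the monetised risk reduction it buys."""
    benefit_eur = (risk_before - risk_after) * VALUE_OF_LIFE_EUR
    return measure_cost_eur <= benefit_eur

def acceptable_risk_met(annual_death_probability: float,
                        acceptable: float = 1e-6) -> bool:
    """Acceptable-risk rule: cost is irrelevant; the risk simply must
    be brought down to the agreed threshold."""
    return annual_death_probability <= acceptable

# Tank example: risk drops from 0.1*100 = 10 to 0.05*100 = 5 expected
# deaths, a benefit of 50 M€. A 30 M€ fire-suppression system passes
# the ALARP test; an 80 M€ one does not.
print(alarp_should_implement(10, 5, 30_000_000))  # True
print(alarp_should_implement(10, 5, 80_000_000))  # False

# Under the acceptable-risk rule, a 2-in-a-million annual death
# probability is still too high against a 1-in-a-million threshold.
print(acceptable_risk_met(2e-6))  # False
```

The key contrast the two functions make visible: the ALARP rule takes the measure's cost as an input, while the acceptable-risk rule deliberately ignores cost altogether.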

I am a functional safety engineer working in the automotive industry, so I don't claim to have a general overview of every domain of safety engineering, but let me add some remarks to these academic models based on the literature and discussions with other experts in my field:

  • ALARP: sounds very nice in theory, but I think the main problem is that pretty much no regulatory body or company would publish (or even write down! too much risk of leaked documents) their assumed monetary value of a human life, or the witch hunt would immediately start…
  • Concept of acceptable risk:
    • Here it is important to highlight that what counts as an acceptable risk is decided by society, and it can change significantly depending on the system in question. This also means the decision is not necessarily rational. E.g. people accept a higher risk while driving a car than while flying as a passenger. (My understanding is that this is because people feel "in control" while driving but feel helpless on board a plane, so it is not a rational decision.)
    • Perhaps this acceptable-risk concept looks strange, but it really makes sense. Consider driving. Every year over 1 million people die in traffic-related accidents worldwide, and people are fully aware that the same can happen to them any day they drive. Still, they choose to take the risk and sit in their cars every morning. Society has basically decided that 1 million deaths per year in car accidents is an acceptable risk.
    • Publishing acceptable-risk values has challenges similar to publishing the monetary worth of a human life, but the situation is a bit better: there are actually some numbers available in the literature for certain cases (though not everywhere; in my domain, the automotive industry, we kind of avoid writing down a number).
  • In my field of expertise (developing safety-critical systems including complex electronics and software), estimating the probability that the system will fail and cause an accident is simply impossible (describing the reasons would take too long here). There is therefore no really reliable way to estimate the probability of an accident, and so the risk cannot be quantified with reasonable precision. Consequently, neither of the above methods is really applicable in practice in its "pure" form. (And I am quite sure the situation is similar in many other fields of safety engineering.)

So my summary is that there exist generally accepted academic models for answering the question of what is "safe enough". These models are, in theory, the basis of the safety engineering methods followed across industries, so applying mathematics to make moral decisions (e.g. determining an acceptable probability of somebody dying in an accident) is kind of happening all the time. In practice, the whole story is much more complicated, e.g. for the reasons mentioned above. What really happens is that we use these models as guidance and mostly try to figure out what is safe enough based on experience. I would be very surprised if these academic models were applied anywhere, to any significant extent, in a "clean" and "direct" way.

4

Tinac4 t1_iyqec3c wrote

Great comment! Thanks for the thorough explanation.

1