Submitted by beforesunset1010 t3_za96to in philosophy
EyeSprout t1_iyldpwt wrote
I don't think this article sees or explains the full extent of how far math can go to describe morality. All it talks about are utility functions, but math can go so much further than that.
Many moral rules can arise naturally from social, iterated game theory. Some of you might know how the iterated prisoner's dilemma gives us the "golden rule" or "tit for tat" (for those of you who don't, look at this first before reading further: https://ncase.me/trust/), but stable strategies for more complex social games give rise to social punishment and, as a result, to rules for deciding whom to punish and for which actions.
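If anyone wants to play with this outside the browser, here's a bare-bones Python sketch of the iterated game (my own toy code with the standard 5/3/1/0 payoffs, not the ncase simulation):

    # Bare-bones iterated prisoner's dilemma (my own toy code, not the ncase sim).
    # Standard payoffs: both cooperate -> 3 each, both defect -> 1 each,
    # lone defector -> 5 while the cooperator gets 0.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(own_history, their_history):
        # Cooperate first, then copy whatever the opponent did last round.
        return 'C' if not their_history else their_history[-1]

    def always_defect(own_history, their_history):
        return 'D'

    def play(strategy_a, strategy_b, rounds=100):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pay_a, score_b + pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
    print(play(always_defect, always_defect))  # (100, 100): mutual punishment
    print(play(tit_for_tat, always_defect))    # (99, 104): exploited once, then retaliation forever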
Most people would believe that this merely explains how our moral rules became accepted and used in society, and doesn't really tell us what an "ideal" set of moral rules would be. But I think that, even if it might not uniquely specify what morality is, it puts some strong constraints on what morality can be.
In particular, I think that morality should be (to some degree) stable/self-enforcing. By that, I mean that a set of moral rules should be chosen so that, if most of society is following that set of rules, then for most people following the rules rather than discarding them is in their personal self-interest, in the same way that cooperation is in each player's self-interest in the iterated prisoner's dilemma under the "golden rule" or "tit for tat" strategy.
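Here's the self-enforcing part as a back-of-the-envelope check, using the same toy payoffs and 100-round matches as above (again my own numbers, not anything from the article):

    # If nearly everyone plays tit for tat, a lone defector earns less than a
    # conformist, so following the rule is in your own interest. If nearly everyone
    # plays "always cooperate", a lone defector earns more, so that rule doesn't
    # enforce itself.
    ROUNDS = 100

    conformist_among_tit_for_tat = 3 * ROUNDS          # mutual cooperation every round
    defector_among_tit_for_tat = 5 + 1 * (ROUNDS - 1)  # exploit once, then mutual defection

    conformist_among_cooperators = 3 * ROUNDS           # mutual cooperation every round
    defector_among_cooperators = 5 * ROUNDS              # exploit the cooperators every round

    print(conformist_among_tit_for_tat, defector_among_tit_for_tat)  # 300 104 -> defecting doesn't pay
    print(conformist_among_cooperators, defector_among_cooperators)  # 300 500 -> defecting pays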
[deleted] t1_iymrdrq wrote
[deleted]
enternationalist t1_iynhi6p wrote
They just specifically said that this wouldn't tell us an "ideal" set of morals.
[deleted] t1_iynllw1 wrote
[deleted]
enternationalist t1_iynwp28 wrote
Yep. So, that being the case, I'm not sure I understand who your question was directed to?
[deleted] t1_iynxehh wrote
[deleted]
enternationalist t1_iyocmv0 wrote
I suppose I wouldn't infer that, but I see how you are reading it; if I say "Look, this blender can't make a perfect smoothie that everyone would like", to me that doesn't imply that I think a perfect smoothie liked by everyone can exist; I'm just clarifying that such a concept isn't the goal.
I think what they are really trying to say is that the method constrains morality such that there are only a few local maxima of stability - only some moral systems can be stable. It's not that it says these systems are or are not morally good; in fact, it doesn't assign them any sort of "goodness" score - it only tells us what is socially stable enough to be perpetuated as a moral system.
So, if our goal is to arrive at a moral system, this method theoretically lets us discard many unstable possibilities.
In this way, this method can reject a common set of suboptimal ("non-ideal") solutions, even if "ideal" solutions are totally unique for each person as you suggest, so long as we all agree with the premise that stability is good. It relies on that common criterion, even if all other criteria are totally unique.
That's how some "non-ideal" solutions can be consistently identified even if "ideal" is highly personal. It cannot identify ALL non-ideal solutions for all people - that can't be done without asking literally every human what they'd prefer - but it CAN identify a consistent subset of solutions that will not be functional, regardless of personal views (unless you disagree with the basic premise of stability!)
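To make that filtering step concrete, here's a toy sketch (my own invented payoff numbers, not anything EyeSprout wrote): treat each candidate norm as a strategy in a symmetric game and keep only the ones that are best responses to themselves.

    # Hypothetical long-run payoffs: payoff[a][b] is what playing a earns against b.
    payoff = {
        'always_cooperate': {'always_cooperate': 3, 'always_defect': 0, 'tit_for_tat': 3},
        'always_defect':    {'always_cooperate': 5, 'always_defect': 1, 'tit_for_tat': 1},
        'tit_for_tat':      {'always_cooperate': 3, 'always_defect': 1, 'tit_for_tat': 3},
    }

    def is_stable(norm):
        # Self-enforcing: no alternative strictly beats the norm when almost
        # everyone else is following the norm.
        return all(payoff[other][norm] <= payoff[norm][norm] for other in payoff)

    print([norm for norm in payoff if is_stable(norm)])
    # ['always_defect', 'tit_for_tat'] -- 'always_cooperate' is filtered out.
    # Note that 'always_defect' survives too: the filter says nothing about goodness,
    # only about what can persist.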
[deleted] t1_iyomn2t wrote
[deleted]
EyeSprout t1_iyofnq7 wrote
The stability condition itself is a concept independent of "ideal" morality. I was using the idea of an "ideal" system of morality for reference because it's what people seem to be most familiar with, even if most people here probably don't believe in the existence of an ideal set of moral rules themselves.
As I said, the stability condition doesn't uniquely define a set of moral rules; it's possible for multiple different sets of moral rules to satisfy it at the same time. Different people with different values will still arrive at different sets of moral rules that all satisfy the stability condition.
A rationale behind caring about the stability condition in a system of morality is that actual systems of morality and ethics all tend to approximately follow the stability condition, due to evolutionary pressures. A moral system that is not (approximately) stable in practice won't persist very long and will be replaced by a different system. So the stability condition is "natural" and not arbitrarily decided by some individual values. Few conditions like that exist, so it's a valuable tool for analyzing problems of morality.
[deleted] t1_iyokw2u wrote
[deleted]
EyeSprout t1_iyom6dr wrote
Absolute stability is difficult since the world is constantly changing, but the change is slow enough that evolution does tend to produce approximately stable systems. That's a straightforward result of the math: less stable states change quickly, and therefore your system spends less time in them.
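Here's a quick numerical way to see that, with made-up hawk/dove-style payoffs (my own toy model, nothing rigorous): run replicator dynamics and count how many generations the population spends in each region of the state space.

    # Two behaviours ("hawk" and "dove") with hypothetical payoffs chosen so the
    # stable state is a 50/50 mix. Track how long the population spends in each
    # 10%-wide slice of the state space.
    p = 0.02                  # initial hawk fraction, far from the stable mix
    time_in_bin = [0] * 10
    for _ in range(200):
        time_in_bin[min(int(p * 10), 9)] += 1
        f_hawk = 1 * p + 4 * (1 - p)
        f_dove = 2 * p + 3 * (1 - p)
        mean = p * f_hawk + (1 - p) * f_dove
        p = p * f_hawk / mean  # discrete replicator update
    print(time_in_bin)
    # The unstable low-hawk region is passed through in a handful of generations;
    # almost all 200 generations sit in the bin bordering the stable 50/50 mix.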
[deleted] t1_iyomgr2 wrote
[deleted]
EyeSprout t1_iyon8r9 wrote
The oxygen catastrophe is about the worst counterexample you could pick here. The oxygen catastrophe happened slowly enough for all forms of life to settle into niches, enough for game theory to direct evolution and for a stability condition to apply. Those niches were approximately stable while they existed.
That's all that the stability condition needs to be applied. It's not some complicated concept.
[deleted] t1_iyonzdu wrote
[deleted]
EyeSprout t1_iyopym1 wrote
For example, in the iterated prisoner's dilemma, "always cooperate with your opponent" is not stable, because your opponent's optimal strategy against it is to defect every turn. The simulation I linked in my original comment shows a ton of strategies that are not stable, and shows quite directly how they quickly get eliminated by evolution.
For a simple example from evolution: most mutations harm the organism and are unstable. If most organisms in a population had a very harmful mutation and a small subpopulation didn't, that small subpopulation would quickly take over the larger population. Hence, that mutation is unstable.
A slightly less trivial example would be blind altruism in a situation where your species is severely starved of resources. If most animals were blindly altruistic and a small number were not and took advantage of the altruistic ones, then again, that small number would outcompete the larger population. So blind altruism isn't stable.
Of course, we can't find many real-life examples of unstable strategies; that's because they tend to be quickly eliminated by evolution. If they appear, it's usually only temporarily.
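If it helps, here's the blind-altruism case as a toy frequency model (my own made-up cost/benefit numbers):

    # Altruists pay a cost to help whoever they meet; exploiters accept help but
    # never give it. Fitness = baseline - costs paid + benefits received.
    baseline, cost, benefit = 5.0, 1.0, 3.0

    p = 0.01  # exploiters start as 1% of the population
    for _ in range(60):
        p_altruist = 1 - p
        f_altruist = baseline - cost + benefit * p_altruist  # pays the cost every interaction, helped when it meets an altruist
        f_exploiter = baseline + benefit * p_altruist        # never pays, still helped when it meets an altruist
        mean = p * f_exploiter + (1 - p) * f_altruist
        p = p * f_exploiter / mean  # discrete replicator update
    print(round(p, 3))  # close to 1.0: the exploiters have all but taken over, so blind altruism isn't stable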
EyeSprout t1_iyos2ps wrote
Ah, wait, just in case... when I say "stability" it has nothing to do with the stability of governments and things like that. I mean it more in the physics sense: small perturbations shouldn't cause large effects.