Comments
SooooooMeta t1_j9l0g9a wrote
One of the things philosophy is often searching for is a principle that works over a broad range of circumstances. You need to try a lot of situations, including edge cases, to see if it holds up as one might intend. This is going to require hypotheticals to fill in gaps in the real world data.
IAmNotAPerson6 t1_j9kg2qj wrote
Wow, the only good response to this issue I've seen, thank you lol
loki_cometh t1_j9m3v3f wrote
You articulated it before and better than I could.
Wishingwings t1_j9nzv4r wrote
Definitely, but are we sure that dividing every answer into two versions is not more adequate?
(Math is already showing this in a lot of ways)
Take the question: “is it okay to hide the truth?”
Imagine a young child being brought up by a widow. It would be significantly impactful to share said truth about the death of the father with a child so young in their life. It would be morally more acceptable to hide the bitterness of this truth from the child until it is ready.
Now imagine that your best friend saw your partner cheat, and did not tell you about it. Years go on and once you find out about it the entire house of cards collapses.
I think that to truthfully say whether answers are of a dichotomous nature, we must first answer whether the people we become as we mature are truly us, because they often answer questions very differently from the people we were. While children are individuals, adults are far more cooperative and considerate.
So, what is an individual? Is it defined by who we are becoming as a species, or by the way people are born? I think it's the nature of how this development expresses itself that shows one of the most troubling characteristics of a human: to sacrifice who you are means being able to permanently lie to yourself. How, then, is another adult on this page in a position to discuss anything?
[deleted] t1_j9kf1dk wrote
[deleted]
Solaced_Tree t1_j9kqr08 wrote
Ahh so you're just messing. Glad you could clarify for us, sometimes it's hard to filter out the BS responses but you made sure we knew
Sulfamide t1_j9lo2ro wrote
The edit really elevated the comment from stupid to amusing.
Bjd1207 t1_j9lhzc5 wrote
> Most don't have the stomach for it.
If "it" means more comments like this and people like you, then I absolutely don't have the stomach for it.
minion_is_here t1_j9lx6yw wrote
Lmao This is a great comment. I swear people take silly internet forums way too fucking seriously
PrimalZed t1_j9jlc9o wrote
> First, often people respond to them differently across demographic groups, particularly different cultures, and second; small, irrelevant changes in how thought experiments are worded can change entirely how we respond to them.
These are just known aspects of ethics, not unexpected features only present in thought experiments. If anything, the use of thought experiments (including framing) to expose and analyze these things is useful, not detrimental.
It seems like these objections to thought experiments would only make sense if the response - and ethics in general - is thought to be objective.
PancAshAsh t1_j9k3y8x wrote
If small changes to wording in thought experiments change entirely how we respond to said thought experiments those changes are by definition relevant. That in itself is interesting and worthy of study.
Anathos117 t1_j9ln8yj wrote
It's not just interesting and worthy of study, it calls into question the entire utility of thought experiments. Which is the point of the article, although it does a strangely poor job of explaining why it's important.
If thought experiments are extremely sensitive to framing and demographic variation, then whatever conclusions we reach using them aren't generalizable. That is to say, if we get different answers to the Trolley problem depending on which generation we ask, then we're definitely going to get different answers if we change the trolley into a car, let alone a bigger change like a bullet, explosion, or disease.
And this is something of a general problem with argument by analogy, which is basically what thought experiments are. The conclusions you reach with an analogy often don't generalize to the thing you're drawing a comparison to. They differ enough that you can almost always generate an equally appropriate analogy that reaches the opposite conclusion.
PancAshAsh t1_j9lt1i0 wrote
This is only a problem if you consider ethics and morality to be absolute laws that never change. Of course the responses to thought experiments change over time and across cultures, human thought isn't governed by static and unchanging laws. That's sort of the point. Likewise changing the framing can give some insight in how people think and how that can change.
Anathos117 t1_j9lukbq wrote
> This is only a problem if you consider ethics and morality to be absolute laws that never change.
No, it's a problem if you want to create generally applicable rules or convince people that something is right or wrong. What does the Trolley problem tell us about the ethics of killing people to harvest their organs for lifesaving transplants? Nothing, because despite the fact that you're choosing between killing one person and letting several die, they don't engage our moral intuitions the same way.
Edit: Thought about this a little more, and it's easier to make my point if we reverse the Trolley Problem. Would you pull the lever to switch the trolley from the track with one person to the track with five? Obviously not, that would be monstrous. So we can generalize a rule that reads something like "it's wrong to take an action that you know will increase the number of deaths", right?
So is it wrong to save the life of an organ donor? I think the answer is just as obviously "no". The Trolley Problem has completely failed to generalize.
So what good is the Trolley Problem if it only lets us examine our moral intuitions about scenarios that literally involve choosing which people tied to a track should die? That's not something that anyone is going to encounter.
XiphosAletheria t1_j9lzdxm wrote
I think the response there is that the apparent lack of generalizability means only that you have failed to analyze the situation correctly. What the trolley problem teaches us is that those running a closed system should run it so as to minimize the loss of life within it. That is, if I am entering into a transit system, and a trolley-problem-ish situation arises in it, I should rationally want the people running the system to flip levers and push buttons such that fewer people die, because I am statistically more likely to be one of the five than the one.
Whereas we shouldn't want people using others as means to an end in an open scenario. Again, because the number of people who might want an organ from me at any given moment is really much higher than my odds of needing one myself.
In both cases, the trolley problem shows us that our moral impulses are rooted in rational self-interest, rather than, say, simple utilitarianism.
ulookingatme t1_j9n9itp wrote
As an example, the psychopath agrees to be moral not out of a sense of need or community, but as a result of his own self-interest and his desire to avoid the cost of ignoring laws and social norms. But does that then mean morality involves nothing more than making a self-interested choice?
XiphosAletheria t1_j9qinie wrote
I think of morality as being a complex system emerging from the interplay between the demands of individual self-interest and societal self-interest.
The parts of morality that emerge from individual self-interest are mostly fixed and not very controversial, based on common human desires - I would prefer not to be robbed, raped, or killed, and enough other people share those preferences that we can make moral rules against them and generally enforce them.
The parts of morality that arise from societal self-interest are more highly variable, since what is good for a given society is very context dependent, and more controversial, since what is good for one part of society may be bad for another. In Aztec culture, human sacrifice was morally permissible, and even required, because it was a way of putting an end to tribal conflicts (the leader of the losing tribe would be executed, but in a way viewed as bringing them great honor, minimizing the chances of relatives seeking vengeance). In the American South, slavery used to be morally acceptable (because their plantation-based economy really benefited from it) whereas it was morally reprehensible in the North (because their industrialized economy required workers with levels of skill and education incompatible with slavery). Even within modern America, you see vast differences in moral views over guns, falling out along geographic lines (in rural areas gun ownership is fine, because guns are useful tools; whereas in urban areas gun ownership is suspect, because there's not much use for them except as weapons used against other people).
ulookingatme t1_j9qxy67 wrote
Sure, morals are based upon the social contract and self-interest. That's what I basically said.
Anathos117 t1_j9m1f2i wrote
> What the trolley problem teaches us is that those running a closed system should run it so as to minimize the loss of life within it.
Maybe, but that's absolutely not what people are using the Trolley Problem for, and we don't really need the Trolley Problem to reach that conclusion in the first place. The point of thought experiments is to isolate the moral dilemma from details that might distract from the core intuition, but that's worse than useless because those details aren't distractions, they're profoundly important.
XiphosAletheria t1_j9m3q8e wrote
I think the point of the thought experiment is to help people discover what their intuitions are, what the reasoning is behind them, and where that leads to contradictions. What's important about the trolley problem isn't that people say you should flip the lever. It's that when asked "why?" the answer is almost always "because it is better to save five lives than one". But then when it comes to pushing the fat man or cutting someone up for organs, they say you shouldn't do it, even though the math is the same. At which point people have to work to resolve the contradiction. There's a bunch of ways to do it, but hashing out which one you prefer is absolutely worthwhile and teaches you about yourself.
Anathos117 t1_j9m67db wrote
> There's a bunch of ways to do it, but hashing out which one you prefer is absolutely worthwhile and teaches you about yourself.
But again, it doesn't teach you anything generalizable. Someone who might balk at pushing the fat man might have no problem demanding a pre-vaccine end to COVID restrictions for economic reasons. So it might be intellectually stimulating, but not actually useful.
XiphosAletheria t1_j9n2j56 wrote
I think my main issue here is that I don't think "generalizable" is the same as "useful". I think learning to articulate your moral assumptions, then to interrogate them and resolve any contradictions as they arise are all useful, and really the whole point of philosophy.
Beyond that, I think a lot of the factors people come up with are in fact generalizable, at least for them. That is, once people have resolved the trolley problem to their own satisfaction, the factors they have identified as morally relevant will remain relevant across a range of issues. The trolley problem doesn't reveal much that is generalizable for people as a group, but because morality is inherently subjective, we wouldn't really expect it to.
Anathos117 t1_j9n50m4 wrote
> I think learning to articulate your moral assumptions, then to interrogate them and resolve any contradictions as they arise are all useful, and really the whole point of philosophy.
Again, not what most people are using thought experiments for, and "it's good practice for when you actually have to make a moral judgement about something completely unrelated" is hardly a ringing endorsement for their usefulness.
> the factors they have identified as morally relevant will remain relevant across a range of issues
I don't think they will be. People are weird, inconsistent, and illogical. You don't have some smooth culpability function for wrongdoing that justifies punishment once it rises above a certain threshold, you've got an arbitrary collection of competing criteria that includes morally irrelevant details like how well you slept last night and how long it's been since you last ate.
Great_Hamster t1_j9lpzah wrote
Agreed.
StrayMoggie t1_j9m4o7v wrote
Also, the order of presentation changes the outcomes of responses. These things offer an insight into language and processing that we are nearly blind to. They may actually have more pattern than we believe they do. It is easier, as an outsider looking at something different, to see patterns than to see them from within.
frnzprf t1_j9o0a8b wrote
When we talk about ethical thought experiments like "would you take the organs out of a living person to save the lives of five other people?", the fact that most people agree in daily-life situations while thought-experiment situations produce disagreement, and that small changes make a big difference, could be explained by the idea that human moral intuition is not based on a few ground principles, like "don't murder" or "maximize happiness", but instead on many ideas formed in practical daily life.
That's one critique of ethical thought experiments: they presuppose that there is an elegant set of a few universal moral ground-laws and that those moral axioms are connected to moral intuition.
IAmNotAPerson6 t1_j9kfuru wrote
If you're only familiar with thought experiments or intuition as a pedagogical tool because you've only taken a philosophy 101 class then this makes sense, but they're also used in philosophical arguments all the time, and not only for stuff that's thought to be objective.
TheRushConcush t1_j9jfr5s wrote
Or you know, they're just a good way to explain a dilemma or philosophical notion, most if not all of which have no "correct" answer. Framing in language in general and its influence on our moral judgement of a situation are the actual issues and in my opinion should be the focus instead of "curtailing the use" of good tools. But hey, understanding something is harder than rejecting it.
MrKurteous t1_j9jlcc3 wrote
Sure, but I felt the article made a compelling case for avoiding use of thought experiments as a way of arguing or discovering what's right/wrong. Also, happy cake day!
Killercod1 t1_j9jmeh9 wrote
There is no objective morality. It doesn't really matter what you say in the dilemmas, neither answer is correct or wrong. However, they're good at determining the fundamental aspects of what you personally think is right
Judgethunder t1_j9jo4z1 wrote
That's one theory anyway. There certainly seem to be some pretty clear commonalities in what most people determine as harmful or helpful, or what most people regardless of culture find to be a laudable goal.
Even non human animals have some basic intuitions about reciprocity, compassion, and survival. Some answers seem better suited to achieving a generally positive outcome than others.
And of course you could point to some outliers who might find, for whatever reason, that causing unneeded suffering is somehow ideal for them. But I could also probably find a similar number of people who, when placed in an unlocked cage, decide the best way out is to defecate on the floor.
What I mean to say is people say "There is no objective morality" like that is some kind of given, obvious statement. When it's not. It's just as likely to be a coping mechanism for our lack of ability to make optimal ethical determinations due to our biases and flaws.
Killercod1 t1_j9jsllw wrote
Everyone has different desires and goals. Some want to maximize pleasure, others may want to be zealous with their religion. The only constructive argument to be made is how best to adhere to their morality.
There's definitely a moral philosophy that is the most compatible with a functioning human society. Like a morality that maximizes growth, pleasure, and health of the society (some form of utilitarianism). It may be necessary for creating the most effective and functional society. But, it's not the only morality that exists and some may desire society to be less functional or they may be completely indifferent to it.
I would argue that the most common morality is actually detrimental to society: the morality of capitalism, the belief that private property, productivity, and profit are inherently good. This isn't compatible with humanity and our communal structures. However, it is the current ruling morality.
mackinator3 t1_j9jqo1n wrote
You just wrote all that but didn't really say anything. Besides that you specifically want to exclude things that don't suit your conclusion...
Judgethunder t1_j9jr491 wrote
That's an awfully interesting interpretation.
ChubbiestLamb6 t1_j9ke3vv wrote
>optimal ethical determinations
Optimal how? The thing you seem to be missing is that you must choose a yardstick to assess any decision, action, situation, etc, as better or worse than another option. How can you possibly rank things as more or less optimal unless you've picked an attribute to care about? Your appeal to common values across cultures and species--even setting aside the inherent weakness of cherry-picking examples--hinges on a false equivalence between consensus and objectivity.
The fact that there is no objectively correct yardstick to use is the whole problem. It's not about, like, logistics, or the difficulty of accurately predicting outcomes in a complex system to be able to confidently pick the best actions, or anything like that. Those are all problems that come up after you've picked a yardstick.
I'm not saying your yardstick is a bad one, or an uncommon one. But you did pick it because you like it best for whatever reasons, compelling as they may be.
It seems like what you should be arguing for is something like an "Official Morality", not an objective one. I think failure to distinguish between the two is what leads to a lot of the friction in discussions like these. Reading your comment as an argument that it is possible to create a moral policy that is best suited to promote the things most people need and care about totally avoids the disagreements you're encountering. From everyone else's point of view, you missed the point of what "objective morality" means, and from your point of view, everyone else is bumbling around acting like it's impossible to determine if starving is preferable to being safe and well-fed due to some veil of philosophical technicality. But the real issue is that you're talking past each other.
Judgethunder t1_j9kh9b4 wrote
>From everyone else's point of view, you missed the point of what "objective morality" means, and from your point of view, everyone else is bumbling around acting like it's impossible to determine if starving is preferable to being safe and well-fed due to some veil of philosophical technicality. But the real issue is that you're talking past each other.
Yeah. That's about the sum of it.
ChubbiestLamb6 t1_j9l5mcs wrote
Soo...if you're aware of the problem, is there a reason you don't change your approach to the conversation?
Judgethunder t1_j9mdhdk wrote
Because I think it is indeed a useless philosophical technicality.
There are many objective facts we accept as objective facts because we use our senses to perceive them. Our senses are subjective. Nothing we detect using them is truly objective, from colors, to shapes, to anything at all.
But we set a standard of objectivity based on our senses anyway.
So in the absence of the word of a deity, what kind of objective reality could we possibly expect besides what we can, to the best of our ability, calculate is in the best interest of all humanity and the ecosystem we are a part of?
The fact that it is usually better to eat than to starve is as objective as me looking up and observing the color of the sky.
frogandbanjo t1_j9k4gfz wrote
>What I mean to say is people say "There is no objective morality" like that is some kind of given, obvious statement.
Well, maybe it wasn't always, but I'd say Godel did some pretty compelling work on a highly analogous problem. "There is no objective morality" ought to be understood as simply claiming that you can't prove premises using an argument that initially accepts them as a given.
Remember, you're also bounded on the other side by self interest. Free-standing self interest is widely understood as being amoral, not moral... but of course, people can also disagree with that - and some philosophers have! Indeed, many have posited that it's immoral, while a minority have posited that it's moral!
How very objective.
PrimalZed t1_j9jsup8 wrote
There is no universally true moral statement. There is no way to definitively prove any moral statement. Hence, there is no objective morality.
Judgethunder t1_j9ju677 wrote
Some solutions to problems are going to be objectively better than others in their given context. Morality and ethics are problem solving tools, emergent from the evolutionary process.
Midrya t1_j9jzy2n wrote
Could you provide an example? Certainly there are solutions to problems that maximize for specific goals, but you would need to establish that the goal itself is objectively derived, and not just something that is desired.
Judgethunder t1_j9k2zil wrote
You can deconstruct all frameworks to be meaningless if you want to. But we don't. Our minds and desires are emergent products of evolution with certain common desires leaning toward survival, homeostasis, propagation.
Some outcomes are going to be better than others for this. Some desires and goals are going to be better than others for this.
Could we deconstruct these goals as philosophers and render propagation of our species and our ecosystem and our societies as relatively meaningless? Sure. But we don't. Not really.
PrimalZed t1_j9k6f1m wrote
A social or moral desire being "emergent products of evolution" does not make them objective. It's not even true that all morals are emergent products of evolution.
To give an extreme example to quickly cut to the core here, "We shouldn't press the button that kills all humans" is not an objective statement. It presumes that human life or the continuation of humanity are inherently valuable.
Your position that there is objective morality would be easily proven if you can give an example of an objectively true moral statement.
Judgethunder t1_j9k92bb wrote
Assuming that human life and the continuation of humanity are valuable is a reasonable assumption to make. And an assumption that nearly everyone makes.
PrimalZed t1_j9kjbl0 wrote
"The continuation of humanity is inherently valuable" is not objective. Yes, it is a value that most people hold, but that does not make it an objective truth. At best, that makes it a common axiom.
That you had to qualify "nearly everyone" holds that value itself demonstrates that it is subjective, not objective.
There is no fundamental universal property that makes humanity inherently valuable. Humanity can cease, and the universe will continue on just fine. We can say that's bad, and construct our morals around that axiom, but that doesn't make the axiom objectively true.
brutinator t1_j9jwxaw wrote
I don't really agree. I agree that we might not know what the objective morality is, but I do think that we can't say an objective moral theory doesn't exist.
Killercod1 t1_j9jx5q7 wrote
Why should we all adhere to one interpretation of good?
brutinator t1_j9k9ldf wrote
Why shouldn't we, if the hypothetical interpretation is the correct one?
Killercod1 t1_j9k9tj0 wrote
What makes it the correct one?
brutinator t1_j9kkmaq wrote
That's not the question. I am not listing what makes a correct ethical theory, I am asking why a correct ethical theory should not exist.
Killercod1 t1_j9knkop wrote
It can be personally correct to you (or your group), as in it's consistent with your beliefs and desires. It may be a correct ethical theory, not the correct ethical theory (which is what "objective morality" is attempting to establish). However, using the word "correct" to define an ethical theory is ridiculous. There's no way to prove it's correct, and it doesn't make any sense to assume it can be correct. How would you define a wrong ethical theory?
brutinator t1_j9krgae wrote
You're avoiding the question, or assuming I'm saying something different.
A correct ethical theory is one that maximizes good, a wrong one is one that does not.
Again, I'm not trying to define what THE correct ethical theory is. But we can say that some ethical theories are better than others. For example, the ethical theory "Murder every single person you encounter" is obviously not a good one. So it seems a logical conclusion that one is the best. What it is, I don't know.
But lets assume it exists, and is known: a system that maximizes good with no downsides.
Why shouldn't it be universally followed?
Killercod1 t1_j9kv4db wrote
What is "good"? Why should it be maximized? You sound like a zealous utilitarian.
Playing devil's advocate here: why isn't killing a good thing? What's so obviously wrong about it? Perhaps one may consider human life evil and seek its complete extinction. PETA members come to mind. Perhaps human life isn't as valuable as capitalist profit is. Economy > humanity. There are some who would die on that hill to enforce these ethics.
Obviously, I don't subscribe to these ideas. I consider myself a humanist who wishes to maximize humanity's health and well-being. However, even the question of what counts as "health" and "well-being" is up for debate.
Whether or not something should be universally followed, is an opinion. Particularly, the "should" implies subjectivity. It's completely dependent on your personal beliefs and goals. In the real world, not everyone shares those beliefs and goals. Morals and ethics seem objective, until they face contradicting counterparts. Leading to war.
brutinator t1_j9kznhz wrote
You're not playing devil's advocate, you're ignoring the question and diverting the discussion. If you are going to argue that all actions are amoral and/or equally ethical, then we have no basis for continuing this discussion. To put it in your terms, why bother being a "humanist" if all actions are equally healthy and increase well-being?
All you're saying is that because we don't know, we shouldn't subscribe to a single belief. While I think that position alone is contestable, that's not the question I'm asking, and truth be told, it is a little tautological. Obviously people can't do what they don't know.
To reiterate it a third time, if we DO know a universal ethical theory, why shouldn't everyone follow it?
Killercod1 t1_j9l3r2y wrote
Ethics have no quantifiable value. Since you cannot physically measure how "good" something is, it's entirely subjective. I'm not saying everything is equally valuable. I'm saying that there's no way to objectively determine the moral value of something. You can only determine its value to individual people and groups. What would an all-encompassing good look like?
You always act in your own interest, even if those interests are for someone else's well-being. Your morals are your values. I never said you should be amoral. I'm sharing the fact that there are those with different values. You can call them evil, if you want. But, they will continue to exist. They may even overpower you.
You would have to prove that there is an ethical theory that trumps all others. This is conflicting with real world conditions, because there isn't one. Some people's values may align with other's. But, it's not true for everyone. The only way to make your ethics universal, is to defeat all contradicting ethics and people who uphold them. In doing so, you would be considered a fascist. The road of good intentions is paved with blood.
The point is, by enforcing your "universal objective" ethical theory, you would be eliminating all others. Who's to say that you're not the evil one?
brutinator t1_j9lbgjh wrote
Gotcha. I'm not going to engage in this anymore because you are refusing to answer the central question.
Again, WHAT the "best" theory is is out of scope. I frankly do not care what it looks like.
Again, if you are going to suggest that "good" and "bad" do not exist, then I think the field of ethics is not for you: the entire core supposition of the field is that there exists good of varying degrees. What that is? How to achieve it? Sure, those are things to discuss, but it inherently relies on the premise that you CAN measure moral value. Show me a single ethical theory with decent standing that says that you cannot determine if something is good or bad.
Again, you are constructing strawmen to argue against instead of engaging with the question. Never did I say everyone should be forcibly made to adhere to an arbitrarily decided moral code. Is every action you take that is the same as everyone else's enforced upon you by the threat of external violence? Do you eat? Do you drink water? Do you breathe oxygen? Do you do those things because you'll be executed if you don't? If not, then clearly there are things that everyone does, and can do, without the need for external violence. But I digress because, again, it's out of the scope of the question.
Good bye.
Killercod1 t1_j9lgmzc wrote
Since "good" and "bad" cannot be materially measured, they can only be subjectively determined. They are social constructions, concepts. It's totally illogical to insist that they can be measured. You can only measure material.
How else would you make everyone adhere to your "universal" ethics, other than by enforcing them? Perhaps, you convince people otherwise. However, if you find someone adamantly opposed to it, their existence would contradict your universal theory. As it obviously wouldn't be universal if there's other conflicting ethics that exist.
People perform basic necessary tasks to live. By not performing them, they would be executing themselves, in a sense. Values are materially driven. It's likely that your values have conformed to benefit your own material conditions. If they haven't, I would consider you illogical. Your actions would be unpredictable and inconsistent.
Happy cake day
frnzprf t1_j9o1prk wrote
I agree that there is no universal moral truth.
I heard once the story that Moses went down with the ten commandments and, when he saw that the Israelites worshipped a golden calf, he destroyed the stone tablets out of anger. Then he wrote down the ten commandments again, but slightly differently.
I don't know if that's true. He definitely destroyed the tablets once. It was surprisingly difficult to find the respective parts in the bible - I'm going to try again. The story might very well be not true!
The point is: Moral laws only matter if people know them and agree with them, so in the end what people think is the only thing that matters.
That's not the intended meaning of the bible passages; they were probably just written by different people, or the author forgot what he had written the first time.
Edit: The relevant section is Exodus 34. God says that he will write the same words as on the first tablet. Then he goes on to state some commandments, which aren't the classical "10 commandments" but it's not clear to me which of them will go on the stone tablets or whether they are maybe just some additions that are not important enough to be written in stone.
The bible is not important to my point though - only subjective morals matter.
Sulfamide t1_j9loi05 wrote
Because it makes a more cohesive society.
Killercod1 t1_j9lpmx2 wrote
What if a conflicting society were the ideal? It would allow for one to express themselves. A cohesive society may be an oppressive society.
Sulfamide t1_j9nrb5u wrote
> It would allow for one to express themselves
How so? What kind of expression?
And what type of conflict are you thinking of? Doesn't conflict allow for violence and suffering? Is it possible for those to be ideal?
Killercod1 t1_j9obiln wrote
A difference in core values. Not everyone's situation is best suited for one ethical theory. It may be that you live in inhospitable conditions, requiring you to be unethical to survive and thrive.
Conflict allows one to express their individuality and identity. Ethics and morals allow for violence as well, punishing the unruly. Anything can be ideal.
Sulfamide t1_j9ofk9u wrote
I am a gay man and I live in a muslim country where it is illegal and punished. If I had to choose, what should I wish for, that my people would share my values, or that suddenly the laws and ethics of my county allow for differences in values?
Right now I would choose the former as it would surely make me happier. I would be happier because on top of not having to hide, I would feel closer to my family, friends, and fellow countrymen. It is more important to me to share values with people than to be permitted to have different ones, as it seems to me more like a compromise than an ideal.
ulookingatme t1_j9na2ip wrote
Morality is subjective and is of a fungible nature. You can theorize all you like, but reality and history tell us this is true.
brutinator t1_j9oqoq2 wrote
Sure, just like how the sun revolved around the earth. After all, 'reality' and history told us that was true too.
It's a special kind of hubris to just decide that we know everything about anything.
ulookingatme t1_j9qy66c wrote
If you give me an example of one morally objective rule that is universally accepted I'd say you may have a point. Standing by.
brutinator t1_j9rbzg2 wrote
It's morally permissible to breathe.
ulookingatme t1_j9rv13c wrote
Tell that to the guy on death row.
brutinator t1_j9rxfck wrote
That's odd, I've never seen someone say that a guy on death row breathing is immoral or unethical. Want to show me some evidence of that?
cloake t1_j9l5ik8 wrote
Right, like the trolley problem is more about people not wanting to get their hands dirty than about risking something to do some good in the world.
TheRushConcush t1_j9k0tb8 wrote
I respectfully disagree, I think the case is quite poor as it entirely misses the essence of the problem and as others have stated as well, implies an objectively correct answer to philosophical issues can exist.
IAmNotAPerson6 t1_j9kf6mj wrote
> But hey, understanding something is harder than rejecting it.
I absolutely agree. You should give that a try for this issue.
TheRushConcush t1_j9ki0iy wrote
If you think I showed a lack of understanding in relation to "this issue", please, do explain why. In case you were confused, I was referring to the disposing of a useful but dangerous tool instead of learning how to use it properly.
IAmNotAPerson6 t1_j9kkrr6 wrote
Well, 1) your second sentence almost exactly mirrors what the title already says, in a tone that for some reason thinks it contradicts it, and 2) it only refers to thought experiments as pedagogical devices to clarify issues rather than their broader use in philosophical literature, and the more general use of intuition, to actually arrive at allegedly correct solutions.
mirh t1_j9k2rle wrote
> First, often people respond to them differently across demographic groups, particularly different cultures,
No shit, as with anything and everything? Even semantic memory is still inevitably sprung from a life of experiences.
> and second; small, irrelevant changes in how thought experiments are worded can change entirely how we respond to them.
And that's a plus, not a negative thing?
Just like in normal "physical" experiments, figuring this out allows you to notice nuances and variables that you had never thought mattered or even just existed.
You can't criticize people with the hindsight of their future self having discovered them to be non-trivially wrong. Ironically this is the kind of insight that the Gettier problem eventually leads you to.
> and their assessment of free will and responsibility differ from the one found in other parts of the world. Women have different intuitions about moral dilemmas such as the Trolley cases from men.
That is literally the observational point of the entire exercise of experiment-making. In fact, thank god you had such simplified thought experiments to begin with, because no way anything more convoluted would have given you a better time.
Beyond the most obvious "you should always be careful with X" platitude, this article is absolute trash.
> Of particular interest is the recent emphasis on conceptual engineering, i.e. on attempts to reform philosophically significant concepts.
That's known as ordinary language philosophy and it's like a hundred years old by now.
Wizzdom t1_j9ki66s wrote
I think he should have focused more on why it could be problematic to apply thought experiments to real world applications such as AI or self-driving cars. That would be a much more interesting conversation imo.
mirh t1_j9ko0fn wrote
I could swear I had read a very insightful comment/article in this regard, but I cannot find it anymore...
Anyhow, I see where you are coming from. But then you aren't talking about thought experiments "per se" anymore (this dude even lowkey criticizes Gettier somehow!) but just warning not to talk out of one's ass like in any other kind of argument.
Like, those atrocious "should the car kill the elderly or the baby" are either more of an engineering problem than truly philosophy, or they are ethics from somebody that thinks either too sanctimoniously about people or too stupidly about computers.
fencerman t1_j9kc0wc wrote
"Thought experiments" are less about RESOLVING ethical dilemmas, and more about CLARIFYING the real underlying issues of those dilemmas.
It's like calibrating a measurement device. You need to explore the limitations on it to know how to correct for biases and errors.
TheRealClyde t1_j9jyu36 wrote
I completely disagree with this, and generally I have the words to say why, but I'm struggling here.
Sure, moral intuitions depend heavily on context. But that's like the whole point of everything. If I get two different responses on the morality of something from 2 different people, that IS insightful, even if you believe that there is only one way that you can ethically act in that scenario.
For example, the trolley problem. One person gives a detailed explanation of why they would pull the lever. One person gives a detailed response about why they wouldn't. Yes, both of those responses are dependent on the individual and the context, but why does it not matter because of that? Does the thought that both of these people disagree on what to do not generate philosophical insight?
brucey-baby t1_j9m9dt1 wrote
I gave examples of both as well as the only disturbing logical argument I could think of for not pulling the lever.
oddlywarmpotato t1_j9jvb37 wrote
Everyone's focussing on ethics, but the use of thought experiments is widespread. The author of this article mentions the Gettier problems in epistemology, I'm currently churning through stuff like Mary the Scientist and pZombies in philosophy of mind.
In ethics I think the problem is more whether the thought experiments are formulated to "lead the witness". I question whether thought experiments like pZombies that are designed to flush out metaphysical truths are actually helpful.
black_brook t1_j9k02mn wrote
Even if an experiment isn't designed to lead the witness, it can still have that effect. Experiments and analogies tend to create situations rife with pitfalls for our natural tendency to be led astray by language.
Jazzmatazz7 t1_j9kqklk wrote
You say: "...but these scenarios are deceptive..."
Is it impossible for some of these experiments to actually capture the subtle aspects you refer to (context and individual)?
While I cannot point you to a classification of types of thought experiments, there are of course several kinds which may involve group decisions or individual decisions.
There is no reason why individual responses are not to be considered.
With respect to context, as an example, a thought experiment that entails having to imagine something familiar isn't far-fetched and can be said to be within a realistic context.
What you have not done with your question is specify why you presume that context and individual responses are not considered or are left out of thought experiments, and then state why, on this basis, these scenarios (i.e. thought experiments) are deceptive.
GrimThor3 t1_j9kzua5 wrote
Haidt’s work on emotional moral foundations led him to create a variety of ethical/moral dilemmas. He ran into the problem of dealing with the context of the dilemma and how to account for different perspectives. It's better written than this article.
BernardJOrtcutt t1_j9jw0eh wrote
Please keep in mind our first commenting rule:
> Read the Post Before You Reply
> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.
This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
Thisisunicorn t1_j9kdirz wrote
"Different groups and cultures may have different responses to thought experiments."
...and this is supposed to make them LESS valuable?
[deleted] t1_j9khqol wrote
[removed]
BernardJOrtcutt t1_j9ki4tg wrote
Your comment was removed for violating the following rule:
>Argue your Position
>Opinions are not valuable here, arguments are! Comments that solely express musings, opinions, beliefs, or assertions without argument may be removed.
Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
william-t-power t1_j9lazjk wrote
As I read recently: artificially constructed situations lead to artificial reactions. Our minds have "error correction" built in that takes context into account (e.g. those phrases with "the" written twice across two lines), which can make the analysis of oddly constructed questions nuanced.
EricFromOuterSpace t1_j9lzz62 wrote
This is why Sam Harris is so insufferable.
“What if insert impossible scenario therefore x”
An exhausting pointless way to try to understand the world.
brucey-baby t1_j9m7u2c wrote
Morality is difficult, especially in a case like this where you decide life and death. I think there could be arguments for both. By acting you kill someone and kind of save 5. Through inaction you hold some responsibility for the death of 5. I think what would be most relevant in the decision-making process is what knowledge, if any, you have of the 6 people.
Do you know any of them? Do any of them have a visual appearance that you can relate to from your own life experiences? These could impact the decision-making process. Excluding knowing the 1 man, I think you probably just switch the tracks. My reasoning for thinking this is simple: the greater good.
If 1 man dies, one family and one set of relationships suffer. Whereas if 5 die, 5 families and sets of relationships are hurt. I do not say this is the morally correct decision, and I would have to accept that I had killed a person by my actions. If I had to make a choice between letting 5 die or killing a different one by pushing a button, I would push the button. Though again, this excludes all other possible relevant factors. (Though if someone did not make a choice but froze in indecision, I would not call that immoral.)
The only arguments I could see for not pushing the button would require more information than provided, excluding the case where you believe population reduction is actually in the greater good. One could make an argument for that, though it's kind of a dark one.
brucey-baby t1_j9marg1 wrote
I will put forth the idea that there is some form of innate morality, best described by the golden and silver rules. I put this forward based only on my own personal experiences in life. Though the contrast between being good and feeling guilt and shame seems obvious. I know you could argue that those are societally ingrained in me, and I would say yes, that is part of it. Though I would in turn point out that if there were not also a physical aspect that caused a natural sense of morality, societies would have been very difficult to form. The understanding of mutual benefit comes from a sense of morality, I think.
brucey-baby t1_j9mbnpe wrote
Just to extend a bit further, I think it is actually more often immoral behavior that societies ingrain into people. I think immoral behaviour is more learned behaviour and a loss of self. As examples I will point to hatred, racism, war, and selfishness. These things, once created, propagate themselves. I don't think any baby is born racist or wanting to invade another country.
ItTookAges t1_ja2qfgu wrote
Thought experiments are useful for verbally expanding logical conclusions that are otherwise expressed too densely for some people.
For example, a friend once said angrily, "I just found out that vaccine causes autism!" That was the first I'd heard of that, but I knew it couldn't be true given the fact that vaccines do almost nothing and autism is a permanent trait, not a temporary set of symptoms.
Anyway, I told her that, if there are any kids with autism who have never been vaccinated, that would basically negate that hypothesis. She said, "I know for a fact there are. I know several of them. But still, you never know." I wanted to say, "Actually, sometimes you know with mathematical certainty," but when people are scared or angry, logic is almost useless. That is when thought experiments are necessary. A thought experiment that utilizes the same type of abstraction can be a verbal mirror of the logic of the concept.
IAI_Admin OP t1_j9jd7ik wrote
Philosophers, metaphysicians or social psychologists frequently employ thought experiments, such as the Trolley or Gettier cases, to study important epistemic notions or how people think about what is right or wrong, what is morally permissible or not. But these experiments suffer from significant limitations, argues philosopher of science Edouard Machery. In the Trolley case, for instance, people respond differently depending on the way in which the test is phrased and the order in which they read it. This is what psychologists refer to as “framing effects.” Moreover, demographic and cultural factors can have a significant effect on how people respond to these experiments. Edouard Machery asks us to recognise that intuition is not as reliable as we would like to think and to be more critical of the conclusions we draw from thought experiments.
Wizzdom t1_j9kio08 wrote
Isn't the point of thought experiments to challenge our intuitions in the first place?
bumharmony t1_j9jk7dh wrote
They also give all reasoning a bad name. But academic philosophy is designed to keep people ignorant, like with the veil of ignorance thought experiment.
[deleted] t1_j9jnmcj wrote
[deleted]
LobYonder t1_j9jpomi wrote
"What if there were no hypothetical questions?"
Yes it's an old joke, but relevant. Reasoning is inherently based on analogy and abstraction. I would claim "what ifs" are necessary to form any sort of world view or philosophical position. The fact that context can change our view of the correct answer is interesting but does not defeat the purpose of the question. Arguing about which context is most relevant is just where it starts to get interesting.