Submitted by beforesunset1010 t3_za96to in philosophy
Ok_Meat_8322 t1_iyye0so wrote
You can only "solve" moral problems with logic or mathematics once you've already assumed a particular moral philosophy or ethical framework- consequentialism, for instance.
But which moral philosophy/ethical framework is correct or superior is the crucial question; once you have an ethical framework the solution to most moral dilemmas follows fairly straightforwardly, and in the case of utilitarianism/consequentialism may even boil down to no more than simple arithmetic... whereas in other moral frameworks (e.g. deontic systems) quantities are irrelevant and so mathematics has nothing to say.
So this blog's thesis isn't all that objectionable, so far as it goes, but it seems to me that it's addressing only the least tricky or difficult aspect of moral reasoning, and so isn't telling us anything particularly useful or anything we didn't already know or tend to agree on.
iiioiia t1_iz0rmk7 wrote
> You can only "solve" moral problems with logic or mathematics once you've already assumed a particular moral philosophy or ethical framework- consequentialism, for instance.
What if you merely present all of the valid options in a steel-manned manner, making no presumptions or epistemically unsound assertions along the way?
Ok_Meat_8322 t1_iz1twka wrote
If you don't assume any value judgment or normative statements, you cannot conclude with any value judgments or normative statements; any argument that did the latter without doing the former would necessarily be deductively invalid.
And it has nothing to do with the manner of your presentation; "steel-manned" or otherwise, you still run afoul of Hume's law if you attempt to conclude an argument with normative or morally evaluative language when you did not include any among your premises.
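Schematically, Hume's law amounts to this (a minimal sketch; the premises are invented for illustration):

```latex
\begin{array}{ll}
P_1\ (\text{descriptive}): & \text{action } A \text{ causes needless suffering} \\
P_2\ (\text{normative}):   & \text{one ought not cause needless suffering} \\
\hline
\therefore\ C\ (\text{normative}): & \text{one ought not perform } A
\end{array}
```

Drop $P_2$ and the inference is no longer deductively valid; the "ought" in $C$ has to enter through some premise.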
iiioiia t1_iz1vf78 wrote
> If you don't assume any value judgment or normative statements, you cannot conclude with any value judgments or normative statements; any argument that did the latter without doing the former would necessarily be deductively invalid.
Right, don't do that either. Pure descriptive, zero prescriptive.
> And it has nothing to do with the manner of your presentation, "steel-mannered" or otherwise you still run afoul of Hume's law if you attempt to conclude an argument with normative or morally evaluative language if you did not include any among your premises.
And if you aren't making an argument?
Ok_Meat_8322 t1_iz1x6jn wrote
>Right, don't do that either. Pure descriptive, zero prescriptive.
But then you can't conclude with a moral judgment. Presumably solving moral dilemmas involves being able to make correct moral judgments wrt the dilemma in question, right?
>And if you aren't making an argument?
But you're needing to make an inference, yes? In order to come to a conclusion as to the correct answer or correct course of action wrt a given moral problem or dilemma? You definitely don't need to be making an explicit or verbal argument, but if you're engaging in a line of reasoning or making an inference to a conclusion, then the same applies: you need to assume a particular moral framework (or at least certain moral/normative premises).
iiioiia t1_iz1ynlq wrote
> But then you can't conclude with a moral judgment.
Correct.
> Presumably solving moral dilemmas involves being able to make correct moral judgments wrt the dilemma in question, right?
Perhaps certain conditions can be set and then things will resolve on their own. Each agent in the system has onboard cognition, and agents are affected by their environment, their knowledge/belief, and the knowledge/belief of other agents in the system. Normalizing beliefs (ideally: a net decrease in delusion, but perhaps not even necessarily) could change things for the better (or the worse, to be fair).
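One way to picture the idea (a toy sketch only; the averaging rule and every parameter here are illustrative assumptions, not a proposal from the thread): treat each agent's belief as a number and let "normalizing" nudge it toward the beliefs around it.

```python
import random

# Toy model of "normalizing beliefs": each agent holds a belief in [0, 1]
# and repeatedly moves a small step toward the group mean (a DeGroot-style
# averaging dynamic). Whether convergence is for the better or the worse
# is exactly the open question above.
def normalize_beliefs(beliefs, influence=0.1, steps=50):
    for _ in range(steps):
        mean = sum(beliefs) / len(beliefs)
        beliefs = [b + influence * (mean - b) for b in beliefs]
    return beliefs

agents = [random.random() for _ in range(10)]    # initial disagreement
print(round(max(agents) - min(agents), 3))       # spread before
agents = normalize_beliefs(agents)
print(round(max(agents) - min(agents), 3))       # spread after: much smaller
```

Note that the dynamic shrinks disagreement without ever issuing a moral judgment - which is the "purely descriptive" flavour being gestured at.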
> But you're needing to make an inference, yes? In order to come to a conclusion as to the correct answer or correct course of action wrt a given moral problem or dilemma?
I'm thinking speculatively, kind of like "I wonder, if we did X within this system, what might happen?" Not a risk-free undertaking, but that rarely stops humans.
> You definitely don't need to be making an explicit or verbal argument, but if you're engaging in a line of reasoning or making an inference to a conclusion, then the same old and you need to assume a particular moral framework (or at least certain moral/normative premises).
To the degree that this is in fact necessary, that would simply be part of the description as I see it - if something is necessarily true, simply disclose it.
Ok_Meat_8322 t1_iz25jjm wrote
>Correct.
But then you can't resolve a moral problem or dilemma, the topic of this thread. When it comes to reasoning or logic, you can't get out more than you put in: if you want to come to a conclusion involving a moral judgment or moral obligation/prohibition, you need premises laying down the necessary moral presuppositions for the conclusion to follow. And mathematics or logic is of no avail here.
>Perhaps certain conditions can be set and then things will resolve on their own. Each agent in the system has onboard cognition, and agents are affected by their environment, their knowledge/belief, and the knowledge/belief of other agents in the system. Normalizing beliefs (ideally: a net decrease in delusion, but perhaps not even necessarily) could change things for the better (or the worse, to be fair).
Sure, and none of that is objectionable; but the OP is talking about using mathematics or logic to solve moral problems, and my point is simply that mathematics or logic only become useful after the hard part has already been done, i.e. determining what sort of moral framework or what sorts of moral presuppositions are right or correct.
Like, if you're a utilitarian you can use simple arithmetic in many situations to decide what course of action maximizes happiness and minimizes unhappiness, but the tricky part is determining whether one should be a utilitarian or not in the first place.
iiioiia t1_iz276fw wrote
> But then you can't resolve a moral problem or dilemma, the topic of this thread.
"Perhaps certain conditions can be set and then things will resolve on their own."
Tangential topics often come up in threads; I thought this approach might be interesting to some.
> When it comes to reasoning or logic, you can't get out more than you put in
"Each agent in the system has onboard cognition"
> if you want to come to a conclusion involving a moral judgment or moral obligation/prohibition, you need premises laying down the necessary moral presuppositions for the conclusion to follow.
"agents are affected by their environment, their knowledge/belief, and the knowledge/belief of other agents in the system. Normalizing beliefs (ideally: a net decrease in delusion, but perhaps not even necessarily) could change things for the better (or the worse, to be fair)."
> And mathematics or logic is of no avail here.
Perhaps it is, perhaps it is not.
> Sure, and none of that is objectionable; but the OP is talking about using mathematics or logic to solve moral problems, and my point is simply that the point where mathematics or logic are useful is after the hard part has already been done, i.e. determining what sort of moral framework or what sorts of moral presuppositions are right or correct.
That holds in the virtual model within your mind that you are examining - but I have a virtual model that differs from yours (one non-trivial yet often overlooked detail that I would be sure to mention front and centre in all discussions).
> Like, if you're a utilitarian you can use simple arithmetic in many situations to decide what course of action maximizes happiness and minimizes unhappiness....
To estimate what course of action...
> ...but the tricky part is determining whether one should be a utilitarian or not in the first place.
There are many tricky parts - some known, some not, some "known" incorrectly, etc.
I think it may be useful for humans to be a bit more experimental in our approaches; it seems to me that we are in a bit of a rut in many places.
Ok_Meat_8322 t1_iz28e8l wrote
>Perhaps it is, perhaps it is not
No, it's definitely not. Neither mathematics nor logic can tell us the answer to any substantive question of fact or value. It can never tell you whether you should be a consequentialist or not. It can't tell you whether you should steal, murder, or even swipe the last piece of pizza. Logic and mathematics can tell you all about logical or mathematical questions... but nothing substantive about ethics or moral philosophy. Logic and mathematics only become relevant once you've got that part figured out.
>In the virtual model within your mind that you are examining - I have a virtual model that is different than yours
If it differs wrt the fact that mathematics/logic are indifferent to substantive questions of fact or value, then I'm afraid to say that your model is incorrect on this point.
>There are many tricky parts - some known, some not, some "known" incorrectly, etc.
No doubt, but once again that doesn't contradict what I said: I'm saying that the ways in which mathematics/logic are useful is a less tricky matter than which moral philosophy, ethical framework, or particular moral values/judgments are right or correct or should be adopted in the first place. Once you have answered the latter question, the answer to the former follows fairly easily (in most instances, at any rate).
iiioiia t1_iz2a29u wrote
> If it differs wrt the fact that mathematics/logic are indifferent to substantive questions of fact or value, then I'm afraid to say that your model is incorrect on this point.
I'm thinking along these lines: "Perhaps certain conditions can be set and then things will resolve on their own."
You seem to be appealing to flawless mathematical evaluation, whereas I am referring to the behavior of the illogical walking biological neural networks we refer to as humans.
> No doubt, but once again that doesn't contradict what I said
I believe it does to some degree because you are making statements of fact, but you may not be able to care if your facts are actually correct. In a sense, this is the very exploit that my theory depends upon.
Ok_Meat_8322 t1_iz2ibc5 wrote
>I'm thinking along these lines: "Perhaps certain conditions can be set and then things will resolve on their own."
I'm having trouble discerning what exactly you mean by this, and how it relates to what I'm saying.
>You seem to be appealing to flawless mathematical evaluation, whereas I am referring to the behavior of the illogical walking biological neural networks we refer to as humans.
What does "flawless" mean here exactly- does it just mean that you've done the math correctly? But yes, I'm certainly assuming that one is doing the math correctly- even if ones math is correct, it still can only enter into the picture after we've settled the question of what moral philosophy, ethical framework, or specific values/judgments are right or correct.
>I believe it does to some degree because you are making statements of fact, but you may not be able to care if your facts are actually correct. In a sense, this is the very exploit that my theory depends upon.
Again with these vague phrases. I said that "the tricky question" was what moral philosophy, ethical system, or moral values/judgments one should adopt, not how math or logic can help resolve moral dilemmas... but, as you note, there is more than one "tricky question", which I'm happy to concede, and so what I really meant (and what I more properly should have said) was that the question of the correct/right ethical framework or moral philosophy is trickier than the question of how math/logic can help us solve moral problems.
But keeping that in mind, there was no contradiction between your reply and my original assertion. And yes, for the record, I most definitely do care about which facts are correct; I'm having trouble thinking of anything I care about more than this (at least when it comes to intellectual matters), and I'm drawing a blank.
iiioiia t1_iz2lzqx wrote
> I'm having trouble discerning what exactly you mean by this, and how it relates to what I'm saying.
A bit like this is what I have in mind:
https://i.redd.it/5lkp13ljw34a1.png
My theory is that humans disagree with each other less than it seems, but there is no adequately powerful mechanism in existence (or well enough known) to distribute this knowledge (assuming I'm not wrong).
> What does "flawless" mean here exactly- does it just mean that you've done the math correctly? But yes, I'm certainly assuming that one is doing the math correctly- even if ones math is correct, it still can only enter into the picture after we've settled the question of what moral philosophy, ethical framework, or specific values/judgments are right or correct.
What I'm trying to say is that yes, you are correct when it comes to reconciling mathematical formulas themselves, whereas I am thinking that showing people some "math" on top of some ontology (of various ideologies, situations, etc.) may persuade them to "lighten up" a bit. Here, the math doesn't have to be correct, it only has to be persuasive.
> Again with these vague phrases. I said that "the tricky question" was what moral philosophy, ethical system, or moral values/judgments one should adopt, not how math or logic can help resolve moral dilemmas... but, as you note, there are more than one "tricky question", which I'm happy to concede, and so what I really meant (and what I more properly should have said) was that the question of the correct/right ethical framework or moral philosophy is trickier than the question of how math/logic can help us solve moral problems.
I think we're in agreement, except for this part: "the correct/right ethical framework or moral philosophy" - I do not believe that absolute correctness is necessarily necessary for a substantial (say, 50%++) increase in harmony (although some things would have to be correct, presumably).
> And yes, for the record, I most definitely do care about which facts are correct...
Most everyone believes that, but I've had more than a few conversations that strongly suggest otherwise - I'd be surprised if you and I haven't had a disagreement or two before! As Dave Chappelle says: consciousness is a hell of a drug.
Ok_Meat_8322 t1_iz2qxzo wrote
>My theory is that humans disagree with each other less than it seems, but there is no adequately powerful mechanism in existence (or well enough known) to distribute this knowledge (assuming I'm not wrong).
But we're not necessarily talking about resolving moral disputes between different people, but also of individual people having difficulty determining the correct moral course of action (i.e. "resolving a moral dilemma"), and this meme has nothing to say about the latter case (and that's assuming it says anything substantive or useful RE the former case, which I'm not sure it does).
The point is, once again, that mathematics or logic only enter into the question after one has decided or settled which ethical framework, moral philosophy, or particular moral values/judgments are right and correct, irrespective of how common or popular those ethical frameworks or moral values/judgments may be, or the extent to which people disagree about them.
>I think we're in agreement, except for this part: "the correct/right ethical framework or moral philosophy" - I do not believe that is necessarily necessary for a substantial (say, 50%++) increase in harmony.
Neither do I; determining or even demonstrating what is the right or correct thing is quite a separate matter from convincing others that it is the right or correct thing. It may very well be (and in fact almost certainly is) that even if we could establish which ethical framework or moral values/judgments are right or correct (something I don't believe to be possible), many if not most people will persist in sticking with ethical frameworks or particular moral values/judgments other than the right or correct one. And it may well not "increase harmony"; it could even lead to the opposite; sometimes the truth is bad, depressing, or even outright harmful, after all.
But these psychological and sociological questions are nevertheless separate questions from the meta-ethical question raised by the OP, i.e. whether and how maths or logic can help resolve moral problems or dilemmas.
iiioiia t1_iz2te3r wrote
> But we're not necessarily talking about resolving moral disputes between different people, but also of individual people having difficulty determining the correct moral course of action (i.e. "resolving a moral dilemma"), and this meme has nothing to say about the latter case (and that's assuming it says anything substantive or useful RE the former case, which I'm not sure it does).
All decisions are made within an environment, and I reckon most of those decisions are affected at least to some degree by causality that exists (but cannot be seen accurately, to put it mildly) in that environment....so any claims about "can or cannot" are speculative imho.
> The point is, once again, that mathematics or logic only enter into the question after one has decided or settled which ethical framework, moral philosophy, or particular moral values/judgments are right and correct, irrespective of how common or popular those ethical frameworks or moral values/judgments may be, or the extent to which people disagree about them.
I think we are considering the situation very differently: I am proposing that if a highly detailed descriptive model of things were available to people, perhaps with some speculative "math" in it, that may be enough to produce substantial positive change. So no doubt, my approach is other than the initial proposal here; I do not deny it (or in other words: you are correct in that regard).
> ...many if not most people will persist in sticking with ethical frameworks or particular moral values/judgments other than the right or correct one.
To me, this is the main point of contention: would/might my alternate proposal work?
> And it may well not "increase harmony", it could even lead to the opposite; sometimes the truth is bad, depressing, or even outright harmful, after all.
Agree....it may work, it may backfire (depending on how one does it). Also: I am not necessarily opposed to ~stretching the truth (after all, everyone does it).
> But these psychological and sociological questions are nevertheless separate questions from the meta-ethical question raised by the OP, i.e. whether and how maths or logic can help resolve moral problems or dilemmas.
Agree, mostly (I can use some math in my approach).
Ok_Meat_8322 t1_iz2v3nr wrote
>I think we are considering the situation very differently: I am proposing that if a highly detailed descriptive model of things was available to people, perhaps with some speculative "math" in it, this may be adequate enough to produce substantial positive change.
I don't disagree with this; what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma after one has presupposed or established a particular ethical framework, moral philosophy, and/or particular moral norms and judgments. Descriptive models, non-normative facts, and math/logic alone can never solve a moral problem or dilemma; in order to arrive at a moral judgment or conclusion one must presuppose an ethical framework or particular norms/value-judgments.
>To me, this is the main point of contention
It may well be the angle that interests you, but it's not the point of contention between us, because I'm not taking any position on that question.
iiioiia t1_iz3242b wrote
> I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma after one has presupposed or established a particular ethical framework, moral philosophy, and/or particular moral norms and judgments. Descriptive models, non-normative facts, and math/logic alone can never solve a moral problem or dilemma, in order to arrive at a moral judgment or conclusion one must presuppose an ethical framework or particular norms/value-judgments.
I suspect you have a particular implementation in mind, and in that implementation what you say is indeed correct.
Ok_Meat_8322 t1_iz7db9d wrote
Once again, I'm not sure what that's supposed to mean.
iiioiia t1_iz9mvo6 wrote
"I don't disagree with this, what I am proposing is that a descriptive model and/or mathematics or logic can only be applied to a moral problem or dilemma ...."
What would "applied" consist of?
Ok_Meat_8322 t1_izbtljz wrote
The example I used earlier was a utilitarian, who can use basic arithmetic to resolve moral dilemmas (such as, for instance, the trolley problem).
But this only works because the utilitarian has already adopted a particular ethical framework. Math can't tell you what values or ethical framework you should adopt, but once you have adopted them maths and logic may well be used to resolve moral issues.
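To make the arithmetic concrete (a toy sketch; the welfare numbers and the scoring function are illustrative assumptions, not anything the thread endorses): once the utilitarian framework is presupposed, the trolley problem reduces to a comparison of totals.

```python
# Toy utilitarian arithmetic for the trolley problem. The framework
# (maximize net welfare) is the presupposed premise; the math alone
# cannot supply it.
options = {
    "do nothing":     {"lives_lost": 5},
    "pull the lever": {"lives_lost": 1},
}

def net_welfare(outcome, value_per_life=1.0):
    # Utilitarian scoring: each life lost counts as -value_per_life.
    return -outcome["lives_lost"] * value_per_life

best = max(options, key=lambda name: net_welfare(options[name]))
print(best)  # "pull the lever" - given these premises, and only given them
```

The arithmetic is trivial; everything contentious lives in the premises (that aggregate welfare is what counts, and how each outcome is scored).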
iiioiia t1_izc1dt3 wrote
I don't disagree, but this seems a bit flawed - you've provided one example of a scenario where someone has done it, but this in no way proves that it must be done this way. In an agnostic framework, representations of various models could have math attached to them (whether it is valid or makes any fucking sense is a secondary matter), and that should constitute an exception to your rule, I think?
Ok_Meat_8322 t1_j0naes5 wrote
>I don't disagree, but this seems a bit flawed - you've provided one example of a scenario where someone has done it, but this in no way proves that it must be done this way.
I don't think it must be done this way; I don't think logic or mathematics is going to be relevant to most forms of moral reasoning. But consequentialism is the most obvious case where it would work, since consequentialism often involves quantifying pleasure and pain and so would be a natural fit.
But if what you mean is that we could sometimes use logic or mathematics to answer moral questions without first presupposing a set of moral values or an ethical framework, I think it is close to self-evident that this is impossible: when it comes to reasoning or argument, you can't get out more than you put in, and so if you want to reach a normative conclusion, you need normative premises else your reasoning would necessarily be (logically) invalid.
iiioiia t1_j0ng2rm wrote
Oh, I'm not claiming that necessarily correct answers can be reached ("whether it is valid or makes any fucking sense is a secondary matter"), I don't think any framework can provide that for this sort of problem space.
Ok_Meat_8322 t1_j0nn0qc wrote
I'm skeptical about whether moral judgments are even truth-apt at all, but the strength of a line of reasoning or argument is equal to that of its weakest link, so your confidence in your conclusion- assuming your inference is logically valid- is going to boil down to your confidence in your (normative) premises. Which will obviously vary from person to person, and subjective confidence is no guarantor of objective certainty in any case.
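One rough way to formalize the "weakest link" point (an illustrative gloss, not the commenter's own formula): a valid argument guarantees its conclusion only as much probability as the conjunction of its premises, and that conjunction can never be more probable than the least probable premise.

```latex
\Pr(P_1 \land \cdots \land P_n) \;\le\; \min_i \Pr(P_i),
\qquad
P_1,\dots,P_n \vdash C \;\Rightarrow\; \Pr(C) \ge \Pr(P_1 \land \cdots \land P_n)
```

So with a valid inference, whatever doubt attaches to the shakiest (normative) premise caps the support the argument itself gives the conclusion.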
So I'm fine with the idea that logic or mathematics could help solve moral dilemmas or problems, in at least some instances (e.g. utilitarian calculations/quantifications of pleasure/happiness vs pain/suffering) but it seems to me that some basic moral values or an ethical framework is a necessary prerequisite... which is usually the tricky part, so I'm somewhat dubious of the overall utility of such a strategy (it seems like it only helps solve what is already the easiest part of the problem).
iiioiia t1_j0nufl7 wrote
> I'm skeptical about whether moral judgments are even truth-apt at all, but the strength of a line of reasoning or argument is equal to that of its weakest link....
Mostly agree. As I see it, the problem isn't so much that answers to moral questions are hard to discern, but that, with few exceptions I can think of (literal murder being one), they do not have a correct answer at all.
> ...so your confidence in your conclusion- assuming your inference is logically valid- is going to boil down to your confidence in your (normative) premises. Which will obviously vary from person to person, and subjective confidence is no guarantor of objective certainty in any case.
Right - so build error correction into the system: when participants' minds wander into fantasy, provide them with gentle course correction back to reality, which is filled with non-visible (for now at least) mystery.
> So I'm fine with the idea that logic or mathematics could help solve moral dilemmas or problems, in at least some instances (e.g. utilitarian calculations/quantifications of pleasure/happiness vs pain/suffering) but it seems to me that some basic moral values or an ethical framework is a necessary prerequisite... which is usually the tricky part, so I'm somewhat dubious of the overall utility of such a strategy (it seems like it only helps solve what is already the easiest part of the problem).
"Solving" things can only be done in deterministic problem spaces, like physics. Society is metaphysical, and non-deterministic. It appears to be deterministic, but that is an illusion. Just as the average human 200 years ago was ~dumb by our standards (as a consequence of education and progress) and little aware of it, so too are we. This could be realized, but like many things humanity has accomplished, first you have to actually try to accomplish it.
Ok_Meat_8322 t1_j0ny94r wrote
>"Solving" things can only be done in deterministic problem spaces, like physics
I think it's more a matter of "solving" things looking quite different in one domain than in another. And solving a moral dilemma doesn't look at all like solving a problem in physics. But that doesn't mean it doesn't happen; oftentimes "solving" a moral problem or dilemma means deciding on a course of action. And we certainly do that all the time.
iiioiia t1_j0o9mf3 wrote
> And solving a moral dilemma doesn't look at all like solving a problem in physics.
Agree, but listening to a lot of people talk with supreme confidence about what "is" the "right" thing to do, it seems like this idea is not very broadly distributed.
> oftentimes "solving" a moral problem or dilemma means deciding on a course of action. And we certainly do that all the time
Right, but the chosen course doesn't have to be right/correct, it only has to be adequate for the maximum number of people - something that I don't see The Man putting a lot of effort into discerning. If no one ever checks in with The People, should we be all that surprised when they are mad and we don't know why? (Though not to worry: memes and "explanatory" "facts" can be imagined into existence and mass-broadcast into the minds of the population in days, if not faster.)