eliyah23rd t1_iwqi07r wrote

I think the best way to describe our difference is that your project is descriptive and mine is prescriptive. You want to discover the cause and I am looking for solutions.

However, that oversimplifies our positions a little. In order to propose solutions, I look to build the descriptive case correctly. I am very minimalist about descriptive assumptions, but I cannot avoid them all.

You, on the other hand, seem to propose a sort of “Inference to Best Explanation” argument for motivating the genealogy that you propose. This is a classic descriptive project strategy. However, your last comment highlights that there is a prescriptive wish behind your project, an agenda, if you will. While you present your evidence irrespective of moral outcome, you seem to be motivated by a belief that should your view be accepted, the world would be a better place.

One last point, if I may. You use the word “ought” a number of times. On page 20 you even use it with reference to Hume. However, like de Waal, your use of “ought” seems to be the hypothetical meaning (IF you want X you ought to Y) and not the categorical one (You ought to Y). It seems to me that Hume is quite clear that he is referring to the categorical “ought”. I’d be interested to know whether you agree that (a) you are using the hypothetical and (b) Hume is using the categorical.

1

eliyah23rd t1_iwp3p4z wrote

I'm not sure it would.

In your mind (I assume), I am a disembodied online persona. There is no evidence that the sufficient cause of these words is an actual human being.

Say I claim that the person behind this persona has received 5 shots but spends time every day with their 97-year-old parent. They express concern that becoming infected might endanger their parent's life. That person's brain would seem to have some neurons achieving motivational salience that encode looking after a parent's welfare as a high-priority value. Say an advanced fMRI could confirm this description.

What would that tell you about what you or anyone else in the world "should" do?

3

eliyah23rd t1_iwmaq10 wrote

Thank you so much for your reply.

I can't really accept either point, but I don't think we actually disagree all that much. Let's say that we are looking at the same scene but from two different angles. Let me try to explain in a different way why, while your answer addresses many great questions, it doesn't address mine.

Suppose I do just want to make other people happy. I just want to help end suffering for other people. As you say on page 20, "a primary value is an arbitrary choice". I understand that a researcher like you is interested in how that came to be. However, that genealogy is not "my" reason for my motivation. It is a cause, not a reason. I treat the value as an ultimate goal. I don't look for justifications for the value, for other facts that, by virtue of being true, would make my goal valid. I don't care.

I understand what you're doing. For the last 160-odd years people have been given the message that their essence is to survive and out-compete. You are following others who explain that this thinking rests on an incorrect understanding of evolution. Of course, an "is" does not follow from an "ought": the fact that their understanding might lead to the destruction of our civilization does not make their thinking wrong. You simply show that, in fact, the more desirable interpretation is also the correct one.

It's good that you're countering the "be-selfish" brainwashing (if you will), but is it necessary? You are what you are. You will always do what you want to do. The question is only how we should structure our society around that, so that we are all most likely to succeed at our own goals. How do we not step on each other's toes? Not because it is bad to step on the toes of other people but because (a) many of us don't want to and (b) we will all get our toes stepped on if we do.

1

eliyah23rd t1_iwm74l9 wrote

Hi. Your reply is a factual argument about how best to implement my own preference.

Value: I don't want to suffer.

Fact: Letting others suffer increases the chances that they will make me suffer.

Plan: Decrease their suffering.

I called it a value here, but it is a preference. There is no argument here that I "should" hold to that preference, just the description that I do. I could just as easily be a masochist. I know that I have neural modules that implement motivational salience, but that is just a fancy way of saying that I do what I want to do.

Again, moral skepticism does not imply that I prefer only survival or my own pleasure (though advertisers have an interest in my thinking that). I could just prefer helping other people. I get a real buzz when I prevent suffering. Is that a "value"?

2

eliyah23rd t1_iwhqujt wrote

My problem is more in the first paragraph you quoted.

>If we buy the self-evident fact that conscious valence is real, we get an ought from an is.

Nope. Nice paragraph but it's not an argument.

Yes, I feel that valence myself. It so happens I even want to end suffering for others. But that still doesn't tell me why I should want to end suffering for others.

I'm afraid I'm going to have to remain in the moral skeptic camp.

7

eliyah23rd t1_iwgthlb wrote

I enjoyed your post. It motivated me to look at your previous posts and I found your e-book, which I hope to find more time for soon.

I would like to ask a basic question about your methodology.

You seem to make little distinction between the population of humans through time and an individual human being situated at a specific moment, for whom even their own history is secondary to an analysis of the subject as they are now.

For example, in this post you say:

>There is a basic existential pressure within every organism to do the things that will lead to an increased chance of thriving, surviving and/or reproducing.

Certainly an individual at a specific instant may experience a drive to survive, but this is just one of multiple motivations competing for salience. For much of the time today it is quite dormant, for lack of threats. Similarly, many people spend most of their time uninterested in reproducing, because what was once an instrumental goal has been disconnected from its origin by the availability of contraceptives. The fact that a goal can be analyzed as instrumental matters less than that it figures so centrally in the programming of the human machine as it exists now.

Survival, in the evolutionary sense, is a selection-driven statistical drift within a population, not a property of individuals.

I see that from your starting point you seek to explain the broader picture, but that serves as a genealogy of morality. Would it not be better to start with the individual as they are at a specific moment and proceed to their goals, limitations and frustrations?

3

eliyah23rd t1_iwcczqa wrote

There seem to me to be two elements here. They are interwoven in the article and in practice they may not be separable.

The first is the speech-like or communication act. This is exemplified by leaving the desecrated photo for your partner to find. However, the act of publishing some of the games mentioned is also a speech-act: "Come have fun burning these effigies". This issue should be considered alongside the pros and cons of other speech-acts.

The second is more unique to video games. I was involved in the development of multiplayer games 25 years ago. When playing games you are reprogramming the emotional and value-oriented modules of your brain.

Of course every moment changes something in you, but that is on a trivial level. When you take actions in a graphic environment, when you do an act that you would find taboo in real life, the short-term and longer-term sub-linguistic modules that make up who you are will change.

It may be true to a lesser extent when watching passively, but game designers are sometimes explicit about their ability to change you and your priorities. For example, when you spend time trying to achieve a goal (even moving some pixels into the top-left corner), your motivations are being changed.

I do not wish to propose conclusions. There are cognitive values in (some) games, as well as social ones. Having fun is also a valid part of your preference structure. I am making a more factual claim (though one hard to track experimentally): that playing changes you, particularly with the sort of games described in this article.

(1) Do you want to make those changes? (2) If you can program yourself to be a worse person, is it ethical to do so?

2

eliyah23rd t1_iwbp46x wrote

I hope you don't mind these delays in my replies. I've been ruminating in the meantime.

My list for B was actually a disjunctive list (facts OR reason-logic OR higher being). So rejecting one item on the list does not mean that B is wrong.

But it doesn't really matter. Let's pretend I only gave the "higher being" option and so you don't agree with B.

You seem to say that you accept that there are people who believe B but you believe in A. (Option 1 in the second set of questions).

Preferring A to B is a philosophical position, is it not?

(On the other hand, I may have misunderstood you. Are you arguing for B after all? Does the emergent phenomenon you are referring to actually justify the value? I continue to assume that you don't hold that, but I wanted to raise the possibility just in case.)

1

eliyah23rd t1_ivuhfdd wrote

I hope you're still around. I wanted to continue our discussion.

I don't think I want to get into Free Will issues right now, unless that is important to you. May I ask you the following question?

Imagine the following two views:

A. There is nothing over and above the neural description of what is going on when you hold a value.

B. The neural description is all well and good. What matters is that it expresses a linguistic assertion of a value. That value can be justified by some means (a disjunction of facts, reason-logic, some higher reality).

I think both you and I hold A. However, I acknowledge that there are people who believe B. My choice of A is a philosophical position about justification of assertions.

Is your position:

  1. Agree
  2. B is not even a position, therefore there is only A. Therefore there is no evaluation to be made between A and B.
  3. Something else.

2

eliyah23rd t1_ivkbdkn wrote

It is valuable, but in a different sense. I would learn more, given my subjective goal of learning more. But I already assumed that all my goals (whether you call them moral or not) are just configurations of ions, synaptic receptors, etc. Nothing in the description has yet justified the value.

2

eliyah23rd t1_ivjri3f wrote

Thank you for your reply.

Perhaps I phrased it poorly. You are correct, of course, that increasing model size tends to increase overfitting in the normal sense. Overfitting here means a failure of generalization, which would also lead to bad results on new data.

I spoke in the context of this article, which claimed that spurious generalizations are found. LLMs scale two quantities up in parallel in order to produce the amazing results that they do: they increase both the quantity of data and the number of parameters.
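
To illustrate the normal sense of overfitting (a minimal sketch of my own, not from the article; the toy data and polynomial degrees are arbitrary choices): with few data points, a model with many parameters can fit the training noise almost perfectly and still do worse on held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy line. A 2-parameter (degree-1) fit generalizes;
# a 10-parameter (degree-9) fit has enough freedom to chase the noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(scale=0.1, size=10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2.0 * x_test

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out MSE = {mse:.4f}")
```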

1

eliyah23rd t1_ivfr33a wrote

;)

>No shifting of the burden of proof please.

OK. I claim that the success of the hard sciences and engineering is the proof of the scientific method.

>So, you disregard any evidence that does not support your beliefs?

Yes. I distinguished between the behavior of some Scientists and the scientific method. Do you believe that all the behavior of any Scientist counts in the evaluation of Science in its idealized form? I propose that the "idealized form", while leaving some room for ambiguity, is sufficiently preached in many texts that it has meaningful reference.

3

eliyah23rd t1_ivfni9z wrote

>I think Science is flawless here.
>
>Can you expand on this a bit?

Without detracting from the soft sciences, I was referring to hard science here. Given its success, I think I need to turn the question back to you: in which part of the scientific method do you see a flaw? Again, I'm not referring to the behavior of eminent scientists speaking outside the strict confines of their field.

>The value of Science itself is not in question here.
>
>I believe this to be incorrect, as I am questioning the value of science.

Given that the subject of the thread is values in the normative sense, I need to reword that to "effectiveness" or "truth-orientation in the instrumental sense" instead of "value".

2

eliyah23rd t1_ivf9vsf wrote

The author's argument seems to be:

  1. There are many people writing machine learning papers without understanding core statistical principles.
  2. The best explanation for this is that there is so much data that there are no valid methods for distinguishing valid correlations from accidental ones.
  3. Therefore, big data will produce nothing of much value from now on, since we have too much data already.

There are many procedures in place to give some protection from overfitting. Random pruning is one of them, as sketched below.
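
As an illustration of random pruning (a minimal sketch; I'm reading "random pruning" as dropout-style random zeroing of units, which may not be exactly what the article means):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    """Randomly prune units: zero each activation with probability p.

    Scaling the survivors by 1/(1-p) ("inverted dropout") keeps the
    expected activation unchanged, so nothing needs rescaling at test time.
    """
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

hidden = rng.normal(size=(4, 8))  # a toy batch of hidden activations
print(dropout(hidden, p=0.5))
```

The forced redundancy makes it harder for a network to memorize accidental correlations in the training data.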

GPT-3 (and its siblings) and DALL-E 2 (and its siblings) would not be possible without scraping a significant fraction of all the textual data available (DALL-E obviously combines this with images). They overcome overfitting using hundreds of billions of parameters, and counting. The power requirements of training these systems alone are mind-boggling.

Much medical data that is fed into learning systems is absurdly underfitted. Imagine a (rather dystopian) world where all the health indicators of all people taking specific drugs were fed into learning systems. A doctor might one day know whether a specific drug will be effective for you specifically.

There is much yet to learn. To make a falsifiable prediction: corporations will be greedily seeking to increase their data input for decades to come. Power needs will continue to grow. This will be driven by the success (in their own value terms) of their procedures, and not by blind adherence to false assumptions, as the author might seem to suggest.

21

eliyah23rd t1_ivexdc5 wrote

You have some great points there.

If A is nothing other than B, then A does not add anything to B. But if morality is a function of multiple other phenomena, or even a complex or simple function of one phenomenon, then it does do some work.

This does not address the question of why I should be committed to the other's preference. If morality is just what I prefer for myself, it is tautologous that I prefer what I prefer. If morality is that I should advance your preferences, then that is either itself a preference of mine or a value that needs justifying.

If your argument is that there are two concepts that we did not realize were, in fact, identical, then we should abandon one. Once there was the morning star and the evening star. Today we just call it Venus.

I have no problem with the subjective.

Water has macro properties that we are familiar with; "H2O" does not automatically conjure up those properties. Still, if "water" were slowly to slip out of use, I don't think there would be much harm. "H2O" would come to carry the connotations of wet.

The problem is that people assume that morality does more work than preference does. It attempts to point to obligations that your preference places on me. To deny that it does this extra work is itself a value statement. And if you merely withdraw assent due to lack of evidence, you are skeptical of morality despite accepting preference.

Your last point is the one that loses me the most sleep. If there is no moral realism over and above preference, then how do we prevent society from descending into a game of chicken (as it seems to do every now and then at the international level)? You could claim that there is personal utility for all sides in agreeing to the rules of a game, but then the rules of the game are justified only by the plausibility of all sides agreeing to them.

1

eliyah23rd t1_iveuc4i wrote

I find your picture of the future very scary.

Even if you could explain every neuron involved in my fear of this future, I would still hold that value. To explain a value is not to justify it.

2

eliyah23rd t1_ivete2s wrote

We are now in better agreement.

If I go further, I would say that the objective world is just a model living in the subjective world. It is the part that other agents report being in agreement about.

That the objective is part of the subjective does not imply that it is optional. Much of the subjective seems non-optional.

1

eliyah23rd t1_ivebbrb wrote

I think Science is flawless here.

Scientists can be heroic, but they can certainly be flawed. Even people with high cognitive abilities might be unaware of a whole discipline of thought, and unaware of their lack of knowledge. They may hold values without being aware that those values can be doubted. They might in some cases have personality issues. Their remarkable success in their own domain may explain their eminence despite these deficiencies. Public media often takes an "either expert or not-expert" attitude that is black and white where the reality is complex.

The value of Science itself is not in question here.

1

eliyah23rd t1_ivbij84 wrote

>My point, though, is that identifying preferences is a helpful moral endeavor.

Totally agree. Your Value statement is preference utilitarianism, which might be non-cognitive (no true or false can be assigned); science determines the Fact; and what follows is the Moral Claim that we should satisfy that majority preference. Science is critical, but it did not determine the Value.

3

eliyah23rd t1_ivbh3bg wrote

I would never argue that there *is* not anything beyond their preference, only that it does not *entail* anything *beyond* their preference. Of course, if you put them in an fMRI, you could see the details that lead them to express their preference, but as far as I can see, that is beside the point.

2

eliyah23rd t1_ivas7n2 wrote

The following is an example of an argument for a moral claim.

Value: All random killing is wrong

Fact: X is a random killing

Moral claim: X is wrong

Science can provide insight into the Fact clause here. Therefore, Science helps us determine the claim. However, Science cannot provide justification for the Value clause.

Shermer makes the following assertions in the interview (roughly).

"If you want to know if something is wrong, ask the people". - This just shows what their preference is. It does not entail anything beyond their preference.

"If it is right for you, it is right for everybody". - While most people today would wholeheartedly agree, this maxim too is a value statement. It could be seen as a version of Kant's Categorical Imperative, but, it is (arguably) an axiom rather than anything independently supported by either Reason or Science.

The best understanding I can give to Shermer is that morality is whatever people prefer. Perhaps that is the best we can do, but it is deflationary of morality. If true, morality is not a useful concept. There are only subjective preferences. It also does not solve the problem of how to aggregate opposing preferences.

160