1bunch t1_j4v0ues wrote

Professor Keith Stanovich’s metaphor of the “cognitive miser” made me appreciate how tiring it would be if someone wanted to be truly “rational” and “fully capable” at all times:

>”…we tend to be cognitive misers. When approaching a problem, we can choose from any of several cognitive mechanisms. Some mechanisms have great computational power, letting us solve many problems with great accuracy, but they are slow, require much concentration and can interfere with other cognitive tasks. Others are comparatively low in computational power, but they are fast, require little concentration and do not interfere with other ongoing cognition. Humans are cognitive misers because our basic tendency is to default to the processing mechanisms that require less computational effort, even when they are less accurate.”
>
>—Keith Stanovich, “Rational & Irrational Thought,” Scientific American

Edit: others have mentioned that this idea is basically the core argument of Daniel Kahneman’s “Thinking, Fast and Slow,” but just an FYI, Stanovich’s metaphor predates Kahneman’s book, and in that book Kahneman openly says he borrowed some of Stanovich’s terms and was “greatly influenced” by Stanovich’s early writings. Kahneman didn’t take the idea in some secretive way, though; he has given Stanovich a lot of credit and speaks of him as a pioneer.

184

Bl4nkface t1_j4v5bdu wrote

That's the argument of Daniel Kahneman's “Thinking, Fast and Slow.”

73

tyco_brahe t1_j4vza5j wrote

That's what I thought too. System 1 is dumb, System 2 is lazy. Take your pick!

28

TheNotSoGreatPumpkin t1_j4wktsk wrote

My takeaway was that it’s not really system two being lazy; it’s the whole brain trying to economize. System two is metabolically way more expensive than system one.

He admits in the book that the two systems don’t really exist independently of each other, but it’s a useful conceptual model for better understanding how our brains operate.

36

tyco_brahe t1_j4x8oso wrote

Necessity is the Mother of invention. Laziness is the Father.

I don't view "lazy" as a pejorative when describing system 2. To me, it means that it's efficient... it won't be engaged unless it has to be, because it's expensive (metabolically).

Mostly I was just making a joke about 'lazy' system 2.

10

1bunch t1_j4wc41q wrote

Kahneman was inspired by Stanovich:

>”Among the pioneers [of my field] are… Keith Stanovich, and Richard West. I borrow the terms System 1 and System 2 from early writings of Stanovich and West that greatly influenced my thinking…”
>
>—‘Thinking, Fast and Slow,’ p. 450

He made sure to give Stanovich credit in his public talks too. Just off the top of my head, I think there was a Google Talks Q&A where someone asked Kahneman if “the 2 systems are literal systems that map onto the brain,” and he said something like, “No, and to make it even worse, the idea wasn’t even my idea, it was Stanovich’s. I just tweaked his metaphor by making it into an image of ‘2 entities inside you’, but they don’t exist! For some reason I thought it would just be easier to grasp these abstract metaphors about cognitive processes if we imagined these processes as 2 quasi-entities in ourselves.”

Kahneman often makes himself seem like a mess in his public Q&As, but he’s just hilariously self-deprecating; he’s quite intelligent and accomplished lol 😆

26

aspartame_junky t1_j4we484 wrote

An essential aspect of academia that I miss (having moved to industry) is the value of giving credit where due.

Yes, there are credit usurpers in academia too, but as a discipline, academia generally values citing your sources and giving credit where due, rather than taking credit for others' work (e.g., Elon).

14

VoraciousTrees t1_j4wd27y wrote

Pair that book with "The Righteous Mind", which deals with morality as a framework for the "fast" system.

8

doireallyneedone11 t1_j4vcwg4 wrote

What's the definition of 'rationality' we're going with here?

16

SocraticMethadone t1_j4vmrdm wrote

In this literature, a rational strategy is one that's suited to your goals. So a rational belief is a belief the holding of which will tend to better position you to achieve your goals.

Now, the fun part is that for a very long time, folks just assumed that true beliefs would further their goals, whereas false ones would not. "Rational" then took on a secondary definition, something along the lines of "following truth-preserving rules." So on that secondary definition, it's rational to hold a belief if that belief -- objectively -- follows from your previous beliefs.

30

WhatsTheHoldup t1_j4wksik wrote

>In this literature, a rational strategy is one that's suited to your goals. So a rational belief is a belief the holding of which will tend to better position you to achieve your goals.

Is it then rational for an oil exec to downplay climate change?

It suits their conscious goals of expanding their business, but they presumably have subconscious goals like legacy, happiness, and survival, which they are adversely affecting.

>for a very long time, folks just assumed that true beliefs would further their goals, whereas false ones would not. "Rational," then took up a secondary definition something along the lines of "following truth-preserving rules."

By this definition I still don't know. It's true that denying climate change helps their business, so in that sense it's rational, but it also depends on believing untruths and sacrificing their other goals.

But you could also lie to others while not lying to yourself?

Is it better to say it's rational to understand climate change but lie about it, but it's irrational to actually believe the things you say?

>So on that secondary definition, it's rational to hold a belief if that belief -- objectively -- follows from your previous beliefs.

So this is now implying it's rational to be irrational as long as being irrational serves your singularly important goal?

6

SocraticMethadone t1_j4x6kks wrote

In practice, all of us have goals, some of which conflict. This is no less true of oil executives than it is of everyone else. It might well be the case that a certain belief best contributes to a goal that I have but not to the full set of goals. For instance, the executive may want to leave (usable) property to their grandchildren or endow a museum or whatever.

But the answer to your last question is definitely yes. I have lots and lots and lots of false beliefs that simply aren't worth the trouble of rooting out: it would be actively irrational of me to invest the time it would take to find them. In fact, I'd have fewer true beliefs if I tried. That much is mathematically demonstrable. (Take a look at the literature on satisficing as a maximization strategy.)
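To make that satisficing point concrete, here's a toy sketch (my own illustration; the numbers and the cost model are assumptions, not anything from that literature): if verifying an old belief costs much more effort than forming a new, mostly true one, then spending a fixed effort budget on verification can leave you with fewer true beliefs overall.

```python
# Toy model (illustrative assumptions only): two ways to spend a fixed effort
# budget when most of your existing beliefs are already true.
import random

random.seed(0)

N_BELIEFS = 1000        # beliefs you already hold
P_TRUE = 0.95           # chance any given belief happens to be true
BUDGET = 200            # total effort available
COST_PER_CHECK = 5      # effort to carefully verify one existing belief
COST_PER_NEW = 1        # effort to form one new (mostly true) belief

beliefs = [random.random() < P_TRUE for _ in range(N_BELIEFS)]

# Strategy 1: root out false beliefs. Every checked belief ends up true,
# but the budget only covers a small fraction of the stock.
checked = BUDGET // COST_PER_CHECK
rooting_out_true = checked + sum(beliefs[checked:])

# Strategy 2: satisfice. Accept the existing stock as "good enough" and
# spend the same budget forming new beliefs instead.
new_beliefs = [random.random() < P_TRUE for _ in range(BUDGET // COST_PER_NEW)]
satisficing_true = sum(beliefs) + sum(new_beliefs)

print(f"true beliefs after rooting out errors: {rooting_out_true}")
print(f"true beliefs after satisficing:        {satisficing_true}")
```

Under those assumed costs, the satisficer ends up with more true beliefs, which is the sense in which it would be actively irrational to invest the time.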

More broadly, though, yeah. A parent believing that their child is particularly adorable or talented might lead to a better relationship than would a more clinical belief set. If you believe a closer relationship to be a valuable thing, then you probably should hold the beliefs you need to form it.

Of course none of this is all-or-nothing. ("Believe the very best things about your children or you'll die alone.") The point is just that evidence captures only one very narrow dimension of the things we are doing when we believe.

5

generalmandrake t1_j4zut8a wrote

I think climate change is more of an individual-versus-collective thing. Collectively, barreling towards major climate change is suicidal; institutions like governments are especially at risk because major turmoil has historically involved the collapse of regimes.

Individually, the story is different. From a purely individualistic perspective, the contemporary benefits of fossil fuels can outweigh costs that won’t be borne until after you are dead. Even when you consider things like genetic legacy, the economic wealth you accumulate from fossil fuels could actually put your descendants at an advantage in the future world; their survival may actually be improved. There is also a free-rider problem: no one individual is the deciding factor in how much we emit or how severe climate change will be, and the lifetime CO2 output of a given person is marginal. If voluntarily hamstringing yourself and your family economically is not going to make a difference to the existential threats of climate change, then it really is not rational to take that course of action.

1

DrumstickTruffleclub t1_j59h81x wrote

I agree it is a collective problem. But I feel guilty if I don't try to limit my emissions (reasonably, because I AM contributing to the problem) and so it's rational in a way to try to limit that feeling by acting to conserve energy. But there are situations where I feel the benefit to me of doing something (e.g. I would suffer health consequences and significant discomfort if I never turned the heating on in winter) outweighs the guilt. I guess everyone's calculation is different, depending on their circumstances and conditioning.

1

ronin1066 t1_j4wj26z wrote

Sounds similar to the idea that our brains are geared to survival and often pure reason can be a hindrance to that. So our senses are not necessarily geared to give us a completely accurate model of the world, but rather one that will keep us alive.

I think it would be interesting if an AI had a more accurate model of reality but we didn't believe it and considered it a failed experiment. Not that I think we're that far off from reality; just an idea for a novel, maybe.

8

WhatsTheHoldup t1_j4ws0mo wrote

>Not that I think we're that far off of reality, just an idea for a novel maybe.

I think we're pretty far off.

Why do humans deserve higher consideration than a rock? Than a single celled organism? Than a plant? Than a cow?

Because the reality we live in is that we do deserve it. All our structures of law, morality, ethics, etc. reinforce this.

We can exclude a lot of those by creating a concept of "sentience/sapience/consciousness" which no one can actually properly define. But we're still left with the cow, dolphin, octopus, crow, and many other species whose lack of rights we can't rationally justify.

We may have inadvertently just created AI that now fits those categories and made the problem worse. When the AI tells us it's sapient and deserves the same considerations we do, will we believe it or reject it?

https://www.theverge.com/2022/6/13/23165535/google-suspends-ai-artificial-intelligence-engineer-sentient

(I'm not claiming Google's AI is actually sentient, but one day an AI might be, and what happens if the engineers who point that out are fired?)

The only answer is that we are humans, so we care about what happens to humans. We aren't cows and we never will be, so we don't care about rationally answering the question for cows or AI.

An AI could either cut through this bullshit or, perhaps scarier, learn it and encourage us.

3

generalmandrake t1_j4zt5hf wrote

I’m pretty sure I had that exact same thought once. Humans might build a supercomputer one day that can actually determine the true nature of existence. But because it involves concepts that the human brain can’t grasp, it wouldn’t make any sense to us, and people would just assume the computer is broken and turn it off.

I like the analogy of trying to explain to a dog how a car engine works. You could sit there for years explaining it to the dog and you’d never get through, because the dog’s brain simply isn’t built to understand something like that; it involves concepts and processes that are cognitively beyond a dog’s reach.

For some reason many people seem to think that humans are capable of understanding almost anything, but this doesn’t really make much sense. We are just a more sophisticated version of dogs when it comes to cognition, and it is downright illogical to think that the human brain doesn’t have a ceiling when every other animal brain on earth has one. Just ask anyone what physical reality actually is, or where everything came from, and you’ll never get a logical answer. I don’t necessarily think it’s even due to a lack of information or scientific data; I think the answer to the big question most likely involves concepts the human brain had no evolutionary reason to be able to comprehend. Maybe we could build a computer that could work it out, but, like I said, the answer may not make any sense to us. I guess that is basically H.P. Lovecraft’s theory as well.

3

Re-lar-Kvothe t1_j4w7zml wrote

I had this conversation with friends that are less "philosophically" inclined. They believe I am a lunatic for seeing the world the way I do. I can only shrug my shoulders. They are my friends after all...

6

PostModernCombat t1_j4wewug wrote

Oh wow that sounds so hard, how do you cope?

4

Re-lar-Kvothe t1_j4wy69s wrote

Simple: even though we don't see the world through the same colored eyes, they will defend me to the death if necessary, as I would them.

Or were you being sarcastic?

4

PostModernCombat t1_j4wyqrc wrote

Do you often find yourselves in potentially lethal standoffs? Tell me about this iris pigment based defence pact you have going there.

3

Re-lar-Kvothe t1_j4x48f6 wrote

Never in "potentially lethal standoffs." We argue vehemently about topics we are passionate about, but when all is said and done we realize we are not going to solve world hunger and recognize we are all in this together. We are striving for the same goals and have different paths in mind to achieve those goals.

We joke about the "philosophical bullshit" that seems to control our lives. We realized long ago it's just that: bullshit. And unless one of us becomes POTUS, there is nothing our arguments will change. We chose different ways to address problems beyond our control. Rather than argue and hate, we chose to empathize with and understand each other. We have been a tight-knit group for more than 45 years, even though we have different philosophies on life and how to live it.

5

PostModernCombat t1_j4xabir wrote

So your friends actually do like to talk about philosophy and even macroeconomics with you; they just find your point of view at times problematic, and you’ve never had to defend each other “to the death…” I gotta say this whole thing has been kind of anticlimactic.

4

AConcernedCoder t1_j53x46d wrote

Interesting. In software development, you're forced to learn very quickly that there isn't enough time not to default to being a cognitive miser: ruling out every possibility, each often referred to as a "rabbit hole," could require more concentration and cognitive effort than we have to spend on the task at hand.

2