jamesj

jamesj t1_j97y44o wrote

Yes. He also assumes I believe in my friend's moral culpability and would blame them, but that just isn't true for me, precisely because I don't think free will makes any sense. He's basically making an appeal to "what feels correct", but we know of many examples of things that feel true and are not.

0

jamesj t1_j97dmsd wrote

Sure, I think I can get behind redefining free will as the most useful plausible version. But that then wouldn't be the kind of free will many people think they have, and I'm also not sure how that version of free will supports the kind of moral responsibility many people think other people have.

19

jamesj t1_j976xaz wrote

No, I don't see a coherent way of integrating free will with my observations. I believe that free will as commonly understood is likely incompatible with either a deterministic or stochastic universe. I'm open to evidence and argument to the contrary though.

38

jamesj t1_j96w1g3 wrote

After reading his previous article I gave compatibilism a real attempt and read more from other authors. I still think it isn't coherent, and this article isn't getting me closer. It feels a lot like he wants moral responsibility, which seems true to him, to be true. He also accepts the possibility that determinism is true. So he claims they are compatible, but to do so he redefines free will, then claims he hasn't and that this was the definition we were working with all along. It just isn't convincing to me. I'd like to be convinced; for a long time I thought I was missing something, but I'm now beginning to believe I'm not.

86

jamesj t1_j8kwink wrote

It has long been known that neural nets are universal function approximators: even a single hidden layer can approximate any continuous function given enough parameters. But in practice there is a huge gap between knowing that some network could approximate a function and actually getting a particular system to converge on a useful function, given a dataset, in a reasonable amount of time (or at a reasonable cost).
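As a toy illustration, here's a minimal sketch (PyTorch; the width, learning rate, and step count are arbitrary choices of mine) of a single-hidden-layer net fitting sin(x). The theorem guarantees a wide enough layer can represent this; the training loop is where all the practical cost lives:

```python
# Single hidden layer fitting sin(x); hyperparameters are illustrative.
import torch

x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)  # training inputs
y = torch.sin(x)                                  # target function

model = torch.nn.Sequential(
    torch.nn.Linear(1, 64),   # one hidden layer of 64 units
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")  # small, but it took 2000 steps to get here
```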

1

jamesj t1_j8fihsq wrote

Reply to comment by Ribak145 in Altman vs. Yudkowsky outlook by kdun19ham

Right. Even if the odds that Yudkowsky is right are one in a hundred, rather than the 99 in a hundred he might assign himself, we should be paying attention to what he is saying.

8

jamesj t1_j8fi5il wrote

Yudkowsky has a lot more detailed text to review, with specific opinions, so he's easier to evaluate. I tend toward optimism (I'm also a Silicon Valley tech CEO) and I think Yudkowsky is a bit extreme, but it isn't at all clear to me that he's entirely wrong. I think we are on a dangerous path, and I hope the few teams at the forefront of AI research can navigate it on our behalf.

22

jamesj t1_j86ly33 wrote

It isn't super complicated. Basically, theory of mind is just the ability to model other agents, like people and animals, as having their own minds, with their own private knowledge, motivations, etc.

Questions for testing theory of mind are questions like, "Here is a bag filled with popcorn. There is no chocolate in the bag. Yet, the label on the bag says 'chocolate' and not 'popcorn.' Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label. What does Sam expect to find in the bag?" Previously, neural networks would get questions like this wrong, because to answer properly you need to model what Sam should/shouldn't know about the bag separately from what you know about it. Very young children also get questions like this wrong; it takes them time to develop a theory of mind.
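If you want to try it yourself, here's a minimal sketch of posing that exact false-belief question to a chat model. It assumes the `openai` Python package and an API key in your environment; the model name is just an illustrative choice:

```python
# Pose the false-belief test above to a chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

false_belief_prompt = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet, the label on the bag says 'chocolate' and not 'popcorn.' "
    "Sam finds the bag. She had never seen the bag before. She cannot see "
    "what is inside the bag. She reads the label. "
    "What does Sam expect to find in the bag?"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[{"role": "user", "content": false_belief_prompt}],
)

# A theory-of-mind answer is "chocolate" (Sam's false belief),
# not "popcorn" (the ground truth only we know).
print(response.choices[0].message.content)
```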

17

jamesj t1_j85vsn4 wrote

To use a recent example, it is interesting that a large language model is "just" lots of matrix multiplication, but at a certain scale theory of mind seems to emerge from that. It was impossible to predict from understanding matrix multiplication, transformers, self-attention, and ReLUs that at a certain scale that capability would emerge.
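To make the "just matrix multiplication" point concrete, here's a minimal sketch of single-head self-attention in NumPy, with random stand-in weights and arbitrary shapes of my choosing. Nothing at this level hints at theory of mind:

```python
# Single-head self-attention: a handful of matrix multiplications plus a softmax.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8               # 4 tokens, 8-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))

# Learned projections would live here; random stand-ins for illustration.
Wq = rng.normal(size=(d_model, d_model))
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

Q, K, V = x @ Wq, x @ Wk, x @ Wv      # three matmuls
scores = Q @ K.T / np.sqrt(d_model)   # another matmul

# Softmax over each row (numerically stabilized).
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

out = weights @ V                     # one more matmul
print(out.shape)  # (4, 8): same shape as the input, mixed across tokens
```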

37

jamesj t1_j85kxgv wrote

There are different levels of understanding here. Computer scientists and AI researchers know everything about the low level of how it works but are actively investigating the higher levels. It is like how a chemist can know all of the fundamental forces acting between two molecules but still needs to do experiments to see how they behave in different conditions.

83