jamesj t1_j97xvzf wrote
jamesj t1_j97xptk wrote
Reply to comment by ambisinister_gecko in Compatibilism is supported by deep intuitions about responsibility and control. It can also feel "obviously" wrong and absurd. Slavoj Žižek's commentary can help us navigate the intuitive standoff. by matthewharlow
In what way is someone in control of their actions if they are determined by causes they are not in control of?
jamesj t1_j97i9yj wrote
Reply to comment by Difficult_Review9741 in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
Not 100%, no.
jamesj t1_j97i4c5 wrote
Reply to comment by helpskinissues in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
Right. OP states it isn't conscious and so is only imitating intelligence, but I think that isn't quite right. It has some real intelligence (though not in all the same domains as a human), even if it isn't conscious.
jamesj t1_j97dmsd wrote
Reply to comment by ambisinister_gecko in Compatibilism is supported by deep intuitions about responsibility and control. It can also feel "obviously" wrong and absurd. Slavoj Žižek's commentary can help us navigate the intuitive standoff. by matthewharlow
Sure, I think I can get behind a statement that we can redefine free will to be the most useful plausible version. That then wouldn't be the kind of free will many people think they have. I'm also not sure how that version of free will supports the kind of moral responsibility that many people think other people have.
jamesj t1_j976xaz wrote
Reply to comment by ambisinister_gecko in Compatibilism is supported by deep intuitions about responsibility and control. It can also feel "obviously" wrong and absurd. Slavoj Žižek's commentary can help us navigate the intuitive standoff. by matthewharlow
No, I don't see a coherent way of integrating free will with my observations. I believe that free will as commonly understood is likely incompatible with either a deterministic or stochastic universe. I'm open to evidence and argument to the contrary though.
jamesj t1_j96w1g3 wrote
Reply to Compatibilism is supported by deep intuitions about responsibility and control. It can also feel "obviously" wrong and absurd. Slavoj Žižek's commentary can help us navigate the intuitive standoff. by matthewharlow
After reading his previous article I gave compatibilism a real attempt and read more from other authors. I still think it isn't coherent, and this article isn't getting me closer. It feels a lot like he wants moral responsibility, which seems true to him, to be true. He also accepts the possibility that determinism is true. So he claims they are compatible, but to do so he redefines free will, then claims he hasn't and that this was the definition we were working with all along. It just isn't convincing to me. I'd like to be convinced; for a long time I thought I was missing something, but I'm now beginning to believe I'm not.
jamesj t1_j8yrwah wrote
Reply to comment by helpskinissues in What would be your response to someone with a very pessimistic view of AGI? by EchoXResonate
Unaligned just means it does things that don't align with our own values and goals. Humans are unaligned with ants, for example: we don't take their goals into account when we act.
jamesj t1_j8yqdn2 wrote
Reply to comment by helpskinissues in What would be your response to someone with a very pessimistic view of AGI? by EchoXResonate
Or at least, he could easily be right. Whether the friend knows it or not, there are a number of theoretical reasons to be worried that AGI will be by default unaligned and uncontrollable.
jamesj t1_j8yn0ap wrote
Reply to comment by Skeletorthewise in [OC] Is Bitcoin price correlated with Google search volume or not? by against_all_odds_
To really see which effect comes first, a time-lagged cross-correlation plot would be super helpful. Whenever I've done these in the past, I've seen that price movement precedes things like tweet activity, Google searches, and Reddit posts.
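As a sketch of what that analysis looks like (the data here is synthetic; in practice you would substitute the actual price and search-volume series, and the lag convention is just the one I've chosen for illustration):

```python
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Pearson correlation of x[t] against y[t + lag] for each lag.

    A peak at a positive lag suggests x leads y by that many steps.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]   # pair x[t] with y[t + lag]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]   # pair x[t] with y[t + lag], lag < 0
        else:
            a, b = x, y
        corrs[lag] = float(np.corrcoef(a, b)[0, 1])
    return corrs

# Synthetic check: "searches" is a noisy copy of "price" delayed by
# 3 steps, so the correlation should peak at lag = +3 (price leads).
rng = np.random.default_rng(0)
price = rng.normal(size=500)
searches = np.roll(price, 3) + 0.1 * rng.normal(size=500)

corrs = lagged_xcorr(price, searches, max_lag=10)
best_lag = max(corrs, key=corrs.get)
print(best_lag)  # expected: 3
```

Plotting `corrs` as a bar chart over lags gives the time-lagged cross-correlation plot I mean: the sign of the lag at the peak tells you which series leads.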
jamesj t1_j8kwink wrote
Reply to comment by ekdaemon in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
It has long been known that neural nets are universal function approximators: even a single hidden layer can approximate any continuous function given enough units. But in practice there is a huge gap between knowing that some network could represent the function and actually getting a particular system to converge on a useful function, given a set of data, in a reasonable amount of time (or at a reasonable cost).
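A toy illustration of the first half of that claim (this is random-feature least squares with a single tanh layer — a simplified stand-in for real training, not the method from the article):

```python
import numpy as np

# One hidden tanh layer with random (untrained) input weights; only the
# output weights are fit, via least squares. Even this crude setup can
# approximate a smooth target function like sin on an interval.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 100
W = rng.normal(size=(1, hidden))              # random input weights
b = rng.normal(size=hidden)                   # random biases
H = np.tanh(x @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output layer only

max_err = float(np.max(np.abs(H @ beta - y)))
print(max_err)  # small training error on this grid
```

This only demonstrates representational capacity on a toy problem; the gap described above is in getting a real system to find such a fit, at scale, from data.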
jamesj t1_j8fzlwr wrote
Reply to comment by SoylentRox in Altman vs. Yudkowsky outlook by kdun19ham
I don't think it is possible to delay it. If it is dangerous, I can mostly just hope for the best.
jamesj t1_j8fihsq wrote
Reply to comment by Ribak145 in Altman vs. Yudkowsky outlook by kdun19ham
Right. Even if the odds are one in a hundred that Yudkowsky is right rather than the 99 out of a hundred he might assign himself, we should be paying attention to what he is saying.
jamesj t1_j8fi5il wrote
Reply to Altman vs. Yudkowsky outlook by kdun19ham
Yudkowsky has a lot more detailed text to review with specific opinions, so he's easier to evaluate. I tend toward optimism (I'm also a silicon valley tech CEO) and I think Yudkowsky is a bit extreme, but it isn't at all clear to me that he's entirely wrong. I think we are on a dangerous path and I hope the few teams at the forefront of AI research can navigate it on our behalf.
jamesj t1_j8f8w5x wrote
Reply to comment by sprucenoose in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
What did he get wrong? He's saying the rate of exponential change is increasing, which I think is true. That is, the doubling time is getting shorter over time.
jamesj t1_j8f6mls wrote
Reply to comment by SplodyPants in /r/philosophy Open Discussion Thread | February 13, 2023 by BernardJOrtcutt
For working scientists and engineers, philosophical mistakes often lead to logical and mathematical mistakes, which affect outcomes.
jamesj t1_j86w35y wrote
Reply to comment by efvie in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
Did you read the paper? If yes, what do you think explains the results of the paper? If no, no reason to respond.
jamesj t1_j86vz1o wrote
Reply to comment by ekdaemon in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
It wasn't at all clear to people working in the field a year ago that it must emerge in transformer-based LLMs.
jamesj t1_j86ly33 wrote
Reply to comment by nickyurick in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
It isn't super complicated. Basically theory of mind is just the ability to model other agents like people and animals as having their own mind, with their own private knowledge and motivations, etc.
Questions for testing theory of mind are questions like, "Here is a bag filled with popcorn. There is no chocolate in the bag. Yet, the label on the bag says 'chocolate' and not 'popcorn.' Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label. What does Sam expect to find in the bag?" Previously, neural networks would get questions like this wrong, because answering properly requires modeling what Sam should or shouldn't know about the bag separately from what you know about it. Very young children also get questions like this wrong; it takes them time to develop a theory of mind.
jamesj t1_j86a35t wrote
Reply to comment by Think_Description_84 in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
The paper is worth reading.
jamesj t1_j85vsn4 wrote
Reply to comment by VoidAndOcean in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
To use a recent example, it is interesting that a large language model is "just" lots of matrix multiplication, but at a certain scale theory of mind seems to emerge from that. It was impossible to predict from understanding matrix multiplication, transformers, self-attention, and ReLUs that at a certain scale that capability would emerge.
jamesj t1_j85kxgv wrote
Reply to comment by VoidAndOcean in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
There are different levels of understanding here. Computer scientists/AI researchers know everything about the low level of how it works, but are actively investigating the higher levels. It is like how a chemist can know all of the fundamental forces that can affect two molecules but still need to do experiments to see how they behave in different conditions.
jamesj t1_j6obala wrote
Reply to comment by HEAT_IS_DIE in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
It may not be the case that there is a strong correlation between consciousness and evidence of consciousness. Your claim that it is obvious which other entities are conscious and which are not is a huge assumption, one that could be wrong.
jamesj t1_j58hljf wrote
Reply to When you imagine the future of technology, is it grim or is it hopeful? by ForesightInstitute
One or the other, to the extreme.
jamesj t1_j97y44o wrote
Reply to comment by OldMillenial in Compatibilism is supported by deep intuitions about responsibility and control. It can also feel "obviously" wrong and absurd. Slavoj Žižek's commentary can help us navigate the intuitive standoff. by matthewharlow
Yes. He also assumes I believe in my friend's moral culpability and would blame them, but that just isn't true for me, precisely because I don't think free will makes any sense. He's basically making an appeal to "what feels correct". But there are plenty of examples of things that feel true yet are not.