marvinthedog t1_ivzoiuc wrote
Reply to comment by turnip_burrito in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
I have carefully read through your post at least five times throughout the day. Most of your points are still quite confusing to me, so it's difficult for me to address them all, even though they're interesting.

It almost seems like you are saying that it's impossible to even make probabilistic estimates about consciousness. But what about other humans, then: how do you know they are conscious? If the choice stands between a replica of you on a silicon substrate and another human, for which one would you be able to give the more confident estimate of whether they were conscious? You know you are conscious, and a strong case can be made that the entity most identical to you in inner physical functionality is your replica. It therefore seems you should be able to give a more confident consciousness estimate for your replica than for the other human. Do you agree?
marvinthedog t1_ivxruyw wrote
Reply to comment by TheHamsterSandwich in AGI Content / reasons for short timelines ~ 10 Years or less until AGI by Singularian2501
Then one year later it turned out to be right ;-)
marvinthedog t1_ivvbtnr wrote
Reply to comment by turnip_burrito in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
OK, I had to look up the ambiguity around consciousness, because although I had heard of it I didn't know a lot about it: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

I read the first half and found a lot of the concepts a little confusing. I am pretty sure I have read this article before, though it was a long time ago.

I guess I am referring to the actual raw conscious experience: the thing that stands out from all other existing things in an infinitely profound way, the thing that could be argued to be the only thing that holds any real value or disvalue in the universe.

So if I read the article right, that's the hard problem of consciousness, not the easy problem. I don't mean self-consciousness, awareness, the state of being awake, and so on. I mean the actual raw conscious experience. To quote Thomas Nagel: "the feeling of what it is like to be something".

I don't think any truly objective test could ever be devised to determine whether something is conscious (has this raw conscious experience). But I do think high-confidence estimates could be made in many situations, for instance by looking at the internal mechanics and behaviours of systems and comparing them to other systems that we know are conscious.

I would be happy to clarify if you have more questions.

So, going back to my thought experiment: the way I described consciousness in words above is an output behaviour from a human (me). I think we can both agree that this specific output behaviour is directly caused by my being conscious, not merely correlated with it by chance. It's not as if my writing those very specific word sequences has nothing to do with the fact that I am conscious, right?

So, if a replica outputs a similar sequence of words, it's extremely unlikely that this very specific output behaviour arose by random chance and has nothing to do with consciousness whatsoever. Don't you agree?
marvinthedog t1_ivsodgz wrote
Reply to comment by turnip_burrito in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
>Instead, I'd say conscious experience reflects the physical activity, but does not change it.
That's exactly what I meant, but I wasn't clear enough. I agree with everything you say in your second paragraph.
>What is actually much more likely is that the two systems WILL NOT exhibit any measurable distinguishing traits.
I agree with this statement in your last paragraph.

What I meant was this: the fact that humans are conscious beings highly affects (or, perhaps more suitably, reflects or informs) how they think and behave. Say that in a parallel universe evolution produced an alternative species to humans, and that this species didn't evolve consciousness. Because it didn't evolve consciousness, the way it thinks and behaves would differ in major ways from how we think and behave. That's what I mean when I say that the fact that humans are conscious beings highly affects (or reflects, or informs) how they think and behave.

So let's get back to the thought experiment. There is a human and a human replica made out of the same stuff as a calculator, or whatever. The replica hasn't been booted up yet. Before we start it up, the hypothesis is (only for the sake of argument) that the replica won't be conscious. We don't even know whether the replica is recreated in sufficient nano-scale detail to give any output behaviour at all; the primary assumption is that it will just give the equivalent of a "blue screen of death". Then we start it up. Its output behaviour turns out to be indistinguishable from a real human's, which demonstrates that the replica is recreated in sufficient nano-scale detail.

Now, if the hypothesis is that the replica is not conscious, what would the probability be that it would give the extremely specific output behaviour of a typical physical human? Isn't that probability vanishingly small?

Since we seem to agree that consciousness highly reflects/informs how we think and behave, for an unconscious replica to give that exact same output behaviour, out of an infinitely large possibility space, seems infinitely improbable. If instead the hypothesis is that the replica is conscious, then the output behaviour is no longer extremely unlikely, which makes that hypothesis extremely likely.

/Edit: a few words in the last sentence.
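The argument above is essentially a likelihood comparison, which can be framed as a Bayesian update. Here is a minimal toy sketch in Python; every number in it (the prior and both likelihoods) is a made-up illustration, not a measurement:

```python
# Toy Bayesian update for the replica thought experiment.
# All numbers are illustrative assumptions, not measurements.

prior_conscious = 0.5  # prior probability that the replica is conscious
prior_not_conscious = 1 - prior_conscious

# Likelihood of fully human-like output behaviour under each hypothesis.
# If consciousness informs behaviour, an unconscious system landing on the
# exact human-like region of a vast output space is treated as astronomically rare.
p_behaviour_if_conscious = 0.9
p_behaviour_if_not = 1e-12

evidence = (prior_conscious * p_behaviour_if_conscious
            + prior_not_conscious * p_behaviour_if_not)

posterior_conscious = prior_conscious * p_behaviour_if_conscious / evidence
print(f"P(conscious | human-like behaviour) = {posterior_conscious:.12f}")
# ~= 0.999999999999: the "replica is conscious" hypothesis dominates.
```

However small one sets p_behaviour_if_not, the structure is the same: the observation is expected under one hypothesis and astronomically surprising under the other, so the posterior swings almost entirely to the former.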
marvinthedog t1_ivq90ah wrote
Reply to comment by turnip_burrito in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
You do agree that the fact that humans are conscious beings highly affects how they think and behave, right?

Let's say a computable system succeeds in imitating all the inner molecular mechanics of a human to such a degree that its output behaviour is indistinguishable from that of a typical physical human.

Note: the computable system isn't specifically programmed to imitate human behaviour (the way GPT-3 is); it is only programmed to exactly imitate the inner molecular mechanics of a human.

Now, if the fact that humans are conscious beings highly affects how they think and behave, and if (for the sake of argument) the computable system weren't conscious, what would be the probability that it would give the extremely specific output behaviour of a typical physical human? Wouldn't that probability be vanishingly small?
marvinthedog t1_ivgxtwd wrote
Reply to comment by turnip_burrito in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
If the individual minds are of the same type as the collaboratively computed mind (for instance, humans computing a human), then we can be sure, no?
marvinthedog t1_ivg453u wrote
Reply to Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Within a handful of years, AI algorithms might become vastly more conscious than us without us even knowing it. This might be the most important issue in existence.
marvinthedog t1_ivg3er1 wrote
Reply to comment by abudabu in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
> if those people choose to stop doing the computation would we be compelled to consider that “murder” of the AI?
The consciousness of those large-scale computations would be vanishingly small in comparison to the total sum of all the individual consciousnesses participating in them.
marvinthedog t1_itfguq3 wrote
Reply to comment by Kawawaymog in Thoughts on Full Dive - Are we in one right now? by fignewtgingrich
I mean, whether we are the same entity or different entities is just a question of definition and ontology. A forest is a forest and a tree is a tree.
marvinthedog t1_isitn4y wrote
Reply to comment by Quealdlor in How long have you been on the Singularity subreddit? by TheHamsterSandwich
To quote Perry E. Metzger's Twitter post (I don't know who he is, but his arguments are solid):
> Today you need to painstakingly raise an engineer over decades. Tomorrow, you’ll be able to boot up a few thousand if you need them, and the team will happily do 20,000 years of R&D in a few hours. Including R&D on building still better and faster engineers of course.
> What happens when the design and manufacturing work we expect to happen in decades happens in less time than it takes to brush your teeth? What happens when science and engineering advance millions of years in the time it normally takes to get a new cellphone to market?
marvinthedog t1_isgj6by wrote
Reply to comment by Quealdlor in How long have you been on the Singularity subreddit? by TheHamsterSandwich
I mean, we only need AI at a human level of intelligence to completely change everything, and that doesn't seem particularly far away. Today we have AI that can create video from text; if that is not human-like intelligence, I don't know what is. Ten years ago we didn't have AI like this at all. So if we extrapolate ten more years, it seems to me that all bets are off.
marvinthedog t1_isfhtdt wrote
Reply to We've all heard the trope that to be a billionaire you essentially have to be a sociopath; Could we cure that? Is there hope? by AdditionalPizza
My theory is this: if an agent is superintelligent, rational, and conscious, it should rationally realise that its current self is technically separated from its former and future selves just as much as it is separated from other conscious agents. Therefore it should rationally value all conscious entities as much as its own consciousness.

I know the vast majority disagrees with me about the premise this theory rests on (that your current self is separated from your former and future selves just as much as it is from other conscious people). This has been hotly debated in discussions of the teleportation dilemma and mind uploading.
marvinthedog t1_is98vi9 wrote
Reply to comment by Ortus12 in What is the potential for AI vs AI conflict in the future? by iSpatha
This sounds terrible, and it suggests future consciousnesses won't be happier than current ones (assuming AIs will be conscious). WTF, universe, why do you work like this?
marvinthedog t1_irzs3w9 wrote
Reply to comment by Desperate_Donut8582 in How long have you been on the Singularity subreddit? by TheHamsterSandwich
I am not sure it was this subreddit. It's possible this sub was more Kurzweil-oriented back then; Kurzweil is very much an optimist.
marvinthedog t1_irzo7ef wrote
I think I have been on here for about 10 years. Over those years I have gone from extremely optimistic to extremely pessimistic about the outcome of the singularity. :-( My estimated timelines have gotten a little shorter as well; right now I think there are about 5 to 15 years left.

I first got here through https://www.kurzweilai.net/forums/ (now dead). I don't know how I found that forum. I have always been interested in sci-fi and the future, but when I saw The Matrix for the first time it completely changed my world view.
marvinthedog t1_irdo2im wrote
Reply to comment by Cryptizard in How concerned are you that global conflict will prevent the singularity from happening? by DreaminDemon177
If the thing we create gets to have consciousness, and that consciousness gets to experience less suffering and more happiness than we did, then that's a win in my book.

One worrisome clue pointing to future AGIs/ASIs not being conscious is that those types of consciousnesses should be far more common than our type, so it should be much more probable for a random observer to be an AGI/ASI than, for instance, you or me.
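For what it's worth, that observer-counting argument can be made concrete with a toy self-sampling calculation. A minimal sketch, where both population counts are pure assumptions for illustration:

```python
# Toy self-sampling estimate; both counts are made-up assumptions.
human_observers = 1e11   # rough order of magnitude of humans who have ever lived
ai_observers = 1e15      # hypothetical future count of conscious AGI/ASI minds

# Probability that a randomly sampled observer is human rather than an AI.
p_observer_is_human = human_observers / (human_observers + ai_observers)
print(f"P(random observer is human) = {p_observer_is_human:.6f}")  # ~= 0.0001
```

Under those assumptions, finding yourself to be human would be surprising if conscious AGIs/ASIs were ever going to exist in such numbers, which is exactly the worry stated above.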
marvinthedog t1_irbqfes wrote
Reply to How concerned are you that global conflict will prevent the singularity from happening? by DreaminDemon177
I would rather die by an AGI injecting nanobots into my bloodstream than by nuclear war. With the former, I probably get to live a few years longer before it happens, it's probably less painful, and it's a way cooler way to die.
marvinthedog t1_iqwhzim wrote
It's important to remember that we already have considerably better lives than we had 100 years ago, not to mention 1,000 years ago.
marvinthedog t1_iw1qs77 wrote
Reply to comment by turnip_burrito in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
It seems you might have misunderstood me when you said you agree with what I proposed in my thought experiment, because what I actually proposed was that your replica provides much stronger evidence for consciousness than the other human does. You know you are conscious, and the one with the most functionally similar physical neural architecture to you is your replica.

When all three of you describe consciousness in your own words, the neural processes in your head are far more similar to your replica's neural processes than to the other human's. For instance, you and your replica might think mainly in pictures and be wizards at abstract math, while the other human might think mainly in words and be exceptionally good at remembering facts, or whatnot. Also, your written description of consciousness will be much closer to your replica's than to the other human's. So the fact that you seem to think the human provides stronger evidence than the replica is very perplexing to me.

And you seem to think even some animals provide stronger evidence than your replica, which is far more perplexing still. Animals cannot even communicate what consciousness is (at least not in a language we can understand), and their neural architecture is far more different from yours than your replica's is.