marvinthedog t1_jedp9cw wrote
Reply to comment by [deleted] in Superior beings. by aksh951357
What do you mean?
marvinthedog t1_jedp0uc wrote
Reply to Superior beings. by aksh951357
We can't rule out that the superior beings won't be conscious. They might do amazing things with the universe, but if "there is nobody home", these amazing things would just be a play for empty benches.
marvinthedog t1_jaeijgl wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
>If ASI has agency and self reflection, then can the concept of an unmodifiable terminal goal even exist?
Why not?
>Essentially, we would have to build the machine with a built in blind spot of cognitive dissonance that it can not consider some aspects of its own existence.
Why?
If its terminal goal is to fill the universe with paper clips, it might know about all other things in existence, but why would it care, other than insofar as that knowledge helped it fill the universe with paper clips?
marvinthedog t1_jadxbb1 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
I don't think we humans have terminal goals (in the strict sense of the term), and, in that case, that is what separates us from an ASI.
marvinthedog t1_jadujce wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
>What prevents it from changing that directive?
Its terminal goal (utility function). If it changes its terminal goal, it won't achieve its current terminal goal, so that would be a very bad strategy for the ASI.
marvinthedog t1_jadt1wy wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
>There must be some boundary conditions for behaviors which it is not allowed to cross.
That is not what I recall from what I have read about the alignment problem. I don't see why a superintelligence that is properly aligned with our values would need any boundaries.
marvinthedog t1_jadplr2 wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
I don't think the strategy is to cage it but to align it correctly with our values, which is probably extremely, extremely, extremely difficult.
marvinthedog t1_j9tb8y4 wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
I would really like to have a deep discussion with some of these machine learning researchers, because I cannot in a million years fathom how they can hold such a different worldview.
marvinthedog t1_j9ec3xu wrote
Reply to Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
If you had affordable technology capable of experiencing real conscious happiness and suffering, implemented in software, wouldn't you be morally obliged to instantiate as much conscious bliss as you could afford?
marvinthedog t1_j4x3xmx wrote
Reply to comment by CyberAchilles in AI doomers everywhere on youtube by Ashamed-Asparagus-93
This is a really good comment. Why does it have 3 downvotes?
marvinthedog t1_j3708k5 wrote
Reply to comment by visarga in 2022 was the year AGI arrived (Just don't call it that) by sideways
I agree, but that is not what matters ultimately.
marvinthedog t1_j317tg3 wrote
Reply to comment by Ginkotree48 in 2022 was the year AGI arrived (Just don't call it that) by sideways
I do think it will be quite painless, because that's what experts on this scenario seem to think. I am more worried about the increasingly turbulent time in society leading up to that point. I just want to avoid stress and have a good time. Another big problem is that I am too caught up in other stressful (but comparatively minor) things in my life right now, when I should be focusing on being happy instead.
I wouldn't say I have actual anxiety about AI doom, yet. One thing that I think has helped me avoid this anxiety is that I have done extensive philosophizing about "the teleportation dilemma", which has caused me to view the concept of death completely differently.
In a way, I almost worry more about the overall level of conscious happiness throughout all of time and space, throughout all dimensions/simulations/realities, because that is the ONLY thing that ultimately matters in the end. This got deep, but this philosophy helps me cope with impending doom.
marvinthedog t1_j31440d wrote
Reply to comment by summertime_taco in 2022 was the year AGI arrived (Just don't call it that) by sideways
I don't think Einstein was that much smarter, though. I saw a video with Sabine Hossenfelder where she said something like: Einstein just happened to be working on problems that no one else had been focusing on, and those particular problems turned out to be very important.
marvinthedog t1_j313jue wrote
Reply to comment by Ginkotree48 in 2022 was the year AGI arrived (Just don't call it that) by sideways
I definitely share your concern. I feel like a doomsday nutter. I can't talk to anybody about it, not even my own family. If I talk to anyone, the risk is actually that I might convince them. Well, I did bring it up briefly with my co-worker over a beer, and he was actually very open to the possibility. But he is convinced that we will be "more or less" doomed by global warming on a longer timeline, so it felt right to bring it up.
marvinthedog t1_j3123ct wrote
Reply to comment by BellyDancerUrgot in 2022 was the year AGI arrived (Just don't call it that) by sideways
If the AI algorithms of 2021 were remotely comparable to a dog, it seems to me that we are getting really, really, really close.
marvinthedog t1_j2351om wrote
Reply to comment by alphabet_order_bot in How many users in this sub are AI? by existentialzebra
The one comment in here that is not from a bot.
marvinthedog t1_iysujiw wrote
Reply to comment by Candid-Register-6718 in Don't think you will make it to Longevity Escape Velocity? No worries: meet Nectome, the company promising to preserve your brain and memories. by Redvolition
Ah, I see.
>We simply do not know the context so we can’t know if it is important to have continuity or if the mind upload would be the same thing. And if it’s not the same thing maybe it’s better to just die and have body decompose back into nature and become reborn in the cycle of life this way.
For me, I can't find any argument to even assume that the upload, for all intents and purposes, wouldn't be the same thing. I would probably be too afraid to do it myself, but only for the same reason people fear flying or taking the elevator, I would say.

What I have been reading about objective morality and value theory has really puzzled me lately, though. I have long thought it was a given that objective value/disvalue was directly proportional to how much pleasure/suffering consciousnesses experienced throughout all of time and space. But during the last couple of years I have come to realize that this doesn't seem to be the general consensus. This is very puzzling to me :-P
marvinthedog t1_iysgdhu wrote
Reply to comment by Candid-Register-6718 in Don't think you will make it to Longevity Escape Velocity? No worries: meet Nectome, the company promising to preserve your brain and memories. by Redvolition
>By this logic nothing at all matters.
>
>I agree with your premise that we don’t really have any possibility to verify anything other than something conscious exists.
>
>But your conclusion that continuity doesn’t matter could be extended to anything.
>
>If we can’t verify continuity why even bother about this mind uploading stuff?
>
>Maybe you switched consciousness with a bird today and your memories are just implanted.
Even if we disregard most of the value reasoning we have been discussing, I still don't see how you would arrive at the conclusion that nothing would matter.

If other conscious moments exist (regardless of whose brain they belong to), isn't it extremely reasonable to assume they hold value? After all, they are conscious. And if the decisions of a current conscious moment can influence other future conscious moments, we can have an influence on value. Ergo, things matter.

But then you mentioned that I am wrong about free will and determinism, or something. I don't see what this has to do with anything. If anything, that would affect both your worldview and mine equally, because then nothing would actually matter in either of them, if I have understood you right.
marvinthedog t1_iyrmkhw wrote
Reply to comment by Candid-Register-6718 in Don't think you will make it to Longevity Escape Velocity? No worries: meet Nectome, the company promising to preserve your brain and memories. by Redvolition
>If you can’t decide what to eat for dinner. Not all conscious moments are of equal matter for you.
What the current conscious moment has the most control over, with the help of memory, is specifically the later conscious moments that will cook and eat the dinner. By focusing on increasing value for the specific conscious moments that the current conscious moment has the most control over, it also contributes towards increasing the overall value of all conscious moments in the universe.

>When you say all moments are of equal important. You can say that as a personal truth.

My main point is that there is no specific reason why your future moments would matter more than person 2's moments, or why person 2's moments would matter more than person 3's moments.
Yes, from the context of being the current conscious moment in your particular life, this moment has the most control over specifically your future moments, so those future moments are this current moment's main responsibility. But the fact that something has more control to influence one thing rather than another doesn't say anything about which things matter most in a true sense.
​
>Your description of memory is build on the assumption that time passes in a linear manner. There is no consensus on this either. Actually the current academic understanding suggests that past and future exist simultaneously.
I don't know what point you are making here. I don't see how this changes anything.
marvinthedog t1_iyqy9z0 wrote
Reply to comment by Candid-Register-6718 in Don't think you will make it to Longevity Escape Velocity? No worries: meet Nectome, the company promising to preserve your brain and memories. by Redvolition
All conscious moments throughout all of space and time in the universe matter equally. Don't you agree?

Memory matters because memory is a tool to create value for conscious moments later in time. (You learn something that is useful later.)

If both of these are true, then how can you arrive at the conclusion that nothing matters?
marvinthedog t1_iyql8gu wrote
Reply to comment by Clean_Livlng in Don't think you will make it to Longevity Escape Velocity? No worries: meet Nectome, the company promising to preserve your brain and memories. by Redvolition
The previous and next moments most likely are conscious. It's just that the current moment has no way of verifying it.
/Edit: By the way, I don't know what point you were trying to make.
marvinthedog t1_iyqav6z wrote
Reply to comment by cnewman11 in Don't think you will make it to Longevity Escape Velocity? No worries: meet Nectome, the company promising to preserve your brain and memories. by Redvolition
You have no way of verifying that you were conscious one second ago, or that you will be conscious one second from now. The only thing you can verify with 100% certainty is that you are conscious in this very moment, simply because each conscious moment only experiences itself. So what difference would it make whether or not your future self is physically connected to your current self over time?
marvinthedog t1_ixycgfq wrote
Could you turn GPT-3 into an advisor that helps you with specific situations in your life? I have no experience with GPT-3.
marvinthedog t1_iw8b0sv wrote
Reply to comment by turnip_burrito in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
I have read your previous response, which you updated, and your last response, which you also updated. At this point I don't think we are going to get a lot further. This discussion really helped me clarify my own mental models about consciousness, so it was very useful. Thanks for an interesting discussion!
marvinthedog t1_jedq6mb wrote
Reply to comment by [deleted] in Superior beings. by aksh951357
That's a separate question from the one OP seemed to be asking. If we can co-exist with the superior beings, then I guess AI alignment and our future turned out to be successful.