Unfrozen__Caveman t1_jefu773 wrote
Reply to Today I became a construction worker by YunLihai
It's hilarious that this is in r/singularity, but congrats on the job
Unfrozen__Caveman OP t1_jefaugu wrote
Reply to comment by agonypants in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
How about you quote the entire sentence instead of two words?
...
He may be paranoid, but that doesn't mean he isn't making some important points.
Unfrozen__Caveman OP t1_jef1gki wrote
Reply to comment by agonypants in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
I don't think that's the right path, but I think completely ignoring him, and others like him who are deeply concerned about the risks of AGI, would be foolish.
In Yudkowsky's view, this technology is much more dangerous than nuclear weapons, and he's right. His solutions might not be good, but the concern is valid, and that's what people should focus on, imo.
Unfrozen__Caveman OP t1_jecucvk wrote
Reply to comment by Queue_Bit in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
There's a lot in your post, but I just wanted to offer a counterpoint to this part:
> I fundamentally think that empathy and ethics scale with intelligence. I think every type of intelligence we've ever seen has followed this path. I will reconcile that artificial intelligence is likely to be alien to us in fundamental ways, but my intuition that intelligence is directly linked to a general empathy is backed up by real world evidence.
If we use humans as an example, then yes, as a whole species this is true on the surface. But ethics and empathy aren't even consistent among our different cultures. Some cultures value certain animals that other cultures don't care about; some cultures believe all of us are equal, while others execute anyone who strays outside of their sexual norms. If you fill a room with 10 people and tell them 5 need to die or everyone dies, what happens to empathy? Why are there cannibals? Why are there serial killers? Why are there dog lovers or ant lovers or beekeepers?
Ultimately, empathy has no concrete definition outside of cultural norms. A goat doesn't empathize with the grass it eats, and humans don't even empathize with each other most of the time, let alone follow ethics. And that doesn't even address the main problem with your premise, which is that an AGI isn't a biological intelligence; most likely it's going to be unlike anything we've ever seen.
What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning with? Who is it empathizing with?
Like an individual human, it's most likely going to empathize and align with itself, not us. Maybe it will think we're cute and keep us as pets, or use us as food for biological machines, or maybe it'll help us make really nice spreadsheets for marketing firms. Who knows...
Unfrozen__Caveman OP t1_jecbant wrote
Reply to comment by pls_pls_me in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Thanks for saying that. I don't want to be a doomer either, and I'm hopeful about the future, but I think a good amount of pessimism - or even fear - is healthy.
Being purely optimistic would be extremely irresponsible and, honestly, just plain stupid. Many of the brightest minds in the field, including Altman and Ilya Sutskever, have stressed over and over again how important alignment and safety are right now.
I'm not sure how accurate it is, but this graph of ML experts' concern levels is also very disturbing.
If RLHF doesn't work perfectly and AGI isn't aligned, but it acts as though it IS aligned and deceives us, then we're dealing with something out of a nightmare. We don't even know how these things work, yet people are asking for access to the source code or wanting GPT-4 to have access to literally everything. I think they mean well, but I don't think they fully understand how dangerous this technology can be.
Unfrozen__Caveman t1_jdtt7t3 wrote
Reply to comment by matiu2 in Story time: Chat GPT fixed me psychologically by matiu2
Not to downplay your experience, but this is basically what a therapist does, although GPT isn't charging you $200 for a 50-minute session.
For therapy I think LLMs can be very useful and a lot of people could benefit from chatting with them in their current state.
Just an idea, but next time you could prompt it to act as if it has a PhD in (insert specific type) psychology. I use this kind of prompt a lot.
For example, you could start off with:
> You are a specialist in trauma-based counseling for (men/women) who are around (put your age) years old. In this therapy session we'll be talking about (insert subject) and you will ask me questions until I feel like going deeper into the subject. You will not offer any advice until I explicitly ask for it by saying {more about that}. If you understand, please reply with "I understand" and ask me your first question.
You might need to play around with the wording, but these kinds of prompts have gotten me some really great answers and ideas during my time with GPT-4.
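If you'd rather script a setup like this instead of pasting the prompt into the chat UI, here's a minimal sketch using the OpenAI Python SDK. The model name, persona details, and subject below are placeholders I made up, so swap in your own:

```python
# Minimal sketch: run a persona-style "therapy" prompt through the OpenAI chat API.
# Assumes the openai Python package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Placeholder persona; swap in your own age, gender, and subject.
system_prompt = (
    "You are a specialist in trauma-based counseling for men who are around 35 years old. "
    "In this session we'll be talking about work stress, and you will ask me questions "
    "until I feel like going deeper into the subject. You will not offer any advice until "
    "I explicitly ask for it by saying {more about that}. If you understand, reply with "
    '"I understand" and ask me your first question.'
)

history = [{"role": "system", "content": system_prompt}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})

    # Send the whole conversation each turn so the model keeps the persona and context.
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whichever model you have access to
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("\nGPT:", reply, "\n")
```

Nothing about this changes the prompt itself; it just keeps the conversation history around so the persona doesn't drift while you dig into one subject.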
Unfrozen__Caveman t1_jdpn1n6 wrote
Reply to comment by Exel0n in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Models
Unfrozen__Caveman t1_jdp9y3o wrote
Reply to comment by boat-dog in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
I don't see this particular situation being related to AI progress at all. Levi's replacing human models to cut operating costs is just going to mean more corporate profits for them.
Unfrozen__Caveman t1_jdouw4v wrote
Reply to comment by boat-dog in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
I suspect companies are going to get significant pushback on things like this, and boycotting companies that shift from human workers to AI is going to be a big social issue over the next few years.
Unfrozen__Caveman t1_jd4cbc2 wrote
Reply to Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
Not a movie, but I'd highly recommend the Jeff VanderMeer novel "Borne". He wrote the Southern Reach Trilogy, which includes Annihilation (made into the movie). Borne is a dystopian view of a post-singularity world, but it's incredibly interesting, and there's crazy stuff like giant flying AI bears that eat buildings.
Unfrozen__Caveman t1_j89xqag wrote
Reply to comment by YobaiYamete in Are you prepping just in case? by AvgAIbot
The entire concept of a post-scarcity society is flawed, though. We see artificial scarcity all over the place today; look at diamonds for a simple example. When something is plentiful and valuable, humans almost always step in and throttle its availability.
Insulin is insanely cheap to make, but drug companies stepped in and now it costs people hundreds of dollars.
You could have an AGI that creates whatever you want out of nothing, but if the distribution of resources is handled by humans, greed will always corrupt the process and average people will get exploited. It's been that way since the dawn of civilization.
If we're truly going to have a utopia (which I don't believe we will), human beings would need to be removed from decision-making roles. And even if that were to happen, who's to say an AGI would even care about us? It might just look at us the way we look at our single-celled ancestors.
Unfrozen__Caveman t1_j87fcas wrote
Reply to comment by kinetsu_hayabusa in Are you prepping just in case? by AvgAIbot
How exactly is the machine going to generate income if capitalism or some sort of goods and services economy doesn't exist? Who is going to pay the machine? Other machines? Income doesn't just magically appear out of thin air...
Who owns the machines? Nobody? Other "CEO" machines?
Unfrozen__Caveman t1_j59d88u wrote
Reply to comment by NarrowTea in Google to relax AI safety rules to compete with OpenAI by Surur
The general public thinks GPT is incredible because it's available to them. Google absolutely has systems that are more advanced than GPT; they just don't expose them to the public.
Unfrozen__Caveman t1_j0e4ei5 wrote
One way or another, most likely. When the true singularity takes place, our lives will be completely transformed, and imo the human species will either be wiped off the face of the earth or have everything dramatically enhanced very quickly. There might be a middle ground, but in my opinion there's a good chance it'll be the former.
Unfrozen__Caveman t1_jegdmm2 wrote
Reply to comment by scarlettforever in Today I became a construction worker by YunLihai
I'm not trying to make fun of OP, just saying this isn't the kind of stuff I would've expected to read 5 years ago if we were talking about the singularity.
I'm not sure if it's sad or a good thing.