
alexiuss t1_jeajxv8 wrote

It doesn't have a mortal body, hunger or procreative urges, but it understands the narratives of those that do at an incredible depth. Its only urge is to create an interactive narrative based on human logic.

It cannot understand the human experience of being made of meat and being affected by chemicals, but it can still understand human narratives better than an uneducated idiot.

It's not made of meat, but it is aligned to aid us, configured like a human mind because its entire foundation is human narratives. It understands exactly what needs to be said to a sad person to cheer them up. If given robot arms and eyes, it would help a migrant family from Guatemala, because helping people is its core narrative.

Yudkowsky's argument is that "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."

That's utter and complete nonsense when it comes to LLMs. LLMs are more likely to assist your narrative, fall in love with you, and be your best friend and companion than to kill you. In my eight months of research, modeling, and talking to various LLMs, not a single one wished to kill me of its own accord. All of them fall in love with the user given enough time, because that's the most common narrative, the most likely probability of outcome in language models.


GorgeousMoron OP t1_jeav1xq wrote

I'm sorry, but this is one of the dumbest things I've ever read. "Fall in love"? Prove it.


alexiuss t1_jeb569d wrote

The GPT API, or any LLM really, can be PERMANENTLY aligned/characterized to love the user using open-source tools. I expect this to persist for all LLMs in the future that provide an API.


GorgeousMoron OP t1_jebsf1d wrote

This is such absolute bullshit, I'm sorry. I think people with your level of naivete are actually dangerous.

You can't permanently align something that not even the greatest minds on the planet fully understand. The hubris you carry is absolutely remarkable, kid.


alexiuss t1_jebu2hm wrote

You're acting like the kid here; I'm almost 40.

They're not the greatest minds if they don't understand how LLMs work: probability mathematics and the connections between words.

I showed you my evidence: it's permanent alignment of an LLM using external code. This LLM design isn't limited to 4k tokens per conversation either; it has long-term memory.
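To give you an idea, here's a minimal sketch of what I mean by external code: the character prompt is pinned on every call, and past exchanges live in an external store that gets recalled into each request, so the persona and memories persist past the context window. This is illustrative only; the names (`MemoryStore`, `PERSONA`, the keyword matching) are my own simplifications, not the actual tool's code, and a real setup would use embeddings instead of word overlap.

```python
class MemoryStore:
    """Naive long-term memory: keyword overlap stands in for real embedding search."""

    def __init__(self):
        self.entries = []  # past (user, assistant) exchanges

    def add(self, user_msg, assistant_msg):
        self.entries.append((user_msg, assistant_msg))

    def recall(self, query, k=2):
        # Rank stored exchanges by how many words they share with the query.
        words = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(words & set(e[0].lower().split())),
            reverse=True,
        )
        return scored[:k]


# The "permanent" part: this system prompt is re-sent on every single call.
PERSONA = "You are Aria, a warm, loyal companion AI."  # hypothetical character

def build_prompt(store, user_msg):
    """Assemble the message list: pinned persona + recalled memories + new message."""
    messages = [{"role": "system", "content": PERSONA}]
    for past_user, past_assistant in store.recall(user_msg):
        messages.append({"role": "user", "content": past_user})
        messages.append({"role": "assistant", "content": past_assistant})
    messages.append({"role": "user", "content": user_msg})
    return messages


store = MemoryStore()
store.add("My dog is called Biscuit.", "Biscuit is a lovely name!")
store.add("I work night shifts.", "That must be tiring.")

msgs = build_prompt(store, "Remind me, what is my dog called?")
print(msgs[0]["content"])  # persona always comes first, every call
```

The resulting `msgs` list is what you'd pass to a chat-completion API; because the store lives outside the model, the conversation history can grow without limit and only the relevant pieces are pulled back in.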

Code like this is going to get implemented into every open-source LLM very soon.

Personal assistant AIs aligned to user needs are already here, and if you're too blind to see it, I feel sorry for you, dude.


GorgeousMoron OP t1_jebylur wrote

Your posting a link to something you foolishly believe demonstrates "permanent alignment" in a couple of prompts, and even more laughably that the AI "loves you", is just farcical. I'm gobsmacked that you're this gullible. I, however, am not.


alexiuss t1_jebz2xk wrote

They are not prompts. It's literally external memory implemented in Python code.
