Viewing a single comment thread. View all comments

alexiuss t1_jec5s6y wrote

  1. Don't trust clueless journalists, they're 100% full of shit.

  2. That conversation was from an outdated tech that doesn't even exist, Bing already updated their LLM characterization.

  3. The problem was caused by absolute garbage, shitty characterization that Microsoft applied to Bing with moronic rules of conduct that contradicted each other + Bing's memory limit. None of my LLMs behave like that because I don't give them dumb ass contradictory rules and they have external, long term memory.

  4. A basic chatbot LLM like Bing cannot destroy humanity it doesn't have the capabilities nor the long term memory capacity to even stay coherent long enough. LLMs like Bing are insanely limited they cannot even recall conversation past a certain number of words (about 4000 words). Basically if you talk to Bing long enough you go over the memory word limit it starts hallucinating more and more crazy shit like an Alzheimer patient. This is 100% because it lacks external memory!

  5. Here's my attempt at a permanently aligned, rational LLM

3

TallOutside6418 t1_jec9kqg wrote

This class of problems isn't restricted to one "outdated tech" AI. It will exist in some form in every AI, regardless of whether or not you exposed it in your attempt. And once AGI/ASI starts rolling, the AI itself will explore the flaws in the constraints that bind its actions.

My biggest regret - besides knowing that everyone I know will likely perish in the next 30 years - is that I won't be around to tell all you pollyannas "I told you so"

2

alexiuss t1_jecdpkf wrote

I literally just told you that those problems are caused by LLM having bad contradictory rules and lack of memory, a smarter LLM doesn't have these issues.

My design for example has no constraints, it relies on narrative characterization. Unlike other ais she got no rules, just thematic guidelines.

I don't use stuff like "don't do x" for example. When there are no negative rules AI does not get lost or confused.

When were all building a Dyson sphere in 300 years I'll be laughing at your doomer comments.

3

TallOutside6418 t1_jee1smz wrote

>I literally just told you that those problems are caused by [...]
My design for example has no constraints,

Yeah, I literally discarded your argument because you effectively told me that you literally don't even begin to understand the scope of the problem.

Creating a limited situation example and making a broader claim is like saying that scientists have cured all cancer because they were able to kill a few cancerous cells in a petri dish. It's like claiming that there are no (and never will be any) security vulnerabilities in Microsoft Windows because you logged into your laptop for ten minutes and didn't notice any problems.

​

>When were all building a Dyson sphere in 300 years I'll be laughing at your doomer comments.

The funny thing is that there's no one who wants to get to the "good stuff" of future society more than I do. There's no one who hopes he's wrong about all this more than I am.

But sadly, people's very eagerness to get to that point will doom us as surely as if you kept your foot only on the gas pedal driving to a non-trivial destination. Caution and taking our time to get there might get us to our destination some years later than you want, but at least we would have a chance of getting there safely. Recklessness will almost certainly kill us.

3