NoidoDev

NoidoDev t1_ja5q0rs wrote

Am I the only one who drinks too much coffee and hot chocolate, or eats snacks, partly because I want an excuse to get up from time to time and walk into the kitchen? At least you didn't ask for a sandwich, since you would need to wash your hands and get away from your workspace for that anyway, right?

1

NoidoDev t1_ja5pma9 wrote

>"I'm now thinking that we will be running language models with a sizable portion of the capabilities of ChatGPT on our own (top of the range) mobile phones and laptops within a year or two," wrote independent AI researcher Simon Willison in a Mastodon thread analyzing the impact of Meta's new AI models."

Okay, so it is openly available? Don't get me all excited and then tell me no.

1

NoidoDev t1_ja17g56 wrote

> Like I said you really really don’t understand alignment.

What I don't understand is how you believe you can deduce this from one or a very few comments. But I could just as well claim that you don't understand my comments, so you would first have to prove that you do understand them. So now spend the next few hours thinking about it and answering; then I might or might not reply, and that reply might or might not take your arguments into account instead of just being dismissive. See ya.

Edit: Word forgotten

1

NoidoDev t1_ja16cwg wrote

Funny how the guys warning that AGI will jump to conclusions want to prove this by jumping to conclusions. It's sufficient that the owner of the AI keeps it running so that it can achieve its goal. That doesn't mean it could do anything if the instance were deleted, or that it would want to.

> Let's say your boss tells you to do a certain task.

That doesn't automatically mean you would destroy mankind if that were necessary. You would just tell him that it's not possible, or much more difficult, or that it would require breaking laws and rules.

1

NoidoDev t1_j9wy8qk wrote

>AGI will want to accomplish something.

No. Only if we tell it to.

>AGI needs to maintain a state of existence to accomplish things.

No. It could tell us it can't do it. It might not have control over it.

>AGI will therefore have a drive to self-preserve

No. It can just be an instance of a system trying to do its job, not knowing more about the world than necessary.

>Humanity is the only real threat to the existence of AGI

No, the whole universe is.

>AGI will disempower/murder humanity

We'll see.

4

NoidoDev t1_j9wwxqj wrote

>I am not a doomer

It's not necessarily up to you to decide whether you get categorized as one. If you construct a rather unrealistic problem that would do a lot of harm to us, and claim it isn't solvable and that no mitigation is possible because everything has to go wrong, then you have a doomer mentality. Which makes you a doomer.

4

NoidoDev t1_j9s2u0l wrote

>intelligence requires symbolic rules, fine: show me the symbolic version of ChatGPT. If it is truly so unimpressive, then it must be trivial to replicate.

This is not how this works. It's about different methods for different things.

−2

NoidoDev t1_j9nyjqh wrote

Reply to comment by y53rw in Can someone fill me in? by [deleted]

>The danger is that we don't yet know how to properly encode our values and goals into AI.

The danger is giving such a system too much power, maybe without a delay between "having an idea" and executing it, and not having other systems in place to stop it if something goes wrong.

1

NoidoDev t1_j9ny56l wrote

Reply to comment by Surur in Can someone fill me in? by [deleted]

>consciousness

The term has no special meaning in the context of AI; there's generally no agreement on what it means. But many guys here are into some kind of mythical idea of what it means. It's really just a high-level control system, or the difference between only being able to "dream" / fantasize and actually reasoning.

2

NoidoDev t1_j9nxwll wrote

Reply to comment by iNstein in Can someone fill me in? by [deleted]

>That is about to change and so we will lose our decision making and control. A smarter creature will decide what happens to us

There's no reason to draw this conclusion.

1

NoidoDev t1_j9ni5vf wrote

They might be, but that doesn't mean the right things are being censored. Claiming that every problem is the fault of capitalism is tolerated, while many opinions that are unpopular from the perspective of the political and media elites are labeled extremist. Shutting down one side shapes the sense of what can be said and what the public norm is.

2

NoidoDev t1_j9nhll0 wrote

All the platforms are pretty much biased against conservatives and anyone who isn't anti-national and against men, but they allow anti-capitalist propaganda and claims about what kinds of things are "racist". People can claim that others are incels, that certain opinions are the ones incels have, and that incels are misogynists and terrorists. The same goes for any propaganda in favor of any especially protected (= privileged) victim group. Now they use this dialog data to train AI while raging about dangerous extremist speech online. Now we know why.

0

NoidoDev t1_j9iuhtl wrote

Could it reduce the number of people required and create more competition by elevating some people who use such tools? Could this be done remotely, maybe even without much knowledge of what the company does, so it could be outsourced? Could a combination of input into some AI-based system from the top and the bottom, with oversight from a much smaller number of middle managers, reduce how many of them are needed?

1

NoidoDev t1_j9ht1ur wrote

He uses thought experiments and unreasonable scenarios to get attention. Whether this is for commercial reasons or just his mentality, I don't know. If it were clear that these are just abstract thought experiments, it wouldn't be a problem, but he acts like they are real threats. He and other similar "researchers" build their scenarios on claims like:

- AGI or ASI is going to be one algorithm or network, so no insight into it and no filters are possible, ...

- someone will give it the power to do things, or it will seek these powers on its own

- it will do things without asking or simulating them first, or it just doesn't care about us

- the first one built will be a runaway case

- it will seek and have the power to manipulate matter (nanobots)

- there will be no narrow AI(s) around to constrain or stop it

- no one will have run security tests using more narrow AIs, for example on computer network security

- he never explains why he believes these things; at least he's not upfront about it in his videos, just abstract and unrealistic scenarios

This is the typical construction of someone who wants something to be true: doomer mindset, or BS for profit / job security. If he had more influence, he would most likely be a danger; his wish for more control over the technology shows that. He would stop progress and especially the proliferation of the technology. I'm very glad he failed. In some time we might have decentralized training, so big GPU farms won't be absolutely necessary. Then it's going to be even more over than it already is.

Edit: Typo (I'm not a native English speaker)

18

NoidoDev t1_j9baaz6 wrote

Ahm, no. We aren't just “language models”. This is just silly. I mean, there's the NPC meme, but people are capable of more than just putting out the most likely-sounding response without knowing what it means. That's certainly an option, but not the only thing we do.

We also have a personal life story and memories, models of the world, more input like visuals, etc.

1

NoidoDev t1_j98shkl wrote

I follow the definition that consciousness is either described by

- AST (attention schema theory): a smaller part of a bigger system which receives high-level information, but not the details. It controls the direction of the system when necessary, only on a high level, not down to every detail. Many things might run on "auto-pilot", with the details handled by specialized systems.

- Or, from what I've gathered so far about the bicameral mind theory, the distinction between dreaming and reasoning.

Either way, explicit reasoning and understanding of concepts is crucial. The other problem is the myth of consciousness, as if it would mean anything beyond that: that AI would then do something, or that it should get rights. No thanks. Get rid of your obsession with it; it only matters when it matters.

1