
purepersistence OP t1_j6tua19 wrote

I see the threat, and like millions of others won't let that happen. It's not like we don't know how our computers work. Hell, ChatGPT is just a language grab bag. If you drill down on that code you can understand every line of it. And "intelligence" is far from what you'll find. I maintain that any autonomy will be by design, and like I say, the fears in the souls of billions of people aren't going to let your future get started, because the possible dangers will be easily imagined.

Think about how we humans are. Not only will the possible dangers be anticipated, a whole lot of impossible ones will be too. Will not happen.

−7

Surur t1_j6tv455 wrote

> I see the threat, and like millions of others won't let that happen.

You are not in charge of McDonald's or Intel, and we are not talking about ChatGPT taking over the world, but about some future AGI.

For a good analogy, think of Chinese chipsets in our technology. We let that happen, despite concerns around China implanting backdoors.

> If you drill down on that code you can understand every line of it.

BTW, you may understand the code, but you probably can't understand the weights. Just like I can bash open your skull and see your neurons, but I can't read your thoughts by doing that.
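To make that concrete, here's a toy layer in Python (a sketch with invented sizes, nothing from ChatGPT's actual code). Every line of code is readable; the behavior lives in half a million unlabeled numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((768, 768))  # 589,824 opaque numbers ("weights")
b = rng.standard_normal(768)

def layer(x):
    # One perfectly understandable line of code...
    return np.maximum(0.0, x @ W + b)  # ...whose output is decided entirely by W and b

y = layer(rng.standard_normal(768))  # 768 floats; the code can't tell you what they "mean"
```

Reading this code is like looking at the neurons: it tells you the mechanism, not the thought.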

16

GPT-5entient t1_j6uksjv wrote

>If you drill down on that code you can understand every line of it.

You should try it. It's 175 billion parameters (numbers) that drive how ChatGPT responds. Let us know how it's going!

Machine learning models have been black boxes for a while now, and GPT-3 is one of the biggest ones...
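For scale, some back-of-the-envelope arithmetic (assuming, generously, you could inspect one parameter per second):

```python
params = 175e9                     # ~175 billion parameters in GPT-3
seconds_per_year = 60 * 60 * 24 * 365
years = params / seconds_per_year  # one parameter per second, no breaks
print(f"{years:,.0f} years")       # ~5,549 years
```

And even then you'd only have looked at the numbers, not understood them.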

12

purepersistence OP t1_j6w2m21 wrote

>You should try it. It's 175 billion parameters (numbers) that drive how ChatGPT responds.

You don't get the difference between parameters and lines of code.

0

CertainMiddle2382 t1_j6vwxpj wrote

We have absolutely no clue what the latent space of those models represents.

Their own programmers have been trying to figure that out, even with pre-Transformer models, without much success.

There is a huge incentive to do so, especially for time-critical and vital systems like those in medicine or machine control.

Above a few layers, we really don't have a clue what the activation patterns represent…
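Concretely, here's what "inspecting" an activation pattern gets you (a toy two-layer net with made-up sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((10, 32))  # layer 1 weights
W2 = rng.standard_normal((32, 4))   # layer 2 weights

def forward(x):
    h = np.tanh(x @ W1)  # the hidden "activation pattern"
    return h, h @ W2

h, out = forward(rng.standard_normal(10))
print(np.round(h[:5], 2))  # five anonymous floats -- nothing says what each one represents
```

You can print every activation, but there are no labels mapping dimensions to concepts.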

3

Mokebe890 t1_j6wdksx wrote

Ofc it will happen, humans are weak. Artificial intelligence will surpass us in everything. A mere language model like ChatGPT is way better than the average student; it just lacks reasoning. And what are you going to do? Throw bricks? Our only way is to merge with the machine, don't fight it.

1

Quealdlor t1_j6wtfco wrote

We need to upgrade, improve, enhance, augment humans. Transhumanism ftw!

1

Mokebe890 t1_j6x207h wrote

From the moment I understood the weakness of my flesh, it disgusted me

1

DukkyDrake t1_j6ut4m2 wrote

I wouldn't worry about ChatGPT.

Language abilities != Thinking

0

CertainMiddle2382 t1_j6vwbrd wrote

Well, we don't actually know what "thinking" is.

And as the most abstract human production, language seems a great place to find out…

4

purepersistence OP t1_j6w2xl7 wrote

Starting with language is a great way to SIMULATE intelligence or understanding by grabbing stuff from a bag of similar text that's been uttered by humans in the past.

The result will easily make people think we're ahead of where we really are.

2

CertainMiddle2382 t1_j6wwyvp wrote

“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”

In all honesty, I don't really know if I'm really thinking/aware, or just a biological neural network interpreting itself :-)

2

purepersistence OP t1_j6x005a wrote

>“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”

The problem is people believe that. With ChatGPT it just ain't so. I've given it lots of coding problems. It frequently generates bugs. I point out the bugs and sometimes it corrects them. The reason they were there to begin with is that it didn't have enough clues to grab the right text. Just as often or more, it agrees with me about the bug, but its next change fucks up the code even more. It has no idea what it's doing. But it's still able to give you a very satisfying answer to lots and lots of queries.

1