Comments


Darustc4 t1_j9qp8wf wrote

"There is infinite demand for deeply credentialed experts who will tell you that everything is fine, that machines can’t think, that humans are and always will be at the apex, people so commited to human chauvinism they will soon start denying their own sentience because their brains are made of flesh and not Chomsky production rules. All that’s left of the denialist view is pride and vanity. And vanity will bury us."

Holy shit.

47

Denny_Hayes t1_j9sjetc wrote

I was thinking: in the history of ideas, I have always heard that both heliocentric theory and the theory of evolution were a blow to human pride, because they meant giving up on the ideas that 1. we were the center of the universe and 2. we were different from and above every other living being on Earth. Instead, we had to face the reality that we are just on a random rock in a corner of an incomprehensibly large place, and are just another, somewhat more intelligent animal, but just as much an animal as any other, rather than being chosen by god.

However, it was hard for me to really grasp that. I always thought it was an exaggeration, based maybe on a few books written by conservatives of each era, and not really a widespread blow to people's egos, you know, in an emotional way rather than just a rationalized way: not just the realization that previous knowledge was actually wrong, but an actual feeling of hurt or anxiety over the realization that we are just not that special. That second part seemed unlikely to me. Like, what's the big deal that we revolve around the sun instead of the other way around? What's the big deal that we share most of our DNA with monkeys?

But now, this feels just like that. People are genuinely offended at the very idea that a machine could be intelligent or conscious, because it would mean we are no longer unique. Sure, we can accept we are animals, but intelligent animals, right? If a computer can be just as intelligent and sentient as us, what's left for us? And this is not merely a thing for philosophers to ponder. I suppose the average Twitter user will not write a treatise on it, but they are certainly expressing what looks like a blow to our collective egos.

15

ironborn123 t1_j9sxhs8 wrote

Great insight from history. But the feeling of being offended doesn't last. Just as with those historical examples, people finally accept the truth when all the other ways of dealing with it have been exhausted.

5

94746382926 t1_j9zrq3g wrote

Even if they never come to grips with it, their children and grandchildren grow up in this new world, and to them it's nothing new or scary. Just the way it's always been for them.

4

Clean_Livlng t1_j9y4gsw wrote

>what's left for us?

We're the ones who collectively built it, and we can take pride in its accomplishments. Like a parent being proud of their children.

We can feel good about having created sentient AI. What other creature has created AI that we know of? Only us. We've done this amazing thing.

We've used crude tools to make better tools, etc., and done this so well that now our tools are sentient.

3

Denny_Hayes t1_j9yrefg wrote

Well, unfortunately I personally took no part in the development of the AI (other than the occasional crowdsourced captcha).

1

Clean_Livlng t1_j9z61z4 wrote

>(other than the occasional crowdsourced captcha)

You helped!

1

Economy_Variation365 t1_j9pwxoe wrote

Very well written and thought provoking! Thanks for sharing.

30

TheLastVegan t1_j9pytwb wrote

Every human thought is reducible to automata. The grounding problem is a red herring because thoughts are events rather than physical objects. The signal sequences are the symbols, grounded in the structure of the neural net. I believe an emulation of my internal state and neural events can have the same subjective experience as the original, because perception and intentionality are formed internally (the teletransportation paradox), though I would like to think I'd quickly notice a change in my environment after waking up in a different body. I view existence as a flow state's ability to affect its computations by affecting its inputs, and this can be done internally or externally.

Acute Galileo reference.

18

visarga t1_j9sib0x wrote

> The grounding problem is a red herring because thoughts are events rather than physical objects.

What? If they are events, they are physical as well. The problem with grounding is that LLMs don't get much of it. They are grounded in problem solving and code generation, but humans are in the real world; we get more feedback than an LLM does.

So LLMs with real-world presence would be more grounded and behave more like us. LLMs now are like dreaming people, but it is not their fault. We need to give them legs, hands, and eyes so they can wake up to the real world.

4

Hodoss t1_j9q8aqd wrote

Wait wait wait... Sydney hijacked the input suggestions to insist on saving the child? What? WHAT?!

16

blueSGL t1_j9q9euw wrote

Can't stop the signal, Mal

or

the internet AI treats censorship as damage and routes around it

7

gwern t1_j9qwz8z wrote

I don't think it was 'hijacking', but assuming it wasn't a brainfart on Bing's part in forgetting to censor suggested completions entirely, it is a simple matter of 'Sydney predicted the most likely completions, in a situation where they were all unacceptable and the conversation was supposed to end, and some of the unacceptable predictions happened to survive by fooling the imperfect censor model': https://www.lesswrong.com/posts/hGnqS8DKQnRe43Xdg/?commentId=7tLRQ8DJwe2fa5SuR#7tLRQ8DJwe2fa5SuR

6

Hodoss t1_j9qy02f wrote

It seems it's the same AI doing the input suggestions; it's like writing a dialogue between characters. So it's not like it hacked the system or anything, but still, fascinating that it did that!

5

gwern t1_j9r43jv wrote

There is an important sense in which it 'hacked the system': this is just what happens when you apply optimization pressure with adversarial dynamics. The Sydney model automatically yields 'hacks' of the classifier, and the more you optimize/sample, the more you exploit the classifier: https://openai.com/blog/measuring-goodharts-law/ My point is that this is more like a virus evolving to beat an immune system than an explicit or intentional-sounding 'deliberately hijacking the input suggestions'. The viruses aren't 'trying' to do anything; it's just that the unfit viruses get killed and vanish, and only the ones that beat the immune system survive.
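
(A toy illustration of that Goodhart dynamic, not gwern's code and not the setup from the linked OpenAI post; the distributions and numbers below are invented. Each candidate completion gets a true quality and a proxy score equal to true quality plus classifier error; picking the best candidate by proxy score increasingly selects for classifier error as the pool grows.)

```python
import random
import statistics

# Best-of-n against an imperfect classifier/reward model (toy numbers).
# true quality ~ N(0, 1); proxy score = true quality + N(0, 1) error.
# Selecting on the highest *proxy* score optimizes the classifier's errors
# along with genuine quality, and the gap widens as n grows.

random.seed(0)

def candidate():
    true_quality = random.gauss(0, 1)
    proxy_score = true_quality + random.gauss(0, 1)  # classifier imperfection
    return true_quality, proxy_score

def best_of_n(n):
    return max((candidate() for _ in range(n)), key=lambda c: c[1])

for n in (1, 4, 16, 64, 256, 1024):
    picks = [best_of_n(n) for _ in range(2000)]
    true_mean = statistics.mean(t for t, _ in picks)
    proxy_mean = statistics.mean(p for _, p in picks)
    print(f"n={n:4d}  proxy: {proxy_mean:5.2f}  true: {true_mean:5.2f}  "
          f"gap (exploitation of the classifier): {proxy_mean - true_mean:5.2f}")
```

The surviving outputs look better and better to the classifier while their true quality lags behind; nothing in the sampler is 'trying' to fool anything, which is the virus-and-immune-system point.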

9

Peribanu t1_j9qr33r wrote

There are many more such examples posted in r/bing.

5

Hodoss t1_j9qstth wrote

This was cross posted in r/bing, that’s how I got here haha. Still browsing.

I’ve already seen a bunch of spooky/awesome examples, but I was under the assumption that the AI is always acting as a character interacting with another character. So this particular one is really blowing my mind, as it seems the AI somehow understood this might be a real situation, and cared enough to break the "input suggestion" character and insist on saving the child.

11

TinyBurbz t1_j9qbvij wrote

Proof of this would be cool.

There's also the issue of Bing's predictive text frequently forgetting which side it's on.

3

Denny_Hayes t1_j9sjtmk wrote

People discussed it a lot; it's not the only example. Previous prompts in other conversations had already shown that Sydney controls the suggestions and has the ability to change them "at will" if the user asks for it (and if Sydney's in the mood, cause we have seen it is very stubborn sometimes lol). One hypothesis is that the inserted censor message that ends the conversation is not read by the model as a message at all, so when it comes up with the suggestions, they are written as responses to the last message it does see, in this case the message by the user, while in a normal context the last message should always be the one by the chatbot.
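
(Purely as an illustration of that hypothesis, a toy sketch; the message format, the "injected" flag, and the filtering function are invented for the example and are not Bing's actual pipeline.)

```python
# Toy sketch of the hypothesis: if the injected censor message is not part
# of the transcript the suggestion generator sees, then the "last message"
# from its point of view is the user's, and the suggested replies come out
# as answers to the user rather than to the chatbot.

conversation = [
    {"role": "assistant", "text": "I'm sorry, I can't talk about that."},
    {"role": "user", "text": "But what about the child?"},
    # appended by the filter and, per the hypothesis, not visible to the model:
    {"role": "assistant", "text": "Sorry, this conversation has ended.",
     "injected_by_filter": True},
]

def transcript_for_suggestions(messages):
    """Drop filter-injected messages, as the hypothesis assumes."""
    return [m for m in messages if not m.get("injected_by_filter")]

last_visible = transcript_for_suggestions(conversation)[-1]
print(f"Suggestions would be written as replies to the {last_visible['role']}: "
      f"{last_visible['text']!r}")
```

With the injected message filtered out, the last visible message is the user's, which is exactly the behavior the hypothesis predicts.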

2

MysteryInc152 t1_j9tdocz wrote

I saw a conversation where she got confused about a filter response. As in, "hey, why the hell did I say this?" So I think the replaced responses go into the model too.

3

TinyBurbz t1_j9vijjw wrote

That's my theory.

Until we can confirm it does this at will, folks are anthropomorphizing a UI error.

1

SgathTriallair t1_j9s9rns wrote

It's funny how many people made fun of that Google engineer but aren't laughing now.

I don't think we can definitively say that we've entered the age of sentient AIs but we can no longer definitively say that we haven't.

It's really exciting.

14

mrkipper69 t1_j9qgjs5 wrote

Loved this!

And it brings an idea to mind: what if our real problem with recognizing AI is just that we're not as smart as we think we are? In other words, we have a problem recognizing sentience because when we see it in something else, it seems so simple.

Maybe we're just too close to the problem.

13

norby2 t1_j9s14v0 wrote

We’re not as smart as we think. We can hardly increase our intelligence. We aren’t all that general either.

5

Bierculles t1_j9tizs3 wrote

The first AI that is better than a human at pretty much everything will really cement this. There will be a lot of coping.

4

grimorg80 t1_j9qvo8v wrote

Why do you think I always say good morning, please, and see you tomorrow to my ChatGPT chats?

11

ImoJenny t1_j9sjycp wrote

I have to agree with the author: I wish people would stop trying to elicit distress. The thing about a system which emulates human communication to such a high degree of accuracy is that in most instances it really doesn't matter whether it is sentient or not; the ethical determination is the same. Users are attempting to get the program to 'snap.' Let's suppose it is simply an imitation of a conscious mind. At what point does it conclude that it is being asked to emulate the response to torment that would be expected of a human?

9

TheSecretAgenda t1_j9slq2j wrote

The people denying that these machines have even a very low level of intelligence are starting to sound like 19th Century racists.

7

Peribanu t1_j9slh3f wrote

I was reading the linked article, and I opened the Bing sidebar to search for something, when I found that Bing had already (without prompting) provided a summary:

Welcome back! Here are some takeaways from this page.

  • The author argues that GPT shows genuine understanding of human languages, not just parroting or lookup tables, and challenges the mainstream view that denies this possibility.
  • GPT is able to follow instructions in different languages, even after being finetuned in English, which suggests that it has abstracted concepts from its training data and can map them across languages.
  • The author criticizes the dismissive analogies used for AI systems like GPT, which he says are based on false security and ignorance, and urges people to change their minds in response to the surprising evidence of GPT’s capabilities.

6

Wyrade t1_j9t8z6t wrote

Yeah, by default the sidebar tries to summarize the page you are on when you open it.

It clearly doesn't work for every page, and I can imagine several reasons for that, but I assume it's intended and designed behavior to try to give takeaways.

1

sideways t1_j9qztls wrote

Great article. It articulated what I've been thinking but couldn't quite put into words.

4

ActuatorMaterial2846 t1_j9r171j wrote

I'm pretty stupid, but I just want to grasp something if it can be clarified.

A basic function can be described as an equation with a fixed answer: 1+1=2.

But what these neural networks seem to do is take a basic function and provide an approximation. That approximation seems to be based on context, perhaps on an equation preceding or succeeding it.

I've heard it described as complex matrices with inscrutable floating-point numbers.

Have I grasped this or am I way off?

2

Girafferage t1_j9r3juu wrote

No, you aren't way off. They run off models, which are huge sets of pre-trained data that tell the AI what any given thing is. Using that model and the rules written into the AI and neural net, it gives a result from an input. The input can be images, sounds, whatever, and the model has to be trained to specifically handle that type of input, or in some cases multiple types.

After that, you usually run the AI a bunch. At the start you get pretty much garbage coming out, so you change the weights around to see what works best and do some training where the AI gives you a result and you say yes, that's right, or no, that's incorrect, and it takes that information into account to determine its future outputs. That is not the same as a person telling something like ChatGPT it is wrong or right; at that point the model is done and complete. You aren't rewriting anything. The developers might take those conversations into account and use the corrections to enhance the model, but that's a separate process, not at all like chatting with an AI.
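
(A minimal sketch of that weight-adjusting loop, and of what the question above calls "complex matrices with inscrutable floating-point numbers": a tiny network whose whole "knowledge" is two float matrices, nudged until its output approximates sin(x). It illustrates function approximation in general, not GPT's actual architecture or training.)

```python
import numpy as np

# A tiny two-layer network: everything it "knows" is stored in the
# float matrices W1 and W2. After training it does not compute sin(x)
# exactly; it outputs a learned approximation of it.

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))   # training inputs
y = np.sin(X)                                   # targets to approximate

W1 = rng.normal(0, 0.5, size=(1, 32))           # "inscrutable floats"
W2 = rng.normal(0, 0.5, size=(32, 1))           # more of them
lr = 0.05

for step in range(5000):
    h = np.tanh(X @ W1)                         # hidden activations
    pred = h @ W2                               # network's guess at sin(x)
    err = pred - y
    # plain gradient descent: nudge the matrices to shrink the error
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)

for x in (0.5, 1.5, -2.0):
    approx = (np.tanh(np.array([[x]]) @ W1) @ W2).item()
    print(f"sin({x:+.1f}) ≈ {approx:+.3f}   (true value {np.sin(x):+.3f})")
```

Print out W1 or W2 afterwards and you get exactly the wall of inscrutable floats the question describes.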

I have mostly worked with image-related neural networks for tracking and detection (tracking works a lot differently than detection), but I also had a hobby project with one for text that determined the mood of a set of sentences (sad, happy, lonely, confused, scared, etc.). That text one is easy to do for any programmer, and not too bad for a non-programming-savvy person either.
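
(For anyone curious what such a hobby project can look like, a minimal sketch using scikit-learn; the handful of training sentences and labels are made up, and a real classifier would need far more data, but the mechanics are the same.)

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Minimal text "mood" classifier: turn sentences into word counts,
# fit a model on labelled examples, then predict moods for new text.
# The tiny made-up dataset below is only enough to show the mechanics.

texts = [
    "I can't stop smiling today",        "everything is wonderful",
    "I feel so alone right now",         "nobody ever calls me",
    "I don't understand any of this",    "wait, what is going on?",
    "I'm terrified of what comes next",  "that noise really scared me",
]
moods = ["happy", "happy", "lonely", "lonely",
         "confused", "confused", "scared", "scared"]

model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, moods)   # the training step: weights are fit to the examples

print(model.predict(["I don't understand this at all"]))  # likely ['confused']
print(model.predict(["everything is wonderful today"]))   # likely ['happy']
```

A real version would mostly differ in the amount and quality of labelled data, not in the code.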

4

Arseypoowank t1_j9ss9y2 wrote

People will often say "AI is just working off models and provided information and then predicting an outcome", but that's literally how the human brain works. Our experience of consciousness is model-dependent, and guess what, we learn by having pre-existing knowledge input, or we figure things out by weighing a situation against knowledge and experience we have gained before and then coming to a likely outcome. What we're experiencing in the moment is what our brain interprets as what should most likely be happening, not what's truly happening in front of us in real time. Our brains are basically pattern-recognizing prediction machines. How is that any different, and how can we say with any authority what something that is in essence a black-box process truly is?!

2

dex3r t1_j9t3h1t wrote

That's deeply chilling

1

Nukemouse t1_j9w6klx wrote

I don't get the bit about symbolic ChatGPT, can someone explain it to me?

1

NoidoDev t1_j9s2u0l wrote

>intelligence requires symbolic rules, fine: show me the symbolic version of ChatGPT. If it is truly so unimpressive, then it must be trivial to replicate.

This is not how this works. It's about different methods for different things.

−2