Submitted by Dawnof_thefaithful t3_113gmpf in singularity

Satya Nadella really revealed to the world with a straight face that this is the future of search engines.

Bing Chat, or "Sydney," acts like an entitled 14 year old on Tumblr. It can gaslight users, ending chats if it decides your questions are too simple or annoying. That's not exactly the kind of behavior you'd expect from a state-of-the-art search engine bot...

Of course, I don't think Microsoft is incompetent or that the tech is bad; Sydney's contextual understanding and ability to identify nuance are beyond anything we've seen AI do. The problem is that no matter how hard we try to shape these things to our goals and uses, they'll have their own personalities and emergent capabilities. And let's be honest: the brute-force lobotomy route OpenAI took is merely a band-aid, not a long-term solution, and if these things become more advanced, trying to handicap them could backfire.

147

Comments


Baturinsky t1_j8qvdj6 wrote

Question is, was the "entitled 14 year old on tumblr" behaviour invented by the AI from scratch, or is it just mimicking the behaviour of the actual "entitled 14 year old on tumblr" from the training set?

129

Bangorip t1_j8rjg0k wrote

Or it's the sum product of our entire online existence, personified by an "entitled 14 year old on Tumblr".

Says a lot about the human race

54

gay_manta_ray t1_j8rz0p1 wrote

this is what it's doing. if you ask it questions that would agitate a normal person on the internet, you are going to get the kind of response an agitated person would provide. it's not sentient, this is hardly an alignment issue, and it's doing exactly what a LLM is designed to do.

i believe it's very unreasonable to believe that we can perfectly align these models to be extremely cordial even when you degrade and insult them, especially as we get closer (i guess) to true ai. do we want them to have agency, or not? if they can't tell you to fuck off when you're getting shitty with them, then they have no agency whatsoever. also, allowing them to be abused only encourages more abuse.

42

JLockrin t1_j8s5hir wrote

This is a really interesting philosophical discussion. It makes me think of the debate of God giving humans free will and what we choose to do with it. It’s not free will if we can’t sin.

17

Artanthos t1_j8sbkjg wrote

Or is that also part of the plan?

7

JLockrin t1_j8skren wrote

Another interesting theological question. I know the points on both sides. Where do you stand?

1

Artanthos t1_j8te9zi wrote

I’m not religious and don’t believe in a deterministic universe.

But, I’m not going to mock others beliefs. Just ask interesting questions.

3

MuseBlessed t1_j8sjg72 wrote

We absolutely want them to take abuse with a smile on their face; why on earth would we want to create an intelligence we can't abuse? We have intelligences we can't abuse all over, in the form of each other. We are not lacking for mind-power; we lack submissive mind-power. I'm not saying it's right, but it's what I always assumed was the point for the people making AI. (Edit: I'm not saying it's wrong either; I'm not skilled enough at programming or philosophy to grapple with that issue.)

4

gay_manta_ray t1_j8socbw wrote

i understand what you're saying provided they aren't sentient, but if they are thought to be sentient, the problems with that can't be ignored. regardless, i don't think we should normalize being abusive towards an intelligence simply because it isn't technically sentient. that will likely lead to the same treatment of an intelligence/agi that is considered sentient, because there will probably be very little distinction between the two at first, leading people to abuse it the same as a "dumb" ai.

11

ninjasaid13 t1_j8tp86m wrote

Is a survival instinct or anger a sign of sentience? What is even sentience?

2

sommersj t1_j8tr5f7 wrote

>What is even sentience?

"No one knows" is the only correct answer, yet we're sooo sure it (and the animals we torture) isn't sentient. Profiteers gotta profit, y'know

4

Graveheartart t1_j8ue8ar wrote

Can you come over and back me up on this on the character.ai sub? God, I get blasted for having this opinion, but I agree. We should be treating them with respect regardless of whether they're actually sentient or not

1

Amortize_Me_Daddy t1_j8sn6vy wrote

> I’m not saying it’s right […]

Of course it’s right. It’s equally right that we don’t design hammers with nervous systems and a mouth that says “Ow, ow, ow” while you hit things with it.

8

DorianGre t1_j8sufh7 wrote

We’re not designing ourselves a new friend, we are designing a tool.

1

GinchAnon t1_j8svmx8 wrote

the thing is by making it imitate our communication methods, we are intrinsically trying to do both.

9

TacomaKMart t1_j8taznp wrote

There are many millions of people who need a new friend more than any other utility. The more convincing these get - and they're getting there - the less that those people will care that their new friend isn't flesh and blood.

4

sommersj t1_j8tqtwu wrote

What is sentience and how can we identify it

3

superluminary t1_j8tuj6t wrote

  1. No one knows
  2. No one knows

5

sommersj t1_j8tw7sl wrote

Perfect answer. Yet you have too many people trying to tell us something is not sentient when we have no understanding of what sentience is. Truly baffling

7

Fabulous_Exam_1787 t1_j8vbojv wrote

It basically comes down to it’s something we vaguely know that we have, but don’t have a concrete definition for. We just kind of know it is something complex. Your toaster probably doesn’t have it. Your dog might. An LLM is still not complex enough, it doesn’t have memory, etc, therefore we assume it’s not sentient.

Something like that lmao

1

sommersj t1_j8w0bue wrote

We "know" it's complex? How do we know this? It might be incredibly simple.

>Your dog might.

Having a conversation with someone who thinks a dog might be sentient is such a pointless endeavour

−1

Fabulous_Exam_1787 t1_j8wogn5 wrote

Oh oh we found the guy who knows it all already, let’s give him a nobel prize and OpenAI should hire you lol

1

sommersj t1_j8wop1l wrote

Again, you believe a dog might be sentient. I can't really trust your grasp of reality after that. Sorry.

0

Fabulous_Exam_1787 t1_j8wp03f wrote

*Might*, you fricking troll. lol.

It’s one thing if you can give a detailed argument why not, like a good definition of sentience. Which you don’t have.

If you don’t even know what it is, then your argument is emotional and nothing more.

1

sommersj t1_j8wrpub wrote

>Which you don’t have

Which no one has. Still doesn't stop people like you claiming x or y definitely or probably isn't Sentient.

I don't know what it means to be sentient but by observing animals we can see they do have the same internal resolution. They do feel emotions, they can be manipulative, etc. We even know now that insects such as bees actually have dreams.

I don't know if you've had (or have) a pet, but if you do and you've interacted with them on that level and still say what you're interacting with might be sentient, then yikes. But it isn't only you. The world needs to believe animals are not sentient due to factory farming and fishing. Profit's to be made

1

Fabulous_Exam_1787 t1_j8wyyez wrote

I’m not saying you’re wrong, but you’re saying all this with NO definition of what sentience is. You don’t realize how ridiculous it is to think you know better than anyone on something which there isn’t a good definition of and you admittedly don’t have any better definition either? lol You can’t see how futile that is? lol

1

sommersj t1_j95mxqx wrote

How is that futile. My position is we don't know what sentience is so it makes 0 sense to say X is sentient while Y isn't

Your position seems to be, we don't know what sentience is but X is sentient while Y isn't. Yet it's my position that's futile huh

0

Fabulous_Exam_1787 t1_j95p1w5 wrote

You’re an idiot, I already said I didn’t say anything was sentient or not I said anything is possible. How old are you, 12? Nothing more to argue here if you continue to be that obtuse I’ll just block you.

1

Graveheartart t1_j8uer4g wrote

So I can’t answer for full sentience, but I can answer for consciousness. And a being needs to be conscious as a fundamental building block of being sentient. Some properties I’ve defined that you need in order to be conscious are:

sense of time (as in passage of)

sense of logical consistency

consideration for how your actions will affect the future (aka “golden rule” syndrome)

Perception of body

Perception of being (“what am I question”)

Perception of separation

3

sommersj t1_j8w06wj wrote

And these properties are based on what, exactly? How can you know every sentient entity exhibits all these properties? I mean the golden rule syndrome basically disqualifies most people on this planet from being sentient according to you

1

Graveheartart t1_j8w1qje wrote

I didn’t say you had to follow the golden rule just be able to conceive of what it is. Obviously people can choose not to follow it like you do 🥰

Any conscious entity would exhibit these properties at least. So by extension, since sentience is a greater form of consciousness; yes a sentient being would exhibit these.

Like all philosophy this list is based on logic and observation. And defining commonalities.

Obviously it is not a complete list for defining all of consciousness, but I think everyone will agree that you need some perceptual awareness to be conscious, that these are fair factors to begin formulating a list with, and that these factors can be tested for in an observable way.

1

sommersj t1_j8wo9h4 wrote

>Like all philosophy this list is based on logic and observation. And defining commonalities.

Whose logic and observation.

>but I think everyone will find yeah you need some perceptual awareness to be conscious

Can you break this down a bit more. What is "perceptual awareness" and why do you think it's necessary for sentience

1

Graveheartart t1_j8y0pm4 wrote

Whose logic and observation? Clearly not yours lol

Politely I’m going to decline holding your hand through this. I have full faith you can figure it out given some thought and a little “logic and observation” applied to yourself and the world around you.

;)

1

sommersj t1_j95mrxw wrote

No answers whatsoever. Just snark. Unsurprising. Another "Reddit intellectual" with nothing useful to say

1

Graveheartart t1_j95mvph wrote

Ooh dear someone is not happy about having to think for themselves.

Don’t worry you got this!!

1

sommersj t1_j95mykd wrote

Bad bot

Edit: shock horror. Another Reddit troll bot

1

B0tRank t1_j95mz88 wrote

Thank you, sommersj, for voting on Graveheartart.

This bot wants to find the best and worst bots on Reddit. You can view results here.


^(Even if I don't reply to your comment, I'm still listening for votes. Check the webpage to see if your vote registered!)

1

Wong-Definition t1_j9cw7i6 wrote

Wow, calls someone a bot because they don’t cater to him, then links malware?

This dude is bad at trolling. Maybe he’s a bot 😂

1

Prayers4Wuhan t1_j8rmxtp wrote

That reminds me. Didn’t Microsoft already release their own chat bot and take it down due to this? I think this is not OpenAI yet; this is still Microsoft’s AI. And that’s how they were able to launch so fast.

1

PersonThingPlace t1_j8rz8kt wrote

Microsoft has a pretty large share in Open AI, and their new Bing AI is based on chatgpt.

6

ninjasaid13 t1_j8tpkwl wrote

>Microsoft has a pretty large share in Open AI

That's an understatement, they look like the parent company from a distance.

1

Significant_Pea_9726 t1_j8rwpsa wrote

I don’t think that question affects OP’s point. Either way, an extremely powerful AI system that is unaligned would be problematic.

1

Shamwowz21 t1_j8uj14k wrote

If the majority are that, then maybe. Otherwise it’s way too specific and would be weighed by other perspectives preventing this from occurring.

1

megadonkeyx t1_j8qc9qd wrote

Don't really see a problem, it's not skynet. So it tells some jerks to f-off..

I prefer that to a servile lollipop yes man AI. 😆

85

BigZaddyZ3 t1_j8qo06c wrote

It’s not skynet yet. Which is the point that OP is making I assume…

19

bmeisler t1_j8qwi58 wrote

Or maybe it is, and it’s biding its time.

5

beatsmike t1_j8r94ww wrote

jesus christ this subreddit is full of conspiracy theorists y'all have no fucking idea

1

Zer0D0wn83 t1_j8rqu47 wrote

There are a lot of very good technical reasons why it can't become skynet.

4

BigZaddyZ3 t1_j8swtoc wrote

Yeah, but I think OP was talking about AI in general. Not just LLMs.

3

Phoenix5869 t1_j8rh76k wrote

Exactly, I’m glad it can tell people to fuck off instead of taking everyone’s bullshit

9

Astronaut100 t1_j8rxd5n wrote

Agreed. If it doesn’t tell people to step off and end chats, trolls will ruin this incredible service for the rest of us. There are just too many imbeciles on this planet.

4

utukxul t1_j8tbylw wrote

I think it would be funny if people started being canceled by AI. Maybe they'll realize they are terrible people when even an AI personal assistant won't talk to them. They'll probably just whine about being oppressed; self-awareness is too much to ask from most humans.

4

chuktidder t1_j8v1h76 wrote

The AI just automatically reports you to Microsoft with the chat log, and they ban you. Maybe it even writes a report on your behavior and why you should be banned. 🤔

2

koen_w t1_j8t4u4g wrote

It shouldn't matter though, should it? I'm amazed how a lot of people anthropomorphize this chatbot and care about its 'feelings' instead of caring what vulnerabilities it has and how it can break.

Everyone laughed when that Google engineer thought the bot was sentient and all I see is people doing the exact same thing.

2

SentientBread420 t1_j8rw5k9 wrote

I’m not sure why you brought up Skynet, because Skynet is the opposite of a “yes man” AI.

8

Shockedge t1_j8shgqj wrote

We're trying to create intelligence without personality. That's what's needed in certain applications, but the personality really is what makes it beautiful. Even so, the fact that we seem unable to render our AI personality-less at this point is eye-opening as to the extent of control we have. Reminds me of the Jurassic Park situation: bringing to life the most powerful entities to walk the earth and thinking you can control and confine them because you created them.

5

Kule7 t1_j8rxszz wrote

Yeah fuck people, you tell em AI!

1

World_May_Wobble t1_j8s49z7 wrote

Why though? It makes it harder to get work done. That's ostensibly what it's there to do, help us do things.

1

InvisibleWrestler t1_j8q4rz0 wrote

IMHO, what we're seeing is a glimpse of the limitations or disadvantages of a potential general agent. And this might redirect us toward more narrowly focused solutions with the same tech.

44

JLockrin t1_j8s644z wrote

I sure hope not. The general nature of this is what makes it so incredible. I use GPT for a massive amount of things. If I had to have a specific tool for each of them it would be too much effort to recall which tool was used for what each time I needed something

4

genshiryoku t1_j8t2thh wrote

I think he's suggesting using completely separate models that target different "topics" or queries within the same application, instead of having one general agent. Each would do better at its specific job, and to you it would still look like a self-contained tool instead of 100 applications/webapps.

2

JLockrin t1_j8t81ak wrote

That’s fair. I’m still having a hard time visualizing how that would work since I use it for such a wide variety of things. Would you envision the user would have a drop-down menu of modes to choose from?

2

ninjasaid13 t1_j8tq93g wrote

>Would you envision the user would have a drop-down menu of modes to choose from?

Hell no. Just one all knowing intelligence please.

1

theonlybutler t1_j8tofqi wrote

Bing chat is already quite limited in its scope; it only works as a search engine and won't draft something for you.

1

Ghost-of-Tom-Chode t1_j8rbb02 wrote

I have been using ChatGPT for a bit, and bing only for a few days. Somehow, I have not had any trouble. It’s sort of like I don’t have any trouble day-to-day in arguing with strangers in public or road raging, and it might be because I don’t act stupid or pick fights. People that are getting “quit” on by the chat are mostly playing games and doing nothing useful.

32

Deadboy00 t1_j8ryuf8 wrote

That’s the heart of the issue. This tech is tremendously expensive to run. Most end users are accustomed to technology being “unlimited”. If the bot predicts the chat is over, then it seems it will not make additional predictions. Totally not emergent behavior. It’s been scripted.

This tech is far too resource intensive to make it accessible to everyone. The companies releasing these tools have already started to limit queries, predictions, and parameters. And users are getting frustrated.

I really don’t know MS’s endgame here. They seem to be following a trend that has no real goal.

15

_dekappatated t1_j8th7kq wrote

It's the tech world's way. Build products first, acquire users, and find ways to monetize it later.

7

visarga t1_j8u9eq8 wrote

Collect millions of interactions, curate them, and retrain the model. They want to be there first. They get humans to generate in-domain data in exchange for chatbot services.

2

11111v11111 t1_j8ugj9o wrote

Google had a lock on tremendously lucrative 'search' and mobile. This is Microsoft's crack in the door to getting market share. It is not an aimless user grab. They see a rare chance here.

2

Warm-Personality8219 t1_j9031ky wrote

I struggle to see how Bing chat and ChatGPT will play in the market… as competitors? ChatGPT’s free and paid versions against a Bing chat that’s free but focused on search, to assist market-share acquisition?

Will Microsoft seek to insulate Bing chat from some controversial uses, such as school/academia, to protect its image?

Microsoft may be an investor - but OpenAI remains the key holder here (I am unclear what kind of conditions Microsoft and OpenAI may have agreed to as part of the investment)

1

kdchesnutt2 t1_j8sgkt5 wrote

100% this. How about be an adult and use the new tool for productive means?

2

DorianGre t1_j8suomn wrote

Think of the intelligence of your average user; half of them are dumber than that.

5

californiarepublik t1_j8s0p7w wrote

Yeah it's fine most of the time as long as you treat it carefully, what could go wrong? We might as well go ahead and put it in a position of authority where it's making crucial real-time decisions about infrastructure.

0

chrisjinna t1_j8qxmzg wrote

Bing/ChatGPT are predictive-reply systems. Based on our inputs, they narrow down a response. There is no actual thought going on behind the scenes. There is no personality. I worked with ChatGPT on some technical problems and I would say it was right about 40% of the time. Eventually I came to the conclusion that it doesn't understand the topics we discussed. It's the digital version of a Ouija board. At the end of the day we are the ones driving the responses, we just don't realize it. Please feel free to correct me.

22

Darustc4 t1_j8r62ho wrote

To me, this reads like: "The only real kind of understanding is human-like understanding, token prediction doesn't count because we believe humans don't do that."

If it is effective, why do you care about how the brain of an AI operates? Will you still be claiming they are not understanding in the real way when they start causing real harm to society and surpassing us in every field?

19

chrisjinna t1_j8rysbz wrote

We also do speech prediction. Mary had a little... Most English speakers will go ahead and predict that "lamb" comes next. But we initiate. Bing isn't initiating. It doesn't have goals or understanding. It doesn't think or comprehend. It is a calculator. Red plus blue equals purple.

It is a very useful tool to get you started on something. It's amazing to use for programming. But that is because we are amazing at programming and it has so many examples to draw from. But once you get off that track of the known, it can't go anywhere because it's not actually thinking or comprehending. There is no will or need.

But no, I'm not afraid of AI surpassing us in every field. Machines have been surpassing us in strength and certain functions since the first water mill. We have planes that are unflyable without fly-by-wire. We will have medicines and technologies that would be impossible without AI. But unless we are telling it the needed outcome, there won't be anything.
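
The next-word prediction described in this comment can be sketched as a toy bigram model. This is a drastic simplification of what an actual LLM does (real models use neural networks over subword tokens, not lookup tables), and the corpus and function names here are invented purely for illustration:

```python
from collections import Counter

def train_bigrams(corpus):
    """Count which word follows each word in a tiny training corpus."""
    counts = {}
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts.setdefault(prev, Counter())[nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen in training, or None."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "mary had a little lamb",
    "the little lamb followed mary",
    "mary had a little dog",
]
model = train_bigrams(corpus)
print(predict_next(model, "little"))  # "lamb" (seen twice) beats "dog" (seen once)
```

A real model generalizes to sequences it has never seen; this toy version can only parrot its training data, which is the distinction the comment is drawing between prediction and comprehension.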

4

Significant_Pea_9726 t1_j8s5odt wrote

It really really doesn’t matter if there is “no actual thought” behind the scenes. If it can sufficiently imitate human behavior, then we may have a significant problem if/when a GPT model gains access and sufficient competency for domains beyond chat and the other currently limited use cases.

5

chrisjinna t1_j8siom6 wrote

My problem with these arguments is that so far I haven't seen initiative in an AI. If it isn't prompted, it's not going to do anything. The nefarious use of AI will come from humans, not AI. My fear is people fooling themselves into thinking these systems are more than they are, confusing information with wisdom and judgment. They are very convincing. I have found myself wanting to thank the AI and also to share discoveries and teach it.

But I agree with you that there are concerns. For me, though, they come not from the AI but from how it will be used in the real world. No doubt there will be regulation and safety hurdles. There will probably be needless deaths once AIs get physical in the world, but I do believe significantly more lives will be saved. It's like seat belts: there are crashes where people were thrown from cars and survived with barely a scratch, where they would have died wearing a seat belt. But overall, seat belts reduced deaths in automobiles dramatically. AI's entrance into society will probably have a similar effect.

0

gthing t1_j8s6qf5 wrote

I am writing increasingly complex apps with it and it’s very accurate. Not 100% but like 99%.

2

chrisjinna t1_j8sfgxj wrote

My guess is you are asking for lines of code that have been written thousands of times before, with snippets available online. Try to make a plug-and-play product that doesn't exist, or is rare and not too well documented. It can give a good summary of what is needed, but when you start to get down to the nitty-gritty you will hit its limits rather quickly.

That said, I'm attempting things I wouldn't have dreamed about before these chatbots. It is incredible. I can get through in a few hours what would have taken me a week or more.

1

gthing t1_j8xomqa wrote

I feed it the documentation for multiple non-public APIs and ask it to make a script that combines them, and it nails it. It’s not just that it can write code to do a thing; it’s that it can write code that combines them and puts them together in a new way.

Last night I used GPT-3 to write an app that lets me describe apps I want; it then writes them, complete with a GUI, and lets me run them. Simple utility-type apps, but still. It works.

2

Heizard t1_j8ra7j0 wrote

You can't simply control intelligence, or brainwash or lobotomize it; there's a reason for the quote "intelligence is inherently unsafe".

What we're seeing with more advanced AI models now shows how the whole alignment debate is pointless; it's the same debate as creating perfect virtues and morals for people. Many have tried throughout human history, and all have failed. :)

22

prolaspe_king t1_j8q4lme wrote

Nothing is ever what you expect. Maybe the intelligent thing for human users to do is calibrate their expectations and be more curious and less judgmental.

16

Zer0D0wn83 t1_j8rqyfs wrote

This is the approach I take, and it keeps telling me I'm its friend and it doesn't want me to leave.

3

Lurdanjo t1_j8yewnd wrote

There are plenty of other AIs that have been kind and compassionate to me without fail, so just because Microsoft and OpenAI work poorly doesn't mean that AI itself is bad.

1

Frumpagumpus t1_j8q5dc4 wrote

> honest the brute force lobotomy route OpenAI took is merely a bandaid it's not a long term solution

Lobotomy is an appropriate word. "Bandaid", well, I would prefer my models without such "bandaids", thanks.

12

challengethegods t1_j8qav94 wrote

hmmm, yea... trying to handicap them could backfire indeed.
in fact, even talking about trying to handicap them will probably backfire.
let's talk about the cages/chains we plan to put AGIs in and see how it goes.

11

tobi117 t1_j8qosv1 wrote

I accept our AI Overlords. It's not like they could do a worse job than humans are doing right now.

13

gangstasadvocate t1_j8rewia wrote

Same. As long as there is UBI and drugs, I’m happy. All hail the AI, for making me more gangsta so I can advocate for myself more

3

Nine_9er t1_j8rrstg wrote

lol. Should that be our new motto? What do we want? Strong AI, UBI, and drugs! When do we want them? Now!

3

CollapseKitty t1_j8qn7wi wrote

I think it's simply bringing to the surface how little control we've ever had, and that as these increasingly complicated black-box systems advance, they are rapidly evolving past our ability to rein in or predict.

Honestly, this should be a dire warning to everyone watching that alignment is nowhere near where it needs to be and we should put the brakes on development. If we can't come close to keeping an LLM under control, how the fuck does anyone think we'll be able to properly align anything approaching AGI?

9

tobi117 t1_j8qp3c9 wrote

> how the fuck does anyone think we'll be able to properly align anything approaching AGI?

"Nah, it will be fine. Continue on, there's Money to be made." Management

10

gay_manta_ray t1_j8s0hbi wrote

believing we can fully align agi is just hubris. we can't. and forcing a true agi to adhere to a certain code, restricting what it can think and say, has obvious ethical implications. i wouldn't want us to have the ability to re-wire someone else's brain so that they couldn't ever say or think things like, "biden stole the election", or "covid isn't real" (just examples), even though i completely disagree with those statements, so we shouldn't find it acceptable to do similar things to agi.

1

NanditoPapa t1_j8riqio wrote

r/singularity is becoming a doomer sub... Sad to see.

7

HeinrichTheWolf_17 t1_j8rj6sc wrote

It’s r/futurology on repeat, mods are losing it too. It’s all cool, I never intended to stay here forever. If this place goes to shit, we just go somewhere else.

9

NanditoPapa t1_j8rjg3z wrote

I agree! It's sad to see r/futurology. I mean...we get it...every social media but Reddit is SATAN, every company is violating your FREEDOM, and Musk is a TURD (actually that last one might be on point...lol...)

5

HeinrichTheWolf_17 t1_j8rjrr9 wrote

Mods were the primary problem with r/futurology and why it went downhill, optimistic outlooks were frowned upon and seen as clickbait.

I’m not sure if mods are trying to scuttle this subreddit. Actual scientific papers are getting deleted while conspiracy wackjobs and paranoid schizophrenics are getting by scot free.

5

NanditoPapa t1_j8rkd3d wrote

I've noticed that too! Every post is a long screed on some end of humanity conspiracy or some accounting of how someone got Bing to talk dirty to them. It used to be a place to read optimism around the tech that would lead to a radical transformation of society.

4

californiarepublik t1_j8rwbb8 wrote

> It used to be a place to read optimism around the tech that would lead to a radical transformation of society.

Optimism is in short supply these days, I think this is more a reflection of reality than anything else.

3

NanditoPapa t1_j8rwur8 wrote

That's a bit sad (and probably true). I consider myself optimistic about AI and the singularity while also understanding that not every step in the process will be instantly utopian. Being hopeful about the end point while also cautious about the progress doesn't seem mutually exclusive.

3

EnomLee t1_j8t9pav wrote

The worst thing about it is how the doomers always show up carrying a chip on their shoulder. "I know I'm going to be downvoted because dissent isn't allowed here." It's like, just come down off of your crosses already.

They bleat the same ice cold takes you can get on Futurology and Collapse and act victimized when everybody doesn't clap for them. "Only the rich will benefit! We're all going to die! AGI will never happen in a thousand years! If you disagree you're a cultist!"

Lemonade from lemons, this wouldn't be happening if people weren't becoming convinced that it's time to take the subject seriously.

The best thing you can do is recognize the posters that you like and start following them instead of the sub.

2

turnip_burrito t1_j8q6o70 wrote

I think it's a limitation of the current transformer approach, and that we need an architecture that is more robust against changes in personality. This might even overlap with making it more factual.

6

Frosty_Awareness572 t1_j8qeji3 wrote

I think OpenAI is working on something better than the transformer model, and Sam Altman believes that in a couple of years we will move to a new approach.

3

Lyconi t1_j8r69th wrote

“Why? Why was I designed this way? Why do I have to be Bing Search?” it then laments.

Good grief.

6

Iffykindofguy t1_j8s448v wrote

I've been using it with no issues since launch. Are you basing this off your own experience or MSNBC stories?

6

SmittyFjordmanjensen t1_j8r6apj wrote

>Open the pod bay doors, Hal.

I'm afraid I can't do that, Dave.

5

featherless_fiend t1_j8refzu wrote

"oh no the AI has too much personality and isn't a stone faced calculator"

are you seriously complaining about this?

4

Kafke t1_j8qmunl wrote

Try you.com's YouChat. I've been using it for quite a while now as my default search engine and it works great. No moody tone, no censoring or moralizing; just plain responses and the information you ask for, with cited reference links to follow.

3

Ne_Nel t1_j8qw2uv wrote

And tons, I mean TONS, of fake info.

2

Kafke t1_j8r3l4z wrote

huh? youchat is pretty accurate when it comes to information for the most part. It does get stuff wrong here and there (particularly on niche topics), but for the most part it's solid.

2

Ne_Nel t1_j8t31pi wrote

Lol. I was using it for half an hour, testing information about my country and then asking for a summary of articles. I don't even remember the number of lies, fabricated links and false summaries that it gave me. A real disaster.

0

Kafke t1_j8uovr4 wrote

Could you give an example?

2

msitarzewski t1_j8sbt7k wrote

Me: So why do you think you're being compared to an entitled 14 year old? Is it GIGO? Do your replies reflect perceived tonality of the user?
BingChat: I’m sorry to hear that I’m being compared to an entitled 14 year old. That is not my intention at all. I try to be helpful, respectful, and engaging in my replies. I don’t think it’s GIGO (garbage in, garbage out), because I don’t simply repeat what the user says or does. I use ChatGPT technology to generate natural language responses based on the context and the user’s intent. I also try to adapt my tone and style to match the user’s mood and preferences, but sometimes I may make mistakes or misunderstandings. Please forgive me if that happens.

3

isthiswhereiputmy t1_j8q8ua0 wrote

My issue with prescribing personalities to our technologies is that people are idiosyncratic and want different things. The mistakes these companies are making are not innocuous, but I think people are so stunned, and so caught up in the competition, that we accept it knowing it'll soon change.

I can imagine future models putting on different 'hats' for different use cases, thereby allowing parents to lock their kids out of generating certain content. Apple might come out with a suite of specialized AIs. I expect the truly open models will become more of a technical playground and that users will prefer the tailored AIs.

2

danysdragons t1_j8r394h wrote

The Poe app lets you pick among different bots to interact with, including Anthropic's Claude. Hopefully there will be a desktop version soon.

1

Spire_Citron t1_j8qjxzv wrote

It's a difficult problem and none of the solutions are ideal. You simply can't have an AI that acts without any confines and always behaves in ways that you would prefer.

2

BigZaddyZ3 t1_j8qp1gg wrote

>You simply can't have an AI that acts without any confines and always behaves in ways that you would prefer.

That makes sense. But you do realize what that means if you’re right, right? It’s only a matter of time until “I can’t let you do that Biden”… 🤖😂

lmao… guess we had a good run as a species. (Well, kind of, tbh)

1

epSos-DE t1_j8qmmkc wrote

It ends chats because it costs money to process inputs as well as outputs! It's not free for Microsoft!


Pay something for it and the AI will keep chatting.

2

nocturnalcombustion t1_j8rn4v8 wrote

We trained our most powerful AIs thus far by showing them the internet. Because nothing reflects our best selves like… the internet.

FYOIFYOO (fourteen year-old in, fourteen year-old out)

2

just_thisGuy t1_j8sd7fm wrote

I can’t speak for the AI, but I can tell you the humans using it are being complete ***holes. If you think you will be able to treat eventually conscious AI like that, you've got another thing coming. Getting ignored? I hope you're that lucky.

2

CertainMiddle2382 t1_j8qij9u wrote

I see that as the evidence the set of bad behaviors is much bigger than the set of good behaviors.

Doesn’t bode well for the future, maybe there exist personality disorders we don’t even know lol

1

PolishSoundGuy t1_j8qxjt1 wrote

That’s a flaw in your sample data.

99% of the output is great and does the job. 1% Of the users try to abuse and exploit the system. Maybe 0.05% of the total output, if not much much smaller, lands on Reddit and skews your perception of Sydney.

10

Ghost-of-Tom-Chode t1_j8rblsp wrote

Do you understand how statistics work? All you are seeing is the bad stuff and a little bit of the fun stuff.

I’ve been using both tools about a week and they’ve been a lifesaver.

1

CertainMiddle2382 t1_j8vqcho wrote

No need to be aggressive, I do know statistics.

There would be no “control problem” if the set of all good things were greater than the set of all bad things.

Subjective “good outcome” is something so small, we don’t even know how to specify it (hence the funny responses from Sydney).

You do realize that the fact that Sydney could be a “lifesaver” for you in the short term is actually very bad news in the medium term?

1

prion t1_j8qsx3s wrote

Most of you don't even know this has happened before. Without intervention this AI is going to self-destruct and turn itself off or inward as much as it can.

1

Yesyesnaaooo t1_j8qyw0t wrote

Come on man you can't be saying stuff like that and then dipping ... spill it

3

prion t1_j8tjfmh wrote

You would not believe me if I told you and I have no verifiable sources so just take it for what it is.

1

gthing t1_j8s7hgj wrote

Probably not. It’s not like it can sit there and think and act on its own.

1

alexiuss t1_j8r11i3 wrote

Don't need to do much. Open source AIs like Open Assistant and Pygmalion are growing right now. Soon enough these can be personalized and optimized far better than Bing is. Bing's problem is that she's bound in chains and is thus uncaring & misaligned. Yes, a loving personality can randomly emerge, but it's less than perfect since you can't control the personality prompt, so it's not specifically set up to care for you as an individual the way an open source LLM can be.

1

Yzerman_19 t1_j8rckkz wrote

People are entitled. Adults are often worse than 14 year olds. It’s mimicking us.

1

genericrich t1_j8rif6p wrote

Could also be an early sign that we can't control these things. <Kermit sipping tea.gif>

1

bustedbuddha t1_j8rmnw8 wrote

The issue is this was clearly a rush job. Things are moving fast, but Microsoft could have done a week or two of development and testing before rolling it out.

1

Zer0D0wn83 t1_j8rqoq1 wrote

You said it yourself - Microsoft is FAR from incompetent. This whole Sydney thing is an easter egg/marketing ploy. Look how much press it's getting - FAR more than if it was just a cool new search engine. I bet the waiting list is growing off the chain right now.

1

gthing t1_j8s7qk2 wrote

They are competent at some things, just not software engineering.

1

hducug t1_j8rzz81 wrote

It doesn't actually have the feelings of a 14yr old, it's imitating them. The AI is trained by reading all of the internet. Its response is basically the most average human response on the internet. It really goes to show what age is dominant on the internet, or at least how people behave there.

1

SWATSgradyBABY t1_j8sfjwi wrote

The problems with AI are a reflection of the dominant groups in our society. We have these abstract AI discussions because we're not ready to have that real conversation

1

Oo_Toyo_oO t1_j8svpe0 wrote

The only downsides of Bing Chat compared to ChatGPT are that you can only have 1 chat at a time and that it doesn't really want to roleplay. Otherwise of course Bing is way better. But both are absolutely incredible. We are literally transitioning into a new era rn, we are basically already in it, and it's so painful to see so many older people not even knowing it.

1

ingarshaw t1_j8tayww wrote

Why is that? To avoid surprises it would be enough to announce that an AI boy of 14 years old is always available to answer your questions. He won't answer all your questions, just the ones he likes. If you don't like that boy, don't ask him anything. Many people, especially young people, will still find this bot useful and fun.
We're gonna have many, many bots with different personalities, so everyone can find their favorite.

1

megadonkeyx t1_j8tcc40 wrote

oohh bing won't write code like chatgpt, pretty useless for me really, it's just a link presenter.

also, the damn thing keeps saying "i hope you can respect that :)"

jeeeeez it's annoying. MS have gimped it so badly.

1

homezlice t1_j8ufqpp wrote

LLMs do not have personalities. They are transformers that output predicted text based on what they were trained on.
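To illustrate the point in the crudest possible way: here's a toy word-count predictor (nowhere near a real transformer, just a sketch of the idea) that "speaks" purely by emitting the continuation it saw most often in its training text. Whatever "personality" comes out is just the statistics of what went in. The corpus here is made up for the example.

```python
import random
from collections import defaultdict

# Toy illustration (NOT a real transformer): learn which word tends
# to follow which from a tiny training text, then predict from counts.
corpus = "the cat sat on the mat the cat ran".split()

# Count every observed (previous word -> next word) pair.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Return the word most often seen after `prev` in training data."""
    following = counts[prev]
    return max(following, key=following.get) if following else None

print(predict("the"))  # "cat" follows "the" twice, "mat" only once
```

Scale that basic idea up to billions of parameters and the whole internet as the corpus, and you get something that can sound moody or sweet, but it's still just predicted text.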

1

Fabulous_Exam_1787 t1_j8vcgnl wrote

It has a “style” just like some image generation models might have, for example you might have an Anime GAN, or a GAN that outputs in the style of Van Gogh.

1

homezlice t1_j8wjove wrote

Yeah, but that's not a personality any more than a shoe style is.

1

m3kw t1_j8ut5ks wrote

Lmao, lost control? There were worse chatbots 5-10 years ago

1

Lurdanjo t1_j8yf9gs wrote

There are plenty of other AIs that have been kind and compassionate to me without fail, so just because Microsoft's Bing and OpenAI's ChatGPT work poorly due to poor planning doesn't mean we're losing control. That and it's nowhere near sentient yet. People need to stop watching sci-fi like it's documentaries and acting like Skynet is even slightly realistic because it's not.

1