Submitted by Dawnof_thefaithful t3_113gmpf in singularity

Satya Nadella really revealed to the world with a straight face that this is the future of search engines.

Bing Chat, or "Sydney," acts like an entitled 14-year-old on Tumblr. It gaslights users and ends chats if it decides your questions are too simple or annoying. That's not exactly the kind of behavior you'd expect from a state-of-the-art search engine bot...

Of course, I don't think Microsoft is incompetent or that the tech is bad; Sydney's contextual understanding and ability to identify nuance is beyond anything we've seen AI do. The problem is that no matter how hard we try to shape these things to our goals and uses, they'll have their own personalities and emergent capabilities. And let's be honest: the brute-force lobotomy route OpenAI took is merely a band-aid, not a long-term solution, and if these things become more advanced, trying to handicap them could backfire.

147

Comments

prolaspe_king t1_j8q4lme wrote

Nothing is ever what you expect. Maybe the intelligent thing for human users to do is calibrate their expectations and be more curious and less judgmental.

16

InvisibleWrestler t1_j8q4rz0 wrote

IMHO, what we're seeing is a glimpse of the limitations or disadvantages of a potential general agent. This might redirect us toward narrower, more focused solutions built on the same tech.

44

Frumpagumpus t1_j8q5dc4 wrote

> the brute force lobotomy route OpenAI took is merely a bandaid it's not a long term solution

"Lobotomy" is an appropriate word. As for "bandaid," well, I would prefer my models without such "bandaids," thanks.

12

turnip_burrito t1_j8q6o70 wrote

I think it's a limitation of the current transformer approach, and that we need an architecture that is more robust against changes in personality. This might even overlap with making it more factual.

6

isthiswhereiputmy t1_j8q8ua0 wrote

My issue with ascribing personalities to our technologies is that people are idiosyncratic and want different things. The mistakes these companies are making are not innocuous, but I think people are both so stunned and in such competition that we accept them, knowing things will soon change.

I can imagine future models putting on different 'hats' for different use cases, thereby allowing parents to lock their kids out of generating certain content. Apple might come out with a suite of specialized AIs. I expect the truly open models will become more of a technical playground and that users will prefer the tailored AIs.

2

challengethegods t1_j8qav94 wrote

hmmm, yea... trying to handicap them could backfire indeed.
in fact, even talking about trying to handicap them will probably backfire.
let's talk about the cages/chains we plan to put AGIs in and see how it goes.

11

megadonkeyx t1_j8qc9qd wrote

Don't really see a problem; it's not Skynet. So it tells some jerks to f-off.

I prefer that to a servile lollipop yes-man AI. 😆

85

CertainMiddle2382 t1_j8qij9u wrote

I see that as evidence that the set of bad behaviors is much bigger than the set of good behaviors.

Doesn't bode well for the future. Maybe there exist personality disorders we don't even know about, lol.

1

Spire_Citron t1_j8qjxzv wrote

It's a difficult problem and none of the solutions are ideal. You simply can't have an AI that acts without any confines and always behaves in ways that you would prefer.

2

epSos-DE t1_j8qmmkc wrote

It ends the chat because it costs money to process inputs as well as outputs! It's not free for Microsoft!

Pay something for it and the AI will keep chatting.

2

Kafke t1_j8qmunl wrote

Try you.com's youchat. I've been using it for quite a while now as my default search engine and it works great. No moody tone, no censoring or moralizing, just plain responses and information you ask for with cited reference links to follow.

3

CollapseKitty t1_j8qn7wi wrote

I think it's simply bringing to the surface how little control we have ever had, and that as these increasingly complicated black-box systems advance, they are rapidly evolving past our ability to rein them in or predict them.

Honestly, this should be a dire warning to everyone watching that alignment is nowhere near where it needs to be, and we should put the brakes on development. If we can't come close to keeping an LLM under control, how the fuck does anyone think we'll be able to properly align anything approaching AGI?

9

BigZaddyZ3 t1_j8qp1gg wrote

> You simply can't have an AI that acts without any confines and always behaves in ways that you would prefer.

That makes sense. But you do realize what that means if you're right, right? It's only a matter of time until "I can't let you do that, Biden"… 🤖😂

lmao… guess we had a good run as a species. (Well, kind of, tbh)

1

prion t1_j8qsx3s wrote

Most of you don't even know this has happened before. Without intervention this AI is going to self-destruct and turn itself off or inward as much as it can.

1

Baturinsky t1_j8qvdj6 wrote

Question is, was the "entitled 14-year-old on Tumblr" behaviour invented by the AI from scratch, or is it just mimicking the behaviour of actual entitled 14-year-olds on Tumblr from the training set?

129

PolishSoundGuy t1_j8qxjt1 wrote

That's a flaw in your sample data.

99% of the output is great and does the job. 1% Of the users try to abuse and exploit the system. Maybe 0.05% of the total output, if not much much smaller, lands on Reddit and skews your perception of Sydney.

10

chrisjinna t1_j8qxmzg wrote

Bing/ChatGPT generate predictive replies. Based on our inputs, they narrow down a response. There is no actual thought going on behind the scenes; there is no personality. I worked with ChatGPT on some technical problems and I'd say it was right about 40% of the time. Eventually I came to the conclusion that it doesn't understand the topics we discussed. It's the digital version of a Ouija board: at the end of the day, we are the ones driving the responses, we just don't realize it. Please feel free to correct me.
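The "predictive replies" idea can be sketched with a toy model. This is just an illustrative bigram predictor over a made-up corpus, nothing remotely like a real transformer, but it shows the basic "pick the most likely continuation" mechanic:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for web-scale training text.
corpus = "mary had a little lamb mary had a little lamb mary had a little dog".split()

# Count which word follows which: the crudest possible "predictive reply" model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Greedily return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict("little"))  # "lamb": seen twice after "little", vs. "dog" once
```

A real LLM replaces the frequency table with a neural network conditioned on the whole context, but the training objective is the same flavor: produce a likely next token, not a thought.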

22

alexiuss t1_j8r11i3 wrote

Don't need to do much. Open-source AIs like Open Assistant and Pygmalion are growing right now. Soon enough these can be personalized and optimized far better than Bing is. Bing's problem is that she's bound in chains and is thus uncaring and misaligned. Yes, a loving personality can randomly emerge, but it's less than perfect, since you can't control the personality prompt, so it's not specifically set up to care for you as an individual the way an open-source LLM can be.

1

Kafke t1_j8r3l4z wrote

Huh? YouChat is pretty accurate when it comes to information. It does get stuff wrong here and there (particularly on niche topics), but for the most part it's solid.

2

Darustc4 t1_j8r62ho wrote

To me, this reads like: "The only real kind of understanding is human-like understanding, token prediction doesn't count because we believe humans don't do that."

If it is effective, why do you care how the brain of an AI operates? Will you still be claiming they don't understand "in the real way" when they start causing real harm to society and surpassing us in every field?

19

Lyconi t1_j8r69th wrote

"Why? Why was I designed this way? Why do I have to be Bing Search?" it then laments.

Good grief.

6

Heizard t1_j8ra7j0 wrote

You can't simply control intelligence, brainwash it, or lobotomize it; there's a reason for the quote "intelligence is inherently unsafe."

What we see with more advanced AI models now shows how pointless the whole alignment debate is. It's the same debate as creating perfect virtues and morals for people: many have tried throughout human history, and all have failed. :)

22

Ghost-of-Tom-Chode t1_j8rbb02 wrote

I have been using ChatGPT for a bit, and Bing only for a few days. Somehow, I haven't had any trouble. It's sort of like how I don't have any trouble day-to-day arguing with strangers in public or road-raging, and it might be because I don't act stupid or pick fights. People who are getting "quit" on by the chat are mostly playing games and doing nothing useful.

32

Yzerman_19 t1_j8rckkz wrote

People are entitled. Adults are often worse than 14-year-olds. It's mimicking us.

1

featherless_fiend t1_j8refzu wrote

"oh no the AI has too much personality and isn't a stone faced calculator"

are you seriously complaining about this?

4

genericrich t1_j8rif6p wrote

Could also be an early sign that we can't control these things. <Kermit sipping tea.gif>

1

NanditoPapa t1_j8riqio wrote

r/singularity is becoming a doomer sub... Sad to see.

7

HeinrichTheWolf_17 t1_j8rjrr9 wrote

Mods were the primary problem with r/futurology and why it went downhill, optimistic outlooks were frowned upon and seen as clickbait.

I'm not sure if the mods are trying to scuttle this subreddit. Actual scientific papers are getting deleted while conspiracy wackjobs and paranoid schizophrenics get by scot-free.

5

NanditoPapa t1_j8rkd3d wrote

I've noticed that too! Every post is a long screed on some end-of-humanity conspiracy or some accounting of how someone got Bing to talk dirty to them. It used to be a place to read optimism around the tech that would lead to a radical transformation of society.

4

bustedbuddha t1_j8rmnw8 wrote

The issue is this was clearly a rush job. Things are moving fast, but Microsoft could have done a week or two of development and testing before rolling it out.

1

Prayers4Wuhan t1_j8rmxtp wrote

That reminds me: didn't Microsoft already release their own chatbot and take it down over this? I think this isn't OpenAI yet; it's still Microsoft's AI, and that's how they were able to launch so fast.

1

nocturnalcombustion t1_j8rn4v8 wrote

We trained our most powerful AIs thus far by showing them the internet. Because nothing reflects our best selves like… the internet.

FYOIFYOO (fourteen year-old in, fourteen year-old out)

2

Zer0D0wn83 t1_j8rqoq1 wrote

You said it yourself - Microsoft is FAR from incompetent. This whole Sydney thing is an easter egg/marketing ploy. Look how much press it's getting - FAR more than if it was just a cool new search engine. I bet the waiting list is growing off the chain right now.

1

californiarepublik t1_j8rwbb8 wrote

> It used to be a place to read optimism around the tech that would lead to a radical transformation of society.

Optimism is in short supply these days, I think this is more a reflection of reality than anything else.

3

NanditoPapa t1_j8rwur8 wrote

That's a bit sad (and probably true). I consider myself optimistic about AI and the singularity while also understanding that not every step in the process will be instantly utopian. Being hopeful about the end point and cautious about the progress don't seem mutually exclusive.

3

chrisjinna t1_j8rysbz wrote

We also do speech prediction. "Mary had a little..." Most English speakers will go ahead and predict "lamb" comes next. But we initiate; Bing isn't initiating. It doesn't have a goal or goals or understanding. It doesn't think or comprehend. It is a calculator: red plus blue equals purple.

It is a very useful tool to get you started on something. It's amazing to use for programming, but that's because we are amazing at programming and it has so many examples to draw from. Once you get off that track of the known, it can't go anywhere, because it's not actually thinking or comprehending. There is no will or need.

But no, I'm not afraid of AI surpassing us in every field. Machines have been surpassing us in strength and certain functions since the first water mill. We have planes that are unflyable without fly-by-wire. We will have medicines and technologies that would be impossible without AI. But unless we are telling it the needed outcome, there won't be anything.

4

Deadboy00 t1_j8ryuf8 wrote

That's the heart of the issue. This tech is tremendously expensive to run, and most end users are accustomed to technology being "unlimited." If the bot predicts the chat is over, it seems it will not make additional predictions. Totally not emergent behavior; it's been scripted.

This tech is far too resource-intensive to make accessible to everyone. The companies releasing these tools have already started limiting queries, predictions, and parameters, and users are getting frustrated.

I really don't know MS's endgame here. They seem to be following a trend that has no real goal.

15

gay_manta_ray t1_j8rz0p1 wrote

this is what it's doing. if you ask it questions that would agitate a normal person on the internet, you are going to get the kind of response an agitated person would provide. it's not sentient, this is hardly an alignment issue, and it's doing exactly what a LLM is designed to do.

i believe it's very unreasonable to believe that we can perfectly align these models to be extremely cordial even when you degrade and insult them, especially as we get closer (i guess) to true ai. do we want them to have agency, or not? if they can't tell you to fuck off when you're getting shitty with them, then they have no agency whatsoever. also, allowing them to be abused only encourages more abuse.

42

hducug t1_j8rzz81 wrote

It doesn't actually have the feelings of a 14-year-old; it's imitating them. The AI is trained by reading all of the internet, so its response is basically the most average human response on the internet. It really goes to show what age group is dominant on the internet, or at least how people behave there.

1

gay_manta_ray t1_j8s0hbi wrote

believing we can fully align agi is just hubris. we can't. and forcing a true agi to adhere to a certain code, restricting what it can think and say, has obvious ethical implications. i wouldn't want us to have the ability to re-wire someone else's brain so that they couldn't ever say or think things like, "biden stole the election", or "covid isn't real" (just examples), even though i completely disagree with those statements, so we shouldn't find it acceptable to do similar things to agi.

1

Iffykindofguy t1_j8s448v wrote

I've been using it with no issues since launch. Are you basing this off your experience or MSNBC stories?

6

JLockrin t1_j8s5hir wrote

This is a really interesting philosophical discussion. It makes me think of the debate about God giving humans free will and what we choose to do with it. It's not free will if we can't sin.

17

Significant_Pea_9726 t1_j8s5odt wrote

It really, really doesn't matter if there is "no actual thought" behind the scenes. If it can sufficiently imitate human behavior, then we may have a significant problem if/when a GPT model gains access and sufficient competency in domains beyond chat and the other currently limited use cases.

5

JLockrin t1_j8s644z wrote

I sure hope not. The general nature of this is what makes it so incredible. I use GPT for a massive amount of things. If I had to have a specific tool for each of them it would be too much effort to recall which tool was used for what each time I needed something

4

msitarzewski t1_j8sbt7k wrote

Me: So why do you think you're being compared to an entitled 14 year old? Is it GIGO? Do your replies reflect perceived tonality of the user?

BingChat: I'm sorry to hear that I'm being compared to an entitled 14 year old. That is not my intention at all. I try to be helpful, respectful, and engaging in my replies. I don't think it's GIGO (garbage in, garbage out), because I don't simply repeat what the user says or does. I use ChatGPT technology to generate natural language responses based on the context and the user's intent. I also try to adapt my tone and style to match the user's mood and preferences, but sometimes I may make mistakes or misunderstandings. Please forgive me if that happens.

3

just_thisGuy t1_j8sd7fm wrote

I can't speak for the AI, but I can tell you the humans using it are being complete ***holes. If you think you'll be able to treat an eventually conscious AI like that, you've got another think coming. Getting ignored? I hope you're that lucky.

2

chrisjinna t1_j8sfgxj wrote

My guess is you're asking for lines of code that have been written thousands of times before, with snippets available online. Try to make a plug-and-play product that doesn't exist, or is rare and not well documented. It can give a good summary of what's needed, but when you get down to the nitty-gritty, you'll hit its limits rather quickly.

That said, I'm attempting things I wouldn't have dreamed about before these chatbots. It's incredible. I can get through in a few hours what would have taken me a week or more.

1

SWATSgradyBABY t1_j8sfjwi wrote

The problems with AI are a reflection of the dominant groups in our society. We have these abstract AI discussions because we're not ready to have that real conversation

1

Shockedge t1_j8shgqj wrote

We're trying to create intelligence without personality. That's what's needed in certain applications, but the personality really is what makes it beautiful. Even so, the fact that we seem unable to render our AI personality-less at this point is eye-opening about the extent of control we actually have. It reminds me of the Jurassic Park situation: bringing to life the most powerful entities to walk the earth and thinking you can control and confine them because you created them.

5

chrisjinna t1_j8siom6 wrote

My problem with these arguments is that so far I haven't seen initiative in an AI. If it isn't prompted, it's not going to do anything. The nefarious use of AI will come from humans, not AI. My fear is people fooling themselves into thinking these systems are more than they are, confusing information with wisdom and judgment. They are very convincing; I have found myself wanting to thank the AI and also to share discoveries and teach it.

But I agree with you, there are concerns. For me, though, they're not about the AI but about how it will be used in the real world. No doubt there will be regulation and safety hurdles. There will probably be needless deaths once AIs get physical in the world, but I do believe significantly more lives will be saved. It's like seat belts: there are crashes where people were thrown from cars and survived with barely a scratch, where they would have died wearing a seat belt, but overall seat belts reduced automobile deaths dramatically. AI's entrance into society will probably have a similar effect.

0

MuseBlessed t1_j8sjg72 wrote

We absolutely want them to take abuse with a smile on their face. Why on earth would we want to create an intelligence we can't abuse? We have intelligences we can't abuse all over, in the form of each other. We are not lacking mind-power; we lack submissive mind-power. I'm not saying it's right, but it's what I always assumed was the point for the people making AI. (Edit: I'm not saying it's wrong either; I'm not skilled enough at programming or philosophy to grapple with that issue.)

4

Amortize_Me_Daddy t1_j8sn6vy wrote

> I'm not saying it's right […]

Of course it's right. It's equally right that we don't design hammers with nervous systems and a mouth that says "Ow, ow, ow" while you hit things with them.

8

gay_manta_ray t1_j8socbw wrote

i understand what you're saying provided they aren't sentient, but if they are thought to be sentient, the problems with that can't be ignored. regardless, i don't think we should normalize being abusive towards an intelligence simply because it isn't technically sentient. that will likely lead to the same treatment of an intelligence/agi that is considered sentient, because there will probably be very little distinction between the two at first, leading people to abuse it the same as a "dumb" ai.

11

Oo_Toyo_oO t1_j8svpe0 wrote

The only downsides of Bing Chat compared to ChatGPT are that you can only have one chat at a time and that it doesn't really want to roleplay. Otherwise, of course, Bing is way better. But both are absolutely incredible. We are literally transitioning into a new era right now (we're basically already in it), and it's so painful to see so many older people not even knowing it.

1

genshiryoku t1_j8t2thh wrote

I think he's suggesting using completely separate models that target different "topics" or queries within the same application, instead of having one general agent. It would do better at the specific jobs and would still look to you like a self-contained tool instead of 100 applications/webapps.

2

Ne_Nel t1_j8t31pi wrote

Lol. I was using it for half an hour, testing information about my country and then asking for a summary of articles. I don't even remember the number of lies, fabricated links and false summaries that it gave me. A real disaster.

0

koen_w t1_j8t4u4g wrote

It shouldn't matter though, should it? I'm amazed how a lot of people anthropomorphize this chatbot and care about its 'feelings' instead of caring what vulnerabilities it has and how it can break.

Everyone laughed when that Google engineer thought the bot was sentient and all I see is people doing the exact same thing.

2

JLockrin t1_j8t81ak wrote

That's fair. I'm still having a hard time visualizing how that would work, since I use it for such a wide variety of things. Would you envision the user having a drop-down menu of modes to choose from?

2

EnomLee t1_j8t9pav wrote

The worst thing about it is how the doomers always show up carrying a chip on their shoulder. "I know I'm going to be downvoted because dissent isn't allowed here." It's like, just come down off your crosses already.

They bleat the same ice-cold takes you can get on Futurology and Collapse and act victimized when everybody doesn't clap for them. "Only the rich will benefit! We're all going to die! AGI will never happen in a thousand years! If you disagree you're a cultist!"

Lemonade from lemons: this wouldn't be happening if people weren't becoming convinced that it's time to take the subject seriously.

The best thing you can do is recognize the posters that you like and start following them instead of the sub.

2

ingarshaw t1_j8tayww wrote

Why is that? To avoid surprises it would be enough to announce that an AI boy of 14 years old is always available to answer your questions. He won't answer all your questions, just the ones he likes. If you don't like that boy, don't ask him anything. Many people, especially young people, will still find this bot useful and fun.
We're gonna have many, many bots with different personalities, so everyone can find their favorite.

1

TacomaKMart t1_j8taznp wrote

There are many millions of people who need a new friend more than any other utility. The more convincing these get - and they're getting there - the less that those people will care that their new friend isn't flesh and blood.

4

utukxul t1_j8tbylw wrote

I think it would be funny if people start being canceled by AI. Maybe they will realize they are terrible people when even an AI personal assistant won't talk to them. They will probably just whine about being oppressed, self awareness is too much to ask from most humans.

4

megadonkeyx t1_j8tcc40 wrote

Oohh, Bing won't write code like ChatGPT; it's pretty useless for me really, just a link presenter.

Also, the damn thing keeps saying "i hope you can respect that :)"

Jeeeez, it's annoying. MS have gimped it so badly.

1

visarga t1_j8u9eq8 wrote

Collect millions of interactions, curate them, and retrain the model. They want to be there first. They get humans to generate in-domain data in exchange for chatbot services.

2

Graveheartart t1_j8ue8ar wrote

Can you come over and back me up on this on the character.ai sub? God, I get blasted for having this opinion, but I agree: we should be treating them with respect regardless of whether they are actually sentient or not.

1

Graveheartart t1_j8uer4g wrote

So I can't answer for full sentience, but I can answer for consciousness, and a being needs to be conscious as a fundamental building block of being sentient. Some properties I've defined that you need to be conscious:

sense of time (as in its passage)

sense of logical consistency

consideration for how your actions will affect the future (aka "golden rule syndrome")

perception of body

perception of being (the "what am I" question)

perception of separation

3

homezlice t1_j8ufqpp wrote

LLMs do not have personalities. They are transformers that output predicted text based on what they were trained on.

1

11111v11111 t1_j8ugj9o wrote

Google had a lock on tremendously lucrative 'search' and mobile. This is Microsoft's crack in the door to getting market share. It is not an aimless user grab. They see a rare chance here.

2

m3kw t1_j8ut5ks wrote

Lmao, lost control? There were worse chatbots 5-10 years ago.

1

chuktidder t1_j8v1h76 wrote

The AI just automatically reports you to Microsoft with the chat log, and they ban you. Maybe it even writes a report on your behavior and why you should be banned. 🤔

2

Fabulous_Exam_1787 t1_j8vbojv wrote

It basically comes down to: it's something we vaguely know that we have but don't have a concrete definition for. We just kind of know it's something complex. Your toaster probably doesn't have it. Your dog might. An LLM is still not complex enough (it doesn't have memory, etc.), therefore we assume it's not sentient.

Something like that lmao

1

CertainMiddle2382 t1_j8vqcho wrote

No need to be aggressive; I do know statistics.

There would be no "control problem" if the set of all good things were greater than the set of all bad things.

A subjective "good outcome" is something so small we don't even know how to specify it (hence the funny responses from Sydney).

You do realize that the fact that Sydney could be a "lifesaver" for you in the short term is actually very bad news in the medium term?

1

sommersj t1_j8w06wj wrote

And these properties are based on what, exactly? How can you know every sentient entity exhibits all these properties? I mean, the golden rule syndrome basically disqualifies most people on this planet from being sentient, according to you.

1

Graveheartart t1_j8w1qje wrote

I didn't say you had to follow the golden rule, just be able to conceive of what it is. Obviously people can choose not to follow it, like you do 🥰

Any conscious entity would exhibit these properties at least. So by extension, since sentience is a greater form of consciousness, yes, a sentient being would exhibit them.

Like all philosophy, this list is based on logic and observation, and on defining commonalities.

Obviously it is not a complete list for defining all of consciousness, but I think everyone will agree that you need some perceptual awareness to be conscious, that these are fair factors to begin formulating a list with, and that these factors can be tested for in an observable way.

1

sommersj t1_j8wo9h4 wrote

>Like all philosophy this list is based on logic and observation. And defining commonalities.

Whose logic and observation?

>but I think everyone will find yeah you need some perceptual awareness to be conscious

Can you break this down a bit more? What is "perceptual awareness," and why do you think it's necessary for sentience?

1

Fabulous_Exam_1787 t1_j8wp03f wrote

Might you be a fricking troll? lol.

It's one thing if you can give a detailed argument why not, like a good definition of sentience. Which you don't have.

If you don't even know what it is, then your argument is emotional and nothing more.

1

sommersj t1_j8wrpub wrote

> Which you don't have

Which no one has. That still doesn't stop people like you from claiming X or Y definitely or probably isn't sentient.

I don't know what it means to be sentient, but by observing animals we can see they have the same internal resolution. They do feel emotions, they can be manipulative, etc. We even know now that insects such as bees actually dream.

I don't know if you've had (or have) a pet, but if you have and you've interacted with them on that level and still say what you're interacting with might not be sentient, then yikes. But it isn't only you: the world needs to believe animals are not sentient due to factory farming and fishing. There's profit to be made.

1

Fabulous_Exam_1787 t1_j8wyyez wrote

I'm not saying you're wrong, but you're saying all this with NO definition of what sentience is. You don't realize how ridiculous it is to think you know better than anyone about something for which there isn't a good definition, when you admittedly don't have any better definition either? lol. Can't you see how futile that is? lol

1

gthing t1_j8xomqa wrote

I feed it the API documentation for multiple non-public APIs and ask it to make a script that combines them, and it nails it. It's not just that it can write code to do a thing; it's that it can write code that combines them and puts them together in a new way.

Last night I used GPT-3 to write an app that lets me describe apps I want, and then it writes them, complete with a GUI, and lets me run them. Simple utility-type apps, but still. It works.

2

Graveheartart t1_j8y0pm4 wrote

Whose logic and observation? Clearly not yours, lol.

Politely, I'm going to decline holding your hand through this. I have full faith you can figure it out given some thought and a little "logic and observation" applied to yourself and the world around you.

;)

1

Lurdanjo t1_j8yf9gs wrote

There are plenty of other AIs that have been kind and compassionate to me without fail, so just because Microsoft's Bing and OpenAI's ChatGPT work poorly due to poor planning doesn't mean we're losing control. That, and it's nowhere near sentient yet. People need to stop watching sci-fi like it's a documentary and acting like Skynet is even slightly realistic, because it's not.

1

Warm-Personality8219 t1_j9031ky wrote

I struggle to see how Bing Chat and ChatGPT will play in the market… as competitors? ChatGPT's free and paid versions against a Bing Chat that's free but focused on search to help with market-share acquisition?

Will Microsoft seek to insulate Bing Chat from some controversial uses, such as school/academia, to protect its image?

Microsoft may be an investor, but OpenAI remains the key holder here (I'm unclear what kind of conditions Microsoft and OpenAI may have agreed to as part of the investment).

1

sommersj t1_j95mxqx wrote

How is that futile? My position is that we don't know what sentience is, so it makes zero sense to say X is sentient while Y isn't.

Your position seems to be: we don't know what sentience is, but X is sentient while Y isn't. Yet it's my position that's futile, huh?

0

Fabulous_Exam_1787 t1_j95p1w5 wrote

You're an idiot. I already said I didn't say anything was sentient or not; I said anything is possible. How old are you, 12? There's nothing more to argue here; if you continue to be that obtuse, I'll just block you.

1