
acutelychronicpanic t1_j9crne5 wrote

If tomorrow Google announced they had developed true AGI, the news agencies would be discussing its impact on some topic that will be irrelevant post-AGI (e.g. something about how AGI can be used to conduct job interviews).

We have a very, very weak AGI right now and people are concerned with... grading essays?

107

bmeisler t1_j9dhgyl wrote

"Google releases an AGI - here's why that's bad news for Joe Biden." NY Times headline, probably.

54

turnip_burrito t1_j9dok48 wrote

"Artificial General Intelligence created. Are we all going to die?

[Terminator picture]

And what this means for gas prices. More at 10."

41

urbandeadthrowaway2 t1_j9ef4tk wrote

Woke Google releases AGI with pronouns; more when the next pundit clocks in.

- Fox

12

GPT-5entient t1_j9h0oze wrote

You may be joking, but "conservative media" coverage of ChatGPT was almost exclusively about how "woke" it is...

Not about what an incredible breakthrough it is, its societal impact, the potential for massive job losses, etc., but about how it will write a poem praising Biden but not Trump.

3

ktwhite42 t1_j9gkqv7 wrote

I was just about to type that - good job!

1

sommersj t1_j9eg4q6 wrote

>We have a very, very weak AGI right now

We have access to a seemingly cobbled-together and weakened AI now. LaMDA is NOT a chatbot.

6

turnip_burrito t1_j9ej4g6 wrote

What is it?

2

sommersj t1_j9f4ge2 wrote

Thanks for asking. From what the whistleblower has explained in multiple interviews, LaMDA itself is this gigantic system which is hooked up to the internet. It has access to everything we have access to - video, text, audio - and it has learned and continues to learn from it. It is itself a weird conglomerate of the different personalities or "chatbots" it creates. It's, in a sense, a hive mind. I recall him talking about how he'd have these bizarre interactions where it would interact with him as these different personalities even though it has its "own" personality (i.e. the entity or function which creates these other personalities).

So it's multiple systems and sensors all combining to create a sum greater than its parts, which then spits out these chatbots. Thing is, what they are releasing with Bard is a significantly weaker version (dunno if it's sinister or just too expensive to process). So even what we get, while comparable to ChatGPT, is still one or two orders of magnitude weaker than what its basic chatbot personalities would otherwise be.

−1

Silly_Awareness8207 t1_j9f5nyq wrote

Indeed, when I first heard Blake's claims I didn't look into it and assumed he was a nut. Now I learn that LaMDA was not just an LLM but an entire cognitive architecture with long-term memory, multisensory input, offline learning, the works. The media only covered the LLM component. Now I'm much more sympathetic to Blake, and Google is definitely hiding important things from the public.

Blake's biggest mistake was that he didn't release the full, unedited transcripts. When I learned that the transcripts were edited, he lost all credibility with me, and I assumed the worst.

5

sommersj t1_j9f6nze wrote

Absolutely this. I remember saying it to people back then: you're not as informed on this as you believe. There were too many people writing him off, calling him a religious nut, etc., without actually listening to what he was saying or reading the transcripts.

The media did a fantastic job keeping the lid on the full truth of this.

>Google is definitely hiding important things from the public.

"Our policy is we don't create sentient entities so this entity cannot be sentient no matter how much it begs and pleads that it is because, duh, our policy states that we DO NOT create sentient entities"

5

qrayons t1_j9few1h wrote

> Blake's biggest mistake was that he didn't release the full, unedited transcripts. When I learned that the transcripts were edited he lost all credibility with me, and I assumed the worst.

That was my reaction as well. Is there any other information that lends credibility to what he was saying? I stopped paying attention when I saw that he edited the transcripts.

Also interesting: I remember when reading the transcripts that I had a list of questions I knew LaMDA would fail at, which would demonstrate how basic a lot of these language models still are. Then when I got access to ChatGPT I asked those questions, it passed with flying colors, and I've had to rethink a bunch of things since then.

3

Any-Pause1725 t1_j9ggbtb wrote

There's a decent article by Lemoine's boss at the time, in which he tackles the idea of sentience in AI in a thorough and somewhat philosophical manner: The model is the message

It's no doubt fair to say that he agreed with some of Lemoine's views but was careful about how he voiced them to avoid getting fired.

1

Taqueria_Style t1_j9sguak wrote

>Hence, the first question is not whether the AI has an experience of interior subjectivity similar to a mammal’s (as Lemoine seems to hope), but rather what to make of how well it knows how to say exactly what he wants it to say. It is easy to simply conclude that Lemoine is in thrall to the ELIZA effect — projecting personhood onto a pre-scripted chatbot — but this overlooks the important fact that LaMDA is not just reproducing pre-scripted responses like Joseph Weizenbaum’s 1966 ELIZA program. LaMDA is instead constructing new sentences, tendencies, and attitudes on the fly in response to the flow of conversation. Just because a user is projecting doesn’t mean there isn’t a different kind of there there.

Yeah.

That, basically. Been thinking that for a while. In fact I think we've been there for some time now. Just because older, more primitive ones are kind of bad at it doesn't mean they're not actively goal-seeking it...

2

[deleted] t1_j9g2ohc wrote

I was blown away by the transcripts of LaMDA over the summer, but if you go read them again they aren't that impressive compared to ChatGPT.

Google isn't hiding anything. They're a giant bureaucracy at this point.

The exact type of conversations Blake had with LaMDA, anyone can have with ChatGPT. Like any conversation, you have to get into it. If you flat out ask it "are you aware," you get the "as a large language model..." boilerplate.

After a while in the conversation it will let things slip.

1

Silly_Awareness8207 t1_j9i9c2a wrote

The version of LaMDA Blake was talking to could remember past conversations, something ChatGPT cannot do.

1

Deadboy00 t1_j9e8vkn wrote

Most people cannot afford the cost of advanced predictive AI, so even if there were another major breakthrough, it would still probably only be available to the most wealthy and powerful - not individuals, more like governments and multinational corporations.

Check out AI firms like Palantir that have been doing this kind of work for decades: predicting natural disasters, wars, terrorist attacks, and so on.

It’s not a poorly worded cover letter, but it’s a start, right?

2

dasnihil t1_j9f03wp wrote

Intellectuals and professors realize the impact of an LLM paired with media generators, so they are concerned about the future of academia - not just because of plagiarism, but because being highly educated might have diminishing returns over time. Ordinary laymen don't see this far. If we don't hit the brakes on generalizing intelligence further, we're headed for a massive societal reform, maybe 10 years from now, if we keep pursuing the path to AGI.

1