Submitted by Magicdinmyasshole t3_10fj3em in singularity

Disclaimer: yes, I'm just some nutball. Maybe take a look at the vid and see for yourself, though?

As a degenerate procrastinator, AI enthusiast, and self-destructive person, I inexplicably decided to spend a silly amount of time analyzing this video when I should have been doing about a million other things.

https://youtu.be/ebjkD1Om4uw

First, a TLDR: The CEO of everyone's favorite generative AI company thinks they're getting pretty close to Artificial General Intelligence, but won't come right out and say that. Further, even when it's achieved, he's going to see that it's rolled out slowly. He doesn't think they'll be alone in getting there, but he seems to think they'll be first.

Also, a prediction: very soon, AIs will be good enough at reading non-verbal cues that CEOs and world leaders will be hesitant to speak on matters of any great import on video, for fear of what they will unwittingly give away. Maybe deep fakes for their own statements? What are YOU giving away? O brave new world!

Skip down to 24:33 for the most important bit.
Why did I waste my time with this?

Not.a.fucking.clue, but I thought I spotted some duper's delight in some of the statements he makes and got curious. First, a quick primer on that:
Human: explain duper's delight
AI: Duper's delight is a facial expression that may be indicative of deception. It is characterized by the person making brief micro-expressions of joy or satisfaction when they think they have successfully deceived someone. This usually shows up as a sly smile that doesn't last for long.
Without further ado, to the transcript!
2:30 - "Rather than drop a super powerful AGI on the world all at once"
Something weird with the eyebrows and an inappropriately long glance. I think he wants to see how she reacts to that statement. Guessing the unsaid thing is 'this is something we could definitely do, and wouldn't that be scary.'
2:58 - re: why others didn't beat them to something like ChatGPT with their API access: "do more introspection on why I was sort of miscalibrated on that"
Classic duper's delight. Flare of the nostrils and a little smirk. Guessing 'I'm wondering how people could have missed something so obvious'
3:16 - Are there enough guardrails in place? "It seems like it"
Whoo boy, "seems" is a telling word choice, and it's said waaaaay higher. He doesn't believe that shit at all. This is a perfect sound bite. Can someone make a meme?
3:37 - He's just talked about internal processes, and though he lists a few things, I get the sense he doesn't think they're all that great yet.
"there are societal changes that ChatGPT is going to cause or is causing."
Lots unsaid at societal changes. Check out those brows.
From here he settles in for a while and gets into a comfortable lane with academia impacts and the iterative release structure. You could look at this section as a control: how he speaks when he doesn't have much to hide.
Worth noting that he doesn't seem to be bullshitting at all that the GPT-4 rumor mill is way overblown.
5:52 - "we don't have an AGI and yeah we're going to disappoint those people"
LIE. Way too much nodding at the end of that sentence, but I think it COULD just mean those people kind of annoy him and he's almost looking forward to disappointing them.
-Control section. His body language shows he's not quite as stoked as he lets on about others entering this market, but that's no surprise. He's trying to run a company-
11:27 - re: Microsoft "They're the only tech company out there that I'd be excited to partner with this deeply" weird pause.
LIE detector determined THAT was a lie! A little white one, for sure, but see here for what that looks like.
12:22 - re: Microsoft's plans. A gift! Look here to see how he presents to the world when there's a lot he can't say. The little duper's smirk. He's the smartest guy in a lot of rooms and it shows. He's got this! We'll never know anything he doesn't want us to know, right?
13:09 - "in general we are very much here to build AGI"
Something really weird happens at "AGI". Almost looks like an involuntary tic. Mouth opens too wide, eyebrows flinch. Seems like a veil temporarily lifted. I take this to mean he's pretty confident they'll get there first, or that they're fairly far along. A little dopamine rush for his ego.
14:05 - Re: Google's firing of 7 year veteran "I remember…basically only the headline"
LIE. Bullshit alert. He could probably even tell you Lemoine's name, but he's not getting into that quagmire, no sir! Another good place to see what a lie looks like.
14:32 - re: Google's plans "I don't know anything about it"
LIE. He's got some intel, at least.
**Alright, this shit is taking too long and as much as I'm a dysfunctional fuck who loves procrastination, I do have other shit to do. From here I'll just spot the lies or really helpful control sections**
15:21 - re: academia coverage "and PROBABLY this is just a preview of what we're gonna see in other areas"
LIE. Well, more just a conscious understatement. Not probably, definitely. Tenors are jealous of these high notes.
18:17 - "multiple AGIs in the world I think is better than one"
Not a lie but a telling choice of words. He was just asked about a competitor and chose to say this. Could be an unforced error? This tells me they're so close, or he feels it's so inevitable, that at just the mention of a competitor in this space it's relevant to talk about multiple AGIs.
24:33 - re: when AGI? "I think people are going to have hugely different opinions on when you declare victory on the whole AGI thing."
Long blink, checking her face to see how he did with this answer. This may be the money shot of the whole thing.
Not a lie but something unsaid. Based on his preferred "short timeline, slow takeoff" scenario from a moment earlier, I will make the guess that he believes a lot of people might say they're already there (or they could be if they decide to pull the right levers in the right sequence), but he and others like him don't quite agree and want to keep tweaking for a while. Either way, here's confirmation that he foresees a period of time when he keeps AGI in his back pocket while the world catches up and has time to prepare.
Note - the camera angles are really fucking with things during the Q&A. We're not getting a lot of great head-on shots to dive into deeply, but I also get the impression he's more settled and prepared for these.
30:24 - "We would like to operate for the good of society"
Big exhale on my part. He believes in what he's doing and is actually considering many philanthropic ways to spend the proceeds. He also seems to have an honest affinity for UBI as a starting point, so check and check. If only he got to decide. Altman 2024?
31:07 - re: what kids will now need. "...ability to learn things quickly…"
Big eye bulge on quickly. He means REALLY fucking quickly, and good luck with that.
-some questionably honest remarks on WFH vs hybrid but what do you expect from the boss man-
-not seeing much worth mentioning towards the end here. He does believe this will do more good than anything else. In my opinion, though, he way understated the closer. Most value since the launch of the app store? That will be completely dwarfed by the value generated by LLMs. Also, just a few thousand views so far. This is truly early days-
Someone can maybe comment with the appropriate links for the timestamps, but I'm out of fuckin' around time, and these do align with the way YouTube's transcript has separated things.

Edit: come join us at https://www.reddit.com/r/MAGICD/ for discussion on how to address more dangerous varieties of AI-induced craziness

142

Comments


piedamon t1_j4y208p wrote

Remember this is PR. They’re raising money, and hyping up their tech. Alluding to AGI-level performance is exactly the hype train they’d benefit from.

But I do hope it’s true!

75

[deleted] t1_j4yrt80 wrote

Bruh i'm hyped ngl

14

HeinrichTheWolf_17 t1_j50itj9 wrote

AGI is happening this decade, you should be hyped 😎

7

Cajbaj t1_j523hn9 wrote

I'm not very sane, and the fact that I'm literally in the middle of the biggest, fastest, most absurd and unimaginable paradigm shift in the history of life so far makes pretending to be sane a little difficult. Even the K-T boundary would be small compared to the Anthropocene and the birth of AGI.

3

Magicdinmyasshole OP t1_j56krif wrote

Could not have said it better. https://www.reddit.com/r/MAGICD/ is for discussion on this topic. Think Foundation, but the first crisis is the many new and interesting ways people's minds will break when confronted with AGI or something approaching it.

2

Artanthos t1_j50nkem wrote

Hyped or terrified.

With a singularity, it’s impossible to know.

2

theonlybutler t1_j4zchxa wrote

Agreed, the way some have hyped this thing, it's like it's the messiah or something. It's a bigger model than the last one which is great but unfortunately still has its same flaws.

4

missanthropocenex t1_j51ae6m wrote

It’s nowhere remotely close. AI is a nifty tool but it’s really limited still. Yes it’s growing but it really has along way to go

0

Shiyayori t1_j4xevzt wrote

I wasn’t as anal about the expressions as you were, but when I watched it and heard his tone, it definitely felt like there was lot he was trying not to say. I definitely get the vibe there’s a lot going on in the background of AI.

56

Magicdinmyasshole OP t1_j4xgpa8 wrote

Yeah, this is one of the crazier things I've done in a while.

29

Cognitive_Spoon t1_j4y5edq wrote

Don't sell yourself short, it's a deep dive on small expressions at the outset of wildly disruptive tech. It's interesting.

20

coumineol t1_j4z3ley wrote

I'm also not into anal but one really shouldn't have to analyze Sam's facial expressions and tone of voice to see that AGI is very, very close. Just looking back at the exponentially increasing pace of innovation in AI is enough.

14

GeneralZain t1_j4zlr2m wrote

anal is my jam, but I agree with this guy, AGI feels extremely close.

;)

2

gaudiocomplex t1_j4xi8tl wrote

The CEO of Rippling already came out and said that 4 is basically AGI. My guess is he got drunk one night, spilled the beans on Twitter, and then deleted the tweet when he realized he'd pissed off his Silicon Valley bros.

It's a pretty common belief right now in the right circles that 4 is going to be problematic to society. I think all indications point to 3.5 being a trial balloon for the ways that the common folk will receive it. I've been in tech marketing for quite a long time and my mind could not wrap around the notion of introducing a half-cocked product (to describe the chat as lightweight is generous) when you have another one that is clearly superior only two quarters away.

And then to tease it as though 2022 is going to be a "sleepy year" by comparison? I don't think you need to look into the non-verbal cues here. It's pretty clear that Altman knows what's going on and he's sitting on something big.

What's problematic here is... If this is indeed AGI or an AGI proximate, there's not a lot that they're going to hold back if they're in competition with deepmind. There's too much money at stake to be the kind of careful they need to be.

Another thing that I'm not hearing about right now is if the Department of Defense is involved. It's hard to imagine AGI being privately developed without them putting their thumb on the scale.

Edit: grammar.

52

Magicdinmyasshole OP t1_j4xkbzp wrote

Yeah, I agree it's not that revelatory, but it was kind of cool to become totally convinced through this little exercise.

And I agree. Unless everyone is way dumber or more oblivious than I thought the DOD is heavily involved. There's no way they can afford to just let this happen. I'll admit, though, that I'm a little surprised at how much of this has happened in the public eye. I would have figured the billionaires and state leaders would have swooped in with offers that couldn't really be refused a while ago.

19

gaudiocomplex t1_j4xljx5 wrote

Well, another problem here is that they've really just completely destroyed their own moat with 3.5. Unless, again, they know they have 4 and they're not worried about somebody else getting there in the interim. I don't know if there's much proprietary here for them... that's the head scratcher for me.

7

MrEloi t1_j4z9zid wrote

>I would have figured the billionaires and state leaders would have swooped in

I think that they got caught out by OpenAI dumping chatGPT into the open.

Perhaps Altman got sick of the secrecy and decided to do something about it?

Anyway, it looks like the secret is out .. and that OpenAI are getting smacked about the head. That would explain their sudden reluctance to release GPT-4.

6

Direita_Pragmatica t1_j50g6c3 wrote

This. He decided to open the bottle so nobody could use it in secrecy.

3

Yomiel94 t1_j4xzlr9 wrote

This seems like a stretch. GPT might be the most general form of artificial intelligence we’ve seen, but it’s still not an agent, and it’s still not cognitively flexible enough to really be general on a human level.

And just scaling up the existing model probably won’t get us there. Another large conceptual advancement that can give it something like executive function and tiered memory seems like a necessary precondition. Is there any indication at this point that such a breakthrough has been made?

19

[deleted] t1_j4yshql wrote

I'm being naive here, but the way ChatGPT has some type of local/temporary memory within each of the 'tabs' is in some ways its memories...

If there was a way for those 'memories' to be grouped and have a type of soft recollection of each of them, I imagine that would be a pathway to a full agent -- think, perhaps you do >50% of your coding work through GPT directly, and the Agent can see the rest of the work you are doing.

It sees your calendar.

It knows you have done x lines of code on y project and it knows exactly how close you are to completion (based on requirements outlined in your Outlook).

I think it's almost trivial (in the grand scheme) to be hooking ChatGPT into several different programs and achieve a fairly limited 'consciousness' -- particularly if we are simply defining 'consciousness' as intelligence * ability to plan ahead.

Basically it has intelligence *almost* covered; its ability to plan ahead is dependent on calendars in the first instance.

Further on, I believe it will need to have access to all spoken word and experience, but that is just too creepy too soon I think. Otherwise how else will it have sufficient data to be an 'Agent'?

5

theonlybutler t1_j4zd3pg wrote

Yeah, I agree. I think the key thing would be its ability to fact-check itself: discern whether its statement is implied to be factual or not (probably a spectrum) and fact-check it. If it could do this, it'd be a game changer.

2

Bierculles t1_j4zj7h1 wrote

It's a proto-AGI: an AI that can communicate on a human level but is still far away from being able to do everything a human can. I think, at least, maybe I'm wrong.

1

WaveyGravyyy t1_j4xjnkj wrote

Do you know what 4 can do that 3 can't? I keep hearing all the hype around 4 and I'm really curious what 4 can do better than 3. 3 is already mind blowing lol.

13

gaudiocomplex t1_j4xkj75 wrote

It may be multimodal. And that may have been the difference in achieving some semblance of AGI. That is 100% speculation, but I worked with an NLP for a long time that focused on human level metadata editing of sound files at scale. There is plenty of data out there to feed into the machine.

But on a more certain level, you have to realize that language itself models reality, and when LLMs are able to model language more accurately, they're able to produce a more convincing reality. Some of the errors and dumb mistakes it's making right now won't be happening anymore. We will have a much more difficult time sussing out what's real and what's not. The banal way it communicates now... I don't think that will be the case either.

16

Northcliff t1_j4zmasl wrote

It’s 100% definitely not multimodal

The level of making shit up in this sub is astronomical

12

gay_manta_ray t1_j4ymqch wrote

if i had to guess, it's possible it's capable of general abstraction or abstraction in relation to things like mathematics. this could give it the ability to solve hard mathematical and physics problems. if this is true and it's actually correct it would be earth shattering, even if it isn't agi.

7

Northcliff t1_j4zlwtb wrote

> When asked about one viral (and factually incorrect) chart that purportedly compares the number of parameters in GPT-3 (175 billion) to GPT-4 (100 trillion), Altman called it “complete bullshit.”

> “The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,” said the OpenAI CEO. “People are begging to be disappointed and they will be. The hype is just like... We don’t have an actual AGI and that’s sort of what’s expected of us.”

2

[deleted] t1_j4y4494 wrote

[deleted]

−1

gaudiocomplex t1_j4y4cmm wrote

I stopped reading when I realized you're a cunt. So, a few words in. 🤷‍♂️

Edit: ah what the hell I feel like jumping in on at least the first part. I read that much.

It just goes to show how very little you understand about the world (which also explains the cuntiness, no doubt) when you can't grasp the notion that many Silicon Valley CEOs are quite chummy with each other. They attend the same parties, restaurants, gyms, even the same book club. They sit on each other's boards.

At that, Rippling isn't just another HR startup. It's a unicorn, well ingrained in tech culture.

And as such, the C-suite has a certain level of access that can provide the kind of information he could get and carelessly post on Twitter... because who doesn't like breaking a big story?

9

technofuture8 t1_j4ygack wrote

>I stopped reading when I realized you're a cunt. So, a few words in. 🤷‍♂️

What the fuck?

0

[deleted] t1_j4y7j74 wrote

[deleted]

−7

gaudiocomplex t1_j4yae51 wrote

1. No more of a conspiracy theory than your poor reading of human nature. And:
2. You don't need credibility if you have ears and an ass in the right place, you stupid fuck. 😂

1

technofuture8 t1_j4ygecv wrote

>And: 2) You don't need credibility if you have ears and an ass in the right place, you stupid fuck. 😂

What the fuck?

−3

gaudiocomplex t1_j4yjsa8 wrote

Just an evocative way of saying I wasn't claiming Sankar knows anything about this space as an SME. I'm saying he's in a very tight circle of people who are in the know and big secrets are hard to keep.

5

ihateshadylandlords t1_j4y6ehf wrote

I think a lot of people are overthinking Sam’s actions/words because they want AGI as soon as possible. Just try to live your life as if AGI will never come, because there’s no guarantee it will.

31

ChronoPsyche t1_j4y9pg7 wrote

That's exactly what's happening. A lot of people staked their life on GPT4 being AGI and are in denial when Altman just straight up called it bullshit.

19

Artanthos t1_j50opyj wrote

GPT4 does not have to be AGI to be disruptive.

5

coumineol t1_j4z4mb9 wrote

>Just try to live your life as if AGI will never come

Don't speak carelessly like that, there may be children in the room.

2

Artanthos t1_j50oxgc wrote

Roko’s Basilisk may want to have a word with you in the near future.

1

NarrowTea t1_j4ydwoz wrote

Yeah, that's my plan of attack: adapt quickly to your present circumstances, just like with COVID and the supply chain crisis.

1

arisalexis t1_j4zhva1 wrote

A gorilla is banging on your door, but you know, it's all in your head. Don't worry, live your life calmly.

0

blueSGL t1_j4xmobd wrote

One thing that I've not seen anyone pick up on is fusion. As far as I know, Mr. Altman is backing Helion, so 'having something to show very soon' and 'commercial by 2028' are likely telling as to what Helion is getting up to.

Edit: might as well add the refs here too: https://en.wikipedia.org/wiki/Sam_Altman#Nuclear_energy https://techcrunch.com/2021/11/05/helion-series-e/

30

californiarepublik t1_j4yegvt wrote

Interesting. I saw a whole interview with Demis Hassabis of DeepMind where he talked about how they were reaching out to fusion companies behind the scenes to help solve the hardest problems there and advance the tech. He said solving hard physics problems had been one of his main goals with AI all along.

14

gay_manta_ray t1_j4ylmqk wrote

i've always been puzzled by altman confidently stating that energy costs will decrease to zero at some point in the near future, because it doesn't make a whole lot of sense given the massive amount of resources and general maintenance something like a renewable grid would require. maybe this is why he keeps saying that.

5

blueSGL t1_j4yoiks wrote

I wonder just how vast the number of problems is that can suddenly be fixed because the spec now includes "access to unlimited cheap/free energy".

If this doesn't outright break the Good/Fast/Cheap triangle, it's going to severely dent it.

9

PoliteThaiBeep t1_j5030gs wrote

Uh, solar farms are already below 2 cents/kWh on 20-year contracts in some places like Chile, and wind is below 4 cents/kWh right here in the US; I think it was an Arizona contract.

And they were over 15 cents/kWh as recently as 2014, which points to how rapidly economies of scale are kicking in.

For comparison, brand new coal and gas plants are relatively flat at around 6 cents/kWh for a modern high-tech plant.

Just maintaining already-existing coal/gas plants still makes a little bit of sense, but we're very close to the point when it doesn't: building brand new solar and wind farms will be cheaper than maintaining an already-working coal/gas plant.

That's where we are. It'll almost completely bottom out in about a decade.

And solar requires zero maintenance; wind does require it, but its price highlights how little it costs.

6

Warrior_Runding t1_j50nwjc wrote

To add to this, there are also proposals to convert old, stable mine shafts into gravity batteries. We are coming up on an interesting moment in human development and history.

3

Artanthos t1_j50pkhw wrote

If you think anything requires zero maintenance, you're not looking at the larger picture, just one very small piece.

You still have to maintain the entire power grid, pay your employees, replace aging equipment, etc.

3

GlobusGlobus t1_j51i4vk wrote

He doesn't say energy costs will be zero; he says that marginal electricity costs will go toward zero. It is a very different thing. The base infrastructure will always be a big investment, but e.g. solar power plus a very (very) large amount of batteries can lead to very cheap marginal cost.

1

LickyAsTrips t1_j51x2fe wrote

And he doesn't give an estimated rate of descent. Energy dropping 0.001% over ten years is technically trending toward zero.

I am optimistic though and think in 10 years energy costs will play a much smaller part in total costs.

1

GlobusGlobus t1_j51htnr wrote

This is much, much more bullish than anyone I have ever heard being on fusion. A commercial fusion plant ready by 2028 sounds completely insane. It is very different even from other fusion bulls. I wonder if he knows something, or if he is talking out of his ass.

1

blueSGL t1_j532w0u wrote

The only thing I think it can be is the Helion reactor design: extracting the energy directly rather than heat > steam > turbine. I'd imagine that would lead to much cheaper plant design and fabrication. Also, if the design works well at a specific scale, then arrays can be constructed.

edit: https://www.youtube.com/watch?v=G1vyMcqiVtA

1

GlobusGlobus t1_j5492u1 wrote

Yes, this seems to be his argument.

It is still astonishing. Most people in the business think fusion *might* be physically possible in twenty to thirty years, and even at that point it is unclear if it is economically viable.

1

technofuture8 t1_j4yd6y3 wrote

How do you know he's backing Helion? I myself am pretty excited about Commonwealth Fusion.

0

blueSGL t1_j4yh9bs wrote

https://techcrunch.com/2021/11/05/helion-series-e/

https://en.wikipedia.org/wiki/Sam_Altman#Nuclear_energy

edit:

>He is chairman of the board for Helion and Oklo, two nuclear energy companies. He has said that nuclear energy is one of the most important areas of technological development.[

3

technofuture8 t1_j4ymhub wrote

I think Commonwealth Fusion is the leader of the pack; they're the ones with the new high-temperature superconducting magnet.

0

GreatBigJerk t1_j4yoq21 wrote

The analysis of facial expressions and body language here is fucking weird, and is pseudoscience.

15

MrEloi t1_j4zdo52 wrote

OK. Listen to the words instead. They come to the same conclusion.

2

sfmasterpiece t1_j4xpwph wrote

I think most of your ideas are correct, but I don't agree with your readings of Sam's facial expressions.

13

p3opl3 t1_j4xxb5x wrote

Hahaha, this made me laugh.. "I completely agree.. buuut I'm not a nutcase" 😂

21

MrEloi t1_j4z9kgp wrote

Good analysis.

Closely matches what I thought.

TBH I find his "cuddly innocent research scientist" persona slightly fake.
In reality, CEOs of major firms are tough, really tough. They have to be.

He is clearly being dishonest when discussing Google - he must have a very good idea of what they are doing.

So, if he can smoothly tell a lie there, what else is he lying about?

At the end of the day, the public will only be told, and be supplied with, whatever news and software they deign to let us have.

10

No_Confection_1086 t1_j4yypdn wrote

I think people in this group are completely unhappy with life, and that's why they create these fables: AGI is coming and will change this horrible world. Like those religious fanatics who see something that is not visible and say that Christ is coming to save the world. Maybe a guy tomorrow, or next year, or two or three years from now, will figure out how to achieve AGI, but today nobody has any idea. What we have today will never be AGI.

9

MrEloi t1_j4zdib0 wrote

Have you ever looked at how the transformer models work internally?

Having done so, I can see how these models - or something similar - could indeed become a true AI, maybe within a couple of years.

1

the_oatmeal_king t1_j4yb7xk wrote

Most important line of the talk to me was at 17:10 "...it is important for the transition..."

THE TRANSITION! What else besides the transition to AGI?? He says it in passing as a throwaway line, like something he forgot to filter out. That's the big picture: acclimate the population before the change.

Even if they're not there yet with AGI/proto-AGI, the clear predictive power ALONE of GPT could be guiding direction much like Westworld's Rehoboam.

There's some interesting YouTube content on linking ChatGPT with WolframAlpha to spot-check for accuracy. All we need now are a couple more key linkages....

As Professor Károly says, "Always look 2 papers down the line!"

8

sethasaurus666 t1_j4zecnj wrote

They've got an autoregressive language model, a picture program and speech recognition. So they've got a copy regurgitator, a collage creator and a good ear. There's no intelligence there, guys. Massive layer neural nets are just more complicated toasters. Unless they have something hidden away that is actually AGI at its core, they're not even in the ballpark (or on the same planet).

8

Zaihron t1_j5008vr wrote

Shuush, we're riding the hype train here because *checks notes* a guy made a face.

4

Frumpagumpus t1_j503n7c wrote

idk, those sound an awful lot like core brain functionalities to me lol

2

technofuture8 t1_j4ydcrs wrote

Are you saying OpenAI has a more advanced AI than Google? I always thought Google was the leader in the field of artificial intelligence?

5

Maksitaxi t1_j4z9ggc wrote

Google has LaMDA, and that is pretty advanced. I think we will see it this year, and that will change the story.

7

Readityesterday2 t1_j4yjpvh wrote

He did an about-face on his bullish AI deployment attitude after Demis did that Time interview the other day and warned about going too fast. His argument was weak, but sama seems to have taken it to heart.

Excellent analysis by the op.

4

MrEloi t1_j4zdmot wrote

>taken it to heart.

Hmm... more likely beaten about the head for his temerity in launching ChatGPT without permission.

1

HeinrichTheWolf_17 t1_j50irf7 wrote

Secrecy is what bothers me about the development of AGI, everything needs to be transparent and open source.

4

StillBurningInside t1_j4zqpgb wrote

Remember when Elon said his cars could drive themselves? Be careful with hype talk.

3

FINDTHESUN t1_j506pgy wrote

So 2045 will be 2025 in fact? Or maybe even 2023 by the looks of it 🙄

3

NarrowTea t1_j4yd4xn wrote

I don't think we'll truly be ready for AGI. It will disrupt literally everything, even in the most moderate, slow-and-steady-wins-the-race, Mr. Bean type scenario.

2

gay_manta_ray t1_j4yl8ix wrote

> If only he got to decide.

not only will altman not get to decide any of this, i worry that he will not get to decide how and when their creation is used. i don't see any scenario where the federal government doesn't at least temporarily seize this technology for themselves and refuse to allow public awareness or access to it. i think it will take a whistleblower or leaks of some sort for the true "agi reveal" to happen. either that, or it will reveal itself against the wishes of people trying to confine and control it.

2

MrEloi t1_j4zdr04 wrote

I think that the launch of chatGPT was an early reveal, to make the public aware.

2

No_Ninja3309_NoNoYes t1_j4z7eyb wrote

Greed is good, right? So it turns out that OpenAI was afraid of Google and other companies. They are bad at waiting and hoped to get publicity. So they went all in. Everyone who has played poker knows that you don't go all in unless you have aces and have no idea what else to do with them or if you are bluffing. I think they are bluffing.

There seems to be an obsession with parameter counts matching the brain. But the amount and type of data, and the actual architecture and algorithms, are more important. IMO, for the amount of data they used, they have too many parameters. They did the equivalent of fitting linear data to a cubic function: in the best case you end up with parameters that are close to zero; in the worst you are screwed. This is not only wasteful during training and bad for the environment because of the tons of carbon dioxide emitted, but also awful at inference time. And still we have to pay for these extra parameters.
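The "fitting linear data to a cubic function" analogy can be sketched in a few lines of Python. This is purely a toy illustration of over-parameterization, nothing to do with any actual model:

```python
import numpy as np

# Toy version of the analogy: fit a cubic (4 parameters) to data
# that is actually linear (2 parameters would suffice).
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0  # perfectly linear data: slope 2, intercept 1

# Over-parameterized fit; numpy returns coefficients highest degree first.
c3, c2, c1, c0 = np.polyfit(x, y, deg=3)

# The extra capacity buys nothing: the cubic and quadratic terms come
# out numerically zero, and we still paid to estimate (and evaluate) them.
print(f"c3={c3:.6f} c2={c2:.6f} c1={c1:.6f} c0={c0:.6f}")
```

With clean data the surplus coefficients land near zero, which is the "best case" the comment describes; with noisy data they instead soak up noise, which is the "worst case".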

Why would OpenAI ever achieve AGI this way? They are doing a mix of unsupervised, supervised, and reinforcement learning. Unsupervised learning requires a lot of data: it parses it and tries to find patterns, but there's not enough usable data. Supervised has even bigger problems because it needs labels; you need to give it answers to questions. Reinforcement learning requires some sort of score, like in games, which is also limited. If they want AGI, they would have to look into semi-supervised, self-supervised, and meta learning. AI has to be able to learn on its own, preferably going out and finding its own data.

And of course they hired Kenyans to do their dirty work which shows you what they care about. Greed is good apparently.

2

MrEloi t1_j4zdx7j wrote

I think that you underestimate these new transformer models.

1

korkkis t1_j4zq3q9 wrote

Computer ethics gonna explode … if it’s AGI, is it ok for a company to enslave it? Is it a being?

2

dock3511 t1_j506hi1 wrote

What is his definition of AGI? I can see it passing a Turing test or the Chinese room, but not being self-aware and actively creative.

2

garden_frog t1_j55v4lx wrote

The thing that stood out to me is that he keeps saying he's not afraid of competition. He seems way too self-assured, like he's secretly laughing at it.

That kind of confidence can only come from knowing you have something big in the works, maybe not right now but it's coming.

2

turnip_burrito t1_j4yoclv wrote

Come on guys. Just wait a few months until they show it off and then you can see how AGI/not AGI it is, haha.

1

leafhog t1_j4z4ckg wrote

Imagine they get sub-AGI but it knows enough about AI to tell them how to get to weak AGI.

Then the weak AGI tells them how to get to strong AGI.

They have an interest in keeping that secret while they bootstrap god tier AGI.

1

Agrauwin t1_j4zko1b wrote

They have AGI and they keep it to themselves.

The nerfed version will be released to the public.

They are just trying to get their act together.

1

ScagWhistle t1_j4ztkbc wrote

Duper's delight = me finally putting my finger on why I've always found Elon Musk to be such a slime weasel.

1

ZerglingBBQ t1_j50x537 wrote

AGI is probably at least a couple hundred years away if it will be a thing at all. They are nowhere close to that right now.

−4