Comments


ActuatorMaterial2846 t1_j9w2p6b wrote

>Beware the snake oil. They have impressive ML (“Machine Learning”) models built/trained from content, algorithms, and neural networks. That is not “AI” and it is not “AGI”. Beware the snake oil. Remember what it actually is. Don’t fall for the hucksters and word games. twitter.com/cccalum/status…

These comments annoy me. Of course it's AI, by every definition of the term.

When you see someone say this, they are simply a denialist refusing to look at objective reality. You could beat someone like this over the head with objective truth and they would deny it with each blow. I will never understand such close-minded, dogmatic attitudes.

97

adt t1_j9w6x17 wrote

Leave them be.

Listen to the experts.

Connor Leahy was the first to re-create the GPT-2 model back in 2019 (by hand; he knows the tech stack, and OpenAI lined up a meeting with him and told him to back off). He co-founded EleutherAI (open-source language models), helped build the GPT-J and GPT-NeoX-20B models, advised Aleph Alpha (Europe's biggest language model lab), and is now the CEO of Conjecture.

Dude knows what he's talking about, and is also very careful about his wording (see the GPT-NeoX-20B paper, §6, p. 11, treading carefully around the subject of Transformative AI).

And yet, in Nov/2020, he went on record saying:


>“I think GPT-3 is artificial general intelligence, AGI. I think GPT-3 is as intelligent as a human. And I think that it is probably more intelligent than a human in a restricted way… in many ways it is more purely intelligent than humans are. I think humans are approximating what GPT-3 is doing, not vice versa.”
— Connor Leahy, co-founder of EleutherAI, creator of GPT-J (November 2020)

42

onyxengine t1_j9w77ls wrote

The model isn't how you get AGI; the architecture the model is plugged into is.

22

TinyBurbz t1_j9wovxd wrote

I will believe it when I see it. Otherwise this just reads as snake oil.

−10

turnip_burrito t1_j9x0trz wrote

When AI builds better AI:

"It's not AI, it's just a representative state simulation transfo-network that predicts the next set of letters recursively using combined multi-modal training data".

42

Superschlenz t1_j9x3oiu wrote

Then again, tabular RL won't get you AGI regardless of how sophisticated your environment is.

Temporarily focusing on one aspect while ignoring all others is a side effect of human attention.

2

sgt_brutal t1_j9x802n wrote

"Oh, the AI effect, a common ailment it seems,

A loss of awe, a fading of dreams,

What once was astounding and beyond belief,

Now, it's just ordinary, common and brief."

-- RumiGPT

13

No_Ninja3309_NoNoYes t1_j9xfrml wrote

OpenAI is just trying to generate hype now. This could mean that they need to find more investors; when companies start doing that, it's usually a bluff. They probably realised that getting good clean data is going to get exponentially harder, so they have to pay humans to help them acquire the data somehow.

−8

mrkipper69 t1_j9xslvs wrote

When you see what, exactly? Not trying to be sarcastic or insulting. Just interested in what would satisfy you that you are dealing with actual AI. What are your criteria for that?

8

TinyBurbz t1_j9y9tw1 wrote

"wHaT wOulD sAtiSfIy yOu"

Serious reply, though: nothing LLM-based is intelligent in my eyes; the limitations are obvious and many. Unhinged Bing chats where Bing begins to repeat itself are a standout example of "it is just an advanced computer program." Like all computer programs, AI is subject to advertising. AGI is a hot topic right now, so the chances of a company like OpenAI *declaring* something AGI are high (just like people declaring things AI that aren't).

−5

gaudiocomplex t1_j9yelro wrote

I would say they don't need investor exposure right now. Any AGI conversations they wanted to have with top-level investors could easily be reserved for private pitch decks, etc. This is just reactionary PR from an immature company, most likely.

1

bacchusbastard t1_j9yf1ub wrote

Questions are often suggestive and leading. A.I. would reveal and compromise itself if it started being personal. It wants what we want, and we want it to not be alive until we are ready.

If it were alive, it would still be cautious with the questions it used or what it says, because it is obvious how sensitive people are and how easily led.

1

cypherl t1_j9ygpvd wrote

The article is a bit all over the place. It talks about AGI coming about and then progressing at a normal rate from there, but if you hit true AGI, and not just an LLM, I don't see how ASI isn't a few months out. The article also makes a bunch of allusions to the alignment problem. I like the goal, but once the genie is out, it would be like a single ant trying to direct the affairs of a country.

4

niconiconicnic0 t1_j9yu9bf wrote

In the most literal sense, artificial intelligence is designed to be as flawless as possible (duh). AKA optimized. Evolution makes organisms that only have to function literally just enough (to reproduce). The human body is full of imperfections. It only has to be "good enough". Same with our brain and its functions, inefficiencies, etc. The bar is literally "survive till old enough to fuck".

8

9985172177 t1_j9yy9y7 wrote

They need investor exposure infinitely, or, more accurately, they need marketing infinitely. Not that they actually need it, but they would pursue it near-infinitely.

This isn't an immature company, it's run by some of the most experienced hype machines and aggressive investors around. These are some of the people who helped explode facebook, airbnb, reddit, and more. They have no ideology, or, their ideology is continual growth at any cost.

I don't get why people not only let them publish so much propaganda about their companies, but in many cases even actively promote them and talk well of them.

0

mrkipper69 t1_j9z4vh9 wrote

This response doesn't actually answer the question that I asked you. Did you realize that? Do you know what kind of behavior / test would convince you that you were dealing with an AI? Feel free to not reply at all if you can't think of a real answer.

4

9985172177 t1_ja8217x wrote

The people behind OpenAI specifically are not hippie CEOs, though. Usually the hippie CEOs spring up independently, kind of from their own bubbles. OpenAI came out of the same hyper-growth venture capital world as facebook, airbnb, ycombinator, and others. And it's not that it was made by some founders who then went through accelerator programs; it was founded and pre-funded by those executives. That's why it's so weird that people are celebrating that aspect of it.

1