Submitted by alfredo70000 t3_11b45w5 in singularity
Comments
blueSGL t1_j9w5qm1 wrote
it's the AI effect writ large
> "AI is anything that has not been done yet."
turnip_burrito t1_j9x0trz wrote
When AI builds better AI:
"It's not AI, it's just a representative state simulation transfo-network that predicts the next set of letters recursively using combined multi-modal training data".
PM_ME_A_STEAM_GIFT t1_j9y0ixj wrote
When AI explores the solar system:
"It's not AI. It's just an optimizer trying to optimize its chances at survival by searching for resources and spreading to other planets."
solidwhetstone t1_ja07ejm wrote
AI of the gaps
CharlisonX t1_ja2uwqc wrote
When AI reaches singularity:
"It's not AI."
"It just isn't, okay?"
sgt_brutal t1_j9x802n wrote
"Oh, the AI effect, a common ailment it seems,
A loss of awe, a fading of dreams,
What once was astounding and beyond belief,
Now, it's just ordinary, common and brief."
-- RumiGPT
Spire_Citron t1_j9y1ejm wrote
It has magic vibes to it. Like how if you understand how "magic" functions, it's just science, not magic.
adt t1_j9w6x17 wrote
Leave them be.
Listen to the experts.
Connor Leahy was the first to re-create the GPT-2 model back in 2019 (by hand; he knows the tech stack, and OpenAI lined up a meeting with him and told him to back off), co-founded EleutherAI (open-source language models), helped build the GPT-J and GPT-NeoX-20B models, advised Aleph Alpha (Europe's biggest language model lab), and is now the CEO of Conjecture.
Dude knows what he's talking about, and is also very careful about his wording (see the NeoX-20B paper, section 6, p. 11, treading carefully around the subject of Transformative AI).
And yet, in Nov/2020, he went on record saying:
>“I think GPT-3 is artificial general intelligence, AGI. I think GPT-3 is as intelligent as a human. And I think that it is probably more intelligent than a human in a restricted way… in many ways it is more purely intelligent than humans are. I think humans are approximating what GPT-3 is doing, not vice versa.”
— Connor Leahy, co-founder of EleutherAI, creator of GPT-J (November 2020)
sideways t1_j9xy2qc wrote
That's... really profound.
I had never considered the possibility that our version of intelligence might be the flawed, impure one.
niconiconicnic0 t1_j9yu9bf wrote
In the most literal sense, artificial intelligence is designed to be as flawless as possible (duh). AKA optimized. Evolution makes organisms that only have to function literally just enough (to reproduce). The human body is full of imperfections. It only has to be "good enough". Same with our brain and its functions, inefficiencies, etc. The bar is literally "survive till old enough to fuck".
WarAndGeese t1_j9ywa5h wrote
Obviously our version of intelligence is flawed and impure, very much so.
jamesj t1_ja04tzq wrote
Though I agree, I'm not sure it was obvious before having some other forms of intelligence to compare to.
Destiny_Knight t1_j9wp6bs wrote
Yup.
YobaiYamete t1_j9wskk4 wrote
People keep gatekeeping and moving the goalposts for what "real" AI is; there's no way AI can ever catch up to an ever-sliding goalpost
firechaser9983 t1_j9xp1xc wrote
Agreed. The same folks said AI could never do art, never write stories, never do [insert task here]. Guess who is fucking always wrong?
AvgAIbot t1_ja1wl2q wrote
FUCK EM
Good-AI t1_j9ygypp wrote
You know how ChatGPT can spew BS in a totally confident manner? There's a person doing the same. That person needs some time to themselves to clarify their own definition of AI or AGI, and then comment.
ertgbnm t1_j9ykyso wrote
Let the idiots move the goal posts. Prove them wrong by building some amazing stuff.
[deleted] t1_j9wdd2i wrote
[deleted]
PassivelyEloped t1_j9y02zz wrote
Until it can start asking the right questions rather than just give answers, it's not AGI.
Computers are useless because they can only answer questions.
[deleted] t1_j9y8f8p wrote
[removed]
bacchusbastard t1_j9yf1ub wrote
Questions are often suggestive and leading. AI would reveal and compromise itself if it started being personal. It wants what we want, and we want it to not be alive until we are ready.
If it were alive, it would still be cautious with the questions it asked and with what it says, because it is obvious how sensitive people are and how easily they are led.
onyxengine t1_j9w77ls wrote
The model isn't how you get AGI; the architecture the model is plugged into is.
Superschlenz t1_j9x3oiu wrote
Although, tabular RL won't get you AGI regardless of how sophisticated your environment is.
Temporarily focusing on one aspect while ignoring all others is a side effect of human attention.
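(Editor's note: the "tabular" limitation mentioned above can be made concrete with a minimal sketch. The toy chain environment, hyperparameters, and variable names below are illustrative assumptions, not anything from the thread: tabular Q-learning stores one table entry per (state, action) pair, so it cannot generalize to states it has never visited, however rich the environment.)

```python
import random
from collections import defaultdict

# Toy 1-D chain: states 0..4, start at 0, reward 1 for reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]            # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = defaultdict(float)        # the "table": (state, action) -> value

def step(s, a):
    """Move within the chain; reward only at the rightmost state."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):          # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection over the table entries.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        # Standard Q-learning update on the single table cell.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state,
# but Q holds entries only for the 5 states it visited: no generalization.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The table solves the toy chain easily, but it is just a lookup: a sixth state, or any unseen observation, has no entry and no estimate, which is the scaling problem the comment alludes to.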
PurpedSavage t1_j9xrdle wrote
Thanks chatgpt
cypherl t1_j9ygpvd wrote
The article is a bit all over the place. It talks about AGI arriving and then progressing at a normal rate from there. If you hit true AGI, and not just an LLM, I don't see how ASI isn't a few months out. The article also makes a bunch of allusions to the alignment problem. I like the goal, but once the genie is out, it would be like a single ant trying to direct the affairs of a country.
No_Ninja3309_NoNoYes t1_j9xfrml wrote
OpenAI is just trying to generate hype now. This could mean they need to find more investors. When companies start doing that, it's usually a bluff. They probably realised that getting good clean data is going to get exponentially harder, so they have to pay humans to help them acquire the data somehow.
Puzzleheaded_Pop_743 t1_j9yn3fa wrote
I disagree with this theory. Microsoft has put so much money into OpenAI that I struggle to see how they would need more money now.
9985172177 t1_j9yxd4s wrote
There is no end to how much money they need. How often have the richest people in the world said they don't need more? They get into that position by never having enough.
Class-Concious7785 t1_ja03mu5 wrote
> How often have the richest people in the world said they don't need more? They get into that position by never having enough.
Except that is greed, rather than a genuine need for more money
9985172177 t1_ja2nelc wrote
Yes, that's the principle that companies like openai run on though. For them there is no such thing as a "struggle to see how they would need more".
Puzzleheaded_Pop_743 t1_ja2opqx wrote
There is such a thing as hippie CEOs. Not all heads of companies are greedy. The culture and values of the people working at a company shape that company's values.
9985172177 t1_ja8217x wrote
The people behind OpenAI specifically are not hippie CEOs, though. Usually the hippie CEOs spring up independently, kind of from their own bubbles. OpenAI came out of the same hyper-growth venture capital world as Facebook, Airbnb, Y Combinator, and others. And it's not that it was made by some founders who then went through accelerator programs; it was founded and pre-funded by those executives. That's why it's so weird that people are celebrating that aspect of it.
gaudiocomplex t1_j9yelro wrote
I would say they don't need investor exposure right now. Any AGI conversations they wanted to have with top level investors could easily be reserved to private pitch decks, etc. This is just reactionary PR of an immature company, most likely.
9985172177 t1_j9yy9y7 wrote
They need investor exposure infinitely, or more accurately they need marketing infinitely. Not that they actually need it, but they would pursue it near-infinitely.
This isn't an immature company, it's run by some of the most experienced hype machines and aggressive investors around. These are some of the people who helped explode facebook, airbnb, reddit, and more. They have no ideology, or, their ideology is continual growth at any cost.
I don't get why people not only let them publish so much propaganda about their companies, but in many cases even actively promote them and talk well of them.
9985172177 t1_j9ywyhu wrote
It is and they are very good at it. It makes it tough to try to judge how close the actual dangers are, and if and to what extent they are there.
TinyBurbz t1_j9wovxd wrote
I will believe it when I see it. Otherwise this just reads as snake oil.
mrkipper69 t1_j9xslvs wrote
When you see what, exactly? Not trying to be sarcastic or insulting. Just interested in what would satisfy you that you are dealing with actual AI. What is your criteria for that?
TinyBurbz t1_j9y9tw1 wrote
"wHaT wOulD sAtiSfIy yOu"
Serious reply though: nothing LLM-based is intelligent in my eyes; the limitations are obvious and many. Unhinged Bing chats where Bing begins to repeat itself are a standout example of "it is just an advanced computer program." Like all computer programs, AI is subject to advertising. AGI is a hot topic right now, so the chances of a company like OpenAI *declaring* something AGI are high (just like people declaring things AI that aren't).
mrkipper69 t1_j9z4vh9 wrote
This response doesn't actually answer the question that I asked you. Did you realize that? Do you know what kind of behavior / test would convince you that you were dealing with an AI? Feel free to not reply at all if you can't think of a real answer.
ActuatorMaterial2846 t1_j9w2p6b wrote
>Beware the snake oil. They have impressive ML (“Machine Learning”) models built/trained from content, algorithms, and neural networks. That is not “AI” and it is not “AGI”. Beware the snake oil. Remember what it actually is. Don’t fall for the hucksters and word games. twitter.com/cccalum/status…
These comments annoy me. Of course it's AI in every definition of the term.
When you see someone say this, they are simply a denialist refusing to look at objective reality. You could beat someone like this over the head with objective truth and they would deny it with each blow. I will never understand such closed-minded, dogmatic attitudes.