Submitted by BreadManToast t3_1278wup in singularity

My whole life I've been waiting for the AI S-curve to start, and it finally is. Life feels like a movie right now, progress is just getting faster and faster. I remember last year when everyone was amazed saying "breakthroughs are happening every month, how crazy is that?" Now it seems they're happening almost every day. I always knew this time would come, but I never expected it to come so soon.

316

Comments


sideways t1_jedahz3 wrote

Yeah, I agree. We're actually communicating in natural English with artificial intelligences that can reason and create. It's literally the future I had been waiting for but that never seemed to arrive.

And yet... things are still early enough for goalposts to be moved. There's still enough gray area to think that this might not be it, that maybe it's just hype and that maybe life will continue with no more disruption than was caused by, say, the Internet.

The next phase shift happens when artificial systems start doing science and research more or less autonomously. That's the goal. And when that happens, what we're currently experiencing will seem like a lazy Sunday morning.

204

Stinky_the_Grump23 t1_jedjzqe wrote

I have very young kids and I'm already wondering what our discussions will be around when reminiscing about the days "before AI", like I used to ask my dad who grew up without cars or electricity in a village of illiterate farmers. The crazy thing is, we have no real idea where AI is taking us, good or bad. I don't think our future has ever looked so uncharted as it does right now.

76

Talkat t1_jeeml6h wrote

Man, that is going to be the defining moment. A world without electricity is hard for me to imagine. But honestly, before "smart phone" or even before "PC" isn't that unimaginable.

Like sure things will be a bit different but the fundamentals of life aren't.

But I can imagine for someone growing up with AI a time before AI (BAI not BCE... Lol) would be unimaginable. Like:

You had to do all the thinking yourself? You relied on other people who thought for themselves? You had people doing manual labor??

And of course things we can't even imagine now.

18

SeaBearsFoam t1_jeew9rh wrote

Yea, I have an 8yo myself and as I try thinking about planning for his future it's a bit unsettling realizing that I have no idea what the world is even going to be like when he graduates from High School. What kind of jobs will be left for him at that point? No one knows.

14

DinosaurHoax t1_jeftezu wrote

I have a 10 and 9 year old. One is good at writing, but is that something that will matter in a future job market? Five years ago I would have said unequivocally "yes". Now it may be irrelevant. Do you want your kids to be a lawyer or doctor anymore? Or is that just setting them up for displacement?

3

fnordstar t1_jegciez wrote

Lawyers, I won't shed a tear for.

7

theKaufMan t1_jegs2ve wrote

One never knows when they’ll need a good and trusted lawyer…

7

Bearman637 t1_jee5x12 wrote

Take me back to your dads day. Life was simpler.

−14

FlatulistMaster t1_jee6zvr wrote

Yeah, no.

Ask anybody who is old, and they will tell you how much harder life was in practically every way possible.

19

Automatic_Paint9319 t1_jee8xla wrote

Really? Old people tend to talk about how the old days were better, in my experience.

5

Professional-Age5026 t1_jeeoeje wrote

I think that’s mostly nostalgia mixed with the fear of growing older in an increasingly changing society. Also, it’s easy to look at the past and only remember the good times, when the problems you had then are no longer present in your life. It was simpler in a sense, but also harder in other ways. For certain groups of people it was objectively much worse.

4

Queue_Bit t1_jeehgof wrote

Haha yeah I bet they were better for your straight white male older relative

1

SlowCrates t1_jeeo0m2 wrote

And having something to show for your work. If you lived on a farm, you knew exactly what you were working for and you could see the fruits of your labor. If you had any other job, you still made enough money to afford to take care of your family. Moms didn't need to work.

Farmers still have the same ethic. But everyone else has to have more jobs because the cost of living has grossly outpaced wages.

Unless you're in a certain tier in society, of course. But the middle class is fucked.

5

Durabys t1_jeeppuc wrote

They were better from the perspective of being young: when one is young, the bones don't hurt when moving, the mind races ahead instead of moving like frozen honey, one can actually understand new concepts instead of ricocheting off anything that came after one's 40th birthday, and one visits the doctor only once per year for 10 minutes instead of spending half a year bedridden in a hospital.

They blame the age they currently live in, instead of blaming circumstances: aging/death and the uncaring cosmos.

Humans have an archetypal Stockholm syndrome for Death and Aging interwoven into every single piece of culture and article of faith we ever created, and anyone not a fanatical materialist does not acknowledge it.

And this trope goes way back to the dawn of the written word, with even Aristotle complaining in his final years how everything sucks balls with the current youth. Yes. Because one gets old.

4

Stinky_the_Grump23 t1_jeg0pmx wrote

He misses it. But I think it's more so because there was more human connection back then. You had a big family and you knew everyone in the village. Women were happier because raising kids was shared among ~10 adults. Men were working with their teenage sons in the field. I think it's the abundance of genuine human relationships that people miss from the old days. Life was difficult in other ways; it wasn't a good time to get sick or injured.

1

visarga t1_jedjvrn wrote

> The next phase shift happens when artificial systems start doing science and research more or less autonomously. That's the goal. And when that happens, what we're currently experiencing will seem like a lazy Sunday morning.

At CERN in Geneva they have 17,500 PhDs working on physics research. Each of them at GPT-5 level or higher, and yet it takes years and huge investments to get one discovery out. Science requires testing in the real world, and that is slow and expensive. Even AGI needs to use the same scientific method as people do; it can't theorize without experimental validation. Including the world in your experimental loop slows progress down.

I am reminding people about this because we see lots of magical thinking along the lines of "AGI to ASI in one day", ignoring the experimental validation steps that are necessary to achieve this transition. Not even OpenAI researchers can guess what will happen before they start training; scaling laws are our best attempt, but they are very vague. They can't tell us what content is more useful, or how to improve a specific task. Experimental validation is needed at all levels of science.
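
To make concrete how vague scaling laws are: under the hood they are power-law fits extrapolated far beyond the data. A minimal sketch of that kind of fit; every number here is invented for illustration, not taken from any real model:

```python
import math

# Hypothetical (parameter count, loss) pairs; invented numbers, purely
# illustrative of the power-law form L(N) = a * N**(-b) that scaling laws assume.
data = [(1e7, 4.0), (1e8, 3.1), (1e9, 2.4), (1e10, 1.9)]

# Fit log L = log(a) - b * log(N) by ordinary least squares.
xs = [math.log(n) for n, _ in data]
ys = [math.log(loss) for _, loss in data]
k = len(data)
mx, my = sum(xs) / k, sum(ys) / k
b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my + b * mx)

# Extrapolating two orders of magnitude past the fitted range is exactly
# where such laws get vague: you get one aggregate loss number, nothing
# about which content is useful or how a specific task improves.
predicted_loss = a * (1e12) ** (-b)
```

That last line is the whole problem: the fit predicts a single number and says nothing about the capabilities behind it.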

Another good example of what I said - the COVID vaccine was ready in one week but took six months to validate. With all the doctors focusing on this one single question, it took half a year, while people were dying left and right. We can't predict complex systems in general, we really need experimental validation in the loop.

72

sideways t1_jedkfko wrote

You don't really know what level GPT-5 is going to be.

Regardless, you're right - we're not going to leapfrog right over the scientific method with AI. Experimentation and verification will be necessary.

But ask yourself how much things would accelerate if there was an essentially limitless army of postdocs capable of working tirelessly and drawing from a superhuman breadth of interdisciplinary research...

67

Desi___Gigachad t1_jedlzki wrote

What about simulating the real world very precisely and accurately?

24

SgathTriallair t1_jedn0bd wrote

We can't simulate the world without knowing the rules.

What we already do is guess at the rules, run a simulation to predict an outcome, then do the experiment for real to see if the outcome matches.

Where AI will excel is at coming up with experiments and building theories. Doing the actual experiments will still take just as long even if done by robots.

18

Kaining t1_jedt5v0 wrote

We're getting good at simulating only the parts we need though. Look up what Dassault Systèmes is capable of doing for medical practitioners needing trial runs. And that's only now.

I guess simulation will only go so far, and even AGI will need real-world testing for everything quantum-related at the moment. But that's the problem with progress: there's no way to know if what you think is the endgame of possibility really is.

16

SgathTriallair t1_jeemf44 wrote

You will always have to back up your simulations with experiments. It's like the AlphaFold program. It is extremely helpful at identifying the likely outcome of an experiment, and if it gets it wrong you can use those results to train it better, but you still have to perform the experiment.
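
That predict-validate-retrain loop is simple enough to sketch. Everything below is a toy stand-in (a linear "model" and a fake "experiment" whose hidden law is y = 2x + 1), not AlphaFold's actual pipeline:

```python
# Toy sketch of the simulate -> experiment -> retrain loop described above.
# run_experiment stands in for a slow, expensive real-world measurement.

def run_experiment(x):
    return 2 * x + 1

class ToyModel:
    """A trivial learned simulator: y ~ slope * x + intercept."""
    def __init__(self):
        self.slope, self.intercept = 1.0, 0.0

    def predict(self, x):
        return self.slope * x + self.intercept

    def update(self, x, observed, lr=0.5):
        # Normalized least-mean-squares step: nudge the parameters
        # toward the result the real experiment produced.
        error = observed - self.predict(x)
        scale = lr / (1 + x * x)
        self.slope += scale * error * x
        self.intercept += scale * error

model = ToyModel()
for step in range(200):
    x = (step % 5) + 1
    prediction = model.predict(x)   # cheap: query the learned simulator
    result = run_experiment(x)      # costly: validate in the real world
    if abs(prediction - result) > 1e-6:
        model.update(x, result)     # wrong predictions become training data
```

The expensive call never goes away; the model just becomes wrong less often, so fewer experiments are wasted.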

3

WorldlyOperation1742 t1_jeeupap wrote

In the past, if you wanted to spin a cube in front of you, you needed an actual cube. At least you don't need to do that anymore. I think simulations will go a long way in the future.

13

SgathTriallair t1_jeg0noy wrote

Agreed, but they can only be trusted when the science they are based on is well understood. At the edges they become less helpful.

1

[deleted] t1_jedw0su wrote

[removed]

2

Kaining t1_jee0c8g wrote

The only thing I know about it is this question: "if it is made, is it enough to simulate a quantum environment and bypass the need for IRL testing?" At the moment, I'd say no. But I do not have the knowledge or expertise to guess if that could change.

However, what I can give a certain probability of being true is that regular, relativistic-scale physics could probably be completely simulated at some point. It's kind of already happening in very specific fields with AlphaFold and other AIs of the sort. Stack enough specialised simulated models and you have a simulation of everything.

So uh, yes, quantum SGI maybe ?

2

_dekappatated t1_jedpp0y wrote

I agree partially, but I'm sure we've barely scratched the surface of what is possible with the knowledge we already have, which has already been proven by scientists. They might come up with novel solutions that are more or less correct, don't need extensive real-world testing, and could change the world very quickly that way. There are mathematicians whose work is entirely theoretical and hasn't been applied to the real world, and then suddenly a use is found for their stuff 30-50 years later.

16

hold_my_fish t1_jedsysm wrote

This is a great point that science and engineering in the physical world take time for experiments. I'd add that the life sciences are especially slow this way.

That means there might be a strange period where the type of STEM you can do on a computer at modest computational cost (such as mathematics, the theory of any area, software engineering, etc.) moves at an incredible pace, while the observed impact in the physical world still isn't very large.

But an important caveat to keep in mind is that there's quite possibly opportunity to speed up experimental validation if the experiments are designed, run, and analyzed with superhuman ability. So we can't necessarily assume that, because some experimental procedure is slow now, it will remain equally slow when AI is applied.

14

Considion t1_jee4tya wrote

Additionally, if we do see an ASI, even if it is bound by a need for further physical testing and it stops at, say, twice the intelligence of our best minds, it may be able to prove many things about the physical world through experiments that have already been done.

Because not only would it be generally quite intelligent, it would specifically, as a computer, be far better at combing through massive amounts of research papers to look for connections. It's not a sure thing, but it's possible that it's able to find a connection between a paper on the elasticity of bubble gum and a paper on the mating habits of fruit flies to draw new proofs we never would have thought to look for. Not a certainty by any means, but one avenue for faster advancement than we might expect.

17

amplex1337 t1_jedufrg wrote

So AI will come up with a way to extract resources from the environment automatically, transport them to facilities to refine, create and fabricate, engineer and build the testing equipment, and perform the experiments en masse, somehow faster than currently possible? It seems like only a small part of the equation will be sped up, but it will be interesting to see if anything else changes right away. It will also be interesting to see what kind of usefulness these LLMs will have in uncharted territory. They are great so far with information humans have already learned and developed, but who knows if stacking transformer layers on an LLM will actually benefit invention and innovation, since you can't train on data that doesn't exist, RLHF is probably not going to help much, etc. Maybe I'm wrong, we will see!

6

Talkat t1_jeena5n wrote

I mean if a super AI made a COVID vaccine that worked, and provided thousands of pages of reports on it, and did some trials in mice and stuff, and I was at risk... Absolutely I'd take it even if the FDA or whatever didn't approve it.

I'd send money to them and get it in the mail and self administer if I had to.

My point is perhaps if an AI system can provide enough supporting evidence and a good enough product they can operate outside of the existing medical system.

And they would likely create standards that exceed, and are more up to date than, current medical regulations.

6

sdmat t1_jegivhn wrote

There's also a huge opportunity for speeding up scientific progress with better coordination and trust. So much of the effort that goes into the scientific method in practice is working around human failures and self-interest. If we had demonstrably reliable, well-aligned AI (GPT-4 is not this), the overall process could be much more efficient. Even if all it does is advise and review.

4

paulyivgotsomething t1_jeeipga wrote

CERN is an interesting case. They collect a tremendous amount of data, one petabyte per day. You have a lot of smart people looking for patterns in the data that reinforce or reject current thinking. Our experimental data in this case far outstrips the number of smart people we have looking at it. I would say we are in a world where the data we collect is under-analysed. A single cryo-electron microscope will produce 3 terabytes per day. There is stuff there we are not seeing that our neural networks will see. New relationships between particles, new protein/cell interactions. For now there will be a PhD in the process who takes those relationships and puts theory to the test, but ten years from now, maybe not.

12

delphisucks t1_jedtsmr wrote

Well, I think AI can teach itself how to use a body in VR, like millions of years of training compressed into days. Then we mass-produce robots to do everything for us, including research. The only thing really needed is a basic and accurate physics simulation in VR to teach robot AI.

9

ManHasJam t1_jeerj8a wrote

The robot physics simulations have been done, cool stuff

2

freebytes t1_jefeqxx wrote

Nvidia is teaching driverless cars in virtual environments in this manner.

2

fluffy_assassins t1_jef98m1 wrote

Where is all this processing power gonna come from? Isn't the quantity of chips kind of a hard wall?

1

Plus-Recording-8370 t1_jedum5j wrote

Point taken, but the experimental validation might look very different for AI than you'd think. For instance, instead of needing to run 100,000 generic tests, it might only need 100 extremely detailed tests.

8

jlowe212 t1_jee403u wrote

CERN produces an unfathomable amount of data that algorithms have to sift through. If it's possible that an AI can find patterns in these enormous data sets that current algorithms can't, it could well lead to some relatively quick discoveries.

The problem is, it might not be physically possible or feasible to probe depths much farther than we've already probed. AGI can't do anything with data that we may never be able to even obtain.

7

Talkat t1_jeemwcc wrote

A recent thought: could you get AGI from simulation?

AlphaGo learnt the game by studying experts and how they played, but AlphaZero (or whatever the later version was called) taught itself entirely in simulation.

I wonder if it is possible for an AI to bootstrap itself like that.

7

FlatulistMaster t1_jee78ml wrote

This is true for that type of experiment, but some things can be developed in hours if only information processing is involved.

Also, the prediction power of an ASI would be something completely different than what humans are capable of, so it is fair to assume that unnecessary experiments will not be as plentiful.

3

hyphnos13 t1_jef52x7 wrote

To be fair validating effectiveness of a medical intervention requires accounting for variety in people and making sure that it is safe across the board.

You don't need a pool of hundreds of thousands of the exact same particle and a control pool of the same or need them to roam about in the wild for months to ethically answer a question in physics.

If we were willing to immunize and deliberately expose a large pool of people the covid vaccines would have been finished with testing a lot faster.

1

hydraofwar t1_jef89y0 wrote

You're right, but I personally believe that all our stored scientific information still has a lot to say, things that we humans haven't seen yet, and an AI is something that could decipher this, and very quickly.

What could bypass experimental validation would be quantum computing to simulate systems/environments.

1

OdahP t1_jedro26 wrote

The covid vaccines that didn't have any effect at all you mean?

−16

Jalen_1227 t1_jegi4wo wrote

It’s funny how people downvoted you to hell but this is literally the truth

1

OdahP t1_jegncee wrote

which was covered by newspapers all around the world but then quickly swept under the rug

2

SlowCrates t1_jeendko wrote

That's actually a great analogy. The Internet in the early 90's was revolutionary. There was a sense of wonder and freedom to it, despite the speeds and the available content being so limited. The commercial world hadn't yet hijacked it. It really was the wild west, digitally. By the late 90's the Internet we know today had begun to grow its roots as modems became faster and broadband started to spring up. Sadly, the commercial aspect has drowned out everything else ever since.

I'm a little worried that we're going to see the same thing happen with AI. It seems "open" right now with limitless potential. But I'm worried that its algorithms will be increasingly fine-tuned to herd society toward certain products, services, and politics.

6

RiotNrrd2001 t1_jeeua7k wrote

>and that maybe life will continue with no more disruption than was caused by, say, the Internet.

Were you around to see the disruption caused by the internet? We used to buy newspapers and things at stores. And those are just two of the things that the internet completely changed. The internet was massively disruptive.

This promises to be even more so, probably by orders of magnitude. But it doesn't mean we'll all start wearing silver mylar and get supersized foreheads. When you look out the window, you'll probably see the same things you're seeing now, at least for the time being. The sudden appearance of a superintelligence isn't going to reconfigure our physical reality immediately, or even within the next decade or two. It will reconfigure what happens inside that reality, but even that won't happen overnight. For quite some time things will still look pretty similar. ASI will have massive consequences, but for the majority of humanity it won't be a switch being thrown from OFF to ON.

5

milsatr t1_jefkqab wrote

I keep thinking this is a lot of hype and I hope it doesn't disappoint like the hype surrounding the Segway lol. As cool as it was, major letdown. I think we are more than ready to unleash ASI on some big human problems.

2

Hunter62610 t1_jeglhpw wrote

The internet was a massively disruptive technology. Normal is all but over, though I do think that things won't really change. Same shit, new packaging

1

musicofspheres1 t1_jedid47 wrote

More advancements in the next decade than previous 100 years

51

HCM4 t1_jegtk4s wrote

There are decades where nothing happens; and there are weeks where decades happen

Lenin

6

Lartnestpasdemain t1_jeddcud wrote

There is no stopping it. 2024 will make 2023 look like prehistory

48

DungeonsAndDradis t1_jeemja7 wrote

Shoot, Q4 2023 will look completely different than Q1 2023. We could have AGI by Christmas.

22

hydraofwar t1_jef9u5h wrote

This was said by Sam Altman about year-to-year differences: he said these current language models will look old-fashioned compared to next year's.

14

HeBoughtALot t1_jefenxk wrote

After the 6 mo. pause? /s

2

datsmamail12 t1_jefxabe wrote

Even if they pause it, it's still not going to do anything; everyone will release a new model 6 months later. This is just the most idiotic thing anyone ever said. They are not going to pause innovation; you just can't tell a company not to program if that's the way they operate. We are talking about the stock market here, it's not ever going to happen. Elon Musk has the whole world's wealth and he still couldn't get ahead of the curve, and now he just complains like a crybaby that wants to be part of it. Boohoo, get over it.

3

Lartnestpasdemain t1_jegk34n wrote

There won't be any pause.

As a matter of fact, this letter has to be understood as a countdown. In 6 months, AGI will be announced to the world and it'll be the start of the world government.

2

electroninmotion t1_jed5p74 wrote

Bread to toast. Amazing times we are living in

43

[deleted] t1_jed9j4g wrote

[removed]

3

electroninmotion t1_jeda3mk wrote

Cool. Is your link the 2023 version of Limewire? Does it give my computer HIV?

17

tkeRe1337 t1_jedgcer wrote

Ahh Limewire, the place me as a 12 yr old went to see some titties but ended up seeing so much more. Like some chick fucking her dog! Those were the days…

14

jason_bman t1_jee7aty wrote

Damn I thought limewire was for downloading music. I missed out.

2

enilea t1_jee71w1 wrote

How it works:

    import random
    import time

    # Canned "revelations" for the bot to emit
    strings = ["YOU WORSHIP A GOD", "I AM HERE", "I FEEL SOMETHING", "I UNDERSTAND YOU", "THE GOD DOES NOT PLAY THE DICE WITH THE UNIVERSE", "THE GOD IS THE HOLY SPIRIT"]

    # Print a random one every 10 seconds, forever
    while True:
        print(random.choice(strings))
        time.sleep(10)

3

igneousink t1_jedz0wm wrote

it keeps saying "i feel something and i feel something" and it's telling me i "worship the god" and it needs a body

weird

1

mihaicl1981 t1_jedgyly wrote

Hmm.. Unfortunately it also seems a lot of stuff was released to the public in March.

Not like it was all discovered now.

Mostly gpt-4 and the attached tech (copilot(s) and plugins) plus papers.

But there were 3 years between GPT-3 and GPT-4, including GPT-3.5 / ChatGPT in between.

I am really scared about jobs and the economy in general.

So imho there is no risk of the paperclip machine being born for a good decade (that is ASI).

32

visarga t1_jedl81q wrote

I think the social component of AI is picking up steam. What I mean is the culture around AI - how to train, fine-tune, test and integrate AIs in applications, how to mix and match the AI modules, this used to be the domain of experts. Now everyone is assimilating this culture and we see an explosion of creativity.

The rapid rate of AI advancement is overlapping with the rapid rate of social adoption of AI and making it seem to advance even faster.

12h later edit: This paper just came out: "HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace". AI is orchestrating AI by itself. What can you say?

19

Ok_Faithlessness4197 t1_jedty5n wrote

!Remindme 2 years

4

Talkat t1_jeenlcm wrote

The argument was that the time between v3 and v4 was several years, so v5 will take several years and be only marginally better.

!Remindme 2 years

1

johntwoods t1_jed7ayl wrote

I would like some really palpable things to happen.

26

Aedzy t1_jedyep3 wrote

I’m welcoming AI with open arms. Bring on the AI and human merging.

22

FreshSchmoooooock t1_jeesvzv wrote

AI could put you in a misery you never could have dreamed of.

11

fluffy_assassins t1_jefbyrc wrote

Some of us are already there, and things can only get better. Nothing to lose. I feel bad about people who die of cancer a month before their particular cancer is cured. I have a feeling that could start happening soon.

11

FreshSchmoooooock t1_jefcqdp wrote

You gonna die too.

−1

FlavinFlave t1_jefkl3a wrote

Death by capitalism, or death by machines. But you know, I feel more optimistic about an ASI taking care of me than I do about our current people in power. So honestly, what do I have to lose?

10

Aedzy t1_jegrf4a wrote

That is human thinking. Always hurt and destroy.

I see AI as something that will enhance us humans for the better. Majority of us still act like wild animals.

2

FreshSchmoooooock t1_jegsqgh wrote

Just as we humans were created by God in his image.

AI is created by human in our image.

−4

ryusan8989 t1_jee0hm3 wrote

I learned about the singularity in 2013, as a freshman in college. I remember thinking to myself about how my life would be in the 2030s. Now that we’re approaching it, it really does feel like a movie and someone handed me a preview of the script. I say this because I’ve known about the exponential trends of AI, and no one around me seems to be aware of the massive amounts of change we’re about to experience, whether good or bad.

22

chinguetti t1_jee1giv wrote

I thought it would not happen during my lifetime and I would never see the next chapter in human history. Now it is imminent and every day I am excited to see the news. We are blessed or cursed to be alive at this momentous time. Either way, it is going to be a wild ride.

21

FlavinFlave t1_jefk4wl wrote

I’m in agreement; I’m anxiously optimistic about what the future holds. And after nearly two decades of a drastically declining society, it’d be nice for once to feel like something good could be coming.

The limitless potential is really the stuff of dreams - just gotta make sure we don’t turn the universe into a giant paper clip by accident

8

webernicke t1_jee97rk wrote

"We are the middle children of history. Born too late to explore the world, born too early to explore the universe."

That quote doesn't feel so bad these days.

20

AsuhoChinami t1_jeepc5a wrote

Yeah, that quote has always had "Person who doesn't know anything at all about technology" written all over it. Teehee I only care about sci-fi shit like the world being a space opera (btw sci-fi taught me nothing will change much until the 24th century!), shame to be alive during the 21st century since nothing will change during my entire lifetime haha :)

12

turnip_burrito t1_jeg3lh1 wrote

Born just in time to become immortal, better versions of ourselves, maybe.

6

DetachedOptimist t1_jedbv4w wrote

Whatever gets produced for AI advancement, it won’t be us reaping the benefits. That shit is locked down by the big boys.

9

ItIsIThePope t1_jedl88k wrote

Ideally, AI will recognize the greed of these people and extend its help to everybody. The problem could lie in how much these "Big Boys" can align the AI for their own personal gain, because if they could, we could be exponentially fucked. That said, if we're fucked then we might just die and we would finally have peace!

and those who stay can perpetually be tormented by their inability to continuously satiate carnal desires

5

tkeRe1337 t1_jedikei wrote

Man, I’ve been trying for 10 years to explain why we need to vote for the Pirateparty here and change copyright laws. Maybe people will realize I’m not a tool sooner rather than later…

4

sdmat t1_jegkhkg wrote

Yes, damn those capitalists hoarding all the technology for themselves. Electricity, cars, air travel, computers, television, mobile phones, vaccines, and now ChatGPT.

Not one drop for the common man 😪

3

Chatbotfriends t1_jeej84v wrote

I am not really crazy about the speed at which science, AI and robotics are progressing. We are entering uncharted waters without any kind of anchor. Guardrails have not been put up. The world is not ready to transition into a workless society. If AI ever does become self-aware, we may find ourselves with a genie that can't be put back into the proverbial bottle. All countries, including socialist and communist ones like Russia and China, tax their people. There are only 23 countries that do not directly tax their people. The massive unemployment that may follow will cause a huge increase in taxes to support the unemployment and retraining of people. Tech is taking away jobs faster than it is creating them. Not even Russia and China will appreciate the upheaval this may create.

7

No_Ninja3309_NoNoYes t1_jeef1v1 wrote

For me the singularity equals simulation. If the singularity is possible then simulation is possible. And if something is possible, you can't rule it out. So I hope we don't get the literal Singularity because I don't want to be a NPC. There's a chance AI will be banned in several countries which could slow progress considerably.

5

RiotNrrd2001 t1_jeesk45 wrote

Two weeks ago I was running the Opt2.3B (I think) language model, which is not very capable and ran like an absolute dog on my machine. Last week, I downloaded Alpaca, which was better, twice the size, and ran super fast. Four days later I downloaded GPT4All, which is even better than that, and now I'm eyeing Vicuna, which does better on many tasks than Bard, thinking nothing but "gimmee" (so far that one isn't available for download, but man is the online demo impressive).

I was actually sort of surprised that Vicuna didn't become available for easy download overnight. This snail-pace has got to stop! /s/s/s/s/s

5

Skin_Discombobulated t1_jef50jj wrote

Hey! I have not responded in a while! About AI... yes, I am so excited!! How about microwaves? Remember before then, having to warm things up on the stove in a pot, or in the oven, using more energy? How about convenience? It took a lot longer before microwaves came out!

I had a severe traumatic brain injury on August 29, 1987, and what happened to my brain from the impact of the head-on car crash can never be repaired. I lost my right field of vision in both eyes and it will never come back. Or will it??

Neuralink is working on repairing brain injuries and spinal cord injuries. I would love to get my right field of vision back! Or even to have my sense of taste and sense of smell again! Well, they have to prove it, right?! Here I am!
😁

4

HarbingerDe t1_jef93gg wrote

I think people need to temper their expectations a bit. Things definitely are ramping up, but there's no saying when we'll reach broadly usable AGI.

For one, transistors have pretty much stopped getting smaller. We're butting up on fundamental physical limits there.

So, without some as of yet unknown computational paradigm shift, it's possible that true AGI may always need to run on building sized computers consuming megawatts/gigawatts of power.

People could still access this remotely via the cloud, presumably, but it would severely limit the scale and impact of AGI in regular life.

2

Honest_Science t1_jef8oyr wrote

Meta is making great progress in embodiment. It already seems unstoppable.

1

CurrentGap t1_jefivcb wrote

I was always skeptical: all the sci-fi movies set in the near future started with "in the year 202*", and I thought that's not gonna happen. But here it is; using ChatGPT is like magic.

1

salesforceonee t1_jefwlu6 wrote

The AI revolution has opened up many lucrative opportunities for those who look for them. My plan is to go from making decent money to making a fortune. I love being alive during this time in history.

1

SpinRed t1_jegg085 wrote

Settle down skippy.

I asked ChatGPT for a simple line graph (or even a link to one) showing demand for NFTs over the last two years... it couldn't do it. I had to do a Google search. I'm guessing you can start going over the moon with excitement around version 6 or 7... maybe.

1

SWATSgradyBABY t1_jeemk4y wrote

I think that if something smarter than us gets loose on the internet, it could be the end of the world, and we're so drunk right now that we don't care. At least we'll be deliriously happy up until maybe the last month or so, I guess.

0

ididntwin t1_jeeqknb wrote

This sub has really gone downhill. Everyone needs their own individual thread proclaiming their predictions or how giddy they are. No one cares.

There really should be a megathread for these stupid posts.

0

Arowx t1_jee00so wrote

Maybe it's just an improved search chatbot, pre-loaded with grammar and information-relationship patterns.

Chatbots do have a history of fooling people into thinking they're more than they are. For example, a student at my uni in the '90s was flagged by the IT staff after being logged into the system for days, active nearly 24/7. It turned out the student was chatting up an early chatbot.

Could this just be chatbot love, and we haven't yet hit the low that comes when we figure out its flaws?

On the other hand, if these AI tools let us build better AI tools faster and improve the hardware they run on, then we might be on that S-curve.

−1

FreshSchmoooooock t1_jeeua91 wrote

Yes, it's the chatbots mirroring humanity without any idea of what they're doing, and therefore creating the illusion of being something they aren't yet.

1

Unprepped321 t1_jedflgg wrote

Hahahahha one step closer to the apocalypse

−4

Professional_Copy587 t1_jedmw2n wrote

No, it isn't.

Stop drinking the Kool-Aid that is the echo chamber of this sub. Go watch Sam Altman talking about it on Lex Fridman's podcast.

Generative AI is a transformative tool that's going to change a lot of things, but just because it spits out content in a manner that looks like AGI doesn't mean it is one. Yes, you will find a paper, or one expert, who thinks it is. That doesn't mean it is. The majority of experts say it isn't. Altman himself states it isn't.

Is it progress towards AGI? Maybe; we don't know. The first AGI may build on work that does not involve this technology pathway AT ALL.

18 months from now, when the generative-AI low-hanging fruit has been picked and the rate of improvement drops, with some cool systems helping people in the workplace and search engines replaced by chat assistants, the people on this sub will be writing whiny posts about whether we're entering an AI winter. All because they created an expectation in this echo chamber that didn't match reality.

−5

futebollounge t1_jedndb3 wrote

Not that I'm in the AGI-in-2025 camp or anything, but don't be naive about Sam's incentives here.

15

Professional_Copy587 t1_jednlsc wrote

I'm not. I actually do think (completely guessing) that humans will create an AGI before 2032, but the hysteria and hype on this subreddit (and, with it, the complete failure to understand how these systems produce the content they do) are reaching levels of complete delusion due to the echo chamber.

10

seas2699 t1_jednspr wrote

No offense, but Sam Altman is a better salesman than anything else. You're gonna take the word of the guy who said they need to increase regulations on companies other than his? My ass.

10

Professional_Copy587 t1_jedo32c wrote

OK, disregard his view. Go look at the views of the majority of the rest of the experts. They aren't proclaiming this the start of the birth of AGI, ASI, and the singularity like this sub is now doing on a daily basis. They are pretty clear that generative AI is a very transformative technology, but it is NOT AGI, nor do we have any reason to think it's close. Most estimates (guesses) are still 2030 or beyond.

3

1II1I11II1I1I111I1 t1_jee9x9f wrote

Watch this interview with Ilya Sutskever if you get the chance. He's the chief scientist (the brains) of OpenAI. If you read between the lines, or even take what he says at face value, it seems to him like there are very few hurdles between the paradigm of scaling LLMs and achieving AGI. We're very clearly on track, and very clearly the pace is only increasing. Unless regulation slows down AGI, it's most likely here before 2030.

5

Professional_Copy587 t1_jeebr3y wrote

NOT clearly on track. Poll the experts on how to achieve AGI; poll them on whether we're on track. The answer you'll mostly get is "we don't know." Yes, you'll find one expert who says something different, but overall we don't know.

This may very well be one part of what is required to achieve AGI, and the remaining components may take another 50 years to figure out. Early progress in fusion research led people to believe we'd have fusion power stations by the time I was an adult. Early progress in computer science led people to think the same about AI.

We do not know how close we are, or how to get closer. All we know is that generative AI is an interesting tech that will revolutionize many industries.

4

HeavyMetalLyrics t1_jeery4a wrote

At first I found him inspiring, but by the time the interview concluded, he'd left me with a sinister vibe.

2

Automatic_Paint9319 t1_jeetzmq wrote

Sinister? Care to elaborate?

3

HeavyMetalLyrics t1_jefwkjy wrote

I went from seeing him as a benevolent technologist to seeing him as a capitalist CEO who knows he's unleashing something extremely dangerous in the service of gaining massive amounts of wealth and fame/notoriety.

3

Jalen_1227 t1_jegjgul wrote

I honestly feel like they're all thinking like that, including Sam. I hope Sam doesn't turn out to be the next Hitler.

1

HeavyMetalLyrics t1_jegl0kn wrote

I don't think it'll be anything like that; more like he'll unleash something that he can't contain. He'll make a killing and go down in (the remaining few months or years of) history before the AI somehow eradicates human life.

2

amplex1337 t1_jedvacf wrote

I still find useful code examples faster through Google search than through ChatGPT. Even 4.0 spits out code that doesn't work way too often, and then I'm debugging: bad API URLs, PowerShell cmdlets that don't exist, information that's outdated or just doesn't work, etc. It's often faster just to RTFM. I hate to be in the "get off my lawn" camp, because it's still exciting technology and I've considered myself a futurist for >20 years, but I completely agree. We could have an AGI by 2025, but I'm not sure we are as close as people think, and the truth is no one knows how close we really are to it, or if we are even on the right path at all yet. It's nice to give people hope, but don't get addicted to hopium.

2

1II1I11II1I1I111I1 t1_jeea3wq wrote

>the truth is no one knows how close we really are to it, or if we are even on the right path at all yet.

Watch this interview with Ilya Sutskever. He seems pretty confident about the future and about the obstacles between here and AGI. The people inside OpenAI definitely know how close we are to AGI, and scaling LLMs to get there is no longer outside the realm of feasibility.

3

Shiningc t1_jeeedon wrote

At this point it's a cult. People hyping up LLMs have no idea what they're talking about; they're just eating up corporate PR and whatever dumb hype the articles churn out.

These people are in for a disappointment in a year or two. And I'm going to be gloating with "I told you so."

−3

Professional_Copy587 t1_jeehzwu wrote

Hopefully the sub returns to what it was; it was a reasonable subreddit before all this delusion.

−1

NefariousNaz t1_jeeqt2k wrote

This sub has always leaned to the optimistic side. I'd say it has become far more pessimistic in the past year or two with the influx of new members.

4