Comments


ghomerl t1_j03o790 wrote

Nah, unless it's a fucking HUGE leap. GPT-3 won't be taking over any jobs, just making existing jobs easier.

23

Wise-Yogurtcloset646 t1_j03oc2a wrote

GPT-40, maybe. 4 will be an improvement but isn't going to change the world... yet...

−6

cole_braell t1_j03p5ru wrote

GPT3 - 175 Billion Parameters

GPT4 - 100 Trillion Parameters (rumored)

It’s a pretty significant leap.

Edit: Rumor has been debunked, apparently. We’re probably not looking at anything near 100T for GPT4.

12

DreamWatcher_ t1_j03puih wrote

The technology behind GPT-4 will be one of many factors behind the big change that's coming. The advancements and milestones we're seeing across many fields of science and technology are what will change the world order within the next decade or two; it won't just be ChatGPT. I believe everything is connected.

67

12342ekd t1_j03quki wrote

Yeah and since they’re computers, they will be communicating instantaneously and they will be able to share deep insights and express them better with each other than we ever could

7

gskrypka t1_j03slzd wrote

Well, for particular industries GPT3 is already pretty revolutionary, so we should see progress continue in those industries.

But we still shouldn't overestimate GPT's capabilities. As far as I understand, in the end it's a very good imitation model of the data on the internet.

That said, it will surely be a substantial step forward, and we will see more industries disrupted by AI.

2

Denpol88 t1_j03sm9j wrote

Maybe not GPT-4, but yes GPT-5.

9

__ingeniare__ t1_j03tzp3 wrote

It's just a rumour, and I think Sam Altman basically denied that this was the case. Another, perhaps more plausible, rumour is that GPT-4 will have a very different architecture, where the parameter count between it and GPT-3 doesn't say much because it's no longer just about brute-force scaling.

25

ChronoPsyche t1_j03u52y wrote

We don't know anything about GPT-4. Anything you think you know comes from rumors that are not very credible.

>Won’t this basically end society as we know it if it lives up to the hype?

I can't roll my eyes hard enough at this statement. Can we turn down the sensationalism a few notches on this sub? It's nauseating.

56

ChronoPsyche t1_j03vrmh wrote

No, you can't extrapolate. There are reasons behind things. GPT2 and GPT3 are both transformer models, and GPT4 will likely be a transformer model too. At best it will be a better transformer model, but it will still have context window limitations that prevent it from becoming anything that could be considered "game over for the existing world order". It will likely just be a better GPT3, not AGI or anything insane like that.

21

hauntedhivezzz t1_j03vu9u wrote

It feels like LLMs have been a big deal, but only in certain circles. The image-synthesis models opened up the general idea of AI as an important tool to a much broader audience, who retroactively found GPT-3.

It’s unclear if OpenAI was always going to release ChatGPT or if it was in some ways built as an easier access point than the playground, for a growing community of people engaging with their products.

Whatever the case may be, the timing is pretty good. If GPT-4 is a decent leap forward, you have developers who have been building on top of GPT-3 for years now (some of whom have become sizable businesses in their own right), a bunch of use cases out in the world, and a growing community that understands future use cases. All of which could allow GPT-4 to seriously break into the mainstream, not as a name brand per se but as a tool that impacts a much larger part of society.

14

Loud-Mathematician76 t1_j03z5py wrote

Now imagine that the people who rule the world likely already have access to something like GPT5, and probably had access to tools like GPT4 or better for quite some time now.

−6

Practical-Mix-4332 OP t1_j0401zd wrote

I don’t think it needs to be an AGI to make a huge difference though. If it really is much more impressive than GPT-3 it’s going to start causing massive shockwaves throughout society. It will bring AI to the public consciousness even more than it already is and make people start planning for that future instead of just imagining it as a hypothetical distant time.

0

rushmc1 t1_j0403yc wrote

>>Won’t this basically end society as we know it if it lives up to the hype?

Here's hoping.

2

pigeon888 t1_j040ecc wrote

I wonder if GPT-4 is when the world starts really paying attention.

Then the conversation might shift meaningfully to what will be possible by GPT-10 or GPT-20.

6

SurroundSwimming3494 t1_j041tx7 wrote

Ladies and gentlemen, I present to you the most sensationalist post ever posted on r/singularity.

But seriously, this is just insane.

14

civilrunner t1_j0449xz wrote

I suspect GPT4 will mark the start of commercialization of AI systems for common use; however, I suspect we will need more of an advancement in AI than just scale to truly get to a point where we can automate a substantial portion of the workforce.

We're already seeing what ChatGPT can do; I think it's clear that we'll see some wild things by 2030. I'll be really curious how well these types of AI models transfer to robotics and physical systems.

10

Beatboxamateur t1_j0457t7 wrote

I think it could be, but OpenAI's definitely going to hold these models back for now, rather than taking us to some insane proto AGI immediately. Sam Altman's been clear about that lately, so honestly, I'm not expecting the whole world to change yet.

I think the models Stability.AI comes out with are going to be even crazier, since they'll pack as much as they can into them.

I'm looking forward to having an open source GPT-4 level LLM and text to video model!

2

HuemanInstrument t1_j045udp wrote

I think it could be. But I think it needs slightly different architecture / algorithms to produce results that are on par with any capability of a human being in any capacity.

I don't think we can get true AGI from GPT-4 but we will get something extremely profound and mildly world changing.

And I do think by 2025 we will have an AGI that surpasses human beings in every capacity, and then the self designing recursion will begin and the singularity will have arrived.

6

Johnny_Glib t1_j04699i wrote

You're like an overexcited child on Christmas eve. Calm down, it's just a chatbot.

17

CarlPeligro t1_j047zfy wrote

I've been feeling weirdly giddy lately. It didn't hit me right away. I messed around with ChatGPT for a few days and thought of it (for a time) as a kind of enhanced Google. But once I began to get a feel for what it was doing and the magnitude of what it was capable of -- that's when the giddiness set in. There is a kind of liberation that comes with a total loss of control. The giddiness set in with the gradual realization that nothing I do from here on out really matters all that much. Be a good person, try to get back in touch with some old friends, try to better myself wherever I can. But otherwise ...

The big-picture stuff is in AI's hands now, for better or for ill.

12

manOnPavementWaving t1_j04bxs9 wrote

I agree that you can't extrapolate, but it's definitely not the case that GPT4 has to have the same limitations as GPT2 and GPT3. Context window issues can be resolved in a myriad of ways (my current fav being this one), and retrieval-based methods could solve most of the factuality issues (and are very effective and cheap, as proven by RETRO).

So I want to re-emphasize that we have no clue how good it will be. It could very well smash previous barriers, but it could also be rather disappointing and very much like ChatGPT. We just don't know.
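To make the retrieval idea concrete, here's a toy sketch: a naive word-overlap retriever stands in for the learned neural retrievers systems like RETRO actually use, and the corpus and query are made up for illustration.

```python
def retrieve(query, corpus, k=2):
    """Naive retrieval: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())

    def overlap(doc):
        return len(query_words & set(doc.lower().split()))

    return sorted(corpus, key=overlap, reverse=True)[:k]


def build_prompt(query, corpus):
    """Prepend retrieved passages so the model can ground its answer in them."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


# Made-up corpus for illustration
corpus = [
    "RETRO augments a language model with retrieved text chunks.",
    "Context windows limit how much text a transformer can attend to.",
    "Bananas are yellow.",
]

prompt = build_prompt("How do retrieval methods help language models?", corpus)
```

The point is that factual content lives in the (cheap, swappable) corpus rather than in the model's weights, which is why retrieval-augmented models can match much larger plain LLMs on factuality.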

5

MechanicalBengal t1_j04clxj wrote

You realize that many customer service jobs are going to be over, very soon, right?

Not that human input will completely go away from the customer-service workflow, but it will be more like self-checkout at Home Depot, where one person monitors eight registers at once and can intervene in the event of an issue.

This tech will do the exact same thing for low level CS.

12

Thatingles t1_j04dl0v wrote

They are an inevitable part of self-moderated social media. It's a function of the system. With unlimited content to devour, how many are willing to work through arguments that make them uncomfortable or angry? It's all too easy to click away and go back to the comfort of something that affirms your existing worldview.

No, I don't have a solution for that and yes I suspect it is a very bad thing the consequences of which we are just starting to work through. Chatbots will definitely enhance the effect as will any form of proto or full AGI (computer, create me a documentary explaining why I'm right about everything!).

4

AlmostHuman0x1 t1_j04e7c4 wrote

Just in case, I started saying “Thank you” to Alexa and Siri. 😁

3

Readityesterday2 t1_j04e9hw wrote

Bullshit answer: the world will change for the better for all of us, so finally we can achieve creative nirvana and sip teas with our UBI. Oh, and we don't know anything about GPT4, so stop the speculation, I'm puking, yuk.

Reality check: many people's jobs, from marketing to assistants to writing, are pretty monotonous and mundane, and can be replaced with a far cheaper alternative with little to no loss in customer experience. GPT4 is rumored to have 100 trillion parameters, comparable to the human brain. Altman says it might have beaten the Turing test. And ChatGPT right now is running on tech that's two years old. So yeah, the world order is about to upend, and may we wisely evolve.

3

RichardKingg t1_j04go8o wrote

Yes! The way I see it is like everything is a giant feedback loop, discoveries in other fields can help with developing new technologies and using said technologies to discover new things, and the cycle goes on and on, and even faster than before.

This is getting scary and fascinating at the same time.

22

maciejbalawejder t1_j04h28e wrote

The biggest limitation of GPT-3 wasn't the size but the data. It was trained on almost the whole internet and still underfit. At the end of the day, the goal of the model is to predict the next word. I don't think that will necessarily lead to AGI, but it will definitely be great to see interesting properties emerge from such a simple objective function.
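For anyone unfamiliar with how simple that objective function is: it's just cross-entropy on the true next token. A toy sketch, with a made-up vocabulary and probabilities purely for illustration:

```python
import math

def next_token_loss(probs, target_index):
    """Cross-entropy loss for one next-token prediction.

    probs: the model's predicted distribution over the vocabulary
    target_index: index of the token that actually came next
    """
    return -math.log(probs[target_index])

# Toy vocabulary and a hypothetical predicted distribution for
# the continuation of "the cat ..."
vocab = ["the", "cat", "sat", "mat"]
probs = [0.1, 0.2, 0.6, 0.1]  # model thinks "sat" is the most likely next word

# Low loss when the model assigns high probability to the true next token
loss = next_token_loss(probs, vocab.index("sat"))
```

Everything a model like GPT-3 does emerges from minimizing this one quantity averaged over its training text, which is why the underfitting-on-data point matters: the objective can't be pushed lower without more (or better) text to predict.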

1

beezlebub33 t1_j04k5c9 wrote

That would, IMHO, be a big win. Even if the scaling hypothesis is correct, why would you want to solve the problem that way when there are probably far better ways to solve it?

Sure, we could fly an interstellar spacecraft to another solar system, but it would be a bad idea, because in the time it would take to get there, other ways of getting there would be invented. If you left for the stars now, people would be waiting for you when you arrived.

In the same way, simply scaling compute and data may get you to a certain amount of intelligence. But the costs and effort would be huge. It would probably be better to spend that time and effort (and money) on making the underlying ideas better. And even if it turns out that, yes, we have to scale, waiting until computational costs come down further is probably a good idea.

3

sideways t1_j04lm2t wrote

You are absolutely right - and quite early in that realization. Reminds me of this quote by Winston Churchill:

"Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning."

7

tatleoat t1_j04p1db wrote

I suppose it would depend on how good it is

1

Rorschach120 t1_j04qcs5 wrote

I keep seeing replies like ‘we dont know what the future holds’ and ‘stop sensationalizing things’…

Isn’t this a sub about the ideas of Ray Kurzweil et al and how we are 25 years away from an event of combining our human brains with AI brains? The entire thing is about bold theories about the future.

Why act like what OP said is nauseating while embracing something much more far-fetched happening soon?

4

ChronoPsyche t1_j04t3fl wrote

  1. There's a difference between speculating about events 25 years from now vs saying that something next year will end society as we know it based on nothing of substance.

  2. Not everyone agrees on the singularity timeline. This is just a singularity sub, not a singularity in 25 years sub.

5

Brandon0135 t1_j04viun wrote

Making jobs easier is taking over jobs. If a job is made 50% easier, half of the team is laid off.

Unless we change the economic order, AI will just be a corporate profit maximizer.

1

maskedpaki t1_j04x2hm wrote

It's just you

Gpt4 will be a better language model. But this whole gpt4 is the singularity stuff needs to stop imo.

2

User1539 t1_j04yt4t wrote

I think you're right that this technology, if not any specific implementation, has the potential to destabilize the world as we know it.

I've already had friends losing work to these tools. Graphic designers tell me they hardly get asked to do commissions anymore. I have a friend who did dictation for a law office, and that work dried up all at once; she had to go back to teaching.

It's just the edges of things, today, but it doesn't have to get much better to take your order at McDonalds, answer phones, help you schedule classes, etc ...

It also doesn't take anywhere near 100% market saturation to destabilize things. The unemployment rate peaked at just over 25% during the great depression.

2

redditor235711 t1_j052d59 wrote

I asked ChatGPT:

It's natural to have concerns about the potential impact of powerful technology like GPT-4. However, it's important to remember that technology is only as good as how it is used. While GPT-4 may have the ability to perform many tasks that are currently done by humans, it's up to us as a society to decide how we want to use this technology. We can use it to augment human capabilities and improve our lives, or we can use it in ways that are harmful. Ultimately, the impact of GPT-4 will depend on the choices we make.

https://chat.openai.com/chat

1

Rorschach120 t1_j0565nr wrote

Fair points. I don’t really agree with OPs statements but was surprised to see not just your comments (which were polite by comparison) but others bashing on people for getting excited over GPT4.

2

civilrunner t1_j057e8o wrote

I'd suspect a long while. There isn't that much value in home robots, so you can't charge much for them. They'll be in a ton of jobs before they're ever in homes. I wouldn't expect anything until a while after manual construction jobs are automated, which seems a ways away, though admittedly it could happen sooner than expected, because AI is a wild technology.

1

SurroundSwimming3494 t1_j0588m1 wrote

I'm somewhat more skeptical. This AI isn't factually reliable yet, so you can't really trust its answers. Another reason I'm skeptical is that many people don't even try to converse with the automated customer service agents we have now and skip right to a human agent, or would prefer to speak to one. I definitely think future models will reshape customer service, but not this one. I might be wrong, but I guess we'll just have to wait and see.

−1

vampyre2000 t1_j05dzzz wrote

Nope. It will be a great leap over what we have today, but it will just lead to GPT-5 to fix any issues and improve the system. What you should be looking for is all the extra use cases that pop up once people can see what it does and use it to improve their own tools. It acts as a catalyst for new ideas and sparks new funding.

2

superluminary t1_j05oqo6 wrote

We have an aging population and not enough young people to care for the elderly. Our current solution involves people cycling from house to house doing the washing up and putting food in the microwave.

1

QuietOil9491 t1_j05pvm5 wrote

The countless people who got cancer and radiation poisoning during the advent of the nuclear age… were they “optimists”, or “pessimists”? 🤔

You seem like the group who ate a lot of glowing paint chips back then…

1