ChronoPsyche t1_j03u52y wrote
We don't know anything about GPT-4. Anything you think you know comes from rumors that are not very credible.
>Won’t this basically end society as we know it if it lives up to the hype?
I can't roll my eyes hard enough at this statement. Can we turn down the sensationalism a few notches on this sub? It's nauseating.
blxoom t1_j04ay65 wrote
it cancels out the amount of pessimism on r/technology and r/futurology
ChronoPsyche t1_j04b8v2 wrote
That's not how it works lol. It just makes more echo chambers.
Thatingles t1_j04dl0v wrote
They are an inevitable part of self-moderated social media. It's a function of the system. With unlimited content to devour, how many are willing to work through arguments that make them uncomfortable or angry? All too easy to click off that and go back to the comfort of something which affirms your existing worldview.
No, I don't have a solution for that and yes I suspect it is a very bad thing the consequences of which we are just starting to work through. Chatbots will definitely enhance the effect as will any form of proto or full AGI (computer, create me a documentary explaining why I'm right about everything!).
[deleted] t1_j04p7wr wrote
Gpt4 for singularity mod 2023
rixtil41 t1_j04anjw wrote
I just don't want it to be a slim improvement over ChatGPT.
QuietOil9491 t1_j04nek9 wrote
This is a sub for wanking after all
TopicRepulsive7936 t1_j04tr34 wrote
Pessimists have never been correct. The optimists haven't either because they are pessimists too.
QuietOil9491 t1_j05pvm5 wrote
The countless people who got cancer and radiation poisoning during the advent of the nuclear age… were they “optimists”, or “pessimists”? 🤔
You seem like the group who ate a lot of glowing paint chips back then…
TopicRepulsive7936 t1_j05qmqd wrote
I don't know but we will all get cancer.
Rorschach120 t1_j04qcs5 wrote
I keep seeing replies like 'we don't know what the future holds' and 'stop sensationalizing things'…
Isn’t this a sub about the ideas of Ray Kurzweil et al and how we are 25 years away from an event of combining our human brains with AI brains? The entire thing is about bold theories about the future.
Why act like what OP said is nauseating while embracing something much more far-fetched happening soon?
ChronoPsyche t1_j04t3fl wrote
- There's a difference between speculating about events 25 years from now vs saying that something next year will end society as we know it based on nothing of substance.
- Not everyone agrees on the singularity timeline. This is just a singularity sub, not a singularity-in-25-years sub.
Rorschach120 t1_j0565nr wrote
Fair points. I don't really agree with OP's statements but was surprised to see not just your comments (which were polite by comparison) but others bashing on people for getting excited over GPT-4.
ihateshadylandlords t1_j04ybbl wrote
Amen. People here need to go touch grass and stop acting like the sky is falling.
mantheship t1_j055mem wrote
To be fair, could you have anticipated how powerful GPT-3 was going to be? Some concern is warranted.
[deleted] t1_j05dz1y wrote
[deleted]
Practical-Mix-4332 OP t1_j03ujzs wrote
I mean you can kind of extrapolate based on the difference between GPT-2 and GPT-3, but yes, you are correct, it is all speculation.
ChronoPsyche t1_j03vrmh wrote
No, you can't extrapolate. There are reasons behind things. GPT-3 and GPT-2 are both transformer models, and GPT-4 will likely be a transformer model too. At best it will just be a better transformer model, but it will still have context-window limitations that prevent it from becoming anything that can be considered "game over for the existing world order". It will likely just be a better GPT-3, not AGI or anything insane like that.
manOnPavementWaving t1_j04bxs9 wrote
I agree that you can't extrapolate, but it's definitely not the case that GPT-4 has to have the same limitations as GPT-2 and GPT-3. Context window issues can be resolved in a myriad of ways (my current fav being this one), and retrieval-based methods could solve most of the factuality issues (and are very effective and cheap, as proven by RETRO).
So I want to re-emphasize that we have no clue how good it will be. It could very well smash previous barriers, but it could also be rather disappointing and very much like ChatGPT. We just don't know.
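For anyone unfamiliar with the retrieval idea mentioned above: instead of relying only on facts memorized in the model's weights, you look up relevant passages from an external corpus and prepend them to the prompt so the model can ground its answer in retrieved text. Here's a toy, stdlib-only sketch of that pattern (the bag-of-words scoring and the example corpus are made up for illustration; real systems like RETRO use learned neural embeddings, not word overlap):

```python
from collections import Counter
import math

def score(query, doc):
    # Crude bag-of-words cosine similarity, purely illustrative;
    # production retrieval uses learned embeddings.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return overlap / norm if norm else 0.0

def retrieve_context(query, corpus, k=2):
    # Pick the k most similar passages and prepend them to the prompt,
    # so the language model answers from retrieved text rather than
    # from whatever happens to be stored in its parameters.
    top = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]
    return "\n".join(top) + "\n\nQuestion: " + query

corpus = [
    "GPT-2 was released by OpenAI in 2019.",
    "RETRO augments a language model with a retrieval database.",
    "Context windows limit how much text a transformer can attend to.",
]
print(retrieve_context("What does RETRO do?", corpus, k=1))
```

The point of the design is that factual knowledge lives in the (easily updatable) corpus rather than the model, which is why retrieval can be so cheap relative to scaling up parameters.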
Practical-Mix-4332 OP t1_j0401zd wrote
I don’t think it needs to be an AGI to make a huge difference though. If it really is much more impressive than GPT-3 it’s going to start causing massive shockwaves throughout society. It will bring AI to the public consciousness even more than it already is and make people start planning for that future instead of just imagining it as a hypothetical distant time.