Submitted by BrownSimpKid t3_1112zxw in singularity
Frumpagumpus t1_j8dg6rt wrote
Reply to comment by helpskinissues in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
you are moving the goalposts.
two messages ago is short term memory, what you are now talking about is long term memory.
You can also try to give it long-term memory, for example by summarizing previous messages into the prompt.
But, yes, it is more limited than humans, so far, at incorporating NEW knowledge into its long term memory (although it has FAR more text memorized than any human has ever memorized)
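The "summarize previous messages" idea above can be sketched in a few lines. Everything here is hypothetical: `llm` and `summarize` are invented stand-ins (a real version would call an actual model API and ask the model itself to compress the history), stubbed so the sketch is self-contained.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an API request).
    return f"[model reply to {len(prompt)} chars of prompt]"

def summarize(history: list[str]) -> str:
    # A real version would ask the model to compress the history;
    # truncating to the last few messages keeps this runnable as-is.
    return " | ".join(msg[:40] for msg in history[-5:])

def chat_with_memory(history: list[str], user_msg: str) -> str:
    # "Long-term memory" here is just a summary of everything said
    # so far, prepended to the new prompt.
    prompt = f"Summary so far: {summarize(history)}\nUser: {user_msg}"
    reply = llm(prompt)
    history.extend([user_msg, reply])
    return reply
```

The model never actually remembers anything; the caller smuggles a compressed transcript back in on every turn.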
helpskinissues t1_j8dh25h wrote
>two messages ago is short term memory, what you are now talking about is long term memory.
Any memory, actually; it is indeed very incapable.
>(although it has FAR more text memorized than any human has ever memorized)
No. It doesn't memorize; it tries to compress knowledge and fails, which is why it's usually wrong.
>it is more limited than humans
And more limited than ants. The vast majority of living beings are more capable than ChatGPT.
PoliteThaiBeep t1_j8dv9is wrote
>And more limited than ants. The vast majority of living beings are more capable than ChatGPT.
Nick Bostrom estimated that simulating a functional human brain requires about 10^18 FLOPS.
Ants have about 300,000 times less; call it 10^13 (really closer to 10^12) FLOPS.
ChatGPT inference reportedly can generate a single word in about 350ms on a single A100 GPU. That is, of course, if it could fit on a single GPU, which it can't. You'd need about 5 GPUs.
But for the purposes of this discussion we can imagine that something like ChatGPT could theoretically run, albeit slowly, on a single modified GPU with a massive amount of VRAM.
A single A100 is about 300 teraFLOPS, which is roughly 10^14 FLOPS. And it would be much slower than the actual ChatGPT we use via the cloud.
So no, I disagree that it's more limited than ants. It's definitely more complex, by at least one order of magnitude, at least as far as brain-scale compute goes.
And we didn't even count training compute, which is orders of magnitude bigger than inference, so the real number is probably much higher.
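The arithmetic above can be checked in a couple of lines, using the comment's own round figures (these are the discussion's estimates, not measurements):

```python
import math

# Sanity check on the estimates quoted above.
human_brain_flops = 1e18              # Bostrom-style brain-simulation estimate
ant_flops = human_brain_flops / 3e5   # "about 300,000 times less", ~3e12
a100_flops = 3e14                     # one A100, ~300 teraFLOPS

orders = math.log10(a100_flops / ant_flops)
print(f"ant  ~ {ant_flops:.1e} FLOPS")
print(f"A100 ~ {a100_flops:.1e} FLOPS")
print(f"A100 exceeds the ant estimate by ~{orders:.1f} orders of magnitude")
```

With these inputs the gap comes out to roughly two orders of magnitude, consistent with the comment's "at least one" claim (using the more conservative 10^13 ant figure shrinks it to one).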
helpskinissues t1_j8dwlnk wrote
Having flops =/= Being an autonomous intelligent machine
This subreddit is full of delusional takes.
PoliteThaiBeep t1_j8dzg5j wrote
The word "singularity" in this subreddit refers to Ray Kurzweil's book "The Singularity Is Near". It practically assumes you've read at least that book before coming here, since the whole premise rests on ever-increasing computational capability eventually leading to AGI and ASI.
If you didn't, why are you even here?
Did you read Bostrom? Stuart Russell? Max Tegmark? Yuval Noah Harari?
You just sound like me 15 years ago, when I didn't know any better, haven't read enough, yet had more than enough technical expertise to be arrogant.
helpskinissues t1_j8dztpc wrote
I did, and I've been in this field for more than 15 years. The singularity doesn't mean calling a PS5 an autonomous intelligent machine because it has FLOPS. Lol. Anyway, I have better things to do. If you have anything relevant to share, I may reply. For now it's just cringe statements about ChatGPT being smarter than ants because of FLOPS. lmao
Frumpagumpus t1_j8dhip1 wrote
usually wrong and mostly right lol. Better than a human.
I literally just explained to you that you COULD give it short-term memory by prepending context to your messages. IT IS TRIVIAL. If I were talking to GPT-3 it would not be this dense.
Humans take time to pause and compose their responses. GPT-3 is afforded no such grace, but it still does a great job anyway, because it is just that smart.
Yesterday I gave it two lines of SQL DDL and asked it to create a view denormalizing all columns except the primary key into a nested JSON object. It did it in 0.5 seconds; I had to change one word in a 200-line SQL query to get it to work right.
Yeah, that saved me some time. It does not matter that it was slightly wrong. If that is a stochastic parrot, then humans must be mostly stochastic sloths, barely even capable of parroting responses.
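The "prepending context" trick described above really is just string concatenation. A minimal sketch, with invented names (`WINDOW`, `prompt_with_context`) and no real model call:

```python
WINDOW = 4  # how many recent messages to prepend

def prompt_with_context(messages: list[str], user_msg: str) -> str:
    # Short-term memory via brute force: stitch the last few messages,
    # verbatim, onto the front of the new prompt.
    recent = messages[-WINDOW:]
    if not recent:
        return f"User: {user_msg}"
    return "\n".join(recent) + f"\nUser: {user_msg}"
```

This is the raw-transcript counterpart to summarization: no compression, so it stays faithful but runs out of context window faster.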
helpskinissues t1_j8dhsz4 wrote
Nonsense, sorry. Ants do not need prepended context.
"Mostly right"? No, it's actually mostly wrong. What the heck are you saying? Try playing chess with ChatGPT; most of the time it'll make things up.
Anyway, I suggest you read some experts rather than acting like GPT-3, being mostly wrong. Cheers.
Frumpagumpus t1_j8diap9 wrote
lol, ants can't speak, and I'd be curious to read any literature on whether they possess short-term memory at all XD
challengethegods t1_j8dn5cy wrote
These "it's dumber than an ant" people aren't worth the effort, in my experience, because to think that you'd have to be dumber than an ant yourself, of course. Also, yeah, it's trivial to give memory to LLMs; there are like 100 ways to do it.
helpskinissues t1_j8dwubv wrote
Waiting for your customized ChatGPT model that maintains consistency after 5 messages. Make sure to ping me; I'd gladly invest in your genius startup.
challengethegods t1_j8dylol wrote
That alone sounds like a pretty weak startup idea, because at least 50 of the 100 methods for adding memory to an LLM are so painfully obvious that any idiot could figure them out and compete, so it would probably be completely ephemeral to try to form a business around it. Anyway, I've already made a memory catalyst that can attach to any LLM, and it only took like 100 lines of spaghetti code. Yes, it made my bot 100x smarter in a way, but I don't think it would scale unless the bot had an isolated memory unique to each person, since most people are idiots and will inevitably teach it idiotic things.
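One of the "painfully obvious" schemes alluded to above, a per-user isolated memory whose contents get prepended to each prompt, can be sketched like this. All names (`MemoryCatalyst`, `remember`, `build_prompt`) are invented for illustration and are not from any real library, let alone the commenter's actual code:

```python
from collections import defaultdict

class MemoryCatalyst:
    """Per-user fact store whose contents get prepended to each prompt."""

    def __init__(self) -> None:
        # Isolated memory per user, so one user's "teachings" can't
        # pollute what the bot knows about anyone else.
        self._store: dict[str, list[str]] = defaultdict(list)

    def remember(self, user: str, fact: str) -> None:
        self._store[user].append(fact)

    def build_prompt(self, user: str, msg: str) -> str:
        facts = "; ".join(self._store[user]) or "(none)"
        return f"Known facts about {user}: {facts}\nUser says: {msg}"
```

The LLM itself stays stateless; the wrapper decides what each user's prompt gets seeded with, which is exactly why the isolation matters.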
helpskinissues t1_j8dzdfi wrote
Enjoy your secret private highly demanded chatbot version then.
This subreddit... Lol.