Submitted by Vegetable-Skill-9700 t3_121agx4 in deeplearning
BellyDancerUrgot t1_jdns4yg wrote
Reply to comment by suflaj in Do we really need 100B+ parameters in a large language model? by Vegetable-Skill-9700
-
Paper summarization, factual analysis of 3D generative models, basic math, and basic OOP understanding were the broad topics I experimented with. Not giving you the exact prompts, but you are free to evaluate it yourselves.
-
Wrong choice of words on my part. When I said ‘ace’ I meant that it does really well on LeetCode questions from before 2021 and is abysmal on ones after. Also, the ones it does solve, it solves at a really fast rate. From a contest that happened a few weeks ago, it solved 3 questions pretty much instantly, and that by itself would have placed it in the top 10% of competitors.
-
Unbiased implies being tested on truly unseen data, of which there is far less considering the size of the training data used. Many of the examples cited in their new paper, “Sparks of AGI”, are not even reproducible.
https://twitter.com/katecrawford/status/1638524011876433921?s=46&t=kwpwSgfnJvGe6J-1CEe_5Q
-
Insufficient because, as I said: no world model, no intuition, only memory. That is why it hallucinates.
-
Intuition is understanding the structure of the world without having to memorize the entire internet. A good analogy is how a child isn’t taught how gravity works when they first start walking. Or how you can lack knowledge about a subject and still infer based on your understanding of underlying concepts.
These are things you inherently cannot test or quantify when evaluating models like GPT that have been trained on everything, and you still don’t know what it has been trained on, lol.
-
You can keep daring me and I don’t care, because I have these debates with fellow researchers in the field; I’m always up for a good debate when I have time. I’m not even an NLP researcher, and even then I know about the existential dread creeping in on NLP researchers because of how esoteric these results are and how AI influencers have blown things out of proportion, citing cherry-picked results that aren’t even reproducible because you don’t know how to reproduce them.
-
There is no real way an unbiased scientist reads OpenAI’s new paper on “Sparks of AGI” and goes, “oh look, GPT4 is solving AGI”.
-
Going back on what I said earlier: yes, there is always the possibility that I’m wrong and GPT is indeed the stepping stone to AGI, but we don’t know, because the only results you have access to are not very convincing. And on a user level it has failed to impress me beyond being a really good chatbot which can do some creative work.
suflaj t1_jdnvq8q wrote
> Not giving you the exact prompts
Then we will not be able to verify your claims. I hope you don't expect others (especially those with a different experience, challenging your claims) to carry your burden of proof.
> When I said ‘ace’ I meant that it does really well on LeetCode questions from before 2021 and is abysmal on ones after.
I have not experienced this. Could you provide the set of problems you claim this is the case for?
> Also, the ones it does solve, it solves at a really fast rate.
Given its architecture, I do not believe this is actually the case. Its inference time depends only on the output length, not on the problem difficulty.
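To make that concrete, here is a toy sketch of greedy autoregressive decoding (the `model` callable is hypothetical; this is not OpenAI's actual serving code), showing why generation cost tracks output length rather than problem difficulty:

```python
def generate(model, prompt_ids, max_new_tokens, eos_id):
    # One forward pass per generated token: wall-clock time scales with
    # the number of output tokens, not with how "hard" the question is.
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = model(ids)  # hypothetical: returns next-token logits
        next_id = max(range(len(logits)), key=logits.__getitem__)  # greedy argmax
        ids.append(next_id)
        if next_id == eos_id:  # stop at end-of-sequence
            break
    return ids
```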
> From a contest that happened a few weeks ago, it solved 3 questions pretty much instantly, and that by itself would have placed it in the top 10% of competitors.
That does not seem to fit my definition of acing it. Acing means being able to solve all or most questions; for a given year, that is not the same as being able to solve 3 problems. Also, refer to the paragraph above on why inference speed is meaningless.
Given that it is generally unknown what it was trained on, I don't think it's even adequate to judge its performance on long-known programming problems.
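(If the training data were available, one could at least run a crude contamination check, in the spirit of the n-gram overlap checks in LM papers' decontamination appendices. A rough sketch, with hypothetical inputs:)

```python
def ngram_overlap(problem_text, training_corpus, n=8):
    # Fraction of the problem's word n-grams found verbatim in the corpus.
    # High overlap suggests the "benchmark" problem was effectively seen in training.
    words = problem_text.split()
    ngrams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    if not ngrams:
        return 0.0
    corpus_words = training_corpus.split()
    corpus_ngrams = {tuple(corpus_words[i:i + n])
                     for i in range(len(corpus_words) - n + 1)}
    return len(ngrams & corpus_ngrams) / len(ngrams)
```

But since the training set is closed, no outsider can run even this.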
> Insufficient because, as I said: no world model, no intuition, only memory. That is why it hallucinates.
You should first cite some authority on why it would be important. We generally do not even know what it would take to prevent hallucination, since we humans, who have that knowledge, often hallucinate as well.
> Intuition is understanding the structure of the world without having to memorize the entire internet.
So why would that be important? Also, the word you're looking for is generalization, not intuition. Intuition has nothing to do with knowledge; it is at most loosely tied to wisdom.
I also fail to understand why such a thing would be relevant here. First, no entity we know of (other than God) would possess this property. Second, if you're alluding that GPT-like models have to memorize something to know it, you are deluding yourself - GPT-like models memorize relations; they are not memory networks.
> A good analogy is how a child isn’t taught how gravity works when they first start walking.
This is orthogonal to your definition. A child does not understand gravity. No entity we know of understands gravity; we at most understand its effects to some extent. So it's not a good analogy.
> Or how you can lack knowledge about a subject and still infer based on your understanding of underlying concepts.
This is also orthogonal to your definition. First, it is fallacious in the sense that we cannot even know what objective truth is (and so it requires a very liberal definition of "knowledge"), and second, you do not account for correct inference by chance (which does not require understanding). Intuition, by a general definition, has little to do with (conscious) understanding.
> These are things you inherently cannot test or quantify when evaluating models like GPT that have been trained on everything, and you still don’t know what it has been trained on, lol.
First you should prove that these are relevant or wanted properties for whatever it is you are describing. In terms of AGI, it's still unknown what would be required to achieve it. Certainly it is not obvious how intuition, however you define it, is relevant for it.
> I’m not even an NLP researcher, and even then I know about the existential dread creeping in on NLP researchers because of how esoteric these results are and how AI influencers have blown things out of proportion, citing cherry-picked results that aren’t even reproducible because you don’t know how to reproduce them.
Brother, you just did an ad hominem on yourself. These statements only suggest you are not qualified to talk about this. I have no need to personally attack you to talk with you (not debate), so I would prefer if you did not trivialize your standpoint. For the time being, I am not interested in the validity of it - first I'm trying to understand what exactly you are claiming, as you have not provided a way for me to reproduce and check your claims (which are contradictory to my experience).
> There is no real way an unbiased scientist reads OpenAI’s new paper on “Sparks of AGI” and goes, “oh look, GPT4 is solving AGI”.
Nobody is even claiming that. It is you who mentioned AGI first. I can tell you that NLP researchers generally do not use the term as much as you think. It currently isn't well defined, so it is largely meaningless.
> Going back on what I said earlier: yes, there is always the possibility…
The things you said that are worth considering are easy to check - you can just provide the logs (you have the history saved), and since GPT4 is as reproducible as ChatGPT, we can confirm or discard your claims. There is no need for uncertainty (unless you will it).
BellyDancerUrgot t1_jdp945d wrote
Claim, since you managed to get lost in your own comment:
GPT hallucinates a lot and is unreliable for any factual work. It’s useful for creative work, where the authenticity of its output doesn’t have to be checked.
Your wall of text can be summarized as, “I’m gonna debate you by suggesting no one knows the definition of AGI.” The living embodiment of the saying “empty vessels make much noise.” No one knows what the definition of intuition is, but what we know is that memory does not play a part in it. Understanding causality does.
It’s actually hilarious that you bring up source citation as some form of trump card after I mention how everything you know about GPT4 is something someone has told you to believe in without any real discernible and reproducible evidence.
Instead of asking me to spoon-feed you, spend a whole 20 seconds googling.
https://twitter.com/random_walker/status/1638525616424099841?s=46&t=kwpwSgfnJvGe6J-1CEe_5Q
https://twitter.com/chhillee/status/1635790330854526981?s=46&t=kwpwSgfnJvGe6J-1CEe_5Q
https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks
https://aiguide.substack.com/p/did-chatgpt-really-pass-graduate
“I don’t quite get how it works” + “it surprises me” ≠ it could maybe be sentient if I squint.
Thank you for taking the time to write two paragraphs pointing out my error in using the phrase “aces leetcode” after I acknowledged and corrected the mistake myself; maybe you have some word quota you were trying to fulfill with that. Inference time being dependent on the length of the output sequence has been a constant since the first attention paper, let alone the first transformer paper. My point is, it’s good at solving LeetCode when it’s present in the training set.
PS: also kindly refrain from passing remarks on my understanding of the subject when the only arguments you can make are refutations of others, without any intellectual dissent of your own. It’s quite easy to say, “no, I don’t believe you, prove it” while not being able to distinguish between Q, K, and V if they hit you in the face.
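(For readers: Q, K, and V are the query, key, and value matrices of transformer attention. A bare-bones NumPy sketch of scaled dot-product attention - illustrative shapes, not any particular library's API:)

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q: (seq_q, d_k), K: (seq_k, d_k), V: (seq_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # attention-weighted sum of values
```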
suflaj t1_jdqh5se wrote
> GPT hallucinates a lot and is unreliable for any factual work.
No, I understand that's what you're saying; however, this is not a claim that you can even check. You have already demonstrated that your definitions are not aligned with generally accepted ones (particularly for intuition), so without concrete examples this statement is hard to take seriously.
> Your wall of text can be summarized as, “I’m gonna debate you by suggesting no one knows the definition of AGI.”
I'm sad that's what you got from my response. The point was to challenge your claims about whether GPT4 is or isn't AGI, given that you're judging that based on properties which might be irrelevant to the definition. It is sad that you are personally attacking me instead of addressing my concerns.
> No one knows what the definition of intuition is
That is not correct. Here are some definitions of intuition:
- an ability to understand or know something immediately based on your feelings rather than fact (Cambridge)
- the power or faculty of attaining to direct knowledge or cognition without evident rational thought and inference (Merriam-Webster)
- a natural ability or power that makes it possible to know something without any proof or evidence: a feeling that guides a person to act a certain way without fully understanding why (Britannica)
You might notice that all three of these definitions are satisfied by DL models in general.
> but what we know is that memory does not play a part in it.
This is also not true: https://journals.sagepub.com/doi/full/10.1177/1555343416686476
The question is - why are you making stuff up despite the counterevidence being 1 Google search away?
> It’s actually hilarious that you bring up source citation as some form of trump card after I mention how everything you know about GPT4 is something someone has told you to believe in without any real discernible and reproducible evidence.
I bring it up as you have not provided any other basis for your claims. You refuse to provide the logs for your claims to be checked. Your claims are contrary to my experience, and it seems others' experience as well. You claim things contrary to contemporary science. I do not want to discard your claims outright, I do not want to personally attack you despite being given ample opportunity to do so, I'm asking you to give me something we can discuss and not turn it into "you're wrong because I have a different experience".
> Instead of asking me to spoon-feed you, spend a whole 20 seconds googling.
I'm not asking you to spoon feed me, I'm asking you to carry your own burden of proof. It's really shameful for a self-proclaimed person in academia to be offended by someone asking them for elaboration.
Now, could you explain what those links mean? The first one, for example, does not help your cause. Not only does it concern Bard, a model significantly less performant than even ChatGPT, rather than GPT4; it also claims that the model is not actually hallucinating, but failing to understand sarcasm.
The second link also doesn't help your cause - rather than examining the generalization potential of the model, it suggests the issue is with the data. It also does not evaluate the newer problems as a whole, only a subset.
The 3rd and 4th links also do not help your cause. First, they do not claim what you are claiming. Second, they list concerns (and I applaud them for at least elaborating a lot more than you), but they do not really test them. Rather than claims, they present hypotheses.
> “I don’t quite get how it works” + “it surprises me” ≠ it could maybe be sentient if I squint.
Yeah. Also note: "I don't quite get how it works" + "It doesn't satisfy my arbitrary criteria on generalization" ≠ It doesn't generalize
> after I acknowledged and corrected the mistake myself
I corrected your correction. It would be great if you could recognize that evaluating performance on a small subset of problems is not the same as evaluating whether the model aces anything.
> maybe you have some word quota you were trying to fulfill with that
Not at all. I just want to be very clear, given that I am criticising your (in)ability to clearly present arguments; doing otherwise would be hypocritical.
> My point is, it’s good at solving LeetCode when it’s present in the training set.
Of course it is. However, your actual claim was this:
> Also, the ones it does solve, it solves at a really fast rate.
Your claim suggested that the speed at which it solves a problem is somehow relevant to which problems it solves correctly. This is demonstrably false, and that is what I corrected you on.
> PS: also kindly refrain from passing remarks on my understanding of the subject when the only arguments you can make are refutations of others, without any intellectual dissent of your own.
I am not passing such remarks. You yourself claim you are not all that familiar with the topic. Some of your claims have cast doubt not only on your competence on the matter, but now even on the truthfulness of your experiences. For example, I have begun to doubt whether you have even used GPT4, given your reluctance to provide your logs.
The argument I am making is that I don't have the same experience. And that's not only me... Note, however, that I am not confidently saying that I am right or that you are wrong - I am, first and foremost, asking you to provide the logs so we can check your claims, which for now are contrary to the general public's experience. Then we can discuss what actually happened.
> It’s quite easy to say, “no, I don’t believe you, prove it” while not being able to distinguish between Q, K, and V if they hit you in the face.
It's also quite easy to copy-paste the logs that could save us from what has now turned into a debate (and might soon lead to a block if the personal attacks continue), yet here we are.
So I ask you again - can you provide us with the logs in which you experienced hallucinations?
EDIT, since he (u/BellyDancerUrgot) downvoted and blocked me:
> “Empty vessels make much noise” seems to be a quote you live by. I’ll let the readers of this thread determine who between us has contributed to the discussion and who writes extensively verbose commentary, ironically, with zero content.
I think whoever reads this is going to be sad. Ultimately, I think you should make sure as few people as possible see this; this kind of approach brings shame not only to your academic career, but also to you as a person. You are young, though, so in time you will learn not to be overly enthusiastic.
BellyDancerUrgot t1_jds7yao wrote
“Empty vessels make much noise” seems to be a quote you live by. I’ll let the readers of this thread determine who between us has contributed to the discussion and who writes extensively verbose commentary, ironically, with zero content.