pentin0
pentin0 t1_iruki8b wrote
Reply to comment by beachmike in What happens in the first month of AGI/ASI? by kmtrp
His point is pretty simple to understand: there is no qualitative leap between the brains and minds of people who do well at MIT and those of the common healthy bloke. He isn't claiming that anyone could "do well" in those schools (because that would imply performing at the same level on a battery of standardized tests, which are basically a proxy for IQ testing). Since no one here is claiming that we all have the same IQ, your rebuttal to his position would qualify as a strawman.
Regarding "good memory", that actually is pretty much the gist of it. It's not about having good long-term memory (the ability to "memorize" stuff) but about sufficient working memory performance (the neocortex's distributed "RAM"), which has been observed to correlate strongly with IQ. In short, the main differences among humans that are relevant to the IQ distribution seem to be quantitative in nature (mostly working memory performance, which itself is highly dependent on white matter integrity, i.e. the myelination of neuronal axons).
Notice that I didn't say "working memory size" because, as the research shows, these resources are scattered over such a sizeable portion of the brain that the relatively tiny differences in unit recruitment wouldn't explain much of the experimental data within the prevailing theories. So yeah, I'm talking about short-term memory encoding/decoding performance here.
I know it's a hard pill to swallow, but if you want to rely on "intelligence" to explain that phenomenon, you'll lose your biggest opportunity to argue for qualitative factors as the main drivers of academic performance. In fact, working memory performance (which is much more straightforwardly quantitative than intelligence) is an even better predictor of academic success, especially at higher IQs (interestingly enough, the scenario more relevant to this AGI/ASI debate).
Finally, since we're playing this game, I also went to an engineering school (studied AI), so don't expect your appeal to authority to work here. Let's be real about STEM classes: that shit might be "HARD" but it ain't witchcraft. It's also ironic that you used the driest and most clear-cut subjects as examples. It doesn't strengthen your point.
pentin0 t1_irugtmm wrote
Reply to comment by MurderByEgoDeath in What happens in the first month of AGI/ASI? by kmtrp
Some people seem to have a hatred for counterfactuals and/or abstraction. Let them live in the prison of their own emotions.
pentin0 t1_irtzztg wrote
Reply to comment by raccoon8182 in What happens in the first month of AGI/ASI? by kmtrp
Sentience isn't general intelligence.
"Having an algorithm solve any question we throw at it" is too loose to be a good definition/criterion either.
Your viewpoint is too narrow, and the one you're objecting to is too vague.
pentin0 t1_irtzfhm wrote
Reply to comment by LowAwareness7603 in What happens in the first month of AGI/ASI? by kmtrp
Username checks out
pentin0 t1_irtz87m wrote
Reply to comment by QuantumReplicator in What happens in the first month of AGI/ASI? by kmtrp
I've learned to accept the fact that most human beings (even those on this sub) are uncomfortable with nuance.
pentin0 t1_iwkf51v wrote
Reply to comment by Akimbo333 in Introducing Galactica. A large language model for science. by Qumeric
Likely not. Solving some of these problems might turn out to be equivalent to creating an AGI.
Galactica might still be great purely for summarization and review purposes, and would thus still help accelerate scientific progress.