royalemate357 t1_j9rsqd3 wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>We are not even remotely close to anything like actual brain functions.

Intelligence need not look anything remotely close to actual brain functions though, right? Like a plane's wings don't function anything like a bird's wings, yet it can still fly. In the same sense, why couldn't intelligence be algorithmic?
At any rate, I feel like saying that probabilistic machine learning approaches like GPT-3 are nowhere near intelligence is a bit of a stretch. If you keep scaling these approaches up, you get closer and closer to the entropy of natural language (or whatever other domain), and if you've learned the exact distribution of language, imo that would be "understanding".
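To make the entropy point concrete: a model's cross-entropy on text drawn from the true distribution is always at least the entropy of that distribution, and the gap closes exactly when the model has learned the distribution. Rough toy sketch (the 4-token "language" and numbers are made up, not from any real model):

```python
import numpy as np

# A made-up 4-token "language" with true next-token distribution p,
# and an imperfect model q that approximates it.
p = np.array([0.50, 0.25, 0.15, 0.10])   # true distribution (hypothetical)
q = np.array([0.40, 0.30, 0.20, 0.10])   # model's learned distribution (hypothetical)

entropy_p = -np.sum(p * np.log2(p))         # entropy of the "language"
cross_entropy_pq = -np.sum(p * np.log2(q))  # model's expected log loss on that language

print(f"H(p)    = {entropy_p:.4f} bits")
print(f"H(p, q) = {cross_entropy_pq:.4f} bits")
# H(p, q) >= H(p) always, and the gap is KL(p || q); it hits zero only
# when q == p, i.e. when the model has learned the exact distribution.
```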
wind_dude t1_j9rvmbb wrote
When they scale, they hallucinate more and produce more wrong information, thus arguably getting further from intelligence.
royalemate357 t1_j9rzbbc wrote
>When they scale, they hallucinate more and produce more wrong information
Any papers/literature on this? AFAIK they do better and better on fact/trivia benchmarks and whatnot as you scale them up. It's not like smaller (GPT-like) language models are factually more correct ...
wind_dude t1_j9s1cr4 wrote
I'll see if I can find the benchmarks; I believe there are a few papers from IBM and DeepMind talking about it, and a benchmark study in relation to FLAN.
MinaKovacs t1_j9s04eh wrote
It's just matrix multiplication and derivatives. The only real advance in machine learning over the last 20 years is scale. Nvidia was very clever and made a math processor that can do matrix multiplication 100x faster than general-purpose CPUs. As a result, the $1bil data center required to make something like GPT-3 now only costs $100mil. It's still just a text bot.
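(Toy illustration of the "matrix multiplication and derivatives" point, since it's literally true for a plain feed-forward layer; shapes and the loss here are made up, just a minimal numpy sketch:)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 512))   # batch of inputs (made-up sizes)
W = rng.standard_normal((512, 512))  # one layer's weight matrix

# Forward pass: a matrix multiply plus a pointwise nonlinearity.
z = x @ W
h = np.maximum(z, 0.0)               # ReLU(xW)
loss = 0.5 * np.sum(h ** 2)          # toy loss, just to have something to differentiate

# Backward pass: the chain rule -- derivatives, expressed as more matmuls.
dh = h                               # dLoss/dh for this toy loss
dz = dh * (z > 0)                    # gradient through the ReLU
dW = x.T @ dz                        # dLoss/dW: another matrix multiply
# The x @ W and x.T @ dz products are exactly what GPU tensor cores accelerate.
```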