MinaKovacs t1_jbyzv1v wrote
A binary computer is nothing more than an abacus. It doesn't matter how much you scale up an abacus, it will never achieve anything even remotely like "intelligence."
RedditLovingSun t1_jbz78cm wrote
Depends on your definition of intelligence. The human brain is nothing but a bunch of neurons passing electrical signals to each other; I don't see why it's impossible for computers to simulate something similar and achieve the same results a brain does.
MurlocXYZ t1_jbzk75t wrote
> A binary computer is nothing more than an abacus
I could say the same thing about the human brain. It's just a complex abacus.
MinaKovacs t1_jbzso7m wrote
One of the few things we know for certain about the human brain is it is nothing like a binary computer. Ask any neuroscientist and they will tell you we still have no idea how the brain works. The brain operates at a quantum level, manifested in mechanical, chemical, and electromagnetic characteristics, all at the same time. It is not a ball of transistors.
hebekec256 OP t1_jbz0mpm wrote
Yes, I understand that, but LLMs and extensions of LLMs (like PALM-E) are a heck of a lot more than an abacus. I wonder what would happen if Google just said, "screw it", and scaled it from 500B to 50T parameters. I'm guessing there are reasons in the architecture that it would just break; otherwise I can't see why they wouldn't do it, since the risk-to-reward ratio seems favorable to me.
TemperatureAmazing67 t1_jbzcn6a wrote
> extensions of LLMs (like PALM-E) are a heck of a lot more than an abacus. I wonder what would happen if Google just said, "screw it", and scaled it from 500B to 50T parameters. I'm guessing there are reasons in the architecture that it would
The problem is that we have scaling laws for NNs, and we just don't have the training data for a 50T-parameter model. We'd need to get that data from somewhere, and the answer to that question costs a lot.
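To put a rough number on the data problem: a minimal back-of-envelope sketch, assuming the commonly cited Chinchilla heuristic of roughly 20 training tokens per parameter for compute-optimal training (the exact ratio varies with the compute budget, so treat this as an order-of-magnitude estimate only):

```python
# Chinchilla-style heuristic: ~20 training tokens per parameter
# (an approximation; the optimal ratio depends on compute budget).
TOKENS_PER_PARAM = 20

def compute_optimal_tokens(n_params: float) -> float:
    """Approximate token count for compute-optimal training."""
    return TOKENS_PER_PARAM * n_params

# Compare a 500B model against the hypothetical 50T model.
for n_params in (500e9, 50e12):
    tokens = compute_optimal_tokens(n_params)
    print(f"{n_params / 1e9:,.0f}B params -> ~{tokens / 1e12:,.0f}T tokens")
```

Under that assumption, a 50T-parameter model would want on the order of 1,000T (a quadrillion) training tokens, which is far beyond any text corpus anyone has collected, which is the point being made above.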
Co0k1eGal3xy t1_jbzi8wc wrote
- Double descent: larger models are *more* data-efficient, not less.
- Most of these LLMs barely complete 1 epoch, so overfitting is not currently a concern.
MinaKovacs t1_jbz2gqw wrote
I think the math clearly doesn't work out; otherwise, Google would have monetized it already. ChatGPT is not profitable or practical for search. The cost of hardware, power consumption, and slow performance are already at the limits. It will take something revolutionary, beyond binary computing, to make ML anything more than expensive algorithmic pattern recognition.