currentscurrents t1_j2csenb wrote
Reply to comment by nogop1 in [R] LAMBADA: Backward Chaining for Automated Reasoning in Natural Language - Google Research 2022 - Significantly outperforms Chain of Thought and Select Inference in terms of prediction accuracy and proof accuracy. by Singularian2501
The number of layers is a hyperparameter, and people run hyperparameter optimization to find good values for it; a sketch of what that looks like is below.
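For instance, here is a minimal sketch of treating depth as a tunable hyperparameter via grid search. The dataset, the candidate grid, and the use of scikit-learn are all illustrative assumptions on my part, not anything from the paper:

```python
# Illustrative sketch: depth as a hyperparameter chosen by validation score.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Toy dataset standing in for a real task.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Candidate depths: each tuple is one hidden-layer configuration.
param_grid = {"hidden_layer_sizes": [(64,), (64, 64), (64, 64, 64)]}

search = GridSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_grid,
    cv=3,  # 3-fold cross-validation scores each depth
)
search.fit(X, y)
print(search.best_params_)  # the depth that scored best on held-out folds
```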
Model size does seem to follow a real scaling law. It's possible that we will come up with better algorithms that work in smaller models, but it's also possible that neural networks simply need to be big to be useful. With tens of billions of neurons and on the order of a hundred trillion synaptic connections, the human brain is certainly a very large network.
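For concreteness, the empirical scaling law from Kaplan et al. (2020) relates loss to parameter count N as L(N) = (N_c / N)^α_N. A toy sketch using their reported constants, for illustration only:

```python
# Kaplan et al. (2020) power-law fit for loss vs. parameter count N.
# The constants below are their reported values; treat this as illustrative.
def scaling_law_loss(n_params, n_c=8.8e13, alpha_n=0.076):
    return (n_c / n_params) ** alpha_n

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {scaling_law_loss(n):.2f}")
```

The small exponent is the point: loss keeps falling as N grows, but slowly, which is why making models bigger has been such a reliable (if expensive) way to improve them.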