Submitted by floppy_llama t3_1266d02 in MachineLearning
pier4r t1_jead39m wrote
As a semi-layman, while I was amazed by the progress in ML, I was skeptical of ever-increasing model sizes, needing more and more parameters to perform well. It felt like "more parameters improve things, and everything else just follows."

I asked myself whether there was any effort toward being more efficient and shrinking things, and recently I read about LLaMA and realized that that direction is now being pursued as well.