
Sm0oth_kriminal t1_j7y6wv6 wrote

This is probably only the case when there's a very low "compression ratio" of model parameters to learned entropy.

Basically, if the model has "too many" parameters it can be distilled, but we've found empirically that, until that point is reached, transformers scale extremely well and are generally better than any other known architecture.
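
To make the distillation idea concrete, here's a minimal numpy sketch of the standard soft-target objective (temperature-scaled KL divergence between teacher and student outputs, in the style of Hinton et al.). The function names and the toy logits are illustrative, not from any particular library:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2.

    Zero when the student exactly reproduces the teacher's logits;
    grows as the student's distribution drifts from the teacher's.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(kl.mean() * T * T)

teacher = np.array([[4.0, 1.0, 0.5]])
print(distillation_loss(teacher, teacher))               # matches teacher: loss ~ 0
print(distillation_loss(np.array([[0.5, 1.0, 4.0]]), teacher))  # mismatched: loss > 0
```

In practice this term is mixed with the ordinary cross-entropy on hard labels, and a smaller student can often match a heavily over-parameterized teacher.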

Another topic is sparsification, which takes a trained model, cuts out some percentage of the weights that have minimal effect on the output, and then fine-tunes the resulting model. You can check out Neural Magic online and their associated work; they can run models on CPUs that would normally require GPUs.
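
The simplest version of this is unstructured magnitude pruning: zero out the smallest-magnitude fraction of a weight tensor, then fine-tune with the resulting mask held fixed. A rough numpy sketch (function name and threshold logic are my own illustration, not Neural Magic's actual pipeline):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Return a copy of `weights` with the smallest-magnitude
    `sparsity` fraction of entries set to zero (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of entries to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
w_sparse = magnitude_prune(w, sparsity=0.9)
print(np.count_nonzero(w_sparse) / w.size)  # ~0.1 of weights survive
```

Fine-tuning afterwards recovers most of the lost accuracy, and sparse kernels can then skip the zeroed weights entirely, which is what makes CPU inference competitive.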


avocadoughnut t1_j7yaq8w wrote

I'm considering a higher level idea. There's no way that transformers are the end-all-be-all model architecture. By identifying the mechanisms that large models are learning, I'm hoping a better architecture can be found that reduces the total number of multiplications and samples needed for training. It's like feature engineering.


nikgeo25 t1_j7yjicm wrote

Know any papers related to their work? "Magic" sounds deceptive...
