AllowFreeSpeech t1_jeevp3b wrote
What bothers me is that most researchers don't bother to use any model compression or efficiency techniques. They want others to pay for their architectural inefficiencies. IMO such funding could be a bad idea if it were to stifle competition among neural architectures, and a good idea otherwise.
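To be concrete about how low-effort some of these techniques are, here is a toy sketch (assuming PyTorch; the model shape and layer sizes are arbitrary placeholders) of off-the-shelf dynamic int8 quantization, which shrinks the weight footprint of linear layers with a one-line call:

```python
import torch
import torch.nn as nn

# Arbitrary toy model standing in for whatever the researcher trained.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Dynamic quantization: weights of Linear layers stored as int8,
# activations quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface, smaller weight footprint
```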
For example, is matrix-matrix multiplication necessary, or can matrix-vector multiplication do the job? Similarly, are dense networks necessary, or can sparse networks do the job (see the sketch below)? Alternatively, the funding could go toward engineering optical and analog hardware that is significantly more power-efficient.
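As a minimal sketch of the sparse-vs-dense question (again assuming PyTorch, reasonably recent so that CSR tensors support matmul; the 90% pruning fraction and layer size are arbitrary), one can magnitude-prune a dense layer and then run a sparse matrix-vector product against it:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(1024, 1024)

# Zero out 90% of the weights by L1 magnitude, then bake the mask in.
prune.l1_unstructured(layer, name="weight", amount=0.9)
prune.remove(layer, "weight")

nonzero = layer.weight.count_nonzero().item()
total = layer.weight.numel()
print(f"nonzero weights: {nonzero}/{total} ({100 * nonzero / total:.1f}%)")

# The savings only materialize if storage and kernels exploit the sparsity.
sparse_w = layer.weight.to_sparse_csr()
x = torch.randn(1024)
y = sparse_w @ x  # sparse matrix-vector product instead of dense matmul
```

Whether accuracy survives that level of pruning is exactly the kind of question such funding should pressure researchers to answer, rather than defaulting to dense compute.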