I am talking about ML in general; language processing was just a tangible example for the sake of this post.
Models keep one-upping each other in size and capabilities, but do you see meaningful potential for size reduction (through configuration or new approaches) in more specific use cases?
naequs OP wrote, replying to mocialov:
This is exactly the philosophical motivation behind this post.