Takadeshi t1_iw09t4e wrote

Idk, that might be true, but we don't really know the limits of scaling these models, nor the limits of how much faster we can make ML hardware. Expert opinion on the latter suggests quite a lot, though; GPUs are really just the tip of the iceberg when it comes to designing hardware to train models.