Submitted by Singularian2501 t3_yx7zft in MachineLearning
Paper: https://arxiv.org/abs/2211.04325
Blog: https://epochai.org/blog/will-we-run-out-of-ml-data-evidence-from-projecting-dataset
Abstract:
>We analyze the growth of dataset sizes used in machine learning for natural language processing and computer vision, and extrapolate these using two methods: using the historical growth rate and estimating the compute-optimal dataset size for future predicted compute budgets. We investigate the growth in data usage by estimating the total stock of unlabeled data available on the internet over the coming decades. Our analysis indicates that the stock of high-quality language data will be exhausted soon, likely before 2026. By contrast, the stock of low-quality language data and image data will be exhausted only much later: between 2030 and 2050 (for low-quality language) and between 2030 and 2060 (for images). Our work suggests that the current trend of ever-growing ML models that rely on enormous datasets might slow down if data efficiency is not drastically improved or new sources of data become available.
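To make the extrapolation concrete, here is a toy sketch of the two methods the abstract names: projecting the historical growth rate of dataset sizes until it crosses the estimated data stock, and deriving a compute-optimal dataset size for a projected compute budget (in the spirit of the Chinchilla scaling heuristics). All constants are illustrative assumptions, not the paper's fitted estimates:

```python
import math

# Method 1: extrapolate the historical growth rate of dataset sizes and
# find when it crosses the estimated stock of data.
current_year = 2022
dataset_tokens = 1e12    # assumed tokens consumed by a large model today
annual_growth = 0.5      # assumed ~50%/year growth in training-set size
stock_tokens = 1e14      # assumed total stock of usable language tokens

# Solve dataset_tokens * (1 + g)**t = stock_tokens for t.
t = math.log(stock_tokens / dataset_tokens) / math.log(1 + annual_growth)
print(f"Method 1: stock exhausted around {current_year + t:.0f}")

# Method 2: compute-optimal dataset size for a projected compute budget,
# using the Chinchilla heuristics C ~ 6*N*D and D ~ 20*N, so D = sqrt(C/0.3).
projected_compute = 1e26  # assumed future training budget in FLOPs
optimal_tokens = math.sqrt(projected_compute / 0.3)
print(f"Method 2: compute-optimal dataset of ~{optimal_tokens:.2e} tokens")
```

The first method asks when current practice hits the wall; the second asks how much data future compute budgets would *want*, which can hit the wall even sooner.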
Possible solutions based on the following papers:
https://arxiv.org/abs/2112.04426 , https://arxiv.org/abs/2111.00210 and https://openreview.net/forum?id=NiEtU7blzN . Retrieval mechanisms (RETRO), sample-efficient RL (EfficientZero), and synthetic data can be seen as possible solutions, though each still needs improvement.
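For the retrieval direction, a minimal sketch of the core idea behind RETRO (arXiv:2112.04426): keep the corpus outside the model and fetch nearest-neighbor chunks to condition generation on, so a fixed parameter count can exploit far more data than fits in its weights. The hashed bag-of-words `embed` and the `retrieve` helper below are toy stand-ins, not RETRO's actual frozen-BERT encoder or its API:

```python
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash each token into a fixed-size count vector."""
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# The retrieval database: text chunks plus their precomputed embeddings.
chunks = [
    "the stock of high-quality text is finite",
    "retrieval lets a small model use a huge corpus",
    "efficient rl agents learn from few environment samples",
]
index = np.stack([embed(c) for c in chunks])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query (cosine similarity)."""
    scores = index @ embed(query)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# A language model would prepend these neighbors to its context window.
print(retrieve("how can models use a big corpus without more parameters"))
```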
ktpr t1_iwode1v wrote
What’s wrong with self-supervision? It enables combinatorial expansion of dataset sizes if the task is specified well.
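A small illustration of the commenter's point, under the assumption that the self-supervised task is masked language modeling: each distinct mask pattern over a sentence yields a separate (input, target) training pair, so the number of pairs grows combinatorially with sentence length:

```python
import itertools

def masked_examples(tokens, max_masks=2):
    """Yield (masked input, targets) pairs for every small mask pattern."""
    for r in range(1, max_masks + 1):
        for positions in itertools.combinations(range(len(tokens)), r):
            masked = list(tokens)
            targets = {}
            for p in positions:
                targets[p] = masked[p]
                masked[p] = "[MASK]"
            yield masked, targets

sentence = "models may run out of high quality data".split()
examples = list(masked_examples(sentence))
# 8 choose 1 + 8 choose 2 = 36 pairs from a single 8-token sentence.
print(f"{len(examples)} training pairs from one {len(sentence)}-token sentence")
```

The catch, per the paper, is that this multiplies *views* of the same underlying data, not the stock of novel text itself.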