Depends on the transformer, but generally yes. Pretraining BERT costs something like $10k in compute, maybe less now. You can train a BiLSTM from scratch on a single consumer card for a similar task in a day or so.
Transformers gain the most as the training corpus grows, measured by log-likelihood performance. It is in the regime of large datasets and long sequence lengths that transformers really stand out.
Transformers do well with lots of data because the architecture is extremely flexible and generic. In a fully connected network, each input is mapped to the next layer through a weight matrix that is fixed regardless of the input. In a transformer, the attention blocks compute the "effective" weight matrices on the fly from the query, key, and value vectors, so they depend on the inputs themselves. The upshot is that a transformer needs a lot of data before it outperforms less flexible architectures such as LSTMs or fully connected networks.
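To make the input-dependence concrete, here's a minimal PyTorch sketch (my own illustration, not from the comment above) contrasting a fixed fully connected weight matrix with single-head self-attention, where the mixing weights are computed from the inputs:

```python
# Minimal sketch: an MLP's weight matrix is the same for every input, while
# attention's "effective" mixing weights are computed from the inputs.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 16                      # embedding dimension
x = torch.randn(5, d)       # toy sequence of 5 token embeddings

# Fully connected layer: W is fixed with respect to the input.
W = torch.randn(d, d)
mlp_out = x @ W

# Single-head self-attention: queries, keys and values are projections of x,
# so the attention matrix (the effective mixing weights) depends on x itself.
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)   # shape (5, 5), changes with x
attn_out = attn @ v
```

The `attn` matrix is recomputed for every new `x`, whereas `W` never changes, which is the flexibility being described here.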
Calling fully connected networks "less flexible" than transformers sounds misleading. Although transformers are very generic, as far as I can see they have much more inductive bias than, e.g., an MLP that takes the whole sequence of word embeddings as input.
I don't think that's true. It would imply that Bi-LSTMs reach good performance faster than Transformers, and Transformers catch up later during training.
I've never seen proof of that, nor does my personal experience confirm it.
It depends on the accuracy you want. I can train a transformer in 30 minutes on 30k sentences with an RTX 2070 Super and get meaningful embeddings (similar words are close to each other). It works, but as with any model it won't be SOTA unless you use billions of sentences, a much larger model, and many more GPUs.
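For what it's worth, here's a rough PyTorch sketch of that kind of small-scale setup (my own illustration, not the poster's actual code): a tiny transformer encoder trained with a masked-token objective, with a placeholder corpus and made-up hyperparameters.

```python
# Rough sketch only: tiny transformer encoder + masked-token objective to learn
# word embeddings. Corpus, model size and schedule are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

sentences = ["the cat sat on the mat", "the dog sat on the rug"]  # stand-in for ~30k sentences
vocab = {w: i + 2 for i, w in enumerate(sorted({w for s in sentences for w in s.split()}))}
PAD, MASK = 0, 1
V, d = len(vocab) + 2, 64

embed = nn.Embedding(V, d, padding_idx=PAD)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True), num_layers=2
)  # (positional encodings omitted for brevity)
to_vocab = nn.Linear(d, V)
params = list(embed.parameters()) + list(encoder.parameters()) + list(to_vocab.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

def masked_batch():
    ids = [torch.tensor([vocab[w] for w in s.split()]) for s in sentences]
    x = nn.utils.rnn.pad_sequence(ids, batch_first=True, padding_value=PAD)
    targets = x.clone()
    mask = (torch.rand(x.shape) < 0.15) & (x != PAD)   # mask ~15% of real tokens
    x = x.masked_fill(mask, MASK)
    targets = targets.masked_fill(~mask, -100)         # -100 = ignored by cross_entropy
    return x, targets

for step in range(200):
    x, targets = masked_batch()
    if (targets != -100).sum() == 0:   # nothing got masked this step, skip it
        continue
    logits = to_vocab(encoder(embed(x)))
    loss = F.cross_entropy(logits.view(-1, V), targets.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# embed.weight now holds the learned word vectors; on a real corpus, cosine
# similarity between rows is what puts "similar words close to each other".
```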
I was told the same thing and I wouldn't agree: you need a huge pretraining process if you want SOTA results. If you can compromise, you don't need as much data, but an LSTM might perform better with little data.