gkamer8 t1_iz0ad6v wrote

I’ve been trying to train a transformer from scratch on a couple of books, in the hope that it can give me English-ish text even if it’s overfitting. The model is getting stuck: at every position it predicts the same distribution, with “space” as the most likely token, “comma” second, “and” third, and so on. Has anyone run into similar issues, or can you help me brainstorm possible causes? Some things I’ve checked/tried so far:

  • The model can learn a toy problem where sequences are either “abc” or “def”: the first token is a/d at 50%, and the remaining tokens are ~99% correct because the model can tell whether the sequence started with a or d. So the model is not completely broken.
  • Warmup / long warmup. I used the learning rate schedule from Vaswani et al. (sketched after this list). Last night I tried a much longer warmup with the learning rate multiplied by 0.01; no dice.
  • Layer norm epsilon: added one for numerical stability.
  • Input/output embeddings use shared weights, and input embeddings are multiplied by 1/sqrt(d_model) (see the tying sketch below).
  • Using label smoothing = 0.1 on my cross-entropy loss (also sketched below).
  • Increased the effective batch size by accumulating gradients, so each accumulated batch has about 20k tokens (accumulation loop below).
  • Ran overnight in hopes that it would break out of the local minimum; it didn’t.
  • Using the Adam optimizer.
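For concreteness, the warmup schedule is the one from the paper; roughly, in Python (the defaults here are the “base” values):

```python
# Learning-rate schedule from Vaswani et al. (2017):
# lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)
def noam_lr(step: int, d_model: int = 512, warmup_steps: int = 4000) -> float:
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The peak rate lands at step == warmup_steps; for d_model=512, warmup=4000
# that peak is roughly 7e-4 before any extra multipliers.
```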
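The tied embeddings, in NumPy-style pseudo-code of my setup (the init scale below is just a placeholder):

```python
import numpy as np

d_model, vocab_size = 512, 50257                          # GPT-2 BPE vocab size
W_emb = np.random.normal(0, 0.02, (vocab_size, d_model))  # placeholder init scale

def embed(token_ids: np.ndarray) -> np.ndarray:
    # my scaling; NB: Vaswani et al. multiply by sqrt(d_model), not 1/sqrt(d_model)
    return W_emb[token_ids] / np.sqrt(d_model)

def output_logits(hidden: np.ndarray) -> np.ndarray:
    return hidden @ W_emb.T                               # output projection shares W_emb
```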
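The smoothed loss, written out (this assumes the usual convention of spreading eps uniformly over the whole vocabulary):

```python
import numpy as np

def smoothed_cross_entropy(log_probs: np.ndarray, targets: np.ndarray, eps: float = 0.1) -> float:
    # log_probs: (N, V) log-softmax outputs; targets: (N,) integer token ids.
    # Target distribution: (1 - eps) on the true token plus eps/V on every token.
    n, _ = log_probs.shape
    nll = -log_probs[np.arange(n), targets]   # -log p(true token)
    uniform = -log_probs.mean(axis=1)         # (1/V) * sum over the vocab of -log p
    return float(((1.0 - eps) * nll + eps * uniform).mean())
```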
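And the accumulation loop, schematically (`model.backward` and `optimizer.step` are placeholder names for my from-scratch code, not a real library API):

```python
def accumulated_update(model, optimizer, micro_batches):
    # Sum gradients over several micro-batches, then apply a single update,
    # so the effective batch is len(micro_batches) times larger.
    total_grads = None
    for batch in micro_batches:
        grads = model.backward(batch)                 # dict: param name -> gradient
        if total_grads is None:
            total_grads = {k: g.copy() for k, g in grads.items()}
        else:
            for k, g in grads.items():
                total_grads[k] += g
    for k in total_grads:
        total_grads[k] /= len(micro_batches)          # average to match one big batch
    optimizer.step(total_grads)                       # one optimizer update
```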

Some other details:

  • Using the GPT-2 tokenizer (loading sketch after this list)
  • Sequence length of 64
  • Batch size of 200
  • The model is written completely from scratch, with no PyTorch or Hugging Face libraries
  • The model has the same hyperparameters as “base” in Vaswani et al.
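For anyone reproducing this, the GPT-2 byte-pair encoding is easy to load standalone, e.g. with tiktoken (just one option for getting the same vocabulary):

```python
import tiktoken

# tiktoken is one standalone way to get the GPT-2 BPE (50,257 tokens)
enc = tiktoken.get_encoding("gpt2")
ids = enc.encode("It was the best of times, it was the worst of times.")
print(len(ids), enc.n_vocab)   # number of tokens, vocabulary size
print(enc.decode(ids))         # round-trips back to the original string
```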

Any suggestions would be appreciated.

1

Brudaks t1_iz4av37 wrote

My intuitive understanding is that transformers are so "powerful"/assumption-free that they are quite data-hungry and need far more than "a couple of books" to learn the structure.

If all you have is a couple of books, then IMHO a small RNN would give better results than a transformer (though still bad; "all the works of Shakespeare" seems like a reasonable minimum for decent results). The inflection point where the transformer architecture starts to shine is at much larger quantities of training data.

If you do want to try exactly that (overfitting included), start from a short sequence length, say 4 or 8.
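Something like this is what I mean by a small RNN, sketched in PyTorch (sizes are illustrative, not a tuned recipe):

```python
import torch.nn as nn

class TinyRNNLM(nn.Module):
    """Small GRU language model: embeddings -> GRU -> per-step vocabulary logits."""
    def __init__(self, vocab_size: int, d_emb: int = 128, d_hidden: int = 256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_emb)
        self.rnn = nn.GRU(d_emb, d_hidden, batch_first=True)
        self.out = nn.Linear(d_hidden, vocab_size)

    def forward(self, ids):                  # ids: (batch, seq_len) token ids
        h, _ = self.rnn(self.emb(ids))       # h: (batch, seq_len, d_hidden)
        return self.out(h)                   # logits: (batch, seq_len, vocab_size)

# Train with ordinary cross entropy on next-token targets, starting from
# short sequences (e.g. length 8) as suggested above.
```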

2

gkamer8 t1_iz55n2z wrote

Thanks! Since writing this, I got past that particular minimum with better initialization and a modified architecture, but it still isn’t generating terribly interesting text. I upped the dataset to about 10 books; I think I’ll download a proper large dataset to see if it does any better. Thanks!
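For anyone hitting the same wall, a standard thing to try on the init side is Xavier/Glorot scaling; sketching it here as an example of the kind of change, not my exact fix:

```python
import numpy as np

def xavier_uniform(fan_in: int, fan_out: int) -> np.ndarray:
    # Glorot/Xavier uniform init: keeps activation/gradient variance roughly
    # constant across layers, which helps avoid early collapse.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-limit, limit, size=(fan_in, fan_out))
```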

1