Submitted by SAbdusSamad t3_10siibd in MachineLearning
Erosis t1_j72rzdl wrote
Reply to comment by SAbdusSamad in [D] Understanding Vision Transformer (ViT) - What are the prerequisites? by SAbdusSamad
You'll probably be fine learning transformers directly, but a better understanding of RNNs will make many of the NLP tutorials and papers involving transformers easier to follow.

Attention is a very important component of transformers, but attention can be applied to RNNs, too.
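To make that concrete, here's a minimal sketch (assuming PyTorch; all names, shapes, and the GRU setup are illustrative, not from the thread) of scaled dot-product attention, the same core operation transformers use, applied over the outputs of an RNN:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q: (batch, q_len, d), k/v: (batch, kv_len, d)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # similarity of queries to keys
    weights = F.softmax(scores, dim=-1)            # attention distribution over keys
    return weights @ v                             # weighted sum of values

# Illustrative setup: attend from an RNN's final hidden state (query)
# over all of its timestep outputs (keys and values).
batch, seq_len, d = 2, 10, 32
rnn = torch.nn.GRU(d, d, batch_first=True)
x = torch.randn(batch, seq_len, d)
outputs, h_n = rnn(x)                  # outputs: (batch, seq_len, d)
query = h_n[-1].unsqueeze(1)           # (batch, 1, d)
context = scaled_dot_product_attention(query, outputs, outputs)
print(context.shape)                   # torch.Size([2, 1, 32])
```

The difference is just where the queries, keys, and values come from: a transformer's self-attention derives all three from the same sequence, while the RNN sketch above queries with the final hidden state.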
SAbdusSamad OP t1_j759v4v wrote
I agree that having a background in RNNs, and in attention as used with RNNs, can make learning transformers, and by extension ViT, much easier.