bo_peng OP t1_j4rht4i wrote
Reply to comment by currentscurrents in [P] RWKV 14B Language Model & ChatRWKV : pure RNN (attention-free), scalable and parallelizable like Transformers by bo_peng
RWKV is an RNN that also works as a linear transformer (or we may say it's a linear transformer that also works as an RNN). So it has both parallel & serial modes, and you get the best of both worlds (fast parallel training, and VRAM-friendly serial inference).
Almost all such "linear transformers" are bad at language modeling, but RWKV is the exception. The basic idea is a bit similar to the Attention Free Transformer (https://arxiv.org/abs/2105.14103). Then I added lots of new ideas :)
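To illustrate the "parallel & serial" point, here is a minimal sketch (my own simplification for this comment, not the actual RWKV-14B code) of the AFT-style time mixing that RWKV builds on: each output is a decayed, exp(k)-weighted average of past values. I'm assuming a single scalar decay and omitting the channel-wise decay, the current-token "bonus" term, and the numerical-stability tricks of the real implementation; `wkv_serial` and `wkv_parallel` are hypothetical names.

```python
# Simplified AFT/RWKV-style time mixing in two equivalent modes.
# NOT the real RWKV kernel: scalar decay, no bonus term, no stability tricks.
import numpy as np

def wkv_serial(k, v, decay):
    """RNN (serial) mode: constant-size state per channel, one step per token."""
    T, C = k.shape
    out = np.empty((T, C))
    num = np.zeros(C)  # running exp(k)-weighted sum of values
    den = np.zeros(C)  # running sum of weights
    for t in range(T):
        num = decay * num + np.exp(k[t]) * v[t]
        den = decay * den + np.exp(k[t])
        out[t] = num / den
    return out

def wkv_parallel(k, v, decay):
    """Transformer-like (parallel) mode: all timesteps at once via a
    lower-triangular decay matrix; gives the same result as the loop."""
    T, C = k.shape
    idx = np.arange(T)
    # weight of token i when producing output t: decay**(t - i) for i <= t
    W = np.where(idx[None, :] <= idx[:, None],
                 float(decay) ** (idx[:, None] - idx[None, :]), 0.0)  # (T, T)
    ek = np.exp(k)
    return (W @ (ek * v)) / (W @ ek)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, v = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
    assert np.allclose(wkv_serial(k, v, 0.9), wkv_parallel(k, v, 0.9))
```

The parallel form is what lets training batch over the whole sequence like a transformer, while the serial form carries only a small recurrent state at inference time.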
_Arsenie_Boca_ t1_j4rxdt8 wrote
Is there some more detailed description? Would be interesting to read about all these new ideas :)
currentscurrents t1_j4s2n9t wrote
It looks like he goes into a lot more detail on his GitHub.
mrconter1 t1_j4wq1zs wrote
How does the memory scale with the context window size?