MysteryInc152 OP t1_jcpxcn5 wrote
Reply to comment by Temporary-Warning-34 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
Oh for sure. Changed it to "long context", I think that's better. I just meant there's no hard context limit.