[R] ChatGLM-6B - an open-source 6.2 billion parameter English/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer-grade GPUs (github.com)
Submitted by MysteryInc152 on March 18, 2023 at 5:01 PM in MachineLearning · 49 comments · 201 points
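Since the headline claim is that this runs on consumer-grade GPUs, here is a minimal sketch of loading the model with Hugging Face transformers, following the pattern shown in the THUDM/chatglm-6b repo. Note the assumptions: the `chat()` and `quantize()` helpers come from the model's own custom code pulled in via `trust_remote_code=True`, not from core transformers.

```python
# Minimal sketch of running ChatGLM-6B via Hugging Face transformers,
# based on the usage pattern in the THUDM/chatglm-6b repo README.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()

# For smaller consumer GPUs (~6 GB VRAM), the repo exposes INT4 quantization:
# model = model.quantize(4)

model = model.eval()
# chat() is a helper defined in the model's custom code, not a core transformers API.
response, history = model.chat(tokenizer, "你好", history=[])
print(response)  # Replies in Chinese or English depending on the prompt.
```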
Temporary-Warning-34 wrote on March 18, 2023 at 5:23 PM (6 points):
RP isn't forever, though.
MysteryInc152 (OP) wrote on March 18, 2023 at 5:26 PM (8 points):
Oh, for sure. I changed it to "long context"; I think that's better. I just meant there's no hard context limit.
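On why there is no hard context limit: as I understand it, the GLM architecture encodes position with rotary embeddings rather than a learned absolute-position table, so any position index is well-defined at inference time, even though quality presumably degrades well past the training length (which seems to be the point of the comment above). A minimal, illustrative sketch in plain PyTorch; the function and variable names are my own, not from the ChatGLM codebase:

```python
import torch

def rotary_embedding(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embedding to x of shape (seq_len, dim), dim even."""
    seq_len, dim = x.shape
    # Frequencies depend only on the dimension index. There is no learned
    # table indexed by position, so no maximum length is baked into weights.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    pos = torch.arange(seq_len).float()
    angles = torch.outer(pos, inv_freq)  # (seq_len, dim // 2)
    cos, sin = angles.cos(), angles.sin()
    # Rotate each consecutive pair of features by a position-dependent angle.
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Works for any sequence length, including ones longer than seen in training:
q = torch.randn(4096, 64)
print(rotary_embedding(q).shape)  # torch.Size([4096, 64])
```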