[R] ChatGLM-6B - an open-source 6.2-billion-parameter English/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer-grade GPUs (github.com)
Submitted by MysteryInc152 (t3_11utpud) on March 18, 2023 at 5:01 PM in MachineLearning · 49 comments · 201 points
emotionalfool123 (t1_jcsmthb) wrote on March 19, 2023 at 6:36 AM, replying to wyhauyeung1: du -hs to the rescue. (2 points)
luaks1337 (t1_jctagcz) wrote on March 19, 2023 at 12:04 PM: In German this command could be interpreted as "you son of a whore". (3 points)
Tr4sHCr4fT (t1_jctempd) wrote on March 19, 2023 at 12:48 PM: ncdu ftw (2 points)
emotionalfool123 (t1_jctg58u) wrote on March 19, 2023 at 1:02 PM: Thanks for letting me know a better way. (1 point)
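For context on the `du -hs` / `ncdu` exchange above: disk usage matters when running models like ChatGLM-6B locally, since checkpoint downloads can quietly consume many gigabytes. A minimal sketch of the commands being discussed (the cache path is illustrative, not from the thread):

```shell
# Summarize the total size of a directory:
# -h  human-readable sizes (K/M/G), -s  one summary line per argument
du -hs ~/.cache/huggingface

# List immediate subdirectories sorted by size, largest first
# (-d 1 limits depth; works on both GNU and BSD du):
du -h -d 1 ~/.cache/huggingface | sort -rh

# ncdu shows the same information as an interactive, navigable TUI,
# letting you drill into and delete large directories in place:
# ncdu ~/.cache/huggingface
```

`ncdu` is a separate package (NCurses Disk Usage) and may need to be installed; `du` is part of POSIX coreutils and available everywhere.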