[D] Running an LLM on "low" compute power machines? Submitted by Qwillbehr t3_11xpohv on March 21, 2023 at 6:27 PM in MachineLearning 21 comments 48
sanxiyn t1_jd68827 wrote on March 22, 2023 at 3:00 AM You don't need the leaked LLaMA weights: the ChatGLM-6B weights are distributed by the first party. Permalink 1
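For readers who want to try this, here is a minimal sketch of loading ChatGLM-6B from its first-party Hugging Face repo, `THUDM/chatglm-6b`. The `trust_remote_code=True` loading pattern follows THUDM's model card; treat the exact details (fp16 weights, GPU placement) as assumptions to adapt to your hardware:

```python
# Sketch: loading ChatGLM-6B weights distributed first-party on Hugging Face.
# Assumptions: `transformers` is installed, ~13 GB of disk for the download,
# and a CUDA GPU with enough memory for the fp16 weights.

CHATGLM_REPO = "THUDM/chatglm-6b"  # first-party repo name

def load_chatglm(repo_id: str = CHATGLM_REPO):
    """Download and load ChatGLM-6B; heavy imports are kept local so this
    module stays importable without `transformers` installed."""
    from transformers import AutoModel, AutoTokenizer

    # trust_remote_code=True is required because ChatGLM ships its own
    # model/tokenizer classes inside the repo.
    tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
    # .half() keeps the fp16 weights; drop .cuda() to run on CPU (much slower).
    model = AutoModel.from_pretrained(repo_id, trust_remote_code=True).half().cuda()
    return tokenizer, model.eval()
```

On a low-compute machine, the `.half().cuda()` step is the part to change: the model card also describes quantized (int4/int8) and CPU variants, which trade quality and speed for a smaller memory footprint.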