Submitted by ortegaalfredo t3_11kr20f in MachineLearning
polawiaczperel t1_jb98qce wrote
Reply to comment by wywywywy in [R] Created a Discord server with LLaMA 13B by ortegaalfredo
Even with one RTX 3090: https://github.com/oobabooga/text-generation-webui/issues/147#issuecomment-1456626387
ortegaalfredo OP t1_jbaswga wrote
Interesting, I'll look into that code more; it's exactly what I need to run 33B.
Currently, on a single card it's still too slow to use as a chatbot.
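For context on why 33B is hard to fit on one 24 GB RTX 3090, a rough back-of-the-envelope sketch of weight memory at different precisions (weights only, ignoring activations and KV cache; the 33B parameter count is nominal, and `weight_vram_gb` is a hypothetical helper, not from any library):

```python
def weight_vram_gb(n_params_billion: float, bits_per_param: int) -> float:
    """Approximate GiB needed just to hold the weights at a given precision."""
    bytes_total = n_params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 2**30  # bytes -> GiB

for bits in (16, 8, 4):
    print(f"LLaMA 33B @ {bits}-bit: ~{weight_vram_gb(33, bits):.1f} GiB")
```

Under these assumptions, 33B needs roughly 61 GiB at fp16 and 31 GiB at 8-bit, so only 4-bit (~15 GiB) fits within a single 3090's 24 GB.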