Yeah but it's super iffy. My exact script works most of the time, so idk even what to fix. That's why I just want to use something else, the software is obviously not stable.
not_particulary t1_jd51f0h wrote
Reply to [D] Running an LLM on "low" compute power machines? by Qwillbehr
There's a lot coming up. I'm looking into it right now, here's a tutorial I found:
https://medium.com/@martin-thissen/llama-alpaca-chatgpt-on-your-local-computer-tutorial-17adda704c23
Here's something unique: a smaller LLM (under 1B parameters) that outperforms GPT-3.5 on the ScienceQA benchmark. It's multimodal and based on T5, which makes it much more runnable on consumer hardware.
https://arxiv.org/abs/2302.00923