not_particulary OP t1_jadmsf9 wrote
Reply to comment by neu_jose in [D] More stable alternative to wandb? by not_particulary
Huh. I'm only using the offline sync, so the memory leak can only be in wandb itself.
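For context, here's roughly what I mean by offline sync (a minimal sketch; the project name and metric are placeholders, but `mode="offline"` and `wandb sync` are the actual wandb workflow):

```python
import wandb

# Offline mode: wandb writes the run to ./wandb locally and makes
# no network calls during training.
run = wandb.init(project="my-project", mode="offline")

for step in range(100):
    loss = 1.0 / (step + 1)  # placeholder metric for illustration
    run.log({"loss": loss})

run.finish()

# Upload the locally stored run afterwards from the shell:
#   wandb sync wandb/offline-run-*
```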
not_particulary OP t1_ja9g6o1 wrote
Reply to comment by Jean-Porte in [D] More stable alternative to wandb? by not_particulary
Yeah, but it's super iffy. The exact same script works most of the time, so I don't even know what to fix. That's why I just want to use something else; the software is obviously not stable.
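For anyone in the same boat, one commonly suggested drop-in is PyTorch's bundled TensorBoard writer, which logs purely to local files. A minimal sketch (assumes `torch` and `tensorboard` are installed; the run name is just a placeholder):

```python
from torch.utils.tensorboard import SummaryWriter

# Purely local logging: everything goes to files under runs/,
# with no background process or network sync to crash.
writer = SummaryWriter(log_dir="runs/experiment-1")

for step in range(100):
    loss = 1.0 / (step + 1)  # placeholder metric
    writer.add_scalar("train/loss", loss, global_step=step)

writer.close()

# Inspect with: tensorboard --logdir runs
```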
not_particulary OP t1_ja9962l wrote
Reply to comment by [deleted] in [D] More stable alternative to wandb? by not_particulary
lol
not_particulary t1_ixep6x1 wrote
Reply to comment by Dvorkam in [WP] When you learned your mother was a goddess, things finally seemed to fall into place. The other demigods laughed at you, the only child born to the goddess of the hearth, Hestia. But your power was so much more than they could dream of. by not_quite_graceful
> the power of peace withdrawn.
What a haunting line.
not_particulary t1_jd51f0h wrote
Reply to [D] Running an LLM on "low" compute power machines? by Qwillbehr
There's a lot coming up. I'm looking into it right now; here's a tutorial I found:
https://medium.com/@martin-thissen/llama-alpaca-chatgpt-on-your-local-computer-tutorial-17adda704c23
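The gist of tutorials like that one is loading a smallish causal LM in 8-bit so it fits on a consumer GPU. A rough sketch of the pattern (the checkpoint name is a placeholder, not taken from the tutorial; requires `transformers`, `accelerate`, and `bitsandbytes`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "decapoda-research/llama-7b-hf"  # hypothetical checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # quantize weights to int8 to roughly halve memory
    device_map="auto",   # spread layers across available GPU/CPU memory
)

inputs = tokenizer("Explain attention in one sentence:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```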
Here's something more interesting: Multimodal-CoT, where a smaller model (under 1B parameters) outperforms GPT-3.5 on specific tasks like ScienceQA. It's multimodal and based on T5, which is far more runnable on consumer hardware.
https://arxiv.org/abs/2302.00923
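Not the paper's exact setup, but as a sketch of how cheaply a T5-class model runs locally (`google/flan-t5-base` is just an illustrative stand-in; requires `transformers`, `torch`, and `sentencepiece`):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# ~250M parameters: small enough to run comfortably on CPU.
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")

inputs = tokenizer("Answer the question: why is the sky blue?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```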