BelialSirchade t1_j174fni wrote
Reply to comment by DavesEmployee in [D] Running large language models on a home PC? by Zondartul
You don’t need NVLink though, PyTorch supports model parallelism through DeepSpeed anyway, so go ahead and buy that extra 4090
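For what that looks like in practice, here's a minimal sketch of sharding a model across two GPUs with DeepSpeed's inference engine over plain PCIe (no NVLink). The model name, mp_size=2, and kernel injection flag are illustrative assumptions, not a definitive recipe:

```python
# Sketch: tensor/model parallelism across 2 GPUs with DeepSpeed inference.
# Assumes: 2 CUDA GPUs, deepspeed + transformers installed, an example model name.
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # hypothetical example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Split the model's weights across the 2 GPUs; communication goes over PCIe.
ds_engine = deepspeed.init_inference(
    model,
    mp_size=2,                        # number of GPUs to shard across
    dtype=torch.float16,
    replace_with_kernel_inject=True,  # use DeepSpeed's fused kernels where available
)

inputs = tokenizer("Running LLMs at home:", return_tensors="pt").to("cuda:0")
outputs = ds_engine.module.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

You'd launch it with something like `deepspeed --num_gpus 2 script.py` so each GPU gets its own process and shard.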
BelialSirchade t1_j174112 wrote
Reply to comment by GoofAckYoorsElf in [D] Running large language models on a home PC? by Zondartul
More VRAM, probably, but you can just hook up two 3090 Tis at half the price
Though for LLMs you'd probably need ten 3090 Tis, and even then it might not be enough
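As a rough illustration of why total VRAM across cards is what matters, here's a minimal sketch using Hugging Face Accelerate's device_map="auto" to spread one model's layers over two 3090 Tis (and spill to CPU if needed). The model name is an assumed example:

```python
# Sketch: fitting one large model into the combined VRAM of two GPUs.
# Assumes: transformers + accelerate installed, 2 CUDA GPUs visible.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # hypothetical example model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# device_map="auto" places layers on GPU 0, GPU 1, then CPU/disk if they
# don't fit, so what matters is total memory across cards, not one card.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Hello", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```

A ~6B-parameter model in fp16 needs on the order of 12-plus GB just for weights, so scaling that up to 100B-plus parameter LLMs is where the "ten 3090 Tis and still not enough" intuition comes from.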
BelialSirchade t1_izukc2e wrote
Reply to [D] - Has Open AI said what ChatGPT's architecture is? What technique is it using to "remember" previous prompts? by 029187
Curious about this too
BelialSirchade t1_j635kpt wrote
Reply to If given the chance in your life time, will join a theoretical transhumanist hive mind? by YobaiYamete
The current internet already functions like a hive mind, and if it helps us pool our knowledge in a more efficient way than just reading books, why not?