DingWrong t1_jauuc0z wrote
Reply to Meta’s LLaMa weights leaked on torrent... and the best thing about it is someone put up a PR to replace the google form in the repo with it 😂 by RandomForests92
Now if only there were seeders in that torrent..
DingWrong t1_j7klrp2 wrote
Reply to [D] Should I focus on python or C++? by NoSleep19
Focus on the basics: math, algorithms, data structures. The exact language is just a way to express them. If you want to start coding right away, Python is more common in ML atm.
DingWrong t1_j77e86c wrote
Reply to please help a bunch of students?(with pre annotated data set) we were assigned to this task with no prior knowledge of ML i don't know where to begin with we tried a couple of method which ultimately failed id be thankful for anyone who would tell me in steps what to do with this data[D] by errorr_unknown
Try asking ChatGPT. That's what the world is using for their assignments.
DingWrong t1_j0cypwn wrote
Reply to comment by vin227 in I have 6x3090 looking to build a rig by Outrageous_Room_3167
Gotcha. Mining cases are not a good option.
Open frames like this https://www.amazon.com/Mining-Computer-Currency-Bitcoin-Accessories/dp/B09CNG58R1/ should work. Building your own frame based on the available space is best, though.
DingWrong t1_j0cxdpf wrote
Reply to comment by vin227 in I have 6x3090 looking to build a rig by Outrageous_Room_3167
Exactly why I suggest Cooler Master x16 risers (they don't have x1). I use the 20 cm and 30 cm PCIe 3.0 ones, as my motherboard does not support PCIe 4.0.
The chassis is the same as a mining rig's.
DingWrong t1_j0cp68s wrote
The mining community has been building these for quite some time now. Depending on your TR mobo, you might get away with PCIe 3.0 risers. Cooler Master has some quality models.
DingWrong t1_iyq0nr0 wrote
Reply to comment by computing_professor in GPU Comparisons: RTX 6000 ADA vs A100 80GB vs 2x 4090s by TheButteryNoodle
Big models get sharded and the chunks get loaded onto each GPU. There are a lot of frameworks ready for this, since the big NLP models can't fit on a single GPU. Alpa even shards the model across different machines.
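The basic idea can be sketched without any framework: split the model's layers into contiguous chunks, put one chunk on each device, and pass activations through the chunks in order. This is a toy, framework-free illustration only; real tools (e.g. Hugging Face Accelerate's `device_map="auto"`, DeepSpeed, Alpa) handle placement, memory budgets, and cross-device transfers for you. The `shard_layers`/`forward` names and the toy "layers" here are made up for the example.

```python
# Naive layer sharding (pipeline-style), sketched in plain Python.
# Real frameworks move tensors between GPUs; here the "devices" are just
# positions in a list, to show how the model gets chunked.

def shard_layers(layers, num_devices):
    """Split a list of layers into contiguous chunks, one chunk per device."""
    chunk = -(-len(layers) // num_devices)  # ceiling division
    return [layers[i:i + chunk] for i in range(0, len(layers), chunk)]

def forward(shards, x):
    """Run the input through each shard in order, as if the activations
    were handed from one GPU to the next."""
    for device_id, shard in enumerate(shards):
        # in a real setup: x = x.to(f"cuda:{device_id}")
        for layer in shard:
            x = layer(x)
    return x

# 6 toy "layers" that each add a constant, split across 3 "devices"
layers = [lambda v, k=k: v + k for k in range(6)]
shards = shard_layers(layers, num_devices=3)
print([len(s) for s in shards])   # → [2, 2, 2]
print(forward(shards, 0))         # → 15  (0+1+2+3+4+5)
```

With 2x 4090s the same chunking applies, just with two shards; frameworks additionally pick split points so each chunk fits in that GPU's VRAM.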
DingWrong t1_jcc3axk wrote
Reply to How To Fine-tune LLaMA Models, Smaller Models With Performance Of GPT3 by l33thaxman
Is there a written version? I like reading.