Submitted by Zondartul t3_zrbfcr in MachineLearning
limapedro t1_j13qfxr wrote
The cheaper option would be to run it on two RTX 3060s! At around 300 USD each, you could buy two for 600ish! There's also the 16 GB A770 from Intel! To run a very large model you can split the weights into so-called blocks; I was able to test this myself in a simple Keras implementation, but the conversion code is hard to write, although I think I've seen something similar from HuggingFace!
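The weight-splitting idea above can be sketched in plain NumPy (a minimal illustration of the arithmetic only, not device placement; the names `split_matmul` and `n_blocks` are made up for this example, not from Keras or HuggingFace):

```python
import numpy as np

def split_matmul(x, W, n_blocks):
    """Compute x @ W by splitting W column-wise into n_blocks pieces.

    Each partial product only needs one block of W resident in memory,
    so the blocks could live on different GPUs (or be loaded one at a
    time) and the partial outputs concatenated at the end.
    """
    blocks = np.array_split(W, n_blocks, axis=1)
    partials = [x @ blk for blk in blocks]
    return np.concatenate(partials, axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 16))

# Block-wise result matches the full matmul.
assert np.allclose(split_matmul(x, W, 2), x @ W)
assert np.allclose(split_matmul(x, W, 4), x @ W)
```

In a real two-GPU setup the blocks would be placed on separate devices and the concatenation would happen on one of them, which is roughly what HuggingFace's `accelerate` automates with `device_map="auto"`.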
maizeq t1_j162xtk wrote
How is the tooling and performance for the A770 on machine learning workloads? Do you have any experience with it?
limapedro t1_j175nby wrote
No, I haven't! Although in theory it should be really good. You could still run deep learning workloads using DirectML, but a native implementation should be really fast because of its XMX cores, which are similar to NVIDIA's Tensor Cores.
wywywywy t1_j18a6g2 wrote
I haven't tried it myself, but Intel has their own distribution of Python and their own PyTorch extension. Judging from some of the GitHub comments, they seem to be quite usable.
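A minimal sketch of probing for that extension at runtime, assuming the importable module name `intel_extension_for_pytorch` (the name Intel publishes on PyPI as `intel-extension-for-pytorch`); the helper name `has_ipex` is illustrative:

```python
import importlib.util

def has_ipex() -> bool:
    """Return True if Intel's PyTorch extension is importable."""
    return importlib.util.find_spec("intel_extension_for_pytorch") is not None

if has_ipex():
    # The extension exposes ipex.optimize(model) to apply
    # Intel-specific kernel and graph optimizations.
    import intel_extension_for_pytorch as ipex  # noqa: F401
    print("IPEX available")
else:
    print("IPEX not installed; falling back to stock PyTorch")
```

This kind of guard lets a script run unchanged on machines with and without Intel hardware.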