[D] Looking for suggestions on setting up autoscaling on GPU servers for AI inference (without Kubernetes)?
Submitted by fgp121 (t3_yhjpo2) on October 30, 2022 at 5:08 PM in MachineLearning · 5 comments · 3 points
alibrarydweller (t1_iuedvyb) wrote on October 30, 2022 at 6:18 PM · 3 points:
You might look at Nomad -- it manages containers like K8s, but it's significantly simpler. We run GPU jobs on it, although we don't currently autoscale.
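For context, here is a minimal sketch of what a GPU job can look like in a Nomad job spec, assuming the client nodes have the NVIDIA device plugin enabled. The job/group/task names, the image `my-inference-image:latest`, the port, and the resource numbers are placeholders, not anything from the thread:

```hcl
# Sketch of a Nomad job for a containerized GPU inference service.
# Assumes the NVIDIA device plugin is enabled on the client nodes;
# image name, ports, and resource sizes below are placeholders.
job "inference" {
  datacenters = ["dc1"]
  type        = "service"

  group "api" {
    # Horizontal scaling is done by changing this count
    # (manually, via the API, or with the separate Nomad Autoscaler project).
    count = 1

    network {
      port "http" {
        to = 8000
      }
    }

    task "model-server" {
      driver = "docker"

      config {
        image = "my-inference-image:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 2000  # MHz
        memory = 4096  # MB

        # Request one NVIDIA GPU through Nomad's device plugin system.
        device "nvidia/gpu" {
          count = 1
        }
      }
    }
  }
}
```

Note that plain Nomad schedules the jobs but does not autoscale them by itself; HashiCorp publishes a separate Nomad Autoscaler that can adjust group counts based on metrics, which may be worth evaluating for the inference use case in the question.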