ZestyData t1_isocotj wrote
Reply to comment by Moppmopp in rx6900xt for ML? [D] by Moppmopp
Ah gotcha, so you are hoping to publish research!
Well, if you have access to GPU clusters then you can prototype on your machine without GPU speed (or with a slow GPU) and run the actual experiments remotely on the clusters. The functionality and process of doing ML is identical whether you're using CUDA or not.
Point stands that you'd want a CUDA card.
ZestyData t1_isobpf6 wrote
Reply to rx6900xt for ML? [D] by Moppmopp
You don't need a GPU to get into ML. You can do everything on CPU.
You only really need a GPU if you're doing industry-grade ML where vast sums of money are dependent on models training quickly on large datasets. Or if you are hoping to publish research papers - but that'd be a long way away regardless.
...However, if it's a non-CUDA card then it won't be helpful.
ZestyData t1_isk14sf wrote
Reply to [P] I built densify, a data augmentation and visualization tool for point clouds by jsonathan
I think one aspect that's really crucial and missing is the statistical/mathematical justification for using this. Before using a tool we'd need to be certain its behaviour is valid.
You mention that you use Delaunay Triangulation (which should really be emphasized higher up, since it's the crucial mechanism behind the tool). But can you provide references that justify Delaunay Triangulation as an effective method for generating data that fits an existing statistical distribution?
I haven't really used Delaunay Triangulation in this manner, but from my basic understanding of the algorithm, doesn't it attempt to create an optimal triangulation, and therefore tend towards outputting rather uniformly distributed internal points instead of learning the distribution of the input? And the more points you generate, the stronger that trend?
If that hypothesis were the case, it'd be less than useless as an artificial data source; it'd be harmful for the vast majority of use cases! I may very well be wrong, but my main point is that you should definitely document the method's statistical validity if you're advertising it as a solution.
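To make the concern concrete, here's a rough sanity check someone could run (just a sketch with numpy/scipy; the centroid-based generation is my crude stand-in for densification, not densify's actual scheme):

```python
# Rough sketch (NOT densify's actual implementation): sample a Gaussian point
# cloud, generate new points from the Delaunay simplices, and compare how the
# generated points are spread vs. the originals.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
original = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # known distribution

tri = Delaunay(original)
# One crude densification scheme: take the centroid of every simplex.
generated = original[tri.simplices].mean(axis=1)

print("original std per axis: ", original.std(axis=0))
print("generated std per axis:", generated.std(axis=0))
# If the generated points trend towards uniformly filling the convex hull,
# their spread statistics will drift away from the input distribution.
```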
ZestyData t1_iqybgm0 wrote
Reply to comment by gratus907 in [D] Why restrict to using a linear function to represent neurons? by MLNoober
get this son of a bitch a doctorate and remove their "Student" flair cos gotdamn u spittin fax
ZestyData t1_iqybakm wrote
Reply to comment by MrFlufypants in [D] Why restrict to using a linear function to represent neurons? by MLNoober
Lol thank you! Totally dropped the ball by missing that crucial element out.
ZestyData t1_iqwjiua wrote
- Layered linear nodes can model non-linear behaviours, as long as you put a cheap non-linear activation (e.g. ReLU) between the layers (see the sketch below).
- Computational complexity. It's more efficient to stack those layers of cheap linear operations than to use complex non-linear functions directly.
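To illustrate the first point, here's a minimal sketch (assuming PyTorch and a toy 2-layer network, not any particular real model):

```python
# Minimal sketch: stacking linear layers with a cheap non-linearity (ReLU)
# between them is what lets the network model non-linear behaviour.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64),   # linear map: 10 inputs -> 64 hidden units
    nn.ReLU(),           # cheap elementwise non-linearity
    nn.Linear(64, 1),    # linear map: 64 hidden units -> 1 output
)

x = torch.randn(32, 10)  # a batch of 32 toy examples
y = model(x)             # forward pass; y has shape (32, 1)
print(y.shape)
```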
ZestyData t1_isog0e4 wrote
Reply to comment by Moppmopp in rx6900xt for ML? [D] by Moppmopp
Awesome, glad to hear you have an interest. Coming from the pure Computer Science side, applying ML to CS problems, I find ML applied to the natural sciences very exciting!
So, you don't need a GPU at all. The regular CPU that runs everything on your computer can also run ML algorithms perfectly well. GPUs just speed this process up, because their architecture is designed to multiply matrices together (originally because 3D computer graphics is essentially multiplying matrices together). Modern CPUs are quick enough for most tasks in machine learning; it's only when you scale up your experiments for top performance that CPUs take a long time and GPUs make a real difference. There is no ultimate difference in experiment outcomes, however. Just time.
And all of that GPU matrix magic happens way behind the scenes, such that the code you write (and by extension the way you implement these neural networks) is identical whether you have a GPU or not. You'd use one extra line of code to enable GPU support!
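For example, in PyTorch the whole switch is roughly this (just a sketch; the exact line differs between frameworks):

```python
# Sketch of the "one extra line" in PyTorch: pick the GPU if CUDA is available,
# otherwise everything runs exactly the same on the CPU.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(10, 1).to(device)   # move the model to the GPU (or stay on CPU)
x = torch.randn(32, 10).to(device)    # move the data to the same device
y = model(x)                          # identical code either way
print(device, y.shape)
```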
CUDA is the crucial middle-man technology made by Nvidia that sits between our neural network code and the GPU's circuitry itself, letting our normal code run on GPUs without us having to change any details or tinker with super low-level electronics programming. CUDA is the magic that takes all of our normal code that usually runs on CPUs and funnels it to the GPU in a way that makes it run incredibly quickly.
And because Nvidia invented this technology, it only works with Nvidia cards.
AMD are developing their own equivalent of CUDA (ROCm) to let us use AMD cards, but at the moment it's not really ready for everyday use. This is why, in the ML world, the terms 'CUDA' and 'GPU' are often used interchangeably.
When you open your text editor and write a Python script, nothing is different whether you have a (CUDA-enabled / Nvidia) GPU or not. That's why, if you're just getting started learning, it really won't matter.