
randomoneusername t1_j9iyf7r wrote

I mean, this has two elements to it.

DL is certainly not the only algorithm that works at scale.

26

[deleted] t1_j9jgblt wrote

[deleted]

−10

randomoneusername t1_j9jkzs7 wrote

The stand-alone statement you have there is very vague. Can I assume it's talking about NLP or CV projects?

On tabular data, even with non-linear relationships, plain boosting and ensemble algorithms can scale and stay at the top of the game.
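
Rough sketch of what I mean, assuming a recent scikit-learn and a synthetic dataset (all the numbers are just for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic tabular data with informative, interacting features
X, y = make_classification(n_samples=50_000, n_features=30,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Histogram-based gradient boosting; training time grows roughly
# linearly with the number of rows
clf = HistGradientBoostingClassifier(max_iter=200)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

Histogram-based boosting like this (or XGBoost/LightGBM) is exactly the kind of thing that holds up at scale on tabular data without any deep net in sight.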

13

bloodmummy t1_j9jzvnr wrote

It strikes me that people who tout DL as a hammer-for-all-nails have never touched tabular data in their lives. Go try a couple of Kaggle tabular competitions and you'll soon realise that DL can be very dumb, cumbersome, and data-hungry. Ensemble models, decision-tree models, and even feature-engineered linear regression models still rule there and curb-stomp DL all day long (in most cases).
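
To make the feature-engineered linear regression point concrete, here's a minimal sketch, again assuming scikit-learn and a synthetic non-linear target:

```python
from sklearn.datasets import make_friedman1
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Friedman #1: a regression target with non-linear feature interactions
X, y = make_friedman1(n_samples=5_000, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Plain linear model vs. the same model with engineered interaction features
plain = make_pipeline(StandardScaler(), Ridge())
engineered = make_pipeline(PolynomialFeatures(degree=2),
                           StandardScaler(), Ridge())

for name, model in [("plain ridge", plain), ("ridge + poly features", engineered)]:
    model.fit(X_train, y_train)
    print(f"{name}: R^2 = {model.score(X_test, y_test):.3f}")
```

The quadratic feature expansion is doing the work a network would otherwise have to learn from data, at a fraction of the sample cost.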

Tabular data is also still the most common type of data used with ML. I'm not a "DL-hater", if there is such a thing; in fact my own research uses DL exclusively. But it isn't a magical wrench, and it won't be.

8

Mefaso t1_j9jgvoz wrote

Anything that scales sub-quadratically?

Anything "big-data"

1

GraciousReformer OP t1_j9jib2n wrote

Then why DL?

−10

suflaj t1_j9jjetb wrote

Because it requires the least amount of human intervention.
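
Compare the feature-engineered example upthread with a plain MLP that just gets the raw columns; a rough sketch assuming scikit-learn (layer sizes and iteration count are arbitrary):

```python
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Same synthetic non-linear target as upthread, but no hand-crafted
# features: the network is left to learn the interactions itself
X, y = make_friedman1(n_samples=5_000, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
)
mlp.fit(X_train, y_train)
print(f"MLP, raw features only: R^2 = {mlp.score(X_test, y_test):.3f}")
```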

Also because it subjectively sounds like magic to people who don't really understand it, so it sells both to management and to consumers.

At least it's easier for humans to cope by calling it magic than to accept that a lot of what AI can do is just stuff that's trivial and doesn't require humanity to solve.

−12