Submitted by tekktokk t3_11w4kqd in MachineLearning
tekktokk OP t1_jd3f2xf wrote
Reply to comment by UnusualClimberBear in [R] What do we think about Meta-Interpretive Learning? by tekktokk
So the main problem with MIL or ILP is that it can't handle the sheer volume of raw input data the system would have to process?
UnusualClimberBear t1_jd3gqap wrote
Usually, the problem is the combinatorial number of candidate rules that could apply. Here they seem able to find a subset of possible rules with polynomial complexity, but since Table 7 of the second paper contains tiny (w.r.t. ML/RL data) problem instances, I would answer yes to your question. ILP comes with strong guarantees, while ML comes with a statistical risk. Those guarantees aren't free.
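A rough back-of-the-envelope illustration of that combinatorial blow-up (the predicates, variable set and body-length cap below are made-up toy values, not anything from the paper): counting the candidate clause bodies an ILP learner would naively have to consider over just a few background predicates already explodes.

```python
from itertools import product
from math import comb

# Hypothetical background predicates (name -> arity) and a small variable set;
# these are illustrative assumptions, not values taken from the paper.
predicates = {"parent": 2, "male": 1, "female": 1, "sibling": 2}
variables = ["A", "B", "C"]   # variables allowed in a clause
max_body_len = 3              # cap on the number of literals in a rule body

# Every way to fill a predicate's argument slots with variables yields one
# candidate body literal, e.g. parent(A, B).
literals = [
    f"{name}({', '.join(args)})"
    for name, arity in predicates.items()
    for args in product(variables, repeat=arity)
]
print(f"distinct body literals: {len(literals)}")

# A candidate rule body is any subset of literals up to the length cap, so the
# naive hypothesis space grows combinatorially in both the number of
# predicates and the allowed body length.
for L in range(1, max_body_len + 1):
    print(f"bodies with {L} literals: {comb(len(literals), L)}")
```

With only 4 predicates, 3 variables and bodies of at most 3 literals this already gives 24 distinct literals and over 2,000 candidate bodies of length 3 alone, which is the kind of explosion a polynomial-time restriction has to tame.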
tekktokk OP t1_jd3l4vl wrote
Alright, thank you. Then I guess one last question, if you happen to know: what is the current state of ILP in the ML/AI industry? Is it pretty much dead? Is it merely an interesting theory that hasn't found much application in the market? Does anyone see a bright future for it?
UnusualClimberBear t1_jd3mklf wrote
Even for protein folding it has been overtaken by deep models. It might be useful for critical tasks where error is not allowed and everything is deterministic, but I'm not an expert in the field.
tekktokk OP t1_jd3orau wrote
Got it. Appreciate the insight.