currentscurrents OP t1_j2g9mvy wrote
Reply to comment by Dylan_TMB in [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
Thanks, that's the question I'm trying to ask! I know explainability is a bit of a dead-end field right now, so it's a hard problem.
An approximate or incomprehensible algorithm could still be useful if it's faster or uses less memory. But I think that, to accomplish that, you would need to convert it into higher-level ideas; otherwise you're just emulating the network.
Luckily, converting things into higher-level ideas is something neural networks are capable of? It doesn't seem fundamentally impossible.
Dylan_TMB t1_j2gc9va wrote
I actually think you are looking for this:
https://arxiv.org/abs/2210.05189
Proof that all neural networks can be represented by a decision tree. Navigating a decision tree is an algorithm, so this would be a representation of the "algorithm".
So a question to ask would be whether it's the minimal decision tree.
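To make the equivalence concrete, here's a minimal sketch of the core idea (my own toy construction, not the paper's exact procedure; all the weights are made-up values): a ReLU network is piecewise affine, so its forward pass can be read as navigating a binary tree where each hidden unit contributes one branch on the sign of its pre-activation, and each leaf is a plain affine function.

```python
import numpy as np

# Toy 1-hidden-layer ReLU network: 2 inputs -> 3 hidden units -> 1 output.
W1 = np.array([[1.0, -2.0], [0.5, 1.5], [-1.0, 1.0]])
b1 = np.array([0.1, -0.3, 0.2])
W2 = np.array([[2.0, -1.0, 0.5]])
b2 = np.array([-0.1])

def forward(x):
    """Ordinary forward pass."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def forward_as_tree(x):
    """Same computation, phrased as tree navigation: each hidden unit is one
    binary decision (pre-activation > 0?), and the resulting on/off pattern
    selects an affine 'leaf' function."""
    pre = W1 @ x + b1
    pattern = (pre > 0).astype(float)          # path through the tree, e.g. (1, 0, 1)
    # On this region the network is exactly affine:
    W_leaf = W2 @ np.diag(pattern) @ W1
    b_leaf = W2 @ (pattern * b1) + b2
    return W_leaf @ x + b_leaf, tuple(int(p) for p in pattern)

x = np.array([0.7, -0.2])
print(forward(x), forward_as_tree(x))  # the two outputs agree; the tuple is the branch taken
```

With d hidden ReLUs the full tree has up to 2^d leaves, which is part of why asking for the *minimal* equivalent tree is the interesting question.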
currentscurrents OP t1_j2gctk4 wrote
Interesting!
This feels like it falls under emulating the neural network, since you're doing equivalent computations, just in a different form.
I wonder if you could train a neural network with the objective of creating the minimal decision tree.
Dylan_TMB t1_j2gcz2v wrote
Or just learn to minimize a tree that's given as input.
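A hedged sketch of one practical spin on both ideas (none of this is from the linked paper; the MLP, dataset, and pruning strengths are arbitrary placeholders): distill a trained network into a decision tree by fitting the tree to the network's own predictions, then use scikit-learn's cost-complexity pruning to shrink the tree while tracking how faithful it stays to the network.

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)

# Train a small network, then treat its predictions as the target to distill.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, y)
net_labels = net.predict(X)

# Fit trees with increasing pruning strength to the network's labels and
# watch the size/fidelity trade-off.
for alpha in [0.0, 0.001, 0.005, 0.01]:
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
    tree.fit(X, net_labels)
    fidelity = (tree.predict(X) == net_labels).mean()
    print(f"ccp_alpha={alpha}: {tree.tree_.node_count} nodes, "
          f"fidelity to the network = {fidelity:.3f}")
```

Sweeping `ccp_alpha` gives a rough size-vs-fidelity curve, which is one crude stand-in for "learning to minimize the tree".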