Submitted by Imaginary_Carrot4092 t3_xudng9 in MachineLearning
Imaginary_Carrot4092 OP t1_iqvqztc wrote
Reply to comment by PassionatePossum in [D] Model not learning data by Imaginary_Carrot4092
Yes, your assumption 1 is exactly right. The network I am using is very simple, with 2 hidden layers (I am not sure whether this model is enough to learn this data). This is not time series data.
The input is the number of hours it takes for a certain process to complete and the output is one of the process variables.
PassionatePossum t1_iqvz2n0 wrote
Yeah, this has no chance of working. Neural networks aren't magic. They are function approximators, nothing more. And a neuron can only learn a linear combination of its inputs.
Since you only have one input, the first layer can only learn scaled copies of the original input, and the second layer learns how to add them together. So, some non-linearities (due to activations) aside, your model can essentially only learn to add scaled copies of the original input.
And while the universal approximation theorem says that theoretically this is enough to approximate any function if you make your network wide or deep enough, you have no guarantees that the solver will actually find the solution. And in practice, it won't.
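For concreteness, here is a minimal sketch (assuming PyTorch; the hidden sizes are arbitrary choices, not anything from the original post) of the kind of model being described: one scalar input, two hidden layers, one scalar output.

```python
import torch
import torch.nn as nn

# Hypothetical 2-hidden-layer MLP on a single scalar input.
model = nn.Sequential(
    nn.Linear(1, 16),   # first layer: 16 scaled/shifted copies of the one input
    nn.ReLU(),
    nn.Linear(16, 16),  # second layer: weighted sums of those copies
    nn.ReLU(),
    nn.Linear(16, 1),   # output: one final weighted sum
)

x = torch.linspace(0.0, 10.0, 100).unsqueeze(1)  # shape (100, 1): hours as the only feature
y_hat = model(x)                                 # shape (100, 1): predicted process variable
```

Every layer is just an affine map plus an activation, which is why a single scalar input gives the network so little to work with.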
A common trick is to use (1, x, x^2, ..., x^n) as input, but I doubt it will help in your case. If there is a function that describes the relationship between your input variable and the output variable, it would have to be a polynomial of extremely high degree.
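For illustration, a hedged sketch of that polynomial-feature trick using scikit-learn's `PolynomialFeatures`; the degree below is an arbitrary example value, not a recommendation for this data.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

x = np.linspace(0.0, 10.0, 100).reshape(-1, 1)   # the single raw input, e.g. hours
poly = PolynomialFeatures(degree=5, include_bias=True)
x_poly = poly.fit_transform(x)                   # columns: 1, x, x^2, ..., x^5

print(x_poly.shape)  # (100, 6) -- these expanded columns become the network inputs
```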
If you have additional inputs you could use, it might help. But just looking at what you have provided, it is not going to work.
Tgs91 t1_iryxsqa wrote
A neural network is capable of learning non-linear relationships from a 1d input to a 1d output. The problem is that your data doesn't have any relationship between those variables. You need to find some input variables that are actually related to the output. A neural net can't approximate a relationship that doesn't exist.
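As a quick illustration (a toy sketch, assuming PyTorch; the target function and hyperparameters are made up for demonstration), a small MLP does fit a genuine 1d-to-1d non-linear relationship such as y = sin(x); what it cannot do is fit a target that has no relationship to the input at all.

```python
import torch
import torch.nn as nn

# A real 1d -> 1d relationship plus a little noise.
x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)
y = torch.sin(x) + 0.05 * torch.randn_like(x)

model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(loss.item())  # approaches the noise floor; with unrelated x and y it would stay near the variance of y
```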