Etodmitry22 t1_j6rpice wrote

The loss will always fluctuate, especially for complex networks/tasks; what you should care about is the loss decreasing overall and the metrics improving on the test set. A loss with no fluctuation and perfect convergence is very rare and mostly seen in ML tutorials, not in real-world cases.
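For example, one simple way to see the overall trend through the step-to-step noise is to look at a moving average of the loss. A minimal sketch (the window size of 100 is an arbitrary choice):

```python
def moving_average(losses, window=100):
    """Return a running mean over the last `window` loss values,
    so the overall trend is visible despite per-step fluctuation."""
    smoothed = []
    for i in range(len(losses)):
        start = max(0, i - window + 1)
        smoothed.append(sum(losses[start:i + 1]) / (i + 1 - start))
    return smoothed
```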

If you do not see any improvement overall, try to overfit on a small subset of the training data (see the sketch below). If your model cannot overfit a small dataset, that points to bugs in your model or your data pipeline.
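A minimal sketch of this check in PyTorch, assuming a classification setup; the model, dataset, loss, and hyperparameters are placeholders for your own:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

def overfit_check(model, dataset, num_samples=32, steps=500, lr=1e-3):
    """Train on a tiny fixed subset; the loss should approach ~0.
    If it does not, suspect a bug in the model, loss, or data pipeline."""
    subset = Subset(dataset, range(num_samples))   # same few samples every step
    loader = DataLoader(subset, batch_size=num_samples, shuffle=False)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()              # swap in your task's loss

    model.train()
    for step in range(steps):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
        if step % 50 == 0:
            print(f"step {step}: loss {loss.item():.4f}")
```

If the printed loss stalls well above zero on 32 samples, the problem is almost certainly in your code rather than in the model's capacity or the hyperparameters.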
