Submitted by rubbledubbletrubble t3_zmxbb5 in deeplearning
suflaj t1_j0dt970 wrote
Reply to comment by rubbledubbletrubble in Why does adding a smaller layer between conv and dense layers break the model? by rubbledubbletrubble
Not really. 950 is smaller than 1000, so not only are you destroying information, you are also potentially landing in a really bad local minimum.
When you add that intermediate layer, you are essentially applying a random hash to your previous feature distribution. If that random hash destroys the relations your model had already learned, then of course it will not perform.
Now, Xavier and Kaiming-He initializations aren't designed to act as a universal random hash, so they might not destroy all of your learned relations, but they are still random enough to have that potential, depending on the task and data. You might get lucky, but on average you won't.
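To make the "random hash" point concrete, here is a minimal PyTorch sketch (the feature dimension and class count are assumed for illustration, not taken from your setup): at the moment you insert it, the 950-unit layer is just a randomly initialized projection of whatever features the 1000-unit layer had learned.

```python
import torch
import torch.nn as nn

# Assumed sizes for illustration: flattened conv features -> 1000-unit dense head.
feat_dim, num_classes = 2048, 10

original_head = nn.Sequential(
    nn.Linear(feat_dim, 1000),
    nn.ReLU(),
    nn.Linear(1000, num_classes),
)

# The new bottleneck: at insertion time its weights are pure Kaiming-He noise.
bottleneck = nn.Linear(1000, 950)
nn.init.kaiming_normal_(bottleneck.weight)
nn.init.zeros_(bottleneck.bias)

new_head = nn.Sequential(
    nn.Linear(feat_dim, 1000),
    nn.ReLU(),
    bottleneck,   # at init this is just a random 1000 -> 950 projection of learned features
    nn.ReLU(),    # the nonlinearity then throws away everything the projection maps below zero
    nn.Linear(950, num_classes),
)
```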
If I were in your place, I would train with linear warmup to a fairly large learning rate, say 10x higher than your previous maximum. This kicks very bad weights out of their bad minima once the LR reaches its peak, and hopefully you get better results as they settle down while the LR decays. Just make sure you clip your gradients so the weights don't go to NaN, because otherwise this is the equivalent of driving your car into a wall and hoping the crash turns it into a Ferrari.
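A minimal sketch of that recipe in PyTorch; the stand-in model, the previous max LR of 1e-3, and the schedule lengths are all assumptions, so swap in your own network, data, and optimizer:

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

# Stand-in model and data; in practice this is your network with the new bottleneck.
model = nn.Sequential(nn.Linear(1000, 950), nn.ReLU(), nn.Linear(950, 10))
criterion = nn.CrossEntropyLoss()
loader = [(torch.randn(32, 1000), torch.randint(0, 10, (32,))) for _ in range(100)]

base_lr = 1e-3          # assumed previous maximum learning rate
peak_lr = 10 * base_lr  # "10x higher than previous maximum"
warmup_steps = 20       # hypothetical schedule lengths
decay_steps = 80

optimizer = torch.optim.SGD(model.parameters(), lr=peak_lr, momentum=0.9)

# Linear warmup up to peak_lr, then decay back down as the weights settle.
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=0.01, total_iters=warmup_steps),
        CosineAnnealingLR(optimizer, T_max=decay_steps),
    ],
    milestones=[warmup_steps],
)

for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    # Clip gradients so the large LR can't push the weights to NaN.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()
```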
As for how long you should train it... the best approach would be to add the layer without any nonlinearity and see how many epochs you need to reach the original performance. Since there is no nonlinearity, the new network is equally as expressive as the original. Once you have that number of epochs, add about 25% to it and train the version with the nonlinear transformation after the bottleneck for that long.
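A sketch of that procedure with the same assumed sizes as above: train the linear-bottleneck probe first to find the epoch count, then train the real nonlinear variant roughly 25% longer.

```python
import torch.nn as nn

feat_dim, num_classes = 2048, 10  # assumed sizes for illustration

def make_head(nonlinear_bottleneck: bool) -> nn.Sequential:
    layers = [
        nn.Linear(feat_dim, 1000),
        nn.ReLU(),
        nn.Linear(1000, 950),  # the new bottleneck
    ]
    if nonlinear_bottleneck:
        layers.append(nn.ReLU())  # only the final variant gets the nonlinearity
    layers.append(nn.Linear(950, num_classes))
    return nn.Sequential(*layers)

# Step 1: train this purely linear variant until it matches the original accuracy;
# record how many epochs that took (call it n).
probe_head = make_head(nonlinear_bottleneck=False)

# Step 2: train the real variant for roughly 1.25 * n epochs.
final_head = make_head(nonlinear_bottleneck=True)
```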
rubbledubbletrubble OP t1_j0du040 wrote
Thanks! I’ll give this a shot!