rubbledubbletrubble OP t1_j0iib4p wrote
Reply to comment by BrotherAmazing in Why does adding a smaller layer between conv and dense layers break the model? by rubbledubbletrubble
The 1000-unit layer is the softmax output layer. I am using a pretrained model and training only the classification layers. My reasoning is that reducing the number of outputs from the feature extractor reduces the total number of parameters.
For example: if MobileNet outputs 1280 features and I put a 1000-unit dense layer on top, that layer alone has 1280 × 1000 = 1.28 million weights. But if I added a 500-unit layer in the middle, it would make the network smaller (1280 × 500 + 500 × 1000 ≈ 1.14 million).
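A minimal sketch of the two classifier heads being compared, assuming TensorFlow/Keras (the 1280-dim feature vector matches MobileNet’s output mentioned above; everything else is illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Head A: 1280 features -> 1000-way softmax
# weights + biases = 1280 * 1000 + 1000 = 1,281,000 parameters
head_a = models.Sequential([
    tf.keras.Input(shape=(1280,)),
    layers.Dense(1000, activation="softmax"),
])

# Head B: 1280 -> 500 bottleneck -> 1000-way softmax
# (1280 * 500 + 500) + (500 * 1000 + 1000) = 1,141,500 parameters
head_b = models.Sequential([
    tf.keras.Input(shape=(1280,)),
    layers.Dense(500, activation="relu"),
    layers.Dense(1000, activation="softmax"),
])

head_a.summary()  # ~1.28M trainable parameters
head_b.summary()  # ~1.14M trainable parameters
```

Note that the 500-unit bottleneck only trims the head from ~1.28M to ~1.14M parameters here, because the second dense layer adds its own 500 × 1000 weight matrix back.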
I know the question is a bit vague. I was just curious.
rubbledubbletrubble OP t1_j0du040 wrote
Reply to comment by suflaj in Why does adding a smaller layer between conv and dense layers break the model? by rubbledubbletrubble
Thanks! I’ll give this a shot!
rubbledubbletrubble OP t1_j0dsv9t wrote
Reply to comment by suflaj in Why does adding a smaller layer between conv and dense layers break the model? by rubbledubbletrubble
I am doing this at the last layer, which is why it doesn’t make sense to me. I’d assume that with a 950-unit layer I should get similar results.
rubbledubbletrubble OP t1_j0drg5e wrote
Reply to comment by suflaj in Why does adding a smaller layer between conv and dense layers break the model? by rubbledubbletrubble
Yes, but shouldn’t the model still train and learn something?
I currently get an accuracy of 0.5% with the middle layer size ranging from 100 to 950 units.
rubbledubbletrubble t1_j13ejfu wrote
Reply to How to train a model to distinguish images of class 'A' from images of class 'B'. The model can only be trained on images of class 'A'. by 1kay7
Would a Siamese network work here?
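For reference, a minimal sketch of the Siamese idea, assuming TensorFlow/Keras; the encoder architecture, input size, and embedding width are all placeholder choices:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_encoder(input_shape=(105, 105, 1)):
    """Shared CNN that maps an image to an embedding vector."""
    return models.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128),
    ])

encoder = build_encoder()
img_a = tf.keras.Input(shape=(105, 105, 1))
img_b = tf.keras.Input(shape=(105, 105, 1))

# Both inputs pass through the same encoder (shared weights).
emb_a, emb_b = encoder(img_a), encoder(img_b)

# Score similarity from the absolute difference of the embeddings.
diff = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
same = layers.Dense(1, activation="sigmoid")(diff)

siamese = models.Model(inputs=[img_a, img_b], outputs=same)
siamese.compile(optimizer="adam", loss="binary_crossentropy")
```

Whether this helps with only class-'A' training images is exactly the open question: a Siamese model normally needs dissimilar pairs as well, and at inference one would compare a test image against known 'A' references and threshold the similarity score.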