Submitted by boutta_call_bo_vice t3_zpt405 in deeplearning
I've recently been learning about CNNs, and it was pretty interesting that the filter components themselves are the learned weights. I get how that works. However, my prior naive understanding was that you would have a selection of known filters (edge detection, etc.) with weights on the connections, as in basic neural networks. Is there any reason or precedent to have both types? That is, an architecture where you force the CNN to include edge detection etc. as one of its inputs to the next layer, while letting the other filters be learned in the standard way? Thanks in advance
invoker96_ t1_j0umchs wrote
While fixing weights defeats the purpose of 'learning', you can add some filters and set them to be non-trainable if you have domain knowledge. In fact, transfer learning can be seen as fixing the primary/coarse filters and learning the finer ones. Rough sketches of both ideas below.
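A minimal PyTorch sketch of the hybrid layer from the question (the single-channel input, the Sobel-x kernel, and the 15 learned filters are all just illustrative assumptions): one frozen edge-detection filter sits alongside standard trainable filters, and both feed the next layer.

```python
import torch
import torch.nn as nn

class HybridConv(nn.Module):
    """First layer mixing one fixed edge filter with learned filters."""

    def __init__(self, learned_channels=15):
        super().__init__()
        # Fixed branch: a 3x3 Sobel-x kernel, frozen so it never updates.
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]]).view(1, 1, 3, 3)
        self.fixed = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
        self.fixed.weight = nn.Parameter(sobel_x, requires_grad=False)
        # Learned branch: ordinary trainable filters.
        self.learned = nn.Conv2d(1, learned_channels, kernel_size=3, padding=1)

    def forward(self, x):
        # Concatenate fixed and learned feature maps along the channel axis,
        # so the next layer sees both kinds of features.
        return torch.cat([self.fixed(x), self.learned(x)], dim=1)
```

Since the optimizer only updates parameters with `requires_grad=True`, the Sobel filter stays exactly as defined while the other 15 filters train normally.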
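And the transfer-learning analogy, roughly: freeze the early (coarse) stages of a pretrained network and train only the later (finer) ones. The choice of ResNet-18 and the layer-name cutoff here are arbitrary illustrative assumptions.

```python
import torchvision.models as models

# Load a pretrained ResNet-18, then freeze everything up through layer2,
# leaving layer3, layer4, and the classifier head trainable.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for name, param in model.named_parameters():
    if not name.startswith(("layer3", "layer4", "fc")):
        param.requires_grad = False
```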