Submitted by _Arsenie_Boca_ t3_118cypl in MachineLearning
LudaChen t1_j9i0vmp wrote
To put it simply, a bottleneck layer is a process of first reducing the dimension and then increasing it again. So why do we need to do this?
In theory, not reducing dimensionality preserves the most information and the most features, which is certainly not a problem. However, for a specific task, not all features are equally important, and some may even hurt the results. So we need some mechanism to select the features we should pay more attention to, and reducing dimensionality achieves this to some extent.

Increasing the dimensionality afterwards, on the other hand, restores the representational capacity of the network. Although the features after the expansion have the same number of channels as before the reduction, they are actually reconstructed from the low-dimensional features, so they can be considered more specific to the current task.
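The reduce-then-expand idea can be sketched with plain matrix multiplications. This is a minimal illustration with made-up dimensions (256 channels squeezed to a 64-d bottleneck), not any particular architecture's implementation:

```python
import numpy as np

# Hypothetical dimensions for illustration: 256-d features squeezed to 64-d.
d_in, d_bottleneck = 256, 64

rng = np.random.default_rng(0)
W_down = rng.standard_normal((d_in, d_bottleneck)) * 0.01  # reduce dimension
W_up = rng.standard_normal((d_bottleneck, d_in)) * 0.01    # restore dimension

def bottleneck(x):
    """Project to a low-dimensional code, apply a nonlinearity, expand back."""
    z = np.maximum(x @ W_down, 0.0)  # ReLU in the low-dimensional space
    return z @ W_up                  # same channel count as the input

x = rng.standard_normal((8, d_in))   # a batch of 8 feature vectors
y = bottleneck(x)
print(x.shape, y.shape)              # (8, 256) (8, 256)
```

Note that even though the output has the same 256 channels as the input, everything has passed through the 64-d code, so the composed mapping can have rank at most 64: the expanded features only contain what the bottleneck let through.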