Submitted by AutoModerator t3_zcdcoo in MachineLearning
Ricenaros t1_izkmdxd wrote
I'm trying to understand how feature engineering and correlation fit together, because I feel like I'm encountering conflicting ideas about these two points. On the one hand, we can generate new features by combining existing features, for example multiplying feature 1 by feature 2. This is said to improve ML models in some cases.
On the other hand, I have read that a desirable property of our input/output data is that predictors are highly correlated with the target variable but not correlated with each other. This seems to conflict with feature engineering, since newly derived features can be correlated with the features they were constructed from. Am I missing something here?
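A minimal sketch (all numbers illustrative) of why these two ideas don't necessarily conflict: an engineered interaction feature can carry signal that neither parent feature has on its own, and it need not even be strongly correlated with its parents. For independent zero-mean features, the product x1*x2 is uncorrelated with both x1 and x2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two independent base features; the target depends on their interaction.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = x1 * x2 + 0.1 * rng.normal(size=n)

x1x2 = x1 * x2  # engineered interaction feature

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Neither base feature alone correlates with the target...
print(corr(x1, y), corr(x2, y))        # both near 0
# ...but the interaction feature does, and here it is nearly
# uncorrelated with the features it was built from.
print(corr(x1x2, y))                   # near 1
print(corr(x1, x1x2), corr(x2, x1x2))  # both near 0
```

In practice engineered features often *are* somewhat correlated with their parents; the "low inter-predictor correlation" advice is about redundancy and interpretability (multicollinearity), not a hard rule, and many models tolerate it fine.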
I-am_Sleepy t1_izr846m wrote
I am not sure why your predictors would need to be uncorrelated with each other. If the tasks are correlated then their features should be correlated too, e.g. panoptic segmentation and depth estimation.
For feature de-correlation there are some techniques you can apply. For example, in DL there is orthogonal regularization (enforcing pairwise feature dot products to be 0); see also this blog post.
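The orthogonal regularization mentioned above can be sketched as a penalty on the pairwise dot products of feature columns (a minimal NumPy sketch, not any particular library's API; in DL this term would be added to the training loss, often applied to a weight matrix W as the Frobenius norm of the off-diagonal of W^T W):

```python
import numpy as np

def orthogonal_penalty(F):
    """Penalty that is zero iff the columns of the feature matrix
    F (shape: n_samples x n_features) are mutually orthogonal."""
    G = F.T @ F                        # Gram matrix of feature columns
    off_diag = G - np.diag(np.diag(G)) # zero out the diagonal (norms)
    return np.sum(off_diag ** 2)       # sum of squared cross dot products

# Orthogonal columns incur no penalty; duplicated columns do.
F_ortho = np.eye(4)
F_redund = np.column_stack([np.ones(4), np.ones(4)])
print(orthogonal_penalty(F_ortho))   # 0.0
print(orthogonal_penalty(F_redund))  # > 0
```

Minimizing this term pushes learned features toward mutual decorrelation while the main task loss keeps them predictive.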