Dimensionality Reduction
Dimensionality reduction reduces the number of input variables in a dataset, finding a lower-dimensional representation that still preserves the salient relationships in the data.
It is often classified as unsupervised learning, since most of its techniques do not require labeled data. Many unsupervised learning methods implement a `transform` method that can be used to reduce the dimensionality of the input.
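A minimal sketch of that pattern, assuming scikit-learn (the library that follows this `fit`/`transform` convention; the Iris dataset here is just an illustrative choice):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)   # 150 samples, 4 features

# Fit the estimator on the data, then project it into fewer dimensions.
reducer = PCA(n_components=2)
reducer.fit(X)
X_reduced = reducer.transform(X)    # shape: (150, 2)
print(X_reduced.shape)
```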
PCA - Principal Component Analysis
PCA ([Principal Component Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis))
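PCA projects the data onto orthogonal directions of maximum variance, so a common workflow is to inspect how much variance each component explains and keep only as many as needed. A sketch of that idea, assuming scikit-learn and using the digits dataset and a 95% variance threshold purely as illustrative choices:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)          # 1797 samples, 64 features

# Fit PCA with all components to see how much variance each one explains.
pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)

# Keep the smallest number of components that retains ~95% of the variance.
n_components = int(np.searchsorted(cumulative, 0.95) + 1)
X_reduced = PCA(n_components=n_components).fit_transform(X)
print(n_components, X_reduced.shape)
```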
ICA - Independent Component Analysis
ICA ([Independent Component Analysis](https://en.wikipedia.org/wiki/Independent_component_analysis))
TBC….
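While this section is still to be completed, a minimal sketch of ICA's classic use case, blind source separation, might look like the following (assuming scikit-learn's `FastICA`; the synthetic signals and mixing matrix are made up for illustration):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent source signals...
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]

# ...observed only as linear mixtures of each other.
mixing = np.array([[1.0, 0.5], [0.4, 1.2]])
X = sources @ mixing.T

# FastICA tries to recover statistically independent components from the mixtures.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)    # shape: (2000, 2)
print(recovered.shape)
```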
T-SNE - T-distributed Stochastic Neighbor Embedding
t-SNE ([t-distributed Stochastic Neighbor Embedding](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding))
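t-SNE is mostly used to embed data into 2 or 3 dimensions for visualization. A hedged sketch, again assuming scikit-learn (note that its `TSNE` only exposes `fit_transform`, not a separate `transform` for new data; the digits dataset and perplexity of 30 are illustrative choices):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

# Embed the 64-dimensional digits into 2-D for plotting.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)   # (1797, 2)
```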
Extended reading:
- https://machinelearningmastery.com/dimensionality-reduction-for-machine-learning/
- https://machinelearningmastery.com/lstm-autoencoders/
- https://www.amazon.com/Machine-Learning-Probabilistic-Perspective-Computation/dp/0262018020
- https://www.knime.com/blog/seven-techniques-for-data-dimensionality-reduction
- https://en.wikipedia.org/wiki/Principal_component_analysis