machine learning - Why does a linear transformation improve the accuracy and efficiency of classification for high-dimensional data?
Suppose we have a dataset X of size M × N (M: number of records, N: number of features). When the number of features is large and the dataset X is noisy, classification becomes harder and its accuracy decreases. One way to mitigate this problem is to apply a linear transformation, that is, to classify on Y = XR, where R is an N × P matrix and P <= N. I was wondering: how does a linear transformation simplify classification? And why should classification accuracy increase if we classify the transformed data; isn't Y as noisy as X?
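To make the question concrete, here is a minimal NumPy sketch of the transformation the question describes, Y = XR, using made-up sizes (M = 100, N = 50, P = 5) and a random projection matrix R purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, P = 100, 50, 5                 # records, original features, reduced features
X = rng.standard_normal((M, N))      # dataset: M records with N features each
R = rng.standard_normal((N, P))      # some N x P transformation matrix, P <= N

Y = X @ R                            # transformed dataset: same records, P features
print(Y.shape)                       # -> (100, 5)
```

The classifier is then trained on Y instead of X, so it only has to deal with P columns rather than N.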
Not every linear transformation will help, but some linear transformations are useful; in particular, they are often used for dimensionality reduction.
The basic idea is that most of the information is probably contained in a few linear combinations of the dataset's features, and that by throwing everything else away we force ourselves to use simpler models, which are less prone to overfitting the noise.
This is not always a win, however. For example, even if one of the features is exactly the quantity you are trying to classify on, PCA can still discard it when it has low variance, thus losing important information.
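The failure mode above can be demonstrated with a small synthetic example (the dataset and numbers here are invented for illustration): one feature carries the class label with tiny variance, another is high-variance pure noise, and PCA's leading component aligns with the noise. PCA is computed directly via SVD of the centered data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical dataset: feature 0 encodes the class label but with very
# small variance; feature 1 is high-variance noise with no class signal.
labels = rng.integers(0, 2, n)
informative = 0.01 * labels                 # low variance, fully predictive
noise = 10.0 * rng.standard_normal(n)       # high variance, uninformative
X = np.column_stack([informative, noise])

# PCA via SVD of the centered data: principal directions are the rows of Vt.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

# The first principal component is dominated by the noisy feature, so
# projecting onto it alone throws the class signal away.
print(np.abs(Vt[0]))
```

Running this shows the first component's weight sits almost entirely on the noise feature, which is exactly why variance-based projections can discard a predictive but low-variance feature.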