Cover’s theorem on the separability of patterns

E641827

Cover’s theorem on the separability of patterns is a fundamental result in statistical learning theory stating that a complex pattern-classification problem, cast nonlinearly into a high-dimensional feature space, is more likely to be linearly separable than in a low-dimensional space, provided the space is not densely populated with patterns.
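The quantitative core of the theorem (the formula referred to in the statements below) is Cover's function-counting result. A sketch of the standard homogeneous form: the number of dichotomies of $N$ points in general position in $\mathbb{R}^{d}$ that are separable by a hyperplane through the origin, and the probability that a randomly chosen dichotomy is separable, are

$$C(N, d) = 2 \sum_{k=0}^{d-1} \binom{N-1}{k}, \qquad P(N, d) = \frac{C(N, d)}{2^{N}} = \left(\tfrac{1}{2}\right)^{N-1} \sum_{k=0}^{d-1} \binom{N-1}{k},$$

so that $P(N, d) = 1$ whenever $N \le d$, and $P(2d, d) = \tfrac{1}{2}$ for every $d$.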

Statements (45)

Predicate Object
instanceOf result in statistical learning theory
theorem
appliesTo binary classification
pattern classification problems
assumes patterns are in general position
patterns are randomly labeled
category theorem in information theory
theorem in machine learning
clarifies relationship between dimensionality and classification complexity
concerns random dichotomies of finite point sets
systems of linear inequalities
contrastsWith curse of dimensionality
coreIdea complex pattern-classification problems are more likely to be linearly separable in higher-dimensional spaces
describes probability of linear separability of patterns
field information theory
machine learning
pattern recognition
statistical learning theory
hasConsequence linear classifiers can be powerful in suitably chosen feature spaces
nonlinear decision boundaries in input space can correspond to linear boundaries in feature space
implies high-dimensional embeddings can increase separability of classes
nonlinear transformations can simplify classification
inspired kernel methods
support vector machines
involves dimensionality of feature space
linear separability
mapping data into higher-dimensional feature spaces
number of patterns
randomly placed patterns
motivates use of high-dimensional feature spaces
use of nonlinear feature mappings
namedAfter Thomas M. Cover
oftenIllustratedBy mapping data to a higher-dimensional space where a hyperplane can separate classes
originalArticleTitle Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition
provides formula for probability that random dichotomies of points are linearly separable
publishedIn IEEE Transactions on Electronic Computers
relatedTo VC dimension
capacity of linear classifiers
kernel trick
perceptron learning
states for a fixed number of patterns in general position, the probability that a random dichotomy is linearly separable increases with the dimensionality of the feature space, reaching 1 once the dimension is at least the number of patterns (see the numerical sketch after these statements)
usedIn analysis of neural network architectures
design of pattern classifiers
feature engineering strategies
yearProposed 1965
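
A minimal numerical sketch of the separability statement above, in Python, assuming the homogeneous form of the counting function given earlier; the helper name separable_fraction is illustrative, not from the original article:

```python
from math import comb

def separable_fraction(n: int, d: int) -> float:
    """Fraction of the 2**n dichotomies of n points in general position in R^d
    that are separable by a hyperplane through the origin:
    C(n, d) = 2 * sum_{k<d} binom(n-1, k), normalised by 2**n."""
    return 2 * sum(comb(n - 1, k) for k in range(d)) / 2 ** n

# For a fixed number of patterns the fraction grows with the dimension,
# equals exactly 1/2 at d = n/2, and reaches 1 once d >= n.
for d in (2, 5, 10, 20, 40):
    print(f"n=40, d={d:>2}: {separable_fraction(40, d):.4f}")
```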

Referenced by (1)

Full triples; the surface form is annotated when it differs from this entity's canonical label.

Thomas M. Cover knownFor Cover’s theorem on the separability of patterns