Radial basis function kernel

In machine learning, the (Gaussian) radial basis function kernel, or RBF kernel, is a popular kernel function used in various kernelized learning algorithms. In particular, it is commonly used in support vector machine classification.[1]

The RBF kernel on two samples x and x', represented as feature vectors in some input space, is defined as[2]

K(\mathbf{x}, \mathbf{x'}) = \exp\left(-\frac{||\mathbf{x} - \mathbf{x'}||^2}{2\sigma^2}\right)

||\mathbf{x} - \mathbf{x'}||^2 may be recognized as the squared Euclidean distance between the two feature vectors, and \sigma is a free parameter. An equivalent, but simpler, definition involves a parameter \gamma = \tfrac{1}{2\sigma^2}:

K(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma||\mathbf{x} - \mathbf{x'}||^2)
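
For concreteness, a minimal NumPy sketch of the kernel under the \gamma parameterization (the function name rbf_kernel and the example vectors are illustrative only, not taken from the sources cited here):

    import numpy as np

    def rbf_kernel(x, x_prime, gamma=0.5):
        # K(x, x') = exp(-gamma * ||x - x'||^2); gamma = 1/(2*sigma^2),
        # so gamma = 0.5 corresponds to sigma = 1.
        sq_dist = np.sum((x - x_prime) ** 2)  # squared Euclidean distance
        return np.exp(-gamma * sq_dist)

    x = np.array([1.0, 2.0])
    x_prime = np.array([1.5, 1.0])
    print(rbf_kernel(x, x_prime))  # a value in (0, 1)
    print(rbf_kernel(x, x))        # exactly 1.0, since x == x'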

Since the value of the RBF kernel decreases with distance and ranges between zero (in the limit) and one (when x = x'), it has a ready interpretation as a similarity measure.[2] The feature space of the kernel has an infinite number of dimensions; for \sigma = 1, its expansion is:[3]

\exp\left(-\frac{1}{2}||\mathbf{x} - \mathbf{x'}||^2\right) = \sum_{j=0}^\infty \frac{(\mathbf{x}^\top \mathbf{x'})^j}{j!} \exp\left(-\frac{1}{2}||\mathbf{x}||^2\right) \exp\left(-\frac{1}{2}||\mathbf{x'}||^2\right)
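
The identity follows by writing ||\mathbf{x} - \mathbf{x'}||^2 = ||\mathbf{x}||^2 - 2\mathbf{x}^\top \mathbf{x'} + ||\mathbf{x'}||^2 and Taylor-expanding \exp(\mathbf{x}^\top \mathbf{x'}). It can also be checked numerically with a short sketch (the test vectors are arbitrary; truncating at 20 terms suffices here because the series converges rapidly for small \mathbf{x}^\top \mathbf{x'}):

    import numpy as np
    from math import factorial

    x = np.array([0.5, -0.3])
    x_prime = np.array([0.2, 0.4])

    exact = np.exp(-0.5 * np.sum((x - x_prime) ** 2))

    # Right-hand side: Taylor series of exp(x . x') times the two norm factors.
    prefactor = np.exp(-0.5 * (x @ x)) * np.exp(-0.5 * (x_prime @ x_prime))
    dot = x @ x_prime
    series = prefactor * sum(dot ** j / factorial(j) for j in range(20))

    print(exact, series)  # agree to machine precision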

Approximations

Because support vector machines and other models employing the kernel trick do not scale well to large numbers of training samples or large numbers of features in the input space, several approximations to the RBF kernel (and similar kernels) have been devised.[4] Typically, these take the form of a function z that maps a single vector to a vector of higher dimensionality, approximating the kernel:

z(\mathbf{x})^\top z(\mathbf{x'}) \approx \varphi(\mathbf{x})^\top \varphi(\mathbf{x'}) = K(\mathbf{x}, \mathbf{x'})

where \varphi is the implicit mapping embedded in the RBF kernel.

One way to construct such a z is to sample randomly from the Fourier transform of the kernel, which by Bochner's theorem is, after normalization, a probability distribution for any shift-invariant kernel such as the RBF kernel.[5] Another approach uses the Nyström method to approximate the eigendecomposition of the Gram matrix K, using only a random sample of the training set.[6]
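
As an illustration of the first approach, the sketch below draws random frequencies from the Fourier transform of the kernel, which for \exp(-\gamma||\mathbf{x} - \mathbf{x'}||^2) is a Gaussian with covariance 2\gamma I, and builds the random Fourier feature map of Rahimi and Recht;[5] the feature dimension, seed, and test points are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)
    gamma, d, D = 0.5, 2, 2000  # kernel parameter, input dim, feature dim

    # Frequencies drawn from the kernel's Fourier transform: for
    # exp(-gamma * ||x - x'||^2) this is a Gaussian, w ~ N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
    b = rng.uniform(0.0, 2 * np.pi, size=D)

    def z(x):
        # Random Fourier feature map: z(x) . z(x') approximates K(x, x').
        return np.sqrt(2.0 / D) * np.cos(W @ x + b)

    x = np.array([1.0, 2.0])
    x_prime = np.array([1.5, 1.0])
    approx = z(x) @ z(x_prime)
    exact = np.exp(-gamma * np.sum((x - x_prime) ** 2))
    print(approx, exact)  # close for large D

In practice, scikit-learn provides ready-made implementations of both approaches as RBFSampler and Nystroem in its sklearn.kernel_approximation module.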

References

  1. Yin-Wen Chang, Cho-Jui Hsieh, Kai-Wei Chang, Michael Ringgaard and Chih-Jen Lin (2010). "Training and testing low-degree polynomial data mappings via linear SVM". J. Machine Learning Research 11:1471–1490.
  2. Vert, Jean-Philippe, Koji Tsuda, and Bernhard Schölkopf (2004). "A primer on kernel methods". Kernel Methods in Computational Biology.
  3. Shashua, Amnon (2009). "Introduction to Machine Learning: Class Notes 67577". arXiv:0904.3664.
  4. Andreas Müller (2012). Kernel Approximations for Efficient SVMs (and other feature extraction methods).
  5. Ali Rahimi and Benjamin Recht (2007). "Random features for large-scale kernel machines". Neural Information Processing Systems.
  6. Williams, Christopher K. I. and Matthias Seeger (2001). "Using the Nyström method to speed up kernel machines". Advances in Neural Information Processing Systems 13.