Kolmogorov's inequality
In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that bounds the probability that the partial sums of a finite collection of independent random variables exceed some specified value. The inequality is named after the Russian mathematician Andrey Kolmogorov.
Statement of the inequality
Let X1, ..., Xn : Ω → R be independent random variables defined on a common probability space (Ω, F, Pr), with expected value E[Xk] = 0 and variance Var[Xk] < +∞ for k = 1, ..., n. Then, for each λ > 0,

$$\Pr\left(\max_{1 \le k \le n} |S_k| \ge \lambda\right) \le \frac{1}{\lambda^2} \operatorname{Var}[S_n] = \frac{1}{\lambda^2} \sum_{k=1}^n \operatorname{Var}[X_k] = \frac{1}{\lambda^2} \sum_{k=1}^n \operatorname{E}[X_k^2],$$

where Sk = X1 + ... + Xk.
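The bound can be checked numerically. The following sketch (the sample size, the choice of standard normal summands, and the values of n and λ are arbitrary choices for this illustration, not part of the statement) estimates the left-hand side by simulation and compares it with Var[Sn]/λ²:

```python
import numpy as np

# Monte Carlo check of Kolmogorov's inequality (illustrative sketch;
# n, trials, and lam below are arbitrary choices).
rng = np.random.default_rng(0)
n, trials, lam = 20, 100_000, 10.0

# X_k: independent, mean zero, variance 1 (standard normal draws).
X = rng.standard_normal((trials, n))
S = np.cumsum(X, axis=1)  # partial sums S_1, ..., S_n along each path

# Empirical probability that the running maximum of |S_k| reaches lam.
lhs = float(np.mean(np.max(np.abs(S), axis=1) >= lam))
# Kolmogorov bound: Var[S_n] / lam^2 = n / lam^2 for unit-variance summands.
rhs = n / lam**2

print(f"empirical P(max|S_k| >= lam) = {lhs:.4f}, bound = {rhs:.4f}")
```

With these parameters the bound is 20/100 = 0.2, and the empirical frequency stays well below it.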
Proof
The following argument is due to Kareem Amin and employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence $S_1, S_2, \dots, S_n$ is a martingale. Without loss of generality, set $S_0 = 0$. Define $(Z_i)_{i=0}^n$ as follows. Let $Z_0 = 0$, and

$$Z_{i+1} = \begin{cases} S_{i+1} & \text{if } \displaystyle\max_{1 \le j \le i} |S_j| < \lambda, \\ Z_i & \text{otherwise,} \end{cases}$$

for all $i$. Then $(Z_i)_{i=0}^n$ is also a martingale, since each increment $Z_{i+1} - Z_i = X_{i+1} \mathbf{1}\{\max_{1 \le j \le i} |S_j| < \lambda\}$, and the indicator is determined by $X_1, \dots, X_i$. Since each $X_{i+1}$ is independent of $X_1, \dots, X_i$ and mean zero, the increments of $(S_i)$ are orthogonal, and

$$\operatorname{E}[S_n^2] = \sum_{i=1}^n \operatorname{E}[X_i^2].$$

The same is true for $(Z_i)$: its increments are martingale differences, hence orthogonal, and each increment $Z_i - Z_{i-1}$ is either $X_i$ or $0$, so

$$\operatorname{E}[Z_n^2] = \sum_{i=1}^n \operatorname{E}[(Z_i - Z_{i-1})^2] \le \sum_{i=1}^n \operatorname{E}[X_i^2] = \operatorname{E}[S_n^2].$$

Thus, since the event $\{\max_{1 \le i \le n} |S_i| \ge \lambda\}$ coincides with $\{|Z_n| \ge \lambda\}$, Chebyshev's inequality gives

$$\Pr\left(\max_{1 \le i \le n} |S_i| \ge \lambda\right) = \Pr(|Z_n| \ge \lambda) \le \frac{1}{\lambda^2} \operatorname{E}[Z_n^2] \le \frac{1}{\lambda^2} \operatorname{E}[S_n^2] = \frac{1}{\lambda^2} \sum_{i=1}^n \operatorname{Var}[X_i].$$
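The stopping construction in the proof can be illustrated by simulation. The sketch below (simulation parameters are arbitrary choices) builds the frozen sequence $(Z_i)$ path by path and checks that the events $\{\max_k |S_k| \ge \lambda\}$ and $\{|Z_n| \ge \lambda\}$ coincide, and that $\operatorname{E}[Z_n^2] \le \operatorname{E}[S_n^2]$ empirically:

```python
import numpy as np

# Illustrative sketch of the stopped sequence (Z_i) from the proof
# (n, trials, and lam are arbitrary simulation parameters).
rng = np.random.default_rng(1)
n, trials, lam = 20, 50_000, 10.0

X = rng.standard_normal((trials, n))  # independent, mean zero, variance 1
S = np.cumsum(X, axis=1)              # partial sums S_1, ..., S_n

# Z copies S until |S_j| first reaches lam, then stays frozen at that value.
Z = S.copy()
for t in range(trials):
    hit = np.flatnonzero(np.abs(S[t]) >= lam)
    if hit.size:
        Z[t, hit[0]:] = S[t, hit[0]]

# The events {max_k |S_k| >= lam} and {|Z_n| >= lam} coincide path by path.
event_S = np.max(np.abs(S), axis=1) >= lam
event_Z = np.abs(Z[:, -1]) >= lam
same = bool(np.array_equal(event_S, event_Z))

# Each increment of Z is either X_i or 0, so E[Z_n^2] <= E[S_n^2].
ez2 = float(np.mean(Z[:, -1] ** 2))
es2 = float(np.mean(S[:, -1] ** 2))
print(same, ez2 <= es2)
```

The event equality holds exactly on every path: a frozen path froze at a value of modulus at least λ, and an unfrozen path has Z_n = S_n with running maximum below λ.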
This inequality was generalized by Hájek and Rényi in 1955.
See also
- Chebyshev's inequality
- Etemadi's inequality
- Landau–Kolmogorov inequality
- Markov's inequality
- Bernstein inequalities (probability theory)
This article incorporates material from Kolmogorov's inequality on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.