Bennett's inequality


In probability theory, Bennett's inequality provides an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount. Bennett's inequality was proved by George Bennett of the University of New South Wales in 1962.[1]

Let X1, …, Xn be independent random variables, and assume (for simplicity, but without loss of generality, since each variable can be centered) that they all have zero expected value. Further assume |Xi| ≤ a almost surely for all i, and let

 \sigma^2 = \frac1n \sum_{i=1}^n \operatorname{Var}(X_i).

Then for any t ≥ 0,

\Pr\left( \sum_{i=1}^n X_i > t \right) \leq
\exp\left( - \frac{n\sigma^2}{a^2} h\left(\frac{at}{n\sigma^2} \right)\right),

where h(u) = (1 + u) log(1 + u) − u.[2]
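The bound above is easy to evaluate numerically. The following is a minimal sketch (the function name `bennett_bound` and the example parameters are ours, not from the source): for n independent zero-mean variables with |Xi| ≤ a and average variance σ², it computes exp(−(nσ²/a²) h(at/(nσ²))).

```python
import math

def bennett_bound(n, sigma2, a, t):
    """Bennett's upper bound on P(X_1 + ... + X_n > t) for n independent
    zero-mean variables with |X_i| <= a a.s. and average variance sigma2."""
    u = a * t / (n * sigma2)                      # argument of h
    h = (1 + u) * math.log(1 + u) - u             # h(u) = (1+u)log(1+u) - u
    return math.exp(-(n * sigma2 / a ** 2) * h)

# Example: n = 100 variables bounded by a = 1 with average variance 0.1;
# bound on the probability that the sum exceeds t = 10.
print(bennett_bound(100, 0.1, 1.0, 10.0))
```

As expected of a tail bound, the value decreases rapidly as the deviation t grows.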

For a martingale version of Bennett's inequality and an improvement of it, see Freedman (1975)[3] and Fan et al. (2012),[4] respectively.

Comparison to other bounds

Hoeffding's inequality assumes only that the summands are bounded almost surely; Bennett's inequality offers an improvement when the variances of the summands are small compared to their almost sure bounds. In both inequalities, unlike some other inequalities or limit theorems, there is no requirement that the component variables have identical or similar distributions.
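The improvement in the small-variance regime can be seen numerically. The sketch below (our own illustration, with hypothetical parameter choices) compares Bennett's bound with Hoeffding's bound exp(−t²/(2na²)) for zero-mean variables taking values in [−a, a]:

```python
import math

def bennett_bound(n, sigma2, a, t):
    # Bennett: exp(-(n*sigma2/a^2) * h(a*t/(n*sigma2))), h(u) = (1+u)log(1+u) - u
    u = a * t / (n * sigma2)
    return math.exp(-(n * sigma2 / a ** 2) * ((1 + u) * math.log(1 + u) - u))

def hoeffding_bound(n, a, t):
    # Hoeffding for zero-mean X_i in [-a, a]: P(sum > t) <= exp(-t^2 / (2 n a^2))
    return math.exp(-t ** 2 / (2 * n * a ** 2))

# Small-variance regime: the average variance 0.01 is far below the
# worst case a^2 = 1, so Bennett's bound is dramatically tighter.
n, sigma2, a, t = 100, 0.01, 1.0, 10.0
print(bennett_bound(n, sigma2, a, t))   # roughly 8e-8
print(hoeffding_bound(n, a, t))         # roughly 0.61
```

Hoeffding's bound depends on the variables only through the range [−a, a], so it cannot exploit small variance; Bennett's bound does, which is exactly the improvement described above.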

References

  1. Bennett, G. (1962). "Probability Inequalities for the Sum of Independent Random Variables". Journal of the American Statistical Association. 57 (297): 33–45.
  2. Bennett (1962), op. cit.
  3. Freedman, D. A. (1975). "On Tail Probabilities for Martingales". The Annals of Probability. 3 (1): 100–118.
  4. Fan, X.; Grama, I.; Liu, Q. (2012). "Hoeffding's inequality for supermartingales". Stochastic Processes and Their Applications. 122 (10): 3545–3559.
