Total variation distance of probability measures
In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes called simply "the" statistical distance.
Definition
The total variation distance between two probability measures P and Q on a sigma-algebra F of subsets of the sample space Ω is defined via[1]

$$\delta(P, Q) = \sup_{A \in \mathcal{F}} \left| P(A) - Q(A) \right|.$$
Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event.
Special cases
For a finite alphabet we can relate the total variation distance to the 1-norm of the difference of the two probability distributions as follows:[2]

$$\delta(P, Q) = \frac{1}{2} \, \| P - Q \|_1 = \frac{1}{2} \sum_{x} \left| P(x) - Q(x) \right|.$$
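The finite-alphabet formula above can be sketched directly in code. This is a minimal illustration (the function name `total_variation` and the dict representation of distributions are choices made here, not from the source):

```python
# Total variation distance between two finite distributions,
# computed as half the 1-norm of their difference.
def total_variation(p, q):
    """p, q: dicts mapping outcomes to their probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Example: a fair coin versus a biased coin.
p = {"heads": 0.5, "tails": 0.5}
q = {"heads": 0.8, "tails": 0.2}
print(total_variation(p, q))  # 0.3, up to floating-point error
```

Equivalently, 0.3 is the largest gap in probability the two coins assign to any event, attained by the event {heads}.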
Similarly, for an arbitrary sample space Ω, measure μ, and probability measures P and Q with Radon–Nikodym derivatives f_P and f_Q with respect to μ, an equivalent definition of the total variation distance is

$$\delta(P, Q) = \frac{1}{2} \int_{\Omega} \left| f_P - f_Q \right| \, d\mu.$$
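For distributions on the real line, the integral form can be approximated numerically from the densities. The following is a rough sketch using a midpoint Riemann sum with Lebesgue measure as μ (the function names and the grid resolution are choices made here, not part of the source):

```python
import math

# Approximate (1/2) * integral |f_P - f_Q| dx over [lo, hi]
# with a midpoint Riemann sum. A sketch, not a robust integrator.
def tv_from_densities(f_p, f_q, lo, hi, n=200_000):
    dx = (hi - lo) / n
    return 0.5 * sum(
        abs(f_p(lo + (i + 0.5) * dx) - f_q(lo + (i + 0.5) * dx)) * dx
        for i in range(n)
    )

def normal_pdf(mu, sigma):
    """Density of N(mu, sigma^2), i.e. its Radon-Nikodym derivative
    with respect to Lebesgue measure."""
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (
        sigma * math.sqrt(2 * math.pi)
    )

# TV distance between N(0,1) and N(1,1); the exact value is
# 2*Phi(1/2) - 1, approximately 0.3829.
d = tv_from_densities(normal_pdf(0, 1), normal_pdf(1, 1), -10, 11)
print(round(d, 4))
```

The interval [-10, 11] is chosen wide enough that the truncated tails contribute negligibly to the integral.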
Relationship with other concepts
The total variation distance is related to the Kullback–Leibler divergence by Pinsker's inequality:

$$\delta(P, Q) \le \sqrt{\frac{1}{2} D_{\mathrm{KL}}(P \parallel Q)}.$$
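Pinsker's inequality can be checked numerically on a small example. The sketch below uses two Bernoulli distributions and KL divergence in nats (the helper names `tv` and `kl` are choices made here):

```python
import math

# Check Pinsker's inequality, TV(P,Q) <= sqrt(KL(P||Q)/2),
# for Bernoulli(p) versus Bernoulli(q).
def tv(p, q):
    # Half the 1-norm over the two outcomes {0, 1}.
    return 0.5 * (abs(p - q) + abs((1 - p) - (1 - q)))

def kl(p, q):
    # Kullback-Leibler divergence in nats (assumes 0 < p, q < 1).
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

p, q = 0.5, 0.8
lhs = tv(p, q)                 # total variation distance: 0.3
rhs = math.sqrt(kl(p, q) / 2)  # Pinsker bound: about 0.334
print(lhs <= rhs)  # True
```

Note that the bound is only informative when KL(P||Q) < 2, since the total variation distance never exceeds 1.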
See also
References
- ↑ [dead link]
- ↑ http://books.google.com/books?id=6Cg5Nq5sSv4C&lpg=PP1&pg=PA48#v=onepage&q&f=false