Margin Infused Relaxed Algorithm
Margin-infused relaxed algorithm (MIRA)[1] is an online machine learning algorithm for multiclass classification problems. It learns a set of parameters (a vector or matrix) by processing the training examples one by one and updating the parameters after each example, so that the current example is classified correctly with a margin over every incorrect classification that is at least as large as that classification's loss.[2] The change to the parameters is kept as small as possible.
A two-class version called binary MIRA[1] simplifies the algorithm by not requiring the solution of a quadratic programming problem (see below). When used in a one-vs.-all configuration, binary MIRA can be extended to a multiclass learner that approximates full MIRA, but may be faster to train.
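To make the closed-form character of binary MIRA concrete, the following is a minimal Python sketch of a clipped, passive-aggressive-style update for a binary (+1/−1) linear classifier; the unit margin, the clipping constant `C`, and the function name are assumptions of this sketch, not details taken from the cited papers.

```python
import numpy as np

def binary_mira_update(w, x, y, C=1.0):
    """One online update for a binary (+1/-1) linear classifier.

    The step size tau is computed in closed form (no quadratic
    programming) and clipped to [0, C], so the weights move just far
    enough to satisfy the assumed unit-margin constraint.
    """
    margin = y * np.dot(w, x)              # signed score of the correct label
    loss = max(0.0, 1.0 - margin)          # hinge loss with an assumed margin of 1
    if loss > 0.0:
        tau = min(C, loss / np.dot(x, x))  # smallest sufficient step, clipped
        w = w + tau * y * x                # move w just far enough toward the correct side
    return w
```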
The flow of the algorithm[3][4] looks as follows:
    Algorithm MIRA
    Input: Training examples T = {(x_t, y_t)}
    Output: Set of parameters w

    i ← 0, w^(0) ← 0
    for n ← 1 to N
        for t ← 1 to |T|
            w^(i+1) ← update w^(i) according to {x_t, y_t}
            i ← i + 1
        end for
    end for
    return (Σ_{j=1}^{N·|T|} w^(j)) / (N·|T|)
- "←" is a shorthand for "changes to". For instance, "largest ← item" means that the value of largest changes to the value of item.
- "return" terminates the algorithm and outputs the value that follows.
The update step is then formalized as a quadratic programming[2] problem: find the new parameter vector w^(i+1) that minimizes ‖w^(i+1) − w^(i)‖ subject to score(x_t, y_t) − score(x_t, y′) ≥ L(y_t, y′) for every possible output y′, i.e. the score of the current correct training output y_t must exceed the score of any other possible output y′ by at least the loss (number of errors) of that y′ in comparison to y_t.
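As an illustration, the sketch below approximates this update for a linear multiclass model with one weight vector per class: instead of solving the full quadratic program over all alternative labels, it enforces the constraint only against the single highest-scoring incorrect label and computes the resulting step in closed form (a common simplification, sometimes called 1-best MIRA). The weight layout, the default 0/1 loss, and the names are assumptions of this sketch.

```python
import numpy as np

def mira_multiclass_update(W, x, y_true, loss=lambda y, y_hat: float(y != y_hat)):
    """Approximate (1-best) MIRA update for a linear multiclass model.

    W has shape (num_classes, dim) and the score of class y is W[y] . x.
    Only the single highest-scoring incorrect label is considered, so the
    quadratic program collapses to one closed-form step size tau.
    """
    scores = W @ x
    rivals = scores.copy()
    rivals[y_true] = -np.inf
    y_hat = int(np.argmax(rivals))                # most violating incorrect label

    margin = scores[y_true] - scores[y_hat]
    violation = loss(y_true, y_hat) - margin      # > 0 iff the margin constraint fails
    if violation > 0.0:
        # ||phi(x, y_true) - phi(x, y_hat)||^2 = 2 * ||x||^2 for per-class weight vectors
        tau = violation / (2.0 * np.dot(x, x))
        W = W.copy()                              # smallest change that satisfies the constraint
        W[y_true] += tau * x                      # raise the correct label's score
        W[y_hat] -= tau * x                       # lower the offending label's score
    return W
```

With only one constraint active, the minimal-norm change to W reduces to a single step of size τ along the feature difference, chosen so that the margin of the correct label over the offending label becomes exactly its loss.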
References
- ↑ 1.0 1.1 Crammer, Koby; Singer, Yoram (2003). "Ultraconservative Online Algorithms for Multiclass Problems". Journal of Machine Learning Research 3: 951–991.
- ↑ 2.0 2.1
- ↑ Watanabe, T. et al. (2007). "Online Large Margin Training for Statistical Machine Translation". Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 764–773.
- ↑ Bohnet, B. (2009). "Efficient Parsing of Syntactic and Semantic Dependency Structures". Proceedings of the Conference on Computational Natural Language Learning (CoNLL), Boulder, 67–72.
External links
- adMIRAble - MIRA implementation in C++
- Miralium - MIRA implementation in Java
- MIRA implementation for Mahout in Hadoop