Mean absolute scaled error

From Infogalactic: the planetary knowledge core

In statistics, the mean absolute scaled error (MASE) is a measure of the accuracy of forecasts. It was proposed in 2005 by statistician Rob J. Hyndman and Professor of Decision Sciences Anne B. Koehler, who described it as a "generally applicable measurement of forecast accuracy without the problems seen in the other measurements."[1] The mean absolute scaled error has favorable properties when compared to other methods for calculating forecast errors, such as the root-mean-square deviation, and is therefore recommended for determining the comparative accuracy of forecasts.[2]

Rationale

The mean absolute scaled error has the following desirable properties:[3]

  1. Scale invariance: The mean absolute scaled error is independent of the scale of the data, so can be used to compare forecasts across data sets with different scales.
  2. Predictable behavior as y_{t} \rightarrow 0 : Percentage forecast accuracy measures such as the mean absolute percentage error (MAPE) rely on division by y_{t}, skewing the distribution of the MAPE for values of y_{t} near or equal to 0. This is especially problematic for data sets whose scales do not have a meaningful 0, such as temperature in Celsius or Fahrenheit, and for intermittent demand data sets, where y_{t} = 0 occurs frequently.
  3. Symmetry: The mean absolute scaled error penalizes positive and negative forecast errors equally, and penalizes errors in large forecasts and small forecasts equally. In contrast, the MAPE and median absolute percentage error (MdAPE) fail both of these criteria, while the "symmetric" sMAPE and sMdAPE[4] fail the second criterion.
  4. Interpretability: The mean absolute scaled error can be easily interpreted, as values greater than one indicate that in-sample one-step forecasts from the naïve method perform better than the forecast values under consideration.
  5. Asymptotic normality of the MASE: The Diebold-Mariano test for one-step forecasts is used to test the statistical significance of the difference between two sets of forecasts. To perform hypothesis testing with the Diebold-Mariano test statistic, it is desirable for DM \sim N(0,1), where DM is the value of the test statistic. The DM statistic for the MASE has been empirically shown to approximate this distribution, while the mean relative absolute error (MRAE), MAPE and sMAPE do not.[2]

Non-seasonal time series

For a non-seasonal time series,[5] the mean absolute scaled error is estimated by

    \mathrm{MASE} = \frac{1}{T}\sum_{t=1}^T\left( \frac{\left| e_t \right|}{\frac{1}{T-1}\sum_{i=2}^T \left| Y_i-Y_{i-1}\right|} \right) = \frac{\sum_{t=1}^{T} \left| e_t \right|}{\frac{T}{T-1}\sum_{i=2}^T \left| Y_i-Y_{i-1}\right|}[3]

where the numerator et is the forecast error for a given period, defined as the actual value (Yt) minus the forecast value (Ft) for that period: et = Yt − Ft, and the denominator is the mean absolute error of the one-step "naive forecast method" on the training set,[5] which uses the actual value from the prior period as the forecast: Ft = Yt−1.[6]
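The estimate above can be sketched in Python with NumPy. This is an illustrative helper, not a reference implementation: the function and variable names are hypothetical, forecast errors are taken on a held-out test period, and the scaling factor is computed from the training set, as described above.

```python
import numpy as np

def mase(y_train, y_test, y_pred):
    """Mean absolute scaled error for a non-seasonal series (illustrative).

    Numerator: mean absolute forecast error |e_t| = |Y_t - F_t|.
    Denominator: in-sample MAE of the one-step naive forecast
    F_t = Y_{t-1}, i.e. (1/(T-1)) * sum_{t=2}^{T} |Y_t - Y_{t-1}|.
    """
    e = np.abs(np.asarray(y_test, float) - np.asarray(y_pred, float))
    y = np.asarray(y_train, float)
    scale = np.mean(np.abs(np.diff(y)))  # naive-forecast MAE on the training set
    return np.mean(e) / scale
```

A value below one indicates that the forecasts outperform the in-sample one-step naive forecasts on average.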

Seasonal time series

For a seasonal time series, the mean absolute scaled error is estimated in a manner similar to the method for non-seasonal time series:

    \mathrm{MASE} = \frac{1}{T}\sum_{t=1}^T\left( \frac{\left| e_t \right|}{\frac{1}{T-m}\sum_{i=m+1}^T \left| Y_i-Y_{i-m}\right|} \right) = \frac{\sum_{t=1}^{T} \left| e_t \right|}{\frac{T}{T-m}\sum_{i=m+1}^T \left| Y_i-Y_{i-m}\right|}[5]

The main difference from the method for non-seasonal time series is that the denominator is the mean absolute error of the one-step "seasonal naive forecast method" on the training set,[5] which uses the actual value from the prior season as the forecast: Ft = Yt−m,[6] where m is the seasonal period.
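The seasonal variant replaces the one-step naive scaling with the seasonal naive forecast. A minimal NumPy sketch, with hypothetical names and the seasonal period m supplied by the caller:

```python
import numpy as np

def seasonal_mase(y_train, y_test, y_pred, m):
    """Mean absolute scaled error for a seasonal series with period m (illustrative).

    The scaling term is the in-sample MAE of the seasonal naive forecast
    F_t = Y_{t-m}: (1/(T-m)) * sum_{t=m+1}^{T} |Y_t - Y_{t-m}|.
    """
    e = np.abs(np.asarray(y_test, float) - np.asarray(y_pred, float))
    y = np.asarray(y_train, float)
    scale = np.mean(np.abs(y[m:] - y[:-m]))  # seasonal naive MAE on the training set
    return np.mean(e) / scale
```

With m = 1 this reduces to the non-seasonal estimate above.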

This scale-free error metric "can be used to compare forecast methods on a single series and also to compare forecast accuracy between series. This metric is well suited to intermittent-demand series because it never gives infinite or undefined values"[1] except in the irrelevant case where all historical data are equal.[3]

When comparing forecasting methods, the method with the lowest MASE is the preferred method.

References

  1. Hyndman, R. J. (2006). "Another look at measures of forecast accuracy". Foresight: The International Journal of Applied Forecasting, Issue 4, June 2006, p. 46.
  2. Franses, P. H. (2016). "A note on the Mean Absolute Scaled Error". International Journal of Forecasting 32 (1): 20–22. doi:10.1016/j.ijforecast.2015.03.008
  3. Hyndman, R. J. and Koehler, A. B. (2006). "Another look at measures of forecast accuracy". International Journal of Forecasting 22 (4): 679–688. doi:10.1016/j.ijforecast.2006.03.001
  4. Makridakis, S. (1993). "Accuracy measures: theoretical and practical concerns". International Journal of Forecasting 9 (4): 527–529.
  5. Hyndman, R. J. and Athanasopoulos, G. Forecasting: Principles and Practice. OTexts.
  6. Hyndman, Rob et al. (2008). Forecasting with Exponential Smoothing: The State Space Approach. Berlin: Springer-Verlag. ISBN 978-3-540-71916-8.