# Time-series Forecasting Error Statistics

How can IPredict measure the accuracy of a time-series forecast? We cannot expect a forecast to be perfect; it will always contain prediction errors. IPredict defines several statistics based on the error terms:

The prediction error $e_t = X_t - X'_t$ is the difference between the actual time series $X_t$ and the forecast $X'_t$; the statistics below summarize these errors to analyze the accuracy of the forecasts. The Cumulative Forecast Error (CFE) is the sum of all prediction errors:

$$\mathrm{CFE} = \sum_{t=1}^{n} e_t$$
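As a minimal sketch of these two quantities (plain Python, not IPredict code; the numbers are illustrative, not real forecast output):

```python
# Illustrative actual values X_t and forecasts X'_t (assumed numbers)
actual   = [100.0, 102.0,  98.0, 105.0]
forecast = [ 98.0, 103.0, 100.0, 101.0]

# Prediction errors: e_t = X_t - X'_t
errors = [x - f for x, f in zip(actual, forecast)]

# Cumulative Forecast Error: the plain sum of the errors
cfe = sum(errors)

print(errors)  # [2.0, -1.0, -2.0, 4.0]
print(cfe)     # 3.0
```

Note that positive and negative errors offset each other in the CFE, which is why the squared and absolute measures below are usually preferred.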

The Mean Error (ME) is the arithmetic average of all prediction errors:

$$\mathrm{ME} = \frac{1}{n} \sum_{t=1}^{n} e_t$$

The Mean Squared Error (MSE) is the arithmetic mean of the squared prediction errors; this popular measure corrects the 'canceling out' effect of the previous two measures, since squared errors are never negative:

$$\mathrm{MSE} = \frac{1}{n} \sum_{t=1}^{n} e_t^2$$
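A short sketch of the ME and MSE over a list of prediction errors (illustrative values, not IPredict output):

```python
# Prediction errors e_t (illustrative values)
errors = [2.0, -1.0, -2.0, 4.0]
n = len(errors)

# Mean Error: positive and negative errors can cancel out
me = sum(errors) / n

# Mean Squared Error: squaring removes the sign, so errors cannot cancel
mse = sum(e * e for e in errors) / n

print(me)   # 0.75
print(mse)  # 6.25
```

Note how the ME (0.75) is small even though individual errors reach 4.0, while the MSE (6.25) exposes the magnitude of the errors.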

The Root Mean Squared Error (RMSE) is the square root of the MSE:

$$\mathrm{RMSE} = \sqrt{\mathrm{MSE}}$$
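Taking the square root brings the measure back into the units of the original series, which makes it easier to interpret. A sketch with the same illustrative errors:

```python
import math

# Prediction errors e_t (illustrative values)
errors = [2.0, -1.0, -2.0, 4.0]

mse = sum(e * e for e in errors) / len(errors)
rmse = math.sqrt(mse)  # same units as the original time series

print(rmse)  # 2.5
```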

The Standard Deviation (SD) is, as the name implies, the standard deviation of the prediction errors.

The Mean Absolute Deviation (MAD) is another popular error measure that corrects the 'canceling out' effect by averaging the absolute values of the errors:

$$\mathrm{MAD} = \frac{1}{n} \sum_{t=1}^{n} |e_t|$$
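A sketch of the MAD over the same illustrative errors:

```python
# Prediction errors e_t (illustrative values)
errors = [2.0, -1.0, -2.0, 4.0]

# Mean Absolute Deviation: absolute values prevent cancellation
mad = sum(abs(e) for e in errors) / len(errors)

print(mad)  # 2.25
```

Unlike the MSE, the MAD does not give extra weight to large errors, so it is less sensitive to occasional outliers.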

The Mean Absolute Percent Error (MAPE) is a very popular measure that corrects the 'canceling out' effect and also takes into account the scale of the series; being unit-free, it can be used to compare predictions across different series:

$$\mathrm{MAPE} = \frac{100}{n} \sum_{t=1}^{n} \left| \frac{e_t}{X_t} \right|$$
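A sketch of the MAPE, again with illustrative numbers (not IPredict code):

```python
# Illustrative actual values X_t and forecasts X'_t (assumed numbers)
actual   = [100.0, 102.0,  98.0, 105.0]
forecast = [ 98.0, 103.0, 100.0, 101.0]
n = len(actual)

# MAPE: each error is scaled by the actual value, so the measure is
# unit-free and comparable across series of different magnitudes
mape = 100.0 / n * sum(abs((x - f) / x) for x, f in zip(actual, forecast))

print(round(mape, 2))  # 2.21
```

One caveat worth knowing: the MAPE is undefined whenever an actual value $X_t$ is zero, and it becomes unstable when actual values are close to zero.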

How much accuracy can we expect from a forecasting system, and how much does inaccuracy cost you or your company? In general, a MAPE of 10% is considered very good, while a MAPE in the range of 20%-30%, or even higher, is quite common. How much would you or your company save if the MAPE dropped, say, from 25% to 20%? Inaccurate forecasts increase the stock you must keep in your inventory system and reduce the service level to your customers. Inaccurate forecasts mean poor stock trading decisions and bad timing. The cost can therefore be very high, and it is worth the effort to ensure that forecasts are as accurate as possible.