English Abstract
This study employs univariate time-series models to forecast trends in the U.S. Federal Funds Rate (FFR). The empirical data cover monthly FFR observations from January 1991 to December 2005. Two models, ARIMA and GARCH, are applied to the data, and their forecasting accuracy is compared using MAE, MSE, RMSE, and MAPE. The major conclusions of this study are as follows. First, the GARCH model outperforms the ARIMA model in forecasting. Second, the choice of starting time point and the length of the forecasting horizon affect the performance of both models; for instance, both models perform better over a six-month horizon than over twelve-, eighteen-, or twenty-four-month horizons. The forecasting performance of both models also depends heavily on the length of the in-sample period. Finally, neither model can precisely predict changes in the Federal Funds Rate, especially when the interest rate undergoes a sudden change; they can only roughly outline the trend of the interest rate based on the preceding in-sample period.
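The four error metrics named above have standard definitions, which can be sketched in a short Python snippet. This is an illustrative sketch only: the function name `forecast_errors` and the sample series are my own invention, not the study's data or code.

```python
import math

def forecast_errors(actual, forecast):
    """Compute MAE, MSE, RMSE, and MAPE for two paired series.

    Illustrative helper, not taken from the study itself.
    """
    n = len(actual)
    errors = [a - f for a, f in zip(actual, forecast)]
    mae = sum(abs(e) for e in errors) / n          # mean absolute error
    mse = sum(e * e for e in errors) / n           # mean squared error
    rmse = math.sqrt(mse)                          # root mean squared error
    # MAPE assumes no actual value is zero (the FFR is strictly positive here).
    mape = 100.0 * sum(abs(e / a) for e, a in zip(errors, actual)) / n
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape}

# Hypothetical six-month FFR path (percent) and forecasts, for illustration only.
actual = [5.25, 5.25, 5.00, 4.75, 4.75, 4.50]
forecast = [5.20, 5.30, 5.10, 4.80, 4.70, 4.60]
metrics = forecast_errors(actual, forecast)
```

A lower value on each metric indicates better out-of-sample accuracy; MAPE is scale-free, which makes it convenient for comparing horizons of different lengths.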