Forecast Uncertainty: Can It Be Measured?

Nancy Kirkendall of the Office of Management and Budget discussed different approaches to presenting the uncertainty of forecasts. Different measures of error are appropriate to forecasts with different time frames. For short-term models, the uncertainty of point estimates can be measured by the mean squared error, in which forecasted values are compared with actual observations.

Other measures of uncertainty include the confidence interval and mean absolute percentage error. These simple methods do a good job of representing uncertainty.
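As a concrete illustration of these measures (a minimal sketch with hypothetical data, not drawn from the text), the mean squared error, the mean absolute percentage error, and a simple normal-approximation interval can be computed as follows:

```python
import numpy as np

def accuracy_measures(actual, forecast):
    """Short-term accuracy measures: MSE, MAPE, and a rough 95% interval
    half-width, assuming errors are roughly normal with constant variance."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    errors = actual - forecast
    mse = np.mean(errors ** 2)                        # mean squared error
    mape = 100.0 * np.mean(np.abs(errors / actual))   # mean absolute percentage error
    half_width = 1.96 * errors.std(ddof=1)            # ~95% interval half-width
    return mse, mape, half_width

# Hypothetical observed values and one-step-ahead forecasts.
mse, mape, hw = accuracy_measures([102.0, 105.3, 101.8, 108.1],
                                  [100.5, 104.0, 103.2, 107.0])
print(f"MSE = {mse:.2f}, MAPE = {mape:.2f}%, 95% half-width = {hw:.2f}")
```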

Some forecast competitions have shown that these are the best methods for measuring the uncertainty of short-term forecasts. Longer-term forecasts are more complex than short-term forecasts and are more vulnerable to unanticipated changes in the economic environment. Forecasts made for a longer time span are typically less accurate than those made for a shorter span. The example below shows the systematic reduction in a forecast's percent error as the time span of the forecast decreases.

It is possible to attribute these errors to factors unknown at the time of the forecast, such as immigration rates. There may be exceptions to the relative accuracy of long- and short-term forecasts because of volatility in the conditioning variables.

For example, annual rainfall may be forecasted with better percentage accuracy than rainfall on any particular day, because short-run volatility tends to average out over the long run. However, this may not be the case if increasing the span increases heterogeneity and the forecasting model is not successful in representing the effects of that heterogeneity. A tabulation of BLS forecasts for the civilian noninstitutionalized population aged 16 and over illustrates the errors associated with the total population forecast. The Energy Information Administration (EIA) maintains a short-term forecasting system that projects energy prices one to two years ahead.

The EIA computes the mean squared errors for these different forecasts and uses the information to fine-tune its models. However, it does not publish error terms for its long-term forecasts. Because their accuracy would not be very good, long-term forecasts should not be used to produce point estimates; long-term forecasting models are useful for other purposes. The examples that follow demonstrate some of these uses.

Scenario analyses are useful in illustrating the variability and uncertainty of long-term forecasts. An example is the case of natural gas forecasting. A figure shows the historical trend in natural gas wellhead prices and forecasts of future prices under a reference case and two additional scenarios. The scenarios reflect changes that might occur in the forecast depending on the assumptions made about how rapidly new technology penetrates the market.

The scenarios include a high-economic-growth case; the reference case posits lower growth. Scenario analysis can be quite instructive when past data are highly variable. Past variability tells the user that an assumption of smooth patterns of change is unwarranted, even though the volatility itself may not be predictable. Sometimes scenarios are constructed from statistical confidence bounds on model inputs or parameters, such as confidence limits on the future economic growth rate.

Finally, the scenarios can illustrate a range of outcomes under the assumption that there are no large shocks.
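The mechanics of building scenarios from confidence bounds on an input can be sketched as follows (all numbers are hypothetical; the bounds stand in for confidence limits on the future growth rate):

```python
import numpy as np

# Hypothetical confidence bounds on one input: the annual growth rate.
growth_scenarios = {"low": 0.010, "reference": 0.019, "high": 0.024}

years = np.arange(0, 21)   # 20-year forecast horizon
base = 100.0               # current value of the forecasted series

# Each scenario is the path implied by holding its growth rate fixed;
# the spread across scenarios illustrates the forecast's uncertainty.
paths = {name: base * (1.0 + g) ** years for name, g in growth_scenarios.items()}
for name, path in paths.items():
    print(f"{name:9s} year-20 value: {path[-1]:6.1f}")
```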

Providing a model in which users can adjust inputs is another way to help users understand the effects of uncertainty in forecasts. For example, the EIA made a series of assumptions in its forecasts of the price of electricity. The Annual Energy Outlook addressed electricity restructuring by incorporating the Federal Energy Regulatory Commission's actions on open access, lower costs for natural gas-fired generation plants, and early retirements of higher-cost fossil plants.

The Annual Energy Outlook makes additional assumptions about competitive pricing and restructuring; for example, California, New York, and New England were assumed to begin competitive pricing, with stranded cost recovery phased out thereafter. As the accompanying figure (average electricity prices, AEO97, in cents per kilowatt-hour) illustrates, the additions to and changes in model assumptions, together with lower projected coal prices, yield average forecasted electricity prices 13 percent lower than previously forecasted.

There are four fundamental sources of uncertainty in forecasts of the availability of scientists and engineers.

The first may be identified as exogenous variables: factors that are outside the labor market for scientists and engineers but nevertheless affect it. These include economic growth both in the United States and elsewhere in the world, technology growth, defense needs, wars, and the demographics of scientists and engineers. We are reasonably good at describing the overall demographics of the population, but scientists and engineers are a special group that is affected by things we are not good at predicting, such as labor force participation and immigration.

The demographics of scientists and engineers may also be affected by the changing ethnic and gender composition of the work force. To date, white and Asian American males have been far more likely to go into careers in science and engineering than members of other groups, which make up an increasing share of the student population.

Finally, we need to be concerned about the number of people who have sufficient preparation in mathematics to enter the scientific and engineering fields. A second source of uncertainty stems from factors that are subject to policy control but that do not necessarily accord with the interests of optimizing the labor market for scientists and engineers. For example, to some extent, we know that immigration policy incorporates concerns about the market for scientists and engineers.

However, immigration policy is also directed toward a number of other objectives. Moreover, government subsidies may also affect the market for scientists and engineers. In addition to the funding for research and education provided by NSF and NIH, spending on defense contracts, defense-related research, and funding of medical schools through the Medicare program must be considered. While these variables are somewhat subject to policy control, they are directed toward goals other than ensuring that the market for scientists and engineers remains stable or grows at some desired rate.

The third source of uncertainty is behavioral uncertainty, which comes from our inability to predict perfectly how people will respond to the market. These areas of uncertainty include the attitudes of college students toward science, the plans of scientists or engineers for shifting from scientific work to other tasks such as administration, and attitudes toward retirement. The fourth source of uncertainty is the most serious and the hardest to convey to forecast users: what economists call parameter uncertainty (also called systematic error), or uncertainty in the estimated model itself.

This uncertainty reflects both our inability to capture all the nuances of the real world in our models (parameter uncertainty) and the limits on our ability to calibrate our models perfectly with limited data (model uncertainty).

We would like to believe that we could build a structural model that really incorporates all of the behavioral decisions that people make when they choose to enter science or engineering. In fact, we have neither the data nor the behavioral laws to permit the construction of such a model. Our models are not true structural models, and their parameters change as adjustment occurs at neglected margins.

For example, in recent years when the wages of scientists, engineers, and other technically trained people rose, employers divided jobs into components. Some of these components did not require a highly trained analytical person and could be given to people with less or simpler technical training.

Variation in the skill requirements needed to achieve a particular kind of production is not typically incorporated in the models we build. In addition, it is difficult to measure the potential of technology to enable one person to do a job previously done by two. Such measures of quality and substitution are very difficult to quantify.

Forecasts should be designed for a specific objective.

We recognize that the labor market for scientists and engineers is not a classic spot market in which workers offer their labor in response to a wage offer by employers and higher wage offers immediately bring forth a supply of additional workers.

Were it so, the demand for labor by government and industry would be quickly met by a supply of scientists and engineers, and the market would clear easily. The consumers of these forecasts are trying to decide how many graduate fellowships and traineeships to provide in order to create opportunities for people who, years later, may become scientists and engineers.

Targeting quantity with sufficient lead time is very difficult for the consumers of these forecasts. However, it is less difficult to assess whether the wages of scientists and engineers are similar to those of other personnel with comparable aptitude (based on SATs and ACTs) who take jobs that substitute to some degree for science and engineering. Nor is it difficult to know whether the salaries of scientists have stayed constant while the salaries of other groups with similar years of training (for example, lawyers) have risen. We should also examine possible sources of short-run adjustment.

Institutions could take several actions to make the market for scientists and engineers more like a spot market. The last of these, adjusting faculty size, is usually destabilizing, since institutions have difficulty expanding or contracting faculty in the short term. A distinction should be made between the instruments of policy control and the market conditions they are meant to influence. While the former are subject to rapid change, the latter are not likely to be characterized by volatility or disruptions that have extreme private or social costs.

One not only needs to understand how policy affects behavior, but also to project how short-term behavioral adjustments translate into long-term market conditions. Forecasters need ways to reassure the users of their forecasts that what they are doing is acceptable, correct, and scientific.

They need to describe forecasts more clearly and to explain how forecasts should and should not be used. Finally, forecasters should document exactly what they do. Even with very clear documentation, some skeptics will assume that the forecasters have fiddled with the numbers.

The most common practice is to combine individual measures with equal weights. As we focus on uncertainty, we propose alternative weights to better summarize the common information in the individual uncertainties.

As with the point forecasts in the previous section, and following Genre et al., the alternative weights are extracted by principal components. To estimate the alternative measures of uncertainty, we need to estimate the first- and second-order moments of the individual distributions.

We have considered these intervals for the whole sample and assigned zero probability when they were not available. In most cases forecasters place their probabilities in just a few intervals, and the normality assumption has been tested and rejected by Conflitti. The midpoint approach is an approximation that also depends on the width of the intervals, and it produces upward-biased estimates of the individual variances.

To compensate for this, we apply the Sheppard correction to the variance estimates (see Kendall and Stuart). The selection criterion leaves us with 31 individuals for GDP growth and 29 for inflation. Despite this restriction, the number of participants is still not constant over time, and both the means and the variances from the density histograms form unbalanced panels of data.
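The moment estimation described here can be sketched as follows (an illustration with a hypothetical histogram; a common interval width is assumed):

```python
import numpy as np

def histogram_moments(bin_edges, probs):
    """Mean and variance of one respondent's density forecast, using the
    midpoint approximation with Sheppard's correction (Kendall and Stuart):
    the grouping bias is removed by subtracting h^2/12, h the bin width."""
    edges = np.asarray(bin_edges, dtype=float)
    p = np.asarray(probs, dtype=float)
    mid = (edges[:-1] + edges[1:]) / 2.0      # interval midpoints
    mean = np.sum(p * mid)
    raw_var = np.sum(p * (mid - mean) ** 2)   # biased upward by grouping
    h = edges[1] - edges[0]                   # common interval width assumed
    return mean, max(raw_var - h ** 2 / 12.0, 0.0)

# Hypothetical SPF-style histogram for GDP growth (percentage points).
print(histogram_moments([-1.0, 0.0, 1.0, 2.0, 3.0], [0.05, 0.25, 0.50, 0.20]))
```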

Figures 2, 3 and 4 show the alternative uncertainty measures estimated for the euro area with both equal and principal-component weights, from density forecasts, disagreement, and forecast errors. All the measures of uncertainty rise during the crisis, and the average uncertainty U of the individual density forecasts increased after the financial crisis. There are no signs of receding uncertainty in either measure. Disagreement measures for GDP growth 1-year-ahead forecasts are shown in the corresponding figure.
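A minimal sketch of these computations (simplified to a balanced panel, whereas the survey data are unbalanced; the aggregate measure uses the standard decomposition into average individual variance plus disagreement, as in Lahiri and Sheng):

```python
import numpy as np

def uncertainty_measures(means, variances):
    """means, variances: T x N panels (T survey rounds, N forecasters) of
    first and second moments estimated from the density histograms."""
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)

    avg_variance = v.mean(axis=1)             # equal-weight average uncertainty
    disagreement = m.var(axis=1)              # cross-section variance of means
    aggregate = avg_variance + disagreement   # avg individual variance + disagreement

    # Principal-component weights: loadings on the first principal
    # component of the panel of individual variances, scaled to sum to 1.
    eigvals, eigvecs = np.linalg.eigh(np.cov(v, rowvar=False))
    w = np.abs(eigvecs[:, -1])                # eigh sorts eigenvalues ascending
    w /= w.sum()
    pc_weighted = v @ w                       # PC-weighted average uncertainty

    return avg_variance, pc_weighted, disagreement, aggregate
```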

Aggregate uncertainty likewise shows no signs of receding in either measure, and both even seem to follow an upward trend. There is high heterogeneity among the forecasters in the 1-year-ahead forecasts for several quarters.

In this section, we check empirically whether the uncertainty estimated from density forecasts helps in forecasting GDP growth and inflation.

The inclusion of uncertainty or second-order moments in the mean equation can only be justified if the data are non-normal. Lahiri and Teigland and the references therein discuss whether the shape of the distribution has an impact on forecast precision and find that, for the ASA-NBER quarterly surveys, distributions vary significantly over time and the assumption of normality is rejected quite often.

Paloviita and Viren also deal with the relationship between first and second moments of inflation and output growth forecasts from the ECB-SPF, finding that in the recent crisis individual forecasters have reacted to increasing uncertainty by adopting a completely different distribution.

The forecasting procedure has two steps and replicates the equations above. In the first step, the parameters are estimated; in the second step, the estimated parameters are used as weights to produce a forecast of the target variable given the values of the uncertainty measures, as sketched below.
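A stylized version of the two-step procedure (a sketch under simplifying assumptions, not the paper's exact specification; OLS is used, and as noted in the appendix its point estimates coincide with GMM):

```python
import numpy as np

def two_step_forecast(y_past, consensus_past, unc_past, consensus_new, unc_new):
    """Step 1: regress realized values on the consensus forecast and an
    uncertainty measure over the estimation window.
    Step 2: apply the estimated parameters to the newest regressors to
    forecast the target variable."""
    X = np.column_stack([np.ones(len(y_past)), consensus_past, unc_past])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y_past, dtype=float), rcond=None)
    return float(beta @ np.array([1.0, consensus_new, unc_new]))
```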

Table 2 summarizes the results. Column 1 shows the average of the observed data for the whole forecasting sample and within each natural year, for GDP growth and for inflation. The remaining columns compare each alternative model with the consensus; a value smaller than 1 indicates better behavior of the alternative model than the consensus, and a value greater than 1 the opposite. We have also calculated this ratio yearly, considering only the forecast errors in each calendar year. At first sight, none of the uncertainty measures improves consensus forecast accuracy over the whole forecasting sample for either GDP growth or inflation. Given the business-cycle complexity of the time span used for the forecasting exercise, it is interesting to look at the yearly performance.
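The ratios in Table 2, and their yearly counterparts, amount to the following computation (sketch; variable names are hypothetical):

```python
import numpy as np

def relative_rmse(errors_alt, errors_consensus):
    """RMSE of the alternative model divided by the RMSE of the consensus
    over the same window; values below 1 favor the alternative model."""
    e_a = np.asarray(errors_alt, dtype=float)
    e_c = np.asarray(errors_consensus, dtype=float)
    return np.sqrt(np.mean(e_a ** 2)) / np.sqrt(np.mean(e_c ** 2))
```

The yearly figures apply the same ratio to the forecast errors falling within each calendar year.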

Looking at GDP growth, we can see that the uncertainty measures improve forecasting accuracy in the years in which GDP growth is decelerating or negative. On the contrary, the uncertainty measures seem highly misleading in years of accelerating GDP growth. Regarding inflation, we observe similar but dual behavior: uncertainty seems to help in forecasting inflation in the years when inflation was above the ECB inflation target, but again it seems to mislead the forecasts when inflation is low.

This behavior of uncertainty is therefore in line with what economic theory would lead us to expect. All the measures of uncertainty used in the previous sections indicate the level of ex ante uncertainty shared by respondents. In this section, we look instead at predictive accuracy as a proxy for ex post uncertainty. The underlying hypothesis is that episodes with low ex post forecast errors are indicative of a low level of uncertainty, and the opposite when the forecast errors are high.

Besides uncertainty, surprises also play a key role in the process of expectation formation. They are defined as the unforecastable part of daily news, and they have an impact on high-frequency asset prices, exchange rates, interest rates, and government bond yields (see, for instance, Andersen et al. and Jurado et al.). The importance of surprises in traditional short-run time-series forecasting is also well known, since using the forecast errors allows high adaptability to the most recent characteristics of the data.

However, surprises are not so frequently used for forecasting macroeconomic indicators at medium-term horizons, specifically, in our case, for 1-year-ahead forecasts. The reasons for using surprises in forecasting GDP growth and inflation in the euro area over our sample period are twofold. On the one hand, there are some concerns about bias, and the inclusion of a constant term in the regressions does not seem to help. On the other hand, uncertainty measures, which are always positive, seem to add predictive content only in certain specific years, suggesting that the sign of the surprise matters.

We define the surprise as the forecast error made at moment t by the consensus of the forecasters 1 year in advance, as formalized below. The bars in Figure 1 plot the surprises realized for GDP growth and inflation.
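In our notation (the expression is reconstructed from the verbal definition, and the four-quarter lag is our assumption for the 1-year horizon with quarterly data), with $y_t$ the realized value and $\bar{f}_{t\mid t-4}$ the consensus forecast of $y_t$ issued four quarters earlier:

```latex
\mathrm{Surprise}_t = y_t - \bar{f}_{t \mid t-4}
```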

We assess the predictive content of ex post uncertainty and surprises in the euro area by running two pseudo-real-time forecasting exercises. As in the previous sections, we proceed in two steps and adapt the equations accordingly. Notice that the first difference of the consensus captures both how the forecasters perceive the target variable to be changing and what they have learned from their previous forecasts. As the consensus made for time t is already an exogenous regressor, we simply use its lagged value as an instrument in the case of the GDP growth rate.

Finally, we could not use this approach for inflation. We tried other possible instruments, such as the observed data or the surprises lagged four periods, but they were not appropriate. A closer look at the inflation data (Fig. 1) shows why: forecasters under-predict inflation in the first part of the sample, so the information conveyed by the surprises is that they should raise their inflation forecasts. Then inflation reaches its maximum, the surprises still call for correcting upward, and in the next periods inflation drops by four points.

This leads to an absence of correlation between the forecast errors (surprises) and the consensus; in such episodes the information contained in the sign of the surprises is totally misleading.

Forecasting 1-year-ahead GDP growth and inflation in the euro area over our sample period is a challenging task, as macroeconomic performance has been strongly affected by business-cycle fluctuations and by exogenous shocks coming mainly from commodity prices, especially oil.

We have analyzed disagreement and traditional measures of uncertainty based on equal weights, and we have proposed alternative ones based on principal-component weights extracted from the individual uncertainties. We find that while disagreement has diminished since the crisis, uncertainty measures based on subjective individual uncertainties have remained at high levels. Regarding forecast accuracy, the consensus from the ECB-SPF is hard to beat with more complicated alternatives, and we have found that uncertainty does not help to improve forecast accuracy over the full sample for either GDP growth or inflation.

The alternative models seem to improve on consensus performance for GDP growth during the crisis years, while for inflation the opposite holds. Finally, we have checked whether an ex post measure of uncertainty based on forecast errors and surprises played any role in improving forecast performance 1 year ahead. Acknowledging the specific characteristics of the time span considered for the forecasting exercise in the euro area, the role of surprises in 1-year-ahead forecasts could be a matter for further research.

Notes: As we only need point estimates to make the forecasts, OLS is a consistent estimator, and its point estimates coincide with GMM; this also applies to all the regressions corresponding to Table 1 and to columns 3 to 9 in Table 2. The choice of the end points of the outer intervals, though arbitrary, is not restrictive, as individuals usually assign negligible probability to the outer class intervals; in any case, the main results remain unaltered.

To simplify the notation we do not distinguish between GDP growth and inflation, but the probabilities are different for each variable. We acknowledge that the yearly magnitudes are not statistically significant, but we include them for a better understanding of the role of uncertainty.

References

Andersen TG, Bollerslev T, Diebold FX, Vega C. Micro effects of macro announcements: real-time price discovery in foreign exchange. Am Econ Rev 93(1).
Baker SR, Bloom N, Davis SJ. Measuring economic policy uncertainty. Chicago Booth Research Paper 13-02.
Bates JM, Granger CWJ. The combination of forecasts. Oper Res Q 20(4).
Bloom N. The impact of uncertainty shocks. Econometrica 77(3).
Clemen RT. Combining forecasts: a review and annotated bibliography. Int J Forecast 5(4).
Conflitti C. Measuring uncertainty and disagreement in the European survey of professional forecasters.
ECB. How has macroeconomic uncertainty in the euro area evolved recently?
Friedman M. Nobel lecture: inflation and unemployment. J Polit Econ 85(3).
Genre V, Kenny G, Meyler A, Timmermann A. Combining expert forecasts: can anything beat the consensus? Int J Forecast 29(1).
Jurado K, Ludvigson SC, Ng S. Measuring uncertainty. Am Econ Rev 105(3).
Kendall M, Stuart A. The advanced theory of statistics, vol 1, 4th edn. Macmillan, New York.
Kinal T, Lahiri K. A model for ex ante real interest rates and derived inflation forecasts. J Am Stat Assoc 83.
Lahiri K, Sheng X. Measuring forecast uncertainty by disagreement: the missing link. J Appl Econom.
Lahiri K, Teigland C, Zaporowski M. Interest rates and the subjective probability distribution of inflation forecasts. J Money Credit Bank 20(2).
Paloviita M, Viren M. Inflation and output growth uncertainty in individual survey expectations. Empirica.
Rich R, Tracy J. The relationships between expected inflation, disagreement and uncertainty: evidence from matched point and density forecasts. Rev Econ Stat 92(1).
Scotti C. Surprise and uncertainty indexes: real-time aggregation of real-activity macro surprises. J Monet Econ.
Timmermann A. Forecast combinations. In: Handbook of economic forecasting. Elsevier, Amsterdam.
Wallis KF. Combining density and interval forecasts: a modest proposal. Oxf Bull Econ Stat.


