Key Steps in Developing Decision-Quality Analysis

This piece was originally published in the April 2016 issue of ei, the magazine of the electroindustry.

By Donald R. Leavens, PhD, Vice President and Chief Economist, NEMA/BIS

Economists famously rely on an assumption of perfect information when theorizing how economic agents will react to exogenous shocks.

In a world of perfect information, equilibrium is restored in all markets with no loss of efficiency because the impact of such shocks is anticipated. The perfect information assumption allows economists to simplify theoretical relationships by ignoring inefficiencies that result from such a shock in the real world of imperfect information. The use of assumptions to focus on underlying relationships is the essence of model building.

Modeling is often the first step in developing decision-quality analyses such as economic forecasts. The process starts with a theory about the relationship between a dependent variable, such as gross domestic product (GDP), and a set of independent and random variables. Rather than determining that changes in GDP depend on everything, economic theory hypothesizes that movements in GDP are correlated with investment, consumption, government spending, and net exports, each of which is based on theorized relationships to other variables.
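
In symbols, that hypothesis is the standard expenditure identity, in which C is consumption, I is investment, G is government spending, and X − M is net exports:

GDP = C + I + G + (X − M)

Each component is, in turn, modeled as a function of other variables, such as income, interest rates, and exchange rates.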

The next step is to test this hypothesis with an econometric model using historical data. Economists test theoretical relationships by starting from the null hypothesis that no correlation exists between the variables. If the analysis indicates that the correlation between two variables is statistically significant, the null hypothesis is rejected, and the variable can be treated as a statistically significant predictor of the dependent variable's behavior.
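
As a minimal sketch of that step, the fragment below regresses a synthetic GDP-growth series on a single hypothetical explanatory variable and checks whether its coefficient is statistically significant. The data, variable names, and 5 percent threshold are illustrative assumptions, not part of any actual analysis from the article.

```python
# Minimal illustration of testing a hypothesized relationship with OLS.
# All data are synthetic; variable names and the 5% threshold are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
investment_growth = rng.normal(0.0, 1.0, 200)             # hypothetical explanatory variable
gdp_growth = 2.0 + 0.5 * investment_growth + rng.normal(0.0, 1.0, 200)

X = sm.add_constant(investment_growth)                     # add an intercept term
results = sm.OLS(gdp_growth, X).fit()

p_value = results.pvalues[1]                               # p-value on the investment coefficient
if p_value < 0.05:
    print(f"Reject the null of no relationship (p = {p_value:.4f})")
else:
    print(f"Cannot reject the null (p = {p_value:.4f})")
```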

Relying on theory and data, economists construct an econometric model from a set of explanatory variables that are statistically significant and account for as much of the variation as possible. Once the best fit is achieved, the next step is to test the model's predictive capability by forecasting historical data that were withheld from the original empirical analysis. Analysis of the resulting forecast errors is used to fine-tune the basic model so that it produces the most accurate forecast of the dependent variable's future behavior.
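
That out-of-sample check can be sketched as follows: estimate the model on an earlier portion of the data, forecast the withheld portion, and summarize the forecast errors. Again, the series and the split point are illustrative assumptions.

```python
# Sketch of out-of-sample testing: fit on early observations, forecast the
# hold-out, then summarize forecast errors. All data here are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 200)
y = 1.0 + 0.8 * x + rng.normal(0.0, 0.5, 200)

split = 160                                    # hold back the last 40 observations
X_train, X_test = sm.add_constant(x[:split]), sm.add_constant(x[split:])
y_train, y_test = y[:split], y[split:]

model = sm.OLS(y_train, X_train).fit()
forecast = model.predict(X_test)

errors = y_test - forecast
mae = np.mean(np.abs(errors))                  # mean absolute error
rmse = np.sqrt(np.mean(errors ** 2))           # root mean squared error
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```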

The Art of the Science

Moving from a well-developed model of an economic relationship to an accurate forecast is part science and part art. Simplified models of complex systems seldom account for all variations in a dependent variable. Moreover, underlying relationships with independent variables change, and the independent variables themselves may be correlated with each other. In addition, data selection may have resulted in a sample that was not representative of the entire population, or historical data may be inaccurate.
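
One of those pitfalls, correlation among the explanatory variables themselves, can be checked directly. The sketch below computes pairwise correlations and variance inflation factors for a hypothetical set of regressors; the series and names are placeholders, not data from the article.

```python
# Illustrative check for correlated explanatory variables (multicollinearity).
# The three regressors are synthetic; their names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
consumption = rng.normal(0.0, 1.0, 200)
investment = 0.7 * consumption + rng.normal(0.0, 0.5, 200)   # deliberately correlated
gov_spending = rng.normal(0.0, 1.0, 200)

X = pd.DataFrame({"consumption": consumption,
                  "investment": investment,
                  "gov_spending": gov_spending})
print(X.corr().round(2))                                      # pairwise correlations

X_const = sm.add_constant(X)
vif = {col: variance_inflation_factor(X_const.values, i)
       for i, col in enumerate(X_const.columns) if col != "const"}
print(vif)                                                    # values well above 5-10 flag trouble
```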

Reshaping a theoretical model into a forecast model is fraught with trip wires. Hands-on experience helps artful forecasters adjust a purely statistically derived forecast to account for anomalies not captured in the data. For example, as the Federal Reserve drove interest rates towards zero after the Great Recession, economic models predicted an increase in inflation. But the models failed to capture the fact that most of the liquidity that the Fed injected into the system was sent back to the Fed by banks to cover potential bad debts. Forecasters who recognized this anomaly maintained that inflation would not spike.

Forecast errors may be used to adjust forecast models. Although forecasts for some variables and markets can be reliably accurate, any forecast may prove wrong. Since no one wants to base a decision on an incorrect forecast, decision-makers often rely on scenarios that quantify the probability that the baseline forecast is wrong and identify other possibilities.

Typically, forecasters offer a baseline scenario as the most probable outcome, as well as more optimistic and pessimistic scenarios. These scenarios are often structured around a theme, such as a rebound in the oil industry or a Chinese recession, and allow for a statistical hedge against forecast error.
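
As a simple illustration of how such scenarios can be combined, the sketch below attaches assumed probabilities to a baseline, an optimistic, and a pessimistic GDP-growth outcome and computes the probability-weighted expectation and the scenario spread. The figures are hypothetical, not actual forecasts.

```python
# Hypothetical scenario weighting: baseline, optimistic, and pessimistic
# GDP-growth outcomes with assumed probabilities (not actual forecasts).
scenarios = {
    "baseline":    {"prob": 0.60, "gdp_growth": 2.2},
    "optimistic":  {"prob": 0.20, "gdp_growth": 3.0},   # e.g., a rebound in the oil industry
    "pessimistic": {"prob": 0.20, "gdp_growth": 1.0},   # e.g., a Chinese recession
}

assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

expected = sum(s["prob"] * s["gdp_growth"] for s in scenarios.values())
spread = (max(s["gdp_growth"] for s in scenarios.values())
          - min(s["gdp_growth"] for s in scenarios.values()))
print(f"Probability-weighted growth: {expected:.2f}%  (scenario spread: {spread:.1f} points)")
```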

Merging data with theoretical models through empirical analysis yields forecast models that provide decision-quality information. Even if the baseline forecast proves inaccurate, probabilities assigned to a range of possible outcomes help the decision-maker set expectations and limit the adverse effects a missed forecast may have on a given decision. In other words, that information can inform a hedging-like decision, or perhaps a quickly executable Plan B if real-world results undermine Plan A.

Over time, the growing supply of data and new quantitative techniques may improve forecast accuracy, but they will likely also add layers of complexity. Thus, the usefulness of developing simplifying models is likely to endure.
