Inflation plays a role in almost every important decision that an asset allocator can make, from setting spending targets and managing liabilities to understanding global economic fundamentals and forecasting risk-adjusted returns. Unfortunately, both defining and measuring inflation accurately remain difficult. Janet Yellen, the Chair of the Board of Governors of the US Federal Reserve, described the problem succinctly (at least by Fed-speak standards) as recently as September 2017: “Our framework for understanding inflation dynamics could be misspecified in some fundamental way.”
One way allocators can improve their inflation forecasts is to analyze inflation from as many perspectives as possible—just as a data scientist would. No single metric comprehensively describes or measures all aspects of inflation, so one should study the sample under many different microscopes. Doing so may not completely solve the challenge that the Fed and others are trying to tackle, and the multi-angle approach gives rise to distinct challenges of its own. Still, we believe aggregating and normalizing many different forecasts can help provide more accurate inflation inputs for asset allocation decisions.
Challenges of a data-science approach to forecasting inflation
Trying to quantify inflation from as many angles as possible—that is, combining different forecasts with different levels of confidence to try to formulate a consolidated view—sounds simple but comes with non-trivial challenges.
One challenge is the definition of inflation itself. There are many types of inflation, each of which may affect a given asset differently over a given horizon. For example, the US Bureau of Labor Statistics publishes both a consumer price index (CPI) and a producer price index (PPI), which track different baskets of products. Subcategories exist within those broad measures, such as goods vs. services inflation and “core” vs. “non-core.” The US Federal Reserve states its inflation goals in terms of a third metric, the Personal Consumption Expenditures (PCE) price index, which the Bureau of Economic Analysis publishes. Numerous other definitions of inflation also exist.
Another challenge is the sheer volume of data available. An obvious example is the proliferation of online prices. Since online prices can change more rapidly than the prices for equivalent goods at brick-and-mortar stores, aligning time stamps across sources proves critical to the measurement. Less obvious, perhaps, is the frequent variation in product specifications: a smartphone today bundles a different set of capabilities than a smartphone did last year.
Variation in the forecast horizon presents a third challenge. The US Treasury issues Treasury Inflation-Protected Securities (TIPS) with varying maturities, for example. While these instruments may trade continuously, liquidity varies between on- and off-the-run securities, complicating estimates of the implied inflation forecast. A similar issue exists for inflation swaps.
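The implied forecast from TIPS is typically read as the breakeven rate: the matched-maturity nominal Treasury yield minus the TIPS (real) yield. A minimal sketch of that arithmetic in Python, where every yield below is a hypothetical percentage rather than a market quote:

```python
# Breakeven inflation: nominal Treasury yield minus matched-maturity TIPS yield.
# All yields are hypothetical percentages, invented for illustration only.
nominal_yields = {5: 2.10, 10: 2.35, 30: 2.80}  # maturity (years) -> nominal yield
tips_yields    = {5: 0.30, 10: 0.45, 30: 0.85}  # maturity (years) -> real (TIPS) yield

# Implied average annual inflation over each horizon.
breakevens = {mat: nominal_yields[mat] - tips_yields[mat] for mat in nominal_yields}
```

In practice the liquidity differences noted above mean these raw differences need adjustment before being treated as clean inflation expectations.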
Distribution and implications of 2017 US inflation forecasts
Figure 1 provides one depiction of these challenges, as well as a potentially better way to understand inflation. The figure plots publicly available forecasts of future US inflation by the month in which each forecast was made. To standardize the estimates, the chart normalizes the mean of each series to zero and the standard deviation to one (based on historical values).
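The normalization step can be sketched in a few lines of Python: each series is converted to z-scores against its own history, after which forecasts published on different scales become directly comparable. The series names and values below are invented for illustration:

```python
from statistics import mean, stdev

def normalize(series):
    """Z-score a series: subtract its historical mean, divide by its historical stdev."""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

# Two hypothetical forecast series on different scales (e.g., a CPI survey vs. a breakeven).
cpi_forecasts = [2.1, 2.3, 2.0, 1.8, 2.2]
breakeven_forecasts = [1.7, 1.9, 1.6, 1.5, 1.8]

# After normalization, both series have mean zero and standard deviation one.
cpi_z = normalize(cpi_forecasts)
breakeven_z = normalize(breakeven_forecasts)
```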
Figure 1 highlights a few main points. First, the inflation forecasts for most months fall within a band of +/- two standard deviations. That band may seem normal (no pun intended), but the important feature is that it affords a relatively varied dataset on which to base inflation forecasts. A data scientist can evaluate which measures prove the most stable over time and/or test how each forecast affects individual securities. More importantly, the dispersion emphasizes that putting undue weight on point estimates risks confusing rather than illuminating the inflation picture.
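One simple version of that stability check is to compare each series' dispersion over the sample and pick the lowest. A sketch, using invented series names and values:

```python
from statistics import pstdev

def most_stable(series_by_name):
    """Return the name of the forecast series with the lowest dispersion."""
    return min(series_by_name, key=lambda name: pstdev(series_by_name[name]))

# Hypothetical normalized forecast series; the tight "survey" series wins here.
series = {
    "survey":    [0.1, -0.1, 0.0, 0.2, -0.2],
    "breakeven": [0.8, -0.9, 1.1, -1.2, 0.5],
}
winner = most_stable(series)
```

A production version would use a rolling window rather than the full sample, so a measure that was stable years ago but noisy recently gets penalized.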
Second, the median forecast during 2017 has hovered around its long-term value (zero), though it dipped materially during the month of October. It seems premature to assume that this one-month trend will continue, but the dip seems inconsistent with the Phillips curve notion that inflation expectations have ticked higher due to tightening labor market conditions.
Finally, the distribution of inflation forecasts during 2017 seems to have a negative skew. The Fed professes to have a “symmetric” two percent inflation target, so it tries to manage above- and below-target expectations equivalently. Negatively skewed inflation expectations imply that more market participants believe the Fed has become too hawkish than too dovish on inflation. As a result, the Fed’s actions may surprise the market in ways that adversely affect some asset prices.
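The skew claim can be checked directly on any set of normalized forecasts. A minimal Fisher-Pearson skewness sketch in Python (the sample values are made up; a long left tail yields a negative result):

```python
from statistics import mean

def skewness(xs):
    """Population (Fisher-Pearson) skewness: negative means a longer left tail."""
    n = len(xs)
    m = mean(xs)
    m2 = sum((x - m) ** 2 for x in xs) / n   # variance
    m3 = sum((x - m) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5

# Hypothetical normalized forecasts: a few deeply dovish outliers drag the skew negative.
forecasts = [-3.0, -1.0, 0.0, 0.5, 1.0]
```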
Chair Yellen summarized the Fed’s bewilderment candidly in a September press conference:
“Now, I recognize and it’s important that inflation has been running under our two percent objective for a number of years, and that is a concern, particularly if it were to translate into lower inflation expectations. For a number of years there were very understandable reasons for that shortfall, and they included quite a lot of slack in the labor market—which, my judgment would be, has largely disappeared—very large reductions in energy prices, and a large appreciation of the dollar that lowered import prices starting in mid-2014. This year, the shortfall of inflation from 2 percent, when none of those factors is operative, is more of a mystery, and I will not say that the Committee clearly understands what the causes are of that.”
Asset allocators may not know the causes either, but with the proper data they can construct a better, multi-angle picture of how inflation may change. That data-driven approach seems to beat the tried, tested, and frequently failed alternative of relying too heavily on theory.