BLS

Long story short: really, really bad. But there is an optimistic remark coming: we are almost as good at forecasting as we are at counting. In other words, we are bad at both, and should question the reliability of monthly employment figures more heavily.

Counting - having 10 fingers is not enough

Yesterday (06/05/2016), the US Bureau of Labor Statistics published some of the figures that markets await every month. The one that receives the most scrutiny, the "Non-farm payroll" employment creation, counts the number of new jobs created during the previous month. Let's have a look at the first and last paragraphs of the press release:

Total nonfarm payroll employment increased by 160,000 in April, and the unemployment rate was unchanged at 5.0 percent, the U.S. Bureau of Labor Statistics reported today. Job gains occurred in professional and business services, health care, and financial activities. Job losses continued in mining.

The change in total nonfarm payroll employment for February was revised from +245,000 to +233,000, and the change for March was revised from +215,000 to +208,000. With these revisions, employment gains in February and March combined were 19,000 less than previously reported. Over the past 3 months, job gains have averaged 200,000 per month.

Source: US Bureau of Labor Statistics

The last paragraph is an elegant warning to the reader: last month's figures were wrong, by a tiny 3% (from +215,000 down to +208,000). And the figures for the month before were also overstated, by a mere 5% (from +245,000 down to +233,000). Let's see whether those revisions are a one-off miscalculation or a wider phenomenon.
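
As a sanity check, those two percentages follow directly from the revision numbers quoted in the press release above; here is the quick arithmetic in Python:

```python
# Revision sizes implied by the press release quoted above.
march = (215_000 - 208_000) / 215_000     # revision of the March figure
february = (245_000 - 233_000) / 245_000  # revision of the February figure
print(f"March revision:    {march:.1%}")     # ~3.3%
print(f"February revision: {february:.1%}")  # ~4.9%
```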

Statistic                      Init vs M+1   Init vs M+2
Average difference                 11%           15%
Standard deviation of drift        13%           17%
Minimum deviation                   1%            2%
Maximum deviation                  33%           43%

So. Over the 30 months for which we have all three data points (the first announcement, the first revision, and the second revision), 6,890,000 jobs have been created. That is great news. But that is also about half a million, or 8%, more than initially announced. While we should certainly welcome a conservative approach to statistics and surveys, an 8% error is pretty large. And it gets much larger if we look at the average absolute drift, which prevents overestimations and underestimations from compensating one another: on average, the figure initially announced by the BLS (Bureau of Labor Statistics) is off by 15%!
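
For readers who want to reproduce this kind of check, here is a minimal sketch of the computation, assuming the announcement vintages are loaded as arrays. The five values below are placeholders, not the actual series, which can be pulled from the BLS website:

```python
import numpy as np

# Placeholder vintages of the NFP series, in thousands of jobs.
# The real series is available from the BLS (bls.gov).
initial = np.array([160, 215, 245, 151, 292])     # first announcement
revised_m2 = np.array([123, 208, 233, 168, 271])  # second revision (M+2)

# Absolute percent deviation of the first announcement from the M+2 figure.
drift = np.abs(initial - revised_m2) / revised_m2
print(f"Average difference: {drift.mean():.0%}")
print(f"Std dev of drift:   {drift.std(ddof=1):.0%}")
print(f"Min deviation:      {drift.min():.0%}")
print(f"Max deviation:      {drift.max():.0%}")

# Cumulative gap: signed, so over- and underestimations offset each other.
gap = revised_m2.sum() / initial.sum() - 1
print(f"Total revised vs total announced: {gap:+.0%}")
```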

And, just to visualize how poor an estimate the first figure is, here is a graph showing the total change in the NFP (Non-Farm Payroll) figure from the first publication to the second revision:

[Figure: drift in the NFP figure between first publication and second revision]

Source: BLS, own calculations

How about forecasting?

It sure is hard to count jobs. But how are the analysts doing? They also consistently fail. Here is the same graph as before, but also displaying the difference between the revised NFP figure (which we assume to be the 'true' figure) and the analysts' forecasts, in the form of the 'consensus' survey conducted by Bloomberg (we used the median of the survey):

[Figure: analyst consensus drift against the revised NFP figure]

Sources: Bloomberg, BLS

The trend is quite clear: the analysts are bearish and tend to underestimate job creation. In 19 of the 30 observations in our sample (63%), they underestimated the figure. Yet the graph clearly shows something else: analysts are especially bad at predicting extreme events, in particular bad news. In our sample, for the month of February 2015, their estimate was 106% above the true figure (i.e. twice as large), and 94% higher than the BLS's first estimate. Here are a few statistics to give a better view of the phenomenon:

Statistic                      Consensus vs init   Consensus vs M+2
Average difference                    28%                 27%
Standard deviation of drift           33%                 29%
Minimum deviation                      1%                  1%
Maximum deviation                    166%                135%

The analysts didn't do a very good job. On average, their estimate is almost 30% off. While they had a few good months, they had some catastrophic ones. The standard deviation of the difference between their estimate and the actual figure is also very high. Interestingly, the analysts' estimate is generally closer to the final figure than to the first announced one.
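
The same type of computation yields the consensus statistics, this time comparing the survey median to both the first announcement and the M+2 revision. Again a sketch with placeholder arrays, not the actual Bloomberg or BLS data:

```python
import numpy as np

# Placeholder series, in thousands of jobs; not the actual data.
consensus = np.array([200, 230, 240, 190, 225])  # survey median forecast
initial   = np.array([160, 215, 245, 151, 292])  # first BLS announcement
revised   = np.array([123, 208, 233, 168, 271])  # second revision (M+2)

def deviation_stats(forecast, actual):
    """Absolute percent deviation of a forecast from a reference series."""
    d = np.abs(forecast - actual) / actual
    return d.mean(), d.std(ddof=1), d.min(), d.max()

for label, ref in [("vs init", initial), ("vs M+2 ", revised)]:
    mean, std, lo, hi = deviation_stats(consensus, ref)
    print(f"Consensus {label}: avg {mean:.0%}, std {std:.0%}, "
          f"min {lo:.0%}, max {hi:.0%}")

# Share of months in which the consensus underestimated job creation.
print(f"Underestimated: {(consensus < revised).mean():.0%} of months")
```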

Analysts provide estimates that are consistent with trends. The median of the surveys displays low volatility: the standard deviation (a measure of dispersion) of the revised NFP figures is more than twice that of the analysts' forecasts (69 vs 28). If analysts are only good at predicting a figure consistent with the trend, then we might as well use the 3- or 6-month average instead, as sketched below. Or simply refrain from forming expectations that are consistently proven wrong, month after month, year after year.
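
To make the trend-following point concrete, here is a sketch comparing the dispersion of the two series and a naive trailing-average forecast, reusing the same placeholder arrays as above:

```python
import numpy as np

revised   = np.array([123, 208, 233, 168, 271])  # placeholders, not real data
consensus = np.array([200, 230, 240, 190, 225])

# The article's sample gives roughly 69 vs 28; placeholders won't reproduce that.
print(f"Std of revised NFP: {revised.std(ddof=1):.0f}")
print(f"Std of consensus:   {consensus.std(ddof=1):.0f}")

# A naive alternative: forecast month t with the average of the three
# previous revised figures (trailing 3-month average).
trailing = np.convolve(revised, np.ones(3) / 3, mode="valid")[:-1]
error = np.abs(trailing - revised[3:]) / revised[3:]
print(f"Naive 3-month-average forecast error: {error.mean():.0%}")
```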

Take-aways

  • BLS's figures poorly reflect reality, and the announcement effect around their publication is overrated.
  • Analysts do a very poor job at forecasting NFP figures.
  • Analysts are easily caught by surprise when out-of-the-ordinary events happen.