Forecasting the unemployment rate using the degree of agreement in consumer unemployment expectations

Abstract

This study aims to refine unemployment forecasts by incorporating the degree of consensus in consumers’ expectations. With this objective, we first model the unemployment rate in eight European countries using the step-wise algorithm proposed by Hyndman and Khandakar (J Stat Softw 27(3):1–22, 2008). The selected optimal autoregressive integrated moving average (ARIMA) models are then used to generate out-of-sample recursive forecasts of the unemployment rates, which are used as a benchmark. Finally, we replicate the forecasting experiment including as predictors both an indicator of unemployment, based on the degree of agreement in consumer unemployment expectations, and a measure of disagreement based on the dispersion of expectations. In both cases, we obtain an improvement in forecast accuracy in most countries. These results reveal that the degree of agreement in consumers’ expectations contains useful information to predict unemployment rates, especially for the detection of turning points.

Introduction

Unemployment is a key macroeconomic variable and is crucial for economic planning. The Great Recession of 2008 and the euro debt crisis have had an important effect on the evolution of unemployment in Europe, although there are large cross-country differences. While in some countries the unemployment rate increased and peaked shortly after, in other countries it increased steadily and has remained very high. The differences are also substantial in quantitative terms. While countries such as Germany and the Netherlands show very low unemployment rates, Mediterranean countries still show very high unemployment rates in spite of large decreases in recent years. These differences across countries have led us to analyse whether the inclusion of the degree of agreement in consumers’ expectations can help improve unemployment rate forecasts in a set of eight long-term member states that comprises this diversity (Austria, Germany, France, Italy, Greece, Portugal, the Netherlands, and the United Kingdom).

While several authors have addressed the effect of online job searches on unemployment forecasts (Askitas and Zimmermann 2009; D’Amuri and Marcucci 2017; Vicente et al. 2015), unemployment expectations have been overlooked. This study aims to fill this gap by incorporating information coming from qualitative surveys. In recent years, evidence has been found that consumer expectations contain valuable information for improving employment predictions (Abberger 2007; Claveria et al. 2007; Hutter and Weber 2015; Lehmann and Wohlrabe 2017). These studies focus on a single country, namely Germany, and compare the role of different employment indicators. Other studies, such as that of Lehmann and Weyh (2016), focus on the role of survey expectations aggregated in balances. Direct measures of economic expectations can only be derived from economic tendency surveys, in which respondents are asked whether they expect a certain variable to increase, to decrease or to remain constant. Survey results are usually presented in the form of balances, obtained as the difference between the percentage of respondents who expect an increase and the percentage of those who expect a decrease.

In this study we aim to refine forecasts of the unemployment rate by including survey expectations through a new aggregation procedure. The approach is based on a positional metric of consensus proposed by Claveria (2019) that gives the percentage of agreement in expectations for questions with five response categories. This measure presents several advantages over the balance: it is directly interpretable, and it makes it possible to incorporate the information coming from respondents who do not expect any major change in the variable. On the one hand, we adapt the statistic for questions with three response options. On the other hand, we compute the indicator in eight European countries and evaluate whether it helps to improve the accuracy of unemployment rate forecasts, comparing its performance to Bachmann et al.’s (2013) disagreement indicator.

With this aim, we select the optimal autoregressive integrated moving average (ARIMA) model for each country. We then design two independent out-of-sample recursive forecasting experiments. In the first, we generate univariate predictions of the unemployment rates, which serve as a benchmark. In the second, we extend the model to include the consensus-based indicator as a predictor in a dynamic regression model, in order to test whether forecast accuracy improves.

The study proceeds as follows. In Sect. 2, we present the methodology. In Sect. 3, data are described. In the next section, results of the out-of-sample forecasting experiments are discussed. Finally, Sect. 5 concludes.

Methods

First, we propose a variation of the conceptual framework of Claveria (2019) to compute the degree of consensus among survey respondents for three reply options instead of five. The main motivation for this adaptation is that survey respondents are mostly given three response categories: they are usually asked whether they expect a certain variable to rise (Pt), to fall (Mt) or to remain constant (Et). Lolić and Sorić (2018) and Zuckarelli (2015) show that the number of response categories is crucial for the forecast accuracy of quantified expectations. The original framework is based on Saari’s (2008) geometric approach to determining the likelihood of disagreement among election outcomes.

The most common way of presenting survey data is the balance, obtained as Pt − Mt. As consumer surveys contain three additional response categories (the extreme options “sharp increase” and “sharp fall”, and a “do not know” category), we opt for grouping all positive responses in Pt, all negative ones in Mt, and incorporating the “do not know” share in Et. By doing so, instead of working with a four-dimensional simplex, represented as a regular pentagon in Claveria (2019), we can project survey responses onto a two-dimensional simplex in the form of an equilateral triangle (Fig. 1).

Fig. 1

Simplex—equilateral triangle. The three reply options are P (% of “increase” replies), M (% of “fall”), and E (% of “remains constant”). The blue point in the simplex corresponds to a unique convex combination of the three reply options for a given period in time

The omission of Et in the calculation of the balance implies a loss of information about the degree of certainty of the respondents. To overcome this limitation, the presented methodological framework allows us to derive a measure of consensus that explicitly incorporates the share of neutral responses.

As the three reply options sum to 100, a natural representation of the aggregated shares of responses is as a point on a simplex (Coxeter 1969). Golan and Perloff (2004) design a nonparametric method to forecast the US unemployment rate based on a higher-dimensional nearest-neighbours approach, in which a simplex is composed around the point to forecast.

The interior of the simplex in Fig. 1 encompasses all possible combinations of reply options, which correspond to the barycentric coordinates of each point at a given time t. Given that all vertices are at the same distance from the centre of the simplex, the ratio between the distance of a point to the barycentre and the distance from the barycentre to a vertex gives a measure of agreement, which can be formalised as:

$$ C_{t} = \frac{\sqrt{\left(P_{t} - 33.\overline{3}\right)^{2} + \left(E_{t} - 33.\overline{3}\right)^{2} + \left(M_{t} - 33.\overline{3}\right)^{2}}}{\sqrt{2/3}} $$
(1)

This consensus measure reaches its maximum (100%) when one reply option draws all the responses, and its minimum of zero when the answers are evenly distributed among the three response categories, which corresponds to the centre of the simplex. By smoothing this consensus metric with a simple moving average and scaling it by means of a rolling regression, we obtain a proxy of the unemployment rate, which we later include as a predictor in the benchmark model.
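As an illustration, Eq. (1) translates directly into a few lines of Python. This is a sketch of the statistic only (the function name is ours, and the smoothing and rolling-regression scaling steps described above are omitted); the three shares are assumed to be percentages summing to 100:

```python
import math

def consensus(p, e, m):
    """Percentage of agreement among the three reply shares (Eq. 1).

    p, e, m: percentages of "increase", "no change" and "fall" replies,
    assumed to sum to 100. Returns 100 when one option draws all the
    replies and 0 when the replies are spread evenly across the three
    categories (the centre of the simplex).
    """
    third = 100.0 / 3.0
    # Euclidean distance from the barycentre of the simplex...
    dist = math.sqrt((p - third) ** 2 + (e - third) ** 2 + (m - third) ** 2)
    # ...scaled by the barycentre-to-vertex distance for unit shares
    return dist / math.sqrt(2.0 / 3.0)
```

For instance, a month in which 80% of consumers expect unemployment to rise, 15% expect no change and 5% expect a fall yields a consensus of roughly 70%.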

In a similar vein, Bachmann et al. (2013) proposed a measure of disagreement based on the dispersion of respondents’ expectations, computed from the shares of positive and negative responses:

$$ DISP_{t} = \sqrt{P_{t} + M_{t} - \left(P_{t} - M_{t}\right)^{2}} $$
(2)

The authors applied this measure to the forward-looking survey question related to the expectations of domestic production activities in Germany at the micro level in order to proxy economic uncertainty. Since then, measures of disagreement among survey expectations have been increasingly used to proxy economic uncertainty (Claveria et al. 2019; Girardi and Reuter 2017; Glas and Hartmann 2016; Mokinski et al. 2015).
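Eq. (2) can be sketched in pure Python as follows. Note one assumption of this illustration: the reply shares are expressed as fractions of one (e.g. 0.4 for 40%), since otherwise the term under the square root can become negative:

```python
import math

def disagreement(p, m):
    """Dispersion measure of Bachmann et al. (2013), Eq. 2.

    p, m: shares of "increase" and "fall" replies expressed as
    fractions of one (an assumption of this sketch). The measure is
    zero when everyone agrees on one directional answer and largest
    when respondents split evenly between the two directions.
    """
    return math.sqrt(p + m - (p - m) ** 2)
```

For example, a 50/50 split between "increase" and "fall" gives maximal disagreement of 1, while unanimity on "increase" gives 0.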

Both indicators are included as predictors in an extension of the benchmark ARIMA model (Box and Jenkins 1970):

$$ y_{t}^{\lambda} = \frac{\Theta_{s}\left(L^{s}\right)\,\theta\left(L\right)}{\Phi_{s}\left(L^{s}\right)\,\phi\left(L\right)\,\Delta_{s}^{D}\,\Delta^{d}}\,\varepsilon_{t} $$
(3)

where \( \Theta_{s}\left(L^{s}\right) = 1 - \Theta_{s}L^{s} - \Theta_{2s}L^{2s} - \cdots - \Theta_{Qs}L^{Qs} \) and \( \Phi_{s}\left(L^{s}\right) = 1 - \Phi_{s}L^{s} - \Phi_{2s}L^{2s} - \cdots - \Phi_{Ps}L^{Ps} \) are the seasonal moving average (MA) and autoregressive (AR) polynomials, and \( \theta\left(L\right) = 1 - \theta_{1}L - \theta_{2}L^{2} - \cdots - \theta_{q}L^{q} \) and \( \phi\left(L\right) = 1 - \phi_{1}L - \phi_{2}L^{2} - \cdots - \phi_{p}L^{p} \) are the regular MA and AR polynomials. The innovation εt is assumed to be white noise. Lambda (λ) is the parameter of the Box–Cox transformation (Box and Cox 1964), \( \Delta_{s}^{D} \) is the seasonal difference operator, \( \Delta^{d} \) the regular difference operator, and s the periodicity of the time series.

In the next step, the model is extended by including the agreement- and disagreement-based unemployment proxies as independent variables. This specification is usually referred to as an ARIMAX or dynamic regression model; it exploits the autocorrelation that may be present in the residuals of the regression to improve the accuracy of the forecasts.
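The intuition can be illustrated with a deliberately minimal, hypothetical dynamic regression: a pure-Python least-squares fit of the target on a constant, its own first lag and an exogenous indicator. This is only a sketch of the idea; the models actually estimated in the paper are full seasonal ARIMA specifications with Box–Cox transformed data:

```python
def fit_arx1(y, x):
    """Least-squares fit of a minimal dynamic regression,
        y[t] = c + phi * y[t-1] + beta * x[t] + e[t],
    illustrating the ARIMAX idea: the lagged dependent variable soaks
    up autocorrelation that a static regression on x alone would leave
    in the residuals. Returns the estimates (c, phi, beta)."""
    rows = [[1.0, y[t - 1], x[t]] for t in range(1, len(y))]
    target = [y[t] for t in range(1, len(y))]
    # normal equations: (X'X) b = X'y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * target[k] for k, r in enumerate(rows)) for i in range(3)]
    # solve the 3x3 system by Gauss-Jordan elimination with pivoting
    a = [xtx[i] + [xty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [a[r][j] - f * a[col][j] for j in range(4)]
    return tuple(a[i][3] / a[i][i] for i in range(3))
```

Fed a noiseless series generated from known coefficients, the fit recovers them exactly, which is a convenient sanity check of the estimator.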

Finally, we generate out-of-sample recursive forecasts and compare the forecast accuracy of both models by means of the mean absolute percentage error (MAPE), a scale-independent measure that scales the absolute forecast error et by the actual value of the variable at each point in time:

$$ MAPE = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{e_{t}}{Y_{t}}\right| $$
(4)

where Yt denotes the observation at time t. The fact that we are dealing with strictly positive data and comparing countries with different unemployment levels makes the MAPE particularly suitable in this case (Hyndman and Koehler 2006).
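Eq. (4) translates directly into code. The sketch below assumes strictly positive actual values, as is the case for unemployment rates:

```python
def mape(actual, forecast):
    """Mean absolute percentage error (Eq. 4).

    actual: strictly positive observed values Y_t.
    forecast: predictions of the same length.
    Returns the error as a percentage.
    """
    n = len(actual)
    return 100.0 / n * sum(abs((a - f) / a) for a, f in zip(actual, forecast))
```

For example, forecasting 9.0 and 11.0 against actual rates of 10.0 and 10.0 gives a MAPE of 10%.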

Data

The empirical analysis focuses on consumers’ expectations about the future evolution of unemployment, elicited from the seventh question of the Joint Harmonised EU Consumer Survey conducted by the European Commission, which can be freely downloaded (https://ec.europa.eu/info/business-economy-euro/indicators-statistics/economic-databases/business-and-consumer-surveys_en).

In the survey, consumers are asked how they expect the number of people unemployed in the country to change over the next 12 months. Surveys are conducted during the first 3 weeks of each month, and results are published at the end of each month, so expectations are available prior to the publication of official data, which makes them particularly useful for monitoring economic developments and short-term forecasting. The quantitative target variable is the unemployment rate. We use harmonised seasonally adjusted rates provided by the OECD (https://stats.oecd.org/index.aspx?queryid=36324).

The sample period goes from January 2007 to December 2016. We use 2017 to compute the out-of-sample forecast accuracy. As a robustness check, we replicate the pseudo out-of-sample experiment for 2012, when Greece and Portugal had already started to adopt the European Union emergency measures as a consequence of the effects of the euro debt crisis.

In Fig. 2 we summarise the main features of the unemployment rate and the consensus metric described in Eq. 1 for the eight European countries included in the analysis. The percentages of consensus are quite homogenous across countries, with Greece and Portugal showing the highest average levels of agreement among consumers. We also observe a positive relation between average levels of unemployment and consensus. This notion is further confirmed in Fig. 3, where we graph the evolution of both the unemployment rate and the percentage of consensus, using as a backdrop the distribution of the three response categories during the sample period.

Fig. 2

Descriptive analysis

Fig. 3

Evolution of unemployment rates, consensus and survey responses by category. The blue area represents the evolution of the % of “increase” responses (P) regarding the expected level of unemployment over the next months, the grey area the % of “fall” responses (M), and the white area the % of “no-change” responses (E). The black line represents the evolution of the unemployment rate in each country (secondary axis) and the dashed black line the evolution of the % of agreement among consumers (main axis)

Results and discussion

In this section we present the results of the two independent out-of-sample forecasting experiments. In the first one, we generate predictions of the unemployment rate with the ARIMA model used as a benchmark for each country. Model selection is carried out by means of the step-wise algorithm proposed by Hyndman and Khandakar (2008) implemented in R (Table 1). As order selection for ARIMA models may entail a subjective element, this automatic procedure selects the model by combining unit root tests with the minimisation of the Akaike information criterion (AIC). It uses a variation of the Canova–Hansen test (Canova and Hansen 1995) to select the order of seasonal integration (D), and successive KPSS unit root tests (Kwiatkowski et al. 1992) to choose the order of regular integration (d).

Table 1 Model selection

In order to traverse the space of models efficiently and arrive at the model with the lowest AIC, the authors propose a two-step algorithm. In the first step, depending on the periodicity of the series, the model with the smallest AIC is selected out of four initial candidates. In the second, thirteen variations of the selected model are considered, allowing p, q, P and Q to vary. The procedure is repeated until no model with a lower AIC can be found. The algorithm imposes several constraints on the fitted models to avoid convergence problems or near unit roots (upper bounds on p, q, P and Q, and rejection of models that are close to non-invertible or non-causal). For more details, see Hyndman and Khandakar (2008).
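The logic of the stepwise search can be sketched in a few lines of Python, abstracting the model fitting behind a caller-supplied `aic` function (a hypothetical interface of our own; the actual algorithm also varies the integration orders and applies the invertibility and causality checks):

```python
def stepwise_search(aic, max_order=5):
    """Toy version of the Hyndman-Khandakar stepwise idea.

    Start from a small set of candidate (p, q) orders, then repeatedly
    evaluate one-step variations of the current best model, keeping any
    move that lowers the AIC, until no neighbour improves.
    `aic` maps an order tuple (p, q) to its information-criterion value.
    """
    candidates = [(0, 0), (1, 1), (2, 2), (0, 1)]
    best = min(candidates, key=aic)
    improved = True
    while improved:
        improved = False
        p, q = best
        neighbours = [(p + dp, q + dq)
                      for dp in (-1, 0, 1) for dq in (-1, 0, 1)
                      if (dp, dq) != (0, 0)
                      and 0 <= p + dp <= max_order
                      and 0 <= q + dq <= max_order]
        cand = min(neighbours, key=aic)
        if aic(cand) < aic(best):
            best = cand
            improved = True
    return best
```

With a toy criterion whose minimum lies at (3, 2), the search hill-climbs from the initial candidates to that order and stops.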

We then augment the benchmark ARIMA model by incorporating the consensus-based unemployment indicator (ARIMAX_1) and the disagreement indicator (ARIMAX_2) described in Sect. 2 as predictors. To evaluate the forecasting performance of the models we calculate the MAPE for the out-of-sample period (2017.01–2017.12), and replicate the experiment for 2012.01–2012.12. To test whether the reduction in MAPE of the two augmented models relative to the benchmark is statistically significant, we compute the Diebold–Mariano (DM) statistic of predictive accuracy (Diebold and Mariano 1995). The null hypothesis of the test is that the difference in accuracy between the two competing models is non-significant. A negative sign of the statistic implies that the second model has larger forecast errors.
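For one-step-ahead forecasts with squared-error loss, a simplified version of the DM statistic can be computed as below. This sketch ignores the autocorrelation correction of the loss differential that the full test applies at longer horizons:

```python
import math

def diebold_mariano(e1, e2):
    """Simplified Diebold-Mariano statistic for h = 1, squared-error loss.

    e1, e2: forecast-error series of the two competing models.
    The loss differential is d_t = e1_t^2 - e2_t^2, so a negative
    statistic indicates that the first model has the smaller losses
    on average (i.e. the second model has larger forecast errors).
    """
    d = [a ** 2 - b ** 2 for a, b in zip(e1, e2)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / n
    return mean / math.sqrt(var / n)
```

The statistic is compared against standard normal critical values; its sign convention matches the one described above.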

Table 2 contains the results of the forecasting comparison. The first three columns contain the MAPE values for each of the models. The last two columns present the results of the DM test comparing the loss function of the errors of the benchmark ARIMA model to the loss function of the errors of each augmented model. The model that shows the best forecasting performance is the ARIMAX with the consensus-based unemployment indicator, which for 2017 obtains lower MAPE values than the ARIMA in all countries except Austria, and for 2012 in all countries except Italy, Greece and the Netherlands.

Table 2 Out-of-sample forecast accuracy—MAPE and DM test

In Figs. 4 and 5 we compare the evolution of the forecasts generated with the ARIMA model and with the ARIMAX model that includes the consensus-based unemployment indicator. For 2017, the lowest MAPE values are obtained for Italy, Greece and the United Kingdom, where predictions incorporating information on the degree of agreement of expectations help to detect in advance a turning point in the evolution of the unemployment rate.

Fig. 4

Unemployment forecasts—ARIMA vs. ARIMAX. The red line represents the evolution of the unemployment rate in each country, the blue line the forecasts of the unemployment rate, and the vertical green segments the 95% confidence interval of the out-of-sample predictions. ARIMAX stands for the augmented ARIMA model including as a predictor the consensus-based unemployment indicator proposed by Claveria (2019)

Fig. 5

Unemployment forecasts—ARIMA vs. ARIMAX. The red line represents the evolution of the unemployment rate in each country, the blue line the forecasts of the unemployment rate, and the vertical green segments the 95% confidence interval of the out-of-sample predictions. ARIMAX stands for the augmented ARIMA model including as a predictor the consensus-based unemployment indicator proposed by Claveria (2019)

These results are in line with those of Abberger (2007) and Lehmann and Wohlrabe (2017), who find that consumers’ employment and unemployment expectations, respectively, exhibit high forecasting accuracy in Germany. Martinsen et al. (2014) and Österholm (2010) also find evidence that unemployment expectations help to improve unemployment forecasts in Norway, Sweden and Finland. Conversely, Soybilgen and Yazgan (2018) do not find the Consumer Confidence Index to play an important role in nowcasting the unemployment rate in Turkey.

Concluding remarks

This paper assesses the predictive value of a novel measure of consensus among agents’ expectations. This metric presents several advantages over alternative aggregation procedures of qualitative survey expectations. On the one hand, as it gives the percentage of agreement among respondents, it is directly interpretable. On the other hand, it allows incorporating information from respondents who do not expect any change in questions with three response options.

The study analyses whether including information on the degree of agreement among consumers’ expectations helps to refine forecasts of the unemployment rate in eight European countries. First, we generate out-of-sample recursive forecasts with ARIMA models that are used as a benchmark. Then, we include the consensus-based unemployment indicator and a measure of disagreement as predictors in the models. Our results show that the proposed indicator leads to an improvement in forecast accuracy in most countries. The indicator of disagreement also helps refine predictions. These findings allow us to conclude that both the level of agreement and the degree of disagreement in consumer unemployment expectations contain useful information for forecasting unemployment rates, especially for the early detection of turning points.

The variation of improvements across countries reflects not only differences in the explanatory power of the agreement indicators used as predictors, but also other country-specific factors that capture the heterogeneity of the respective labour markets and of the predictive capacity of consumers. Extending the analysis to control for some of these factors is left for further research.

References

  1. Abberger, K.: Qualitative business surveys and the assessment of employment—a case study for Germany. Int. J. Forecast. 23(2), 249–258 (2007)

  2. Askitas, N., Zimmermann, K.F.: Google econometrics and unemployment forecasting. Appl. Econ. Q. 55(2), 107–120 (2009)

  3. Bachmann, R., Elstner, S., Sims, E.R.: Uncertainty and economic activity: evidence from business survey data. Am. Econ. J. Macroecon. 5(2), 217–249 (2013)

  4. Box, G.E.P., Cox, D.R.: An analysis of transformations. J. R. Stat. Soc. Ser. B 26(2), 211–252 (1964)

  5. Box, G.E.P., Jenkins, G.M.: Time series analysis: forecasting and control. Holden-Day, San Francisco (1970)

  6. Canova, F., Hansen, B.E.: Are seasonal patterns constant over time? A test for seasonal stability. J. Bus. Econ. Stat. 13(3), 237–252 (1995)

  7. Claveria, O.: A new consensus-based unemployment indicator. Appl. Econ. Lett. (2019, in press)

  8. Claveria, O., Monte, E., Torra, S.: Economic uncertainty: a geometric indicator of discrepancy among experts’ expectations. Soc. Indic. Res. (2019, in press)

  9. Claveria, O., Pons, E., Ramos, R.: Business and consumer expectations and macroeconomic forecasts. Int. J. Forecast. 23(1), 47–69 (2007)

  10. Coxeter, H.S.M.: Introduction to geometry, 2nd edn. Wiley, London (1969)

  11. D’Amuri, F., Marcucci, J.: The predictive power of Google searches in forecasting US unemployment. Int. J. Forecast. 33(4), 801–816 (2017)

  12. Diebold, F.X., Mariano, R.S.: Comparing predictive accuracy. J. Bus. Econ. Stat. 13(3), 253–263 (1995)

  13. Girardi, A., Reuter, A.: New uncertainty measures for the euro area using survey data. Oxf. Econ. Pap. 69(1), 278–300 (2017)

  14. Glas, A., Hartmann, M.: Inflation uncertainty, disagreement and monetary policy: evidence from the ECB Survey of Professional Forecasters. J. Empir. Financ. 39(Part B), 215–228 (2016)

  15. Golan, A., Perloff, J.M.: Forecasts of the U.S. unemployment rate using a nonparametric method. Rev. Econ. Stat. 86(1), 433–438 (2004)

  16. Hutter, C., Weber, E.: Constructing a new leading indicator for unemployment from a survey among German employment agencies. Appl. Econ. 47(33), 3540–3558 (2015)

  17. Hyndman, R.J., Khandakar, Y.: Automatic time series forecasting: the forecast package for R. J. Stat. Softw. 27(3), 1–22 (2008)

  18. Hyndman, R.J., Koehler, A.B.: Another look at measures of forecast accuracy. Int. J. Forecast. 22(4), 679–688 (2006)

  19. Kwiatkowski, D., Phillips, P.C.B., Schmidt, P., Shin, Y.: Testing the null hypothesis of stationarity against the alternative of a unit root. J. Econom. 54(1–3), 159–178 (1992)

  20. Lehmann, R., Weyh, A.: Forecasting employment in Europe: are survey results helpful? J. Bus. Cycle Res. 12(1), 81–117 (2016)

  21. Lehmann, R., Wohlrabe, K.: Experts, firms, consumers or even hard data? Forecasting employment in Germany. Appl. Econ. Lett. 24(4), 279–283 (2017)

  22. Lolić, I., Sorić, P.: A critical re-examination of the Carlson–Parkin method. Appl. Econ. Lett. 25(19), 1360–1363 (2018)

  23. Martinsen, K., Ravazzolo, F., Wulfsberg, F.: Forecasting macroeconomic variables using disaggregate survey data. Int. J. Forecast. 30(1), 65–77 (2014)

  24. Mokinski, F., Sheng, X., Yang, J.: Measuring disagreement in qualitative expectations. J. Forecast. 34(5), 405–426 (2015)

  25. Österholm, P.: Improving unemployment rate forecasts using survey data. Finn. Econ. Pap. 23, 16–26 (2010)

  26. Saari, D.G.: Complexity and the geometry of voting. Math. Comput. Model. 48(9–10), 551–573 (2008)

  27. Soybilgen, B., Yazgan, E.: Evaluating nowcasts of bridge equations with advanced combination schemes for the Turkish unemployment rate. Econ. Model. 72, 99–108 (2018)

  28. Vicente, M.R., López-Menéndez, A.J., Pérez, R.: Forecasting unemployment with internet search data: does it help to improve predictions when job destruction is skyrocketing? Technol. Forecast. Soc. Chang. 92, 132–139 (2015)

  29. Zuckarelli, J.: A new method for quantification of qualitative expectations. Econ. Bus. Lett. 4(3), 123–128 (2015)


Authors’ contributions

All contributions to the manuscript were done by the sole author of the manuscript (OC). The author read and approved the final manuscript.

Acknowledgements

This research was supported by the project ECO2016-75805-R from the Spanish Ministry of Economy and Competitiveness. An earlier version of this paper was presented at the 13th ifo Dresden Workshop on Macroeconomics and Business Cycle Research (Dresden January 25–26, 2019). We wish to thank Robert Lehmann, Axel Lindner and the rest of the participants of the conference for their helpful comments and suggestions. We also want to thank the Editor and two anonymous referees for their useful comments and suggestions.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the European Commission (https://ec.europa.eu/info/business-economy-euro/indicators-statistics/economic-databases/business-and-consumer-surveys_en) and the OECD (https://stats.oecd.org/index.aspx?queryid=36324).

Funding

This research was supported by the project ECO2016-75805-R from the Spanish Ministry of Economy and Competitiveness.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Corresponding author

Correspondence to Oscar Claveria.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Claveria, O. Forecasting the unemployment rate using the degree of agreement in consumer unemployment expectations. J Labour Market Res 53, 3 (2019). https://doi.org/10.1186/s12651-019-0253-4


Keywords

  • Unemployment
  • Forecasting
  • Leading indicator
  • Economic tendency surveys
  • Consumer expectations

JEL Classification

  • C51
  • C52
  • C53
  • D12
  • E24