
Nonresponse trends in establishment panel surveys: findings from the 2001–2017 IAB establishment panel

Abstract

Many household panel surveys have experienced decreasing response rates and increasing risk of nonresponse bias in recent decades, but trends in response rates and nonresponse bias in business or establishment panel surveys are largely understudied. This article examines both panel response rates and nonresponse bias in one of the largest and longest-running establishment panels, the IAB Establishment Panel. Response rate trends are reported over a 17-year period for each annual cohort and rich administrative data are used to evaluate changes in nonresponse bias and test hypotheses regarding short-term and long-term panel participation. The findings show that while cumulative panel response rates have declined over time, wave-to-wave reinterview rates have remained largely stable. Reinterview nonresponse bias has also remained stable, while cumulative nonresponse bias has consistently increased within all cohorts. Larger establishments and those that experienced an interviewer change or did not answer all survey questions (item nonresponse) in a previous wave were less likely to continue participating in the panel. These findings and their practical implications are discussed in conclusion.

1 Introduction

Nonresponse is a ubiquitous problem in surveys and particularly in panel surveys as it accumulates over multiple time points. Nevertheless, panel surveys are important tools for capturing changes in the population over time. In addition to household panel surveys, business (or establishment) panel surveys provide valuable insights into changes in the economic situation of a country that inform policy debates. Examples of large-scale establishment panel surveys include the Current Employment Statistics (CES) of the U.S. Bureau of Labor Statistics (BLS) (Bureau of Labor Statistics 2023a), the Survey of Industrial and Service Firms (Invind) of the Bank of Italy (Banca d’Italia 2022), the Survey of Employment, Payrolls and Hours by Statistics Canada (Statistics Canada 2022), the Annual Enterprise Survey conducted by Stats New Zealand, and the German IAB Establishment Panel (IAB-EP) (Bechmann et al. 2021) of the Institute for Employment Research (IAB). These and other panel surveys measure key business statistics, including employee turnover, working hours, earnings, financial performance, and price changes by industry. In addition to well-established panels, newer establishment panels are being launched as fresh topics and research questions emerge. The COVID-19 pandemic, for example, propelled the creation of new panel surveys, including the Business Impact of Coronavirus Survey (BICS) by the Office for National Statistics of the United Kingdom (Gough et al. 2020) and the German survey “Establishments in the COVID-19 Crisis” of the IAB (Backhaus et al. 2021).

Governments and research institutes invest significant resources into creating these panel surveys, but their investments are at risk of being undermined when establishments do not respond to the survey requests or permanently drop out of the panel over time. Nonresponse not only diminishes the sample size and the precision of the resulting estimates but can also introduce bias if the participating sample no longer represents the intended target population (de Heer and de Leeuw 2002). While some studies have identified declining participation in voluntary cross-sectional establishment surveys (Pielsticker and Hiebl 2020; König et al. 2021; Küfner et al. 2022), the extent to which this trend applies to voluntary panel surveys, after establishments have initially agreed to join the panel, is currently unclear.

In the present study, we address this research gap by examining trends in panel survey participation over 17 years of the IAB-EP (2001–2017) for each yearly cohort that was recruited during this period. In addition to reporting response rate trends (including refusal and noncontact rates), comprehensive administrative data are used to estimate panel nonresponse bias and test several hypotheses of panel participation.

Our research questions are:

  1. Have panel response rates in the IAB Establishment Panel changed over time?

  2. What is the magnitude of panel nonresponse biases in the IAB Establishment Panel and how much have they changed over time?

  3. What factors influence short-term and long-term participation in the IAB Establishment Panel?

The remainder of this article proceeds as follows. Section 2 provides an overview of the theoretical response process of panel establishments and the current state of research on response rates, nonresponse bias, and correlates of participation in establishment surveys. Furthermore, hypotheses on the relationship between establishment-level characteristics and panel participation are formulated. In Sect. 3, the IAB-EP, administrative data, and explanatory variables are described. Section 4 describes the study methodology and Sect. 5 presents the study results. The article ends with a discussion in Sect. 6 and a conclusion in Sect. 7.

2 Theoretical background and state of research

2.1 Theory of participation in establishment panel surveys

The survey participation process of establishments has been the subject of research for several years (e.g., Willimack et al. 2002; Willimack and Nichols 2010). The current model of Willimack and Snijkers (2013) distinguishes between four groups of factors that influence an establishment’s decision to participate: the external environment, the business context, the respondent, and the survey design. Additional aspects of authority, capacity, and motivation to respond described in Tomaskovic-Devey et al. (1995) are included in the above groups. For empirical evaluations of theoretical participation models, the reader is referred to: Phipps and Jones (2007), Davis and Pihama (2009), Janik and Kohaut (2012), Phipps and Toth (2012), Earp et al. (2018), D’Aurizio and Papadia (2019), and König et al. (2021), among others.

There are some important differences between establishment and household surveys in the factors that influence their voluntary participation. First, establishments and households weigh the possible costs and benefits of survey participation in different ways. Households may be intrinsically motivated to participate in surveys and consider factors such as their interest in the topic, availability, perceived trust of the survey sponsor, and any financial or non-financial incentives offered (Groves and Couper 2012), whereas establishments must also consider how their participation affects their professional goals (e.g. making a profit), demands on staff time, and the allocation of resources to the response task. Second, identifying the most knowledgeable respondent, while relatively straightforward in household surveys, can be a challenge in establishment surveys where up to several employees may need to be involved in completing the questionnaire. Third, completing the questionnaire often requires that establishments consult their records, gather and process the relevant information, and perform arithmetic calculations, which are burdensome tasks that are not typical of household surveys.

In the panel survey context, it is necessary to consider additional factors that may influence the participation outcome, such as the frequency and regularity of data collection and the time between each data collection wave (Lepkowski and Couper 2002). For example, Bavdaž (2006) found that a high-frequency survey facilitated the identification of the same target respondent within the establishment and increased the likelihood of participation. In this context, organizational learning can be assumed to take place as the aspect of authority to participate is resolved and providing the survey information becomes more familiar to the establishment (Davis and Pihama 2009; Lemay 2009).

A good predictor of continued participation in a panel is the establishment’s prior experience with the panel (Smaill 2012; Davis and Pihama 2009). On the one hand, panel establishments can draw on their previous experience and familiarity with the survey: they know what to expect regarding the interview process (e.g., content, duration, communication) and can use the same record look-up procedures to facilitate data retrieval, thus reducing the burden of response in subsequent waves of data collection (Bavdaž 2006; Sudman et al. 2000). On the other hand, establishments that had a negative experience in a previous interview, or encountered a change in the usual data collection procedure, are less likely to participate again (Bergman and Brage 2008).

2.2 Participation rates in establishment panel surveys

Convincing establishments to repeatedly participate in a voluntary panel survey, such as the IAB-EP, poses particular challenges for survey organizations as establishments do not experience any penalty if they do not participate. Therefore, it is not surprising that response rates in voluntary surveys are generally lower than in mandatory ones (Paxson et al. 1995; Fisher et al. 2003; Petroni et al. 2004).

It is important to keep in mind that the establishment survey literature uses no standard definitions of panel response rates and often the Response Rate 1 (AAPOR 2023) definition is reported by treating each wave as a single cross-section and neglecting the panel context (e.g., König et al. 2021; Bureau of Labor Statistics 2023b). Table 1 presents such response rates for a selection of five voluntary establishment panel surveys from the U.S. and Germany. Every survey experienced a decreasing response rate over time. Some have been more affected than others, as the Job Openings and Labor Turnover Survey (JOLTS) experienced an average decline in the response rate of 3.8 percentage points per year, while the IAB Job Vacancy Survey (IAB-JVS) experienced a 0.4 percentage point drop per year.

Table 1 Temporal trends in response rates of five voluntary establishment panel surveys

Earlier documentation of panel response rates is reported by Petroni et al. (2004), who provided an overview of five establishment panel surveys conducted monthly by the U.S. Census Bureau (1991–2003) and the BLS (2001–2003). Most surveys showed a slightly declining response rate trend during their respective time periods. The response rates of the Census Bureau surveys declined by about 10 percentage points, on average, until 2000, but an abrupt increase of 10 percentage points occurred in 2001 due to a large survey redesign and the recruitment of a new refreshment sample. The authors noted that adding a new sample initially increased the response rate before it began to decrease again over time. Until 2000, the increase in the average response rate became smaller for more recent refreshment samples as their baseline response rates were generally lower than for older refreshment samples. The BLS surveys’ response rates experienced a decrease of around 5–10 percentage points over the three-year period, though they fluctuated over the years.

In the household survey literature, wave-to-wave reinterview rates are commonly reported (Wetzel 2003; Schoeni et al. 2013; Sakshaug and Huber 2016; Watson et al. 2018). These are also called conditional response rates (Cheshire et al. 2011; Bergmann and Scherpenzeel 2016) because they are calculated only among the respondents of the previous wave, namely the proportion of them who participated again in the subsequent wave. In the following, we refer to this wave-to-wave response rate as the reinterview rate. It is especially useful when analyzing changes in retention over successive waves.

Another response rate used in household panel surveys is the cumulative (or unconditional) panel response rate (Cheshire et al. 2011; Sakshaug and Huber 2016; Kroh et al. 2018), which accounts for the total sample loss due to nonresponse/attrition since the initial recruitment wave. While the wave-to-wave reinterview rate may fluctuate or increase over time, the cumulative rate usually declines over time and quantifies the full extent of panel attrition.

To the best of our knowledge, there is no article that reports wave-to-wave reinterview rates and cumulative response rates for establishment panel surveys. In the present study, we report both reinterview and cumulative response rate trends in the IAB-EP, and consider a long time period of 17 years. We also report refusal and noncontact rates, which are often neglected in the establishment panel survey literature.

2.3 Nonresponse bias in establishment panel surveys

Besides response rates, another important consideration in panel surveys is nonresponse bias. It occurs when participating establishments systematically differ from nonparticipating establishments on the key variables of interest. In panel surveys, attrition increases the risk that the respondent composition changes and becomes more selective over time (Groves et al. 2009). For this reason, nonresponse bias is a major concern in panel surveys, as it can reduce the accuracy of panel estimates.

Only a few studies have reported estimates of nonresponse bias in establishment surveys (e.g. Küfner et al. 2022; König et al. 2021; Kratzke 2013; Phipps and Toth 2012). Key findings from these studies suggest that older and smaller establishments are slightly more likely to participate than younger and larger establishments. Yet, no study has reported panel nonresponse bias trends, reflecting a neglected area of research. Analogous to the two response rate perspectives described above, nonresponse bias can be assessed under both perspectives by looking at wave-to-wave reinterview bias and cumulative nonresponse bias over the entire life of the panel. We contribute to the literature by considering both perspectives and use rich administrative data available for both respondents and nonrespondents to assess temporal trends in reinterview and cumulative nonresponse bias in the IAB-EP.

2.4 Hypotheses of panel survey participation

In addition to reporting temporal trends of response rates and nonresponse bias in the IAB-EP, we also test three hypotheses, derived from the survey response model of Willimack and Snijkers (2013), regarding factors that may influence whether establishments continue to participate in the panel.

The first hypothesis concerns the establishment size, measured as the number of employees, which is a key characteristic that is directly correlated with other attributes of the establishment, such as revenue. Large establishments are important for the representativeness of surveys and, given their small proportion in the population, are frequently invited to participate in surveys. However, frequent survey requests may result in large establishments being very selective in which voluntary requests they accept, if any (Thompson and Oliver 2012; McCarthy et al. 2017). Furthermore, a large number of employees is associated with a hierarchical structure that can make it challenging to collaborate on survey requests as more people are involved, in which case response burden may not be reduced with repeated participation (Bottone et al. 2021). Empirical evidence on the relationship between establishment size and survey participation is inconsistent, but the general consensus is that the relationship is mostly positive for mandatory surveys and negative for voluntary surveys (Thompson and Washington 2013; Davis and Pihama 2009; Phipps and Jones 2007). For example, Earp et al. (2018) showed for the OES that smaller establishments are more likely to participate in the initial panel wave and continue to participate in subsequent waves, compared to larger establishments. Other studies also conclude that larger establishments are less likely to participate in voluntary surveys (Janik and Kohaut 2012; König et al. 2021; Küfner et al. 2022). Thus, in the IAB-EP we hypothesize that larger establishments have a lower likelihood of panel participation compared to smaller establishments.

H1: Larger establishments are associated with a lower likelihood of panel participation compared to smaller establishments.

An interviewer change may also affect the likelihood of panel participation, particularly reinterview participation, as it abruptly disrupts the usual data collection procedure. In face-to-face panel surveys, such as the IAB-EP, establishments usually develop a familiarity and rapport with the same interviewer over time. Watson et al. (2018) point out that a continuous relationship between interviewer and respondent is important for study participation as it is likely to “enhance respondent engagement with the study” (Watson et al. 2018, p. 611). In this sense, an interviewer change is likely to have a negative effect on reinterview participation (Janik and Kohaut 2012).

H2: An interviewer change is associated with a lower likelihood of reinterview participation compared to no interviewer change.

Lastly, item nonresponse is thought to be a potential indicator of survey reluctance or a negative survey experience that is predictive of future wave nonresponse, at least in household surveys (Loosveldt et al. 2002; Lee et al. 2004; Hawkes and Plewis 2006; Sakshaug and Kreuter 2011). Item nonresponse occurs when a respondent is unable or refuses to answer one or more of the survey questions. In the establishment survey context, this may occur if answering the question requires much effort or burden in searching records or recalling information that is not easily remembered by the establishment representative. We hypothesize that item nonresponse signals that the establishment’s motivation to participate may be waning and that the costs of providing the survey information may be outweighing the benefits, which is likely to translate into a lower likelihood of future participation (Janik 2011).

H3: Item nonresponse in a prior wave is associated with a lower likelihood of future panel participation compared to no prior item nonresponse.

The above hypotheses were selected based on their importance in the establishment survey participation literature and because most of them, namely, interviewer change and item nonresponse, reflect the panel context. Other potential influences of panel participation, such as response duration, the quality and length of open-ended answers, and straightlining were considered but not pursued further, as the IAB-EP either lacks information on them or the question types of the IAB-EP do not permit such analyses. Nevertheless, these influences would be useful to consider in future research on establishment survey participation.

3 Data sources

3.1 IAB Establishment panel

Since 1993, the Institute for Employment Research (IAB) has run the voluntary IAB Establishment Panel (IAB-EP), which is carried out annually in the second half of the year. Each year, a sample of employers in Germany is interviewed on various topics, such as personnel recruitment, (further) training, and investments. Topics related to collective agreements and works councils are of special interest for policy research (Ellguth and Kohaut 2021).

The sampling frame of the IAB-EP includes all establishments in Germany with at least one employee who is liable to social security contributions on the 30th of June of the previous year. Establishment size, industry, and federal state serve as stratification variables. The yearly IAB-EP is composed of two samples: a refreshment sample of newly drawn establishments and the existing panel establishments. The former is drawn from the previous year’s administrative data with the 30th of June as the reference date. In the forthcoming analyses, we treat each yearly refreshment sample as a new cohort (e.g., all establishments that are newly recruited in 2005 belong to the 2005 cohort). Panel establishments are those that participated in at least one of the previous two waves (Fischer et al. 2008). Accordingly, temporary drop-outs among the panel cases are allowed as long as they were not nonrespondents in the survey for more than two consecutive waves. In total, we analyze the 17 annual cohorts of establishments that were recruited during the period 2001 to 2017, focusing on their participation behavior after the initial recruitment wave.
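The eligibility rule above can be expressed as a minimal sketch, assuming a hypothetical per-wave participation history coded as a list of response indicators (True = responded); the function name and coding are illustrative and not part of the IAB-EP data.

```python
def still_eligible(history: list) -> bool:
    """A panel case stays eligible as long as it responded in at least one of the
    two most recent waves, i.e. a temporary drop-out of a single wave is allowed."""
    if len(history) < 2:
        return True            # the two-wave rule cannot apply yet
    return any(history[-2:])   # dropped after two consecutive nonresponse waves

print(still_eligible([True, True, False]))         # True: missed only the last wave
print(still_eligible([True, True, False, False]))  # False: missed two waves in a row
```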

The IAB-EP is primarily a face-to-face survey, though, on average, about 15% of establishments were interviewed by mail each year between 2001 and 2015 (Ellguth et al. 2014), and additional modes, including online and telephone, were introduced after 2017. To rule out mode effects, we exclude the mail cases, do not consider the most recent mixed-mode years, and restrict the analysis to face-to-face interviews. We further exclude 127 establishments whose participation history was unknown for some years due to technical errors.

3.2 Administrative data and linkage to the IAB-EP

To assess nonresponse bias and predictors of panel participation in the IAB-EP, administrative data from the Establishment History Panel (BHP) of the Federal Employment Agency are used. These data are compiled from mandatory social security notifications submitted by employers and thus include every establishment in Germany with at least one marginal part-time employee (e.g. casual worker, seasonal worker, short-term employee) or one employee who is liable to social security contributions (Schmucker et al. 2018). The BHP records numerous attributes of establishments and their employees, such as the number of employees by sex, age, education level, and employment type. Detailed information about the BHP can be found in Schmucker et al. (2018). For our analyses, we assume that these register data, which stem from notifications that are required to be reported by the employer, are accurate. This assumption has also been applied in other studies using the BHP for substantive and methodological research (Wagner 2012; Henze 2014; Späth and Koch 2008).

To combine the two datasets, a 1-to-1 linkage was performed, merging every establishment in the IAB-EP in a given survey year to the administrative data of the previous year. A small proportion of establishments in the IAB-EP could not be linked to the BHP for various reasons.Footnote 1 This proportion increases over the years and peaks at 8% in 2017. To check whether there is a systematic difference between the linked and non-linked establishments, regressions of the linkage outcome were fitted on several establishment characteristics, which yielded no evidence of compositional imbalances.
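As an illustration of this linkage step, the sketch below performs such a 1-to-1 merge in Python, linking each survey year to the administrative data of the previous year; the data frames and column names (estab_id, survey_year, ref_year, employees) are hypothetical and not the actual FDZ variable names.

```python
import pandas as pd

# Hypothetical stand-ins for the IAB-EP (survey) and BHP (admin) files.
survey = pd.DataFrame({"estab_id": [1, 2, 3], "survey_year": [2017, 2017, 2017]})
admin = pd.DataFrame({"estab_id": [1, 2], "ref_year": [2016, 2016], "employees": [12, 250]})

# Link every survey record to the administrative record of the previous year.
survey["ref_year"] = survey["survey_year"] - 1
linked = survey.merge(admin, on=["estab_id", "ref_year"], how="left",
                      validate="one_to_one", indicator=True)

# Non-linked cases (e.g. after identifier changes in the BHP) can then be checked
# for systematic differences, as the article does via regressions on the linkage outcome.
print(linked["_merge"].value_counts())
```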

3.3 Overview of administrative and survey variables

Table 2 shows the variables (and number of categories) used in each of the analyses (the detailed categories are shown in Additional file 1: Table S1). To facilitate interpretation of results, these variables are classified into four variable groups: general characteristics, employee structure, employment structure, and survey variables. The first three variable groups come from the BHP administrative data, while the last group comes from the survey data and is only used for modeling predictors of panel participation. These variables were selected based on their similarity with topics covered in the IAB-EP questionnaire and overlap with variables used in previous methodological work on nonresponse bias (Brixy et al. 2007; Wagner 2012; Henze 2014; Sakshaug et al. 2018; König et al. 2021; Küfner et al. 2022).

Table 2 Overview of variables and number of categories used in the analyses

Additional file 1: Tables S2 and S3 describe all variables in more detail and provide descriptive statistics for all variables based on the slightly different sample sizes for the reinterview and cumulative perspectives. They also show the various control variables which are included in the regression models as sensitivity checks for the hypotheses regarding establishment size, item nonresponse, and interviewer change.

4 Methods

4.1 Outcome rate definitions

In our analyses of the IAB-EP, we define a response (or participation) as an establishment that completed a face-to-face interview. All other outcomes are treated as nonresponse. Additional file 1: Table S4 provides an overview of the different types of nonresponse (refusal, noncontact) and their mapping to the specific call-record outcomes. As stated previously, we consider two ways of defining response rates in panel surveys: reinterview response rates and cumulative response rates.

The reinterview response rate is defined as the proportion of respondents from the previous wave (t−1) who participated in the current wave (t), assuming that they were eligible to do so:

$$\text{Reinterview response rate}_{cohort,\,t}=\frac{\text{respondents}_{cohort,\,t}}{\text{respondents}_{cohort,\,t-1}}$$
(1)

By definition, temporary drop-outs of the previous year are not included in the reinterview response rate. These temporary drop-outs are also not considered again in later waves: if an establishment is a nonrespondent in one wave, it is not included in the reinterview analyses of later waves of the same cohort.

The reinterview refusal rate is similarly calculated as the proportion of respondents in the previous wave (t−1) who were refusals in the current wave (t):

$$\text{Reinterview refusal rate}_{cohort,\,t}=\frac{\text{refusals}_{cohort,\,t}}{\text{respondents}_{cohort,\,t-1}}$$
(2)

The reinterview noncontact rate is also similarly calculated as the proportion of respondents in the previous wave (t−1) who were noncontacts in the current wave (t):

$$\text{Reinterview noncontact rate}_{cohort,\,t}=\frac{\text{noncontacts}_{cohort,\,t}}{\text{respondents}_{cohort,\,t-1}}$$
(3)

These reinterview outcome rates are calculated for every wave of each cohort.
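As a compact illustration of formulas (1)–(3), the sketch below computes the reinterview outcome rates from a hypothetical long-format participation file with one row per establishment and wave; the data frame and its columns (estab_id, cohort, wave, outcome) are assumptions for illustration only.

```python
import pandas as pd

def reinterview_rates(panel: pd.DataFrame) -> pd.DataFrame:
    """Wave-to-wave outcome rates; the base is the respondents of the previous wave."""
    prev = panel.loc[panel["outcome"] == "response", ["estab_id", "cohort", "wave"]].copy()
    prev["wave"] += 1  # previous-wave respondents form the denominator of wave t
    merged = prev.merge(panel, on=["estab_id", "cohort", "wave"], how="left")
    merged["outcome"] = merged["outcome"].fillna("other")
    rates = (merged.groupby(["cohort", "wave"])["outcome"]
                   .value_counts(normalize=True)
                   .unstack(fill_value=0))
    return rates.reindex(columns=["response", "refusal", "noncontact"], fill_value=0)
```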

The cumulative response rate is defined as the proportion of sampled units from the starting wave (t0) who responded in the current wave (t):

$$\text{Cumulative response rate}_{cohort,\,t}=\frac{\text{respondents}_{cohort,\,t}}{\text{total sample units}_{cohort,\,t_{0}}}$$
(4)

In the denominator, nonrespondents from the first wave are also included. To show the temporal change in the cumulative response rate, we also compare the rates of the different cohorts for the same waves. In contrast to the reinterview response rate, temporary drop-outs that were not nonrespondents for more than two consecutive waves are considered in the cumulative response rate analyses, which results in slightly larger numbers of observations.
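Formula (4) can be sketched analogously, using the full initial sample of each cohort as the denominator; again, the column names are illustrative.

```python
import pandas as pd

def cumulative_response_rate(panel: pd.DataFrame) -> pd.Series:
    n_start = panel.loc[panel["wave"] == 1].groupby("cohort")["estab_id"].nunique()
    n_resp = (panel.loc[panel["outcome"] == "response"]
                   .groupby(["cohort", "wave"])["estab_id"].nunique())
    return n_resp.div(n_start, level="cohort")  # broadcast the cohort denominator over waves
```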

4.2 Estimation of nonresponse bias and representativeness

Nonresponse bias is calculated as the difference between the respondent-based estimate and the estimate based on the relevant reference sample (D’Aurizio and Papadia 2019) for the administrative variables in Table 2. For reinterview nonresponse bias, the reference sample comprises all respondents from the previous wave of a cohort, whereas for cumulative nonresponse bias the reference sample consists of all sampled units from the initial wave of a cohort. Thus, the change in the sample composition due to nonresponse is studied from both panel participation perspectives, similar to the response rate analysis.

Table 3 provides an overview of the different nonresponse bias formulas used. The first formulas (5–8) calculate the raw nonresponse bias as the difference between the respondent-based estimate and the reference sample estimate, whereas the next set of formulas (9–12) estimate the absolute relative nonresponse bias, which has been used in previous research to show the magnitude of nonresponse bias relative to the reference sample estimate (Groves 2006; Sakshaug and Huber 2016; Sakshaug et al. 2020; Mackeben and Sakshaug 2022). As an aggregate measure of relative nonresponse bias, we also compute the average absolute relative nonresponse bias (formulas 13–16) by averaging across all estimates of absolute relative nonresponse bias for a given variable group or overall.

Table 3 Nonresponse bias and R-indicator formulas for reinterview and cumulative participation
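As an illustration of the relative bias measures (formulas 9–16), the sketch below computes the absolute relative nonresponse bias per category and its average across categories; the category shares used in the example are made up.

```python
import numpy as np

def abs_rel_bias(resp_share: dict, ref_share: dict) -> dict:
    """Absolute relative bias (in %): |respondent share - reference share| / reference share."""
    return {k: abs(resp_share[k] - ref_share[k]) / ref_share[k] * 100 for k in ref_share}

def average_abs_rel_bias(resp_share: dict, ref_share: dict) -> float:
    return float(np.mean(list(abs_rel_bias(resp_share, ref_share).values())))

# Made-up example: respondents over-represent the smallest size class.
ref = {"1-9 employees": 0.60, "10-49 employees": 0.30, "50+ employees": 0.10}
obs = {"1-9 employees": 0.63, "10-49 employees": 0.29, "50+ employees": 0.08}
print(round(average_abs_rel_bias(obs, ref), 2))  # average absolute relative bias in %
```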

To investigate survey representativeness from another perspective, we estimate the R-indicator that not only aggregates univariate differences but also considers multivariate relations between the administrative variables based on a logistic regression model (Schouten et al. 2009). The R-indicator uses the estimated response propensities to assess survey representativeness within a range of 0 to 1, where 0 represents no representativeness and 1 full representativeness. Similar to the nonresponse bias measures, we estimate R-indicator values for every combination of cohort and wave for the reinterview and cumulative participation perspectives. The same administrative variables as in the corresponding bias analyses are used as predictors in the regression model.
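A minimal sketch of the R-indicator computation is given below, assuming unweighted data and a plain logistic response propensity model; the design-weighted estimation and variance corrections used in practice are omitted.

```python
import numpy as np
import statsmodels.api as sm

def r_indicator(X: np.ndarray, responded: np.ndarray) -> float:
    exog = sm.add_constant(X)                        # administrative covariates for all eligible units
    fit = sm.Logit(responded, exog).fit(disp=False)  # response propensity model
    rho = fit.predict(exog)                          # estimated response propensities
    return 1.0 - 2.0 * float(np.std(rho, ddof=1))    # 1 = fully representative, 0 = not at all
```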

4.3 Modeling predictors of panel participation

The last set of analyses evaluates predictors of panel participation for both the reinterview and cumulative participation perspectives. Here, we restrict the analysis to establishments that at least responded to the initial recruitment wave (wave 1) of their respective cohort and were also eligible to participate in the subsequent wave (wave 2). This prevents establishments that never participated in the panel (which are the majority of the initial sample) from skewing the temporal participation results.

Wave-to-wave reinterview participation is modeled as a dichotomous variable (1 = response; 0 = nonresponse) using covariate information from respondents in the previous wave to predict participation in the current wave. A fixed-effects model is fitted separately for each of the 16 cohorts (cohort 2001 to cohort 2016) from the second wave onward with the wave number included as a control variable to account for wave-specific differences. This results in 16 regressions. We report the average marginal effects (AME) in the results section.

Cumulative wave participation is modeled over a longer period to identify determinants of participation at two time points: after wave 4 and after wave 8, both conditional on wave 1 participation. Accordingly, two dichotomous variables [1 = response after wave 4 (or wave 8); 0 = did not respond after wave 4 (or wave 8)] are generated, one for each of the cut-offs. Wave 4 and wave 8 were chosen as cut-offs because the cumulative response rates showed slightly larger decreases at these time points, but we acknowledge that other cut-offs could have been used. Cumulative participation after wave 4 is examined by using one fixed-effects regression model for cohorts 2001–2013. Cumulative participation after wave 8 is analyzed in the same way but for fewer cohorts, over the period 2001–2009, to account for the fact that newer cohorts did not have the opportunity to participate in so many waves. For cumulative participation after wave 8 we also performed a sensitivity analysis by excluding all establishments from the regression that did not participate after wave 4. This is to prevent establishments that did not participate after wave 4 (which comprise the majority of dropouts) from skewing the coefficient estimates of participation after wave 8. All three regression models use covariate information only collected at wave 1 and include cohort number as a control variable.

All data analyses were performed using the survey (svy) commands in Stata/MP 15.1 (StataCorp 2017) and are weighted to account for differential probabilities of selection.
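The models themselves were estimated with Stata's svy commands; as a rough, non-Stata illustration of the general approach, the sketch below fits a weighted logistic regression in Python (frequency weights serve here as a simple stand-in for the design-based estimator) and derives the average marginal effect of a dummy predictor such as interviewer change. All data frame and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def weighted_logit_ame(df: pd.DataFrame, dummy: str, covars: list, weight: str):
    exog = sm.add_constant(df[[dummy] + covars].astype(float))
    fit = sm.GLM(df["response"], exog, family=sm.families.Binomial(),
                 freq_weights=df[weight]).fit()
    # AME of the dummy: average change in the predicted response probability
    # when the dummy is switched from 0 to 1 for every establishment.
    x1, x0 = exog.copy(), exog.copy()
    x1[dummy], x0[dummy] = 1.0, 0.0
    ame = np.average(fit.predict(x1) - fit.predict(x0), weights=df[weight])
    return fit, ame
```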

5 Results

5.1 Outcome rates

5.1.1 Reinterview response rates

First, we examine temporal trends in wave-to-wave reinterview outcome rates for the 16 IAB-EP cohorts from 2002 to 2017. Figure 1 presents the reinterview response rates, refusal rates, and noncontact rates for each cohort. All cohorts exhibit a similar increasing trend in the beginning waves, where the reinterview response rates increase from a low of around 70% in wave 2 to above 80% from wave 5 onward. In the later waves, the reinterview rates remain mostly stable and fluctuate by about 3%-points. Conversely, the reinterview refusal rates are highest in the early waves and decrease over time for nearly all cohorts, declining from a mean of 21.64% in wave 2 to 10.96% in the last wave of each cohort. The reinterview noncontact rate is rather small for all cohorts and fluctuates with a slightly decreasing trend over time, with a mean of 5.53% in wave 2 and 2.22% in the last wave. These results suggest that refusals are the dominant source of reinterview nonresponse.

Fig. 1 Reinterview outcome rates by cohort and wave

5.1.2 Cumulative response rates

Next, we present trends in cumulative response rates for all cohorts from wave 2 onward; response rates of initial waves are presented elsewhere (König et al. 2021). Figure 2 clearly shows that every cohort is affected by cumulative nonresponse, with steeper declines in participation in the earlier waves and a flatter decline in the later waves. For example, across all cohorts the response rate declines by an average of 5.01%-points per year between waves 1 and 3 and by 1.76%-points per year between waves 4 and 6. Such temporal trends are not surprising given the impacts of attrition, particularly in the early stages of a panel.

Fig. 2 Cumulative response rates by cohort and wave

The more interesting result arises when looking at the cumulative response rates of the same waves across different cohorts (Fig. 3), as the slope of the decrease varies by wave and cohort. For waves 2 to 6 there is a sharp decreasing trend, mainly for the earlier cohorts introduced before 2007. In particular, the wave 2 cumulative response rate for cohort 2001 (38.05%) is nearly 15%-points higher than that of cohort 2010 (23.39%). The temporal trend tends to flatten over time and after wave 11 the response rate remains mostly stable. These results suggest that the early cohorts were more willing to participate again in the panel initially but their motivation waned soon after, whereas the later cohorts were more difficult to recruit, such that only the most motivated establishments joined the panel and their motivation stayed high beyond the initial waves.

Fig. 3 Cumulative response rates by wave and cohort

5.2 Nonresponse bias and R-indicator

5.2.1 Reinterview nonresponse bias and R-indicator

Here we present trends in reinterview nonresponse, refusal, and noncontact biases. For brevity, we only discuss average absolute relative nonresponse biases across all administrative variables and mention biases for specific variables only if they are particularly striking. All raw nonresponse biases and absolute relative nonresponse biases for each variable of each cohort-wave combination can be found in Additional file 1: Tables S5–S7. For biases aggregated to the level of the three substantive variable groups (general characteristics, employee structure, and employment structure), the reader is also referred there.

Overall, the average absolute relative reinterview nonresponse bias shows no consistent trend for the waves within each cohort (Fig. 4). For most cohorts, the average nonresponse bias remains relatively stable and fluctuates by only 2- or 3%-points over the waves. There tend to be larger fluctuations in the earlier cohorts (e.g. cohort 2002) compared to the later cohorts. However, none of the average absolute relative reinterview biases exceed 7%, which we consider to be a reassuring finding.

Fig. 4 Average absolute relative reinterview biases by cohort (2001–2015) and wave

R-indicator values for cohorts 2001 to 2015 are presented by wave in Additional file 1: Figure S1. The R-indicator starts at around 0.8 in the early waves of every cohort. For the oldest cohorts, 2001 to 2005, there is a slightly decreasing trend over time, which means that representativeness worsens. In particular, cohorts 2001 and 2002 exhibit a decrease in the R-indicator of around 0.2 units after wave 10. Younger cohorts introduced since 2006 show largely stable R-indicator values of around 0.8. Such R-indicator values correspond with those reported in the survey literature (Cornesse and Bosnjak 2018).

Focusing now on reinterview nonresponse bias for single variable categories (see Additional file 1: Tables S5–S7), we find no systematic trend for any category, and there are only a few values that exceed 10% relative bias, such as establishment size (50 + employees) in wave 8 of cohort 2001 (11.54%) and year of foundation (1970/1980s) in wave 5 of cohort 2008 (16.98%). Again, it is reassuring that the vast majority of single variables are not substantially affected by reinterview nonresponse bias and the bias trends are relatively stable over time. A very similar picture emerges for average absolute relative refusal bias and noncontact bias, as no systematic trend can be seen for either. Average noncontact biases are generally much lower than refusal biases and in most cases below 2%.

5.2.2 Cumulative nonresponse bias and R-indicator

Although the wave-to-wave reinterview analysis revealed that average absolute relative nonresponse biases are small (mostly below 5%) and do not follow a systematic trend, analyzing cumulative nonresponse bias may reveal a different picture as attrition accumulates at each subsequent wave. Figure 5 clearly shows that the average absolute relative nonresponse bias increases steadily over the waves for each cohort (the tabular results for each cohort and wave can be found in Additional file 1: Tables S8 and S9). For example, cohort 2001 has an average absolute relative cumulative nonresponse bias of 5.55% in wave 1, which jumps to 14.26% in wave 2 and 40.81% in wave 17. Cohort 2004 also starts off with a modest average relative nonresponse bias of 3.38% in wave 1, which increases strongly to 15.65% in wave 2 and 50.12% in wave 14.

Fig. 5 Average absolute relative cumulative nonresponse bias by cohort (2001–2015) and wave

Additional file 1: Figure S2 shows the R-indicator values for the cumulative participation perspective. All cohorts start out with values around 0.8 in wave 2. The trends for cohorts 2001 to 2005 are consistent with the results of the cumulative nonresponse bias presented earlier, as the R-indicator, and thus the representativeness, decreases over the waves. For cohorts 2006 onward, the R-indicator values are mostly stable.

Looking at the single variable categories (see Additional file 1: Table S8), many estimates show a large increase in absolute relative nonresponse bias; for example, the industry category public/educ/health/arts in the 2003 cohort starts at 10.77% in wave 1 and increases to 103.56% in wave 15. Compared to the other variable categories, only establishment location (West Germany), establishment size (1–9 employees), and the % of German employees (100%) have relatively small biases (smaller than 10%) in most cohorts over time. Overall, the composition of respondents within a cohort changes over time with respect to the administrative variables considered; however, the variables with the largest biases are not always the same across cohorts. For example, in some cohorts the average age of employees (category > 45–88) has the largest bias, while in others the largest bias occurs for % of high-qualified employees (> 8–20%).

To investigate whether the cohorts differ in their average absolute relative cumulative nonresponse bias over time, Fig. 6 compares the same waves of different cohorts. As previously indicated, the cumulative nonresponse bias is generally larger in later waves. For all waves, cohort 2003 is striking as it yields a smaller bias than most of the other cohorts. Looking at waves 5 and 6 we can see that cohorts that entered the panel later (after 2007) have slightly smaller average biases than earlier cohorts. For example, focusing on wave 6, the average bias decreases across the cohorts from 28.02% in cohort 2001 to 20.50% in cohort 2012.

Fig. 6 Average absolute relative cumulative nonresponse bias per wave and cohort

Comparing the absolute relative cumulative nonresponse bias of the single variable categories within the same wave, no consistent patterns emerge (see Additional file 1: Table S9). For example, the cumulative nonresponse bias of establishment size (50 + employees) in wave 2 has smaller values for earlier than for later cohorts, which means that larger establishments are less well represented in later cohorts, but this trend does not apply to all waves. Looking at another category, the % of marginal employees (> 25–100%), we see a decreasing bias trend across cohorts in wave 5, meaning that later cohorts are less biased with regard to this specific variable than earlier cohorts. Most of the other variables fluctuate without a clear pattern across cohorts within a given wave.

5.3 Predictors of panel participation

5.3.1 Reinterview participation

Next, we examine the three hypothesized predictors (establishment size, interviewer change, item nonresponse) of reinterview participation by reporting the results of the fixed-effects logistic regression models that were fitted separately for cohorts 2001 to 2016. Additional file 1: Table S10.1 shows the corresponding average marginal effects (AMEs).

The AMEs for establishment size categories 10–49 and 50 + employees are smaller (between −0.06 and 0.05) than those of the other two variables and statistically significant (p < 0.1) in less than half of the cohorts. The findings indicate that establishments with 10–49 employees are more likely to participate, and those with 50 + employees are less likely to participate, in the next panel wave than establishments with 1–9 employees. H1 asserted that the likelihood of reinterview participation is lower for larger establishments. The results only partially support this hypothesis, as the AMEs for establishment size are very small and rarely statistically significant across the cohorts.

The largest negative AME on participation across all cohorts is due to interviewer change, meaning that establishments who experienced an interviewer change since the prior wave were less likely to participate in the next wave. This effect is statistically significant at the 0.01 level in all cohorts; thus, H2 is supported. This finding is particularly important as it is the only hypothesized variable under the control of the survey organization. On average, across all reinterview cohorts, we observe a reinterview rate that is 13.1 percentage points lower for establishments that experience an interviewer change compared to establishments recruited by the same interviewer. With large sample sizes, this can mean the difference between achieving, for example, 5000 interviews with no interviewer change or achieving only 4345 interviews with an interviewer change. Thus, survey organizations must weigh the potentially higher costs of deploying the same interviewer against the possible reduction in the reinterview rate due to changing the interviewer. Depending on the cohort, the difference is sometimes larger or smaller and we find no significant temporal trend of this effect.

Item nonresponse in the prior wave also reduces the likelihood of participating in the next wave, which is statistically significant (p < 0.1) in 14 of 16 cohorts. Thus, H3 is supported.

To investigate whether the effects of the hypothesized variables differ between the waves of a cohort, we included interaction terms with wave for all three covariates (Additional file 1: Table S10.2). Overall, the interactions are rarely statistically significant. For interviewer change, the interaction is statistically significant in 6 of the 16 cohorts, which indicates that the negative effect on reinterview participation differs slightly across waves, but the direction of the effect always remains the same.

To assess the robustness of the hypothesis results, we ran a series of sensitivity analyses. As a first sensitivity check, we ran the regression models without design weights. The coefficients vary slightly in size but their direction stays the same (Additional file 1: Table S10.3). The negative effect of establishments with 50 + employees on reinterview participation now becomes statistically significant (p < 0.05) in 14 of 16 cohorts (compared to 4 previously). This result makes sense given that these larger establishments were oversampled and the design weights therefore moderate their effect on participation. Univariate regressions for each hypothesized variable and multivariate regressions with many additional control variables (both survey and administrative) were analyzed as second and third sensitivity checks (Additional file 1: Tables S10.4 and S10.5, respectively). The coefficients for the hypothesized variables vary slightly in size and significance level depending on the cohort, but overall the results are generally consistent with the main analysis.

5.3.2 Long-Term (cumulative) participation

The following analyses examine two hypothesized predictors (establishment size and item nonresponse) of long-term panel participation, defined in two ways: (1) still participating after wave 4 (for cohorts 2001–2013); or (2) still participating after wave 8 (for cohorts 2001–2009). As a reminder, participation after wave 8 is modeled twice using different samples: one that includes establishments that had stopped participating by wave 4 and, as a sensitivity check, one that excludes these cases. Additional file 1: Table S11.1 shows the AMEs of all three regressions.

The likelihood of participating in the panel after wave 4 is higher for establishments with 5–49 employees and lower for establishments with 100–999 employees compared to the smallest establishments (i.e. 1–4 employees); thus, H1 is supported. Participating after wave 4 is also less likely if the establishment did not answer all survey questions in the first wave (p < 0.01), which supports H3. Similar conclusions can be drawn when modeling participation after wave 8 with all establishments that dropped out by wave 4 kept in the sample; only the coefficient for establishments with 100–199 employees is no longer statistically significant. When the early nonrespondents are excluded from the analysis, the results remain similar, but only the category of establishments with 10–19 employees, which is positively related to participation after wave 8, is statistically significant (p < 0.05).

As before, we ran several sensitivity analyses for cumulative participation, including ignoring the design weights, running univariate regressions, and including several control variables. All of these analyses yielded results consistent with the previous findings (Additional file 1: Tables S11.2—S11.4). As another sensitivity check, we used the smaller set of categories for establishment size as in the models for reinterview participation (see Table 2) and these results were also consistent with the previous findings (Additional file 1: Table S11.5). As a final sensitivity check, we added two interaction terms of establishment size and item nonresponse with cohort to investigate whether their respective effects differ between the cohorts (Additional file 1: Table S11.6). The interactions are not significant in the regressions modeling participation after wave 4 and wave 8 with the full sample, but after excluding the early nonrespondents there is a statistically significant (p < 0.1) positive effect of establishment size on participation after wave 8. However, the main conclusions from the previous analyses do not change.

Table 4 summarizes the results of the hypotheses for both the reinterview and cumulative participation analyses.

Table 4 Results of the panel participation hypotheses

6 Discussion

This article examined panel participation trends in a long-running voluntary face-to-face panel survey of establishments, focusing on the annual cohorts from 2001 to 2017. We examined changes in wave-to-wave reinterview response rates as well as cumulative response rates over several waves and used extensive administrative data to analyze trends in panel nonresponse bias and determinants of short-term and long-term panel participation.

Regarding wave-to-wave reinterview participation, there were three main findings. First, reinterview rates increased over the first five waves of each cohort and remained mostly stable thereafter. Second, average relative reinterview nonresponse biases fluctuated over time but were generally rather small (below 7%) and the R-indicator value was around 0.8 in most waves of the cohorts, which corresponds with other research findings (Shlomo et al. 2013; Cornesse and Bosnjak 2018). Third, the strongest predictors of reinterview participation were interviewer change and item nonresponse. That is, changing the usual interviewer between two subsequent waves or not answering all survey questions in the previous wave had a negative effect on reinterview participation, as was hypothesized. The interviewer change finding is consistent with previous studies (Lemay and Durand 2002; Behr et al. 2005; Janik and Kohaut 2012).

Examining cumulative panel participation since wave 1 also yielded three main findings. First, we showed that older cohorts (before 2007) have had steeper drops in cumulative response rates during the first six waves of the panel compared to the same waves of younger cohorts, suggesting that the younger cohorts have been more difficult to recruit into the panel but easier to retain over time. Second, the average cumulative nonresponse bias clearly increased over time for all cohorts, though the pattern was mostly consistent across the corresponding waves. Similarly, the R-indicator values started at around 0.8 in wave 2 and showed a decreasing trend only for the older cohorts (2001 to 2005). Third, the regression results revealed that item nonresponse was negatively related to long-term panel participation (after wave 4 or wave 8), which aligns with the survey literature (Loosveldt et al. 2002; Janik 2011). Furthermore, larger establishments with more than 100 employees were less likely to remain in the panel long-term, which is also consistent with previous research (Earp et al. 2018; Janik and Kohaut 2012; König et al. 2021; Küfner et al. 2022). All of these results were in line with our stated hypotheses.

The strengths of the present study include the long observation period, the analysis of multiple panel participation perspectives, and the utilization of extensive administrative and survey data for estimating nonresponse bias and testing hypotheses and determinants of panel participation. Nevertheless, it would have been beneficial for the regression analyses to include more factors related to the establishments’ decision-making processes described in the participation model from Willimack and Snijkers (2013). For example, the 2001–2017 IAB-EP includes no information about the person(s) who completed the interview(s), such as their demographics or position in the establishment. Nor do we know how many people in the establishment were involved in the panel participation decision or process. These aspects may help explain the temporal participation patterns we observed. Another topic for future research is evaluating the accuracy of establishment-level administrative data for the purpose of nonresponse bias analyses.

7 Conclusion

The study findings point to several practical implications for establishment panel survey research. First, motivating establishments to join and participate long-term in a voluntary panel has become more challenging over time. Special efforts are needed to recruit establishments and keep motivation levels high, particularly in the early waves where the majority of attrition occurs. Second, changing the interviewer who previously interviewed the establishment should be avoided to the extent possible, as this appears to have a detrimental effect on reinterview participation. If unavoidable, we recommend that survey organizations notify establishments beforehand that an interviewer change will take place, preferably conveyed by the former, familiar interviewer. And thirdly, our research points to opportunities for implementing tailored recruitment and engagement strategies to target establishments whose motivation to continue participating in the panel may be waning. For example, large establishments could be targeted for more intensive long-term recruitment efforts. Even in the short term, information from the previous interview, such as the presence of item nonresponse, may be used to identify establishments at risk of dropping out of the panel. Personal contact letters or telephone calls from the survey sponsor to the establishment’s survey representative or their superior, thanking them for their previous participation and emphasizing the importance of their continued participation, may be considered. Personal communications with the establishment could also be used to explore possible incentive structures that would help keep them engaged in the panel.

Availability of data and materials

Data from the IAB Establishment Panel are available via the Research Data Centre (FDZ) of the Federal Employment Agency at the Institute for Employment Research. Due to data protection regulations, the administrative records of the Establishment History Panel are only available within the Institute for Employment Research. More information about data access is available through the Research Data Centre (FDZ) of the Institute for Employment Research: https://fdz.iab.de/.

Notes

  1. Non-links can occur when the unique identifier of an establishment has changed in the BHP, for example due to an owner change, an address change, or a business split or merger.

References

  • American Association for Public Opinion Research (AAPOR). Standard definitions: final dispositions of case codes and outcome rates for surveys. https://aapor.org/wp-content/uploads/2023/05/Standards-Definitions-10th-edition.pdf (2023). Accessed 12 Jun 2023

  • Backhaus, N., Bellmann, L., Gleiser, P., Hensgen, S., Kagerl, C., Koch, T., König, C., Kleifgen, E., Leber, U., Moritz, M., Pohlan, L., Robelski, S., Roth, D., Schierholz, M., Sommer, S., Stegmaier, J., Tisch, A., Umkehrer, M., Aminian, A.: Panel ‘Establishments in the Covid-19 Crisis’-20/21: a longitudinal study in German establishments—waves 1–14. FDZ-Datenreport: Documentation of labour market data, 13/2021 (2021). https://doi.org/10.5164/IAB.FDZD.2113.en.v1

  • Banca d’Italia: Survey of industrial and service firms: main results. ‘Statistics’ series. https://www.bancaditalia.it/pubblicazioni/indagine-imprese/2021-indagini-imprese/en_statistiche_IIS_01072022.pdf?language_id=1 (2022). Accessed 12 Jun 2023

  • Bavdaž, M.: The Response Process in Recurring Business Surveys. In: Proceedings of the Third European Conference on Quality and Methodology in Official Statistics, Cardiff, UK (2006)

  • Bechmann, S., Tschersich, N., Ellguth, P., Kohaut, S., Florian, C.: Technical Report on the IAB Establishment Panel—Wave 28 (2020). FDZ-Methodenreport: Methodological aspects of labour market data, 07/2021 (2021)

  • Behr, A., Bellgardt, E., Rendtel, U.: Extent and determinants of panel attrition in the European Community Household Panel. Eur. Sociol. Rev. 21(5), 489–512 (2005)

  • Bergman, L.R., Brage, R.: Survey experiences and later survey attitudes, intentions and behaviour. J. off. Stat. 24(1), 99–113 (2008)

  • Bergmann, M., Scherpenzeel, A.: Can a responsive fieldwork design increase response rates and decrease response bias in the Survey of Health, Ageing and Retirement in Europe?. SHARE: Working Paper Series 27–2016 (2016)

  • Bottone, M., Modugno, L., Neri, A.: Response burden and data quality in business surveys. J. off. Stat. 37(4), 811–836 (2021). https://doi.org/10.2478/jos-2021-0036

  • Brixy, U., Kohaut, S., Schnabel, C.: Do Newly founded firms pay lower wages? First evidence from Germany. Small Bus. Econ. 29, 161–171 (2007). https://doi.org/10.1007/s11187-006-0015-x

  • Bureau of Labor Statistics, U.S.: Current employment statistics—CES (National): technical notes for the current employment statistics survey. https://www.bls.gov/web/empsit/cestn.htm (2023a). Accessed 12 Jun 2023

  • Bureau of Labor Statistics, U.S.: Office of survey methods research: household and establishment survey response rate calculations. https://www.bls.gov/osmr/response-rates/response-rates-calculation.htm (2023b). Accessed 12 Jun 2023

  • Bureau of Labor Statistics, U.S.: Office of survey methods research: establishment surveys unit response rates, April 2013–April 2023. https://www.bls.gov/osmr/response-rates/establishment-survey-response-rates.htm (2023c). Accessed 12 Jun 2023

  • Cheshire, H., Ofstedal, M.B., Scholes, S., Schroeder, M.: A comparison of response rates in the English longitudinal study of ageing and the health and retirement study. Longit. Life Course Stud. 2(2), 127–144 (2011). https://doi.org/10.14301/llcs.v2i2.118

  • Cornesse, C., Bosnjak, M.: Is there an association between survey characteristics and representativeness? A meta-analysis. Surv. Res. Methods 12(1), 1–13 (2018). https://doi.org/10.18148/srm/2018.v12i1.7205

  • D’Aurizio, L., Papadia, G.: Using administrative data to evaluate sampling bias in a business panel survey. J. off. Stat. 35(1), 67–92 (2019). https://doi.org/10.2478/jos-2019-0004

  • Davis, W. R., Pihama, N.: Survey response as organisational behaviour: an analysis of the annual enterprise survey, 2003–2007. Paper presented at New Zealand Association of Economists Conference, Wellington (2009).

  • De Heer, W., de Leeuw, E.: Trends in household survey nonresponse: a longitudinal and international comparison. In: Groves, R.M., Dillman, D.A., Eltinge, J.L., Little, R.J.A. (eds.) Survey nonresponse, pp. 41–54. Wiley, New York (2002)

  • Earp, M., Toth, D., Phipps, P., Oslund, C.: Assessing nonresponse in a longitudinal establishment survey using regression trees. J. off. Stat. 34(2), 463–481 (2018). https://doi.org/10.2478/jos-2018-0021

  • Ellguth, P., Kohaut, S.: Tarifbindung und betriebliche Interessenvertretung: Ergebnisse aus dem IAB-Betriebspanel 2020. WSI-Mitteilungen 74(4), 306–314 (2021). https://doi.org/10.5771/0342-300X-2021-4-306

  • Ellguth, P., Kohaut, S., Möller, I.: The IAB establishment panel—methodological essentials and data quality. J. Labour Market Res. 47, 27–41 (2014). https://doi.org/10.1007/s12651-013-0151-0

  • Fischer, G., Janik, F., Müller, D., Schmucker, A.: The IAB establishment panel—from sample to survey to projection. FDZ Methodenreport 01/2008. https://core.ac.uk/reader/6561380 (2008). Accessed 12 Jun 2023

  • Fisher, S., Bosley, J., Goldenberg, K., Mockovak, W., Tucker, C.: A qualitative study of nonresponse factors affecting BLS establishment surveys: results. In: Proceedings of the Survey Research Methods Section, American Statistical Association, pp. 679–684 (2003)

  • Gough, J., Hopson, E., McLaren, C., Thorsteinsson, K.: The United Kingdom experience: the business impact of coronavirus survey. Office for National Statistics. https://www.imf.org/-/media/Files/Conferences/2020/8th-stats-forum/paper-craig-mclaren.ashx (2020). Accessed 12 Jun 2023

  • Groves, R.M.: Nonresponse rates and nonresponse bias in household surveys. Public Opin. q. 70(5), 646–675 (2006). https://doi.org/10.1093/poq/nfl033

  • Groves, R.M., Couper, M.P.: Nonresponse in household interview surveys. John Wiley & Sons, Hoboken (2012)

  • Groves, R.M., Fowler, F.J., Couper, M.P., Lepkowski, J.M., Singer, E., Tourangeau, R.: Survey methodology. John Wiley & Sons, Hoboken (2009)

  • Hawkes, D., Plewis, I.: Modelling non-response in the national child development study. J. r. Stat. Soc. a. Stat. Soc. 169(3), 479–491 (2006)

  • Henze, P.: Structural change and wage inequality: evidence from German micro data. Center for European, Governance and Economic Development Research Working Paper 204 (2014). https://doi.org/10.2139/ssrn.2422471

  • Janik, F., Kohaut, S.: Why don’t they answer? Unit non-response in the IAB establishment panel. Qual. Quant. 46, 917–934 (2012). https://doi.org/10.1007/s11135-011-9436-y

  • Janik, F.: Unit non-response in establishments surveyed for the first time in the IAB establishment panel. FDZ Methodenreport 04/2011. https://doku.iab.de/fdz/reporte/2011/MR_04-11_EN.pdf (2011). Accessed 12 Jun 2023

  • König, C., Sakshaug, J.W., Stegmaier, J., Kohaut, S.: Trends in establishment survey nonresponse rates and nonresponse bias: evidence from the 2001–2017 IAB Establishment Panel. J. off. Stat. 37(4), 931–953 (2021). https://doi.org/10.2478/JOS-2021-0040

  • Kratzke, D. T.: Nonresponse bias analysis of average weekly earnings in the current employment statistics survey. In: Proceedings of the Survey Research Methods, Joint Statistical Meetings (2013)

  • Kroh, M., Kühne, S., Siegers, R., Belcheva, V.: SOEP-Core-Documentation of sample sizes and panel attrition (1984 until 2016). No. 480. SOEP Survey Papers (2018)

  • Küfner, B., Sakshaug, J.W., Zins, S.: Analysing establishment survey non-response using administrative data and machine learning. J. r. Stat. Soc. Ser. A 185(Suppl_2), S310–S342 (2022). https://doi.org/10.1111/rssa.12942

  • Lee, E., Hu, M.Y., Toh, R.S.: Respondent non-cooperation in surveys and diaries: an analysis of item non-response and panel attrition. Int. J. Mark. Res. 46(3), 311–326 (2004)

  • Lemay, M., Durand, C.: The effect of interviewer attitude on survey cooperation. Bull. Méthodol. Sociol. 76(1), 27–44 (2002)

  • Lemay, M.: Understanding the mechanism of panel attrition. Unpublished Doctoral thesis, Doctor of Philosophy, University of Maryland, College Park (2009)

  • Lepkowski, J.M., Couper, M.P.: Nonresponse in the second wave of longitudinal household surveys. In: Groves, R.M., Dillman, D.A., Eltinge, J.L., Little, R.J.A. (eds.) Survey nonresponse, pp. 259–272. Wiley, New York (2002)

  • Loosveldt, G., Pickery, J., Billiet, J.: Item nonresponse as a predictor of unit nonresponse in a panel survey. J. off. Stat. 18(4), 545–557 (2002)

  • Mackeben, J., Sakshaug, J.W.: Introducing Web in a telephone employee survey: effects on nonresponse and costs. J. Surv. Stat. Methodol. (2022). https://doi.org/10.1093/jssam/smac002

  • McCarthy, J., Wagner, J., Sanders, H.L.: The impact of targeted data collection on nonresponse bias in an establishment survey: a simulation study of adaptive survey design. J. off. Stat. 33(3), 857–871 (2017)

  • Paxson, M.C., Dillman, D.A., Tarnai, J.: Improving response to business mail surveys. In: Cox, B.G., Binder, D.A., Chinnappa, B.N., Christianson, A., Colledge, M.J., Kott, P.S. (eds.) Business survey methods, pp. 303–316. Wiley, New York (1995)

  • Petroni, R., Sigman, R., Willimack, D. K., Cohen, S., Tucker, C.: Response rates and nonresponse in establishment surveys–BLS and Census Bureau. Federal Economic Statistics Advisory Committee, 1–50 (2004).

  • Phipps, P., Jones, C.: Factors affecting response to the occupational employment statistics survey. In: Proceedings of the 2007 Federal Committee on Statistical Methodology Research Conference. https://www.bls.gov/osmr/research-papers/2007/st070170.htm (2007). Accessed 12 Jun 2023

  • Phipps, P., Toth, D.: Analyzing establishment nonresponse using an interpretable regression tree model with linked administrative data. Annal. Appl. Stat. 6(2), 772–794 (2012). https://doi.org/10.1214/11-AOAS521

  • Pielsticker, D.I., Hiebl, M.R.: Survey response rates in family business research. Eur. Manag. Rev. 17(1), 327–346 (2020). https://doi.org/10.1111/emre.12375

  • Sakshaug, J.W., Huber, M.: An evaluation of panel nonresponse and linkage consent bias in a survey of employees in Germany. J. Surv. Stat. Methodol. 4(1), 71–93 (2016). https://doi.org/10.1093/jssam/smv034

  • Sakshaug, J.W., Kreuter, F.: Using paradata and other auxiliary data to examine mode switch nonresponse in a “recruit-and-switch” telephone survey. J. off. Stat. 27(2), 339–357 (2011)

  • Sakshaug, J.W., Vicari, B., Couper, M.P.: Paper, E-mail, or both? Effects of contact mode on participation in a web survey of establishments. Soc. Sci. Comput. Rev. 37(6), 750–765 (2018). https://doi.org/10.1177/0894439318805160

  • Sakshaug, J.W., Hülle, S., Schmucker, S.: Panel survey recruitment with or without interviewers? Implications for nonresponse bias, panel consent bias, and total recruitment bias. J. Surv. Stat. Methodol. 8(3), 540–565 (2020). https://doi.org/10.1093/jssam/smz012

  • Schmucker, A., Ganzer, A., Stegmaier, J., Wolter, S.: Establishment history panel 1975–2017. FDZ-Datenreport, 09/2018 (2018). https://doi.org/10.5164/IAB.FDZD.1809.en.v1

  • Schoeni, R.F., Stafford, F., McGonagle, K.A., Andreski, P.: Response rates in national panel surveys. Ann. Am. Acad. Polit. Soc. Sci. 645(1), 60–87 (2013). https://doi.org/10.1177/0002716212456363

  • Schouten, B., Cobben, F., Bethlehem, J.: Indicators for the representativeness of survey response. Surv. Methodol. 35(1), 101–113 (2009)

  • Shlomo, N., Schouten, B., De Heij, V.: Designing adaptive designs with R-indicators. Paper presented at the NTTS Conference, Brussels. http://hummedia.manchester.ac.uk/institutes/cmist/risq/shlomo-schouten-heij-2013.pdf (2013). Accessed 12 Jun 2023

  • Smaill, K.: Trajectory modelling of longitudinal non-response in business surveys. Stat. J. IAOS 28(3–4), 137–144 (2012)

  • Späth, J., Koch, A.: Does the quality of employment differ between new and incumbent firms? First results for Germany based on the establishment history panel. Paper presented at the 3rd User Conference on the Analysis of BA and IAB Data, Nuremberg, Germany. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=10e0c719fabf13e411c4f8f963c0d5ca41753fef (2008). Accessed 9 Jun 2023

  • StataCorp: Stata statistical software: release 15. College Station, TX: StataCorp LLC (2017)

  • Statistics Canada: Guide to the survey of employment, payrolls and hours. https://www150.statcan.gc.ca/n1/en/pub/72-203-g/72-203-g2022001-eng.pdf?st=SnCllodO (2022). Accessed 12 Jun 2023

  • Sudman, S., Willimack, D.K., Nichols, E., Mesenbourg, T.L.: Exploratory research at the US Census Bureau on the survey response process in large companies. In: Proceedings of the Second International Conference on Establishment Surveys, pp. 327–337. American Statistical Association, Buffalo (2000)

  • Thompson, K.J., Oliver, B.E.: Response rates in business surveys: going beyond the usual performance measure. J. off. Stat. 28(2), 221–237 (2012)

  • Thompson, K.J., Washington, K.T.: Challenges in the treatment of unit nonresponse for selected business surveys: a case study. Surv. Methods Insights Field (2013). https://doi.org/10.13094/SMIF-2013-00011

  • Tomaskovic-Devey, D., Leiter, J., Thompson, S.: Item nonresponse in organizational surveys. Sociol. Methodol. 25, 77–110 (1995). https://doi.org/10.2307/271062

  • Wagner, J.: Average wage, qualification of the workforce and export performance in German enterprises: evidence from KombiFiD data. J. Labour Market Res. 45, 161–170 (2012). https://doi.org/10.1007/s12651-012-0106-x

  • Watson, N., Leissou, E., Guyer, H., Wooden, M.: Best practices for panel maintenance and retention. In: Johnson, T.P., Pennell, B., Stoop, I.A.L., Dorer, B. (eds.) Advances in comparative survey methods: multinational, multiregional, and multicultural contexts (3MC), pp. 597–622. John Wiley & Sons, Hoboken, NJ (2018)

  • Wetzel, A.: Assessing the effect of different instrument modes on reinterview results from the Consumer Expenditure Quarterly Interview Survey. In: Proceedings of the Survey Research Methods, Joint Statistical Meetings (2003)

  • Willimack, D.K., Nichols, E.: A hybrid response model for business surveys. J. off. Stat. 26(1), 3–24 (2010)

  • Willimack, D.K., Snijkers, G.: The business context and its implications for the survey response process. In: Snijkers, G., Haraldsen, G., Jones, J., Willimack, D.K. (eds.) Designing and conducting business surveys, pp. 39–82. John Wiley & Sons, Hoboken (2013)

  • Willimack, D.K., Nichols, E., Sudman, S.: Understanding unit and item nonresponse in business surveys. In: Groves, R.M., Dillman, D.A., Eltinge, J.L., Little, R.J.A. (eds.) Survey nonresponse, pp. 213–227. Wiley, New York (2002)

Acknowledgements

The authors thank Jens Stegmaier and Susanne Kohaut for fruitful discussions about the topic at an early stage of this research. Moreover, we thank Lukas Olbrich for helpful comments and suggestions regarding the analysis methods.

Funding

CK acknowledges financial support from the Graduate Programme (GradAB) of the IAB and the Friedrich-Alexander University Erlangen-Nürnberg. The funder had no influence on the design of the study or on the analysis and interpretation of the data.

Author information

Contributions

CK and JS contributed to the design of the study. CK drafted and revised the manuscript. JS also contributed to revising the manuscript. CK contributed to the data preparation, data analysis, and creating the graphs and tables. Both authors approved the final manuscript.

Corresponding author

Correspondence to Corinna König.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

  • Figure S1. R-indicator for reinterview participation by cohort (2001–2015) and wave.
  • Figure S2. R-indicator for cumulative participation by cohort (2001–2015) and wave.
  • Table S1. Variables and their categories used in the analyses.
  • Table S2. Descriptive statistics of all variables—reinterview participation.
  • Table S3. Descriptive statistics of all variables—cumulative participation.
  • Table S4. Categorization of interviewer data on reasons for failure.
  • Table S5. Reinterview nonresponse bias and absolute relative nonresponse bias for every variable category per cohort and wave.
  • Table S6. Reinterview refusal bias and absolute relative refusal bias for every variable category per cohort and wave.
  • Table S7. Reinterview noncontact bias and absolute relative noncontact bias for every variable category per cohort and wave.
  • Table S8. Cumulative nonresponse bias and absolute relative cumulative nonresponse bias for every variable category per cohort and wave.
  • Table S9. Cumulative nonresponse bias and absolute relative cumulative nonresponse bias for every variable category per wave and cohort.
  • Table S10.1. Average marginal effects (AME) of logistic regressions with hypotheses variables per cohort—reinterview participation.
  • Table S10.2. Log odds of logistic regressions with hypotheses variables and interaction terms per cohort—reinterview participation.
  • Table S10.3. Average marginal effects (AME) of logistic regressions with hypotheses variables per cohort without weights—reinterview participation.
  • Table S10.4. Average marginal effects (AME) of univariate logistic regressions per cohort—reinterview participation.
  • Table S10.5. Average marginal effects (AME) of the full logistic regression model per cohort with weights—reinterview participation.
  • Table S11.1. Average marginal effects (AME) of logistic regressions with hypotheses variables—cumulative participation.
  • Table S11.2. Average marginal effects (AME) of logistic regressions with hypotheses variables without weights—cumulative participation.
  • Table S11.3. Average marginal effects (AME) of univariate logistic regressions—cumulative participation.
  • Table S11.4. Average marginal effects (AME) of the full logistic regression model with weights—cumulative participation.
  • Table S11.5. Average marginal effects (AME) of logistic regressions for cumulative participation with hypotheses variables and the same categories as for reinterview participation.
  • Table S11.6. Log odds of logistic regressions with hypotheses variables and interaction terms—cumulative participation.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

König, C., Sakshaug, J.W. Nonresponse trends in establishment panel surveys: findings from the 2001–2017 IAB establishment panel. J Labour Market Res 57, 23 (2023). https://doi.org/10.1186/s12651-023-00349-4

Keywords

JEL Classification