Decomposing international gender test score differences

In this paper, we decompose worldwide PISA mathematics and reading scores. While mathematics scores are still tilted towards boys, girls have an even larger advantage over boys in reading. Girls' disadvantage in mathematics increases over the distribution of talent. Our decomposition shows that part of this increase can be explained by an increasing trend in productive endowments and learning productivity, although the largest part remains unexplained. Countries' general level of gender (in)equality also contributes to girls' disadvantage. For reading, at the upper end of the talent distribution, girls' advantage can be fully explained by differences in learning productivity, but this is not so at lower levels.


Introduction
Consensus exists regarding significant gender test score differences in schools. Boys typically excel in mathematics and science, whereas girls score better in reading and literacy subjects (e.g., Turner and Bowen 1999; Halpern et al. 2007; Ceci et al. 2009). Although girls have somewhat caught up in mathematics (Hyde and Mertz 2009), differences remain. On the other hand, there is evidence of more men or boys at the upper end of the education or professional distribution (Machin and Pekkarinen 2008), which could be attributed to the larger variance of test scores for boys. The magnitude, spread, and practical significance of gender differences in educational outcomes have remained a topic of concern. This concern matters because gender disparities in achievement at an early stage, particularly at the upper end of the distribution, may affect career selection and educational outcomes later on.
The previous literature mostly examined mean differences (Fryer and Levitt 2010), while quantile regressions exist for some countries (Gevrek and Seiberlich 2014; Sohn 2012; Thu Le and Nguyen 2018), providing evidence for Turkey, Korea, and Australia, respectively. Two possible arguments have been suggested for these gender gaps, one biological or natural (Benbow and Stanley 1980; Geary 1998) and the other environmental, including family, institutional, social, and cultural influences (e.g., Fennema and Sherman 1978; Parsons et al. 1982; Levine and Ornstein 1983; Guiso et al. 2008; Pope and Sydnor 2010; Nollenberger et al. 2016). Recent studies have looked at the impact of culture: Nollenberger et al. (2016) study immigrants in the U.S. to examine whether gender-related culture in the home country can explain differences in mathematics scores; similarly, Guiso et al. (2008) look at gender differences in PISA mathematics scores across 35 countries.
The present study looks at mathematics and reading scores for all countries included in the OECD's PISA test and tries to decompose these score differences at different percentiles of the distribution into natural and environmental factors that influence students' mathematics and reading test scores. This decomposition research is guided by the Juhn et al. (1993) decomposition model, which extends the usual Blinder-Oaxaca decomposition by taking the residual distribution into account. Following this method, this study decomposes test score gaps between males and females to analyze how much of the gap can be "predicted" by observable differences across students in the test score production function and by inequality within these classifications.

Munir and Winter-Ebmer J Labour Market Res (2018) 52:12

In this study, we employed international PISA data to examine test score differences between boys and girls worldwide, focusing on differences at different quantiles of the distribution. PISA has the advantage of covering various personal, family, school system, and societal background characteristics, which enables decomposing potential differences into effects due to different endowments, institutional settings, and the productivity of learning in different situations. We adopted a decomposition following Juhn et al. (1993), which enabled us to decompose test score differentials into endowment, productivity, and unobservable components.
Our decomposition of score differentials in mathematics shows that part of the increasing disadvantage of girls over the distribution of talent can be explained by an increasing trend in productive endowments and learning productivity, although the largest part remains unexplained. Countries' general level of gender (in)equality also contributes to girls' disadvantage. For reading, at the upper end of the talent distribution, girls' advantage can be fully explained by differences in learning productivity, but this is not so at lower levels. Our contribution to the literature lies in the extension of quantile regression results to practically all PISA countries, the inclusion of country-specific gender-related variables, and the application of the Juhn, Murphy, and Pierce analysis, which extends a simple decomposition to take the residual distribution into account.
The remainder of the paper is organized as follows: The next section describes the PISA database, its features and other data sources used in the study. Section 3 discusses the estimation strategy used in this paper and structures the econometric model based upon the Juhn, Murphy and Pierce decomposition method. Section 4 presents results on test score inequality for our dispersion analysis. Section 5 concludes.

Data
This paper uses the micro data of the Programme for International Student Assessment (PISA) 2012 as well as data on per capita GDP (PPP), gender equality, and government expenditure on education to analyze the decomposition of gender differences in test scores. Combining the available data, the dataset contains information on 480,174 students in 65 countries pertaining to mathematics and reading literacy.

PISA data
PISA is a cross-national study created by the Organisation for Economic Co-operation and Development (OECD) to assess students' ability in mathematics, reading, science, and problem solving. Since its launch in 2000, the assessment has been conducted on a triennial basis.
The main advantage of the program is its international comparability, as it assesses students' ability based on a cohort of students of the same age. Moreover, there is a large volume of background information on students and schools, which may help to put student assessment into perspective. The assessment in each wave focuses on one particular subject 1 and tests the other main areas as well. In our analysis, we employed data from the 2012 PISA wave, which focused on performance in mathematics.
The PISA 2012 dataset covers the test score performance of students from 34 OECD and 31 non-OECD countries and includes approximately 510,000 students aged 15 or 16 years, together with a number of demographic and socioeconomic variables. The instrument was paper-based and comprised a mixture of text responses and multiple-choice questions; the test takes two hours. The questions are organized in groups based on real-life situations. A stratified sampling design was used for this complex survey: at least 150 schools were selected in each country, 2 and 35 students were randomly selected in each school to form clusters. Because of potential sample selection problems, weights were assigned to each student and school. The PISA test scores are standardized with an average score of 500 points and a standard deviation of 100 points in OECD countries. In the PISA 2012 test, the final proficiency estimates were provided for each student and recorded as a set of five plausible values. 3 In this study, we used the first plausible value as a measure of student proficiency. 4

In 2012, Shanghai scored best and remained at the top with 613 PISA points in mathematics, followed by Hong Kong, Japan, Taiwan, and South Korea, all high-performing East Asian countries. Among the European countries, Liechtenstein and Switzerland demonstrated the best performance, followed by the Netherlands, Estonia, Finland, Poland, Belgium, Germany, and Austria with slightly lower figures. On average, the mean score in OECD countries was 494 in mathematics and 496 in reading. The UK, Ireland, New Zealand, and Australia were close to the OECD average, while the USA scored below it with 481 PISA points.

1 The first PISA exam in 2000 focused on reading literacy, while the second focused on mathematics. PISA 2012 again focused on mathematics literacy.

2 The PISA consortium decides which schools will participate, and each school then provides a list of eligible students. Students are selected by national project managers according to standardized procedures (OECD 2012).

3 These plausible values are calculated by a complex item-response theory (IRT) model (see Baker 2001; Von Davier and Sinharay 2013), based on the assumption that each student answers only a random subset of questions, so that true ability cannot be judged directly but only estimated from the answers given. This is a statistical concept: instead of obtaining a point estimate [like a Weighted Likelihood Estimator (WLE)], a range of possible values of students' ability, with an associated probability for each value, is estimated (OECD 2009).

4 "Working with one plausible value instead of five provides unbiased estimates of population parameters but will not estimate the imputation error that reflects the influence of test unreliability for the parameter estimation" (OECD 2009). As this imputation error decreases with a large sample size, the use of one plausible value with a sample of 480,174 students will not make any substantial difference to the mean estimates and their standard errors. For details, see p. 43: https://www.oecd-ilibrary.org/docserver/9789264056275-en.pdf?expires=1537249103&id=id&accname=guest&checksum=FCF6D3D8A03AB42A0FEC82FE7E2ADF47.
Since the primary concern of this study is to explore the differences in mathematics and reading test scores between male and female students, the dependent variable is the student test score in PISA 2012. The rich set of covariates includes five groups of characteristics: individual characteristics of the students, their family characteristics, school characteristics, students' beliefs or perceptions about learning, and country characteristics. Table 2 provides a description of all variables from the PISA data used in this study.
In survey data, the probability that an individual is sampled depends on the survey design. To take this feature into account, students' educational production functions were estimated using survey regression methods. This allowed us to include student weights according to the sampling probabilities and to cluster standard errors within schools.
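To make the weighting step concrete, the following is a minimal sketch of how student weights can enter such an estimation, assuming weighted least squares as the fitting rule. The data and the regressor (`hours`) are invented for illustration, and cluster-robust standard errors are omitted for brevity.

```python
import numpy as np

def weighted_ols(X, y, w):
    """Weighted least squares: minimize sum_i w_i * (y_i - x_i'b)^2.

    Survey (student) weights are applied by rescaling each row of the
    design matrix and the outcome by sqrt(w_i), then running ordinary OLS.
    """
    sw = np.sqrt(np.asarray(w, dtype=float))
    Xw = np.asarray(X, dtype=float) * sw[:, None]
    yw = np.asarray(y, dtype=float) * sw
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta

# Tiny synthetic example: score = 500 + 10 * hours + noise
rng = np.random.default_rng(0)
hours = rng.uniform(0, 5, 200)
score = 500 + 10 * hours + rng.normal(0, 5, 200)
w = rng.uniform(0.5, 2.0, 200)          # hypothetical student weights
X = np.column_stack([np.ones_like(hours), hours])
beta = weighted_ols(X, score, w)        # beta[0] ~ intercept, beta[1] ~ slope
```

In a full survey-regression setup, the same weights would also enter the variance estimation, with standard errors clustered at the school level.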
Non-parametric kernel density estimates for the distribution of the entire sample of students' test score achievements by gender are presented in Fig. 1. The left and right panels of Fig. 1 display kernel density estimates for mathematics and reading test performance, respectively. Males' test scores in mathematics are on average higher than those of females, whereas females on average score better than males in reading. Regarding the spread of the curves, the female distribution is narrower and more concentrated around the mean than the relatively wider male distribution, both for mathematics and for reading test scores.
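A minimal sketch of such a Gaussian kernel density comparison is shown below. The score distributions are synthetic, chosen only to mimic the qualitative pattern described for Fig. 1 (boys: slightly higher mathematics mean, larger spread); none of the numbers are the paper's estimates.

```python
import numpy as np

def gaussian_kde(sample, grid, bandwidth):
    """Evaluate a Gaussian kernel density estimate of `sample` on `grid`."""
    z = (grid[:, None] - sample[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(sample) * bandwidth * np.sqrt(2 * np.pi))

# Synthetic mathematics scores mimicking the qualitative pattern in Fig. 1:
# boys with a slightly higher mean but a wider spread than girls.
rng = np.random.default_rng(1)
boys = rng.normal(505, 105, 5000)
girls = rng.normal(494, 95, 5000)

grid = np.linspace(100, 900, 400)
f_boys = gaussian_kde(boys, grid, bandwidth=30.0)
f_girls = gaussian_kde(girls, grid, bandwidth=30.0)
# The narrower female distribution yields a higher, more concentrated peak.
```

Plotting `f_boys` and `f_girls` against `grid` reproduces the qualitative shape of the figure: the female curve peaks higher and is tighter around its mean.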

Level of development, education expenditure, and gender equality data
To consider the country's level of development in this analysis, we employed the data on GDP per capita (measured in purchasing power parity (PPP)) from the World Development Indicators 2012. Data on education expenditure was derived from the Human Development Report 2013, United Nations Development Program, while data for Jordan, Shanghai, and Macao were obtained from the World Bank database.
To explore the cultural role related to gender equality, following Guiso et al. (2008), we employed the Gender Gap Index (GGI) of the World Economic Forum (Hausmann et al. 2013). The Global Gender Gap Index was first introduced in 2006 and has since been published annually by the World Economic Forum. The GGI ranks countries based on the average of four sub-indices, 5 namely the economic, political, health, and educational opportunities provided to females. A GGI of 1 reflects full gender equality and 0 total gender inequality. The top five countries in the 2012 GGI ranking were Iceland (0.86), Finland (0.85), Norway (0.84), Sweden (0.82), and Ireland (0.78). It is important to note that GGI data are only available for whole countries 6 and not for the participating economic regions in the PISA 2012 dataset (e.g., Hong Kong, Macao, and Shanghai). Furthermore, it does not seem reasonable that data for whole countries can be representative of these economic regions, so these regions were eliminated from the dataset. 7

5 See Table 2 in Appendix.

6 GGI data for Liechtenstein, Montenegro, and Tunisia are unavailable.

Estimation strategy
In general, decomposition approaches follow the standard partial equilibrium approach in which observed outcomes of one group (i.e., gender, region, or time period) can be used to construct various counterfactual scenarios for the other group. Besides this, decompositions also provide useful indications of particular hypotheses to be explored in more detail (Fortin et al. 2011).
Originally, decomposition methods were proposed by Oaxaca (1973) and Blinder (1973) for decomposing differences in the means of an outcome variable. The Juhn et al. (1993) (JMP) decomposition method extends the Oaxaca/Blinder decomposition by considering the residual distribution. 8 We show this decomposition following the description of Sierminska et al. (2010). The test score equations are

y_ij = X_ij β_j + ε_ij,  j = M, W,

where y_j are the test scores for men and women, respectively, X_j are observables, β_j are the vectors of estimated coefficients, and ε_j are the residuals (unobservables, i.e., unmeasured prices and quantities).

If F_j(.) denotes the cumulative distribution function of the residuals for group j, then the residual gap consists of two components: an individual's percentile in the residual distribution, p_ij, and the distribution function of the test score equation residuals, F_j(.). If p_ij = F_j(ε_ij | X_ij) is the percentile of an individual residual in the residual distribution of group j, by definition we can write

ε_ij = F_j^(-1)(p_ij | X_ij).

Using this framework, we can construct hypothetical outcome distributions with any of the components held fixed. Thus, we can determine:

1. Hypothetical outcomes with varying quantities between the groups, fixed prices (coefficients), and a fixed residual distribution:

y_ij^(1) = X_ij β + F^(-1)(p_ij | X_ij);

2. Hypothetical outcomes with varying quantities, varying prices, and a fixed residual distribution:

y_ij^(2) = X_ij β_j + F^(-1)(p_ij | X_ij);

3. Outcomes with varying quantities, varying prices, and a varying residual distribution: 9

y_ij^(3) = X_ij β_j + F_j^(-1)(p_ij | X_ij) = y_ij,

where F^(-1)(.) is the inverse of a benchmark cumulative residual distribution (e.g., the average residual distribution over both samples) and β an estimate of benchmark coefficients (e.g., the coefficients from a pooled model over the whole sample).

Let a capital letter stand for a summary statistic of the distribution of the variable denoted by the corresponding lower-case letter. For instance, Y may be the mean or interquartile range of the distribution of y. The differential Y_M - Y_W can then be decomposed as

T = Y_M - Y_W = Q + P + U,

where T is the total difference, Q = Y_M^(1) - Y_W^(1) can be attributed to differences in observable endowments, P = (Y_M^(2) - Y_W^(2)) - (Y_M^(1) - Y_W^(1)) to differences in the productivity of observable contributions to test scores, and U = T - (Y_M^(2) - Y_W^(2)) to differences in unobservable quantities and prices. This last component captures not only the effects of unmeasured prices and differences in the distribution of unmeasured characteristics (e.g., an unmeasured characteristic being more important for men than for women in generating test scores), but also measurement error.
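As a concrete illustration, the JMP counterfactuals can be sketched numerically. Everything below is synthetic: the data, coefficients, and the 90th-percentile evaluation point are invented purely to demonstrate the mechanics, and for simplicity the residual distributions are treated as unconditional (ignoring the dependence on X).

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic samples for the two groups: y = X b_j + e_j
n = 4000
X_m = np.column_stack([np.ones(n), rng.normal(0.2, 1.0, n)])
X_w = np.column_stack([np.ones(n), rng.normal(0.0, 1.0, n)])
y_m = X_m @ np.array([500.0, 25.0]) + rng.normal(0, 60, n)
y_w = X_w @ np.array([495.0, 20.0]) + rng.normal(0, 50, n)

b_m, b_w = fit_ols(X_m, y_m), fit_ols(X_w, y_w)
b_pool = fit_ols(np.vstack([X_m, X_w]), np.concatenate([y_m, y_w]))  # benchmark prices

e_m, e_w = y_m - X_m @ b_m, y_w - X_w @ b_w
e_pool = np.concatenate([e_m, e_w])       # pooled ("average") residual distribution

def percentiles(e):
    """Percentile of each residual within its own group's distribution."""
    return (np.argsort(np.argsort(e)) + 0.5) / len(e)

def counterfactuals(X, b_own, e_own):
    p = percentiles(e_own)
    y1 = X @ b_pool + np.quantile(e_pool, p)  # own quantities; pooled prices, pooled residuals
    y2 = X @ b_own + np.quantile(e_pool, p)   # own quantities and prices; pooled residuals
    y3 = X @ b_own + e_own                    # observed outcomes
    return y1, y2, y3

q = 0.90                                      # evaluate the gap at the 90th percentile
Y_m = [np.quantile(v, q) for v in counterfactuals(X_m, b_m, e_m)]
Y_w = [np.quantile(v, q) for v in counterfactuals(X_w, b_w, e_w)]

T = Y_m[2] - Y_w[2]                           # total differential
Q = Y_m[0] - Y_w[0]                           # endowments
P = (Y_m[1] - Y_w[1]) - (Y_m[0] - Y_w[0])     # productivity (prices)
U = T - (Y_m[1] - Y_w[1])                     # unobservables
# By construction, T = Q + P + U at any chosen quantile.
```

Running the same computation for a grid of quantiles (5th, 10th, ..., 95th) yields a decomposition profile of the kind plotted in Figs. 2 and 3.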
The major advantage of the JMP framework is that it enables us to examine how differences in the distribution affect other inequality measures and how the effects on inequality differ below and above the mean.

8 Other methods, like Machado and Mata (2005), provide a similar decomposition by extending the Blinder-Oaxaca framework along quantiles; Juhn et al. (1993) have the advantage that they also provide a distribution of residuals.

9 These outcomes are equal to the originally observed values y_ij.

Descriptive statistics

Table 4 contains the descriptive statistics on all variables used in this microanalysis of the PISA 2012 dataset, displayed by gender and separately for OECD and non-OECD countries. We imputed missing data for the variable 'age' and for some other variables 10 in the schooling vector using the mean imputation method. Table 4 shows that students in OECD countries on average scored 42.12 and 46.1 points more in mathematics and reading, respectively, than students in non-OECD countries. On average, OECD girls fall behind OECD boys by 5.4 points in mathematics and 9 points in reading, while non-OECD girls fall 3.5 PISA points behind non-OECD boys in mathematics and 6.5 in reading.

In order to examine whether a gender difference within PISA is statistically significant at the 1%, 5%, and 10% levels, we also calculated the mean difference between the girls' and boys' scores. 11 It shows that significant mean differences across gender (based on the OECD and non-OECD grouping) exist for almost all variables.
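The kind of mean-difference test involved can be sketched with a Welch t-statistic on synthetic scores. The group means, spreads, and sample sizes below are invented, not the paper's estimates.

```python
import numpy as np

def welch_t(x, y):
    """Welch t-statistic for the difference in means between two samples."""
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    return (x.mean() - y.mean()) / se

# Synthetic reading scores: girls ahead by ~9 points, boys more dispersed.
rng = np.random.default_rng(3)
girls = rng.normal(505, 95, 30000)
boys = rng.normal(496, 105, 30000)

t = welch_t(girls, boys)
# With samples this large, |t| > 2.58 indicates significance at the 1% level.
```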

PISA score in mathematics
Decomposition results for the mathematics test scores following JMP are depicted in Fig. 2. Positive values indicate a female disadvantage. In Fig. 2, we include a varying set of control variables: individual characteristics, family characteristics, school characteristics, characteristics of beliefs about the learning process, and country characteristics. Panels A-E provide decomposition results including only one of these sets of covariates; Panel F shows a decomposition using all available covariates together. Male-female test score differences are shown at various percentiles: 5th, 10th, 25th, 50th, 75th, 90th, and 95th. Table 6 in Appendix provides the numerical results.
In general, a strong upward trend in the total male-female test score differential (T) is evident. While there is (almost) no difference at the lowest percentiles, the female disadvantage in mathematical competence increases almost linearly to around 20 PISA points at the 95th percentile. As good mathematical knowledge, particularly at the upper percentiles, is especially valuable for getting a good job (Athey et al. 2007), it is important to explore this issue. This total effect (T) is decomposed into an effect due to differences in observables (Q), a productivity effect (P) capturing the learning productivity of these observables, and, finally, an unobservable rest (U).
Looking first at Panel F, which includes all characteristics, this upward trend in the mathematical test score difference (T) cannot easily be explained by a single factor. Unobservables demonstrate a clear upward trend, but observables and productivity effects do so as well, at a somewhat lower level. We now examine the individual contributions of, for instance, individual versus school characteristics. Here, decomposing the contribution of unobservables (U) in Panels A-E does not make sense, because even if the individual contributions were orthogonal, the unobservable trends would mainly measure the impact of omitted variables.
Turning to the contribution of observables (Q) towards mathematical competence, the endowment effect, Panel F indicates a negative endowment effect: females typically enjoy better endowments, worth around 10 PISA points at lower percentiles and down to 5 PISA points at higher ones. These advantages stem from better female endowments in terms of schooling characteristics and beliefs. The slight upward trend in the contribution of observables in Panel F can mainly be attributed to an upward trend in the belief characteristics.
What is the contribution of learning productivity (P)? Panel F shows that the learning productivity of females increases the male-female test score gap for all percentiles, but the effect is slightly higher for higher percentiles. Panels A-E indicate similar productivity disadvantages for all included lists of characteristics.
To examine the contribution of individual variables in more detail, we performed the following quantitative exercise: increase, in turn, each variable in the model by one standard deviation and calculate the impact on the PISA score for males and females (Table 1). Starting with variables that increase the male test score advantage, the share of female students in a classroom has the largest positive effect: increasing the female share by one standard deviation increases the male-female test score differential by 8.8 PISA points. This is contrary to the results of Gneezy et al. (2003), who found that more female peers in school increase the mathematical competence of females. Other strongly pro-male variables are students' beliefs such as perseverance, success motivation, or a career or job motive. Factors that reduce the male-female gap are subjective norms, public schools, more studying outside school, better education of the mother, and mothers who work more. Interestingly, countries whose GGI is more favorable towards women have lower male-female PISA score differences. This contrasts with the simple correlations of Stoet and Geary (2013), which did not reveal any correlation between PISA gender differentials and the GGI.
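The mechanics of this one-standard-deviation exercise can be sketched as follows. The gender-specific "prices" and the standard deviation below are hypothetical placeholders, not numbers from Table 1.

```python
# Hypothetical "prices" (PISA points per unit) of one regressor for each gender,
# and a hypothetical standard deviation of that regressor; none of these
# numbers come from the paper.
beta_male = 12.0      # effect of the regressor on male scores
beta_female = 3.2     # effect of the regressor on female scores
sd_x = 0.11           # one standard deviation of the regressor

def gap_shift(b_m, b_f, sd):
    """Change in the male-female score gap from a one-SD increase in a regressor."""
    return (b_m - b_f) * sd

delta = gap_shift(beta_male, beta_female, sd_x)
# delta > 0 means the shift widens the male advantage by `delta` PISA points.
```

Repeating this for every covariate, one at a time, produces the ceteris-paribus shifts reported in Table 1.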

PISA scores for reading
An equivalent analysis was conducted for reading, as shown in Fig. 3. Panel F shows the JMP decomposition when all control variables are included. In contrast to mathematics, a continuous advantage of girls over boys is evident. In particular, there is a large disadvantage for boys at the lower end of the distribution: at the 5th and 10th percentiles, boys score almost one half of a standard deviation (50 PISA points) less than girls. Torppa et al. (2018) investigate this using an extension of the Finnish PISA data and find that general reading fluency (speed) is the main explanation for this difference, whereas other indicators, like mastery orientation, homework activity, or leisure book reading frequency, are not very influential.
On the other hand, similar to mathematics, the total advantage of girls (T) diminishes from around 50 PISA points at the lowest percentiles to about 20 PISA points at the highest. 12 Decomposing this, at the highest percentile levels the male-female differential is fully explained by productivity differentials (P), less so at lower percentiles. There is also a contribution of observables (Q): the endowment of students contributes between 6 and 12 PISA points towards the female advantage. Finally, the contribution of unobservables (U) is mixed, ranging from -9 to +9 PISA points. Which factors are responsible for this difference? Our detailed analysis of the causes in Panels A-E of Fig. 3 indicates that endowment differences (Q) are strongest for schooling characteristics. Schooling characteristics, considered separately, explain between 7 and 10 PISA points, while the contributions of the other domains are minor.

Fig. 2 Juhn-Murphy-Pierce decomposition of relative mathematics test scores by percentile, 2012. T total differential, Q endowments, P productivity, U unobservables; a-e provide decompositions using only a subset of variables, f uses all available variables
On the other hand, there are large productivity (P) contributions in all separately considered domains; they are particularly high in the family, individual, belief, and country domains.

Table 1 Ceteris-paribus shifts in math and reading test scores due to a one standard deviation shift in individual variables
Gender score inequality is calculated by subtracting female scores from male scores, with positive values indicating inequality to the disadvantage of females. Male and female test scores are calculated such that a one standard deviation increase in a particular characteristic, e.g., age, is associated with an increase of 0.071 score points in the math gender score gap.

Regarding the contributions of individual items (Table 1), those favorable for boys are the percentage of girls in a classroom, success motivation, and class size. Factors favorable for girls are public schools and the amount of studying time outside school. Interestingly, a country's GGI has no effect on the reading differential between boys and girls.

Conclusion
In this paper, we provided a decomposition of PISA mathematics and reading scores worldwide. Our contribution to the literature lies in the extension of quantile regression results to practically all PISA countries, the inclusion of country-specific gender-related variables, and the application of the Juhn et al. (1993) analysis, which extends a simple decomposition to take the residual distribution into account. While mathematics scores are still tilted towards boys, girls have an even larger advantage over boys in reading. This advantage is particularly large for low-achieving individuals. Our analysis shows that over the distribution of talent, boys' scores increase more than girls', for both mathematics and reading: thus, at the highest percentiles, we see a smaller reading advantage for girls as well as a large advantage for boys in mathematics.
Our decomposition shows that part of this increase can be explained by an increasing trend in productive endowments and learning productivity, but the largest part remains unexplained. Countries' general level of gender (in)equality also contributes to girls' disadvantage. For reading, at the upper end of the talent distribution, girls' advantage can be fully explained by differences in learning productivity, although this is not so at lower levels. Education policy trying to reduce these gender differences must target the efforts of high-performing females in mathematics and science, and must be concerned with low-achieving boys who lag behind in reading and verbal expressiveness.

Authors' contributions
The authors contributed equally towards the preparation of the paper. Both authors read and approved the final manuscript.

Acknowledgements
We thank Nicole Schneeweis and Helmut Hofer for helpful comments.

Competing interests
The authors declare that they have no competing interests.

Availability of data and materials
PISA data are freely available; Stata files are available upon request.

Funding
There is no external funding.

Students' own characteristics
Age
The age of the student was calculated as the difference between the year and month of the testing and the year and month of the student's birth

Grade
The relative grade index was computed to capture between-country variation. It indicates whether students are below or above the modal grade in a country (the modal grade having value "zero")

Country of birth
According to PISA, students are distinguished by country of birth to take into account their immigrant status:
1. "Native students": students born in the country of assessment with at least one parent born in the country of assessment
2. "Second-generation students": students born in the country of assessment with both parents foreign-born
3. "First-generation students": foreign-born students with foreign-born parents
In this study, the variable for country of birth only differentiates whether students are "native" or "other"

Occupational status of parents
Parents' job status is closely linked to socio-economic status, which can cause large gaps in performance between students. Students reported whether their mothers and fathers currently work "full or part time" or hold another job status (i.e., home duties, retired, etc.)

Family structure
An index was formed on the basis of family structure with the following categories:
1. "Single parent family": students living with one of the following: mother, father, male guardian, female guardian
2. "Two parent family": students living with a father or step/foster father and a mother or step/foster mother
3. Students not living with their parents

Language spoken at home
An internationally comparable variable was derived from the information (containing a country-specific code for each language) with the following categories:
1. The language at home is the same as the language of assessment
2. The language at home is another language

Home possession
Home possession is a summary index of 23 household items, mainly related to the possession of books and other things necessary for profound study

Schooling characteristics
School category
Schools are classified as either public or private according to whether a private entity or a public agency has the ultimate power to make decisions concerning their affairs

School autonomy
Twelve items measuring school autonomy were asked, including (a) selecting teachers for hire, (b) firing teachers, (c) establishing teachers' starting salaries, (d) determining teachers' salary increases, (e) formulating the school budget, (f) deciding on budget allocations within the school, (g) establishing student disciplinary policies, (h) establishing student assessment policies, (i) approving students for admission to the school, (j) choosing which textbooks are used, (k) determining course content, and (l) deciding which courses are offered. Five response categories were used, and principals were asked to tick as many categories as appropriate:
1. Principal
2. Teachers
3. School governing board
4. Regional education authority
5. National education authority

Class size
The average class size was derived from one of nine possibilities, ranging from "15 students or fewer" to "more than 50 students", for the average class size of the test language in the sampled schools. The midpoint of each response category was used, resulting in a value of 13 for the lowest category and 53 for the highest

Quality of physical infrastructure
The index concerning the quality of physical infrastructure was computed on the basis of three items measuring principals' perceptions of potential factors hindering instruction at school: (a) shortage or inadequacy of school buildings and grounds, (b) shortage or inadequacy of heating/cooling and lighting systems, and (c) shortage or inadequacy of instructional space (i.e., classrooms). All items were reversed for scaling

Proportion of girls enrolled at school
The proportion is based on the enrollment data provided by the principal and is calculated by dividing the number of girls by the total number of girls and boys at a school

Proportion of fully certified teachers
The proportion was calculated by dividing the number of fully certified teachers by the total number of teachers

Student-teacher ratio
The student-teacher ratio is obtained by dividing the school size by the total number of teachers. The number of part-time teachers was weighted by 0.5 and the number of full-time teachers by 1.0 in the computation of this index

Teacher-student relations
The index of teacher-student relations is derived from students' responses to the question "To what extent do you agree with the following statements?"

Students' perceptions or beliefs about learning
Difference in test effort
To compare students' performance across countries, which can be influenced by the effort students invest in preparing for the PISA assessment, a variable "difference in test effort" (or relative test effort) is used. It is based on the "Effort Thermometer" developed by a group of researchers at the Max Planck Institute in Berlin (Kunter et al. 2002). The Effort Thermometer is based on three 10-point scales (for more details, see Butler and Adams 2007):
Effort Difference = PISA Effort - School Mark Effort
Effort Difference scores can range from negative nine to positive nine. A negative score means that students indicate they would try harder on a test that counts than they did on the PISA assessment

Out of school study time
The index was calculated by summing the time students reported (in open-ended format) spending on studying for school subjects outside school

Perseverance
Five items measuring perseverance were included: (a) When confronted with a problem, I give up easily; (b) I put off difficult problems; (c) I remain interested in the tasks that I start; (d) I continue working on tasks until everything is perfect; and (e) When confronted with a problem, I do more than what is expected of me. Five response categories were used:
1. Very much like me
2. Mostly like me
3. Somewhat like me
4. Not much like me
5. Not at all like me
All three items were reversed

Perceived control
The index of perceived control is constructed using student responses to the question whether they think they can succeed with enough effort (or whether the course material is too hard to understand through their sole effort). Students responded that they strongly agreed, agreed, disagreed, or strongly disagreed

Instrumental motivation for job and career
The index of instrumental motivation for job and career is constructed by asking whether making an effort is worthwhile because it will increase the chances of getting a job and improve one's career, with student responses indicating the extent to which they strongly agreed, agreed, disagreed, or strongly disagreed

Subjective norms (Mathematics)
The index of subjective norms in mathematics is constructed using student responses over whether, thinking about how people important to them view mathematics, they strongly agreed, agreed, disagreed, or strongly disagreed to the fol-