2. OLS and lasso for wage prediction#

An important question in labour economics is what determines the wage of workers. This is a causal question, but we can begin to investigate it from a predictive perspective.

In the following wage example, \(Y\) is the hourly wage of a worker and \(X\) is a vector of the worker's characteristics, e.g., education, experience, and gender. The two main questions here are:

  • How can we use job-relevant characteristics, such as education and experience, to best predict wages?

  • What is the difference in predicted wages between men and women with the same job-relevant characteristics?

In this lab, we focus on the prediction question first.

2.1. Data#

The data set we consider is from the 2015 March Supplement of the U.S. Current Population Survey. We select white non-Hispanic individuals, aged 25 to 64 years, who work more than 35 hours per week for at least 50 weeks of the year. We exclude self-employed workers; individuals living in group quarters; individuals in the military, agricultural, or private household sectors; individuals with inconsistent reports on earnings and employment status; individuals with allocated or missing information in any of the variables used in the analysis; and individuals with an hourly wage below \(\$3\).

The variable of interest \(Y\) is the hourly wage rate, constructed as the ratio of annual earnings to the total number of hours worked, which in turn is the product of the number of weeks worked and the usual number of hours worked per week. In our analysis, we also focus on single (never married) workers. The final sample is of size \(n=5150\).
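
In symbols, the hourly wage construction described above is

\[ \text{wage} = \frac{\text{annual earnings}}{\text{weeks worked} \times \text{usual hours worked per week}}. \]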

2.2. Data analysis#

We start by loading the data set.

install.packages("librarian", quiet = T)
librarian::shelf(tidyverse, broom, hdm, quiet = T)
data <- read_csv(
    "https://github.com/d2cml-ai/14.388_R/raw/main/Data/wage2015_subsample_inference.csv"
    , show_col_types = F) |> 
        rename(socl = scl, sohs = shs, sout = so)
dim(data)
package 'librarian' successfully unpacked and MD5 sums checked
[1] 5150   21

Let’s have a look at the structure of the data.

glimpse(data)
Rows: 5,150
Columns: 21
$ rownames <dbl> 10, 12, 15, 18, 19, 30, 43, 44, 47, 71, 73, 77, 84, 89, 96, 1…
$ wage     <dbl> 9.615385, 48.076923, 11.057692, 13.942308, 28.846154, 11.7307…
$ lwage    <dbl> 2.263364, 3.872802, 2.403126, 2.634928, 3.361977, 2.462215, 2…
$ sex      <dbl> 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1…
$ sohs     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ hsg      <dbl> 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0…
$ socl     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1…
$ clg      <dbl> 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0…
$ ad       <dbl> 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ mw       <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ sout     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ we       <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ ne       <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
$ exp1     <dbl> 7.0, 31.0, 18.0, 25.0, 22.0, 1.0, 42.0, 37.0, 31.0, 4.0, 7.0,…
$ exp2     <dbl> 0.4900, 9.6100, 3.2400, 6.2500, 4.8400, 0.0100, 17.6400, 13.6…
$ exp3     <dbl> 0.343000, 29.791000, 5.832000, 15.625000, 10.648000, 0.001000…
$ exp4     <dbl> 0.24010000, 92.35210000, 10.49760000, 39.06250000, 23.4256000…
$ occ      <dbl> 3600, 3050, 6260, 420, 2015, 1650, 5120, 5240, 4040, 3255, 40…
$ occ2     <dbl> 11, 10, 19, 1, 6, 5, 17, 17, 13, 10, 13, 14, 11, 11, 1, 19, 1…
$ ind      <dbl> 8370, 5070, 770, 6990, 9470, 7460, 7280, 5680, 8590, 8190, 82…
$ ind2     <dbl> 18, 9, 4, 12, 22, 14, 14, 9, 19, 18, 18, 18, 18, 18, 17, 4, 4…

We construct the outcome variable \(Y\) (log hourly wage) and the matrix \(Z\), which includes the worker characteristics available in the data.

# construct matrices for estimation from the data 

Y <- log(data$wage)
n <- length(Y)
Z <- data |> select(!c(wage, lwage)) #data[-which(colnames(data) %in% c("wage","lwage"))]
p <- dim(Z)[2]

cat(
  "Number of observations:", n, '\n'
   , "Number of raw regressors:", p
  )
Number of observations: 5150 
 Number of raw regressors: 19

For the outcome variable (log) wage and a subset of the raw regressors, we calculate the empirical means to get familiar with the data.

# generate a table of means of variables 
variables <- c("lwage","sex","sohs","hsg","socl","clg","ad","mw","sout","we","ne","exp1")
data |> 
  select(all_of(variables)) |>
  relocate(all_of(variables)) |>
  pivot_longer(everything()) |>
  with_groups(name, summarise, "Sample mean" = mean(value)) |>
  mutate(
    Variable = c("Log Wage","Sex","Some High School","High School Graduate","Some College","College Graduate", "Advanced Degree","Midwest","South","West","Northeast","Experience") |> str_sort()
    , across(where(is.numeric), ~ round(.x, 2))
    )
A tibble: 12 × 3
name   Sample mean  Variable
<chr>  <dbl>        <chr>
ad      0.14        Advanced Degree
clg     0.32        College Graduate
exp1   13.76        Experience
hsg     0.24        High School Graduate
lwage   2.97        Log Wage
mw      0.26        Midwest
ne      0.23        Northeast
sex     0.44        Sex
socl    0.28        Some College
sohs    0.02        Some High School
sout    0.30        South
we      0.22        West

For example, the share of female workers in our sample is about 44% (\(sex=1\) if female).
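
This share can be read directly off the data, e.g.:

mean(data$sex) # share of observations with sex = 1 (female), approximately 0.44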

2.3. Prediction Question#

Now, we will construct a prediction rule for hourly wage \(Y\), which depends linearly on job-relevant characteristics \(X\):

\[ \begin{equation}\label{decompose} Y = \beta'X+ \epsilon. \end{equation} \]

Our goals are

  • Predict wages using various characteristics of workers.

  • Assess the predictive performance of a given model using the (adjusted) sample MSE, the (adjusted) sample \(R^2\) and the out-of-sample MSE and \(R^2\).

We employ two different specifications for prediction:

  1. Basic Model: \(X\) consists of a set of raw regressors (e.g. gender, experience, education indicators, occupation and industry indicators and regional indicators).

  2. Flexible Model: \(X\) consists of all raw regressors from the basic model plus a dictionary of transformations (e.g., \({exp}^2\) and \({exp}^3\)) and additional two-way interactions of a polynomial in experience with other regressors. An example of a regressor created through a two-way interaction is experience times the indicator of having a college degree.

Using the Flexible Model enables us to approximate the real relationship by a more complex regression model and therefore to reduce the bias. The Flexible Model increases the range of potential shapes of the estimated regression function. In general, flexible models often deliver higher prediction accuracy but are harder to interpret.

Now, let us fit both models to our data by running ordinary least squares (ols):

# 1. basic model
basic <- lwage ~ (sex + exp1 + sohs + hsg+ socl + clg + mw + sout + we + occ2 + ind2)
regbasic <- lm(basic, data=data) # perform ols using the defined model
tidy(regbasic)
cat("\nNumber of regressors in the basic model:",length(regbasic$coef), '\n') # number of regressors in the Basic Model
A tibble: 12 × 5
term         estimate      std.error     statistic     p.value
<chr>        <dbl>         <dbl>         <dbl>         <dbl>
(Intercept)   3.647932377  0.0345872395  105.4704692    0.000000e+00
sex          -0.105777660  0.0146410466   -7.2247335    5.756254e-13
exp1          0.009051432  0.0006788967   13.3325609    6.842174e-40
sohs         -0.682481783  0.0510365863  -13.3724027    4.080049e-40
hsg          -0.572844683  0.0260153880  -22.0194557    9.296656e-103
socl         -0.444913716  0.0242872639  -18.3188077    1.154449e-72
clg          -0.194620882  0.0227910017   -8.5393738    1.756029e-17
mw           -0.024366990  0.0201527227   -1.2091165    2.266737e-01
sout         -0.017340390  0.0194855874   -0.8899085    3.735567e-01
we            0.018834057  0.0209856609    0.8974727    3.695088e-01
occ2         -0.019209580  0.0011631856  -16.5146301    9.745732e-60
ind2         -0.013662901  0.0013242750  -10.3172692    1.024554e-24
Number of regressors in the basic model: 12 

Note that the basic model consists of \(12\) regressors (including the intercept).

# 2. flexible model
flex <- lwage ~ sex + sohs + hsg+ socl+ clg+ mw + sout + we + occ2 + ind2 + 
  (exp1 + exp2 + exp3 + exp4)*(sohs + hsg + socl + clg + occ2 + ind2 + mw + sout + we)
regflex <- lm(flex, data=data)
tidy(regflex)
cat("\n Number of regressors in the flexible model:",length(regflex$coef)) # number of regressors in the Flexible Model
A tibble: 51 × 5
term         estimate       std.error     statistic     p.value
<chr>        <dbl>          <dbl>         <dbl>         <dbl>
(Intercept)   3.5459817652  0.1293098800  27.42235755   1.498721e-154
sex          -0.1024693096  0.0146380235  -7.00021486   2.886521e-12
sohs         -0.6909668284  0.8988056759  -0.76876109   4.420708e-01
hsg          -0.5816130825  0.1944598339  -2.99091628   2.794718e-03
socl         -0.3297645100  0.1234560370  -2.67110883   7.584125e-03
clg          -0.0873795348  0.0668027183  -1.30802364   1.909243e-01
mw            0.1079622966  0.0834047742   1.29443785   1.955728e-01
sout          0.0385942288  0.0747084056   0.51659821   6.054591e-01
we           -0.0035042290  0.0854161248  -0.04102538   9.672773e-01
occ2         -0.0266616559  0.0050950699  -5.23283414   1.736165e-07
ind2         -0.0161332883  0.0062181409  -2.59455175   9.498551e-03
exp1          0.0395328237  0.0481219801   0.82151282   4.113926e-01
exp2         -0.0470983654  0.5325374987  -0.08844141   9.295293e-01
exp3         -0.0771428872  0.2154298415  -0.35808821   7.202921e-01
exp4          0.0192882912  0.0284494345   0.67798505   4.978119e-01
sohs:exp1    -0.0727104038  0.1901282833  -0.38242813   7.021598e-01
hsg:exp1     -0.0250249777  0.0546884570  -0.45759159   6.472654e-01
socl:exp1    -0.0609705247  0.0417513801  -1.46032357   1.442628e-01
clg:exp1     -0.0380453251  0.0300321113  -1.26682153   2.052770e-01
occ2:exp1     0.0031061274  0.0017875747   1.73762097   8.233800e-02
ind2:exp1     0.0004028696  0.0020727530   0.19436449   8.458982e-01
mw:exp1      -0.0270885424  0.0301188617  -0.89938798   3.684885e-01
sout:exp1    -0.0078718177  0.0265857631  -0.29609147   7.671723e-01
we:exp1      -0.0024976786  0.0305114339  -0.08186041   9.347609e-01
sohs:exp2     0.9029215434  1.3741164234   0.65709246   5.111511e-01
hsg:exp2      0.1877000513  0.5146825687   0.36469090   7.153573e-01
socl:exp2     0.5113091152  0.4400571846   1.16191516   2.453243e-01
clg:exp2      0.2030426932  0.3705629183   0.54793041   5.837637e-01
occ2:exp2    -0.0343464030  0.0186214185  -1.84445686   6.517456e-02
ind2:exp2    -0.0059163027  0.0210536072  -0.28101136   7.787131e-01
mw:exp2       0.2042857795  0.3188136442   0.64076862   5.217018e-01
sout:exp2     0.0495460490  0.2765429344   0.17916223   8.578174e-01
we:exp2       0.1190124708  0.3228730643   0.36860452   7.124378e-01
sohs:exp3    -0.3393591739  0.4077661328  -0.83223972   4.053126e-01
hsg:exp3     -0.0373822818  0.1921294886  -0.19456816   8.457388e-01
socl:exp3    -0.1409624503  0.1751678330  -0.80472795   4.210142e-01
clg:exp3     -0.0065429757  0.1607227984  -0.04070969   9.675289e-01
occ2:exp3     0.0141678190  0.0071314004   1.98668119   4.701117e-02
ind2:exp3     0.0042755765  0.0079664729   0.53669629   5.915008e-01
mw:exp3      -0.0669345618  0.1233227099  -0.54275941   5.873192e-01
sout:exp3    -0.0212879782  0.1047325848  -0.20326032   8.389397e-01
we:exp3      -0.0616049319  0.1249833626  -0.49290506   6.221009e-01
sohs:exp4     0.0390455106  0.0426584451   0.91530553   3.600745e-01
hsg:exp4      0.0016745769  0.0243824377   0.06867963   9.452473e-01
socl:exp4     0.0121324275  0.0230861018   0.52552950   5.992380e-01
clg:exp4     -0.0068571596  0.0222924956  -0.30759946   7.583997e-01
occ2:exp4    -0.0019077024  0.0008949371  -2.13166086   3.308241e-02
ind2:exp4    -0.0008135041  0.0009948429  -0.81772116   4.135546e-01
mw:exp4       0.0069431813  0.0155865910   0.44545862   6.560073e-01
sout:exp4     0.0031200679  0.0129355024   0.24120191   8.094083e-01
we:exp4       0.0080471453  0.0158007113   0.50929006   6.105710e-01
 Number of regressors in the flexible model: 51

Note that the flexible model consists of \(51\) regressors (including the intercept).
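
One way to verify these counts, which equal the number of estimated coefficients including the intercept, is to expand each formula into its design matrix:

# number of columns of the design matrix implied by each formula (incl. intercept)
ncol(model.matrix(basic, data = data)) # 12
ncol(model.matrix(flex, data = data))  # 51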

2.3.1. Re-estimating the flexible model using Lasso#

We re-estimate the flexible model using Lasso (the least absolute shrinkage and selection operator) rather than ols. Lasso is a penalized regression method that can be used to reduce the complexity of a regression model when the ratio \(p/n\) is not small. We will introduce this approach formally later in the course, but for now, we try it out here as a black-box method.
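
For orientation, lasso replaces the least-squares objective with a penalized one,

\[ \hat{\beta}_{lasso} = \arg\min_{\beta} \ \frac{1}{n}\sum_{i=1}^{n} \big(Y_i - \beta'X_i\big)^2 + \lambda \sum_{j=1}^{p} |\beta_j|, \]

where the penalty level \(\lambda\) governs how many coefficients are set exactly to zero. rlasso in the hdm package chooses \(\lambda\) in a data-driven way and, by default, refits ols on the selected regressors (post-lasso).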

# Flexible model using Lasso
# library(hdm)
lassoreg<- rlasso(flex, data=data) 
sumlasso<- summary(lassoreg)
Call:
rlasso.formula(formula = flex, data = data)

Post-Lasso Estimation:  TRUE 

Total number of variables: 50
Number of selected variables: 7 

Residuals: 
     Min       1Q   Median       3Q      Max 
-2.06952 -0.30967 -0.01303  0.30262  3.60099 

            Estimate
(Intercept)    3.493
sex           -0.104
sohs          -0.539
hsg           -0.431
socl          -0.306
clg            0.000
mw             0.000
sout           0.000
we             0.000
occ2          -0.020
ind2          -0.013
exp1           0.009
exp2           0.000
exp3           0.000
exp4           0.000
sohs:exp1      0.000
hsg:exp1       0.000
socl:exp1      0.000
clg:exp1       0.000
occ2:exp1      0.000
ind2:exp1      0.000
mw:exp1        0.000
sout:exp1      0.000
we:exp1        0.000
sohs:exp2      0.000
hsg:exp2       0.000
socl:exp2      0.000
clg:exp2       0.000
occ2:exp2      0.000
ind2:exp2      0.000
mw:exp2        0.000
sout:exp2      0.000
we:exp2        0.000
sohs:exp3      0.000
hsg:exp3       0.000
socl:exp3      0.000
clg:exp3       0.000
occ2:exp3      0.000
ind2:exp3      0.000
mw:exp3        0.000
sout:exp3      0.000
we:exp3        0.000
sohs:exp4      0.000
hsg:exp4       0.000
socl:exp4      0.000
clg:exp4       0.000
occ2:exp4      0.000
ind2:exp4      0.000
mw:exp4        0.000
sout:exp4      0.000
we:exp4        0.000

Residual standard error: 0.5037
Multiple R-squared:  0.2203
Adjusted R-squared:  0.2192
Joint significance test:
 the sup score statistic for joint significance test is  1180 with a p-value of 0.018
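
To list just the selected regressors rather than scanning the full coefficient table, here is a minimal sketch, assuming the logical index field documented for hdm's rlasso objects:

# names of the variables selected by lasso
names(which(lassoreg$index))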

2.3.2. Evaluating the predictive performance of the basic and flexible models#

Now, we can evaluate the performance of both models based on the (adjusted) \(R^2_{sample}\) and the (adjusted) \(MSE_{sample}\):
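
Writing \(\hat{\epsilon}_i\) for the in-sample residuals and \(p\) for the number of estimated coefficients (including the intercept), the measures computed below are

\[ MSE_{sample} = \frac{1}{n}\sum_{i=1}^{n} \hat{\epsilon}_i^2, \qquad MSE_{adjusted} = \frac{n}{n-p}\, MSE_{sample}, \]

\[ R^2_{sample} = 1 - \frac{\sum_{i=1}^{n} \hat{\epsilon}_i^2}{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}, \qquad R^2_{adjusted} = 1 - (1 - R^2_{sample})\,\frac{n-1}{n-p}. \]

The adjustments penalize models with many regressors, since the unadjusted in-sample measures can only improve as regressors are added.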

# Assess predictive performance
sumbasic <- summary(regbasic)
sumflex <- summary(regflex)

# R-squared and adjusted R-squared
R2_1 <- sumbasic$r.squared
R2_adj1 <- sumbasic$adj.r.squared
cat(
    "R-squared for the basic model:\t", R2_1
    , "\nAdjusted R-squared for the basic model:\t", R2_adj1
    )

R2_2 <- sumflex$r.squared
R2_adj2 <- sumflex$adj.r.squared
cat(
    "R-squared for the flexible model:\t", R2_2
    , "\nAdjusted R-squared for the flexible model:\t", R2_adj2
    )

R2_3 <- sumlasso$r.squared
R2_adj3 <- sumlasso$adj.r.squared
cat(
    "\nR-squared for the lasso with flexible model:\t", R2_3
    , "\nAdjusted R-squared for the flexible model:\t", R2_adj3
    )

# MSE and adjusted MSE
MSE1 <- mean(sumbasic$res^2)
p1 <- sumbasic$df[1] # number of regressors
MSE_adj1 <- (n / (n - p1)) * MSE1

cat(
    "\nMSE for the basic model: ", MSE1
    , "\nAdjusted MSE for the basic model: ", MSE_adj1
    )

MSE2 <-mean(sumflex$res^2)
p2 <- sumflex$df[1]
MSE_adj2 <- (n / (n - p2)) * MSE2

cat(
    "\nMSE for the flexible model: ", MSE2
    , "\nAdjusted MSE for the lasso flexible model: ", MSE_adj2
)

MSEL <-mean(sumlasso$res^2)
pL <- length(sumlasso$coef)
MSE_adj3 <- (n / (n - pL)) * MSEL
cat(
    "\nMSE for the lasso flexible model: ", MSEL
    , "\nAdjusted MSE for the lasso flexible model: ", MSE_adj3
)
R-squared for the basic model:  0.2320083 
Adjusted R-squared for the basic model:  0.2303641 
R-squared for the flexible model:  0.2452406 
Adjusted R-squared for the flexible model:  0.2378395 
R-squared for the lasso with flexible model:  0.2202567 
Adjusted R-squared for the lasso with flexible model:  0.2191952 
MSE for the basic model:  0.249809 
Adjusted MSE for the basic model:  0.2503924 
MSE for the flexible model:  0.2455048 
Adjusted MSE for the flexible model:  0.2479604 
MSE for the lasso flexible model:  0.2536315 
Adjusted MSE for the lasso flexible model:  0.2561683
# Output the table
table <- matrix(0, 3, 5)
table[1,1:5]   <- c(p1,R2_1,MSE1,R2_adj1,MSE_adj1)
table[2,1:5]   <- c(p2,R2_2,MSE2,R2_adj2,MSE_adj2)
table[3,1:5]   <- c(pL,R2_3,MSEL,R2_adj3,MSE_adj3)
colnames(table)<- c("p","R^2_sample","MSE_sample","R^2_adjusted", "MSE_adjusted")
rownames(table)<- c("Basic reg","Flexible reg", "Lasso flex")
table |> as.data.frame() |> rownames_to_column("Model")
A data.frame: 3 × 6
Model         p   R^2_sample  MSE_sample  R^2_adjusted  MSE_adjusted
<chr>             <dbl>       <dbl>       <dbl>         <dbl>
Basic reg     12  0.2320083   0.2498090   0.2303641     0.2503924
Flexible reg  51  0.2452406   0.2455048   0.2378395     0.2479604
Lasso flex    51  0.2202567   0.2536315   0.2191952     0.2561683

Considering the measures above, the flexible model performs slightly better than the basic model in-sample.

As \(p/n\) is not large here, the discrepancy between the adjusted and unadjusted measures is small. If \(p/n\) were large, however, we would want to rely on data splitting as a more general procedure to deal with potential overfitting. We illustrate the approach in the following.

2.4. Data Splitting#

Measure the prediction quality of the two models via data splitting:

  • Randomly split the data into one training sample and one testing sample. Here we just use a simple method (stratified splitting is a more sophisticated version of splitting that we might consider).

  • Use the training sample to estimate the parameters of the Basic Model and the Flexible Model.

  • Use the testing sample for evaluation. Predict the \(\mathtt{wage}\) of every observation in the testing sample based on the estimated parameters in the training sample.

  • Calculate the Mean Squared Prediction Error \(MSE_{test}\) based on the testing sample for both prediction models, as formalized below.
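
Concretely, with \(\hat{\beta}\) estimated on the training sample and \(n_{test}\) observations in the testing sample, we compute

\[ MSE_{test} = \frac{1}{n_{test}} \sum_{i \in test} \big(Y_i - \hat{\beta}'X_i\big)^2, \qquad R^2_{test} = 1 - \frac{MSE_{test}}{\widehat{Var}(Y_{test})}, \]

where \(\widehat{Var}(Y_{test})\) is the sample variance of the outcome in the testing sample.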

# splitting the data
set.seed(1) # to make the results replicable (we will generate random numbers)
random <- sample(1:n, floor(n*4/5))
# draw (4/5)*n random numbers from 1 to n without replacing them
train <- data[random,] # training sample
test <- data[-random,] # testing sample
# basic model
# estimating the parameters in the training sample
regbasic <- lm(basic, data=train)

# calculating the out-of-sample MSE
trainregbasic <- predict(regbasic, newdata=test)
y_test <- log(test$wage)
MSE_test1 <- sum((y_test - trainregbasic)^2) / length(y_test)
R2_test1 <- 1 - MSE_test1 / var(y_test)

cat(
  "Test MSE for the basic model: ", MSE_test1
  , "\nTest R2 for the basic model: ", R2_test1
  )
Test MSE for the basic model:  0.2253461 
Test R2 for the basic model:  0.2316631

In the basic model, the \(MSE_{test}\) is quite close to the \(MSE_{sample}\).

# flexible model
# estimating the parameters
options(warn=-1) # ignore warnings 
regflex <- lm(flex, data=train)

# calculating the out-of-sample MSE
trainregflex<- predict(regflex, newdata=test)
y_test <- log(test$wage)
MSE_test2 <- sum((y_test - trainregflex)^2) / length(y_test)
R2_test2 <- 1- MSE_test2/var(y_test)

cat(
  "Test MSE for the flexible model: ", MSE_test2
  , "\nTest R2 for the flexible model: ", R2_test2
  )
Test MSE for the flexible model:  0.2277139 
Test R2 for the flexible model:  0.2235902

In the flexible model too, the discrepancy between the \(MSE_{test}\) and the \(MSE_{sample}\) is not large.

It is worth noting that the \(MSE_{test}\) varies across different data splits. Hence, it is a good idea to average the out-of-sample MSE over several random splits to obtain a more stable assessment of predictive performance.
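
A minimal sketch of this averaging idea for the basic model is below; the seed, the number of splits, and the 80/20 ratio are illustrative choices rather than part of the analysis above.

# average the out-of-sample MSE of the basic model over several random splits
set.seed(123)
n_splits <- 10
mse_splits <- replicate(n_splits, {
  idx <- sample(1:n, floor(n * 4 / 5))         # random 80% training indices
  fit <- lm(basic, data = data[idx, ])         # estimate on the training part
  yhat <- predict(fit, newdata = data[-idx, ]) # predict on the held-out part
  mean((log(data$wage[-idx]) - yhat)^2)        # test MSE for this split
})
mean(mse_splits) # averaged out-of-sample MSE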

Nevertheless, we observe that, based on the out-of-sample \(MSE\), the basic model estimated by ols performs about as well as (or slightly better than) the flexible model.

Next, let us use lasso regression in the flexible model instead of ols regression. The out-of-sample \(MSE\) on the test sample can be computed for any black-box prediction method, so we also compare the performance of lasso regression in the flexible model to ols regression.

# flexible model using lasso
library(hdm) # a library for high-dimensional metrics
reglasso <- rlasso(flex, data=train, post=FALSE) # estimating the parameters

# calculating the out-of-sample MSE
trainreglasso<- predict(reglasso, newdata=test)
MSE_lasso <- sum((y_test - trainreglasso)^2) / length(y_test)
R2_lasso<- 1 - MSE_lasso / var(y_test)

cat(
  "Test MSE for the lasso on flexible model: ", MSE_lasso, 
  "\nTest R2 for the lasso flexible model: ", R2_lasso
  )
Test MSE for the lasso on flexible model:  0.2321006 
Test R2 for the lasso flexible model:  0.2086333

Finally, let us summarize the results:

# Output the comparison table

tibble(
  "Model" = c("Basic reg", "Flexible reg", "Lasso Regression")
  , "MSE test" = c(MSE_test1, MSE_test2, MSE_lasso)
  , "R2_test" = c(R2_test1, R2_test2, R2_lasso)
)
A tibble: 3 × 3
Model             MSE test   R2_test
<chr>             <dbl>      <dbl>
Basic reg         0.2253461  0.2316631
Flexible reg      0.2277139  0.2235902
Lasso Regression  0.2321006  0.2086333