2. OLS and lasso for wage prediction#
An important question in labour economics is what determines the wage of workers. This is a causal question, but we can begin to investigate it from a predictive perspective.
In the following wage example, \(Y\) is the hourly wage of a worker and \(X\) is a vector of the worker's characteristics, e.g., education, experience, and gender. Two main questions here are:
How can we use job-relevant characteristics, such as education and experience, to best predict wages?
What is the difference in predicted wages between men and women with the same job-relevant characteristics?
In this lab, we focus on the prediction question first.
2.1. Data#
The data set we consider is from the 2015 March Supplement of the U.S. Current Population Survey. We select white non-Hispanic individuals, aged 25 to 64 years, working more than 35 hours per week for at least 50 weeks of the year. We exclude self-employed workers; individuals living in group quarters; individuals in the military, agricultural, or private household sectors; individuals with inconsistent reports on earnings and employment status; individuals with allocated or missing information in any of the variables used in the analysis; and individuals with an hourly wage below \(\$3\).
The variable of interest \(Y\) is the hourly wage rate, constructed as the ratio of annual earnings to the total number of hours worked, where total hours worked is in turn the product of the number of weeks worked and the usual number of hours worked per week. In our analysis, we also restrict attention to single (never married) workers. The final sample is of size \(n = 5150\).
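For illustration only, here is a minimal sketch of this construction on hypothetical inputs; the column names earnings, weeks, and hours are made up, and the file we load below already contains the constructed wage and lwage.
library(tidyverse)
# hypothetical annual earnings, weeks worked, and usual weekly hours
raw <- tibble(
earnings = c(52000, 31200)
, weeks = c(52, 50)
, hours = c(40, 48)
)
# hourly wage = annual earnings / (weeks worked * usual hours per week)
raw |> mutate(wage = earnings / (weeks * hours), lwage = log(wage))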
2.2. Data analysis#
We start by loading the data set.
install.packages("librarian", quiet = T)
librarian::shelf(tidyverse, broom, hdm, quiet = T)
data <- read_csv(
"https://github.com/d2cml-ai/14.388_R/raw/main/Data/wage2015_subsample_inference.csv"
, show_col_types = F) |>
rename(socl = scl, sohs = shs, sout = so)
dim(data)
- 5150
- 21
Let’s have a look at the structure of the data.
glimpse(data)
Rows: 5,150
Columns: 21
$ rownames <dbl> 10, 12, 15, 18, 19, 30, 43, 44, 47, 71, 73, 77, 84, 89, 96, 1…
$ wage <dbl> 9.615385, 48.076923, 11.057692, 13.942308, 28.846154, 11.7307…
$ lwage <dbl> 2.263364, 3.872802, 2.403126, 2.634928, 3.361977, 2.462215, 2…
$ sex <dbl> 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1…
$ sohs <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ hsg <dbl> 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0…
$ socl <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1…
$ clg <dbl> 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0…
$ ad <dbl> 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ mw <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ sout <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ we <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
$ ne <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
$ exp1 <dbl> 7.0, 31.0, 18.0, 25.0, 22.0, 1.0, 42.0, 37.0, 31.0, 4.0, 7.0,…
$ exp2 <dbl> 0.4900, 9.6100, 3.2400, 6.2500, 4.8400, 0.0100, 17.6400, 13.6…
$ exp3 <dbl> 0.343000, 29.791000, 5.832000, 15.625000, 10.648000, 0.001000…
$ exp4 <dbl> 0.24010000, 92.35210000, 10.49760000, 39.06250000, 23.4256000…
$ occ <dbl> 3600, 3050, 6260, 420, 2015, 1650, 5120, 5240, 4040, 3255, 40…
$ occ2 <dbl> 11, 10, 19, 1, 6, 5, 17, 17, 13, 10, 13, 14, 11, 11, 1, 19, 1…
$ ind <dbl> 8370, 5070, 770, 6990, 9470, 7460, 7280, 5680, 8590, 8190, 82…
$ ind2 <dbl> 18, 9, 4, 12, 22, 14, 14, 9, 19, 18, 18, 18, 18, 18, 17, 4, 4…
We construct the output variable \(Y\) and the matrix \(Z\) which includes the characteristics of workers that are given in the data.
# construct matrices for estimation from the data
Y <- log(data$wage)
n <- length(Y)
Z <- data |> select(!c(wage, lwage)) #data[-which(colnames(data) %in% c("wage","lwage"))]
p <- dim(Z)[2]
cat(
"Number of observations:", n, '\n'
, "Number of raw regressors:", p
)
Number of observations: 5150
Number of raw regressors: 19
For the outcome variable wage and a subset of the raw regressors, we calculate the empirical mean to get familiar with the data.
# generate a table of means of variables
variables <- c("lwage","sex","sohs","hsg","socl","clg","ad","mw","sout","we","ne","exp1")
# map each variable name to a readable label (safer than relying on alphabetical sorting)
labels <- c(
lwage = "Log Wage", sex = "Sex", sohs = "Some High School", hsg = "High School Graduate"
, socl = "Some College", clg = "College Graduate", ad = "Advanced Degree"
, mw = "Midwest", sout = "South", we = "West", ne = "Northeast", exp1 = "Experience"
)
data |>
select(all_of(variables)) |>
pivot_longer(everything()) |>
with_groups(name, summarise, "Sample mean" = mean(value)) |>
mutate(
Variable = unname(labels[name])
, across(where(is.numeric), \(x) round(x, 2))
)
name | Sample mean | Variable |
---|---|---|
<chr> | <dbl> | <chr> |
ad | 0.14 | Advanced Degree |
clg | 0.32 | College Graduate |
exp1 | 13.76 | Experience |
hsg | 0.24 | High School Graduate |
lwage | 2.97 | Log Wage |
mw | 0.26 | Midwest |
ne | 0.23 | Northeast |
sex | 0.44 | Sex |
socl | 0.28 | Some College |
sohs | 0.02 | Some High School |
sout | 0.30 | South |
we | 0.22 | West |
For example, the share of female workers in our sample is about 44% (\(sex = 1\) if female).
2.3. Prediction Question#
Now, we will construct a prediction rule for hourly wage \(Y\), which depends linearly on job-relevant characteristics \(X\):

$$Y = \beta'X + \epsilon.$$

In the implementation below, the outcome we actually model is the log of the hourly wage.
Our goals are
Predict wages using various characteristics of workers.
Assess the predictive performance of a given model using the (adjusted) sample MSE, the (adjusted) sample \(R^2\), and the out-of-sample MSE and \(R^2\) (the in-sample measures are formalized just below).
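For reference, the in-sample measures computed in the code below, with \(p\) denoting the number of estimated parameters (including the intercept) and \(\hat\epsilon_i\) the regression residuals, are

$$MSE_{sample} = \frac{1}{n}\sum_{i=1}^{n}\hat\epsilon_i^2, \qquad MSE_{adjusted} = \frac{n}{n-p}\, MSE_{sample},$$

$$R^2_{sample} = 1 - \frac{\sum_{i=1}^{n}\hat\epsilon_i^2}{\sum_{i=1}^{n}(Y_i - \bar Y)^2}, \qquad R^2_{adjusted} = 1 - \left(1 - R^2_{sample}\right)\frac{n-1}{n-p}.$$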
We employ two different specifications for prediction:
Basic Model: \(X\) consists of a set of raw regressors (e.g., gender, experience, education indicators, occupation and industry indicators, and regional indicators).
Flexible Model: \(X\) consists of all raw regressors from the basic model plus a dictionary of transformations (e.g., \({exp}^2\) and \({exp}^3\)) and additional two-way interactions of a polynomial in experience with other regressors. An example of a regressor created through a two-way interaction is experience times the indicator of having a college degree.
Using the Flexible Model enables us to approximate the real relationship by a more complex regression model and therefore to reduce the bias. The Flexible Model increases the range of potential shapes of the estimated regression function. In general, flexible models often deliver higher prediction accuracy but are harder to interpret.
Now, let us fit both models to our data by running ordinary least squares (ols):
# 1. basic model
basic <- lwage ~ (sex + exp1 + sohs + hsg+ socl + clg + mw + sout + we + occ2 + ind2)
regbasic <- lm(basic, data=data) # perform ols using the defined model
tidy(regbasic)
cat("\nNumber of regressors in the basic model:",length(regbasic$coef), '\n') # number of regressors in the Basic Model
term | estimate | std.error | statistic | p.value |
---|---|---|---|---|
<chr> | <dbl> | <dbl> | <dbl> | <dbl> |
(Intercept) | 3.647932377 | 0.0345872395 | 105.4704692 | 0.000000e+00 |
sex | -0.105777660 | 0.0146410466 | -7.2247335 | 5.756254e-13 |
exp1 | 0.009051432 | 0.0006788967 | 13.3325609 | 6.842174e-40 |
sohs | -0.682481783 | 0.0510365863 | -13.3724027 | 4.080049e-40 |
hsg | -0.572844683 | 0.0260153880 | -22.0194557 | 9.296656e-103 |
socl | -0.444913716 | 0.0242872639 | -18.3188077 | 1.154449e-72 |
clg | -0.194620882 | 0.0227910017 | -8.5393738 | 1.756029e-17 |
mw | -0.024366990 | 0.0201527227 | -1.2091165 | 2.266737e-01 |
sout | -0.017340390 | 0.0194855874 | -0.8899085 | 3.735567e-01 |
we | 0.018834057 | 0.0209856609 | 0.8974727 | 3.695088e-01 |
occ2 | -0.019209580 | 0.0011631856 | -16.5146301 | 9.745732e-60 |
ind2 | -0.013662901 | 0.0013242750 | -10.3172692 | 1.024554e-24 |
Number of regressors in the basic model: 12
Note that the basic model consists of \(12\) regressors (including the intercept); here the occupation and industry classifications, occ2 and ind2, enter as single numeric codes rather than as sets of indicator variables.
# 2. flexible model
flex <- lwage ~ sex + sohs + hsg+ socl+ clg+ mw + sout + we + occ2 + ind2 +
(exp1 + exp2 + exp3 + exp4)*(sohs + hsg + socl + clg + occ2 + ind2 + mw + sout + we)
regflex <- lm(flex, data=data)
tidy(regflex)
cat("\n Number of regressors in the flexible model:",length(regflex$coef)) # number of regressors in the Flexible Model
term | estimate | std.error | statistic | p.value |
---|---|---|---|---|
<chr> | <dbl> | <dbl> | <dbl> | <dbl> |
(Intercept) | 3.5459817652 | 0.1293098800 | 27.42235755 | 1.498721e-154 |
sex | -0.1024693096 | 0.0146380235 | -7.00021486 | 2.886521e-12 |
sohs | -0.6909668284 | 0.8988056759 | -0.76876109 | 4.420708e-01 |
hsg | -0.5816130825 | 0.1944598339 | -2.99091628 | 2.794718e-03 |
socl | -0.3297645100 | 0.1234560370 | -2.67110883 | 7.584125e-03 |
clg | -0.0873795348 | 0.0668027183 | -1.30802364 | 1.909243e-01 |
mw | 0.1079622966 | 0.0834047742 | 1.29443785 | 1.955728e-01 |
sout | 0.0385942288 | 0.0747084056 | 0.51659821 | 6.054591e-01 |
we | -0.0035042290 | 0.0854161248 | -0.04102538 | 9.672773e-01 |
occ2 | -0.0266616559 | 0.0050950699 | -5.23283414 | 1.736165e-07 |
ind2 | -0.0161332883 | 0.0062181409 | -2.59455175 | 9.498551e-03 |
exp1 | 0.0395328237 | 0.0481219801 | 0.82151282 | 4.113926e-01 |
exp2 | -0.0470983654 | 0.5325374987 | -0.08844141 | 9.295293e-01 |
exp3 | -0.0771428872 | 0.2154298415 | -0.35808821 | 7.202921e-01 |
exp4 | 0.0192882912 | 0.0284494345 | 0.67798505 | 4.978119e-01 |
sohs:exp1 | -0.0727104038 | 0.1901282833 | -0.38242813 | 7.021598e-01 |
hsg:exp1 | -0.0250249777 | 0.0546884570 | -0.45759159 | 6.472654e-01 |
socl:exp1 | -0.0609705247 | 0.0417513801 | -1.46032357 | 1.442628e-01 |
clg:exp1 | -0.0380453251 | 0.0300321113 | -1.26682153 | 2.052770e-01 |
occ2:exp1 | 0.0031061274 | 0.0017875747 | 1.73762097 | 8.233800e-02 |
ind2:exp1 | 0.0004028696 | 0.0020727530 | 0.19436449 | 8.458982e-01 |
mw:exp1 | -0.0270885424 | 0.0301188617 | -0.89938798 | 3.684885e-01 |
sout:exp1 | -0.0078718177 | 0.0265857631 | -0.29609147 | 7.671723e-01 |
we:exp1 | -0.0024976786 | 0.0305114339 | -0.08186041 | 9.347609e-01 |
sohs:exp2 | 0.9029215434 | 1.3741164234 | 0.65709246 | 5.111511e-01 |
hsg:exp2 | 0.1877000513 | 0.5146825687 | 0.36469090 | 7.153573e-01 |
socl:exp2 | 0.5113091152 | 0.4400571846 | 1.16191516 | 2.453243e-01 |
clg:exp2 | 0.2030426932 | 0.3705629183 | 0.54793041 | 5.837637e-01 |
occ2:exp2 | -0.0343464030 | 0.0186214185 | -1.84445686 | 6.517456e-02 |
ind2:exp2 | -0.0059163027 | 0.0210536072 | -0.28101136 | 7.787131e-01 |
mw:exp2 | 0.2042857795 | 0.3188136442 | 0.64076862 | 5.217018e-01 |
sout:exp2 | 0.0495460490 | 0.2765429344 | 0.17916223 | 8.578174e-01 |
we:exp2 | 0.1190124708 | 0.3228730643 | 0.36860452 | 7.124378e-01 |
sohs:exp3 | -0.3393591739 | 0.4077661328 | -0.83223972 | 4.053126e-01 |
hsg:exp3 | -0.0373822818 | 0.1921294886 | -0.19456816 | 8.457388e-01 |
socl:exp3 | -0.1409624503 | 0.1751678330 | -0.80472795 | 4.210142e-01 |
clg:exp3 | -0.0065429757 | 0.1607227984 | -0.04070969 | 9.675289e-01 |
occ2:exp3 | 0.0141678190 | 0.0071314004 | 1.98668119 | 4.701117e-02 |
ind2:exp3 | 0.0042755765 | 0.0079664729 | 0.53669629 | 5.915008e-01 |
mw:exp3 | -0.0669345618 | 0.1233227099 | -0.54275941 | 5.873192e-01 |
sout:exp3 | -0.0212879782 | 0.1047325848 | -0.20326032 | 8.389397e-01 |
we:exp3 | -0.0616049319 | 0.1249833626 | -0.49290506 | 6.221009e-01 |
sohs:exp4 | 0.0390455106 | 0.0426584451 | 0.91530553 | 3.600745e-01 |
hsg:exp4 | 0.0016745769 | 0.0243824377 | 0.06867963 | 9.452473e-01 |
socl:exp4 | 0.0121324275 | 0.0230861018 | 0.52552950 | 5.992380e-01 |
clg:exp4 | -0.0068571596 | 0.0222924956 | -0.30759946 | 7.583997e-01 |
occ2:exp4 | -0.0019077024 | 0.0008949371 | -2.13166086 | 3.308241e-02 |
ind2:exp4 | -0.0008135041 | 0.0009948429 | -0.81772116 | 4.135546e-01 |
mw:exp4 | 0.0069431813 | 0.0155865910 | 0.44545862 | 6.560073e-01 |
sout:exp4 | 0.0031200679 | 0.0129355024 | 0.24120191 | 8.094083e-01 |
we:exp4 | 0.0080471453 | 0.0158007113 | 0.50929006 | 6.105710e-01 |
Number of regressors in the flexible model: 51
Note that the flexible model consists of \(51\) regressors (including the intercept).
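As a quick optional check, the number of columns of the design matrix implied by each formula (built here with base R's model.matrix) matches these counts:
# number of columns of the design matrix implied by each formula (incl. intercept)
ncol(model.matrix(basic, data = data)) # 12
ncol(model.matrix(flex, data = data)) # 51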
2.3.1. Re-estimating the flexible model using Lasso#
We re-estimate the flexible model using Lasso (the least absolute shrinkage and selection operator) rather than ols. Lasso is a penalized regression method that can be used to reduce the complexity of a regression model when the ratio \(p/n\) is not small. We will introduce this approach formally later in the course, but for now, we try it out here as a black-box method.
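As a rough sketch of the underlying idea (the exact penalty choice used by rlasso involves data-driven penalty loadings, which we discuss later in the course), lasso minimizes the least-squares criterion plus an \(\ell_1\) penalty on the coefficients:

$$\hat\beta^{lasso} = \arg\min_{\beta}\; \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - X_i'\beta\right)^2 + \lambda \sum_{j=1}^{p} |\beta_j|, \qquad \lambda > 0.$$

The penalty shrinks coefficients toward zero and sets many of them exactly to zero; in the output below, only 7 of the 50 regressors are selected.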
# Flexible model using Lasso
# library(hdm)
lassoreg<- rlasso(flex, data=data)
sumlasso<- summary(lassoreg)
Call:
rlasso.formula(formula = flex, data = data)
Post-Lasso Estimation: TRUE
Total number of variables: 50
Number of selected variables: 7
Residuals:
Min 1Q Median 3Q Max
-2.06952 -0.30967 -0.01303 0.30262 3.60099
Estimate
(Intercept) 3.493
sex -0.104
sohs -0.539
hsg -0.431
socl -0.306
clg 0.000
mw 0.000
sout 0.000
we 0.000
occ2 -0.020
ind2 -0.013
exp1 0.009
exp2 0.000
exp3 0.000
exp4 0.000
sohs:exp1 0.000
hsg:exp1 0.000
socl:exp1 0.000
clg:exp1 0.000
occ2:exp1 0.000
ind2:exp1 0.000
mw:exp1 0.000
sout:exp1 0.000
we:exp1 0.000
sohs:exp2 0.000
hsg:exp2 0.000
socl:exp2 0.000
clg:exp2 0.000
occ2:exp2 0.000
ind2:exp2 0.000
mw:exp2 0.000
sout:exp2 0.000
we:exp2 0.000
sohs:exp3 0.000
hsg:exp3 0.000
socl:exp3 0.000
clg:exp3 0.000
occ2:exp3 0.000
ind2:exp3 0.000
mw:exp3 0.000
sout:exp3 0.000
we:exp3 0.000
sohs:exp4 0.000
hsg:exp4 0.000
socl:exp4 0.000
clg:exp4 0.000
occ2:exp4 0.000
ind2:exp4 0.000
mw:exp4 0.000
sout:exp4 0.000
we:exp4 0.000
Residual standard error: 0.5037
Multiple R-squared: 0.2203
Adjusted R-squared: 0.2192
Joint significance test:
the sup score statistic for joint significance test is 1180 with a p-value of 0.018
2.3.2. Evaluating the predictive performance of the basic and flexible models#
Now, we can evaluate the performance of both models based on the (adjusted) \(R^2_{sample}\) and the (adjusted) \(MSE_{sample}\):
# Assess predictive performance
sumbasic <- summary(regbasic)
sumflex <- summary(regflex)
# R-squared and adjusted R-squared
R2_1 <- sumbasic$r.squared
R2_adj1 <- sumbasic$adj.r.squared
cat(
"R-squared for the basic model:\t", R2_1
, "\nAdjusted R-squared for the basic model:\t", R2_adj1
)
R2_2 <- sumflex$r.squared
R2_adj2 <- sumflex$adj.r.squared
cat(
"R-squared for the flexible model:\t", R2_2
, "\nAdjusted R-squared for the flexible model:\t", R2_adj2
)
R2_3 <- sumlasso$r.squared
R2_adj3 <- sumlasso$adj.r.squared
cat(
"\nR-squared for the lasso with flexible model:\t", R2_3
, "\nAdjusted R-squared for the flexible model:\t", R2_adj3
)
# MSE and adjusted MSE
MSE1 <- mean(sumbasic$res^2)
p1 <- sumbasic$df[1] # number of regressors
MSE_adj1 <- (n / (n - p1)) * MSE1
cat(
"\nMSE for the basic model: ", MSE1
, "\nAdjusted MSE for the basic model: ", MSE_adj1
)
MSE2 <-mean(sumflex$res^2)
p2 <- sumflex$df[1]
MSE_adj2 <- (n / (n - p2)) * MSE2
cat(
"\nMSE for the flexible model: ", MSE2
, "\nAdjusted MSE for the lasso flexible model: ", MSE_adj2
)
MSEL <-mean(sumlasso$res^2)
pL <- length(sumlasso$coef)
MSE_adj3 <- (n / (n - pL)) * MSEL
cat(
"\nMSE for the lasso flexible model: ", MSEL
, "\nAdjusted MSE for the lasso flexible model: ", MSE_adj3
)
R-squared for the basic model: 0.2320083
Adjusted R-squared for the basic model: 0.2303641
R-squared for the flexible model: 0.2452406
Adjusted R-squared for the flexible model: 0.2378395
R-squared for the lasso with flexible model: 0.2202567
Adjusted R-squared for the lasso with flexible model: 0.2191952
MSE for the basic model: 0.249809
Adjusted MSE for the basic model: 0.2503924
MSE for the flexible model: 0.2455048
Adjusted MSE for the flexible model: 0.2479604
MSE for the lasso flexible model: 0.2536315
Adjusted MSE for the lasso flexible model: 0.2561683
# Output the table
table <- matrix(0, 3, 5)
table[1,1:5] <- c(p1,R2_1,MSE1,R2_adj1,MSE_adj1)
table[2,1:5] <- c(p2,R2_2,MSE2,R2_adj2,MSE_adj2)
table[3,1:5] <- c(pL,R2_3,MSEL,R2_adj3,MSE_adj3)
colnames(table)<- c("p","R^2_sample","MSE_sample","R^2_adjusted", "MSE_adjusted")
rownames(table)<- c("Basic reg","Flexible reg", "Lasso flex")
table |> as.data.frame() |> rownames_to_column("Model")
Model | p | R^2_sample | MSE_sample | R^2_adjusted | MSE_adjusted |
---|---|---|---|---|---|
<chr> | <dbl> | <dbl> | <dbl> | <dbl> | <dbl> |
Basic reg | 12 | 0.2320083 | 0.2498090 | 0.2303641 | 0.2503924 |
Flexible reg | 51 | 0.2452406 | 0.2455048 | 0.2378395 | 0.2479604 |
Lasso flex | 51 | 0.2202567 | 0.2536315 | 0.2191952 | 0.2561683 |
Considering the measures above, the flexible model performs slightly better than the basic model.
As \(p/n\) is not large, the discrepancy between the adjusted and unadjusted measures is small. If \(p/n\) were large, we would prefer data splitting as a more general procedure to deal with potential overfitting. We illustrate the approach in the following.
2.4. Data Splitting#
Measure the prediction quality of the two models via data splitting:
Randomly split the data into one training sample and one testing sample. Here we just use a simple method (stratified splitting is a more sophisticated version of splitting that we might consider).
Use the training sample to estimate the parameters of the Basic Model and the Flexible Model.
Use the testing sample for evaluation. Predict the \(\mathtt{wage}\) of every observation in the testing sample based on the estimated parameters in the training sample.
Calculate the Mean Squared Prediction Error \(MSE_{test}\) based on the testing sample for both prediction models, as formalized below.
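With \(\hat\beta\) estimated on the training sample only and \(n_{test}\) observations in the testing sample, the quantities computed below are

$$MSE_{test} = \frac{1}{n_{test}}\sum_{i \in test}\left(Y_i - \hat\beta' X_i\right)^2, \qquad R^2_{test} = 1 - \frac{MSE_{test}}{\widehat{Var}_{test}(Y)},$$

where \(\widehat{Var}_{test}(Y)\) is the sample variance of the outcome in the testing sample.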
# splitting the data
set.seed(1) # to make the results replicable (we will generate random numbers)
random <- sample(1:n, floor(n*4/5))
# draw (4/5)*n random numbers from 1 to n without replacing them
train <- data[random,] # training sample
test <- data[-random,] # testing sample
# basic model
# estimating the parameters in the training sample
regbasic <- lm(basic, data=train)
# calculating the out-of-sample MSE
trainregbasic <- predict(regbasic, newdata=test)
y_test <- log(test$wage)
MSE_test1 <- sum((y_test - trainregbasic)^2) / length(y_test)
R2_test1 <- 1 - MSE_test1 / var(y_test)
cat(
"Test MSE for the basic model: ", MSE_test1
, "\nTest R2 for the basic model: ", R2_test1
)
Test MSE for the basic model: 0.2253461
Test R2 for the basic model: 0.2316631
In the basic model, the \(MSE_{test}\) is quite close to the \(MSE_{sample}\).
# flexible model
# estimating the parameters
options(warn=-1) # ignore warnings
regflex <- lm(flex, data=train)
# calculating the out-of-sample MSE
trainregflex<- predict(regflex, newdata=test)
y_test <- log(test$wage)
MSE_test2 <- sum((y_test - trainregflex)^2) / length(y_test)
R2_test2 <- 1- MSE_test2/var(y_test)
cat(
"Test MSE for the flexible model: ", MSE_test2
, "\nTest R2 for the flexible model: ", R2_test2
)
Test MSE for the flexible model: 0.2277139
Test R2 for the flexible model: 0.2235902
In the flexible model too, the discrepancy between the \(MSE_{test}\) and the \(MSE_{sample}\) is not large.
It is worth noticing that the \(MSE_{test}\) varies across different data splits. Hence, it is a good idea to average the out-of-sample MSE over several different data splits to obtain a more stable assessment of out-of-sample performance, as sketched below.
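Here is a rough sketch of this idea, reusing the objects data, n, basic, and flex defined above; the number of splits and the seed are arbitrary illustrative choices.
# average the out-of-sample MSE of both ols models over several random splits
set.seed(2)
n_splits <- 10
mse_splits <- replicate(n_splits, {
idx <- sample(1:n, floor(n * 4 / 5)) # new 80/20 split
train_s <- data[idx, ]
test_s <- data[-idx, ]
y_test_s <- log(test_s$wage)
c(
basic = mean((y_test_s - predict(lm(basic, data = train_s), newdata = test_s))^2)
, flex = mean((y_test_s - predict(lm(flex, data = train_s), newdata = test_s))^2)
)
})
rowMeans(mse_splits) # average test MSE for each model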
Nevertheless, we observe that, based on the out-of-sample \(MSE\), the basic model estimated by ols performs about as well as (or even slightly better than) the flexible model.
Next, let us use lasso regression in the flexible model instead of ols regression. The out-of-sample \(MSE\) on the test sample can be computed for any black-box prediction method, so we also compare the performance of lasso regression in the flexible model to ols regression.
# flexible model using lasso
library(hdm) # a library for high-dimensional metrics
reglasso <- rlasso(flex, data=train, post=FALSE) # estimating the parameters
# calculating the out-of-sample MSE
trainreglasso<- predict(reglasso, newdata=test)
MSE_lasso <- sum((y_test - trainreglasso)^2) / length(y_test)
R2_lasso<- 1 - MSE_lasso / var(y_test)
cat(
"Test MSE for the lasso on flexible model: ", MSE_lasso,
"\nTest R2 for the lasso flexible model: ", R2_lasso
)
Test MSE for the lasso on flexible model: 0.2321006
Test R2 for the lasso flexible model: 0.2086333
Finally, let us summarize the results:
# Output the comparison table
tibble(
"Model" = c("Basic reg", "Flexible reg", "Lasso Regression")
, "MSE test" = c(MSE_test1, MSE_test2, MSE_lasso)
, "R2_test" = c(R2_test1, R2_test2, R2_lasso)
)
Model | MSE test | R2_test |
---|---|---|
<chr> | <dbl> | <dbl> |
Basic reg | 0.2253461 | 0.2316631 |
Flexible reg | 0.2277139 | 0.2235902 |
Lasso Regression | 0.2321006 | 0.2086333 |