Multiple regression allows us to relate a numerical response variable to one or more numerical or categorical predictors.
We can use multiple regression models to understand relationships, assess differences, and make predictions.
But what about a situation where the response of interest is categorical and binary?
On April 15, 1912, the famous ocean liner Titanic sank in the North Atlantic after striking an iceberg on its maiden voyage. The dataset titanic.csv
contains the survival status and other attributes of individuals on the Titanic.
- survived: survival status (0 = died, 1 = survived)
- pclass: passenger class (1 = 1st, 2 = 2nd, 3 = 3rd)
- name: name of individual
- sex: sex (male or female)
- age: age in years
- fare: passenger fare in British pounds

We are interested in investigating the variables that contribute to passenger survival. Do women and children really come first?
Today we’re using a variety of packages we’ve used before.
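The exact setup chunk isn't shown here; as a reasonable assumption, loading the tidyverse and tidymodels covers every function used below (read_csv, the dplyr and ggplot2 verbs, logistic_reg, and the broom functions tidy, glance, and augment).
# Assumed setup: these two packages cover every function used below
library(tidyverse)   # read_csv(), dplyr verbs, ggplot2
library(tidymodels)  # logistic_reg(), set_engine(), and broom's tidy()/glance()/augment()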
Let’s load our data and then look at it.
titanic <- read_csv("~/titanic.csv")
glimpse(titanic)
## Rows: 887
## Columns: 6
## $ pclass <dbl> 3, 1, 3, 1, 3, 3, 1, 3, 3, 2, 3, 1, 3, 3, 3, 2, 3, 2, 3, 3, 2…
## $ name <chr> "Mr. Owen Harris Braund", "Mrs. John Bradley (Florence Briggs…
## $ sex <chr> "male", "female", "female", "female", "male", "male", "male",…
## $ age <dbl> 22, 38, 26, 35, 35, 27, 54, 2, 27, 14, 4, 58, 20, 39, 14, 55,…
## $ fare <dbl> 7.2500, 71.2833, 7.9250, 53.1000, 8.0500, 8.4583, 51.8625, 21…
## $ survived <dbl> 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0…
Recall the multiple regression model and its fitted counterpart:
\[ y = \beta_0 + \beta_1~x_1 + \beta_2~x_2 + \cdots + \beta_k~x_k + \epsilon \]
\[ \hat{y} = b_0 + b_1~x_1 + b_2~x_2 + \cdots + b_k~x_k \]
Denote by \(p\) the probability of survival and consider the model below.
\[ p = \beta_0 + \beta_1~x_1 + \beta_2~x_2 + \cdots + \beta_k~x_k + \epsilon\]
Can you see any problems with this approach?
lm_survival <- lm(survived ~ age + sex, data = titanic)
tidy(lm_survival)
## # A tibble: 3 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.752 0.0356 21.1 2.88e-80
## 2 age -0.000343 0.000979 -0.350 7.26e- 1
## 3 sexmale -0.551 0.0289 -19.1 3.50e-68
This isn't helpful! A linear model is not constrained to produce predicted probabilities between 0 and 1, and its usual assumptions don't hold for a binary response. We need to develop a new tool.
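One quick way to see the problem (a small check, not part of the original notes) is to look at the range of the linear model's fitted values; nothing constrains them to stay between 0 and 1.
# Inspect the range of the linear model's fitted "probabilities"
augment(lm_survival) %>%
  summarize(min_fitted = min(.fitted), max_fitted = max(.fitted))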
Odds are sometimes expressed as X : Y and read X to Y.
Odds are the ratio of successes to failures, where values larger than 1 favor a success and values smaller than 1 favor a failure.
If \(P(A) = 1/2\), what are the odds of \(A\)?
If \(P(B) = 1/3\) what are the odds of \(B\)?
An odds ratio is a ratio of odds.
\[\text{logit}(p) = \text{log}\left(\frac{p}{1-p}\right)\]
The logit takes a value of \(p\) between 0 and 1 and outputs a value between \(-\infty\) and \(\infty\).
The inverse logit (logistic) takes a value between \(-\infty\) and \(\infty\) and outputs a value between 0 and 1.
\[\text{inverse logit}(x) = \frac{e^x}{1+e^x} = \frac{1}{1 + e^{-x}}\]
There is a one-to-one relationship between probabilities and log-odds. If we create a model using the log-odds we can “work backwards” using the logistic function to obtain probabilities between 0 and 1.
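As a quick numerical illustration (not in the original notes), base R's qlogis() and plogis() implement the logit and inverse logit, so we can check the round trip directly:
prob <- 0.75
log_odds <- qlogis(prob)   # logit: log(0.75 / 0.25) = log(3), about 1.1
plogis(log_odds)           # inverse logit recovers 0.75
plogis(c(-5, 0, 5))        # any real number maps back into (0, 1)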
\[\text{log}\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_{k}\]
Use the inverse logit to find the expression for \(p\).
\[p = \frac{e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_{k}}}{1 + e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_{k}}}\]
We can use the logistic regression model to obtain predicted probabilities of success for a binary response variable.
We can handle fitting the model via computer either in a tidymodels framework or by using the glm function.
fit_1 <- logistic_reg() %>%
  set_engine("glm") %>%   # the glm engine fits a binomial (logistic) model
  fit(as.factor(survived) ~ sex + age, data = titanic)
fit_1 %>%
  tidy()
## # A tibble: 3 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.11 0.208 5.34 9.05e- 8
## 2 sexmale -2.50 0.168 -14.9 3.24e-50
## 3 age -0.00206 0.00586 -0.351 7.25e- 1
logit_mod <- glm(survived ~ sex + age, data = titanic, family = "binomial")
tidy(logit_mod)
## # A tibble: 3 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.11 0.208 5.34 9.05e- 8
## 2 sexmale -2.50 0.168 -14.9 3.24e-50
## 3 age -0.00206 0.00586 -0.351 7.25e- 1
And we can use augment to find predicted log-odds.
pred_log_odds <- augment(logit_mod)
\[\text{log}\left(\frac{\hat{p}}{1-\hat{p}}\right) = 1.11 - 2.50~sex - 0.00206~age\] \[\hat{p} = \frac{e^{1.11 - 2.50~sex - 0.00206~age}}{{1+e^{1.11 - 2.50~sex - 0.00206~age}}}\]
Holding sex constant, for every additional year of age, we expect the log-odds of survival to decrease by approximately 0.002.
Holding age constant, we expect males to have a log-odds of survival that is 2.50 lower than that of females.
\[\frac{\hat{p}}{1-\hat{p}} = e^{1.11 - 2.50~sex - 0.00206~age}\]
tidy(logit_mod) %>%
  mutate(estimate = exp(estimate))
## # A tibble: 3 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 3.05 0.208 5.34 9.05e- 8
## 2 sexmale 0.0821 0.168 -14.9 3.24e-50
## 3 age 0.998 0.00586 -0.351 7.25e- 1
Holding sex constant, for every one year increase in age, the odds of survival are expected to be multiplied by \(e^{-0.00206} = 0.998\).
Holding age constant, the odds of survival for males is \(e^{-2.50} = 0.082\) times the odds of survival for females.
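As a sanity check (an added illustration, not from the original notes), we can compute the predicted odds for a male and a female of the same age and confirm that their ratio matches the exponentiated coefficient:
# Predicted odds for a male and a female of the same (arbitrary) age
check <- tibble(age = c(30, 30), sex = c("male", "female")) %>%
  augment(logit_mod, newdata = .) %>%
  mutate(odds = exp(.fitted))

# Male odds divided by female odds: approximately exp(-2.50) = 0.082
check$odds[1] / check$odds[2]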
tibble(
  age = rep(0:80, times = 2),
  sex = rep(c("male", "female"), each = 81)
) %>%
  augment(logit_mod, newdata = .) %>%
  mutate(p = exp(.fitted) / (1 + exp(.fitted))) %>%
  ggplot(aes(x = age, y = p, color = sex)) +
  geom_line() +
  scale_color_viridis_d()
tibble(
  age = 31,
  sex = "male"
) %>%
  augment(logit_mod, newdata = .) %>%
  mutate(p = exp(.fitted) / (1 + exp(.fitted))) %>%
  pull("p")
## [1] 0.1900059
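Equivalently (a small aside, not in the original notes), augment() can return predictions on the probability scale directly with type.predict = "response", which skips the manual inverse-logit step:
tibble(age = 31, sex = "male") %>%
  augment(logit_mod, newdata = ., type.predict = "response") %>%
  pull(.fitted)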
glance(logit_mod)
## # A tibble: 1 × 8
## null.deviance df.null logLik AIC BIC deviance df.residual nobs
## <dbl> <int> <dbl> <dbl> <dbl> <dbl> <int> <int>
## 1 1183. 886 -458. 922. 936. 916. 884 887
If you use glance, you can see a variety of measures of model fit.
Two of these measures are the null deviance and the deviance. The null deviance tells us how well we can predict the response variable with only the intercept, while the deviance tells us about the model fit now that we have added two predictors. Lower values are better: notice that the deviance drops from 1183 to 916, a reduction of 267, at the cost of two degrees of freedom for the two predictors we added to the model.
You can also see the AIC (and BIC) here. These are based on the deviance but penalize you for including more explanatory variables, much like adjusted \(R^2\). AIC is useful when comparing different models.
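For example (a minimal sketch, not part of the original notes), we could fit an intercept-only model and compare its AIC to that of logit_mod; the model with the lower AIC is preferred.
# Intercept-only (null) model for comparison
null_mod <- glm(survived ~ 1, data = titanic, family = "binomial")

glance(null_mod)$AIC
glance(logit_mod)$AIC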
Please add fare to the model. Interpret the coefficients for your variables using odds ratios.
What is the predicted probability of survival for a 40-year-old man who paid 100 pounds? What if the fare went up to 500 pounds?
Set age as being equal to its mean value. Then, create a predicted probability plot showing the effect of fare price for men and women. Describe what you see.