I have a series of linear models and I'd like to report the standardized coefficients for each. However, when I print the models in stargazer, it looks like stargazer automatically prints the significance stars for the standardized coefficients as if they were unstandardized coefficients. You can see how the differences emerge below.
Is it statistically wrong to print the significance stars based on the unstandardized values? How is this done in stargazer? Thanks!
#load libraries
library(stargazer)
library(lm.beta)
#fake data
var1<-rnorm(100, mean=10, sd=5)
var2<-rnorm(100, mean=5, sd=2)
var3<-rnorm(100, mean=2, sd=3)
var4<-rnorm(100, mean=5, sd=1)
df<-data.frame(var1, var2, var3, var4)
#model with unstandardized betas
model1<-lm(var1~var2+var3+var4, data=df)
#Standardized betas
model1.beta<-lm.beta(model1)
#print
stargazer(model1, model1.beta, type='text')
Stargazer does not automatically know it should look for the standardized coefficients in the second model. lm.beta just adds the standardized coefficients to the lm object, so model1.beta is still an lm object, and stargazer extracts the coefficients as usual (from model1.beta$coefficients).
Use the coef = argument to specify which coefficients you want to use: coef = list(model1$coefficients, model1.beta$standardized.coefficients)
stargazer(model1, model1.beta,
          coef = list(model1$coefficients,
                      model1.beta$standardized.coefficients),
          type = 'text')
===========================================================
                                    Dependent variable:
                              -----------------------------
                                           var1
                                    (1)            (2)
-----------------------------------------------------------
var2                               0.135          0.048
                                  (0.296)        (0.296)
var3                              -0.088         -0.044
                                  (0.205)        (0.205)
var4                              -0.190         -0.030
                                  (0.667)        (0.667)
Constant                         10.195**         0.000
                                  (4.082)        (4.082)
-----------------------------------------------------------
Observations                        100            100
R2                                 0.006          0.006
Adjusted R2                       -0.025         -0.025
Residual Std. Error (df = 96)      5.748          5.748
F Statistic (df = 3; 96)           0.205          0.205
===========================================================
Note:                           *p<0.1; **p<0.05; ***p<0.01
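A caveat not covered above: coef = only swaps the displayed estimates. The standard errors still come from the unstandardized fit (which is why the parentheses match across columns), and with stargazer's default p.auto = TRUE the stars are recomputed from the supplied coefficients and those standard errors, which is why the Constant loses its stars in column (2). If you would rather have the stars reflect the model's own p-values (a pure rescaling leaves the slopes' t-statistics and p-values unchanged), you can pass them explicitly. A minimal sketch, where pvals is a name I introduce here:
#reuse the p-values from the fitted model for both columns
pvals<-summary(model1)$coefficients[, "Pr(>|t|)"]
stargazer(model1, model1.beta,
          coef = list(model1$coefficients,
                      model1.beta$standardized.coefficients),
          p = list(pvals, pvals),
          type = 'text')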