I am estimating some spatial econometric models that contain both a spatial autoregressive term rho and a spatial error term lambda. To communicate my results I was using the texreg package, which accepts the sacsarlm models I am working with. I noticed, however, that texreg prints identical p-values for the rho and lambda parameters. texreg appears to be returning the p-value found in the model@LR1$p.value slot of the model object.
The parameters rho and lambda differ in magnitude and have different standard errors, so they should not have equivalent p-values. If I call summary on the model object I get distinct p-values, but I cannot figure out where those values are stored within the model object, despite going through every element of a str(model) call.
My question is twofold:
- Am I correct in thinking this is an error in the texreg (and screenreg etc.) function or am I erring in my interpretation?
- How do I compute the correct p-value or find it in the model object (I am writing a new extract function for texreg and need to find the correct value)?
Below is a minimal example that shows the problem:
library(spdep)
library(texreg)

set.seed(42)

# random binary spatial weights matrix and simulated data
W.ran <- matrix(rbinom(100 * 100, 1, .3), nrow = 100)
X <- rnorm(100)
Y <- .2 * X + rnorm(100) + .9 * (W.ran %*% X)
W.test <- mat2listw(W.ran)

model <- sacsarlm(Y ~ X, type = "sacmixed",
                  listw = W.test, zero.policy = TRUE)
summary(model)
Call:sacsarlm(formula = Y ~ X, listw = W.test, type = "sacmixed",
zero.policy = TRUE)
Residuals:
      Min        1Q    Median        3Q       Max
-2.379283 -0.750922  0.036044  0.675951  2.577148

Type: sacmixed
Coefficients: (asymptotic standard errors)
                   Estimate Std. Error z value Pr(>|z|)
(Intercept)      0.91037455 0.65700059  1.3857   0.1659
X               -0.00076362 0.10330510 -0.0074   0.9941
lag.(Intercept) -0.03193863 0.02310075 -1.3826   0.1668
lag.X            0.89764491 0.02231353 40.2287   <2e-16
Rho: -0.0028541
Asymptotic standard error: 0.0059647
z-value: -0.47849, p-value: 0.6323
Lambda: -0.020578
Asymptotic standard error: 0.020057
z-value: -1.026, p-value: 0.3049
LR test value: 288.74, p-value: < 2.22e-16
Log likelihood: -145.4423 for sacmixed model
ML residual variance (sigma squared): 1.0851, (sigma: 1.0417)
Number of observations: 100
Number of parameters estimated: 7
AIC: 304.88, (AIC for lm: 585.63)
screenreg(model)
=================================
Model 1
---------------------------------
(Intercept) 0.91
(0.66)
X -0.00
(0.10)
lag.(Intercept) -0.03
(0.02)
lag.X 0.90 ***
(0.02)
---------------------------------
Num. obs. 100
Parameters 7
AIC (Linear model) 585.63
AIC (Spatial model) 304.88
Log Likelihood -145.44
Wald test: statistic 1.05
Wald test: p-value 0.90
Lambda: statistic -0.02
Lambda: p-value 0.00
Rho: statistic -0.00
Rho: p-value 0.00
=================================
*** p < 0.001, ** p < 0.01, * p < 0.05
Obviously, in the example, Rho and Lambda have different p-values, neither of which is zero, so there is a problem with the texreg output. Any help with why this is occurring, or where to obtain the correct p-values, would be much appreciated!
texreg author here. Thanks for catching this. As described in my reply here, texreg uses extract methods to retrieve the relevant information from any of the (currently more than 70 supported) model object types. It looks like there is a glitch in the GOF part of the method for sarlm objects, and I think you are right that the lambda and rho parts need some updating. Here is, in condensed form, what the offending part of the method currently does (as of texreg 1.36.13):
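(The snippet below is a simplified sketch of the relevant GOF rows, not the verbatim texreg source; the point is where the p-values come from.)
# Sketch of the GOF rows added for sac/sacmixed models. Both the Lambda and
# the Rho row take their p-value from model$LR1$p.value, the likelihood-ratio
# test, which is why screenreg() prints the same near-zero value for both:
gof.names <- c("Lambda: statistic", "Lambda: p-value",
               "Rho: statistic",    "Rho: p-value")
gof       <- c(model$lambda, model$LR1$p.value,
               model$rho,    model$LR1$p.value)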
The sacsarlm function does not store the results printed by its summary method in the fitted object, so you are rightly pointing out that going through str(model) does not reveal the true p-values.
Therefore it makes sense to take a look at what the print.summary.sarlm function in the spdep package actually does when it prints the summary. I found the code for this function in the file R/summary.spsarlm.R in the source code of the package on CRAN. In condensed form, it does something like this:
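(This is a condensed paraphrase of that logic, not the verbatim spdep source; inside print.summary.sarlm the argument x is the summary object.)
x <- summary(model)  # stand-in for the summary object passed to print()
if (x$type == "sac" || x$type == "sacmixed") {
  # use the asymptotic standard errors stored in the object and compute
  # two-sided normal p-values on the fly
  rho.z    <- x$rho / x$rho.se
  lambda.z <- x$lambda / x$lambda.se
  cat("Rho:", x$rho, "z-value:", rho.z,
      "p-value:", 2 * (1 - pnorm(abs(rho.z))), "\n")
  cat("Lambda:", x$lambda, "z-value:", lambda.z,
      "p-value:", 2 * (1 - pnorm(abs(lambda.z))), "\n")
}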
You can see there that the function first distinguishes between the different sub-models (error vs. sac/sacmixed vs. everything else), then decides which standard errors to use, and then computes the p-values on the fly without saving them anywhere. That also answers your second question: you can reproduce the p-values from summary directly, as in the snippet below.
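# Quick check with the example model from the question (this assumes the
# fitted sarlm object is the usual list with rho, rho.se, lambda and
# lambda.se components, as it is for sac/sacmixed fits):
2 * (1 - pnorm(abs(model$rho / model$rho.se)))        # Rho p-value, ~0.6323
2 * (1 - pnorm(abs(model$lambda / model$lambda.se)))  # Lambda p-value, ~0.3049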
So this is what we need to do in our extract method as well in order to get the same result as the summary method in the spdep package. We also need to move rho and lambda from the GOF block up to the coefficient block of the table (see the comments section below for a discussion). Here is my attempt at adopting their approach in the extract method, sketched in condensed form:
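(The following is a simplified illustration of the approach rather than the exact method that will ship; it keeps only a few GOF rows and assumes the usual sarlm components rho, rho.se, lambda, lambda.se and the summary Coef matrix.)
extract.sarlm <- function(model, ...) {
  s <- summary(model, ...)
  names <- rownames(s$Coef)            # regular coefficients
  cf    <- s$Coef[, 1]
  se    <- s$Coef[, 2]
  pval  <- s$Coef[, ncol(s$Coef)]
  if (!is.null(model$rho)) {           # spatial lag (autoregressive) term
    names <- c(names, "rho")
    cf    <- c(cf, model$rho)
    se    <- c(se, model$rho.se)
    pval  <- c(pval, 2 * (1 - pnorm(abs(model$rho / model$rho.se))))
  }
  if (!is.null(model$lambda)) {        # spatial error term
    names <- c(names, "lambda")
    cf    <- c(cf, model$lambda)
    se    <- c(se, model$lambda.se)
    pval  <- c(pval, 2 * (1 - pnorm(abs(model$lambda / model$lambda.se))))
  }
  # only a few GOF rows are kept here for brevity
  gof.names   <- c("Num. obs.", "Log Likelihood", "AIC (Spatial model)")
  gof         <- c(length(model$fitted.values), as.numeric(logLik(model)), AIC(model))
  gof.decimal <- c(FALSE, TRUE, TRUE)
  createTexreg(coef.names = names, coef = cf, se = se, pvalues = pval,
               gof.names = gof.names, gof = gof, gof.decimal = gof.decimal)
}
# Re-register the method so that screenreg()/texreg() pick up the new version;
# texreg already knows the sarlm class, so a plain setMethod() call works:
setMethod("extract", signature = "sarlm", definition = extract.sarlm)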
You can just execute this code at run time to update the way texreg handles these objects. Please do let me know if you still think there are any glitches I haven't spotted. If none are reported in the comments, I will include the updated extract method in the next texreg release.
With these changes, calling screenreg(model, single.row = TRUE) reports rho and lambda in the coefficient block of the table, with the same p-values that summary(model) prints (about 0.63 and 0.30 in your example) instead of the duplicated LR-test p-value.