I want to graph histograms of the parameter estimates from a Stan model against the priors for those parameters. I have tried running the model in Stan, graphing the estimates with ggplot2, and then overlaying an approximation of each prior distribution using R's random generation functions (e.g. rnorm(), rbinom()), but I have run into many scaling issues that make the graphs impossible to get looking right.
I was thinking a better way would be simply to sample directly from the prior distributions and then graph those samples against the parameter estimates, but running a whole separate model just to sample from the priors seems very time-consuming. I was wondering if there is a way to do this within, or in parallel to, an existing model.
Here is a sample script.
# load the packages used throughout (gridExtra and dplyr are used further down)
library(rstan)
library(ggplot2)
# simulate data from a linear model
a <- 3 # intercept
b <- 2 # slope
# data
x <- rnorm(28, 0, 1)
eps <- rnorm(28, 0, 2)
y <- a + b*x + eps
# put data into list
data_reg <- list(N = 28, x = x, y = y)
# create the model string
ms <- "
data {
int<lower=0> N;
vector[N] x;
vector[N] y;
}
parameters {
real alpha;
real beta;
real<lower=0> sigma;
}
model {
vector[N] mu;
sigma ~ cauchy(0, 2);
beta ~ normal(0,10);
alpha ~ normal(0,100);
for ( i in 1:N ) {
mu[i] = alpha + beta * x[i];
}
y ~ normal(mu, sigma);
}
"
# now fit the model in stan
fit1 <- stan(model_code = ms,   # model string
             data = data_reg,   # named list of data
             chains = 1,        # number of Markov chains
             warmup = 1e3,      # number of warmup iterations per chain
             iter = 2e3)        # total number of iterations per chain (including warmup)
# extract the sample estimates
post <- extract(fit1, pars = c("alpha", "beta", "sigma"))
postDF <- as.data.frame(post) # the plotting function below reads from this data frame
# now for the density plots. Write a plotting function
densFunct <- function(parName) {
  g <- ggplot(postDF, aes_string(x = parName)) +
    geom_histogram(aes(y = ..density..), fill = "white", colour = "black", bins = 50) +
    geom_density(fill = "skyblue", alpha = 0.3)
  return(g)
}
# plot
gridExtra::grid.arrange(grobs = lapply(names(postDF), densFunct), ncol = 1)
Now I understand that I can sample from the prior by simply omitting the likelihood from the model string, like so
ms <- "
data {
int<lower=0> N;
vector[N] x;
vector[N] y;
}
parameters {
real alpha;
real beta;
real<lower=0> sigma;
}
model {
sigma ~ cauchy(0, 2);
beta ~ normal(0,10);
alpha ~ normal(0,100);
}
"
But is there any way to get the samples from the prior within the first model? Maybe via the generated quantities block?
There are two ways you can do this.
First, if the program is general enough, just pass in zero-size data so that the posterior is the prior. For instance, N = 0 in the regression example you gave will work (along with correspondingly zero-sized x and y).
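For concreteness, here is a minimal sketch of that first approach in R, reusing the model string ms from the question unchanged; only the data list changes (data_prior and fit_prior are just illustrative names):
# with N = 0 the likelihood contributes nothing, so the "posterior" is the prior
data_prior <- list(N = 0, x = numeric(0), y = numeric(0))
fit_prior <- stan(model_code = ms, data = data_prior,
                  chains = 1, warmup = 1e3, iter = 2e3)
prior_draws <- extract(fit_prior, pars = c("alpha", "beta", "sigma"))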
Second, you can write a pure Monte Carlo generator (doesn't use MCMC) in the generated quantities block. Something like:
generated quantities {
  // fabs() folds the Cauchy onto the positive half-line, matching sigma's
  // <lower=0> constraint (a raw cauchy_rng draw can be negative); wide tail warning!
  real<lower=0> sigma_sim = fabs(cauchy_rng(0, 2));
  real beta_sim = normal_rng(0, 10);
  real alpha_sim = normal_rng(0, 100); // matches the normal(0, 100) prior on alpha
}
The second approach is much more efficient, as it draws independent samples and doesn't have to do any MCMC.
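A sketch of how that block slots into the existing program: append it to the full regression model string (the first ms above, the one with the likelihood), and the prior draws come back alongside every posterior draw. The ms_gq and fit_gq names are just illustrative.
# assumes ms holds the first model string above (the one with the likelihood)
ms_gq <- paste0(ms, "
generated quantities {
  real<lower=0> sigma_sim = fabs(cauchy_rng(0, 2));
  real beta_sim = normal_rng(0, 10);
  real alpha_sim = normal_rng(0, 100);
}
")
fit_gq <- stan(model_code = ms_gq, data = data_reg,
               chains = 1, warmup = 1e3, iter = 2e3)
sims <- as.data.frame(extract(fit_gq, pars = c("alpha_sim", "beta_sim", "sigma_sim",
                                               "alpha", "beta", "sigma")))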
The answer to how to do this occurred to me on the bus this morning. Of course, by the time I finished writing it out, @Bob Carpenter had posted the solution I was looking for. By comparison my way is quite cumbersome and hacky, but it does work.
All we need to do is specify priors that mirror the actual priors but which are never passed downstream into a likelihood function.
So in the example above, all we need to do is create these mirror variables within the model string. We'll call them p_alpha, p_beta, and p_sigma. These will be analogs of alpha, beta, and sigma, but will not appear in any likelihood function. Note that the mirror variables must be declared in the parameters{} block and given sampling statements in the model{} block.
ms <- "
data {
int<lower=0> N;
vector[N] x;
vector[N] y;
}
parameters {
// priors to sample from
real p_alpha;
real p_beta;
real p_sigma;
// real priors
real alpha;
real beta;
real<lower=0> sigma;
}
model {
vector[N] mu;
// priors to sample from
p_sigma ~ cauchy(0, 2);
p_beta ~ normal(3,1); // for didactic purposes
p_alpha ~ normal(0,100);
// actual priors
sigma ~ cauchy(0, 2);
beta ~ normal(0,10);
alpha ~ normal(0,100);
// likelihood
for ( i in 1:N ) {
mu[i] = alpha + beta * x[i];
}
y ~ normal(mu, sigma);
}
"
Note that the distributions specified for the mirror parameters should match those of the actual priors, which I have done for p_alpha/alpha and p_sigma/sigma (though, as flagged in the comment above, p_sigma lacks sigma's <lower=0> constraint, which is why negative p_sigma draws appear in the output below). For didactic purposes I have deliberately made the centre and spread of p_beta different from those of beta, since I will graph the two on the same axes below.
Now run the model again
fit1 <- stan(model_code = ms,
             data = data_reg,
             chains = 1,
             warmup = 1e3,
             iter = 2e3)
And extract the samples
post <- as.data.frame(extract(fit1, pars = c("p_alpha", "p_beta", "p_sigma", "alpha", "beta", "sigma")))
head(post)
# output
    p_alpha   p_beta     p_sigma    alpha     beta    sigma
1 -81.44259 3.275672  -1.1416369 3.121382 2.499459 2.354001
2 161.03740 3.694711   0.2989131 3.648288 2.335520 2.140973
3 126.58106 3.495947  -2.0027929 3.846835 2.266247 3.037055
4  18.55785 3.283425  -0.4045153 2.903958 1.854639 1.807591
5 103.02826 5.213568 -18.3721863 3.980290 1.725396 2.178264
6  49.50477 1.737679   6.5971377 4.209471 2.535044 2.941958
Here are the priors and posteriors as separate plots
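One way to reproduce those panels is to reuse the densFunct helper from the question; it reads from a data frame named postDF, so point that name at post first. The ncol = 3 layout, which puts the three priors in the top row and the posteriors in the bottom row, is just one choice.
postDF <- post # densFunct reads from a data frame named postDF
# priors (p_*) fill the top row, posteriors the bottom row
gridExtra::grid.arrange(grobs = lapply(names(postDF), densFunct), ncol = 3)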
So now we have raw priors and posteriors for the same parameters in the same dataframe.
Now what if we want to put prior and posterior on the same graph?
First, put the two parameters p_beta and beta into a dataframe in long form, so that the estimates are in one column and the distribution they come from (prior vs posterior) is in the other.
library(dplyr)
library(tidyr) # for gather()

betaDF <- post %>%
  dplyr::select(grep("^.*beta$", names(.))) %>%
  gather(key = source, value = estimate) %>%
  transform(source = factor(ifelse(source == "p_beta", "prior", "posterior"),
                            levels = c("prior", "posterior")))
Now graph it
ggplot(betaDF, aes(x = estimate, fill = source)) +
  geom_density(alpha = 0.3) +
  coord_cartesian(xlim = c(-5, 10)) +
  labs(x = "beta")