Built using Zelig version 5.1.4.90000
Bayesian Normal Linear Regression with normal.bayes.
Use Bayesian regression to specify a continuous dependent variable as a linear function of specified explanatory variables. The model is implemented using a Gibbs sampler. See Normal Regression for the maximum-likelihood implementation or Least Squares for the ordinary least squares variation.
Use the following arguments to monitor the convergence of the Markov chain (a short example follows the list):
burnin: number of the initial MCMC iterations to be discarded (defaults to 1,000).
mcmc: number of the MCMC iterations after burnin (defaults to 10,000).
thin: thinning interval for the Markov chain. Only every thin-th draw from the Markov chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
verbose: defaults to FALSE. If TRUE, the progress of the sampler (every \(10\%\)) is printed to the screen.
seed: seed for the random number generator. The default is NA, which corresponds to a random seed of 12345.
beta.start: starting values for the Markov chain, either a scalar or a vector with length equal to the number of estimated coefficients. The default is NA, which uses the least squares estimates as the starting values.
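As a minimal, hedged sketch of how these control arguments might be passed (the data frame mydata and variables y, x1, and x2 are hypothetical placeholders, and the argument values are illustrative only):
z.out <- zelig(y ~ x1 + x2, model = "normal.bayes", data = mydata,
               burnin = 5000, mcmc = 50000, thin = 5,
               verbose = TRUE, seed = 12345)
Here mcmc = 50000 is divisible by thin = 5, as required above.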
Use the following arguments to specify the model’s priors (a short example follows the list):
b0: prior mean for the coefficients, either a numeric vector or a scalar. If a scalar, that value will be the prior mean for all the coefficients. The default is 0.
B0: prior precision parameter for the coefficients, either a square matrix (with dimensions equal to the number of coefficients) or a scalar. If a scalar, that value times an identity matrix will be the prior precision parameter. The default is 0, which leads to an improper prior.
c0: c0/2 is the shape parameter for the Inverse Gamma prior on the variance of the disturbance terms.
d0: d0/2 is the scale parameter for the Inverse Gamma prior on the variance of the disturbance terms.
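A comparable hedged sketch for the prior arguments, again with the hypothetical mydata; per the description above, the scalar B0 = 0.1 is expanded to 0.1 times an identity matrix:
z.out <- zelig(y ~ x1 + x2, model = "normal.bayes", data = mydata,
               b0 = 0, B0 = 0.1, c0 = 0.001, d0 = 0.001)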
Zelig users may wish to refer to help(MCMCregress) for more information.
Attaching the sample dataset:
data(macro)
Estimating linear regression using normal.bayes:
z.out <- zelig(unem ~ gdp + capmob + trade, model = "normal.bayes",
data = macro, verbose = FALSE)
## How to cite this model in Zelig:
## Ben Goodrich, and Ying Lu. 2013.
## normal-bayes: Bayesian Normal Linear Regression
## in Christine Choirat, Christopher Gandrud, James Honaker, Kosuke Imai, Gary King, and Olivia Lau,
## "Zelig: Everyone's Statistical Software," http://zeligproject.org/
You can check for convergence before summarizing the estimates with three diagnostic tests. See the section Diagnostics for Zelig Models for examples of the output with interpretation:
z.out$geweke.diag()
z.out$heidel.diag()
z.out$raftery.diag()
summary(z.out)
## Model:
##
## Iterations = 1001:11000
## Thinning interval = 1
## Number of chains = 1
## Sample size per chain = 10000
##
## 1. Empirical mean and standard deviation for each variable,
## plus standard error of the mean:
##
## Mean SD Naive SE Time-series SE
## (Intercept) 6.17726 0.451477 4.515e-03 4.515e-03
## gdp -0.32396 0.063489 6.349e-04 6.349e-04
## capmob 1.42069 0.165622 1.656e-03 1.656e-03
## trade 0.01993 0.005613 5.613e-05 5.373e-05
## sigma2 7.58338 0.578461 5.785e-03 5.965e-03
##
## 2. Quantiles for each variable:
##
## 2.5% 25% 50% 75% 97.5%
## (Intercept) 5.309269 5.87071 6.17970 6.47793 7.08776
## gdp -0.450438 -0.36694 -0.32373 -0.28122 -0.20117
## capmob 1.092726 1.31054 1.42151 1.53355 1.74610
## trade 0.008726 0.01624 0.01998 0.02369 0.03078
## sigma2 6.534636 7.17656 7.55263 7.95517 8.79571
##
## Next step: Use 'setx' method
Setting values for the explanatory variables to their sample averages:
x.out <- setx(z.out)
Simulating quantities of interest from the posterior distribution given x.out:
s.out1 <- sim(z.out, x = x.out)
summary(s.out1)
##
## sim x :
## -----
## ev
## mean sd 50% 2.5% 97.5%
## 1 4.994252 0.1478826 4.991904 4.708317 5.283167
## pv
## mean sd 50% 2.5% 97.5%
## [1,] 5.011832 2.720194 5.005844 -0.2898692 10.34369
Setting explanatory variables to their default (mean/mode) values, with high (80th percentile) and low (20th percentile) values for trade:
x.high <- setx(z.out, trade = quantile(macro$trade, prob = 0.8))
x.low <- setx(z.out, trade = quantile(macro$trade, prob = 0.2))
Estimating the first difference for the effect of high versus low trade on the unemployment rate:
s.out2 <- sim(z.out, x = x.high, x1 = x.low)
summary(s.out2)
##
## sim x :
## -----
## ev
## mean sd 50% 2.5% 97.5%
## 1 5.433274 0.1922159 5.432843 5.061819 5.814532
## pv
## mean sd 50% 2.5% 97.5%
## [1,] 5.41142 2.776374 5.443248 0.03801122 10.91195
##
## sim x1 :
## -----
## ev
## mean sd 50% 2.5% 97.5%
## 1 4.599877 0.1854347 4.600784 4.240725 4.965821
## pv
## mean sd 50% 2.5% 97.5%
## [1,] 4.611589 2.757124 4.600312 -0.7679706 10.04142
## fd
## mean sd 50% 2.5% 97.5%
## 1 -0.8333967 0.2346753 -0.8355675 -1.287118 -0.3648508
The stochastic component is given by
\[ \begin{aligned} \epsilon_{i} & \sim & \textrm{Normal}(0, \sigma^2) \end{aligned} \]
where \(\epsilon_{i}=Y_i-\mu_i\).
The systematic component is given by
\[ \begin{aligned} \mu_{i}= x_{i} \beta, \end{aligned} \]
where \(x_{i}\) is the vector of \(k\) explanatory variables for observation \(i\) and \(\beta\) is the vector of coefficients.
The semi-conjugate priors for \(\beta\) and \(\sigma^{2}\) are given by
\[ \begin{aligned} \beta & \sim & \textrm{Normal}_k \left( b_{0},B_{0}^{-1}\right) \\ \sigma^{2} & \sim & {\rm InverseGamma}\left( \frac{c_0}{2}, \frac{d_0}{2} \right) \end{aligned} \]
where \(b_{0}\) is the vector of means for the \(k\) explanatory variables, \(B_{0}\) is the \(k\times k\) precision matrix (the inverse of a variance-covariance matrix), and \(c_0/2\) and \(d_0/2\) are the shape and scale parameters for \(\sigma^{2}\). Note that \(\beta\) and \(\sigma^2\) are assumed to be a priori independent.
The expected values (qi$ev) for the linear regression model are calculated as follows:
\[ \begin{aligned} E(Y) = x_{i} \beta, \end{aligned} \]
given posterior draws of \(\beta\) based on the MCMC iterations.
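As a hedged illustration (not the code Zelig itself runs), these expected values can be approximated by hand from the posterior draws, using the coefficients element documented later in this section; the result should be close to the ev summary for x.out above:
beta.draws <- as.matrix(z.out$coefficients)[, 1:4]    # (Intercept), gdp, capmob, trade
x.bar <- c(1, mean(macro$gdp), mean(macro$capmob), mean(macro$trade))
ev <- beta.draws %*% x.bar                            # one E(Y) per posterior draw
mean(ev)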
The first difference (qi$fd) for the linear regression model is defined as
\[ \begin{aligned} \text{FD}=E(Y\mid X_{1})-E(Y\mid X). \end{aligned} \]
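Continuing the hedged hand calculation above, the first difference for the trade example is simply the difference of two such expected-value vectors, and its mean should roughly match the fd summary reported for s.out2:
x.hi <- c(1, mean(macro$gdp), mean(macro$capmob), quantile(macro$trade, 0.8))
x.lo <- c(1, mean(macro$gdp), mean(macro$capmob), quantile(macro$trade, 0.2))
fd <- beta.draws %*% x.lo - beta.draws %*% x.hi       # E(Y | X1) - E(Y | X)
mean(fd)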
In conditional prediction models, the average expected treatment effect (qi$att.ev) for the treatment group is
\[ \begin{aligned} \frac{1}{\sum_{i=1}^n t_{i}}\sum_{i:t_{i}=1} \{ Y_{i}(t_{i}=1)-E[Y_{i}(t_{i}=0)] \}, \end{aligned} \]
where \(t_{i}\) is a binary explanatory variable defining the treatment (\(t_{i}=1\)) and control (\(t_{i}=0\)) groups.
The corresponding average predicted treatment effect for the treatment group, computed from the predicted values rather than the expected values, is
\[ \frac{1}{\sum_{i=1}^n t_i}\sum_{i:t_i=1}^n \left\{ Y_i(t_i=1) - \widehat{Y_i(t_i=0)} \right\}, \]
where \(t_{i}\) is again a binary explanatory variable defining the treatment (\(t_{i}=1\)) and control (\(t_{i}=0\)) groups.
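A hedged sketch of the average expected treatment effect for a hypothetical data set (dat, with outcome y, binary treatment treat, and covariate x; none of these are part of the macro example), again relying on the coefficients element documented below:
z.t <- zelig(y ~ treat + x, model = "normal.bayes", data = dat)
b <- as.matrix(z.t$coefficients)[, 1:3]       # (Intercept), treat, x
treated <- dat[dat$treat == 1, ]
X0 <- cbind(1, 0, treated$x)                  # design matrix with the treatment switched off
ev0 <- b %*% t(X0)                            # E[Y_i(t_i = 0)] for each draw and treated unit
att <- mean(treated$y) - rowMeans(ev0)        # posterior draws of the treatment effect on the treated
mean(att)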
The output of each Zelig command contains useful information which you may view. For example, if you run:
z.out <- zelig(y ~ x, model = "normal.bayes", data)
then you may examine the available information in z.out by using names(z.out), see the draws from the posterior distribution of the coefficients by using z.out$coefficients, and view a default summary of information through summary(z.out). Other elements available through the $ operator are listed below.
From the zelig() output object z.out, you may extract (a short example follows the list):
coefficients: draws from the posterior distributions of the estimated parameters. The first \(k\) columns contain the posterior draws of the coefficients \(\beta\), and the last column contains the posterior draws of the variance \(\sigma^2\).
zelig.data: the input data frame if save.data = TRUE.
seed: the random seed used in the model.
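For instance, a brief hedged sketch summarizing the posterior draws pulled from the coefficients element described above (z.out is the macro example fit earlier):
draws <- as.matrix(z.out$coefficients)                # k coefficient columns plus sigma2
colMeans(draws)                                       # posterior means
apply(draws, 2, quantile, probs = c(0.025, 0.975))    # 95% credible intervals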
From the sim() output object s.out, you may extract (a brief plotting note follows the list):
qi$ev: the simulated expected values for the specified values of x.
qi$fd: the simulated first difference in the expected values for the values specified in x and x1.
qi$att.ev: the simulated average expected treatment effect for the treated from conditional prediction models.
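As a usage note rather than a new quantity, Zelig's generic plot method can be applied to the sim() output to visualize these simulated distributions:
plot(s.out1)    # distributions of the simulated expected and predicted values
plot(s.out2)    # also compares the simulations for x.high and x1 = x.low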
Bayesian normal regression is part of the MCMCpack library by Andrew D. Martin and Kevin M. Quinn. The convergence diagnostics are part of the CODA library by Martyn Plummer, Nicky Best, Kate Cowles, and Karen Vines.
Martin AD, Quinn KM, and Park JH (2011). “MCMCpack: Markov Chain Monte Carlo in R.” Journal of Statistical Software, 42(9), 22 pages. URL: http://www.jstatsoft.org/v42/i09/.