Built using Zelig version 5.1.4.90000
Zelig provides three diagnostic tests for MCMC models. Here is an example of their use. First we attach the sample dataset and estimate a Bayesian linear regression using the normal.bayes model:
data(macro)
z.out <- zelig(unem ~ gdp + capmob + trade, model = "normal.bayes",
data = macro, verbose = FALSE)
## How to cite this model in Zelig:
## Ben Goodrich, and Ying Lu. 2013.
## normal-bayes: Bayesian Normal Linear Regression
## in Christine Choirat, Christopher Gandrud, James Honaker, Kosuke Imai, Gary King, and Olivia Lau,
## "Zelig: Everyone's Statistical Software," http://zeligproject.org/
You can check for convergence before summarizing the estimates with three diagnostic tests.
The Geweke diagnostic tests the null hypothesis that the Markov chain is in the stationary distribution by comparing the means of the first and last parts of the chain, and produces a z-statistic for each estimated parameter.
z.out$geweke.diag()
##
## Fraction in 1st window = 0.1
## Fraction in 2nd window = 0.5
##
## (Intercept) gdp capmob trade sigma2
## -0.1178 -0.3130 -0.5891 0.5155 -1.6965
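Because the Geweke statistics are approximately standard normal under the null, they can be converted to two-sided p-values with base R. A minimal sketch, using the z-values printed above:

```r
# Two-sided p-values for the Geweke z-statistics printed above
# (values copied from the output; base R only, no extra packages needed).
z <- c(`(Intercept)` = -0.1178, gdp = -0.3130, capmob = -0.5891,
       trade = 0.5155, sigma2 = -1.6965)
p <- 2 * pnorm(-abs(z))
round(p, 3)
# None of the p-values fall below 0.05, so the test provides no
# evidence against stationarity for any parameter.
```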
The Heidelberger and Welch diagnostic first tests the null hypothesis that the Markov chain is in the stationary distribution, producing a p-value for each estimated parameter. Calling this diagnostic also reports whether the mean of each marginal posterior distribution can be estimated with sufficient precision, assuming the Markov chain is in the stationary distribution.
z.out$heidel.diag()
##
## Stationarity start p-value
## test iteration
## (Intercept) passed 1 0.756
## gdp passed 1 0.963
## capmob passed 1 0.355
## trade passed 1 0.339
## sigma2 passed 1 0.626
##
## Halfwidth Mean Halfwidth
## test
## (Intercept) passed 6.1773 0.008849
## gdp passed -0.3240 0.001244
## capmob passed 1.4207 0.003246
## trade passed 0.0199 0.000105
## sigma2 passed 7.5834 0.011691
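The halfwidth test passes when the halfwidth of the confidence interval for the mean is small relative to the mean itself (coda's default tolerance, eps, is 0.1). A quick sanity check against the values printed above, using base R:

```r
# Halfwidth-to-mean ratios for the values printed above; heidel.diag()
# in the coda package passes a parameter when this ratio is below eps
# (0.1 by default).
means      <- c(6.1773, -0.3240, 1.4207, 0.0199, 7.5834)
halfwidths <- c(0.008849, 0.001244, 0.003246, 0.000105, 0.011691)
ratio <- halfwidths / abs(means)
round(ratio, 4)
# All ratios are well below 0.1, consistent with the "passed" column.
```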
The Raftery and Lewis diagnostic indicates how long the Markov chain should run before draws from the marginal posterior distributions can be considered representative of the stationary distribution.
z.out$raftery.diag()
##
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95
##
## Burn-in Total Lower bound Dependence
## (M) (N) (Nmin) factor (I)
## (Intercept) 2 3834 3746 1.020
## gdp 2 3650 3746 0.974
## capmob 2 3771 3746 1.010
## trade 2 3680 3746 0.982
## sigma2 2 3710 3746 0.990
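The dependence factor is (approximately) the ratio of the required run length N to the lower bound Nmin, the number of iterations that would suffice if the draws were independent. A sketch of that interpretation, using the values printed above:

```r
# Approximate dependence factors from the table above: the ratio of the
# required total iterations (N) to the lower bound for independent
# draws (Nmin).
N    <- c(3834, 3650, 3771, 3680, 3710)
Nmin <- 3746
round(N / Nmin, 3)
# Values near 1 indicate little autocorrelation; dependence factors
# above 5 are commonly taken to signal problems such as a poor starting
# value or high posterior correlations.
```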
The convergence diagnostics are part of the CODA library by Martyn Plummer, Nicky Best, Kate Cowles, and Karen Vines. Bayesian normal regression is part of the MCMCpack library by Andrew D. Martin and Kevin M. Quinn.
Plummer M, Best N, Cowles K and Vines K (2006). “CODA: Convergence Diagnosis and Output Analysis for MCMC.” R News, 6 (1), pp. 7-11. <URL: https://journal.r-project.org/archive/>.
Geweke J (1992). “Evaluating the accuracy of sampling-based approaches to calculating posterior moments.” In Bernado J, Berger J, Dawid A and Smith A (eds.), Bayesian Statistics 4. Clarendon Press, Oxford, UK.
Heidelberger P and Welch P (1983). “Simulation run length control in the presence of an initial transient.” Operations Research, 31, pp. 1109-44.
Raftery A and Lewis S (1992). “One long run with diagnostics: Implementation strategies for Markov chain Monte Carlo.” Statistical Science, 7 (4), pp. 493-497.
Raftery A and Lewis S (1995). “The number of iterations, convergence diagnostics and generic Metropolis algorithms.” In Gilks W, Spiegelhalter D and Richardson S (eds.), Practical Markov Chain Monte Carlo. Chapman and Hall, London, UK.
Martin AD, Quinn KM and Park JH (2011). “MCMCpack: Markov Chain Monte Carlo in R.” Journal of Statistical Software, 42 (9), pp. 22. <URL: http://www.jstatsoft.org/v42/i09/>.