Bayesian Statistics for the Social Sciences

David Kaplan

Hardcover
July 23, 2014
ISBN 9781462516513
Price: $61.00 (sale price $51.85)
318 Pages
Size: 6⅛" x 9¼"

Hardcover + e-Book (PDF)
Price: $122.00 (sale price $67.10)

Bridging the gap between traditional classical statistics and a Bayesian approach, David Kaplan provides readers with the concepts and practical skills they need to apply Bayesian methodologies to their data analysis problems. Part I addresses the elements of Bayesian inference, including exchangeability, likelihood, prior/posterior distributions, and the Bayesian central limit theorem. Part II covers Bayesian hypothesis testing, model building, and linear regression analysis, carefully explaining the differences between the Bayesian and frequentist approaches. Part III extends Bayesian statistics to multilevel modeling and modeling for continuous and categorical latent variables. Kaplan closes with a discussion of philosophical issues and argues for an "evidence-based" framework for the practice of Bayesian statistics.

This title is part of the Methodology in the Social Sciences Series, edited by Todd D. Little, PhD.

“As the name suggests, Bayesian Statistics for the Social Sciences is a valuable read for researchers, practitioners, teachers, and graduate students in the field of social sciences….Extremely accessible and incredibly delightful….The wide breadth of topics covered, along with the author’s clear and engaging style of writing and inclusion of numerous examples, should provide an adequate foundation for any psychologist wishing to take a leap into Bayesian thinking. Furthermore, the technical details and analytic aspects provided in all chapters should equip readers with enough knowledge to embark on Bayesian analysis with their own research data.”


“Bayesian analysis has arrived—and Kaplan has written exactly the book that social science faculty members and graduate students need in order to learn Bayesian statistics. It is sophisticated yet accessible, complete yet an easy read. This book will ride the crest of the Bayesian wave for years to come.”

—William R. Shadish, PhD, Department of Psychological Sciences, University of California, Merced

“I like that this book is concise but very comprehensive, with topics ranging from the basic regression model to the advanced mixture model. Well-organized sections move from foundations; to model building, basic regression, and generalized linear models; to advanced topics. The author's explanations of concepts and examples are clear and straightforward. He has chosen his examples well; they address very commonly studied research questions in the educational sciences. The ability to access the code and data online will benefit researchers and students tremendously.”

—Feifei Ye, PhD, Department of Psychology in Education, University of Pittsburgh

“We are all Bayesians at heart—in that we all have prior knowledge—so why use a frequentist approach to statistics? This book can help you understand and implement a Bayesian approach.”

—John J. McArdle, PhD, Department of Psychology, University of Southern California

“This much-needed book bridges the gap between Bayesian statistics and social sciences. It provides the reader with basic knowledge and practical skills for applying Bayesian methodologies to data-analysis problems. The focus on Bayesian psychometric modeling is noteworthy and unique.”

—Jay Myung, PhD, Department of Psychology, Ohio State University

Table of Contents

I. Foundations of Bayesian Statistics

1. Probability Concepts and Bayes' Theorem

1.1. Relevant Probability Axioms

1.1.1. Probability as Long-Run Frequency

1.1.2. The Kolmogorov Axioms of Probability

1.1.3. The Rényi Axioms of Probability

1.1.4. Bayes' Theorem

1.1.5. Epistemic Probability

1.1.6. Coherence

1.2. Summary

1.3. Suggested Readings

2. Statistical Elements of Bayes' Theorem

2.1. The Assumption of Exchangeability

2.2. The Prior Distribution

2.2.1. Noninformative Priors

2.2.2. Informative Priors

2.3. Likelihood

2.3.1. The Law of Likelihood

2.4. The Posterior Distribution

2.5. The Bayesian Central Limit Theorem and Bayesian Shrinkage

2.6. Summary

2.7. Suggested Readings

2.8. Appendix 2.1. Derivation of Jeffreys' Prior

3. Common Probability Distributions

3.1. The Normal Distribution

3.1.1. The Conjugate Prior for the Normal Distribution

3.2. The Uniform Distribution

3.2.1. The Uniform Distribution as a Noninformative Prior

3.3. The Poisson Distribution

3.3.1. The Gamma Density: Conjugate Prior for the Poisson Distribution

3.4. The Binomial Distribution

3.4.1. The Beta Distribution: Conjugate Prior for the Binomial Distribution

3.5. The Multinomial Distribution

3.5.1. The Dirichlet Distribution: Conjugate Prior for the Multinomial Distribution

3.6. The Wishart Distribution

3.6.1. The Inverse-Wishart Distribution: Conjugate Prior for the Wishart Distribution

3.7. Summary

3.8. Suggested Readings

3.9. Appendix 3.1. R Code for Chapter 3

4. Markov Chain Monte Carlo Sampling

4.1. Basic Ideas of MCMC Sampling

4.2. The Metropolis–Hastings Algorithm

4.3. The Gibbs Sampler

4.4. Convergence Diagnostics

4.5. Summary

4.6. Suggested Readings

4.7. Appendix 4.1. R Code for Chapter 4

II. Topics in Bayesian Modeling

5. Bayesian Hypothesis Testing

5.1. Setting the Stage: The Classical Approach to Hypothesis Testing and Its Limitations

5.2. Point Estimates of the Posterior Distribution

5.2.1. Interval Summaries of the Posterior Distribution

5.3. Bayesian Model Evaluation and Comparison

5.3.1. Posterior Predictive Checks

5.3.2. Bayes Factors

5.3.3. The Bayesian Information Criterion

5.3.4. The Deviance Information Criterion

5.4. Bayesian Model Averaging

5.4.1. Occam's Window

5.4.2. Markov Chain Monte Carlo Model Composition

5.5. Summary

5.6. Suggested Readings

6. Bayesian Linear and Generalized Linear Models

6.1. A Motivating Example

6.2. The Normal Linear Regression Model

6.3. The Bayesian Linear Regression Model

6.3.1. Noninformative Priors in the Linear Regression Model

6.3.2. Informative Conjugate Priors

6.4. Bayesian Generalized Linear Models

6.4.1. The Link Function

6.4.2. The Logit-Link Function for Logistic and Multinomial Models

6.5. Summary

6.6. Suggested Readings

6.7. Appendix 6.1. R Code for Chapter 6

7. Missing Data from a Bayesian Perspective

7.1. A Nomenclature for Missing Data

7.2. Ad Hoc Deletion Methods for Handling Missing Data

7.2.1. Listwise Deletion

7.2.2. Pairwise Deletion

7.3. Single Imputation Methods

7.3.1. Mean Imputation

7.3.2. Regression Imputation

7.3.3. Stochastic Regression Imputation

7.3.4. Hot-Deck Imputation

7.3.5. Predictive Mean Matching

7.4. Bayesian Methods of Multiple Imputation

7.4.1. Data Augmentation

7.4.2. Chained Equations

7.4.3. EM Bootstrap: A Hybrid Bayesian/Frequentist Method

7.4.4. Bayesian Bootstrap Predictive Mean Matching

7.5. Summary

7.6. Suggested Readings

7.7. Appendix 7.1. R Code for Chapter 7

III. Advanced Bayesian Modeling Methods

8. Bayesian Multilevel Modeling

8.1. Bayesian Random Effects Analysis of Variance

8.2. Revisiting Exchangeability

8.3. Bayesian Multilevel Regression

8.4. Summary

8.5. Suggested Readings

8.6. Appendix 8.1. R Code for Chapter 8

9. Bayesian Modeling for Continuous and Categorical Latent Variables

9.1. Bayesian Estimation of the CFA Model

9.1.1. Conjugate Priors for CFA Model Parameters

9.2. Bayesian SEM

9.2.1. Conjugate Priors for SEM Parameters

9.2.2. MCMC Sampling for Bayesian SEM

9.3. Bayesian Multilevel SEM

9.4. Bayesian Growth Curve Modeling

9.5. Bayesian Models for Categorical Latent Variables

9.5.1. Mixture Model Specification

9.5.2. Bayesian Mixture Models

9.6. Summary

9.7. Suggested Readings

9.8. Appendix 9.1. “RJAGS” Code for Chapter 9

10. Philosophical Debates in Bayesian Statistical Inference

10.1. A Summary of the Bayesian Versus Frequentist Schools of Statistics

10.1.1. Conditioning on Data

10.1.2. Inferences Based on Data Actually Observed

10.1.3. Quantifying Evidence

10.1.4. Summarizing the Bayesian Advantage

10.2. Subjective Bayes

10.3. Objective Bayes

10.4. Final Thoughts: A Call for Evidence-Based Subjective Bayes



About the Author

David Kaplan, PhD, is Professor of Quantitative Methods in the Department of Educational Psychology at the University of Wisconsin–Madison and holds affiliate appointments in the Department of Population Health Sciences and the Center for Demography and Ecology. Dr. Kaplan’s program of research focuses on the development of Bayesian statistical methods for education research. His work on these topics is directed toward application to quasi-experimental and large-scale cross-sectional and longitudinal survey designs. He is most actively involved in the Program for International Student Assessment, sponsored by the Organisation for Economic Co-operation and Development—he served on its Technical Advisory Group from 2005 to 2009 and currently serves as Chair of its Questionnaire Expert Group. Dr. Kaplan also is a member of the Questionnaire Standing Committee of the U.S. National Assessment of Educational Progress, is a Fellow of the American Psychological Association (Division 5), and was a Jeanne Griffith Fellow at the National Center for Education Statistics.


Audience

Behavioral and social science researchers; instructors and graduate students in psychology, education, sociology, management, and public health.

Course Use

Will serve as a core text for graduate-level courses on Bayesian statistics or advanced quantitative methods.