
Bayesian Structural Equation Modeling

Sarah Depaoli

Hardcover
Published August 16, 2021
ISBN 9781462547746
Price: $80.00
521 Pages
Size: 7" x 10"

e-Book (PDF)
Published July 1, 2021
Price: $80.00
521 Pages

Print + e-Book (Hardcover + PDF)
Price: $96.00 (list price: $160.00)
521 Pages

This book offers researchers a systematic and accessible introduction to using a Bayesian framework in structural equation modeling (SEM). Stand-alone chapters on each SEM model clearly explain the Bayesian form of the model and walk the reader through implementation. Engaging worked-through examples from diverse social science subfields illustrate the various modeling techniques, highlighting statistical or estimation problems that are likely to arise and describing potential solutions. For each model, instructions are provided for writing up findings for publication, including annotated sample data analysis plans and results sections. Other user-friendly features in every chapter include “Major Take-Home Points,” notation glossaries, annotated suggestions for further reading, and sample code in both Mplus and R. The companion website (www.guilford.com/depaoli-materials) supplies data sets; annotated code for implementation in both Mplus and R, so that users can work within their preferred platform; and output for all of the book’s examples.

This title is part of the Methodology in the Social Sciences Series, edited by Todd D. Little, PhD.


“The structure of each chapter is extremely well thought-out and facilitates understanding. A brief introduction to each topic is followed by an in-depth discussion, an example, and hypothetical results and discussion. The section about how to write up findings for each SEM analysis will be extremely helpful to readers; this is something that instructors are typically left to try to come up with on their own. I would absolutely consider using this book for a class on Bayesian SEM—or a lecture on the topic in a broader SEM course—as well as for my own professional use as a reference guide.”

—Katerina Marcoulides, PhD, Department of Psychology, University of Minnesota Twin Cities


“Depaoli has created a book that will quickly have a positive impact on researchers and students looking to expand their analytic capabilities. The text's design and writing style will engage readers with different levels of familiarity with Bayesian analysis and SEM. Instructors can flexibly change the level and amount of technical and mathematical information for different courses. I will add this text to my course to replace the hodgepodge of documents, website links, and articles needed for comprehension and usage of Bayesian SEM.”

—James B. Schreiber, PhD, School of Nursing, Duquesne University


“Researchers interested in applying Bayesian SEM in the social sciences will benefit from reading this book or taking a course based on it. Each chapter is well organized; the introduction sections are particularly useful. All methods are illustrated by code, which is an important step toward implementing the methods and applying them to real problems.”

—Peng Ding, PhD, Department of Statistics, University of California, Berkeley


“This book is a 'must read' for anyone who wants to do or review Bayesian SEM. It is structured well for the advanced graduate student and moderately versed researcher. The chapters are highly readable, and I really appreciate the annotated bibliography of select resources, which will be a great help to students and faculty.”

—Michael D. Toland, PhD, Executive Director, The Herb Innovation Center, Judith Herb College of Education, University of Toledo

Table of Contents

Preface

I. Introduction

1. Background

1.1. Bayesian Statistical Modeling: The Frequency of Use

1.2. The Key Impediments within Bayesian Statistics

1.3. Benefits of Bayesian Statistics within SEM

1.3.1. A Recap: Why Bayesian SEM?

1.4. Mastering the SEM Basics: Precursors to Bayesian SEM

1.4.1. The Fundamentals of SEM Diagrams and Terminology

1.4.2. LISREL Notation

1.4.3. Additional Comments about Notation

1.5. Datasets Used in the Chapter Examples

1.5.1. Cynicism Data

1.5.2. Early Childhood Longitudinal Study–Kindergarten Class

1.5.3. Holzinger and Swineford (1939)

1.5.4. IPIP 50: Big Five Questionnaire

1.5.5. Lakaev Academic Stress Response Scale

1.5.6. Political Democracy

1.5.7. Program for International Student Assessment

1.5.8. Youth Risk Behavior Survey

2. Basic Elements of Bayesian Statistics

2.1. A Brief Introduction to Bayesian Statistics

2.2. Setting the Stage

2.3. Comparing Frequentist and Bayesian Inference

2.4. The Bayesian Research Circle

2.5. Bayes’ Rule

2.6. Prior Distributions

2.6.1. The Normal Prior

2.6.2. The Uniform Prior

2.6.3. The Inverse Gamma Prior

2.6.4. The Gamma Prior

2.6.5. The Inverse Wishart Prior

2.6.6. The Wishart Prior

2.6.7. The Beta Prior

2.6.8. The Dirichlet Prior

2.6.9. Different Levels of Informativeness for Prior Distributions

2.6.10. Prior Elicitation

2.6.11. Prior Predictive Checking

2.7. The Likelihood (Frequentist and Bayesian Perspectives)

2.8. The Posterior

2.8.1. An Introduction to Markov Chain Monte Carlo Methods

2.8.2. Sampling Algorithms

2.8.3. Convergence

2.8.4. MCMC Burn-in Phase

2.8.5. The Number of Markov Chains

2.8.6. A Note about Starting Values

2.8.7. Thinning a Chain

2.9. Posterior Inference

2.9.1. Posterior Summary Statistics

2.9.2. Intervals

2.9.3. Effective Sample Size

2.9.4. Trace-Plots

2.9.5. Autocorrelation Plots

2.9.6. Posterior Histogram and Density Plots

2.9.7. HDI Histogram and Density Plots

2.9.8. Model Assessment

2.9.9. Sensitivity Analysis

2.10. A Simple Example

2.11. Chapter Summary

2.11.1. Major Take-Home Points

2.11.2. Notation Referenced

2.11.3. Annotated Bibliography of Select Resources

Appendix A: Getting Started with R

II. Measurement Models and Related Issues

3. The Confirmatory Factor Analysis Model

3.1. Introduction to Bayesian CFA

3.2. The Model and Notation

3.2.1. Handling Indeterminacies in CFA

3.3. The Bayesian Form of the CFA Model

3.3.1. Additional Information about the (Inverse) Wishart Prior

3.3.2. Alternative Priors for Covariance Matrices

3.3.3. Alternative Priors for Variances

3.3.4. Alternative Priors for Factor Loadings

3.4. Example: Basic Confirmatory Factor Analysis Model

3.5. Example: Implementing Near-Zero Priors for Cross-Loadings

3.6. How to Write up Bayesian CFA Results

3.6.1. Hypothetical Data Analysis Plan

3.6.2. Hypothetical Results Section

3.6.3. Discussion Points Relevant to the Analysis

3.7. Chapter Summary

3.7.1. Major Take-Home Points

3.7.2. Notation Referenced

3.7.3. Annotated Bibliography of Select Resources

3.7.4. Example Code for Mplus

3.7.5. Example Code for R

4. Multiple Group Models

4.1. A Brief Introduction to Multi-Group Models

4.2. Introduction to the Multiple-Group CFA Model (with Mean Differences)

4.3. The Model and Notation

4.4. The Bayesian Form of the Multiple-Group CFA Model

4.5. Example: Using a Mean Differences, Multiple-Group CFA Model to Assess for School Differences

4.6. Introduction to the MIMIC Model

4.7. The Model and Notation

4.8. The Bayesian Form of the MIMIC Model

4.9. Example: Using the MIMIC Model to Assess for School Differences

4.10. How to Write up Bayesian Multiple-Group Model Results with Mean Differences

4.10.1. Hypothetical Data Analysis Plan

4.10.2. Hypothetical Results Section

4.10.3. Discussion Points Relevant to the Analysis

4.11. Chapter Summary

4.11.1. Major Take-Home Points

4.11.2. Notation Referenced

4.11.3. Annotated Bibliography of Select Resources

4.11.4. Example Code for Mplus

4.11.5. Example Code for R

5. Measurement Invariance Testing

5.1. A Brief Introduction to Measurement Invariance in SEM

5.1.1. Stages of Traditional MI Testing

5.1.2. Challenges Within Traditional MI Testing

5.2. Bayesian Approximate MI

5.3. The Model and Notation

5.4. Priors within Bayesian Approximate MI

5.5. Example: Illustrating Bayesian Approximate MI for School Differences

5.5.1. Results for the Conventional MI Tests

5.5.2. Results for the Bayesian Approximate MI Tests

5.5.3. Results Comparing Latent Means Across Approaches

5.5.4. Hypothetical Data Analysis Plan

5.6. How to Write up Bayesian Approximate Measurement Invariance Results

5.6.1. Analytic Procedure

5.6.2. Results

5.6.3. Discussion Points Relevant to the Analysis

5.7. Chapter Summary

5.7.1. Major Take-Home Points

5.7.2. Notation Referenced

5.7.3. Annotated Bibliography of Select Resources

5.7.4. Example Code for Mplus

5.7.5. Example Code for R

III. Extending the Structural Model

6. The General Structural Equation Model

6.1. Introduction to Bayesian SEM

6.2. The Model and Notation

6.3. The Bayesian Form of SEM

6.4. Example: Revisiting Bollen’s (1989) Political Democracy Example

6.4.1. Motivation for this Example

6.4.2. The Current Example

6.5. How to Write up Bayesian SEM Results

6.5.1. Hypothetical Data Analysis Plan

6.5.2. Hypothetical Results Section

6.5.3. Discussion Points Relevant to the Analysis

6.6. Chapter Summary

6.6.1. Major Take-Home Points

6.6.2. Notation Referenced

6.6.3. Annotated Bibliography of Select Resources

6.6.4. Example Code for Mplus

6.6.5. Example Code for R

Appendix A: Causal Inference and Mediation Analysis

7. Multilevel Structural Equation Modeling

7.1. Introduction to Multilevel SEM

7.1.1. MSEM Applications

7.1.2. Contextual Effects

7.2. Extending MSEM into the Bayesian Context

7.3. The Model and Notation

7.4. The Bayesian Form of MSEM

7.5. Example: A Two-Level CFA with Continuous Items

7.5.1. Implementation of Example

7.5.2. Example Results

7.6. Example: A Three-Level CFA with Categorical Items

7.6.1. Implementation of Example

7.6.2. Example Results

7.7. How to Write up Bayesian MSEM Results

7.7.1. Hypothetical Data Analysis Plan

7.7.2. Hypothetical Results Section

7.7.3. Discussion Points Relevant to the Analysis

7.8. Chapter Summary

7.8.1. Major Take-Home Points

7.8.2. Notation Referenced

7.8.3. Annotated Bibliography of Select Resources

7.8.4. Example Code for Mplus

7.8.5. Example Code for R

IV. Longitudinal and Mixture Models

8. Latent Growth Curve Modeling

8.1. Introduction to Bayesian LGCM

8.2. The Model and Notation

8.2.1. Extensions of the LGCM

8.3. The Bayesian Form of the LGCM

8.3.1. Alternative Priors for the Factor Variances and Covariances

8.4. Example: Bayesian Estimation of the LGCM Using ECLS–K Reading Data

8.5. Example: Extending the Example to Include Separation Strategy Priors

8.6. Example: Extending the Framework to Assessing Measurement Invariance Over Time

8.7. How to Write up Bayesian LGCM Results

8.7.1. Hypothetical Data Analysis Plan

8.7.2. Hypothetical Results Section

8.7.3. Discussion Points Relevant to the Analysis

8.8. Chapter Summary

8.8.1. Major Take-Home Points

8.8.2. Notation Referenced

8.8.3. Annotated Bibliography of Select Resources

8.8.4. Example Code for Mplus

8.8.5. Example Code for R

9. The Latent Class Model

9.1. A Brief Introduction to Mixture Models

9.2. Introduction to Bayesian LCA

9.3. The Model and Notation

9.3.1. Introducing the Issue of Class Separation

9.4. The Bayesian Form of the LCA Model

9.4.1. Adding Flexibility to the LCA Model

9.5. Mixture Models, Label Switching, and Possible Solutions

9.5.1. Identifiability Constraints

9.5.2. Relabeling Algorithms

9.5.3. Label Invariant Loss Functions

9.5.4. Final Thoughts on Label Switching

9.6. Example: A Demonstration of Bayesian LCA

9.6.1. Motivation for this Example

9.6.2. The Current Example

9.7. How to Write up Bayesian LCA Results

9.7.1. Hypothetical Data Analysis Plan

9.7.2. Hypothetical Results Section

9.7.3. Discussion Points Relevant to the Analysis

9.8. Chapter Summary

9.8.1. Major Take-Home Points

9.8.2. Notation Referenced

9.8.3. Annotated Bibliography of Select Resources

9.8.4. Example Code for Mplus

9.8.5. Example Code for R

10. The Latent Growth Mixture Model

10.1. Introduction to Bayesian LGMM

10.2. The Model and Notation

10.2.1. Concerns with Class Separation

10.3. The Bayesian Form of the LGMM

10.3.1. Alternative Priors for Factor Means

10.3.2. Alternative Priors for the Measurement Error Covariance Matrix

10.3.3. Alternative Priors for the Factor Covariance Matrix

10.3.4. Handling Label Switching in LGMMs

10.4. Example: Comparing Different Prior Conditions in an LGMM

10.4.1. Hypothetical Data Analysis Plan

10.5. How to Write up Bayesian LGMM Results

10.5.1. Hypothetical Results Section

10.5.2. Discussion Points Relevant to the Analysis

10.6. Chapter Summary

10.6.1. Major Take-Home Points

10.6.2. Notation Referenced

10.6.3. Annotated Bibliography of Select Resources

10.6.4. Example Code for Mplus

10.6.5. Example Code for R

V. Special Topics

11. Model Assessment

11.1. Model Comparison and Cross-Validation

11.1.1. Bayes Factors

11.1.2. The Bayesian Information Criterion

11.1.3. The Deviance Information Criterion

11.1.4. The Widely Applicable Information Criterion

11.1.5. Leave-One-Out Cross-Validation

11.2. Model Fit

11.2.1. Posterior Predictive Model Checking

11.2.2. Missing Data and the PPC Procedure

11.2.3. Testing Near-Zero Parameters Through the PPPP

11.3. Bayesian Approximate Fit

11.3.1. Bayesian Root Mean Square Error of Approximation

11.3.2. Bayesian Tucker-Lewis Index

11.3.3. Bayesian Normed Fit Index

11.3.4. Bayesian Comparative Fit Index

11.3.5. Implementation of These Indices

11.4. Example: Illustrating the PPC and the PPPP for CFA

11.5. Example: Illustrating Bayesian Approximate Fit for CFA

11.6. How to Write up Bayesian Approximate Fit Results

11.6.1. Hypothetical Data Analysis Plan

11.6.2. Hypothetical Results Section

11.6.3. Discussion Points Relevant to the Analysis

11.7. Chapter Summary

11.7.1. Major Take-Home Points

11.7.2. Notation Referenced

11.7.3. Annotated Bibliography of Select Resources

11.7.4. Example Code for Mplus

11.7.5. Example Code for R

12. Important Points to Consider

12.1. Implementation and Reporting of Bayesian Results

12.1.1. Priors Implemented

12.1.2. Convergence

12.1.3. Sensitivity Analysis

12.1.4. How Should We Interpret These Findings?

12.2. Points to Check Prior to Data Analysis

12.2.1. Is Your Model Formulated “Correctly”?

12.2.2. Do You Understand the Priors?

12.3. Points to Check After Initial Data Analysis, but Before Interpretation of Results

12.3.1. Convergence

12.3.2. Does Convergence Remain After Doubling the Number of Iterations?

12.3.3. Is There Ample Information in the Posterior Histogram?

12.3.4. Is There a Strong Degree of Autocorrelation in the Posterior?

12.3.5. Does the Posterior Make Substantive Sense?

12.4. Understanding the Influence of Priors

12.4.1. Examining the Influence of Priors on Multivariate Parameters (e.g., Covariance Matrices)

12.4.2. Comparing the Original Prior to Other Diffuse or Subjective Priors

12.5. Incorporating Model Fit or Model Comparison

12.6. Interpreting Model Results the “Bayesian Way”

12.7. How to Write up Bayesian Results

12.7.1. (Hypothetical) Results for Bayesian Two-Factor CFA

12.8. How to Review Bayesian Work

12.9. Chapter Summary and Looking Forward

References

Glossary

About the Author


About the Author

Sarah Depaoli, PhD, is Associate Professor of Quantitative Methods, Measurement, and Statistics in the Department of Psychological Sciences at the University of California, Merced, where she teaches undergraduate statistics and a variety of graduate courses in quantitative methods. Her research interests include examining different facets of Bayesian estimation for latent variable, growth, and finite mixture models. She has a continued interest in the influence of prior distributions and robustness of results under different prior specifications, as well as issues tied to latent class separation. Her recent research has focused on using Bayesian semi- and non-parametric methods for obtaining proper class enumeration and assignment, examining parameterization issues within Bayesian SEM, and studying the impact of priors on longitudinal models.

Audience

Behavioral and social science researchers; instructors and graduate students in fields including psychology, education, sociology, management, and public health.

Course Use

Will serve as a required or recommended book for graduate-level courses in Bayesian latent variable modeling, advanced SEM, advanced quantitative methods, Bayesian analysis, or advanced research methodology.