Over the past decade, BIC = L² − df·log(n), where L² is the likelihood-ratio chi-squared statistic, df its degrees of freedom, and n the sample size, has become a popular tool for choosing models in social mobility tables and in other statistical models in social research. BIC is easy to calculate and, unlike standard significance testing, provides consistent model selection (i.e. it chooses the correct model with high probability when the sample is large). It also closely approximates a Bayes factor with a unit information prior on the model parameters, i.e. a prior that carries about as much information as a single typical observation. One consequence is that, if BIC is used as a hypothesis test, it minimizes the total error rate (the sum of the Type I and Type II error rates), on average over data sets drawn from the model and the prior. Recently, BIC has been criticized because the unit information prior on which it is implicitly based can be more spread out than seems reasonable.
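As an illustration of the formula above, the following Python sketch fits the independence model to a small, entirely hypothetical 3x3 mobility table and computes BIC = L² − df·log(n) from the model's deviance; the counts and the use of statsmodels are assumptions for illustration, not part of the original analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical 3x3 origin-by-destination table (counts invented for illustration).
counts = np.array([[50, 20, 10],
                   [25, 60, 30],
                   [15, 35, 70]]).ravel()

# Design matrix for the independence model: intercept plus row and column dummies.
rows = np.repeat(np.arange(3), 3)
cols = np.tile(np.arange(3), 3)
X = np.column_stack([
    np.ones(9),
    rows == 1, rows == 2,
    cols == 1, cols == 2,
]).astype(float)

fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

L2 = fit.deviance        # likelihood-ratio chi-squared vs. the saturated model
df = 9 - X.shape[1]      # residual degrees of freedom
n = counts.sum()         # total sample size

bic = L2 - df * np.log(n)
print(f"L2 = {L2:.2f}, df = {df}, BIC = {bic:.2f}")
# Negative BIC favors the fitted (simpler) model over the saturated model.
```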
I argue that BIC provides a useful reference analysis because, when BIC favors including a parameter in the model, there will tend to be broad agreement that it belongs there. I then show how more precise prior information can be used in calculating Bayes factors, via the GLIB software (www.research.att.com/~volinsky/bma.html). I compare the performance of BIC, GLIB, and standard significance testing by simulation, and I also reanalyze the Hazelrigg-Garnier set of mobility tables, which motivated the use of BIC in the first place.
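To give a feel for how such a simulation comparison of BIC against a conventional 5% test can be set up, here is a minimal sketch in a toy setting (testing whether a normal mean is zero, with known variance and a unit information prior under the alternative); the setting, sample sizes, and thresholds are assumptions for illustration and do not reproduce the paper's actual log-linear simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_h1(xbar, n, rule):
    """Decide in favor of H1 (mu != 0) from the sample mean of n N(mu, 1) observations."""
    lr = n * xbar**2                  # 2 * log likelihood ratio, known variance 1
    if rule == "bic":
        return lr > np.log(n)         # BIC penalty: log(n) per extra parameter
    return lr > 3.84                  # 5% chi-squared(1) critical value

def total_error_rate(n, n_sims=20_000, prior_sd=1.0):
    # Type I error: data generated under H0 (mu = 0).
    xbar0 = rng.normal(0.0, 1.0 / np.sqrt(n), n_sims)
    # Type II error: data generated under H1 with mu drawn from the
    # unit information prior N(0, prior_sd^2).
    mu1 = rng.normal(0.0, prior_sd, n_sims)
    xbar1 = rng.normal(mu1, 1.0 / np.sqrt(n))
    out = {}
    for rule in ("bic", "p05"):
        type1 = np.mean(select_h1(xbar0, n, rule))
        type2 = np.mean(~select_h1(xbar1, n, rule))
        out[rule] = type1 + type2
    return out

for n in (50, 500, 5000):
    print(n, total_error_rate(n))
```

In this toy setting the BIC rule tends to yield a smaller total (Type I plus Type II) error rate than the fixed 5% test as n grows, which is the property described above.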
Bayesian Model Selection and Model Averaging for Social Research: Recent Results
Room 209