Bayesian Models of Human Learning and Inference

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, the existence of hidden properties of objects, or the causal relations among events from just one or a few relevant observations. How do they do it? We propose that people's intuitive inductive leaps can be explained in terms of Bayesian computations operating over constrained representations of world structure -- what cognitive scientists have called "intuitive theories" or "schemas". For each of several human learning tasks, we will consider how the appropriate knowledge representations are structured, how these representations guide Bayesian learning and reasoning, and how these representations could themselves be learned via Bayesian methods. Models will be evaluated both on how well they capture quantitative and qualitative patterns of human behavior and on their ability to solve analogous real-world problems of learning and inference. The models we discuss will draw on -- and, we hope, offer new insights for -- several directions in contemporary Bayesian statistics and machine learning, such as semi-supervised learning, modeling relational data, structure learning in graphical models, hierarchical Bayesian modeling, and Bayesian nonparametrics.
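To make the core idea concrete, here is a minimal sketch (not from the abstract; the hypothesis space, prior, and "size principle" likelihood are illustrative assumptions) of Bayesian generalization from sparse data: given just a few example numbers, a learner infers which constrained hypothesis most likely generated them, and smaller hypotheses consistent with the data are automatically favored.

```python
# Minimal sketch of Bayesian concept learning from a few examples.
# Hypotheses are candidate concepts over the integers 1..100; the
# likelihood uses the "size principle": each example is assumed drawn
# uniformly from the concept's extension, p(x | h) = 1/|h| for x in h.

from fractions import Fraction

# Illustrative hypothesis space (an assumption for this sketch).
hypotheses = {
    "even": [n for n in range(1, 101) if n % 2 == 0],
    "odd": [n for n in range(1, 101) if n % 2 == 1],
    "powers of 2": [2 ** k for k in range(1, 7)],  # 2, 4, ..., 64
    "multiples of 10": list(range(10, 101, 10)),
}

# Uniform prior over hypotheses (also an assumption).
prior = {h: Fraction(1, len(hypotheses)) for h in hypotheses}

def posterior(data):
    """Posterior over hypotheses given observed examples `data`."""
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in data):
            # Size principle: likelihood (1/|h|)^n for n examples.
            scores[h] = prior[h] * Fraction(1, len(extension)) ** len(data)
        else:
            scores[h] = Fraction(0)  # inconsistent hypotheses are ruled out
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

post = posterior([2, 8, 64])
```

Although both "even" and "powers of 2" are consistent with the examples 2, 8, 64, the smaller hypothesis assigns each observation higher probability, so the posterior concentrates on "powers of 2" -- the kind of sharp generalization from sparse evidence the abstract describes.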