Regression and structural equation modeling are two techniques commonly employed in social scientific analyses. Both methods produce equations (often linear) relating a 'response' to a set of 'explanatory' variables. However, it is well known that these methods often produce quite different equations. This immediately raises a number of questions, including:

1. How does the interpretation of a coefficient in a regression equation differ from the interpretation of a structural coefficient? Or equivalently, which substantive questions do these methods attempt to answer?

2. When will a given coefficient in a regression equation consistently estimate the coefficient in a postulated structural model?

3. When covariates are said to be "controlled for" by adding them to a regression equation, what is being asserted?

4. What assumptions or background knowledge are required for a coefficient in a regression equation to be interpretable as a 'structural' coefficient in a possibly unknown structural equation model?
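Questions 2 and 3 can be illustrated with a small simulation (not part of the talk; the structural model and coefficient values below are hypothetical, chosen only for illustration). In a linear structural model where Z is a common cause of X and Y, regressing Y on X alone does not consistently estimate the structural coefficient of X, while adding Z to the regression ("controlling for" Z) does:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical linear structural model:
#   Z -> X  (a = 1.0)   Z is a common cause ("confounder") of X and Y
#   X -> Y  (b = 2.0)   b is the structural coefficient of interest
#   Z -> Y  (c = 3.0)
a, b, c = 1.0, 2.0, 3.0
z = rng.normal(size=n)
x = a * z + rng.normal(size=n)
y = b * x + c * z + rng.normal(size=n)

def ols(design, response):
    """Least-squares coefficients of `response` on the columns of `design`."""
    coef, *_ = np.linalg.lstsq(design, response, rcond=None)
    return coef

# Regressing Y on X alone: the slope converges to
# b + c * Cov(X, Z) / Var(X) = 2 + 3 * 1 / 2 = 3.5, not b.
naive = ols(np.column_stack([x]), y)[0]

# Adding Z to the regression equation recovers the structural coefficient b.
adjusted = ols(np.column_stack([x, z]), y)[0]

print(f"unadjusted slope: {naive:.2f}")    # close to 3.5
print(f"adjusted slope:   {adjusted:.2f}")  # close to 2.0
```

The discrepancy between the two slopes is exactly the sense in which a regression coefficient can fail to be a consistent estimate of a structural coefficient, and the adjusted regression shows what is being asserted when Z is said to be "controlled for".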

I will show that many of these questions may be answered by examining the path diagrams associated with structural equation models.

The talk will describe some of my recent work on this topic, but will also draw heavily on the existing literature on causation that has developed in diverse fields over the last 50 years, including econometrics, statistics, computer science, epidemiology, philosophy, and sociology.