Geometric Inference Via Graph Laplacians

Networks can be encoded by their graph Laplacians, matrices that intuitively describe random walks on the network. When a network is generated from a geometric space X in a suitable sense (an intuitive assumption for social networks), its graph Laplacian directly approximates important information about the space X. For example, it has been shown that the curvature of X at different points affects the network's hierarchical structure, clusterability, and power laws. I will survey some recent work relating the structure of the graph Laplacian to features of such networks, including joint work with Kate Calder and Anna Smith on gleaning information about networks from the growth rate of the Laplacian's eigenvalues, with some applications to model fitting. I will then describe more recent work of my own on inferring the complete geometric structure of X from graph Laplacians under some mild smoothness assumptions on X: an estimator for intrinsic distances in X between the sample points. The key idea underlying this sort of geometric inference is to regard graph Laplacians not merely as linear operators, but as linear operators satisfying a certain approximate "product rule" for second derivatives. This talk assumes no prior familiarity with differential geometry or (graph) Laplacians.
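As a minimal illustration of the random-walk interpretation mentioned above (this sketch is not from the talk; the small path graph is an invented example), the combinatorial Laplacian L = D - A and the random-walk Laplacian I - D⁻¹A can be computed directly from an adjacency matrix:

```python
import numpy as np

# Adjacency matrix of a small path graph: 0 - 1 - 2 - 3
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial graph Laplacian
P = np.linalg.inv(D) @ A     # one-step random-walk transition matrix D^{-1} A
L_rw = np.eye(4) - P         # random-walk Laplacian: I - D^{-1} A

# Each row of P sums to 1, so P is a valid transition kernel for
# the random walk that moves to a uniformly chosen neighbor.
assert np.allclose(P.sum(axis=1), 1.0)

# Constant functions lie in the kernel of L, mirroring the fact that
# the Laplacian of a constant function on a manifold vanishes.
assert np.allclose(L @ np.ones(4), 0.0)
```

Applying L_rw to a function on the vertices measures how the function's value at each vertex differs from its average over the neighbors, which is the discrete analogue of a second derivative.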