In this talk, I will present a new approach to sparse-signal detection called the horseshoe estimator. The horseshoe is a close cousin of the lasso in that it arises from the same class of multivariate scale mixtures of normals, but is a more robust alternative for handling unknown sparsity patterns. A theoretical framework is proposed for understanding why the horseshoe is a better default sparsity estimator than those that arise from powered-exponential priors. Comprehensive numerical evidence is presented to show that the difference in performance can often be large. Most importantly, I will show that the horseshoe estimator corresponds quite closely to the answers one would get if one pursued a full Bayesian model-averaging approach using a point mass at zero for noise and a continuous density for signals. Surprisingly, this correspondence holds both for the estimator itself and for the classification rule induced by a simple threshold applied to the estimator. For most of this talk I will study sparsity in the simplified context of estimating a vector of normal means. It is here that the lessons drawn from a comparison of different approaches for modeling sparsity are most readily understood, but these lessons generalize straightforwardly to more difficult problems, such as regression, covariance regularization, and function estimation, where many of the challenges of modern statistics lie. This is joint work with Nicholas Polson and James Scott.
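
For concreteness, a brief sketch of the normal-means setting and the scale-mixture representations alluded to above (the notation here is mine, not the abstract's): observations and means follow

\[
y_i \mid \beta_i \sim \mathrm{N}(\beta_i, \sigma^2), \qquad
\beta_i \mid \lambda_i, \tau \sim \mathrm{N}(0, \lambda_i^2 \tau^2), \quad i = 1, \dots, n.
\]

The lasso (Laplace) prior corresponds to an exponential mixing density on the local variance \(\lambda_i^2\), whereas the horseshoe places a half-Cauchy prior on the local scale \(\lambda_i\):

\[
\lambda_i^2 \sim \mathrm{Exp}(\gamma) \;\; \text{(lasso)}, \qquad
\lambda_i \sim \mathrm{C}^{+}(0, 1) \;\; \text{(horseshoe)},
\]

where the rate parameter \(\gamma\) is illustrative. The heavy tails of the half-Cauchy mixing density are what give the horseshoe its robustness to unknown sparsity patterns.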