Most Bayesian analyses conclude with a summary of the posterior distribution, thus summarizing uncertainty about parameters of interest. For purposes of inference, this is not enough, as it avoids stating what it is about the parameters that we actually want to know. Formally, specifying our criteria for a 'good' answer defines a loss function, or utility, a step usually considered only for point estimation problems. We present several new results for interval estimation, where simple and interpretable loss functions provide formal justification for two-sided p-values, Bonferroni correction, the Benjamini-Hochberg algorithm, and standard sample-size calculations. Other consequences of this work will be discussed, including a resolution of Lindley's Paradox that is neither Bayesian nor frequentist.
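To fix ideas for readers less familiar with the multiple-testing procedures named above, the following sketch implements the standard Benjamini-Hochberg step-up rule (the function name and interface are illustrative, not taken from this work): sort the m p-values, find the largest rank k with p_(k) <= (k/m) * alpha, and reject the corresponding k hypotheses.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Standard BH step-up procedure controlling the false discovery
    rate at level alpha. Returns a list of booleans, True where the
    corresponding hypothesis is rejected."""
    m = len(pvals)
    # Indices of p-values in ascending order
    order = sorted(range(m), key=lambda i: pvals[i])
    # Largest rank k whose sorted p-value sits under the BH line k/m * alpha
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    # Reject the hypotheses with the k smallest p-values
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected


# Example: only the smallest p-value survives at alpha = 0.05,
# since 0.03 > (2/4) * 0.05 = 0.025 blocks the rest of the step-up.
print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20], alpha=0.05))
```

Bonferroni correction would instead compare every p-value to alpha/m, a strictly more conservative rule; part of the appeal of the loss-function framing is that both procedures emerge from explicit, interpretable criteria rather than convention.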