In modern "Big Data" settings, off-the-shelf statistical machine learning methods frequently prove insufficient. A key challenge posed by these settings is that the data might have a large number of features, in what we will call "Big-p" data, denoting the fact that the dimension "p" of the data is large, potentially even larger than the number of samples. It is now well understood that estimation with strong statistical guarantees is still possible in such high-dimensional settings, provided we impose suitable structure on the model space. Accordingly, a large body of work on high-dimensional statistical inference over the last decade has studied models with various types of structure (e.g., sparse vectors, block-structured matrices, low-rank matrices, and Markov assumptions). A general approach to estimation in such settings is the regularized M-estimator, which combines a loss function, measuring the goodness of fit of the model to the data, with a regularization function that encourages the assumed structure.

In this talk, we discuss a unified framework for the estimation of general structurally constrained high-dimensional models, in which we establish consistency and convergence rates for any regularized M-estimation procedure under high-dimensional scaling, for any underlying model structure. We state one main theorem and show how it can be used both to re-derive several existing results and to obtain new results on consistency and convergence rates. Finally, we discuss a further generalization of our analysis to the estimation of statistical models with general "heterogeneous" structures that are superpositions of homogeneous structures such as sparse, block-sparse, and so on, thus providing a broadly applicable toolkit for structured high-dimensional statistical estimation.
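To make the regularized M-estimation template concrete, the following is a minimal illustrative sketch (our addition, not material from the talk) of its simplest instance: the Lasso, which pairs a least-squares loss with an l1 regularizer to encourage sparse structure, i.e., minimizing L(theta; data) + lambda * R(theta) with R the l1 norm. It is solved here by proximal gradient descent; all function names and parameter choices are hypothetical.

```python
# A minimal sketch of a regularized M-estimator (illustrative, not the
# talk's framework): squared loss + l1 penalty (the Lasso), solved by
# proximal gradient descent (ISTA).
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1; zeroes out small coordinates."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def regularized_m_estimator(X, y, lam, n_iters=500):
    """Minimize (1/2n)||y - X theta||^2 + lam * ||theta||_1."""
    n, p = X.shape
    theta = np.zeros(p)
    # Step size = 1 / Lipschitz constant of the loss gradient.
    step = n / (np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iters):
        grad = X.T @ (X @ theta - y) / n      # gradient of the loss term
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

# High-dimensional regime: p > n, true parameter is sparse.
rng = np.random.default_rng(0)
n, p, k = 100, 400, 5
X = rng.standard_normal((n, p))
theta_star = np.zeros(p)
theta_star[:k] = 1.0
y = X @ theta_star + 0.1 * rng.standard_normal(n)
theta_hat = regularized_m_estimator(X, y, lam=0.1)
print("nonzeros recovered:", np.flatnonzero(np.abs(theta_hat) > 1e-3))
```

Swapping in a different loss (e.g., logistic) or a different regularizer with a computable proximal operator (e.g., group-lasso or nuclear norm) yields other instances of the same template, which is the breadth the unified analysis in the talk is designed to cover.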