Observational cosmology is going through a golden age: we are in the midst of an influx of data from ongoing experiments such as the Dark Energy Survey (DES). Over the coming five years, the volume and quality of data will increase rapidly as the Stage IV surveys (Euclid, LSST and WFIRST) come online. Processing these data will require new algorithms and methods, both to maximise our science reach and to control systematic errors. In this talk, I will present a method we have developed, called Monte Carlo Control Loops, that relies heavily on forward modelling the observed data by simulating all of the processes from cosmological theory to images. Given the complexities of the late-time Universe, these forward models need to capture the important properties of galaxy populations as well as the key features imprinted on the data by the experiments themselves. By bringing these elements together with advanced statistical methods and new machine learning algorithms, we can build a process for extracting maximal information from the new data, allowing us to extensively test the physics of the dark sector.
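
The forward-modelling-with-control-loops idea can be sketched in miniature. The following toy example is an illustration only, not the actual MCCL pipeline: the exponential "galaxy flux" population, the mean-flux summary statistic, and the damped update rule are all invented for this sketch. The essential pattern is the same, though: simulate mock observations from a parametrised forward model, compare a summary statistic against the real data, and loop until simulation and observation agree.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(mean_flux, n_gal=5000, noise=0.1):
    """Toy forward model: draw galaxy fluxes from an exponential
    population (stand-in for the cosmology-to-galaxies step) and
    add Gaussian measurement noise (stand-in for instrument effects)."""
    fluxes = rng.exponential(mean_flux, size=n_gal)
    return fluxes + rng.normal(0.0, noise, size=n_gal)

def summary(data):
    """Summary statistic compared between data and simulations."""
    return float(np.mean(data))

# "Observed" data, generated here at a fiducial parameter value of 2.0.
observed = forward_model(mean_flux=2.0)
target = summary(observed)

# Control loop: iteratively adjust the model parameter until the
# simulated summary statistic matches the observed one.
theta = 1.0
for _ in range(50):
    sim = forward_model(mean_flux=theta)
    diff = target - summary(sim)
    if abs(diff) < 0.05:
        break  # simulation and observation agree within tolerance
    theta += 0.5 * diff  # damped update toward agreement
```

After the loop, `theta` sits close to the fiducial value used to generate the mock "observations"; in the real method the forward model is a full image simulation and the comparison uses far richer statistics.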