Title: "Sketching for Large-Scale Learning of Mixture Models"

Abstract: Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. It is, however, a classical assumption that a given data collection was drawn from an underlying probability distribution, which typically has low intrinsic complexity. This assumption makes it conceivable to apply Compressive Sensing paradigms, in which low-dimensional objects are recovered from fewer measurements than the ambient dimension, directly to probability distributions. A data collection can then be compressed into a few measurements of the underlying probability distribution, referred to as a "sketch", which theoretically encodes all information about this distribution. Building on this principle, is it possible to derive practical methods to perform learning tasks on large databases? Can theoretical guarantees from Compressive Sensing be adapted to this framework, and can these guarantees be linked to more classical learning theory? We tackle some aspects of these questions from both a practical and a theoretical point of view.
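To make the notion of a "sketch" of a probability distribution concrete, here is a minimal illustrative example in Python. It computes an empirical sketch as a vector of averaged complex exponentials (random Fourier moments) of the data, one common choice of generalized moments in this setting; the dimensions, the Gaussian frequency distribution, and the function name `compute_sketch` are illustrative assumptions, not a specification from the abstract.

```python
import numpy as np

def compute_sketch(X, Omega):
    """Empirical sketch of a dataset: averaged complex exponentials.

    X:     (n, d) array of n data points in dimension d.
    Omega: (d, m) array of m random frequency vectors.
    Returns an (m,) complex vector approximating E[exp(i w^T x)]
    for each frequency w, i.e. samples of the characteristic function.
    """
    return np.exp(1j * X @ Omega).mean(axis=0)

rng = np.random.default_rng(0)
n, d, m = 10000, 2, 50                # many points, small sketch (m << n*d)
X = rng.normal(size=(n, d))           # data drawn from an underlying distribution
Omega = rng.normal(size=(d, m))       # random frequencies (illustrative choice)
z = compute_sketch(X, Omega)
print(z.shape)                        # (50,): the whole dataset compressed to m numbers
```

The sketch size m is fixed regardless of the number of data points n, which is what makes the approach attractive for large collections: the dataset can be streamed through `compute_sketch` once, and learning then operates only on the m-dimensional vector z.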