The aim of this thesis is to explore and develop the application of abstract interpretation techniques to the context of machine learning. While big data and artificial intelligence have become widespread technologies, the lack of formal verification systems represents a serious limitation: machine learning models can be very accurate yet unstable, fragile or unfair, which makes them unsuitable for safety-critical scenarios. We propose a theoretical framework for measuring stability-related properties of machine learning classifiers, such as robustness and fairness, relying on abstract interpretation to obtain a formal, efficient and tunable mechanism that systematically delivers proofs or disproofs, possibly along with counterexamples. We show how this framework can be instantiated to realistic models, including Support Vector Machines, Decision Trees and Decision Tree Ensembles, conduct a rigorous experimental evaluation, and compare our results with recent, similar works from the literature. Finally, we use the verification procedure as part of the combinatorial optimization needed to train a model, thus defining training algorithms that produce classifiers inherently more stable than current state-of-the-art ones. The significance of this study is that it approaches the problems of machine learning through a formal, mathematical characterization rather than relying on statistical results, which allowed us to build novel tools for both verification and training.