Statistical Methods for Machine Learning
6 CFU, MSc in Computer Science (Laurea Magistrale in Informatica, F94)
Machine Learning
6 CFU, MSc in Data Science and Economics

Instructor: Nicolò Cesa-Bianchi

News

For students of the MSc in Data Science and Economics

Bibliographic references:

Lecture notes provided by the instructor

The course makes heavy use of probability and statistics. A good textbook on these topics is:

Dimitri P. Bertsekas and John N. Tsitsiklis, Introduction to Probability (2nd edition). Athena Scientific, 2008.

Some good machine learning textbooks:
Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press, 2014.

Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar, Foundations of Machine Learning, MIT Press, 2012.

Luc Devroye, László Györfi, and Gábor Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, 1996.

Goals

Machine learning is concerned with the design of algorithms that can predict the evolution of a phenomenon based on a set of observations. A standard tool in the development of intelligent systems, machine learning has been successfully applied to a wide range of domains, including vision, speech and language, human-computer interaction, personalized recommendations, health and medicine, autonomous navigation, and many more. The course will describe and analyze, in a rigorous statistical framework, some of the most important machine learning techniques. This will provide the student with a rich set of methodological tools for understanding the general phenomenon of learning in machines.

Disclaimer

This is a course about the theoretical foundations of machine learning and the analysis of machine learning algorithms. The focus is on understanding the mathematical principles underlying machine learning. If you are interested in the practical aspects, please check out the other courses in the Predictive Modeling and Data Science track of the MSc in Computer Science.

Syllabus

Topics will not necessarily be taught in this order. English versions of the lecture notes will be made available.

  1. Introduction (English version of March 5, 2019)
  2. The Nearest Neighbour algorithm (English version of March 5, 2019)
  3. Tree predictors (English version of March 17, 2019)
  4. Statistical learning (English version of March 11, 2019)
  5. Cross-validation (English version of March 23, 2019)
  6. Risk of Nearest Neighbour (Italian version of March 27, 2018)
  7. Risk analysis for tree predictors (English version of August 2, 2019)
  8. Consistency and nonparametric algorithms (English version of March 26, 2019)
  9. Linear classification (English version of April 20, 2019)
  10. Online gradient descent (English version of April 19, 2019)
  11. From sequential risk to statistical risk (English version of April 22, 2019)
  12. Kernel functions (English version of June 16, 2019)
  13. Support Vector Machines (Italian version of May 28, 2018)
  14. Stability bounds and risk control for SVM (Italian version of May 20, 2018)
  15. Boosting and ensemble methods (English version of June 16, 2019)
  16. Compression bounds (Italian version of April 30, 2018)
  17. Neural networks and deep learning (Italian version of June 6, 2018)

Exams

The exam consists of writing a paper of about 10-15 pages containing either a report describing experimental results (experimental project) or an in-depth analysis of a theoretical topic (theory project). The paper will be discussed in an oral examination, in which students will be asked detailed questions about the algorithms used in the project, as well as higher-level questions on the rest of the syllabus.

The experimental project is typically based on implementing two or more learning algorithms (or variants of the same algorithm) from scratch. The algorithms are compared on real-world datasets. The choice of programming language is immaterial; however, the implementation should be reasonable in terms of running time and memory footprint. If the experimental project is based on neural networks, then the student is allowed to use a toolbox (e.g., TensorFlow). The paper should preferably be written in LaTeX and contain images and plots to illustrate the experimental results. The description of the methodology and the algorithms must be detailed enough to allow reproducibility of the results. The paper may be structured as follows.

The list of experimental projects is available here. Students who want to suggest alternative projects can do so at any time.
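For illustration only, here is a minimal sketch (in Python, using only NumPy) of the kind of from-scratch implementation an experimental project might contain: a 1-Nearest-Neighbour classifier compared against a trivial majority-class baseline. The synthetic dataset and the single train/test split are placeholders; an actual project would use real-world data and a more careful evaluation protocol such as cross-validation.

  # Minimal sketch of a from-scratch experiment (not an official template).
  # Compares a 1-Nearest-Neighbour classifier with a majority-class baseline
  # on a placeholder synthetic dataset.
  import numpy as np

  rng = np.random.default_rng(0)

  # Synthetic two-class data: two Gaussian clouds (placeholder for a real dataset).
  n = 200
  X = np.vstack([rng.normal(0.0, 1.0, size=(n, 2)),
                 rng.normal(2.0, 1.0, size=(n, 2))])
  y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])

  # Random train/test split (a real project would use cross-validation).
  perm = rng.permutation(len(y))
  split = int(0.7 * len(y))
  train, test = perm[:split], perm[split:]

  def predict_1nn(X_train, y_train, X_test):
      """Predict each test point with the label of its nearest training point."""
      preds = np.empty(len(X_test), dtype=int)
      for i, x in enumerate(X_test):
          dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
          preds[i] = y_train[np.argmin(dists)]
      return preds

  # Compare the two predictors using the zero-one loss on the test set.
  y_pred = predict_1nn(X[train], y[train], X[test])
  baseline = np.full(len(test), np.bincount(y[train]).argmax())
  print("1-NN test error:     ", np.mean(y_pred != y[test]))
  print("Baseline test error: ", np.mean(baseline != y[test]))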

The theory project is typically (but not exclusively) focused on a topic taught in class. The report will be based on one or more scientific papers (provided by the instructor), and must contain the complete proof of at least one technical result, including all necessary definitions and auxiliary lemmas. The paper may be structured as follows.

Here is the list of scientific papers (more are being added). Students who want to suggest alternative projects can do so at any time.

Course calendar:

Browse the calendar pages and click on a day to find out what was covered on that day.