Artificial Intelligence and Machine Learning (AIML) for Connected Systems

Year:
1st year
Semester:
S1
Onsite in:
Remote:
UPC, CNAM
ECTS range:
3-7 ECTS

Professors

  • Pedro B. Velloso, CNAM
  • Naresh Modina, CNAM
  • Eduard Garcia-Villegas, UPC
  • Lehel Csato, UBB
  • Yannick Esteve, AU
  • Vladyslav Taran (PhD), NTUU
  • David Remondo, UPC
  • Yuriy Kochura (Assistant Professor), NTUU
  • Ivan Zhuk (PhD), NTUU

Prerequisites:

Calculus, algebra, and basic concepts of statistics and probability. Prior knowledge of Python is strongly recommended.

Pedagogical objectives:

The main goal of this course is to cover the basic concepts of machine learning projects, present the main ML models and algorithms, and show how to apply them to connected systems.

Evaluation modalities:

Exam, lab reports, and/or a project-driven competition (run across multiple sites whenever possible).

Description:

The course alternates between theoretical lectures and lab sessions. The idea is to present the theoretical background of a specific subject and then follow it with a lab session in which students explore each model and algorithm in more detail through practical examples, using the most popular tools and libraries available. The hands-on lab sessions include practical assignments, some of which are evaluated.

The course is oriented toward connected systems, which means that, in addition to the most popular datasets, such as MNIST and California Housing, students will also work with network-related datasets.
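To illustrate the kind of end-to-end lab exercise described above, here is a minimal sketch (not actual course material) that loads the California Housing dataset bundled with Scikit-learn and fits a linear regression; the split ratio and metric are arbitrary choices made for the example.

    # Minimal sketch of an end-to-end regression exercise (illustrative only).
    from sklearn.datasets import fetch_california_housing
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Load features (median income, house age, ...) and target (median house value).
    X, y = fetch_california_housing(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Fit an ordinary least-squares model and report the held-out error.
    model = LinearRegression().fit(X_train, y_train)
    print("Test MSE:", mean_squared_error(y_test, model.predict(X_test)))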

Topics:

  • Introduction to AIML.
  • Practical skills and Linear Regression.
    • Lab: end-to-end work, exploratory data analysis.
  • Supervised Learning and Classification (Decision Trees and Random Forest, Bayesian Detection, Non-Parametric Classifiers)
    • Lab: Classification, Linear and Quadratic Discriminants, K-nearest neighbors (KNN); see the classification sketch after this list.
  • Dimensionality Reduction
    • Lab: Principal Component Analysis (PCA), Multiple Discriminant Analysis (MDA).
  • Unsupervised Learning
    • Lab: Clustering
  • Artificial Neural Networks, Deep Neural Networks (DNN)
    • Lab: Neural Networks, Multi-Layer Perceptron (MLP)
  • Training enhancement techniques (e.g., ensembles in DNNs)
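As referenced in the classification lab item above, the following minimal sketch (an assumption about the flavour of the lab, not its actual content) trains a K-nearest-neighbors classifier on the Iris dataset, which also appears in the course's dataset list.

    # Minimal sketch of a supervised-classification exercise (illustrative only).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    # Load the Iris measurements and class labels, then split with class stratification.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

    # Fit a 5-nearest-neighbors classifier and report held-out accuracy.
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print("Test accuracy:", accuracy_score(y_test, knn.predict(X_test)))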

Complementary content:

  • Basics of Reinforcement Learning
    • Lab: basics of Reinforcement Learning
  • Data processing tools (e.g., TensorFlow, Scikit-learn)
    • Lab: management of time series with Recurrent Neural Networks (RNN); see the sketch below.
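The RNN lab could look roughly like the following sketch, which is only an assumed workflow (the real lab uses the course's time-series datasets rather than synthetic data): a Keras SimpleRNN trained to predict the next value of a univariate series from a sliding window of past values.

    # Minimal sketch of time-series prediction with a recurrent network (illustrative only).
    import numpy as np
    import tensorflow as tf

    # Synthetic series: sliding windows of 20 past values predict the next value.
    series = np.sin(np.arange(1000) * 0.1).astype("float32")
    window = 20
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
    y = series[window:]

    # A small recurrent model: one SimpleRNN layer followed by a linear output.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(window, 1)),
        tf.keras.layers.SimpleRNN(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print("Final training MSE:", model.evaluate(X, y, verbose=0))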

Required teaching material:

  • Bibliography
    • James, G., Witten, D., Hastie, T., Tibshirani, R., "An Introduction to Statistical Learning", 2nd edition, 2021
    • Aurélien Géron, "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow", 2nd edition, O'Reilly Media, 2019
    • Tom Mitchell, "Machine Learning", McGraw-Hill Science, 1997
    • R. O. Duda, P. E. Hart, D. G. Stork, "Pattern Classification", Wiley Interscience, 2002
    • Christopher M. Bishop, "Pattern Recognition and Machine Learning", Springer, 2006
  • Lab tools
    • Language: Python
    • Frameworks and libraries: NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, Keras, Google Colab, LaTeX, and Overleaf
  • Datasets
    • Iris
    • Ridership: bus and rail rides in Chicago
    • CTU-13: 13 attack scenarios from botnets
    • 5G-Traffic: traffic load in different cities in France
    • LiveStreaming: live streaming data of users' connections (World Cup matches)

Teaching volume:
lessons:
20-28 hours
Exercises:
Supervised lab:
14-30 hours
Project:
0-14 hours

Devices:

  • Laboratory-Based Course Structure
  • Open-Source Software Requirements