Network Security

Year:
1st year
Semester:
S2
Programme main editor:
I2CAT
Onsite in:
Remote:
UULM
ECTS range:
3-7 ECTS

Professors

Yacine Benchaib (CGE)
Oleksandr Rokovyi (NTUU)
Oleg Alienin
Rachid El-Azouzi (AU)
Frank Kargl (UULM)

Prerequisites:

Linux, Python, network protocols, and virtual machines; use of virtual machines in a Linux environment to carry out practical security work.

Pedagogical objectives:

The objective of this course is to address different security themes: authentication and access control, cryptography, virtual private networks, and the exploitation of network and software vulnerabilities.

Evaluation modalities:

Mid-term and/or final exams and/or assignment/lab reports.

Description:

The course introduces different aspects of system and network security.

The course focuses on Authentication, Authorization and Accounting (AAA) architectures and primitives: challenge/response, nonces, mutual authentication schemes, perfect forward secrecy, timestamps, TCP-based authentication protocols, and sequence number prediction.
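
As a rough illustration of the challenge/response and nonce primitives listed above, the Python sketch below shows a verifier issuing a random nonce and checking an HMAC-based response. The function names are illustrative only, and a real protocol would add mutual authentication, timestamps, and replay protection.

    # Minimal sketch of nonce-based challenge/response, assuming a pre-shared
    # key between prover and verifier; function names are illustrative only.
    import hashlib
    import hmac
    import os

    def make_challenge() -> bytes:
        # The verifier picks a fresh random nonce so an old response cannot be replayed.
        return os.urandom(16)

    def prove(shared_key: bytes, nonce: bytes) -> bytes:
        # The prover answers the challenge with an HMAC over the nonce.
        return hmac.new(shared_key, nonce, hashlib.sha256).digest()

    def verify(shared_key: bytes, nonce: bytes, response: bytes) -> bool:
        # The verifier recomputes the HMAC and compares in constant time.
        expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    if __name__ == "__main__":
        key = os.urandom(32)           # pre-shared secret
        nonce = make_challenge()       # fresh challenge for this run
        response = prove(key, nonce)   # prover's answer
        print("authenticated:", verify(key, nonce, response))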

In addition, a subset of the following complementary topics is covered:

  • Basics of cryptography and integrity: hash functions, public/private keys, shift ciphers, secret-key cryptography, public-key cryptography, public-key certificates, Diffie–Hellman key exchange (a toy sketch follows this list).
  • Securing virtual environments, programming in Python, and use of the Linux operating system. Managing access rights to a system and defining firewall rules. Virtual private networks.
  • Exploitation of application vulnerabilities to break into a system remotely, reproduction of man-in-the-middle attacks using flaws in the ARP protocol, and attacks on the DHCP service.
  • Intrusion detection systems.
  • Operating systems security.
  • AI methods to secure networks (e.g. intrusion detection and prevention systems, traffic classification and filtering for DDoS mitigation, …), for example methods from the field of anomaly detection such as isolation forests.
  • Attacks on AI systems (e.g. adversarial examples, malicious patches, prompt injection in LLMs, …)
  • Usage of AI systems for attacks (e.g. coding of malware using AI, use of LLMs to generate phishing emails, …).
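
As referenced in the first item of the list, the following toy Python sketch illustrates the Diffie–Hellman key exchange. The parameters p = 23 and g = 5 are the standard textbook example, chosen here only for readability; real deployments use standardised groups (e.g. RFC 3526) or elliptic curves.

    # Toy Diffie–Hellman exchange with textbook parameters (p = 23, g = 5),
    # kept small only so the arithmetic is readable; not for real use.
    import secrets

    p, g = 23, 5                      # public prime modulus and generator

    a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1  # Bob's private exponent

    A = pow(g, a, p)                  # Alice sends g^a mod p
    B = pow(g, b, p)                  # Bob sends g^b mod p

    shared_alice = pow(B, a, p)       # Alice computes (g^b)^a mod p
    shared_bob = pow(A, b, p)         # Bob computes (g^a)^b mod p
    assert shared_alice == shared_bob
    print("shared secret:", shared_alice)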

Required teaching material:

Personal laptop

Teaching volume:
lessons:
16-30 hours
Exercises:
8-16 hours
Supervised lab:
0-16 hours
Project:

Devices:

  • Laboratory-Based Course Structure
  • Open-Source Software Requirements