Fair and Transparent Machine Learning Methods (FTML)

Course content

Deploying machine learning models for downstream applications brings with it a wealth of possibilities. However, there is also a non-negligible risk of harm if models are not developed carefully.

Data can encode undesired societal biases, which machine learning models can in turn perpetuate when trained on such data. For some application tasks, developing an automated solution at all may carry risks. Moreover, ML models are often black boxes whose decisions are not transparent to end users, creating imbalances and raising issues of accountability. It is therefore imperative to reflect on the benefits and risks of ML models, to develop methods to detect and mitigate biases in ML models, and to create solutions that make the inner workings of models more transparent. This course focuses on the technical solutions needed to improve the fairness, accountability and transparency of machine learning models. As such, it assumes students have prior knowledge of machine learning.
 

This course covers the following tentative topic list:

  • Statistical notions of fairness and bias (see the sketch after this list)
  • The intended usage of ML models, e.g. datasheets, model cards
  • Learning fair representations, e.g. counterfactual data augmentation, adversarial training, model calibration
  • Model interpretability and transparency
  • Generating explanations, e.g. post-hoc explainability, generating free-text explanations
  • Evaluating model explanations
  • Probing representations for bias, e.g. functional testing, subspace probing, generative approaches
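
To make the first topic on this list concrete, here is a minimal sketch of two common statistical fairness criteria for a binary classifier: the demographic parity difference and the equalized-odds gaps in true- and false-positive rates. The toy arrays, group encoding and function names are illustrative assumptions, not material prescribed by the course.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gaps(y_true, y_pred, group):
    """Gaps in true-positive and false-positive rates between the two groups."""
    gaps = {}
    for label, name in [(1, "tpr_gap"), (0, "fpr_gap")]:
        mask = y_true == label
        rate_0 = y_pred[mask & (group == 0)].mean()
        rate_1 = y_pred[mask & (group == 1)].mean()
        gaps[name] = abs(rate_0 - rate_1)
    return gaps

# Toy example: binary predictions, gold labels and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))  # 0.0: equal positive rates
print(equalized_odds_gaps(y_true, y_pred, group))    # ~0.33 gaps in error rates
```

Note that a classifier can satisfy one criterion while violating another; which notion is appropriate depends on the application.
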
Education

MSc Programme in Computer Science

Learning outcome

Knowledge of

  • ML fairness: how to operationalise and measure fairness
  • Model bias: how to automatically detect and mitigate ML model biases
  • Transparency: interpretability and explainability for ML models

 

Skills to

  • Develop methods to automatically detect, measure and mitigate biases in ML models
  • Develop methods to interpret features deep neural networks have learned
  • Develop methods to explain decisions made by ML models (see the sketch after this list)
  • Transparently document the intended usage of ML models
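
As a small, hedged illustration of the explanation skill above, the sketch below computes a permutation-based post-hoc feature importance for a classifier: each feature is shuffled in turn, and the resulting drop in held-out accuracy indicates how much the model relies on it. The synthetic dataset, the logistic regression model and the random seeds are assumptions made for this example only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data stands in for any tabular classification task.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Permutation importance: shuffle one feature at a time and record the
# drop in test accuracy; larger drops indicate features the model relies on.
rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - model.score(X_perm, y_test)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

For repeated shuffles and averaged scores, scikit-learn's sklearn.inspection.permutation_importance implements the same idea.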

 

Competences to

  • Understand methods for bias detection and mitigation, interpretability and explainability
  • Plan and carry out fairness and bias analyses on datasets and ML tasks

The class format consists of lectures (including guest lectures), student presentations, and project work.

Selected papers and book chapters. See Absalon when the course is set up.

Knowledge of machine learning (probability theory, linear algebra, classification) and programming is required, corresponding to NDAK15007U Machine Learning, NDAB21005U Machine Learning A, or a similar course.

Feedback form

  • Written
  • Oral
  • Individual
  • Collective
  • Continuous feedback during the course of the semester
  • Peer feedback (students give each other feedback)
ECTS
7,5 ECTS
Type of assessment
Oral examination, during course
Written assignment, during course
Type of assessment details
The exam consists of two parts:

1) A class presentation of an academic paper (oral part)
2) An individual mini project on a topic covered in the course, the findings of which are to be documented in a short report (written part)

The final grade is based on an overall assessment of the assignments and the presentation.
Aid
All aids allowed
Marking scale
7-point grading scale
Censorship form
No external censorship
Several internal examiners.
Re-exam

The re-exam consists of two parts: 

1) A 20-minute oral examination without preparation

2) A (potentially revised) version of the mini-project, including the short report, to be submitted no later than 3 weeks before the re-exam week.

 

Criteria for exam assessment

See learning outcome

Single subject courses (day)

  • Lectures: 16 hours
  • Preparation: 90 hours
  • Practical exercises: 0 hours
  • Project work: 100 hours
  • Total: 206 hours

Course information

Language
English
Course number
NDAK22005U
ECTS
7,5 ECTS
Programme level
Full Degree Master
Duration

1 block

Placement
Block 2
Schedulegroup
B
Capacity
No limitation – unless you register in the late-registration period (BSc and MSc) or as a credit or single subject student.
Studyboard
Study Board of Mathematics and Computer Science
Contracting department
  • Department of Computer Science
Contracting faculty
  • Faculty of Science
Course Coordinator
  • Christina Lioma
Teacher

Isabelle Augenstein
