Autumn 2024
INF-8605 Interpretability in Deep Learning - 5 ECTS

Type of course

The course can be taken as a stand-alone Ph.D. course. It can be organized for participants registered at the NORA summer school, for DLN transdisciplinary course participants, and for UiT students, either during a summer/winter school or as a block course during the semester. It will be conducted as a concentrated block course in the style of the summer/winter schools run by NORA, DLN, and UiT.

NOTE: The first lecture will be held at the end of May and will be given digitally (online).

Part of the lectures will be uploaded as video lectures; watching these is a prerequisite before the on-site program at UiT begins in August. The details will be shared during the first digital introductory session.


Admission requirements

PhD students or holders of a Norwegian master's degree of five years (300 ECTS) or 3 (180 ECTS) + 2 years (120 ECTS), or equivalent, may be admitted. PhD students must upload a document from their university stating that they are registered PhD students. This group of applicants does not have to prove English proficiency and is exempt from the semester fee. Holders of a master's degree must upload a Master's Diploma with Diploma Supplement / English translation.

PhD students at UiT The Arctic University of Norway register for the course through StudentWeb. External applicants apply for admission through SøknadsWeb. All external applicants have to attach a confirmation of their status as a PhD student from their home institution. Students who hold a Master of Science degree but are not yet enrolled as PhD students have to attach a copy of their master's degree diploma. These students are also required to pay the semester fee.

Recommended prerequisites: Programming skills in Python and/or INF-1400. Hands-on knowledge of Python programming for deep learning.

Application code: 9303. Application deadline: April 15th, 2024. Apply here: https://fsweb.no/soknadsweb/login.jsf

The course is limited to 25 places. If there are more applicants than available places, qualified applicants are ranked by lottery.


Course overlap

If you pass the examination in this course, you will get a reduction in credits (as stated below) if you have previously passed the following course:

INF-3605 Interpretability in Deep Learning - 4 ECTS

Course content

This course covers key topics in interpretable deep learning, equipping students with knowledge of approaches that can be used to explain deep learning models, and of deep learning approaches that are inherently more explainable than others. In addition, students will acquire practical skills in applying selected approaches for explaining deep learning, enabling them to adapt to the rapid pace of development in the field of explainable artificial intelligence / interpretable deep learning.

Introductory concepts (3 hours)

  • Explainability and interpretability
  • Black box models versus explainable models
  • Knowledge versus performance, need for explainability
  • Example case(s)

Model-agnostic approaches (6 hours)

  • Knowledge abstraction and encoding
  • Interpretation and visualization of abstract encoding - concepts and techniques, visual explanation
  • Feature importance and feature interaction (see the sketch after this list)
  • Counterfactual methods
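
To give a flavour of the hands-on work on these topics, below is a minimal sketch of permutation feature importance, a model-agnostic method, in Python. It is illustrative rather than course material: the scikit-learn toy dataset and random-forest model are hypothetical stand-ins, and any fitted model with a score() method could take their place.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Stand-in data and model; any fitted model with a score() method works.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    baseline = model.score(X_test, y_test)
    rng = np.random.default_rng(0)
    importances = []
    for j in range(X_test.shape[1]):
        X_perm = X_test.copy()
        rng.shuffle(X_perm[:, j])  # break the link between feature j and the target
        importances.append(baseline - model.score(X_perm, y_test))

    # A larger accuracy drop means the model relied more on that feature.
    for j in np.argsort(importances)[::-1][:5]:
        print(f"feature {j}: drop in accuracy {importances[j]:.4f}")

The same permutation idea applies unchanged to a deep network, since only the model's predictions are queried; this is precisely what makes the method model-agnostic.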

Neural networks and explainability (6 hours)

  • Network visualization and neuron interaction
  • Tracing and explainable backpropagation
  • Depth of network, and abstraction
  • Class activation maps
  • Saliency and attention models (see the sketch after this list)
  • Fuzzy neural networks - type 1 and 2
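
As an illustration of one of these techniques, below is a minimal sketch of a plain gradient saliency map in PyTorch. It is a hypothetical example, not course code: the untrained torchvision ResNet-18 and the random input are placeholders for a trained network and a real image.

    import torch
    from torchvision.models import resnet18

    # Placeholder network and input; in practice load a trained model and an image.
    model = resnet18(weights=None).eval()
    image = torch.randn(1, 3, 224, 224, requires_grad=True)

    scores = model(image)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()  # d(top-class score) / d(input pixels)

    # Saliency: largest absolute gradient across the colour channels of each pixel.
    saliency = image.grad.abs().max(dim=1).values.squeeze(0)
    print(saliency.shape)  # torch.Size([224, 224])

Pixels with large gradient magnitude are those to which the predicted class score is most sensitive, which is the intuition behind saliency-based visual explanations.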

Self-reading

  • Two research articles - comparison, weaknesses, strengths, application domain

Extensive lab work, self-exercises, and group sessions for competence development are also included.

Relevance of course in program of study: Artificial intelligence (AI) and deep learning approaches are often considered black boxes, i.e., algorithms that accomplish learning tasks but cannot explain how. However, as AI/deep learning is increasingly adopted for accomplishing cognitive tasks for human beings, it is becoming important that deep learning models are understandable by humans, so that artificial and human intelligence can co-exist and collaborate. In critical tasks, such as deriving a correct medical diagnosis and prognosis from given data, collaboration between artificial and human intelligence is imperative, so that the suggestions or decisions from artificial intelligence are both more accurate and more trustworthy.


Recommended prerequisites

INF-1400 Object-oriented programming

Objectives of the course

A general educational aim of the course will be to equip students with knowledge and skills regarding interpretable artificial intelligence/deep learning, for considering explainable approaches for solving a neural learning problem as well as for developing an explanation for an existing knowledge model developed by a black box approach. This will enable the students to understand, work with, and solve deep learning tasks with a balance of explainability and accuracy, as needed.

A brief introduction to explainable/interpretable deep learning is offered in this course. This will fill the knowledge gap for those who want to learn more about deep learning and develop trustworthy, reliable deep learning models. The course recognizes the significance of interpretable models for computationally intensive deep learning architectures, for the analysis and comprehension of complex biological applications, and for the cross-disciplinary collaborations needed in future biotech, medicine, and AI.

The knowledge and learning of PhD students undertaking this course will be evaluated in an oral exam during their project presentation.

Knowledge - The student

  • understands the need for explainability and the basic concepts underlying the ability or inability of a black-box model to be interpreted.
  • knows the knowledge-versus-performance trade-off and examples of how to build an interpretable system.
  • knows the various model-agnostic approaches for interpretability and the underlying concepts of knowledge encoding and abstraction in neural networks, which differ from the traditional knowledge encoding in classical machine learning systems.
  • knows a variety of explainable approaches to deep learning.
  • knows tools and visualization techniques used for making black-box models explainable.
  • comprehends neural networks and their complexity, and knows tools for explaining the abstractions of deep networks.
  • knows the basics of fuzzy learning in neural networks and how it can make a system more explainable.

Skills - The student can

  • apply explainable approaches for problem solving
  • compute feature importance and counterfactual explanations
  • visualize knowledge encoding in the model
  • compute class activation and saliency maps and apply popular visualization techniques
  • compare different deep learning approaches for explainability and performance

General competence - The student has developed

  • the confidence to assess the explainability/interpretability of a deep learning model
  • the confidence and skills to make a knowledge model explainable
  • the confidence and skills to make choices and discuss solutions for interpretable deep learning
  • enough background and confidence to discuss, assess, and potentially contribute to the newest advances and developments in explainable/interpretable deep learning
  • perspectives on the value, ethics, and compromises of explainable/interpretable deep learning
  • the ability to carry out research and independent learning within the domain of interpretability in deep learning
  • the ability to read advanced scientific literature in the field of interpretability in deep learning, and to evaluate and extract relevant information from it.

Language of instruction and examination

The language of instruction is English, and all syllabus material is in English. The project presentation must be given in English, and questions must be answered in English.

Teaching methods

Lectures: 15 hours

Self-study session: 30 hours

Project work: spread over 8 weeks - net time 50 hours

Group / Self-work session: 20 hours

Hands-on session: 3 hours

Project consultation session: 4 hours

Oral Presentations/ Presentation preparation: 3 hours

Net effort: ~125 hours

Note! The first lecture will be held at the end of May and will be given digitally.


Examination

Examination:

Off campus exam
  • Hand out: 26.07.2024 09:00
  • Hand in: 20.09.2024 14:00
  • Weighting: 1/2
  • Duration: 8 weeks
  • Grade scale: Passed / Not Passed

Oral exam
  • Weighting: 1/2
  • Duration: 15 minutes
  • Grade scale: Passed / Not Passed

Coursework requirements:

To take an examination, the student must have passed the following coursework requirements:

  • 1 practical lab report and 1 problem-solving assignment: Approved / Not approved
  • Physical participation in summer school: Approved / Not approved

More info about the coursework requirements

The coursework consists of:

  • one practical lab report and one problem-solving assignment, and
  • physical participation in the summer school arranged at UiT in Tromsø, August 7th - 9th.

Re-sit examination

No re-sit examination will be given for this course.

Info about the weighting of parts of the examination

The exam consists of the following two parts:

  • Off campus exam counting: 50%
  • Oral exam counting: 50%

The off-campus exam period is 8 weeks. The project will be assigned at the introductory class, and students will build their project as the course progresses. At the end, students will submit a project report of up to 10 pages, along with the source code they have developed for their project work.

The oral examination includes a 15-minute presentation on the self-read research articles.

