High Performance Parallel Computing
Computational methods are growing increasingly important in many areas of science, and the solution of many problems depends on computers that are vastly faster and hold more memory than a single high-end server can offer. Top supercomputers consist of more than a hundred million processor cores working in parallel, and soon a top machine will host more than a billion processors. Programming such highly parallel computers is difficult, and ensuring both program correctness and high performance is non-trivial.

In this course students will learn how computers are built, from the individual CPU up to millions of processor cores. Students will learn to program high-performance applications using accelerators (GPUs), shared memory, and explicit message passing. The theory is put into practice through hands-on exercises, and students will learn to map algorithms to parallel architectures and to decompose problems for parallel execution. We will use the ERDA MODI cluster to run the programs on a real high-performance computing infrastructure and evaluate the performance, scalability, and correctness of the programs.

The hands-on exercises use examples from astrophysics, biophysics, geophysics, high-energy physics, and solid-state physics. The numerical methods for each week are chosen to be well suited to each parallel architecture. During the exercises we will introduce a new tool each week, such as a debugger, profiler, or parallel correctness checker, to aid in the development of high-performing programs.
We will use C++ as the course language, and small Python programs for visualizing the data. The first week is dedicated to introducing C++ and general programming.
MSc Programme in Physics
At the course completion, the student should be able to:
1. Design and implement parallel applications
2. Choose a parallel computer architecture for a specific purpose
3. Program efficiently for shared memory architectures
4. Program distributed memory architectures with Message Passing Interface
5. Transform algorithms to enable vectorization of operations, well suited for accelerators
The overall purpose of this course is to enable the student to write high performance parallel applications on a range of parallel computer architectures. In addition, the successful candidate will become familiar with a number of typical parallel computer architectures and a set of high performance scientific algorithms.
The students will understand the challenges in addressing parallelization of applications and limitations of the available hardware. In addition, the students should have the ability to reason about the potential from different solutions to a given high performance computing problem.
Lectures and written projects.
See Absalon for final course material.
It is an advantage to have experience writing programs, especially applications in scientific modelling, simulation, or data processing. It is useful if the student has a general idea of the internal construction of a computer.
Academic qualifications equivalent to a BSc degree are recommended.
- 7,5 ECTS
- Type of assessment
- Type of assessment details
- An exercise counting 10% towards the final grade is handed in each of the first six weeks. In week seven, a final project counting 40% towards the grade is developed.
- All aids allowed
- Marking scale
- 7-point grading scale
- Censorship form
- No external censorship
More internal examiners
Criteria for exam assessment
See learning outcome
Single subject courses (day)
- Practical exercises
- Course number
- 7,5 ECTS
- Programme level
- Full Degree Master
- Block 3
- No restriction
The number of seats may be reduced in the late registration period
- Study Board of Physics, Chemistry and Nanoscience
- The Niels Bohr Institute
- Faculty of Science
- Troels Haugbølle
- Markus Jochum