COSC 594 006

Scientific Computing for Engineers: Spring 2018, 3 Credits

This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.

Wednesdays from 1:30 to 4:15, Room 233 Claxton

Prof. Jack Dongarra with help from Drs. Hartwig Anzt, George Bosilca, Anthony Danalis, Mark Gates, Azzam Haidar, Jakub Kurzak, Heike Jagode, Piotr Luszczek, Stan Tomov



Phone: 865-974-8295

Office hours: Wednesday 11:00 - 1:00, or by appointment

TA: Qinglei Cao

TA's Office: Claxton 305


TA's Office Hours: Wednesdays 10:00 to 12:00, or by appointment


There will be four major aspects of the course:

  • Part I will start with current trends in high-end computing systems and environments, and continue with a practical short description on parallel programming with MPI, OpenMP, and Pthreads.


  • Part II will illustrate the modeling of problems from physics and engineering in terms of partial differential equations (PDEs), and their numerical discretization using finite difference, finite element, and spectral approximation.


  • Part III will be on solvers: both iterative for the solution of sparse problems of part II, and direct for dense matrix problems.  Algorithmic and practical implementation aspects will be covered.


  • Finally in Part IV, various software tools will be surveyed and used.  This will include PETSc, Sca/LAPACK, MATLAB, and some tools and techniques for scientific debugging and performance analysis.


The grade will be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project will be flexible according to the student's major area of research.



Class Roster

If your name is not on the list or some information is incorrect, please send mail to the TA:


Lecture Notes: (Tentative outline of the class)


  1. January 10th (Dr. Bosilca)

Class Introduction

Parallel programming paradigms and their performances

Homework 1 (due January 24th)


  2. January 17th (Dr. Bosilca)

Introduction to MPI

Homework 3 (due January 31st)

Homework 4 (due January 31st)

Tar file for homeworks


  3. January 24th (Dr. Kurzak)


Homework 2 (due February 7)


  4. January 31st (Dr. Bosilca)


MPI dynamic

Homework 3 (due February 14)


  5. February 7th (Dr. Bosilca)

Advanced MPI


Homework 4 (February 21)


  6. February 14th (Dr. Luszczek)


Homework 5 (due February 28)


  7. February 21st (Dr. Dongarra)

Introduction to High Performance Computing

Homework 6 (due March 7th)

Tar file of timer


  8. February 28th (Drs. Jagode & Danalis)

Performance Modeling


Homework 7 (due March 21)


  9. March 7th (Dr. Gates)

Dense Linear Algebra


March 14th Spring Break


  10. March 21st (Dr. Tomov)

Projection and its importance in scientific computing

GPU Computing

Homework 8 (due April 4)

Matlab script chol_qr_it.m


  11. March 28th (Dr. Luszczek)

Deep Learning


  12. April 4th (Dr. Tomov)

Discretization of PDEs and Parallel Solvers

Mesh generation and load balancing

Homework 9 (due April 18)



  13. April 11th (Dr. Anzt)

Sparse Matrices and Optimized Parallel Implementations

GPU example kernels

Homework 10


  14. April 18th (Dr. Haidar)

Iterative Methods in Linear Algebra Part 1

Iterative Methods in Linear Algebra Part 2

Homework 11


  15. April 25th (Dr. Haidar)

Iterative Methods in Linear Algebra Part 2



  16. May 3rd, 1:30 to 4:30 (Dr. Dongarra)

Schedule Class Final Reports



The project is to describe and demonstrate what you have learned in class. The idea is to take an application and implement it on a parallel computer. Describe what the application is and why it is of importance. You should describe the parallel implementation, look at the performance, and perhaps compare it to another implementation if possible. You should write this up in a 10-15 page report, and in class you will have 20 minutes to make a presentation.



Here are some ideas for projects:

  • Projects and additional projects.


Additional Reading Materials

Message Passing Systems

Several implementations of the MPI standard are available today. The most widely used open source MPI implementations are Open MPI and MPICH.

Here is the link to the MPI Forum.

Other useful reference material

    Here are pointers to specs on various processors:


       Introduction to message passing systems and parallel computing

J.J. Dongarra, G.E. Fagg, R. Hempel and D. Walker, Chapter in Wiley Encyclopedia of Electrical and Electronics Engineering, October 1999 (postscript version)


"Message Passing Interfaces", Special issue of Parallel Computing, vol. 20(4), April 1994.


Ian Foster, Designing and Building Parallel Programs.


Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN1-55860-540-1, Morgan Kaufmann Publishers, San Francisco, 2000.


Ananth Grama et al., Introduction to Parallel Computing, 2nd edition, Pearson Education Limited, 2003.


Michael Quinn, Parallel Computing: Theory and Practice, McGraw-Hill, 1993.


David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, Morgan Kaufmann, 1998.


George Almasi and Allan Gottlieb, Highly Parallel Computing, Addison Wesley, 1993


Matthew Sottile, Timothy Mattson, and Craig Rasmussen, Introduction to Concurrency in Programming Languages, Chapman & Hall, 2010


       Other relevant books


       Stephen Chapman, Fortran 95/2003 for Scientists and Engineers, McGraw-Hill, 2007


       Stephen Chapman, MATLAB Programming for Engineers, Thompson, 2007


       Barbara Chapman, Gabriele Jost, Ruud van der Pas, and David J. Kuck, Using OpenMP: Portable Shared Memory Parallel Programming, MIT Press, 2007


      Tarek El-Ghazawi, William Carlson, Thomas Sterling, Katherine Yelick, UPC: Distributed Shared Memory Programming, John Wiley & Sons, 2005


       David Bailey, Robert Lucas, Samuel Williams, eds., Performance Tuning of Scientific Applications, Chapman & Hall, 2010


Message Passing Standards

"MPI - The Complete Reference, Volume 1, The MPI-1 Core, Second Edition",
by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra, MIT Press, September 1998, ISBN 0-262-69215-5.


"MPI: The Complete Reference - 2nd Edition: Volume 2 - The MPI-2 Extensions",
by William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir, published by The MIT Press, September 1998; ISBN 0-262-57123-4.



 MPI-2.2 Standard, September 2009 (PDF format)




On-line Documentation and Information about Machines

High-performance computing systems:

       High Performance Computing Systems: Status and outlook, Aad J. van der Steen and Jack J. Dongarra, 2012.

       TOP500 Supercomputer Sites

       Green 500 List of Energy Efficient Supercomputers



Other Scientific Computing Information Sites

      Netlib Repository at UTK/ORNL

       BLAS Quick Reference Card



       GAMS - Guide to Available Math Software

       Fortran Standards Working Group

       Message Passing Interface (MPI) Forum


       Unified Parallel C

      DOD High Performance Computing Modernization Program

       DOE Accelerated Strategic Computing Initiative (ASC)

       NSF XSEDE (Extreme Science and Engineering Discovery Environment)

       AIST Parallel and High Performance Application Software Exchange (in Japan)

                           (includes information on parallel computing conferences and journals)


       Supercomputing Online


Related On-line Books/Textbooks

   Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publication, Philadelphia, 1994. 

    LAPACK Users' Guide (Third Edition), SIAM Publications, Philadelphia, 1999.

    MPI: The Complete Reference, M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra

     Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)

     Designing and Building Parallel Programs. A dead-tree version of this book is available from Addison-Wesley.

       Introduction to High-Performance Scientific Computing, by Victor Eijkhout with Edmond Chow and Robert van de Geijn, February 2010

       Introduction to Parallel Computing, by Blaise Barney


Performance Analysis Tools Websites

      Eclipse Parallel Tools Platform

Other Online Software and Documentation

  Matlab documentation is available from several sources, most notably by typing "help" into the Matlab command window. See this url

  SuperLU is a family of fast implementations of sparse Gaussian elimination for sequential and parallel computers.

  Sources of test matrices for sparse matrix algorithms

  Matrix Market

  University of Florida Sparse Matrix Collection

  Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and pdf) as well as software.

  Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.

  Updated survey of sparse direct linear equation solvers, by Xiaoye Li

  MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.

  Resources for Parallel and High Performance Computing

  PETSc: Portable, Extensible, Toolkit for Scientific Computation 

  Issues related to Computer Arithmetic and Error Analysis

  Efficient software for very high precision floating point arithmetic

  Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan

  Other notes on arithmetic, error analysis, etc. by Prof. W. Kahan

  Report on the arithmetic error that caused the Ariane 5 rocket crash; video of the explosion

  The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.

Jack Dongarra