COSC 594 00?

Scientific Computing for Engineers: Spring 2017, 3 Credits

This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.

Wednesdays from 1:30 to 4:15, Room 233 Claxton

Prof. Jack Dongarra with help from Profs. Hartwig Anzt, George Bosilca, Mark Gates, Jakub Kurzak, Piotr Luszczek, Anthony Danalis, Stan Tomov



Phone: 865-974-8295

Office hours: Wednesday 11:00 - 1:00, or by appointment

TA: Stephen Richmond

TA's Office: Claxton 309


TA's Office Hours: Wednesdays 10:00 to 12:00, or by appointment


There will be four major aspects of the course:

  • Part I will start with current trends in high-end computing systems and environments, and continue with a practical short description on parallel programming with MPI, OpenMP, and Pthreads.


  • Part II will illustrate the modeling of problems from physics and engineering in terms of partial differential equations (PDEs), and their numerical discretization using finite difference, finite element, and spectral approximation.


  • Part III will be on solvers: iterative methods for the sparse problems of Part II, and direct methods for dense matrix problems. Algorithmic and practical implementation aspects will be covered.


  • Finally in Part IV, various software tools will be surveyed and used.  This will include PETSc, Sca/LAPACK, MATLAB, and some tools and techniques for scientific debugging and performance analysis.
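To give a feel for how Parts II and III fit together, here is an illustrative sketch (my own, not course material): a finite-difference discretization of the 1D Poisson equation -u'' = f on (0, 1) with zero boundary values, solved with Jacobi iteration in plain Python.

```python
# Hypothetical sketch (not course-provided code): discretize -u'' = f on (0, 1)
# with u(0) = u(1) = 0 by the standard 3-point finite-difference stencil,
# then solve the resulting tridiagonal system with Jacobi iteration.
import math

n = 20                      # number of interior grid points
h = 1.0 / (n + 1)           # grid spacing
x = [(i + 1) * h for i in range(n)]

# Choose f so the exact solution is u(x) = sin(pi * x).
f = [math.pi**2 * math.sin(math.pi * xi) for xi in x]

u = [0.0] * n               # initial guess
for _ in range(2000):       # Jacobi sweeps
    u_new = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0       # boundary u(0) = 0
        right = u[i + 1] if i < n - 1 else 0.0  # boundary u(1) = 0
        u_new.append(0.5 * (left + right + h * h * f[i]))
    u = u_new

err = max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n))
print(err)  # small: O(h^2) discretization error once Jacobi has converged
```

The same structure scales up: a 2D or 3D PDE gives a larger sparse system, and the Jacobi sweep becomes the kernel that Parts I and III parallelize and replace with faster iterations.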


The grade will be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project will be flexible according to the student's major area of research.



Class Roster

If your name is not on the list or some information is incorrect, please send mail to the TA.

Book for the Class:

The Sourcebook of Parallel Computing, Edited by Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon, Andy White, October 2002, 760 pages, ISBN 1-55860-871-0, Morgan Kaufmann Publishers.
Lecture Notes: (Tentative outline of the class)


  1. January 11th (Dr. Dongarra)

Class Introduction

Introduction to High Performance Computing

Read Chapters 1, 2, and 9

Homework 1 (due January 25th, 2017)

Tar file of timer


  2. January 18th (Dr. Bosilca)

Parallel programming paradigms and their performances

Homework 2 (due February 15th, 2017)


  3. January 25th (Dr. Luszczek)


Homework 3 (due February 22nd, 2017)


  4. February 1st (Dr. Bosilca)

Architecture and POSIX threads


  5. February 8th (Dr. Bosilca)


MPI dynamic

Homework 4 (due March 1, 2017)


  6. February 15th (Dr. Kurzak)


Homework 5 (due March 3, 2017)


  7. February 22nd (Dr. Bosilca)

MPI Part 2

Homework 6 (due March 22, 2017)


  8. March 1st (Dr. Danalis)

Performance Modeling


Homework 7 (due March 29, 2017)


  9. March 8th (Dr. Anzt)

Sparse Linear Algebra on GPUs

Homework 8 (due March 22, 2017)


GPU example


March 15th Spring Break


  10. March 22nd (Dr. Tomov)

Projection and its importance in scientific computing

GPU Computing

Homework 9 (due April 5, 2017)

Matlab script chol_qr_it.m


  11. March 29th (Dr. Tomov)

Discretization of PDEs and Parallel Solvers

Mesh generation and load balancing

Homework 10 (due April 12, 2017)



  12. April 5th (Dr. Tomov)

Sparse Matrices and Optimized Parallel Implementations

NVIDIA's Compute Unified Device Architecture (CUDA)

SGEMM example


  13. April 12th (Dr. Tomov)

Iterative Methods in Linear Algebra Part 1

Iterative Methods in Linear Algebra Part 2

Better performance at lower occupancy


Introduction to OpenCL (Part 1)


Introduction to OpenCL (Part 2) 


  14. April 19th (Dr. Luszczek)

Deep Learning


  15. April 26th (Dr. Gates)

Dense Linear Algebra


  16. May 3rd, 1:30 to 4:30 (Dr. Dongarra)

Schedule Class Final Reports



The project is to describe and demonstrate what you have learned in class. The idea is to take an application and implement it on a parallel computer. Describe what the application is and why it is important. You should describe the parallel implementation, examine its performance, and, if possible, compare it to another implementation. Write this up in a 10-15 page report; in class you will have 20 minutes to make a presentation.



Here are some ideas for projects:

o   Projects and additional projects.


Additional Reading Materials

Message Passing Systems

Several implementations of the MPI standard are available today. The most widely used open source MPI implementations are Open MPI and MPICH.
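MPI programs express all communication as explicit sends and receives between ranks. As a purely conceptual illustration (real MPI codes are typically C, C++, or Fortran), the sketch below mimics the MPI_Send/MPI_Recv pattern with Python threads and queues, hand-rolling the equivalent of an MPI_Reduce to rank 0.

```python
# Conceptual sketch only, not MPI itself: each "rank" computes a partial sum
# over its slice of the data and "sends" it to rank 0, which "receives" and
# reduces the partial results -- the pattern behind MPI_Reduce.
import threading
import queue

NRANKS = 4
inbox = [queue.Queue() for _ in range(NRANKS)]  # one mailbox per rank
results = []

def worker(rank, data):
    partial = sum(data)                # local computation on this rank's slice
    if rank == 0:
        total = partial
        for _ in range(NRANKS - 1):
            total += inbox[0].get()    # "receive" from each other rank
        results.append(total)
    else:
        inbox[0].put(partial)          # "send" to rank 0

data = list(range(100))                # global problem: sum 0..99
chunks = [data[r::NRANKS] for r in range(NRANKS)]  # cyclic data distribution
threads = [threading.Thread(target=worker, args=(r, chunks[r]))
           for r in range(NRANKS)]
for t in threads: t.start()
for t in threads: t.join()
print(results[0])  # 4950, the same answer a serial sum gives
```

In real MPI the ranks are separate processes, often on separate nodes, and the queue operations become MPI_Send/MPI_Recv (or a single collective MPI_Reduce call), but the decomposition-compute-communicate structure is the same.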

Here is the link to the MPI Forum.

Other useful reference material

    Here are pointers to specs on various processors:


       Introduction to message passing systems and parallel computing

J.J. Dongarra, G.E. Fagg, R. Hempel, and D. Walker, chapter in Wiley Encyclopedia of Electrical and Electronics Engineering, October 1999 (postscript version)


``Message Passing Interfaces'', Special issue of Parallel Computing, vol 20(4), April 1994.


Ian Foster, Designing and Building Parallel Programs


Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN1-55860-540-1, Morgan Kaufmann Publishers, San Francisco, 2000.


Ananth Grama et al., Introduction to Parallel Computing, 2nd edition, Pearson Education Limited, 2003.


Michael Quinn, Parallel Programming: Theory and Practice, McGraw-Hill, 1993


David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, Morgan Kaufmann, 1998


George Almasi and Allan Gottlieb, Highly Parallel Computing, Addison Wesley, 1993


Matthew Sottile, Timothy Mattson, and Craig Rasmussen, Introduction to Concurrency in Programming Languages, Chapman & Hall, 2010


       Other relevant books


       Stephen Chapman, Fortran 95/2003 for Scientists and Engineers, McGraw-Hill, 2007


       Stephen Chapman, MATLAB Programming for Engineers, Thompson, 2007


       Barbara Chapman, Gabriele Jost, Ruud van der Pas, and David J. Kuck, Using OpenMP: Portable Shared Memory Parallel Programming, MIT Press, 2007


      Tarek El-Ghazawi, William Carlson, Thomas Sterling, Katherine Yelick, UPC: Distributed Shared Memory Programming, John Wiley & Sons, 2005


       David Bailey, Robert Lucas, Samuel Williams, eds., Performance Tuning of Scientific Applications, Chapman & Hall, 2010


Message Passing Standards

``MPI - The Complete Reference, Volume 1, The MPI-1 Core, Second Edition'',
by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra, MIT Press, September 1998, ISBN 0-262-69215-5.


``MPI: The Complete Reference - 2nd Edition: Volume 2 - The MPI-2 Extensions'',
by William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir, published by The MIT Press, September, 1998; ISBN 0-262-57123-4.



 MPI-2.2 Standard, September 2009





On-line Documentation and Information about Machines

High-performance computing systems:

       High Performance Computing Systems: Status and outlook, Aad J. van der Steen and Jack J. Dongarra, 2012.

       TOP500 Supercomputer Sites

       Green 500 List of Energy Efficient Supercomputers



Other Scientific Computing Information Sites

      Netlib Repository at UTK/ORNL

       BLAS Quick Reference Card



       GAMS - Guide to Available Math Software

       Fortran Standards Working Group

       Message Passing Interface (MPI) Forum


       Unified Parallel C

      DOD High Performance Computing Modernization Program

       DOE Accelerated Strategic Computing Initiative (ASC)

       NSF XSEDE (Extreme Science and Engineering Discovery Environment)

       AIST Parallel and High Performance Application Software Exchange (in Japan)

                           (includes information on parallel computing conferences and journals)


       Supercomputing Online


Related On-line Books/Textbooks

   Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publication, Philadelphia, 1994. 

    LAPACK Users' Guide (Third Edition), SIAM Publications, Philadelphia, 1999.

    MPI: The Complete Reference, M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra

     Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)

     Designing and Building Parallel Programs. A print edition of this book is published by Addison-Wesley.

       Introduction to High-Performance Scientific Computing, by Victor Eijkhout with Edmond Chow, Robert Van De Geijn, February 2010

       Introduction to Parallel Computing, by Blaise Barney


Performance Analysis Tools Websites

      Eclipse Parallel Tools Platform

Other Online Software and Documentation

  Matlab documentation is available from several sources, most notably by typing ``help'' into the Matlab command window.

  SuperLU provides fast implementations of sparse Gaussian elimination for sequential and parallel computers.

  Sources of test matrices for sparse matrix algorithms

  Matrix Market

  University of Florida Sparse Matrix Collection

  Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and pdf) as well as software.
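As a flavor of what the Templates book covers, here is a minimal conjugate gradient sketch (my own illustration, not code from the book) for a symmetric positive definite system Ax = b, written with plain Python lists to stay dependency-free.

```python
# Illustrative sketch of the conjugate gradient method for SPD systems.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def cg(A, b, tol=1e-10, maxiter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x for the zero initial guess
    p = r[:]                 # first search direction
    rs = dot(r, r)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)                             # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # new direction
        rs = rs_new
    return x

# Small SPD test system: the 1D Laplacian stencil.
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = cg(A, b)
print(x)  # approximately [1.0, 1.0, 1.0]
```

In exact arithmetic CG converges in at most n iterations; in practice a good preconditioner, which the book discusses at length, is what makes it effective on large sparse systems.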

  Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.
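The simplest algorithm in the eigenvalue-templates family is the power method; the following is a small self-contained illustration (mine, not code from the book) on a 2x2 symmetric matrix with known eigenvalues 3 and 1.

```python
# Illustrative power method: repeated multiplication by A aligns the vector
# with the dominant eigenvector; the Rayleigh quotient then estimates the
# dominant eigenvalue.
import math

A = [[2.0, 1.0], [1.0, 2.0]]    # eigenvalues 3 and 1

v = [1.0, 0.0]                  # starting vector (must overlap the eigenvector)
for _ in range(50):
    w = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
    norm = math.sqrt(sum(wi * wi for wi in w))
    v = [wi / norm for wi in w]  # renormalize to avoid overflow

# Rayleigh quotient v^T A v estimates the dominant eigenvalue.
Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
lam = sum(vi * avi for vi, avi in zip(v, Av))
print(lam)  # approximately 3.0
```

Convergence is geometric in the ratio of the two largest eigenvalue magnitudes (here 1/3 per step), which is why the survey's more sophisticated methods (inverse iteration, Lanczos, etc.) matter when that ratio is close to 1.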

  Updated survey of sparse direct linear equation solvers, by Xiaoye Li

  MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.

  Resources for Parallel and High Performance Computing

  PETSc: Portable, Extensible, Toolkit for Scientific Computation 

  Issues related to Computer Arithmetic and Error Analysis

  Efficient software for very high precision floating point arithmetic

  Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan

  Other notes on arithmetic, error analysis, etc. by Prof. W. Kahan

  Report on the arithmetic error that caused the Ariane 5 rocket crash; video of the explosion

  The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.
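A few small demonstrations of why these arithmetic and error-analysis notes matter (the examples below are my own, not taken from Kahan's notes):

```python
# 1) Representation error: 0.1 has no exact binary representation, so
#    0.1 + 0.2 is not exactly 0.3 in IEEE double precision.
print(0.1 + 0.2 == 0.3)          # False
print(abs((0.1 + 0.2) - 0.3))    # a difference around 5.5e-17

# 2) Absorption: adding a value below machine epsilon (~2.2e-16 for doubles)
#    to 1.0 loses it entirely.
x = 1e-17
print((1.0 + x) - 1.0)           # 0.0, even though x != 0

# 3) Summation order changes the answer: small terms added one at a time to a
#    large accumulator are rounded away; accumulating them first keeps them.
big, small, n = 1e16, 1.0, 10
naive = big
for _ in range(n):
    naive += small               # each 1.0 falls below one ulp of 1e16
careful = small * n + big        # sum the small parts first
print(naive == big, careful > big)   # True True
```

Effects like these, compounded over millions of operations, are exactly what backward error analysis and the IEEE standard's rounding rules are designed to control.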

Jack Dongarra