COSC 594

Scientific Computing for Engineers: Spring 2016, 3 Credits

This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.

Wednesdays from 1:30 to 4:15, Room 233 Claxton

Prof. Jack Dongarra with help from Profs. Hartwig Anzt, George Bosilca, Jakub Kurzak, Piotr Luszczek, Heike Jagode, Stan Tomov



Phone: 865-974-8295

Office hours: Wednesday 11:00 - 1:00, or by appointment

TA: David Eberius

TA's Office: Claxton 305

TA's Office Hours: Wednesdays 10:00 to 12:00, or by appointment


There will be four major aspects of the course:

  • Part I will start with current trends in high-end computing systems and environments, and continue with a short practical introduction to parallel programming with MPI, OpenMP, and Pthreads.


  • Part II will illustrate the modeling of problems from physics and engineering in terms of partial differential equations (PDEs), and their numerical discretization using finite difference, finite element, and spectral approximations.


  • Part III will be on solvers: iterative methods for the sparse problems of Part II, and direct methods for dense matrix problems.  Algorithmic and practical implementation aspects will be covered.


  • Finally in Part IV, various software tools will be surveyed and used.  This will include PETSc, Sca/LAPACK, MATLAB, and some tools and techniques for scientific debugging and performance analysis.
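To make Parts II and III concrete, here is a minimal sketch of a model problem: the 1D Poisson equation -u'' = f discretized by central finite differences and solved with Jacobi iteration. It is written in pure Python for brevity (the course itself works in C, MPI, and MATLAB), and the function names and problem sizes are illustrative only, not from the course materials.

```python
import math

# Illustrative sketch (not course code): the 1D Poisson problem -u'' = f
# on (0,1) with u(0) = u(1) = 0, discretized by central finite differences
# and solved with Jacobi iteration.  Pure Python; sizes deliberately tiny.

def solve_poisson_1d(f, n, iters=20000):
    """n interior points, mesh width h = 1/(n+1); the stencil is
    (-u[i-1] + 2*u[i] - u[i+1]) / h**2 = f(x_i)."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    b = [f(xi) * h * h for xi in x]
    u = [0.0] * n
    for _ in range(iters):
        # Jacobi sweep: solve row i for u[i] using only the previous iterate.
        u = [(b[i]
              + (u[i - 1] if i > 0 else 0.0)
              + (u[i + 1] if i < n - 1 else 0.0)) / 2.0
             for i in range(n)]
    return x, u

# For f(x) = pi^2 sin(pi x) the exact solution is u(x) = sin(pi x).
x, u = solve_poisson_1d(lambda t: math.pi**2 * math.sin(math.pi * t), n=15)
err = max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u))
print("max error vs exact solution:", err)  # limited by the O(h^2) discretization
```

Halving the mesh width h roughly quarters the error, which is the second-order accuracy the central-difference stencil predicts.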


The grade will be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project will be flexible, according to the student's major area of research.



Class Roster

If your name is not on the list or some information is incorrect, please send mail to the TA:
















[Roster table not reproduced: student names and departments.]

Book for the Class:

The Sourcebook of Parallel Computing, Edited by Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon, Andy White, October 2002, 760 pages, ISBN 1-55860-871-0, Morgan Kaufmann Publishers.














Lecture Notes: (Tentative outline of the class)


  1. January 13th  (Dr. Dongarra)

Class Introduction

Introduction to High Performance Computing

Read Chapters 1, 2, and 9

Homework 1 (due February 3rd, 2016)

Tar file of timer


  2. January 20th (Dr. Bosilca)

Canceled, University closed because of snow.


  3. January 27th (Dr. Bosilca)

Parallel programming paradigms and their performances


Homework 2 (due February 17th, 2016)

Homework 3 (due February 24th, 2016)


  4. February 3rd (Dr. Bosilca)



  5. February 10th (Dr. Tomov)

Projection and its importance in scientific computing


Homework 4 (due February 24th, 2016)



  6. February 17th (Dr. Luszczek)


Homework 5 (due March 2nd, 2016)


  7. February 24th (Dr. Kurzak)


Homework 6 (due March 13th, 2016)


  8. March 2nd (Dr. Kurzak)

Dense Linear Algebra
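As a taste of this lecture's topic, here is an unblocked LU factorization with partial pivoting, sketched in pure Python. This is the textbook form of the algorithm that libraries such as LAPACK implement in blocked, cache-aware fashion; the function names and the tiny test system are illustrative, not from the course materials.

```python
# Illustrative sketch: unblocked LU factorization with partial pivoting.
# Pure Python, tiny matrices only; real codes (e.g. LAPACK's getrf) use
# blocked variants built on BLAS-3 for performance.

def lu_factor(A):
    """Factor A in place so that P*A = L*U; returns the pivot row list."""
    n = len(A)
    piv = list(range(n))
    for k in range(n):
        # Partial pivoting: bring up the row with the largest |entry| in column k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]              # multiplier, stored in the L part
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return piv

def lu_solve(A, piv, b):
    """Solve A x = b using the factors produced by lu_factor."""
    n = len(A)
    x = [b[p] for p in piv]                 # apply the row permutation
    for i in range(1, n):                   # forward substitution (unit lower L)
        x[i] -= sum(A[i][j] * x[j] for j in range(i))
    for i in reversed(range(n)):            # back substitution (upper U)
        x[i] = (x[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
piv = lu_factor(A)
x = lu_solve(A, piv, [4.0, 10.0, 24.0])
print(x)  # solution of the 3x3 system
```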


  9. March 9th (Dr. Bosilca)

MPI Basics

Homework 7 (Due March 30th, 2016)


March 16th: Spring Break


  10. March 23rd (Dr. Tomov)

Discretization of PDEs and Tools for the Parallel Solution of the Resulting System

Mesh generation and load balancing

Homework 8 (Due April 6th, 2016)

HW8 tar file


  11. March 30th (Dr. Tomov)

Sparse Matrices and Optimized Parallel Implementations

NVIDIA's Compute Unified Device Architecture (CUDA)

SGEMM example

Homework 9 (Due April 13th, 2016)
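The kernel at the heart of this lecture's topic, on CPUs and GPUs alike, is the sparse matrix-vector product. Here is a pure-Python sketch of it for the compressed sparse row (CSR) format; the names and the tiny example matrix are illustrative only.

```python
# Illustrative sketch: y = A*x with A stored in compressed sparse row (CSR)
# format -- the core kernel of sparse iterative solvers.  CSR keeps three
# arrays: the nonzero values, their column indices, and per-row offsets.

def csr_matvec(vals, col_idx, row_ptr, x):
    """Compute y = A @ x for A given in CSR form."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        # The nonzeros of row i live in vals[row_ptr[i]:row_ptr[i+1]].
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

# The 1D Poisson matrix tridiag(-1, 2, -1) of order 4 in CSR form:
vals    = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]
col_idx = [0, 1, 0, 1, 2, 1, 2, 3, 2, 3]
row_ptr = [0, 2, 5, 8, 10]

y = csr_matvec(vals, col_idx, row_ptr, [1.0, 1.0, 1.0, 1.0])
print(y)  # row sums of the matrix: [1.0, 0.0, 0.0, 1.0]
```

On a GPU the outer loop over rows is what gets parallelized, one or more threads per row, which is why the irregular row lengths and indirect indexing visible here dominate the optimization effort.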


  12. April 6th (Dr. Anzt)

Sparse Linear Algebra on GPUs

 Homework 10 (Due April 20th, 2016)

 HW10 tar file


  13. April 13th (Dr. Tomov)

Iterative Methods in Linear Algebra (Part 1)

Iterative Methods in Linear Algebra (Part 2)

Better Performance at Lower Occupancy

Introduction to OpenCL (part1)

Introduction to OpenCL (part2)
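A compact sketch of one of the iterative methods this lecture covers: the (unpreconditioned) conjugate gradient method for symmetric positive definite systems, in pure Python. CG only ever touches the matrix through a matrix-vector product, which is passed in as a callback; the names and test problem are illustrative only.

```python
# Illustrative sketch: conjugate gradient (CG) for a symmetric positive
# definite system A x = b, starting from x = 0.  The matrix appears only
# through the matvec callback -- all CG ever needs from it.

def cg(matvec, b, tol=1e-10, maxiter=1000):
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual r = b - A*x, with x = 0
    p = list(r)                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:       # stop once ||r|| is small enough
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD test matrix: the 1D Poisson matrix tridiag(-1, 2, -1).
def poisson_matvec(v):
    n = len(v)
    return [2 * v[i]
            - (v[i - 1] if i > 0 else 0.0)
            - (v[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

x = cg(poisson_matvec, [1.0] * 8)
print(x)  # in exact arithmetic CG converges in at most n steps
```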


  14. April 20th (Dr. Luszczek)

Deep Learning


  15. April 27th (Heike Jagode)

Performance Modeling-part1

Performance Modeling-part2


  16. May 4th, 1:30 to 4:30 (Dr. Dongarra)

Schedule Class Final Reports



The project is to describe and demonstrate what you have learned in class. The idea is to take an application and implement it on a parallel computer. Describe what the application is and why it is important. You should describe the parallel implementation, examine its performance, and, if possible, compare it to another implementation. Write this up in a report of 10-15 pages; in class you will have 20 minutes to make a presentation.



Here are some ideas for projects:

  • Projects and additional projects.


Additional Reading Materials

Message Passing Systems

Several implementations of the MPI standard are available today. The most widely used open source MPI implementations are Open MPI and MPICH.

Here is the link to the MPI Forum.

Other useful reference material

    Here are pointers to specs on various processors:


             Introduction to message passing systems and parallel computing

J.J. Dongarra, G.E. Fagg, R. Hempel, and D. Walker, chapter in Wiley Encyclopedia of Electrical and Electronics Engineering, October 1999 (postscript version)


``Message Passing Interfaces'', Special issue of Parallel Computing, vol 20(4), April 1994.


Ian Foster, Designing and Building Parallel Programs, Addison-Wesley, 1995.


Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN 1-55860-540-1, Morgan Kaufmann Publishers, San Francisco, 2000.


Ananth Grama et al., Introduction to Parallel Computing, 2nd edition, Pearson Education Limited, 2003.


Michael Quinn, Parallel Programming: Theory and Practice, McGraw-Hill, 1993


David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, Morgan Kaufmann, 1998.


George Almasi and Allan Gottlieb, Highly Parallel Computing, Addison Wesley, 1993


Matthew Sottile, Timothy Mattson, and Craig Rasmussen, Introduction to Concurrency in Programming Languages, Chapman & Hall, 2010


             Other relevant books


       Stephen Chapman, Fortran 95/2003 for Scientists and Engineers, McGraw-Hill, 2007


       Stephen Chapman, MATLAB Programming for Engineers, Thompson, 2007


Barbara Chapman, Gabriele Jost, Ruud van der Pas, and David J. Kuck, Using OpenMP: Portable Shared Memory Parallel Programming, MIT Press, 2007


      Tarek El-Ghazawi, William Carlson, Thomas Sterling, Katherine Yelick, UPC: Distributed Shared Memory Programming, John Wiley & Sons, 2005


       David Bailey, Robert Lucas, Samuel Williams, eds., Performance Tuning of Scientific Applications, Chapman & Hall, 2010


Message Passing Standards

``MPI - The Complete Reference, Volume 1, The MPI-1 Core, Second Edition'',
by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra, MIT Press, September 1998, ISBN 0-262-69215-5.


``MPI: The Complete Reference - 2nd Edition: Volume 2 - The MPI-2 Extensions'',
by William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir, published by The MIT Press, September, 1998; ISBN 0-262-57123-4.



 MPI-2.2 Standard, September 2009 (PDF format)




On-line Documentation and Information about Machines

High-performance computing systems:

             High Performance Computing Systems: Status and outlook, Aad J. van der Steen and Jack J. Dongarra, 2012.

             TOP500 Supercomputer Sites

             Green 500 List of Energy Efficient Supercomputers



Other Scientific Computing Information Sites

            Netlib Repository at UTK/ORNL

             BLAS Quick Reference Card



             GAMS - Guide to Available Math Software

             Fortran Standards Working Group

             Message Passing Interface (MPI) Forum


             Unified Parallel C

             DOD High Performance Computing Modernization Program

             DOE Accelerated Strategic Computing Initiative (ASC)

             NSF XSEDE (Extreme Science and Engineering Discovery Environment)

             AIST Parallel and High Performance Application Software Exchange (in Japan)

                           (includes information on parallel computing conferences and journals)


             Supercomputing Online


Related On-line Books/Textbooks

   Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publication, Philadelphia, 1994. 

    LAPACK Users' Guide (Second Edition), SIAM Publications, Philadelphia, 1995.

    MPI: The Complete Reference, M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra

     Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)

     Designing and Building Parallel Programs. A dead-tree version of this book is available from Addison-Wesley.

             Introduction to High-Performance Scientific Computing, by Victor Eijkhout with Edmond Chow and Robert van de Geijn, February 2010

             Introduction to Parallel Computing, by Blaise Barney


Performance Analysis Tools Websites

            Eclipse Parallel Tools Platform

Other Online Software and Documentation

  Matlab documentation is available from several sources, most notably by typing ``help'' into the Matlab command window. See this url

  SuperLU is a fast implementation of sparse Gaussian elimination for sequential and parallel computers.

  Sources of test matrices for sparse matrix algorithms

  Matrix Market

  University of Florida Sparse Matrix Collection

  Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and pdf) as well as software.

  Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.

  Updated survey of sparse direct linear equation solvers, by Xiaoye Li

  MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.

  Resources for Parallel and High Performance Computing

  ACTS (Advanced CompuTational Software) is a set of software tools that make it easier for programmers to write high performance scientific applications for parallel computers.

  PETSc: Portable, Extensible, Toolkit for Scientific Computation 

 Issues related to Computer Arithmetic and Error Analysis

  Efficient software for very high precision floating point arithmetic

  Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan

  Other notes on arithmetic, error analysis, etc. by Prof. W. Kahan

  Report on the arithmetic error that caused the Ariane 5 rocket crash; video of the explosion

  The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.
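The flavor of the issues covered in these notes can be seen in a few lines of Python, whose floats are IEEE 754 binary64 (double precision). The snippet below is an illustrative sketch, not taken from the materials above.

```python
# Illustrative sketch of IEEE 754 behavior (Python floats are binary64).

# 0.1 and 0.2 are not exactly representable in binary, so the rounded sum
# is not the rounded 0.3:
print(0.1 + 0.2 == 0.3)              # False

# Absorption and cancellation: adding a tiny eps to 1.0 rounds the result
# to the nearest double, so subtracting 1.0 back does not recover eps exactly.
eps = 1e-12
print((1.0 + eps) - 1.0)             # close to 1e-12, but not exactly 1e-12

# The Ariane 5 failure involved a 64-bit float overflowing a 16-bit signed
# integer on conversion; checking the magnitude against the type's range is
# the defensive move that was missing:
value = 1e5
print(value > 2**15 - 1)             # True: would overflow a signed 16-bit int
```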

Jack Dongarra