COSC 594

Scientific Computing for Engineers:  Spring 2020 – 3 Credits  

This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.

Wednesdays from 1:30 – 4:15, Room 233 Claxton


Prof. Jack Dongarra, with help from Drs. George Bosilca, Anthony Danalis, Mark Gates, Heike Jagode, Nuria Losada, Piotr Luszczek, Stan Tomov, and Jeff Larkin



Phone: 865-974-8295

Office hours: Wednesday 11:00 - 1:00, or by appointment

TA:   Neil Lindquist

TA’s Office: Claxton 353


TA’s Office Hours: Wednesdays 10:00 – 12:00 or by appointment


There will be four major aspects of the course:

  • Part I will start with current trends in high-end computing systems and environments, and continue with a short, practical introduction to parallel programming with MPI, OpenMP, and Pthreads.


  • Part II will illustrate the modeling of problems from physics and engineering in terms of partial differential equations (PDEs), and their numerical discretization using finite difference, finite element, and spectral approximation.


  • Part III will be on solvers: iterative methods for the sparse problems of Part II, and direct methods for dense matrix problems. Algorithmic and practical implementation aspects will be covered.


  • Finally, in Part IV, various software tools will be surveyed and used, including PETSc, Sca/LAPACK, MATLAB, and tools and techniques for scientific debugging and performance analysis.


The grade will be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project will be flexible according to the student's major area of research.



Class Roster

If your name is not on the list or some information is incorrect, please send mail to the TA:

Tugrul Yavuz Ertugrul

Mechanical Engineering

Melissa Karman

Aerospace Engineering


Daniel Lee Nichols


Alexander Caine Teepe


Ethan Arthur Vogel

Aerospace Engineering

Lecture Notes: (Tentative outline of the class)


  1. January 8th (Dr. Dongarra)

Class Introduction

Introduction to High Performance Computing

Homework 1 (due January 22nd)

Tar file of timer


  2. January 15th (Dr. Bosilca)

Parallel programming paradigms and their performances



  3. January 22nd (Dr. Bosilca)

Introduction to MPI



  4. January 29th (Dr. Bosilca)



Jacobi template zip file


  5. February 3rd (special Monday lecture, 1:30) (Dr. Losada)




  6. February 12th (Jeff Larkin)

Modern Directive Programming with OpenMP and OpenACC



  7. February 19th (Dr. Luszczek)

Machine Learning with Deep Neural Networks



  8. February 26th (Dr. Gates)

Dense Linear Algebra



  9. March 4th (Dr. Gates)

Dense Linear Algebra



  10. March 11th (Drs. Jagode & Danalis)

Performance Modeling




March 18th: Spring Break


  11. March 25th (Dr. Gates)

Accelerators, Part 1

Accelerators, Part 2





  12. April 1st (Dr. Tomov)

Projection and its importance in scientific computing

GPU Computing


Matlab script chol_qr_it.m


  13. April 8th (Dr. Tomov)

Discretization of PDEs and Parallel Solvers

Mesh generation and load balancing




  14. April 15th (Dr. Tomov)

Sparse Matrices and Optimized Parallel Implementations

GPU example kernels


  15. April 22nd (Dr. Tomov)

Iterative Methods in Linear Algebra Part 1

Iterative Methods in Linear Algebra Part 2


  16. April 29th, 1:30 – 3:00

Schedule of Class Final Reports

       Melissa Karman

       Daniel Nichols

       Alexander Teepe

       Ethan Vogel



The project is to describe and demonstrate what you have learned in class. The idea is to take an application and implement it on a parallel computer. Describe what the application is and why it is important. You should describe the parallel implementation, examine its performance, and perhaps compare it to another implementation if possible. Write this up in a 10-15 page report; in class you will have 20 minutes to make a presentation.



Here are some ideas for projects:

o   Projects and additional projects.


Additional Reading Materials

Message Passing Systems

Several implementations of the MPI standard are available today. The most widely used open source MPI implementations are Open MPI and MPICH.

Here is the link to the MPI Forum.

Other useful reference material

·     Here are pointers to specs on various processors:


·       Introduction to message passing systems and parallel computing

J.J. Dongarra, G.E. Fagg, R. Hempel and D. Walker, Chapter in Wiley Encyclopedia of Electrical and Electronics Engineering, October 1999 (postscript version)


``Message Passing Interfaces'', Special issue of Parallel Computing, vol 20(4), April 1994.


Ian Foster, Designing and Building Parallel Programs, Addison-Wesley, 1995.


Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN1-55860-540-1, Morgan Kaufmann Publishers, San Francisco, 2000.


Ananth Grama et al., Introduction to Parallel Computing, 2nd edition, Pearson Education Limited, 2003.


Michael Quinn, Parallel Programming: Theory and Practice, McGraw-Hill, 1993


David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, Morgan Kaufmann, 1998.


George Almasi and Allan Gottlieb, Highly Parallel Computing, Addison Wesley, 1993


Matthew Sottile, Timothy Mattson, and Craig Rasmussen, Introduction to Concurrency in Programming Languages, Chapman & Hall, 2010


·       Other relevant books


       Stephen Chapman, Fortran 95/2003 for Scientists and Engineers, McGraw-Hill, 2007


       Stephen Chapman, MATLAB Programming for Engineers, Thompson, 2007


       Barbara Chapman, Gabriele Jost, Ruud van der Pas, and David J. Kuck, Using OpenMP: Portable Shared Memory Parallel Programming, MIT Press, 2007


      Tarek El-Ghazawi, William Carlson, Thomas Sterling, Katherine Yelick, UPC: Distributed Shared Memory Programming, John Wiley & Sons, 2005


       David Bailey, Robert Lucas, Samuel Williams, eds., Performance Tuning of Scientific Applications, Chapman & Hall, 2010


Message Passing Standards

``MPI - The Complete Reference, Volume 1, The MPI-1 Core, Second Edition'',
by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra, MIT Press, September 1998, ISBN 0-262-69215-5.


``MPI: The Complete Reference - 2nd Edition: Volume 2 - The MPI-2 Extensions'',
by William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir, published by The MIT Press, September, 1998; ISBN 0-262-57123-4.



 MPI-2.2 Standard, September 2009

 PDF format:




On-line Documentation and Information about Machines

High-performance computing systems:

·       High Performance Computing Systems: Status and outlook, Aad J. van der Steen and Jack J. Dongarra, 2012.

·       TOP500 Supercomputer Sites

·       Green 500 List of Energy-Efficient Supercomputers



Other Scientific Computing Information Sites

·      Netlib Repository at UTK/ORNL

·       BLAS Quick Reference Card

·       LAPACK

·       ScaLAPACK

·       GAMS - Guide to Available Math Software

·       Fortran Standards Working Group

·       Message Passing Interface (MPI) Forum

·       OpenMP

·       Unified Parallel C

·      DOD High Performance Computing Modernization Program

·       DOE Accelerated Strategic Computing Initiative (ASC)

·       NSF XSEDE (Extreme Science and Engineering Discovery Environment)

·       AIST Parallel and High Performance Application Software Exchange (in Japan)

                           (includes information on parallel computing conferences and journals)

·       HPCwire

·       Supercomputing Online


Related On-line Books/Textbooks

·    Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publication, Philadelphia, 1994. 

·     LAPACK Users' Guide (Third Edition), SIAM Publications, Philadelphia, 1999.

·     MPI: The Complete Reference, M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra

·     Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)

·     Designing and Building Parallel Programs. A dead-tree version of this book is available from Addison-Wesley.

·       Introduction to High-Performance Scientific Computing, by Victor Eijkhout with Edmond Chow, Robert Van De Geijn, February 2010

·       Introduction to Parallel Computing, by Blaise Barney


Performance Analysis Tools Websites

·      PAPI

·      PerfSuite

·      TAU

·      Vampir

·      Scalasca

·      HPCToolkit

·      PerfExpert

·      mpiP

·      ompP

·      Open|Speedshop

·      IPM

·      Eclipse Parallel Tools Platform

Other Online Software and Documentation

·  Matlab documentation is available from several sources, most notably by typing ``help'' into the Matlab command window. See this url

·  SuperLU is a fast implementation of sparse Gaussian elimination for sequential and parallel computers.

·  Sources of test matrices for sparse matrix algorithms

·  Matrix Market

·  University of Florida Sparse Matrix Collection

·  Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and pdf) as well as software.

·  Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.

·  Updated survey of sparse direct linear equation solvers, by Xiaoye Li

·  MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.

·  Resources for Parallel and High Performance Computing

·  PETSc: Portable, Extensible, Toolkit for Scientific Computation 

·  Issues related to Computer Arithmetic and Error Analysis

·  Efficient software for very high precision floating point arithmetic

·  Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan

·  Other notes on arithmetic, error analysis, etc. by Prof. W. Kahan

·  Report on the arithmetic error that caused the Ariane 5 rocket crash; video of the explosion

·  The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.

Jack Dongarra