COSC 594 - 201420

Scientific Computing for Engineers:  Spring 2014 – 3 Credits  

This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.

Wednesdays from 1:30 – 4:15, Room 233 Claxton

Prof. Jack Dongarra with help from Profs. George Bosilca, Jakub Kurzak, Piotr Luszczek, Heike McCraw, Gabriel Marin, Stan Tomov



Phone: 865-974-8295

Office hours: Wednesday 11:00 - 1:00, or by appointment

TA:   Reazul Hoque

TA's Office: Claxton 309


TA's Office Hours: Wednesdays 10:00 – 12:00, or by appointment


There will be four major aspects of the course:

  • Part I will start with current trends in high-end computing systems and environments, and continue with a practical short description on parallel programming with MPI, OpenMP, and Pthreads.


  • Part II will illustrate the modeling of problems from physics and engineering in terms of partial differential equations (PDEs), and their numerical discretization using finite difference, finite element, and spectral approximation.


  • Part III will be on solvers: both iterative for the solution of sparse problems of part II, and direct for dense matrix problems.  Algorithmic and practical implementation aspects will be covered.


  • Finally in Part IV, various software tools will be surveyed and used.  This will include PETSc, Sca/LAPACK, MATLAB, and some tools and techniques for scientific debugging and performance analysis.
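
As a toy instance of Parts II and III, here is a sketch (my own illustration, not from the course materials) that discretizes the 1D Poisson problem -u'' = f on (0,1) with second-order finite differences and solves the resulting sparse system with Jacobi iteration:

```python
# Illustrative sketch: solve -u'' = f on (0,1), u(0) = u(1) = 0, with a
# second-order finite-difference scheme and Jacobi iteration.
import math

n = 50                      # number of interior grid points
h = 1.0 / (n + 1)
x = [(i + 1) * h for i in range(n)]
f = [math.pi**2 * math.sin(math.pi * xi) for xi in x]  # chosen so u = sin(pi x)

u = [0.0] * n               # initial guess
for _ in range(10000):      # Jacobi sweeps
    u_new = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0       # boundary u(0) = 0
        right = u[i + 1] if i < n - 1 else 0.0  # boundary u(1) = 0
        u_new[i] = 0.5 * (left + right + h * h * f[i])
    u = u_new

err = max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n))
print(err)                  # O(h^2) discretization error
```

Jacobi is the simplest of the iterative methods covered in Part III; the same discrete system is what PETSc's Krylov solvers (Part IV) are designed to handle at scale.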


The grade will be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project will be flexible according to the student's major area of research.










Class Roster

If your name is not on the list or some information is incorrect, please send mail to the TA:



[Roster table not fully preserved: it listed each student's first name, last name, and major; majors included Nuclear Eng., ESE (Energy Science and Eng.), Chemical Eng., and Material Science.]

Book for the Class:

The Sourcebook of Parallel Computing, Edited by Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon, Andy White, October 2002, 760 pages, ISBN 1-55860-871-0, Morgan Kaufmann Publishers.














Lecture Notes: (Tentative outline of the class)


  1. January 8th  (Dr. Dongarra)

Class Introduction

Introduction to High Performance Computing

Read Chapter 1, 2, and 9

Homework 1 (due January 22, 2014)

Tar file of timer


  2. January 15th (Dr. Luszczek)


Hybrid MPI/OpenMP programming

Homework 2 (due January 29, 2014)


  3. January 22nd (Dr. Tomov)

Projection and its importance in scientific computing


Homework 3 (due February 5, 2014)
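
The January 22nd lecture concerns projection; the core computation can be sketched in a few lines (a minimal illustration of my own, with made-up names): the orthogonal projection of b onto span{a} is p = (a·b / a·a) a, and the residual b - p is orthogonal to a.

```python
# Orthogonal projection of b onto the line spanned by a (pure Python sketch).
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project(b, a):
    """Return p = (a.b / a.a) a, the closest point to b in span{a}."""
    c = dot(a, b) / dot(a, a)
    return [c * ai for ai in a]

b = [3.0, 4.0]
a = [1.0, 0.0]
p = project(b, a)                            # p = [3.0, 0.0]
r = [bi - pi for bi, pi in zip(b, p)]        # residual
print(p, dot(r, a))                          # dot(r, a) = 0.0: r is orthogonal to a
```

The same orthogonality principle underlies least-squares fitting and the Krylov-subspace iterative methods covered later in the course.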



  4. January 29th (Dr. Tomov)

Discretization of PDEs and Tools for the Parallel Solution of the Resulting Systems

Mesh generation and load balancing

Homework 4  (due February 12, 2014)


  5. February 5th (Dr. Tomov)

Sparse Matrices and Optimized Parallel Implementations

NVIDIA's Compute Unified Device Architecture (CUDA)

SGEMM example

Homework 5 (due February 19, 2014)
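
The SGEMM example computes C ← αAB + βC. For reference, here is the unoptimized serial version of that operation (my own sketch; the CUDA kernel assigns tiles of C to thread blocks but computes the same sums):

```python
# Reference (unoptimized) version of the SGEMM operation C <- alpha*A*B + beta*C.
def sgemm(alpha, A, B, beta, C):
    m, k, n = len(A), len(B), len(B[0])
    for i in range(m):
        for j in range(n):
            s = sum(A[i][p] * B[p][j] for p in range(k))  # inner product
            C[i][j] = alpha * s + beta * C[i][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[1.0, 1.0], [1.0, 1.0]]
print(sgemm(2.0, A, B, 1.0, C))   # [[39.0, 45.0], [87.0, 101.0]]
```

An optimized GPU implementation reorders these loops for memory locality and parallelism, but any correct version must reproduce this arithmetic.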


  6. February 12th (Dr. Tomov)

Iterative Methods in Linear Algebra (Part 1)

Iterative Methods in Linear Algebra (Part 2)

Video of  CUDA -- "Better Performance at Lower Occupancy", V. Volkov

Video of OpenCL -- "What is OpenCL"

Homework 6 (due February 26, 2014)

Homework 6 tar file

High Performance Linear Algebra with Intel Xeon Phi Coprocessors
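
Of the iterative methods covered in this lecture, conjugate gradient is the workhorse for symmetric positive definite systems. A minimal pure-Python sketch (my own, for illustration only):

```python
# Minimal conjugate gradient for a symmetric positive definite system Ax = b.
def cg(A, b, iters=50):
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                  # residual b - A*x for x = 0
    p = r[:]                  # first search direction
    rs = dot(r, r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < 1e-20:    # converged
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
print(x)                      # exact solution is [1/11, 7/11]
```

In exact arithmetic CG converges in at most n steps; in practice its cost per iteration is one matrix-vector product, which is why it pairs naturally with the sparse matrix formats from the February 5th lecture.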


  7. February 19th (Heike McCraw)

Performance Modeling

Homework 7 (due March 5, 2014)
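
A basic ingredient of performance modeling is Amdahl's law: if a fraction p of the work parallelizes perfectly over n processors, the speedup is 1 / ((1 - p) + p/n). A quick illustration (my own, not from the lecture notes):

```python
# Amdahl's law: serial fraction (1 - p) bounds the achievable speedup.
def amdahl_speedup(p, n):
    """Speedup on n processors when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.95, 16))      # ~9.14x despite 16 processors
print(amdahl_speedup(0.95, 10**6))   # approaches the limit 1/(1 - p) = 20x
```

Even a 5% serial fraction caps the speedup at 20x no matter how many processors are used, which is why performance analysis focuses on finding and shrinking the serial portions.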


  8. February 26th (Dr. Marin)

Performance Analysis and Tools

Homework 8 (due March 19, 2014)

Homework 8 tar file


  9. March 5th (Dr. Kurzak)



  10. March 12th (Dr. Bosilca)

Parallel programming paradigms and their performances

Homework 9 (due March 26th, 2014)


March 19th – Spring Break


  11. March 26th (Dr. Dongarra)

Dense Linear Algebra (Part 1)

Dense Linear Algebra (Part 2)

Homework 10 (due April 9th, 2014)


  12. April 2nd (Dr. Bosilca)


Homework 11 (due April 23rd, 2014)


  13. April 9th (Dr. Bosilca)

Continue with the slides from last week


  14. April 16th – No Class


  15. April 23rd (Dr. Bosilca)



  16. May 2nd (Friday) 1:30 – 4:30 (Dr. Dongarra)

Schedule of Class Final Reports


1:30 Cole Gentry

1:50 Anthony Gianfraesco

2:10 Jason Jiahui Guo  

2:30 Reazul Hoque

2:50 Artem Maksov

3:10 Marshall Mcdonnell

3:30 Christopher Ostrouchov

3:50 Thananon Patinyasakdikul

4:10 Chunyan Tang


The final project is to describe and demonstrate what you have learned in class. The idea is to take an application and implement it on a parallel computer. Describe what the application is and why it is important, describe the parallel implementation, examine its performance, and, if possible, compare it to another implementation. Write this up in a 10-15 page report; in class you will have 20 minutes to present your work.



Here are some ideas for projects:

  • Projects and additional projects.


Additional Reading Materials

Message Passing Systems

Several implementations of the MPI standard are available today. The most widely used open source MPI implementations are Open MPI and MPICH.

Here is the link to the MPI Forum.

Other useful reference material

    Here are pointers to specs on various processors:


             Introduction to message passing systems and parallel computing

J.J. Dongarra, G.E. Fagg, R. Hempel, and D. Walker, chapter in Wiley Encyclopedia of Electrical and Electronics Engineering, October 1999 (PostScript version)


``Message Passing Interfaces'', Special issue of Parallel Computing, vol 20(4), April 1994.


Ian Foster, Designing and Building Parallel Programs.


Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN 1-55860-540-1, Morgan Kaufmann Publishers, San Francisco, 2000.


Ananth Grama et al., Introduction to Parallel Computing, 2nd edition, Pearson Education Limited, 2003.


Michael Quinn, Parallel Programming: Theory and Practice, McGraw-Hill, 1993


David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, Morgan Kaufmann, 1998.


George Almasi and Allan Gottlieb, Highly Parallel Computing, Addison Wesley, 1993


Matthew Sottile, Timothy Mattson, and Craig Rasmussen, Introduction to Concurrency in Programming Languages, Chapman & Hall, 2010


             Other relevant books


       Stephen Chapman, Fortran 95/2003 for Scientists and Engineers, McGraw-Hill, 2007


       Stephen Chapman, MATLAB Programming for Engineers, Thomson, 2007


       Barbara Chapman, Gabriele Jost, Ruud van der Pas, and David J. Kuck, Using OpenMP: Portable Shared Memory Parallel Programming, MIT Press, 2007


      Tarek El-Ghazawi, William Carlson, Thomas Sterling, Katherine Yelick, UPC: Distributed Shared Memory Programming, John Wiley & Sons, 2005


       David Bailey, Robert Lucas, Samuel Williams, eds., Performance Tuning of Scientific Applications, Chapman & Hall, 2010


Message Passing Standards

``MPI - The Complete Reference, Volume 1, The MPI-1 Core, Second Edition'',
by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra, MIT Press, September 1998, ISBN 0-262-69215-5.


``MPI: The Complete Reference - 2nd Edition: Volume 2 - The MPI-2 Extensions'',
by William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir, published by The MIT Press, September, 1998; ISBN 0-262-57123-4.



 MPI-2.2 Standard, September 2009 (PDF format)




On-line Documentation and Information about Machines

High-performance computing systems:

             High Performance Computing Systems: Status and outlook, Aad J. van der Steen and Jack J. Dongarra, 2012.

             TOP500 Supercomputer Sites

             Green500 List of Energy-Efficient Supercomputers



Other Scientific Computing Information Sites

            Netlib Repository at UTK/ORNL

             BLAS Quick Reference Card



             GAMS - Guide to Available Math Software

             Fortran Standards Working Group

             Message Passing Interface (MPI) Forum


             Unified Parallel C

             DOD High Performance Computing Modernization Program

             DOE Accelerated Strategic Computing Initiative (ASC)

             NSF XSEDE (Extreme Science and Engineering Discovery Environment)

             AIST Parallel and High Performance Application Software Exchange (in Japan)

                           (includes information on parallel computing conferences and journals)


             Supercomputing Online


Related On-line Books/Textbooks

   Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publication, Philadelphia, 1994. 

    LAPACK Users' Guide (Second Edition), SIAM Publications, Philadelphia, 1995.

    Using MPI: Portable Parallel Programming with the Message-Passing Interface by W. Gropp, E. Lusk, and A. Skjellum

     Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)

     Designing and Building Parallel Programs. A print version of this book is available from Addison-Wesley.

             Introduction to High-Performance Scientific Computing, by Victor Eijkhout with Edmond Chow, Robert Van De Geijn, February 2010

             Introduction to Parallel Computing, by Blaise Barney


Performance Analysis Tools Websites

            Eclipse Parallel Tools Platform

Other Online Software and Documentation

  Matlab documentation is available from several sources, most notably by typing ``help'' into the Matlab command window.

  SuperLU is a fast implementation of sparse Gaussian elimination for sequential and parallel computers.

  Sources of test matrices for sparse matrix algorithms

  Matrix Market

  University of Florida Sparse Matrix Collection

  Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and pdf) as well as software.

  Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.

  Updated survey of sparse direct linear equation solvers, by Xiaoye Li

  MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.

  Resources for Parallel and High Performance Computing

  ACTS (Advanced CompuTational Software) is a set of software tools that make it easier for programmers to write high performance scientific applications for parallel computers.

  PETSc: Portable, Extensible, Toolkit for Scientific Computation 

  Issues related to Computer Arithmetic and Error Analysis

  Efficient software for very high precision floating point arithmetic

  Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan

  Other notes on arithmetic, error analysis, etc. by Prof. W. Kahan

  Report on the arithmetic error that caused the Ariane 5 rocket crash; video of the explosion

  The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.

Jack Dongarra