
COSC 594 – Scientific Computing for Engineers: Spring 2020 – 3 Credits

This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.

Wednesdays from 1:30 – 4:15, Room 233 Claxton
Zoom: https://tennessee.zoom.us/j/999753401

Prof. Jack Dongarra, with help from Drs. George Bosilca, Anthony Danalis, Mark Gates, Heike Jagode, Nuria Losada, Piotr Luszczek, Stan Tomov, and Jeff Larkin
Email: dongarra@icl.utk.edu
Phone: 865-974-8295
Office hours: Wednesday 11:00 – 1:00, or by appointment

TA: Neil Lindquist, nlindqu1@vol.utk.edu
TA's Office: Claxton 353
TA's Office Hours: Wednesdays 10:00 – 12:00, or by appointment

There will be four major aspects of the course: the grade will be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project are flexible according to the student's major area of research.

Class Roster

If your name is not on the list or some information is incorrect, please send mail to the TA.
Lecture Notes (tentative outline of the class):

· Introduction to High Performance Computing – Homework 1 (due January 22nd)
· Parallel programming paradigms and their performance
· Modern Directive Programming with OpenMP and OpenACC
· February 19th (Dr. Luszczek): Machine Learning with Deep Neural Networks
· March 18th: Spring Break
· April 1st (Dr. Tomov, https://tennessee.zoom.us/j/999753401): Projection and its importance in scientific computing
· Discretization of PDEs and Parallel Solvers
· Mesh generation and load balancing
· Sparse Matrices and Optimized Parallel Implementations
· April 22nd (Dr. Tomov, https://tennessee.zoom.us/j/999753401): Iterative Methods in Linear Algebra, Part 1
· Iterative Methods in Linear Algebra, Part 2
Schedule

Class Final Reports: Melissa Karman, Daniel Nichols, Alexander Teepe, Ethan Vogel

The project is to describe and demonstrate what you have learned in class. The idea is to take an application and implement it on a parallel computer. Describe what the application is and why it is important. You should describe the parallel implementation, examine its performance, and perhaps compare it to another implementation if possible. Write this up in a report of 10–15 pages; in class you will have 20 minutes to make a presentation.

Here are some ideas for projects:
· Projects and additional projects.

Additional Reading Materials

Message Passing Systems

Several implementations of the MPI standard are available today. The most widely used open-source MPI implementations are Open MPI and MPICH. Here is the link to the MPI Forum.
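Either implementation can be exercised with the classic starting example below, a sketch assuming an MPI compiler wrapper (mpicc) and launcher (mpirun) are installed:

```c
/* Minimal MPI program; works with either Open MPI or MPICH.
 * Build and run with, e.g.:
 *   mpicc hello.c -o hello && mpirun -np 4 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down cleanly */
    return 0;
}
```

Each of the launched processes runs the same program and prints its own rank; running it requires an MPI installation, so it is shown here only as a sketch.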
Other useful reference material

· Pointers to specs on various processors:
  http://www.cpu-world.com/CPUs/index.html
  http://www.cpu-world.com/sspec/index.html
  http://processorfinder.intel.com

· Introduction to message passing systems and parallel computing:
  "Message Passing Interfaces", Special issue of Parallel Computing, vol. 20(4), April 1994.
  Ian Foster, Designing and Building Parallel Programs, see http://www.mcs.anl.gov/~itf/dbpp/
  Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN 1-55860-540-1, Morgan Kaufmann Publishers, San Francisco, 2000.
  Ananth Grama et al., Introduction to Parallel Computing, 2nd edition, Pearson Education Limited, 2003.
  Michael Quinn, Parallel Programming: Theory and Practice, McGraw-Hill, 1993.
  David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, Morgan Kaufmann, 1998, see http://www.cs.berkeley.edu/%7Eculler/book.alpha/index.html
  George Almasi and Allan Gottlieb, Highly Parallel Computing, Addison-Wesley, 1993.
  Matthew Sottile, Timothy Mattson, and Craig Rasmussen, Introduction to Concurrency in Programming Languages, Chapman & Hall, 2010.

· Other relevant books:
  Stephen Chapman, Fortran 95/2003 for Scientists and Engineers, McGraw-Hill, 2007.
  Stephen Chapman, MATLAB Programming for Engineers, Thompson, 2007.
  Barbara Chapman, Gabriele Jost, Ruud van der Pas, and David J. Kuck, Using OpenMP: Portable Shared Memory Parallel Programming, MIT Press, 2007.
  Tarek El-Ghazawi, William Carlson, Thomas Sterling, Katherine Yelick, UPC: Distributed Shared Memory Programming, John Wiley & Sons, 2005.
  David Bailey, Robert Lucas, Samuel Williams, eds., Performance Tuning of Scientific Applications, Chapman & Hall, 2010.

Message Passing Standards

· "MPI – The Complete Reference, Volume 1, The MPI-1 Core, Second Edition"
· "MPI: The Complete Reference – 2nd Edition: Volume 2 – The MPI-2 Extensions"
· MPI-2.2 Standard, September 2009.
  PDF format: http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report.pdf
  Hardcover: https://fs.hlrs.de/projects/par/mpi//mpi22/
Online Documentation and Information about Machines

High-performance computing systems:
· High Performance Computing Systems: Status and Outlook, Aad J. van der Steen and Jack J. Dongarra, 2012.
· Green 500 List of Energy-Efficient Supercomputers

Other Scientific Computing Information Sites
· Netlib Repository at UTK/ORNL
· LAPACK
· GAMS – Guide to Available Math Software
· Fortran Standards Working Group
· Message Passing Interface (MPI) Forum
· OpenMP
· DOD High Performance Computing Modernization Program
· DOE Accelerated Strategic Computing Initiative (ASC)
· NSF XSEDE (Extreme Science and Engineering Discovery Environment)
· AIST Parallel and High Performance Application Software Exchange (in Japan; includes information on parallel computing conferences and journals)
· HPCwire

Related Online Books/Textbooks
· Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publications, Philadelphia, 1994.
· LAPACK Users' Guide (Third Edition), SIAM Publications, Philadelphia, 1999.
· MPI: The Complete Reference, M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra
· Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)
· Designing and Building Parallel Programs. A dead-tree version of this book is available from Addison-Wesley.
· Introduction to High-Performance Scientific Computing, by Victor Eijkhout with Edmond Chow, Robert van de Geijn, February 2010
· Introduction to Parallel Computing, by Blaise Barney

Performance Analysis Tools Websites
· PAPI
· TAU
· Vampir
· Scalasca
· mpiP
· ompP
· IPM
· Eclipse Parallel Tools Platform

Other Online Software and Documentation
· Matlab documentation is available from several sources, most notably by typing "help" into the Matlab command window. See this url.
· SuperLU is a fast implementation of sparse Gaussian elimination for sequential and parallel computers.
· Sources of test matrices for sparse matrix algorithms
· University of Florida Sparse Matrix Collection
· Templates for the solution of linear systems: a collection of iterative methods, with advice on which ones to use. The web site includes online versions of the book (in html and pdf) as well as software.
· Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.
· Updated survey of sparse direct linear equation solvers, by Xiaoye Li
· MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.
· Resources for Parallel and High Performance Computing
· PETSc: Portable, Extensible Toolkit for Scientific Computation
· Issues related to Computer Arithmetic and Error Analysis
· Efficient software for very high precision floating point arithmetic
· Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan
· Other notes on arithmetic, error analysis, etc., by Prof. W. Kahan
· Report on the arithmetic error that caused the Ariane 5 rocket crash; video of the explosion
· The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.

4/22/2020



