COSC 594 – 00? Scientific Computing for Engineers: Spring 2017 – 3 Credits

This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.

Wednesdays from 1:30 – 4:15, Room 233 Claxton
Prof. Jack Dongarra, with help from Profs. Hartwig Anzt, George Bosilca, Mark Gates, Jakub Kurzak, Piotr Luszczek, Anthony Danalis, and Stan Tomov
Email: dongarra@eecs.utk.edu
Phone: 865-974-8295
Office hours: Wednesday 11:00 - 1:00, or by appointment
TA: Stephen Richmond, srichmond2486@gmail.com
TA's Office: Claxton 309
TA's Office Hours: Wednesdays 10:00 – 12:00 or by appointment
will be four major aspects of the course:
The
grade would be based on homework, a midterm project, a final project, and a
final project presentation. Topics for the final project would be flexible
according to the student's major area of research. Class
Roster If
your name is not on the list or some information is incorrect, please send
mail to the TA:
Book for the Class:

The Sourcebook of Parallel Computing, edited by Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon, and Andy White, October 2002, 760 pages, ISBN 1-55860-871-0, Morgan Kaufmann Publishers.

Lecture Notes: (Tentative outline of the class)
· Introduction to High Performance Computing (read Chapters 1, 2, and 9); Homework 1 (due January 25th, 2017)
· Parallel programming paradigms and their performance; Homework 2 (due February 15th, 2017)
· January 25th (Dr. Luszczek); Homework 3 (due February 22nd, 2017)
· Architecture and POSIX threads; Homework 4 (due March 1, 2017); Homework 5 (due March 3, 2017)
· Homework 6 (due March 22, 2017)
· Homework 7 (due March 29, 2017)
· Homework 8 (due March 22, 2017)
· March 15th: Spring Break
· Projection and its importance in scientific computing; Homework 9 (due April 5, 2017)
· Discretization of PDEs and Parallel Solvers
· Mesh generation and load balancing; Homework 10 (due April 12, 2017)
· Sparse Matrices and Optimized Parallel Implementations
· NVIDIA's Compute Unified Device Architecture (CUDA)
· Iterative Methods in Linear Algebra, Part 1
· Iterative Methods in Linear Algebra, Part 2
· Better performance at lower occupancy (linked to http://www.nvidia.com/content/gtc-2010/pdfs/2238_gtc2010.pdf)
· Introduction to OpenCL (Part 1) (linked to https://www.youtube.com/watch?v=aKtpZuokeEk)
· Introduction to OpenCL (Part 2) (linked to https://www.youtube.com/watch?v=EwHfCpCA4GU)
Schedule

Class Final Reports

The project is to describe and demonstrate what you have learned in class. The idea is to take an application and implement it on a parallel computer. Describe what the application is and why it is important. Describe the parallel implementation, examine its performance, and, if possible, compare it to another implementation (a brief timing sketch follows the list of project ideas below). Write this up in a 10-15 page report; in class you will have 20 minutes to make a presentation.

Here are some ideas for projects:
· Projects and additional projects.
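As a minimal illustration of the kind of performance measurement a project write-up might include, the sketch below times a toy parallel computation with MPI_Wtime and reports the wall-clock time on rank 0. The kernel, problem size, and decomposition are hypothetical placeholders rather than part of any assignment; running the same code with different process counts (or against a serial version) gives the data for a speedup comparison.

/* timing_sketch.c: measure the wall-clock time of a parallel section.
 * The kernel below is a stand-in for the real application being studied. */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical placeholder kernel: partial sum of 1/(i+1) over [first, first+count). */
static double compute_kernel(long first, long count)
{
    double s = 0.0;
    for (long i = first; i < first + count; i++)
        s += 1.0 / (double)(i + 1);
    return s;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long n = 100000000L;          /* total problem size (arbitrary for this sketch)   */
    long local_n = n / size;      /* each rank's share; any remainder is ignored here */

    MPI_Barrier(MPI_COMM_WORLD);  /* align all ranks before starting the clock        */
    double t0 = MPI_Wtime();

    double local = compute_kernel((long)rank * local_n, local_n);

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    /* Rank 0 reports the run; repeating with different process counts gives speedup data. */
    if (rank == 0)
        printf("processes = %d  result = %.6f  time = %.3f s\n", size, global, t1 - t0);

    MPI_Finalize();
    return 0;
}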
Additional Reading Materials

Message Passing Systems
Several implementations of the MPI standard are available today. The most widely used open-source MPI implementations are Open MPI and MPICH.
Here is the link to the MPI Forum.
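Both Open MPI and MPICH provide the mpicc compiler wrapper and an mpiexec (or mpirun) launcher, so the same source builds and runs unchanged under either. A minimal sketch (file name and process count are arbitrary examples):

/* hello_mpi.c: build and run with either implementation, e.g.
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpiexec -n 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);                 /* start the MPI runtime        */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes    */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut the MPI runtime down    */
    return 0;
}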
Other useful reference material

· Here are pointers to specs on various processors:
  o http://www.cpu-world.com/CPUs/index.html
  o http://www.cpu-world.com/sspec/index.html
  o http://processorfinder.intel.com

· Introduction to message passing systems and parallel computing
  o ``Message Passing Interfaces'', Special issue of Parallel Computing, vol. 20(4), April 1994.
  o Ian Foster, Designing and Building Parallel Programs, see http://www.mcs.anl.gov/~itf/dbpp/
  o Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN 1-55860-540-1, Morgan Kaufmann Publishers, San Francisco, 2000.
  o Ananth Grama et al., Introduction to Parallel Computing, 2nd edition, Pearson Education Limited, 2003.
  o Michael Quinn, Parallel Programming: Theory and Practice, McGraw-Hill, 1993.
  o David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, Morgan Kaufmann, 1998, see http://www.cs.berkeley.edu/%7Eculler/book.alpha/index.html
  o George Almasi and Allan Gottlieb, Highly Parallel Computing, Addison Wesley, 1993.
  o Matthew Sottile, Timothy Mattson, and Craig Rasmussen, Introduction to Concurrency in Programming Languages, Chapman & Hall, 2010.

· Other relevant books
  o Stephen Chapman, Fortran 95/2003 for Scientists and Engineers, McGraw-Hill, 2007.
  o Stephen Chapman, MATLAB Programming for Engineers, Thompson, 2007.
  o Barbara Chapman, Gabriele Jost, Ruud van der Pas, and David J. Kuck, Using OpenMP: Portable Shared Memory Parallel Programming, MIT Press, 2007.
  o Tarek El-Ghazawi, William Carlson, Thomas Sterling, and Katherine Yelick, UPC: Distributed Shared Memory Programming, John Wiley & Sons, 2005.
  o David Bailey, Robert Lucas, and Samuel Williams, eds., Performance Tuning of Scientific Applications, Chapman & Hall, 2010.

Message Passing Standards

· ``MPI - The Complete Reference, Volume 1, The MPI-1 Core, Second Edition''
· ``MPI: The Complete Reference - 2nd Edition: Volume 2 - The MPI-2 Extensions''
· MPI-2.2 Standard, September 2009. PDF format: http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report.pdf; Hardcover: https://fs.hlrs.de/projects/par/mpi//mpi22/
On-line Documentation and Information about Machines

High-performance computing systems:
· High Performance Computing Systems: Status and outlook, Aad J. van der Steen and Jack J. Dongarra, 2012.
· Green 500 List of Energy-Efficient Supercomputers

Other Scientific Computing Information Sites
· Netlib Repository at UTK/ORNL
· LAPACK
· GAMS - Guide to Available Math Software
· Fortran Standards Working Group
· Message Passing Interface (MPI) Forum
· OpenMP
· DOD High Performance Computing Modernization Program
· DOE Accelerated Strategic Computing Initiative (ASC)
· NSF XSEDE (Extreme Science and Engineering Discovery Environment)
· AIST Parallel and High Performance Application Software Exchange (in Japan) (includes information on parallel computing conferences and journals)
· HPCwire

Related On-line Books/Textbooks
· Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publications, Philadelphia, 1994.
· LAPACK Users' Guide (Third Edition), SIAM Publications, Philadelphia, 1999.
· MPI: The Complete Reference, M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra
· Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)
· Designing and Building Parallel Programs. A dead-tree version of this book is available from Addison-Wesley.
· Introduction to High-Performance Scientific Computing, by Victor Eijkhout with Edmond Chow and Robert van de Geijn, February 2010
· Introduction to Parallel Computing, by Blaise Barney

Performance Analysis Tools Websites
· PAPI
· TAU
· Vampir
· Scalasca
· mpiP
· ompP
· IPM
· Eclipse Parallel Tools Platform

Other Online Software and Documentation
· Matlab documentation is available from several sources, most notably by typing ``help'' into the Matlab command window. See this url.
· SuperLU is a fast implementation of sparse Gaussian elimination for sequential and parallel computers.
· Sources of test matrices for sparse matrix algorithms
· University of Florida Sparse Matrix Collection
· Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and pdf) as well as software.
· Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.
· Updated survey of sparse direct linear equation solvers, by Xiaoye Li
· MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used for solving linear systems arising from PDEs.
· Resources for Parallel and High Performance Computing
· PETSc: Portable, Extensible Toolkit for Scientific Computation
· Issues related to Computer Arithmetic and Error Analysis
· Efficient software for very high precision floating point arithmetic
· Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan
· Other notes on arithmetic, error analysis, etc., by Prof. W. Kahan
· Report on the arithmetic error that caused the Ariane 5 rocket crash; video of the explosion
· The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.