CS 594-02 Scientific Computing for Engineers: Spring 2008 – 3 Credits

This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.

Wednesdays in Room C233 (NOTE: This is a room change)

Prof. Jack Dongarra, with help from Profs. George Bosilca, Jakub Kurzak, Karl Fuerlinger, and Stan Tomov
Email: dongarra@eecs.utk.edu; Phone: 865-974-8295; Fax: 865-974-8296
Office hours: Wednesday

TA: Gwang Son, son@eecs.utk.edu
TA's Office: Claxton 349; Phone: 974-3760
TA's Office Hours: Wednesdays 11:00 – 1:00 or by appointment

There will be four major aspects of the course:
· Part I will start with current trends in high-end computing systems and environments, and continue with a short, practical introduction to parallel programming with MPI, OpenMP, and pthreads (a minimal OpenMP sketch appears after this list).
· Part II will illustrate the modeling of problems from physics and engineering in terms of partial differential equations (PDEs) and their numerical discretization using finite difference, finite element, and spectral approximations.
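As a flavor of the shared-memory programming covered in Part I, here is a minimal, illustrative OpenMP sketch in C (not part of the course materials): it sums a vector in parallel using a reduction clause.

/* Minimal OpenMP sketch: parallel sum of a vector with a reduction.
 * Illustrative only. Compile with, e.g., gcc -fopenmp openmp_sum.c */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    const int n = 1000000;
    double *x = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++) x[i] = 1.0 / (i + 1);

    double sum = 0.0;
    /* Each thread accumulates a private partial sum; OpenMP combines them. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += x[i];

    printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
    free(x);
    return 0;
}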
The grade will be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project are flexible and can follow the student's major area of research.

Class Roster: If your name is not on the list or some information is incorrect, please send mail to the TA (son@eecs.utk.edu) and to the course mailing list: cs594parallel-students@cs.utk.edu
Textbook: The Sourcebook of Parallel Computing, edited by Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon, and Andy White, October 2002, 760 pages, ISBN 1-55860-871-0, Morgan Kaufmann Publishers.

Lecture Notes (tentative outline of the class):
Introduction to High Performance Computing. Read Chapters 1, 2, and 9. Homework 1 (due January 23, 2008).
Introduction to the Cell Processor. Read Chapter 3. Homework 2 (due February 6, 2008).
Parallel Programming Paradigms. Notes on booting over the network. Read Chapter 11.
Message Passing Interface (MPI). Homework 3 (due February 18, 2008).
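To accompany the MPI lecture, here is a minimal, illustrative C sketch (not an assigned program): every rank prints a greeting, and rank 1 sends an integer to rank 0 with a blocking send/receive pair.

/* Minimal MPI sketch. Compile with mpicc and run with mpirun. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    if (size > 1) {
        int token = 0;
        if (rank == 1) {
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank 1\n", token);
        }
    }
    MPI_Finalize();
    return 0;
}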
Floating Point Arithmetic, Memory Hierarchy and Cache. Homework 4 (due March 5, 2008). Some disasters attributable to bad numerical computing. Read Chapter 3.
Toward an Optimal Algorithm for Matrix Multiply. Read Chapter 20 and Bailey's paper on "12 ways to fool …". Homework 5 (due March 12, 2008).
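The matrix-multiply lecture is about exploiting the memory hierarchy; here is a hedged C sketch of the standard cache-blocking (tiling) idea, with a hypothetical block size BS that would need tuning for the target cache. It is an illustration, not the course's reference implementation.

/* Cache-blocked (tiled) matrix multiply sketch: C = C + A*B for n x n
 * row-major matrices. BS is a tuning parameter chosen so that three
 * BS x BS tiles fit in cache. Illustrative only. */
#define BS 64   /* hypothetical block size; tune for the target machine */

void matmul_blocked(int n, const double *A, const double *B, double *C) {
    for (int ii = 0; ii < n; ii += BS)
        for (int kk = 0; kk < n; kk += BS)
            for (int jj = 0; jj < n; jj += BS)
                /* Multiply one pair of tiles; the bounds handle edge tiles. */
                for (int i = ii; i < ii + BS && i < n; i++)
                    for (int k = kk; k < kk + BS && k < n; k++) {
                        double aik = A[i*n + k];
                        for (int j = jj; j < jj + BS && j < n; j++)
                            C[i*n + j] += aik * B[k*n + j];
                    }
}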
Projection and its importance in scientific computing. Homework 6 (due March 26, 2008). Matlab script myqr_it.m. Homework 7 (due April 2, 2008). Read Chapter 15.
March 19 – Spring Break.
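To complement the projection lecture above (myqr_it.m is the actual Matlab script used in class), here is a small, hedged C sketch of the basic orthogonal-projection step that underlies Gram-Schmidt orthogonalization and QR-based methods; the names are illustrative only.

/* Orthogonal projection sketch: remove from v its component along a unit
 * vector q, i.e. r = v - (q^T v) q. Illustrative only. */
#include <stddef.h>

static double dot(size_t n, const double *x, const double *y) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) s += x[i] * y[i];
    return s;
}

/* r receives the component of v orthogonal to q (q assumed unit norm). */
void project_out(size_t n, const double *q, const double *v, double *r) {
    double c = dot(n, q, v);          /* coefficient of v along q */
    for (size_t i = 0; i < n; i++)
        r[i] = v[i] - c * q[i];       /* subtract the projection onto q */
}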
Dense Linear Algebra (part 2). Homework 8 and tar file (due April 9, 2008). Read Chapter 14, pp. 409–442.
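For the dense linear algebra lectures, here is a hedged sketch of the textbook unblocked LU factorization kernel in C. Production libraries such as LAPACK add partial pivoting and blocking for the memory hierarchy; this is only the core idea, not the course's code.

/* Unblocked, in-place LU factorization sketch (no pivoting).
 * A is n x n, row-major; on exit it holds L (unit lower) and U. */
void lu_nopivot(int n, double *A) {
    for (int k = 0; k < n; k++) {
        for (int i = k + 1; i < n; i++) {
            A[i*n + k] /= A[k*n + k];                   /* multiplier l_ik */
            for (int j = k + 1; j < n; j++)
                A[i*n + j] -= A[i*n + k] * A[k*n + j];  /* trailing update */
        }
    }
}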
April 2 – (Dr. Tomov) Discretization of PDEs and tools for the parallel solution of the resulting systems, and Mesh Generation and Load Balancing. Homework 9, discussion of HW9, tar file (due April 16, 2008).
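As a concrete illustration of PDE discretization, here is a hedged C sketch of the standard second-order finite-difference operator for the 1D Poisson problem -u'' = f on (0,1) with u(0) = u(1) = 0; the stencil gives (1/h^2)(-u[i-1] + 2u[i] - u[i+1]). The function name and interface are assumptions for illustration, not course code.

/* Apply the 1D Poisson finite-difference operator to a vector u of n
 * interior unknowns with mesh spacing h; zero Dirichlet boundaries. */
void apply_poisson_1d(int n, double h, const double *u, double *Au) {
    for (int i = 0; i < n; i++) {
        double left  = (i > 0)     ? u[i-1] : 0.0;   /* boundary u(0) = 0 */
        double right = (i < n - 1) ? u[i+1] : 0.0;   /* boundary u(1) = 0 */
        Au[i] = (-left + 2.0*u[i] - right) / (h*h);
    }
}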
Sparse matrices and optimized parallel implementations.
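The basic kernel behind most sparse computations is the sparse matrix-vector product; here is a hedged C sketch using compressed sparse row (CSR) storage, for illustration only.

/* Sparse matrix-vector product y = A*x with A in CSR format:
 * row_ptr has n+1 entries; col_idx and val hold the nonzeros. */
void spmv_csr(int n, const int *row_ptr, const int *col_idx,
              const double *val, const double *x, double *y) {
    for (int i = 0; i < n; i++) {
        double s = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i+1]; k++)
            s += val[k] * x[col_idx[k]];   /* accumulate nonzeros of row i */
        y[i] = s;
    }
}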
Iterative Methods in Linear Algebra (part 1). Read Chapters 20 and 21.
Iterative Methods in Linear Algebra (part 2). Read Chapter 21.
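For the iterative-methods lectures, here is a hedged C sketch of the conjugate gradient method for a symmetric positive definite system Ax = b, in the spirit of the "Templates" book listed in the references below. The matrix is supplied as a matrix-vector callback so any storage format (dense, CSR, matrix-free) can be used; the interface is an assumption for illustration only.

#include <stdlib.h>
#include <math.h>

typedef void (*matvec_fn)(int n, const double *x, double *y, void *ctx);

/* Unpreconditioned CG sketch: iterate until ||r|| <= tol or maxit steps. */
void cg(int n, matvec_fn A, void *ctx, const double *b, double *x,
        int maxit, double tol) {
    double *r = malloc(n * sizeof *r), *p = malloc(n * sizeof *p),
           *Ap = malloc(n * sizeof *Ap);
    A(n, x, Ap, ctx);                               /* r = b - A*x, p = r */
    double rr = 0.0;
    for (int i = 0; i < n; i++) { r[i] = b[i] - Ap[i]; p[i] = r[i]; rr += r[i]*r[i]; }

    for (int it = 0; it < maxit && sqrt(rr) > tol; it++) {
        A(n, p, Ap, ctx);
        double pAp = 0.0;
        for (int i = 0; i < n; i++) pAp += p[i] * Ap[i];
        double alpha = rr / pAp;                    /* step length */
        double rr_new = 0.0;
        for (int i = 0; i < n; i++) {
            x[i] += alpha * p[i];                   /* update solution */
            r[i] -= alpha * Ap[i];                  /* update residual */
            rr_new += r[i] * r[i];
        }
        double beta = rr_new / rr;                  /* new search direction */
        for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
    free(r); free(p); free(Ap);
}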
Class Final Reports

Order of presentation:
1. Wes Kendall – Matching patterns with climate data
2. Rick Weber – Optimizing one of the components of MADNESS
3. Gwang Son – Google
4. Roland Schulz – 3D FFTs with 2D decomposition
5. Dilip Patlolla – Strassen matrix multiply on the GPU, and an implementation of Strassen tuned for the memory hierarchy
6. Benjamin Lindner – Biological crystallography
7. Junkyu Lee – Lanczos method for symmetric eigenvalue problems
8. Yinan Li – GridSolve request sequencing
9. Rajib Kumar Nath – Loop transformation
10. Supriya Kilambi – Conjugate gradient method on GPUs
11. Bruce Johnson – Radiosity calculation on GPUs
12. Akila Gothandaraman – Implement a parallel Quantum Monte Carlo software application and study its performance
13. Reuben Budiardja – Implementation of an FFT-based Poisson solver for a self-gravitating system on a 3D mesh

· Project reports are to be turned in on Tuesday, April 29th.
· Here are some ideas for projects:
  o Projects and additional projects.

Additional Message Passing Systems: The PVM home page.

Other useful reference material
· Here's a pointer to specs on various processors:
  http://www.cpu-world.com/CPUs/index.html
  http://www.cpu-world.com/sspec/index.html
  http://processorfinder.intel.com
· A good introduction to message passing systems: ``Message Passing Interfaces'', special issue of Parallel Computing, vol. 20(4), April 1994.
· A paper by members of the PVM team on the differences between PVM and MPI: Geist, G.A., J.A. Kohl, and P.M. Papadopoulos, ``PVM and MPI: A Comparison of Features'', Calculateurs Paralleles, 8(2), pp. 137--150, June 1996.
· Papers by members of the MPI team on the differences between PVM and MPI: ``Why Are PVM and MPI So Different'', William Gropp and Ewing Lusk (submitted to the Fourth European PVM-MPI Users' Group Meeting), and ``PVM and MPI Are Completely Different'', William Gropp and Ewing Lusk, to appear in Future Generation Computer Systems, 1998.
· Ian Foster, Designing and Building Parallel Programs, see http://www-unix.mcs.anl.gov/dbpp/
· Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN 1-55860-540-1, Morgan Kaufmann Publishers.
· Michael Quinn, Parallel Programming, see http://web.engr.oregonstate.edu/~quinn/Comparison.htm
· David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, see http://www.cs.berkeley.edu/%7Eculler/book.alpha/index.html
· George Almasi and Allan Gottlieb, Highly Parallel Computing.

Standard Books on Message Passing
· ``MPI - The Complete Reference, Volume 1: The MPI-1 Core, Second Edition''
· ``Using MPI''
· ``MPI: The Complete Reference - 2nd Edition: Volume 2 - The MPI-2 Extensions''

On-line Documentation and Information about Machines
· Overview of Recent Supercomputers, Aad J. van der Steen and Jack J. Dongarra, 2007.
· Catalog of Commercial Hardware and Software Vendors

Other Parallel Information Sites
· NHSE - National HPCC Software Exchange
· Netlib Repository at UTK/ORNL
· LAPACK
· GAMS - Guide to Available Math Software
· Supercomputing & Parallel Computing: Conferences
· Supercomputing & Parallel Computing: Journals
· High Performance Fortran (HPF) reports
· High Performance Fortran Resource List
· Major Science Research Institutions from Caltech
· Message Passing Interface (MPI) Forum
· High Performance Fortran Forum
· OpenMP
· PVM
· DoD High Performance Computing Modernization Program
· DoE Accelerated Strategic Computing Initiative (ASC)
· National Computational Science Alliance

Related On-line Textbooks
· Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods
· PVM - A Users' Guide and Tutorial for Networked Parallel Computing, MIT Press
· MPI: A Message-Passing Interface Standard
· LAPACK Users' Guide (Second Edition)
· MPI: The Complete Reference, MIT Press
· Using MPI: Portable Parallel Programming with the Message-Passing Interface, by W. Gropp, E. Lusk, and A. Skjellum
· Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)
· Computational Science Education Project
· Designing and Building Parallel Programs. A dead-tree version of this book is available from Addison-Wesley.
· High Performance Fortran (HPF), a course offered by
For performance analysis:
· Raj Jain, The Art of Computer Systems Performance Analysis. John Wiley, 1991.

Papers on performance analysis tools:
· Ruth A. Aydt, "The Pablo Self-Defining Data Format", November 1997.
· Jeffrey K. Hollingsworth, Barton P. Miller, Marcelo J. R. Gongalves, Oscar Naim, Zhichen Xu, and Ling Zheng, "MDL: A Language and Compiler for Dynamic Program Instrumentation", International Conference on Parallel Architectures and Compilation Techniques, San Francisco, CA, November 1997.
· Barton P. Miller, Mark D. Callaghan, Jonathan M. Cargille, Jeffrey K. Hollingsworth, R. Bruce Irvin, Karen L. Karavanic, Krishna Kunchithapadam, and Tia Newhall, "The Paradyn Parallel Performance Measurement Tools", IEEE Computer 28(11), November 1995.
· Steven T. Hackstadt and Allen D. Malony, "Distributed Array Query and Visualization for High Performance Fortran", February 1996.
· Jerry Yan, Sekhar Sarukkai, and Pankaj Mehra, "Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs using the AIMS Toolkit", Software Practice and Experience 25(4), April 1995, pp. 429--461.

Other Online Software and Documentation
· Matlab documentation is available from several sources, most notably by typing ``help'' into the Matlab command window. A primer (for version 4.0/4.1 of Matlab, not too different from the current version) is available in either postscript or pdf.
· Netlib, a repository of numerical software and related documentation.
· Netlib Search Facility, a way to search for the software on Netlib that you need.
· GAMS - Guide to Available Math Software, another search facility to find numerical software.
· Linear Algebra Software Libraries and Collections
· LAPACK, state-of-the-art software for dense numerical linear algebra on workstations and shared-memory parallel computers. Written in Fortran.
· CLAPACK, a C version of LAPACK.
· ScaLAPACK, a partial version of LAPACK for distributed-memory parallel computers.
· LINPACK and EISPACK are precursors of LAPACK, dealing with linear systems and eigenvalue problems, respectively.
· SuperLU is a fast implementation of sparse Gaussian elimination for sequential and parallel computers.
· Sources of test matrices for sparse matrix algorithms
· University of Florida Sparse Matrix Collection
· Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and postscript) as well as software.
· Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.
· Updated survey of sparse direct linear equation solvers, by Xiaoye Li.
· MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.
· Resources for Parallel and High Performance Computing
· Millennium, a UC Berkeley campus-wide parallel computing resource.
· Resources for CS 267, Applications of Parallel Computers.
· ACTS (Advanced CompuTational Software) is a set of software tools that make it easier for programmers to write high performance scientific applications for parallel computers.
· PETSc: Portable, Extensible Toolkit for Scientific Computation.
· NHSE - National High Performance Computing and Communications Software Exchange, pointers to related work across the country.
· Issues related to Computer Arithmetic and Error Analysis
· Efficient software for very high precision floating point arithmetic.
· Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan.
· Other notes on arithmetic, error analysis, etc., by Prof. W. Kahan.
· Report on the arithmetic error that caused the Ariane 5 rocket crash.
· The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.

4/23/2008