Objective: The objective of the subject is to provide the knowledge and basic applications of parallel processing concepts, parallel environments and architectures, parallel algorithms and parallel programming.
Pre-Requisite: ECP1026 Algorithm and Data Structures
Contact Hours: 48 hours
Final Examination: 60%
References:
Barry Wilkinson and Michael Allen, "Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers", Pearson Prentice Hall, 2004. (Textbook)
Ananth Grama, Anshul Gupta, George Karypis and Vipin Kumar, "Introduction to Parallel Computing", Addison-Wesley, 2003. (Textbook)
Harry F. Jordan and Gita Alaghband, "Fundamentals of Parallel Processing", Prentice Hall, 2003.
Peter S. Pacheco, "Parallel Programming with MPI", Morgan Kaufmann Publishers, Inc., 1999.
Rajkumar Buyya, "High Performance Cluster Computing: Programming and Applications, Volume 2", Prentice Hall PTR, 1999.
Introduction to Parallel Computing (Part I)
Motivations for parallelism, scope of parallel computing, parallel paradigms, parallel programming environments, physical organization of parallel platforms and computational speed.
Parallel Processing on Shared Memory
Shared memory multiprocessors and chip-level multiprocessors (CMP or multi-core), concurrent process creation (UNIX heavyweight processes and threads), shared data access, shared memory synchronization (locks, barriers, semaphores, deadlock, etc.), POSIX Threads API (Pthreads), OpenMP, sample applications/programs.
Introduction to Parallel Computing (Part II)
Motivations for interconnection, communication methods, deadlocks and cluster computing.
Analytical Modeling of Parallel Programs
Basics of message passing programming, performance metrics for parallel systems (execution time, overhead, speedup, efficiency, cost, etc.), analytical evaluation of communication operations and parallel programs.
Message Passing Paradigms
Message Passing Interface (MPI), Parallel Virtual Machine (PVM), sample applications/programs.
Parallel Algorithm Design
Partitioning and divide-and-conquer strategies, pipelined computations, embarrassingly parallel computations, other parallel algorithm models (data-parallel, task graph, work pool, master-slave, hybrid, etc.), sample applications/programs.
Synchronous Computations
Barriers, synchronized computations, sample applications/programs.
At the completion of the subject, students should be able to perform the following tasks: