Introduction to Parallel Computing
Parallel computing, also known as concurrent computing, refers to a group of independent processors working collaboratively to solve a large computational problem. It is motivated by the need to reduce execution time and to utilize larger memory and storage resources. The essence of parallel computing is to partition and distribute the entire computational work among the involved processors. However, the hardware architecture of any multi-processor computer is quite different from that of a single-processor computer, thus requiring specially adapted parallel software. Although the message passing programming model, especially in terms of the MPI standard, promotes a standardized approach to writing parallel programs, writing such programs can still be a complicated and error-prone task. To reduce these difficulties, we apply object-oriented programming techniques in creating new software modules, tools, and programming rules, which may greatly decrease the user effort needed to develop parallel code. These issues will be discussed in this chapter within the framework of Diffpack.
Introduction to Parallel Computing
A Different Performance Model
Basic Parallel Programming with Diffpack
Parallelizing Explicit FD Schemes
Parallelizing FE Computations on Unstructured Grids