
Parallel Supercomputing

[Image: California Institute of Technology, US News]
 

- Overview

Parallel computing is the concurrent use of multiple processors (CPUs) to do computational work. In traditional (serial) programming, a single processor executes program instructions in a step-by-step manner. Some operations, however, have multiple steps that do not have time dependencies and therefore can be separated into multiple tasks to be executed simultaneously. 

For example, adding a number to every element of a matrix does not require that the sum for one element be computed before the sum for the next. The elements can be distributed among several processors and the sums performed simultaneously, so the result is available faster than if all the operations had been performed serially. 
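
The sketch below illustrates this matrix example in a few lines of Python, using the standard multiprocessing module. The row-wise split, the value added (5), and the worker count are illustrative assumptions, not part of any particular system.

  # A minimal sketch of the matrix example above, using Python's standard
  # multiprocessing library. The row-wise split is an illustrative choice.
  from multiprocessing import Pool

  def add_five_to_row(row):
      # Each element can be summed independently of the others, so whole
      # rows can be handed to different processors with no coordination.
      return [x + 5 for x in row]

  if __name__ == "__main__":
      matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
      with Pool(processes=3) as pool:
          # The rows are processed concurrently by the worker processes.
          result = pool.map(add_five_to_row, matrix)
      print(result)  # [[6, 7, 8], [9, 10, 11], [12, 13, 14]]

Because no row depends on the result of another, the order in which the workers finish has no effect on the final answer.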

Parallel computations can be performed on shared-memory systems with multiple CPUs, on distributed-memory clusters made up of smaller shared-memory systems, or on single-CPU systems with multiple cores. Coordinating the concurrent work of the multiple processors and synchronizing the results are handled by program calls to parallel libraries; these tasks usually require parallel programming expertise.
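
As a rough sketch of how program calls to a parallel library coordinate distributed-memory work, the example below uses the mpi4py bindings and assumes an MPI runtime is installed; the strided data split and the sum reduction are illustrative choices.

  # A minimal distributed-memory sketch using the mpi4py library; run with
  # something like `mpiexec -n 4 python partial_sums.py` (assumed setup).
  from mpi4py import MPI

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()   # identifier of this process
  size = comm.Get_size()   # total number of processes

  # Each process sums its own strided slice of the data.
  local_sum = sum(range(rank, 100, size))

  # The library call coordinates the processes and combines their results.
  total = comm.reduce(local_sum, op=MPI.SUM, root=0)
  if rank == 0:
      print("Total:", total)  # 4950, the same result a serial sum would give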

Please refer to the following for more information:

 

- Parallel Computing for Modeling and Computation

 

- Challenges and Applications of Parallel Supercomputers 

In parallel computing, a larger task is divided into smaller subtasks, which are then distributed across multiple computers. The computers may be located in the same physical location, or they may be spread out across different geographical locations. 

Some of the main challenges of parallel computing include: 

  • Finding and expressing concurrency
  • Managing data distributions
  • Managing inter-processor communication
  • Balancing the computational load
  • Implementing the parallel algorithm correctly
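
The sketch below illustrates two of these challenges in miniature: distributing subtasks of uneven size across workers and keeping the load balanced. It again uses Python's multiprocessing module; the synthetic workloads and the dynamic chunksize setting are illustrative assumptions.

  # A minimal load-balancing sketch using Python's multiprocessing library.
  from multiprocessing import Pool

  def subtask(n):
      # Stand-in for a subtask whose cost grows with n; with naive equal-size
      # chunks, some workers would finish early while others were still busy.
      return sum(i * i for i in range(n))

  if __name__ == "__main__":
      workloads = [10_000 * (i % 7 + 1) for i in range(200)]  # uneven sizes
      with Pool(processes=4) as pool:
          # chunksize=1 hands subtasks out one at a time, so faster workers
          # pick up more of them and the computational load stays balanced.
          results = pool.map(subtask, workloads, chunksize=1)
      print(len(results), "subtasks completed")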

Parallel supercomputers have been in the mainstream of high-performance computing for the past several decades. However, their popularity is waning. 

The reasons for this decline are many: such systems are expensive to purchase and run, can be difficult to program, are slow to evolve in the face of emerging hardware technologies, and are difficult to upgrade without, in general, replacing the whole system. 

The decline of dedicated parallel supercomputers has been compounded by the emergence of commodity off-the-shelf clusters of PCs and workstations. 

Parallel computing has many commercial applications, including: 

  • Big data
  • Data mining
  • Artificial intelligence (AI)
  • Oil exploration
  • Web search engines
  • Medical imaging and diagnosis
  • Pharmaceutical design
  • Financial and economic modeling
  • Advanced graphics and virtual reality

 

[More to come ...]

 

 
