
High Performance Computing Systems and Applications

(Supercomputer, Lawrence Livermore National Laboratory)
 

- Overview

High performance computing (HPC) is the practice of using parallel data processing to perform complex calculations at high speeds across multiple servers. HPC can take the form of custom-built supercomputers or groups of individual computers called clusters.

HPC achieves its goals by aggregating computing power, which allows even advanced applications to run efficiently, reliably, and quickly. This aggregate computing power enables different science, business, and engineering organizations to solve large problems that would otherwise be unapproachable.

 

- High-Performance Computing (HPC)

High Performance Computing (HPC) most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business. 

HPC is the use of parallel processing and supercomputers to run advanced, complex application programs. The field focuses on building parallel processing systems that integrate both administrative and computational techniques. For decades, HPC and supercomputing were intrinsically linked.
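
As a rough illustration of this parallel-processing idea, the sketch below splits a large computation into independent chunks and runs them on several cores at once using Python's standard multiprocessing module. This is only a single-machine analogy, not a real HPC workflow: clusters scale the same pattern across many nodes with tools such as MPI and a batch scheduler, and simulate_chunk is a hypothetical stand-in for real scientific code.

  # Minimal single-machine sketch of the parallel-processing idea behind HPC:
  # split a large computation into independent chunks and run them concurrently.
  from multiprocessing import Pool

  def simulate_chunk(chunk_id: int) -> float:
      """Hypothetical stand-in for one piece of a large scientific computation."""
      total = 0.0
      for i in range(1_000_000):
          total += (chunk_id * 1_000_000 + i) ** 0.5
      return total

  if __name__ == "__main__":
      with Pool(processes=8) as pool:                  # 8 worker processes
          partial_results = pool.map(simulate_chunk, range(32))
      print("aggregated result:", sum(partial_results))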

 

- Why Do We Need HPC?

Specialized computing resources are necessary to help researchers and scientists extract insights from massive data sets. Scientists need HPC when they hit a tipping point. At some point in research, there is a need to:

  • Expand the current study area (regional → national → global)
  • Integrate new data
  • Increase model resolution

 

But … processing on a desktop or a single server no longer works. Some typical computational barriers:

  • Time – processing on local systems is too slow or not feasible
  • CPU capacity – only one model can be run at a time (see the sketch after this list)
  • Software – state-of-the-art techniques and tools must be developed, implemented, and disseminated so that models can be applied more effectively to today’s decision-making
  • Management of computer systems – science groups don’t want to purchase and manage local computer systems; they want to focus on the science
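
The sketch below illustrates how a cluster removes the "one model at a time" barrier: many independent model configurations are launched concurrently instead of sequentially. The run_model function, the resolutions, and the regions are hypothetical placeholders, not part of any real modeling code.

  # Hypothetical parameter sweep: run many model configurations concurrently.
  # On a single workstation these runs would queue up one after another; on an
  # HPC cluster each configuration can occupy its own cores or its own node.
  from concurrent.futures import ProcessPoolExecutor
  import itertools

  def run_model(resolution_km: float, region: str) -> dict:
      """Placeholder for launching one model configuration."""
      cells = int((40_000 / resolution_km) ** 2)       # crude proxy for grid size
      return {"region": region, "resolution_km": resolution_km, "cells": cells}

  if __name__ == "__main__":
      resolutions = [100, 50, 25, 10]                  # finer resolution -> more cells
      regions = ["regional", "national", "global"]
      configs = list(itertools.product(resolutions, regions))

      with ProcessPoolExecutor() as pool:
          results = list(pool.map(run_model, *zip(*configs)))
      for r in results:
          print(r)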

 


- Exascale Computing

A supercomputer can contain hundreds of thousands of processor cores and can require entire buildings to house and cool, not to mention millions of dollars to build and maintain. Despite these challenges, more and more such systems are coming online as the U.S. and China develop new "exascale" supercomputers that promise roughly a fivefold performance boost over current leading systems.

Exascale computing is the next milestone in supercomputer development. Exascale computers, capable of processing information faster than today's most powerful supercomputers, will give scientists a new tool for solving some of the biggest challenges facing our world, from climate change to understanding cancer to designing new materials.

Exascale computers are digital computers, broadly similar to today's computers and supercomputers, but with more powerful hardware. This makes them different from quantum computers, which represent an entirely new approach to building computers suitable for specific types of problems.
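
For a sense of scale, one exaFLOPS is 10^18 floating-point operations per second, roughly a thousand times the rate of a petascale system. The back-of-the-envelope sketch below compares how long an idealized workload would take at each rate; the workload size and machine rates are illustrative assumptions, not benchmark figures.

  # Back-of-the-envelope comparison of petascale vs. exascale rates.
  # All numbers are illustrative assumptions (idealized, ignoring efficiency).
  PETAFLOPS = 1e15        # operations per second
  EXAFLOPS = 1e18         # operations per second (the exascale milestone)

  workload_ops = 1e21     # hypothetical simulation needing 10**21 operations

  for name, rate in [("petascale machine", PETAFLOPS), ("exascale machine", EXAFLOPS)]:
      seconds = workload_ops / rate
      print(f"{name}: {seconds:,.0f} s (about {seconds / 3600:.1f} h)")
  # -> roughly 278 hours at petascale vs. about 17 minutes at exascale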

 

- Memory Centric Computing

Driven by the development of artificial intelligence and 5G networks, the ICT industry is reaching a new turning point. Intelligent services have become part of daily life, and the world we live in is becoming more and more intelligent, yet many problems remain to be solved. In the future, artificial intelligence computing systems will play a pivotal role in this "next intelligent society," which explains why so much research aimed at greatly improving performance is being carried out across related industries.

While various approaches are being developed to build systems for an intelligent society, data transfer between processors and memory still limits system performance, and the increased power consumption that comes with higher bandwidth remains an issue. The memory industry is currently developing high-bandwidth memory that can meet system requirements and contribute to improved system performance, but now is the time to develop more innovative memory technologies that meet the requirements of an intelligent society.

To achieve this, we need to not only improve memory itself, but also combine computation with memory to minimize unnecessary bandwidth usage. In other words, we need to use a different system-in-package technology to place memory close to the processor. In addition, the need to change the architecture of computing systems makes cross-industry collaboration essential.
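
A common way to reason about this bottleneck is the roofline model: achievable performance is the smaller of the processor's peak compute rate and the memory bandwidth multiplied by the kernel's arithmetic intensity (floating-point operations per byte moved). The sketch below uses assumed, illustrative hardware numbers, not vendor specifications.

  # Roofline-style sketch: kernels that move many bytes per operation are
  # limited by memory bandwidth, not by the processor's peak compute rate.
  # The peak_flops and mem_bandwidth values are illustrative assumptions.
  peak_flops = 10e12          # 10 TFLOP/s of raw compute (assumed)
  mem_bandwidth = 1e12        # 1 TB/s of memory bandwidth (assumed)

  def attainable_flops(arithmetic_intensity: float) -> float:
      """min(peak compute, bandwidth * flops-per-byte)."""
      return min(peak_flops, mem_bandwidth * arithmetic_intensity)

  for ai in [0.25, 1.0, 4.0, 16.0, 64.0]:     # flops per byte moved
      print(f"{ai:5.2f} flop/byte -> {attainable_flops(ai) / 1e12:5.2f} TFLOP/s")
  # Low-intensity kernels are bandwidth-bound, which is why moving memory
  # closer to the processor (or computing in memory) raises real performance.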

 

 

[More to come ...]

 

 
