
High Performance and Quantum Computing

[Supermassive test: this simulation of the region around M87 shows the motion of plasma as it swirls around the black hole. The bright thin ring that can be seen in blue is the edge of the shadow. (Courtesy: L Medeiros/C Chan/D. Psaltis/F Özel/University of Arizona/Institute for Advanced Study) - Physicsworld]

 

- Overview

High-performance computing (HPC) is often used synonymously with the more familiar term "supercomputer." A supercomputer is more than just a classic computer with high-end components, although you should certainly expect to see the highest-end components in use.

Instead, a supercomputer interconnects many powerful processors and many large memory modules into one physically large classical computer. Imagine adding processors and memory modules to a desktop or laptop, but on a much larger scale.

Technically speaking, HPC is not limited to purpose-built supercomputers. The term can also be applied to large groups or "clusters" of hundreds or even thousands of independent computers. 

Each server or "node" in the cluster is connected through the network to every other node in the cluster. The aggregation of processing power and memory is conceptually similar to a supercomputer, with the main difference being that the components are distributed, potentially located around the world. 

Whether using supercomputers or clusters, the goal of HPC is to solve the most complex computing tasks by using all those powerful processors and all those memories in parallel. 

Classical computers are serial in nature, dividing workloads into tasks and then executing them sequentially. HPC is essentially no different; however, it leverages its architecture to perform larger tasks and more tasks simultaneously.

Extremely complex problems and massive datasets can then be processed orders of magnitude faster than on even the most powerful single server. Interestingly, the power of quantum computers is that they are inherently parallel, able in effect to explore many computational states simultaneously.
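
As a minimal illustration of the serial-versus-parallel distinction, the hypothetical Python sketch below splits one large job into independent chunks and runs them across several worker processes; this is the same divide-and-conquer pattern an HPC system applies across thousands of cores, just at desktop scale.

  # Minimal sketch (not an HPC-specific API): splitting a job into chunks
  # and running them in parallel with Python's standard library.
  from concurrent.futures import ProcessPoolExecutor

  def heavy_task(chunk):
      # Stand-in for an expensive computation on one slice of the data.
      return sum(x * x for x in chunk)

  if __name__ == "__main__":
      data = list(range(1_000_000))
      chunks = [data[i::4] for i in range(4)]   # four independent slices

      # Serial: each chunk is processed one after another.
      serial_result = sum(heavy_task(c) for c in chunks)

      # Parallel: the same chunks are processed simultaneously by 4 workers.
      with ProcessPoolExecutor(max_workers=4) as pool:
          parallel_result = sum(pool.map(heavy_task, chunks))

      assert serial_result == parallel_result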

Here are some examples of supercomputers:

  • AI Research SuperCluster (RSC) by Meta
  • Google Sycamore (strictly a quantum processor rather than a classical supercomputer)
  • Summit by IBM at Oak Ridge National Laboratory
  • Microsoft's cloud supercomputer for OpenAI
  • Fugaku by Fujitsu and RIKEN
  • Lonestar6 by the Texas Advanced Computing Center (TACC) at the University of Texas

 

A supercomputer can cost between $100 million and $300 million to design, build, and deploy. On average, these systems use 6-7 megawatts of power, with peak consumption nearing 9.5 megawatts. The yearly energy cost (in 2023) for such usage is approximately US$6-7 million.
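
A rough back-of-the-envelope check of those figures; the power draw comes from the paragraph above, while the electricity rate of about US$0.11 per kWh is an assumption:

  # Back-of-the-envelope energy cost estimate for a supercomputer.
  avg_power_mw = 6.5          # average draw, midpoint of the 6-7 MW range above
  hours_per_year = 24 * 365   # 8,760 hours
  price_per_kwh = 0.11        # assumed utility rate in US$ per kWh

  energy_kwh = avg_power_mw * 1_000 * hours_per_year   # ~56.9 million kWh
  annual_cost = energy_kwh * price_per_kwh             # ~US$6.3 million
  print(f"~US${annual_cost / 1e6:.1f} million per year")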

 

- The Future of High Performance Computing

High-performance computing (HPC) utilizes supercomputers and parallel processing techniques to quickly complete time-consuming tasks or complete multiple tasks simultaneously. Technologies such as edge computing and artificial intelligence (AI) can broaden the capabilities of HPC and provide high-performance processing power for various fields.

In the age of Internet computing, billions of people use the Internet every day. Supercomputer sites and large data centers must therefore provide HPC services to this massive population of Internet users simultaneously. Data centers must be upgraded with fast servers, storage systems, and high-bandwidth networks, with the aim of leveraging emerging technologies to advance web-based computing and web services.

The general computing trend is to take advantage of shared network resources and the vast amount of data on the Internet. These trends span parallel, distributed, and cloud computing with clusters, MPP (massively parallel processing) systems, P2P (peer-to-peer) networks, grids, clouds, web services, the IoT, and even quantum computing.

Data has become a driving force for business, academic, and social progress, driving significant advances in computer processing. According to UBS, the data universe is expected to grow more than 10 times from 2020 to 2030, reaching 660 zettabytes. This is equivalent to 610 iPhones (128GB) per person. HPC presents new opportunities to address emerging challenges in these areas as organizations embrace a "data-everywhere" mentality. 
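
A rough sanity check of that comparison; the global population figure and decimal byte units below are assumptions rather than details from the UBS report, and the result lands in the same ballpark as the quoted 610 phones per person:

  # Rough sanity check of the 660-zettabyte projection.
  data_universe_bytes = 660e21        # 660 ZB, decimal zettabytes
  population = 8e9                    # assumed ~8 billion people
  iphone_bytes = 128e9                # 128 GB per phone

  per_person_tb = data_universe_bytes / population / 1e12          # ~82.5 TB each
  iphones_per_person = data_universe_bytes / population / iphone_bytes
  print(round(per_person_tb, 1), round(iphones_per_person))        # ~82.5 TB, ~645 phones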

HPC is a discipline in computer science in which supercomputers are used to solve complex scientific problems. As HPC technologies have grown in computing power, other academic, government, and commercial organizations have adopted them to meet their needs for fast computing. 

Today, HPC dramatically reduces the time, hardware, and cost required to solve the mathematical problems at the heart of an organization's core functions. As a mature field of advanced computing, HPC is driving new discoveries in disciplines such as astrophysics, genomics, and medicine; it is also driving business value in less obvious industries such as financial services and agriculture.

 

- Supercomputing Technology

Supercomputing technology is a form of high-performance computing that uses multiple computers working together in parallel to solve complex problems and calculate large data sets. Supercomputers are made up of many components, including: 

 

  • Interconnects
  • Memory
  • Processor cores
  • I/O systems
  • More than one central processing unit (CPU)


Supercomputers often use high-speed interconnects and massive CPU resources. They can consist of hundreds or even thousands of nodes that work in parallel.
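
A hypothetical sketch of that node-level parallelism, written against the mpi4py bindings to the MPI standard (assuming mpi4py and an MPI runtime are installed); in a real cluster each rank would typically run on a different node and the partial results would travel over the interconnect:

  # cluster_sum.py - minimal MPI sketch: each rank computes a partial sum,
  # and rank 0 combines the results.
  # Launch with, e.g.: mpirun -n 4 python cluster_sum.py
  from mpi4py import MPI

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()          # this process's ID within the job
  size = comm.Get_size()          # total number of processes (nodes/cores)

  # Each rank works on its own slice of the overall problem.
  local_sum = sum(range(rank, 1_000_000, size))

  # Combine the partial results on rank 0 over the interconnect.
  total = comm.reduce(local_sum, op=MPI.SUM, root=0)
  if rank == 0:
      print("total =", total)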

Supercomputers play an important and growing role in various fields of national importance. They are used to solve challenging scientific and technical problems. "Supercomputer" is an umbrella term for computing systems capable of supporting high-performance computing applications requiring large numbers of processors, shared or distributed memory, and multiple disks. 

A supercomputer is a computer with the architecture, resources, and components to enable massive computing power. Today's supercomputers consist of tens of thousands of the fastest processors, capable of performing trillions of calculations per second or more.

Supercomputer performance is measured in floating-point operations per second (FLOPS) rather than millions of instructions per second (MIPS). As of today, all 500 of the fastest supercomputers in the world (the TOP500 list) run Linux-based operating systems.
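
The theoretical peak FLOPS of a machine can be estimated from its core count, clock speed, and the number of floating-point operations each core retires per cycle; the figures below are hypothetical, chosen only to show the arithmetic, and real benchmarks such as LINPACK typically achieve only a fraction of this peak.

  # Theoretical peak performance of a hypothetical cluster.
  nodes = 1_000
  cores_per_node = 64
  clock_hz = 2.0e9            # 2.0 GHz
  flops_per_cycle = 16        # e.g. wide vector units doing fused multiply-adds

  peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
  print(f"{peak_flops / 1e15:.2f} PFLOPS")   # ~2.05 petaFLOPS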

Supercomputers are primarily designed for businesses and organizations that require large amounts of computing power. Supercomputers combine the architectural and operational principles of parallel and grid processing, where a process executes simultaneously on thousands of processors or is distributed among them. 

Supercomputing technology has indelibly changed the way we approach the world's complex problems, from weather forecasting and climate modeling to keeping our nation safe from cyberattacks. All the most powerful supercomputers in the world now run on Linux.

 

- Linux and Supercomputing

Linux dominates supercomputing. The Linux operating system runs on all 500 of the fastest supercomputers in the world, which helps advance artificial intelligence, machine learning, and even COVID-19 research.

While most modern supercomputers use the Linux operating system, each manufacturer makes its own specific changes to the Linux derivative it uses, and no industry standard exists, in part because differences in hardware architecture require the operating system to be tailored and optimized for each hardware design.

Why do supercomputers use Linux? Let's detail the characteristics of Linux that make it the best choice for supercomputers:

 

  • Modularity of Linux
  • General properties of the Linux kernel
  • Scalability
  • Open source nature
  • Community support
  • Cost

 

Given that modern massively parallel supercomputers typically separate computation from other services by using multiple types of nodes, they typically run different operating systems on different nodes: for example, a small, efficient lightweight kernel such as the Compute Node Kernel (CNK) or Compute Node Linux (CNL), a Linux derivative, on the compute nodes, but a fuller server operating system on the larger server and input/output (I/O) nodes.

 

[Prague, Czech Republic]

- Quantum Computing and Quantum Supremacy

Quantum computing is a fundamentally different approach to computing from the type of computing we do today on laptops, workstations, and mainframes. It won't replace these devices, but by leveraging the principles of quantum physics, it will solve specific and often very complex problems of a statistical nature that current computers struggle to solve.

For certain classes of problems, quantum computers may process information vastly faster than classical computers. By 2030, the quantum computing market is expected to reach $64.98 billion. Companies like Microsoft, Google, and Intel are racing to build quantum computing tools.

A quantum computer is an amazing device. While its applications are still limited at the moment, we now know that for certain tasks it can outperform the fastest classical computers available today. Quantum supremacy (also known as quantum advantage) is the point at which quantum machines surpass classical computers on such a task.

Quantum computers are not meant to replace typical computers. In practice, they will be stand-alone tools for solving complex, data-heavy problems, especially those leveraging machine learning, where systems can make predictions and improve over time.
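
To make the "inherent parallelism" idea concrete, the toy sketch below (plain NumPy, not a real quantum SDK) builds the state vector of n qubits after a Hadamard gate is applied to each one: a single register of n qubits carries 2^n amplitudes at once, which is where the potential speedups originate. Measuring the register still returns only one of those outcomes, which is why practical quantum algorithms rely on carefully engineered interference, not superposition alone.

  # Toy state-vector illustration of qubit superposition (NumPy only).
  import numpy as np

  H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

  n = 3                                          # number of qubits
  state = np.zeros(2**n)
  state[0] = 1.0                                 # start in |000>

  # Apply H to every qubit: the full n-qubit gate is the tensor product.
  H_all = H
  for _ in range(n - 1):
      H_all = np.kron(H_all, H)
  state = H_all @ state

  # All 2^n basis states now have equal amplitude 1/sqrt(2^n).
  print(state)                                   # 8 amplitudes, each ~0.3536
  print(np.allclose(state, 1 / np.sqrt(2**n)))   # True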

 

- The Future of Quantum Computing

Widespread implementation of quantum computing may still be years away. However, exploring the differences between classical and quantum computing shows why the technology is expected to become more widespread.

Quantum computers have four fundamental functions that differ from today's classical computers: (1) quantum simulation, in which quantum computers model complex molecules; (2) optimization, i.e., solving multivariate problems at unprecedented speed; (3) quantum artificial intelligence, where better algorithms can transform machine learning in industries such as pharma and automotive; and (4) prime factorization, which could revolutionize encryption.
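
As a small illustration of why item (4) matters, the sketch below factors a tiny semiprime by classical trial division; the cost of this naive loop grows roughly with the square root of the number, which becomes astronomically large for the 2048-bit moduli used in RSA, whereas Shor's algorithm on a sufficiently large quantum computer would, in principle, sidestep that growth. The numbers here are illustrative only.

  # Naive classical factoring (trial division) of a small semiprime.
  def factor(n: int):
      d = 2
      while d * d <= n:
          if n % d == 0:
              return d, n // d       # found the two factors
          d += 1
      return n, 1                    # n itself is prime

  print(factor(3_127))               # (53, 59) - instant for small numbers
  # For a 2048-bit RSA modulus, this loop would need roughly 2**1024 steps.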

There are four types of quantum computers currently under development that use:

 

  • Light particles (photons)
  • Trapped ions
  • Superconducting qubits
  • Nitrogen-vacancy centers in diamonds

 

Quantum computers will enable many useful applications, such as simulating many variations of chemical reactions to discover new drugs, developing new imaging techniques for healthcare to better detect problems in the body, or speeding up the design of batteries, new materials, and flexible electronics.

 

- The Way Forward: Bringing HPC and Quantum Computing Together

Classical computing has been the norm for decades, but in recent years, quantum computing has continued to advance rapidly. The technology is still in its early stages, but has existing and many more potential uses in artificial intelligence/machine learning, cybersecurity, modeling and other applications. Widespread implementation of quantum computing may still be years away.

When approaching the design, development and integration of quantum computing solutions, it is important to keep in mind that for the foreseeable future, quantum computers will act as computing accelerators requiring substantial classical computing support. 

Whether using supercomputers or clusters, the goal of HPC remains the same: to solve the most complex computing tasks by using all those powerful processors and all that memory in parallel, processing extremely complex problems and massive datasets far faster than any single server could. Quantum computers, in turn, are inherently parallel, able in effect to explore many computational states simultaneously, which makes them natural accelerators alongside HPC.

HPC resources, and increasingly quantum computers, can be accessed via the cloud. However, the Internet can introduce latency issues: data transfer becomes a bottleneck that slows down calculations.

This bottleneck can occur if HPC and quantum computing resources are connected through a network. So part of the drive to integrate quantum computers into HPC centers is to eliminate this latency and facilitate the fastest possible data transfer.
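
A rough illustration of that bottleneck, using assumed rather than measured link speeds: moving the same intermediate results over a wide-area link takes orders of magnitude longer than moving them over a local cluster interconnect, which is one practical argument for co-locating the machines.

  # Time to move 10 GB of intermediate results over two kinds of links.
  data_bytes = 10e9                       # 10 GB of data

  wan_bps = 1e9                           # assumed 1 Gb/s Internet link
  hpc_bps = 200e9                         # assumed 200 Gb/s cluster interconnect

  wan_seconds = data_bytes * 8 / wan_bps  # ~80 s over the wide-area network
  hpc_seconds = data_bytes * 8 / hpc_bps  # ~0.4 s inside the HPC center
  print(f"WAN: {wan_seconds:.1f} s   local interconnect: {hpc_seconds:.2f} s")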

A fully self-contained quantum computer is a laudable goal, but it is still far in the future. For now, the aim should be seamless interaction between quantum computers and existing HPC infrastructure.

To maximize the chance of a successful collaboration, it is best for the quantum computer to be on premises. That is, the quantum computer should be located at the HPC center. 

 

[More to come ...]

 

 
