What Is A Supercomputer?

A supercomputer is a computer, or an array of computers acting as one collective machine, capable of processing enormous amounts of data and performing vast numbers of calculations.


An ordinary computer does one thing at a time, so it does things in a distinct series of operations; that's called serial processing. It's a bit like a person sitting at a grocery store checkout, picking up items from the conveyor belt, running them through the scanner, and then passing them on for you to pack in your bags. It doesn't matter how fast you load things onto the belt or how fast you pack them: the speed at which you check out your shopping is entirely determined by how fast the operator can scan and process the items, which is always one at a time. In a supercomputer, problems are split into parts and worked on simultaneously by thousands of processors as opposed to the one-at-a-time “serial” method. Here’s another good analogy, this one from Explainthatstuff.com:


It’s like arriving at a checkout with a cart full of items, but then splitting your items up between several different friends. Each friend can go through a separate checkout with a few of the items and pay separately. Once you’ve all paid, you can get together again, load up the cart, and leave. The more items there are and the more friends you have, the faster it gets to do things by parallel processing — at least, in theory.  Parallel processing is more like what happens in our brains.
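
To make this concrete in code, here is a minimal Python sketch of the checkout analogy; the items and the 0.1-second "scan" delay are just stand-ins for real work, and the pool of worker processes plays the role of the friends:

```python
# Toy illustration of serial vs. parallel "checkout" (illustrative only).
import time
from multiprocessing import Pool

def scan(item):
    """Pretend to scan one item; the short delay stands in for real work."""
    time.sleep(0.1)
    return item

if __name__ == "__main__":
    items = [f"item-{i}" for i in range(20)]

    # Serial: one "cashier" handles every item, one at a time.
    start = time.time()
    for item in items:
        scan(item)
    print(f"serial checkout:   {time.time() - start:.2f} s")

    # Parallel: four "friends" each take a share of the items.
    start = time.time()
    with Pool(processes=4) as pool:
        pool.map(scan, items)
    print(f"parallel checkout: {time.time() - start:.2f} s")
```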


Supercomputers are primarily designed to be used in enterprises and organizations that require massive computing power. A supercomputer incorporates architectural and operational principles from parallel and grid processing, where a process is simultaneously executed on thousands of processors or is distributed among them. Although supercomputers house thousands of processors and require substantial floor space, they contain most of the key components of a typical computer, including processors, peripheral devices, connectors, an operating system, and applications.


Modern everyday computers work on the same principle as supercomputers: they have multiple cores and/or processors, which provide high performance through parallel processing.


The size of a supercomputer can vary widely, depending on how many computers make it up: a supercomputer could consist of 10, 100, 1,000, or more computers, all working together.


You can make a supercomputer by filling a giant box with processors and getting them to cooperate on tackling a complex problem through massively parallel processing. Alternatively, you could just buy a load of off-the-shelf PCs, put them in the same room, and interconnect them using a very fast local area network (LAN) so they work in a broadly similar way. That kind of supercomputer is called a cluster.
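
As a rough sketch of how a program might run across such a cluster, the snippet below uses the mpi4py library (this assumes an MPI implementation and mpi4py are installed; the sum is just an arbitrary example). Each process is identified by a rank, works on its own share of the numbers, and the partial results are combined over the interconnect:

```python
# Minimal MPI sketch: each process sums its own slice, then rank 0 combines them.
# Run with, for example:  mpiexec -n 4 python cluster_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of cooperating processes

N = 1_000_000
# Each rank takes every size-th number starting at its own rank.
local_sum = sum(range(rank, N, size))

# Combine the partial results across the network (LAN, InfiniBand, ...).
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print("total =", total)  # same answer as sum(range(N)), computed in parallel
```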


For the past several decades and into the present day, supercomputing’s chief contribution to science has been its ever-improving ability to simulate reality, helping humans make better performance predictions and design better products in fields ranging from manufacturing and oil to pharmaceuticals and the military. Jack Dongarra, one of the world's foremost supercomputing experts, likens this ability to having a crystal ball.


“Say I want to understand what happens when two galaxies collide,” Dongarra says. “I can’t really do that experiment. I can't take two galaxies and collide them. So I have to build a model and run it on a computer. Or in the old days, when they designed a car, they would take that car and crash it into a wall to see how well it stood up to the impact. Well, that's pretty expensive and time consuming. Today, we don’t do that very often; we build a computer model with all the physics [calculations] and crash it into a simulated wall to understand where the weak points are.”


Supercomputers have been used for complex, mathematically intensive scientific problems, including simulating nuclear missile tests, forecasting the weather, simulating the climate, and testing the strength of encryption (computer security codes). In theory, a general-purpose supercomputer can be used for absolutely anything.


Now suppose you're a scientist charged with forecasting the weather, testing a new cancer drug, or modeling what the climate might be like in 2050. Problems like that push even the world's best computers to the limit. Just as you can upgrade a desktop PC with a better processor and more memory, you can do the same with a world-class computer. But there's still a limit to how fast a processor will work, and there's only so much difference more memory will make. The best way to make a difference is to use parallel processing: add more processors, split your problem into chunks, and get each processor working on a separate chunk of your problem in parallel.
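
Here is one way that chunk-splitting might look in practice. This is a toy Python sketch using the standard multiprocessing module; the squared-sum "work" is a placeholder for a real computation, not how any particular supercomputer operates:

```python
# Sketch: split one big job into chunks and hand each chunk to a separate process.
from multiprocessing import Pool

def work_on_chunk(chunk):
    """Stand-in for the real science: here we just sum the squares in the chunk."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 8
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Each worker handles its own chunk in parallel; the results are combined at the end.
    with Pool(processes=n_workers) as pool:
        partial_results = pool.map(work_on_chunk, chunks)

    print("combined result:", sum(partial_results))
```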


If you have a problem (like forecasting the world's weather for next week) that seems to split neatly into separate sub-problems (making forecasts for each separate country), that's one thing. Computer scientists refer to complex problems like this, which can be split up easily into independent pieces, as embarrassingly parallel computations (EPC) — because they are trivially easy to divide.
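
Because every sub-problem is independent, the workers never need to talk to each other; a plain "map" over the tasks is enough. Here is a hedged sketch, where the country list and the forecast function are placeholders rather than a real weather model:

```python
# Embarrassingly parallel sketch: every "country" is forecast independently,
# so the worker processes never need to communicate with one another.
from concurrent.futures import ProcessPoolExecutor

COUNTRIES = ["Saudi Arabia", "Egypt", "Japan", "Brazil", "Norway"]  # placeholder list

def forecast(country):
    """Placeholder for an independent per-country computation."""
    return f"forecast for {country} computed"

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for result in pool.map(forecast, COUNTRIES):
            print(result)
```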


But most problems don't cleave neatly that way. The weather in one country depends to a great extent on the weather in other places, so making a forecast for one country will need to take account of forecasts elsewhere. Often, the parallel processors in a supercomputer will need to communicate with one another as they solve their own bits of the problem. Or one processor might have to wait for results from another before it can do a particular job. A typical problem worked on by a massively parallel computer will thus fall somewhere between the two extremes of a completely serial problem (where every single step has to be done in an exact sequence) and an embarrassingly parallel one; while some parts can be solved in parallel, other parts will need to be solved in a serial way. A law of computing (known as Amdahl's law, after computer pioneer Gene Amdahl) explains how the part of the problem that remains serial effectively determines the maximum improvement in speed you can get from using a parallel system.
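
In symbols, if a fraction p of a program can run in parallel and the remaining 1 - p must stay serial, Amdahl's law gives the speedup on N processors as 1 / ((1 - p) + p / N). A short sketch of the arithmetic, with the 90% figure chosen purely as an illustrative assumption:

```python
# Amdahl's law: overall speedup is capped by the fraction of work that stays serial.
def amdahl_speedup(p, n):
    """Speedup on n processors when a fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90  # assume 90% of the program can run in parallel
for n in (10, 100, 1000, 10_000):
    print(f"{n:>6} processors -> speedup of about {amdahl_speedup(p, n):.1f}x")
# However many processors you add, the speedup can never exceed 1 / (1 - p) = 10x here.
```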


The Aziz supercomputer is one of the leading high-performance computing resources in the region. We provide three different packages to cater to the needs of all types of organizations. At HPCC, using the Aziz supercomputer, we endeavor to enable different types of businesses to make the right decisions in the highest-impact areas and to achieve smarter outcomes.


KAU’s HPCC provides organizations with the big data analytics they need, delivering the right insights to transform how they do business. KAU’s HPCC also helps academics make the most of technology in order to shape a better world and build smarter societies by capturing the potential of digital technology and connected devices.


Aziz was developed in collaboration with Fujitsu Ltd. It contains 496 computing nodes equipped with about 12,000 Intel® CPU cores; two nodes are equipped with NVIDIA Tesla K20® GPUs, and another two with Intel® Xeon Phi accelerators.


We help the academic community, researchers, enterprises, and even individuals implement programs and systems that can handle computationally intensive problems on different architectures.


Aziz offers a wide range of pre-installed software packages, which can be utilized in many fields, including computer vision, big data analytics, deep learning, mechanical engineering, aerodynamics, chemistry, civil engineering, physics, graphics rendering, climate research, and much more.


The Aziz supercomputer is designed to cater to the needs and requirements of big data analytics, providing the scalability and speed required to tackle massive data workloads and complex simulations.


References:

https://en.wikipedia.org/wiki/Supercomputer

https://www.computerhistory.org/revolution/supercomputers/10/intro

https://www.explainthatstuff.com/how-supercomputers-work.html

https://builtin.com/hardware/supercomputers

https://en.wikipedia.org/wiki/Parallel_computing

https://en.wikipedia.org/wiki/Testing_high-performance_computing_applications