What is HPC?
High Performance Computing (HPC) most generally refers to the practice of aggregating computing power to deliver much higher performance than a typical desktop computer or workstation can provide, in order to solve large problems in science, engineering, or business.
At Dartmouth, we have both large-scale shared memory platforms (Andes and Polaris) and a cluster (Discovery): a set of individual machines, or nodes, connected by a high-bandwidth network.

Why do we use HPC?
An important part of understanding why you would want to use HPC is the difference between a shared memory model and a distributed memory model. Systems like your laptop or desktop use a shared memory model, where a single pool of memory is shared by the various components of one system. In HPC, the concept of distributed memory comes into play: your compute load is distributed across multiple compute nodes, each working in its own local shared memory and exchanging results over the network, as sketched below.
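
To make this concrete, here is a minimal sketch of the distributed memory model, assuming Python with the mpi4py package (whether it is installed on any particular cluster is an assumption; check with your HPC support team). Each process, or rank, runs in its own memory space, computes a partial sum over its own slice of the work, and the partial results are combined with a reduce operation.

```python
# Run with, e.g.:  mpirun -n 4 python sum_example.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID (0, 1, 2, ...)
size = comm.Get_size()   # total number of processes

# Each process owns its own memory: it computes a partial sum of
# 1..1000 over its assigned slice, invisible to the other processes.
partial = sum(range(rank + 1, 1001, size))

# Combine the partial results from every process onto rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of 1..1000 computed by {size} processes: {total}")
```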

It is also important to understand that in a distributed memory model, each compute node must finish its assigned work for the current step before the program as a whole can move on to the next step; this synchronization point is commonly called a barrier.
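
As a sketch of such a synchronization point (again assuming Python with mpi4py), a barrier forces every process to wait until all processes have completed the current step. The sleep call below is a hypothetical stand-in for real per-node work that takes different amounts of time.

```python
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Step 1: each process does its assigned work; some finish sooner.
time.sleep(rank * 0.5)           # stand-in for real per-node work
print(f"rank {rank}: step 1 done")

# No process proceeds past this point until every process arrives.
comm.Barrier()

# Step 2 starts only after step 1 has completed on every node.
print(f"rank {rank}: starting step 2")
```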
