What is HPC?
High Performance Computing (HPC) generally refers to the practice of aggregating computing power in a way that delivers much higher performance than what a typical desktop computer or workstation can offer. This enhanced capability is used to solve large problems in science, engineering, or business.
At Dartmouth, we have access to both large-scale shared-memory platforms (Andes and Polaris) and a cluster (Discovery) composed of many nodes, that is, a set of individual machines connected through a high-bandwidth network.

Keep in mind that Dartmouth’s HPC systems are shared resources used by the entire research community. This is especially important when working on Andes, Polaris, or the login node of Discovery.
Andes and Polaris are intended for interactive use, much like a personal computer. Being a responsible user means monitoring your processes carefully to avoid excessive use of CPU or memory resources.
Discovery, on the other hand, is primarily designed for batch-scheduled jobs submitted through a scheduler. When you log into Discovery, you’re on the login node—a shared entry point for all users. It’s appropriate for lightweight tasks like compiling code or monitoring jobs, but not for running or testing programs. Running intensive computations on the login node can negatively impact all users, potentially blocking job submissions and affecting system stability.
Why do we use HPC?
An important concept in HPC is the difference between shared memory and distributed memory models. Standard systems, such as laptops and desktops, use a shared memory model in which all processor cores within a single machine access the same memory space. In contrast, HPC systems often use a distributed memory model, where computational tasks are spread across multiple compute nodes, each with its own private memory. Communication between nodes is handled over a network, making data exchange an explicit part of the computation.
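To make the distinction concrete, here is a minimal sketch of explicit message passing in a distributed memory model. It is only an illustration, not Dartmouth-specific documentation: it assumes the mpi4py package and an MPI runtime are available, and the script name and payload are hypothetical.

```python
# Minimal sketch of distributed-memory communication with mpi4py.
# Each process (rank) has its own private memory, so data must be
# sent and received explicitly rather than simply shared.
from mpi4py import MPI

comm = MPI.COMM_WORLD      # communicator spanning all launched processes
rank = comm.Get_rank()     # this process's ID within the communicator

if rank == 0:
    data = {"payload": list(range(5))}
    comm.send(data, dest=1, tag=11)     # explicit send to rank 1
    print(f"rank 0 sent {data}")
elif rank == 1:
    data = comm.recv(source=0, tag=11)  # explicit receive from rank 0
    print(f"rank 1 received {data}")
```

With MPI installed, this could be launched with something like `mpirun -n 2 python send_recv.py`; both processes run the same script but take different branches based on their rank, which is the essence of the distributed memory style described above.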
