HPC systems are traditionally associated with research projects but are becoming increasingly mainstream as artificial intelligence, machine learning and big data analysis turn into commodity workloads. Typical users include those:
- with parallelisable tasks, whether fine grained (a single program that is intrinsically parallel) or coarse grained (many independent instances of the same program, as in embarrassingly parallel workloads and parameter studies)
- with computing jobs needing more memory (RAM) than is available on their local systems, or whose data can no longer be processed quickly enough on a desktop
- needing access to the high-end software packages offered by HPC facilities.
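The coarse-grained case above can be sketched with a short shell loop: the same task is launched several times with different parameter values, and each instance runs independently of the others. Here a plain `echo` stands in for a real simulation program, and the parameter name and values are illustrative, not taken from any particular application.

```shell
#!/bin/bash
# Coarse-grained (embarrassingly parallel) parameter study sketch:
# launch several independent instances of the same task concurrently,
# one per parameter value. "echo" is a stand-in for a real program.
for temp in 100 200 300; do
    echo "running simulation at temperature ${temp}" &   # each instance runs in the background
done
wait   # block until every background instance has finished
```

Because the instances share no data, they can be spread across as many processors (or HPC jobs) as are available, which is what makes this style of parallelism so easy to scale.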
Supercomputing is used to solve real-world problems of significant scale or detail across a diverse range of disciplines including physics, biology, chemistry, geosciences, climate sciences, engineering and many others.
HPC jobs typically run in a non-interactive batch mode: they are submitted to a queue and, when their turn comes, run to completion without any user intervention. This keeps the hardware fully utilised, so HPC systems rarely sit idle, and it lets you queue your jobs and get on with other things while you wait for them to execute. If you are accustomed to using a program interactively, you will have to learn how to use its batch mode in order to use it for HPC.
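As a concrete illustration of batch mode, a job on many HPC systems is described by a short script that requests resources and then lists the commands to run unattended. The sketch below assumes the widely used Slurm scheduler; the job name, resource values, program name and input file are all hypothetical placeholders, and the exact directives vary between facilities.

```shell
#!/bin/bash
#SBATCH --job-name=my_analysis   # name shown in the queue (hypothetical)
#SBATCH --nodes=1                # resources are requested up front
#SBATCH --time=01:00:00          # wall-clock limit for the job

# Everything below runs without user intervention once the scheduler
# starts the job. "my_program" and "input.dat" are illustrative only.
my_program input.dat > results.out
```

Such a script would typically be submitted with a command like `sbatch`, after which the scheduler decides when and where it runs.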