What is High Performance Computing?
High Performance Computing (HPC) typically takes a large number of commodity systems and combines them, as a cluster, into a tightly coupled single system. This provides capacity, the ability to run a large number of computing tasks simultaneously, and capability, the ability to run large-scale parallel tasks. If your desktop system is too slow for your "big datasets", or your problems too complex, HPC is the tool you need.
Each individual core in an HPC system is no different to the cores in a personal computer. What makes an HPC system different is the capacity to submit many jobs at once and the capability for a computing task to run in parallel. Usually, the former is expressed through scheduler directives, and the latter through code written so that parallel tasks run simultaneously.
Because they are designed for optimisation and performance, HPC systems typically run the Linux operating system, which scales efficiently and effectively. Further, for latency and performance reasons, they make limited use of interactive applications or graphical interfaces. Instead, HPC tends to operate through a command-line interface, with applications run in batch mode.
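As an illustrative sketch of batch mode, a job script for a Slurm-style scheduler might look like the following. The partition name, module name, and resource values here are hypothetical placeholders, not Spartan's actual configuration; always check the system's own documentation for real values.

```shell
#!/bin/bash
# Hypothetical Slurm batch script -- directive values are illustrative only.
#SBATCH --job-name=example        # a name to identify the job in the queue
#SBATCH --partition=physical      # hypothetical partition name
#SBATCH --ntasks=1                # one task...
#SBATCH --cpus-per-task=4         # ...using four cores
#SBATCH --time=01:00:00           # one hour of walltime
#SBATCH --mem=8G                  # eight gigabytes of memory

# Load the software environment (module name is an assumption)
module load myapplication

# Run the application in batch mode; no graphical interface is needed
myapplication --input data.in --output results.out
```

Such a script would typically be submitted with `sbatch job.slurm`; the scheduler then queues the job until the requested resources become available.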
How HPC Benefits Research
Both datasets and processing requirements are increasing faster than the computational performance of personal systems. As a result, more research now relies on HPC systems, across diverse disciplines including mathematics, the life sciences, engineering, astronomy, economics and finance, with numerous success stories. There is a strong association between research output and the availability of HPC systems, with an average return of $44 in profits (or cost savings) per dollar invested in HPC.
High Performance Computing at University of Melbourne
Spartan is the general purpose High Performance Computing (HPC) system operated by Research Computing Services at The University of Melbourne. It combines high performance bare-metal compute with GPGPUs to suit a wide range of use cases.
Use of Spartan, as with other Research Computing Services and University IT services, is governed by the University's general regulations for IT resources and the Research Computing Services Terms of Service.
What's special about Spartan?
Most modern HPC systems are built around a cluster of commodity computers tied together with very fast networking. This allows computation to run across multiple cores in parallel, quickly sharing data between nodes as needed.
For certain jobs, this architecture is essential to achieving high performance. For others, however, this is not the case, and each node can run without communicating with the others in the cluster. This class of problems is often described as embarrassingly parallel: they can be run as independent parallel tasks by splitting the data or calculation into discrete chunks. In this case, high speed networking is unnecessary, and the resources are better spent on providing more cores.
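An embarrassingly parallel workload is commonly expressed as a job array, where the scheduler launches many copies of the same script, each working on its own chunk of the data. The sketch below assumes a Slurm-style scheduler; the application name, file naming scheme, and resource values are illustrative assumptions, not a real configuration.

```shell
#!/bin/bash
# Hypothetical Slurm job array: 100 independent tasks, each processing
# one chunk of the input data, with no communication between them.
#SBATCH --job-name=ep-array
#SBATCH --array=1-100             # 100 independent array tasks
#SBATCH --ntasks=1                # each task is a single process...
#SBATCH --cpus-per-task=1         # ...on a single core
#SBATCH --time=00:30:00           # thirty minutes of walltime per task

# Slurm sets SLURM_ARRAY_TASK_ID to a unique index for each array task,
# which selects the chunk of data that task should process.
myapplication --input chunk_${SLURM_ARRAY_TASK_ID}.dat \
              --output result_${SLURM_ARRAY_TASK_ID}.out
```

Because the tasks never exchange data, the scheduler is free to run them on whichever nodes happen to be available, with no need for fast interconnects.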
If you use Spartan to obtain results for a publication, we'd appreciate it if you acknowledge us in your paper. This makes it easy for us to demonstrate research impact, helping to secure ongoing funding for expansion and user support. Please include the following citation in the acknowledgements section of your paper:
This research was supported by The University of Melbourne’s Research Computing Services and the Petascale Campus Initiative.
Spartan is just one of many research IT resources offered by The University of Melbourne, or available from other institutions.
Nectar Research Cloud
Nectar is a national initiative to provide cloud-based Infrastructure as a Service (IaaS) resources to researchers. It's based on OpenStack, and allows researchers on-demand access to computation instances, storage, and a variety of application platforms and Virtual Laboratories.
Melbourne Research Cloud
The Melbourne Research Cloud (MRC) is a University of Melbourne OpenStack cloud similar to the Nectar cloud.
Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE)
MASSIVE is an HPC system at Monash University and the Australian Synchrotron which is optimised for imaging and visualisation. It can run batched jobs, as well as provide a desktop environment for interactive work.