Spartan is a High Performance Computing (HPC) system operated by Research Computing Services at The University of Melbourne. It combines high-performance bare-metal compute with flexible cloud infrastructure and GPGPU resources to suit a wide range of use cases.
If your computing jobs take too long on your desktop computer, or are simply not possible due to a lack of speed or memory, an HPC system like Spartan can help.
Spartan Daily Weather Report (20200603)
- CephFS usage: 1126.32TB used, 175.25TB free (86% used)
- Spartan is very busy on the cloud partition, with close to 100% node allocation. Total pending/queued: 17376
- Spartan is very busy on the physical partition, with close to 99% node allocation. Total pending/queued: 7343
- Spartan is very busy on the snowy partition, with close to 100% node allocation. Total pending/queued: 7157
- Total pending/queued on the public partitions: 18099
- Spartan is busy on the GPGPU partition, with close to 87% node allocation. Total pending/queued: 16
- GPGPU usage in the [ gpgpu ] partition: 162 / 224 cards in use (72.32%)
- Some nodes are out (20), mainly because they are being used for data copying or are being physically moved.
We run regular one-day courses on HPC, shell scripting, parallel programming, and GPU programming. Research Computing Services also offer training in a wide range of other digital tools to accelerate your research.
Sign up here: http://melbourne.resbaz.edu.au/participate
If you can't find an answer here, need advice, or are otherwise stuck, you can contact our support team.
Please submit one topic per ticket. If you require assistance with a separate matter, compose a new ticket. Do not reply to existing or closed tickets.
Spartan has a number of partitions available for general usage. A full list of partitions can be viewed with Slurm's `sinfo` command, as shown below.
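For example, the standard Slurm summary option lists each partition, its state, and its node counts:

```
# Summarised view of all partitions: availability, time limits, and node counts
sinfo -s
```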
| Partition | Nodes | Cores/node | Memory/node | Processor | Peak Performance (DP TFlops) | Slurm node types | Extra notes |
|-----------|-------|------------|-------------|-----------|------------------------------|------------------|-------------|
| cloud | 165 | 12 | 100GB | Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz | 63.5 | avx512 | |
| longcloud | 2 | 12 | 100GB | Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz | 0.75 | avx512 | Max walltime of 90 days |
| interactive | 9 | 12 | 254GB | Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz | 3.92 | avx2 | Max walltime of 2 days |
| physical | 9 | 12 | 254GB | Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz | 2.28 | physg1,avx2 | |
| physical | 5 | 32 | 508GB | Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz | 2.15 | physg3,avx2 | |
| physical | 12 | 72 | 1540GB | Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz | 83 | physg4,avx512 | |
| bigmem | 2 | 36 | 1540GB | Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz | 2.65 | physg2,avx2 | |
| phi | 4 | 256 | 190GB | Intel(R) Xeon Phi(TM) CPU 7230 @ 1.30GHz | 42.6 | avx512 | Xeon Phi Knights Landing architecture |
| snowy | 31 | 32 | 127GB | Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz | 36.5 | avx2 | |
| gpgpu | 73 | 24 | 127GB | Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 61.5 (CPU) + 1358 (GPU) | avx2 | 4 P100 Nvidia GPUs per node |
| Total | | | | | 330 (CPU) + 1358 (GPU) | | |
The total includes private partitions (mig, vccc, msps, msps2, ashley and punim0396).
This partition is best suited to general-purpose single-node jobs. Multi-node jobs will work, but communication between nodes will be comparatively slow.
Each node is connected by high-speed 25Gb networking with 1.15 µsec latency, making this partition suited to multi-node jobs (e.g. those using OpenMPI).
You can constrain your jobs to particular groups of nodes (e.g. just the Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz nodes) by adding `#SBATCH --constraint=physg4` to your submission script, as in the sketch below.
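As a minimal sketch of how this fits together in a multi-node MPI job (the module name, task counts, and program are illustrative placeholders, not Spartan-specific values):

```
#!/bin/bash
#SBATCH --partition=physical
#SBATCH --constraint=physg4       # restrict the job to the Gold 6154 nodes
#SBATCH --nodes=2                 # multi-node job over the 25Gb network
#SBATCH --ntasks-per-node=12      # illustrative task count
#SBATCH --time=01:00:00

# Placeholder module; check `module avail` for the exact name on Spartan
module load OpenMPI

# Launch one MPI rank per task across both nodes
srun ./my_mpi_program
```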
See the GPU page for more details.
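As a rough sketch, a GPU job is typically requested with Slurm's generic resource (`--gres`) syntax; the naming below is the standard Slurm form, so check the GPU page for Spartan's exact requirements (QOS, project accounts, and so on):

```
#!/bin/bash
#SBATCH --partition=gpgpu
#SBATCH --gres=gpu:1              # request one of the four P100 cards on a node
#SBATCH --time=01:00:00

# Placeholder module and binary; see the GPU page for Spartan's CUDA modules
module load CUDA
./my_gpu_program
```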
This partition is suited to memory-intensive single-node workloads.
A number of nodes support the AVX-512 extended instruction set: all of the cloud nodes, the phi nodes, and the physg4 nodes in the physical partition. To submit a job on the physical partition that makes use of these instructions, add `#SBATCH --constraint=avx512` to your submission script.
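Before choosing a constraint, you can ask Slurm which node groups carry which feature tags:

```
# List each group of nodes in the physical partition with its feature tags
# (e.g. physg4,avx512); %N prints the node list, %f the feature set
sinfo -p physical -o "%N %f"
```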
There are also special partitions which sit outside the normal walltime constraints. In particular:
- shortgpgpu should be used for quick test cases, and has a maximum walltime of one hour;
- interactive has a maximum walltime of 2 days (see the example below).
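For instance, an interactive shell can be requested with standard Slurm syntax (the two-hour request here is illustrative, well within the partition's 2-day limit):

```
# Start an interactive bash session on the interactive partition
srun --partition=interactive --time=02:00:00 --pty bash
```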
Spartan uses a storage system called CephFS. CephFS is a highly scalable, parallel and robust filesystem.
The total Spartan storage is broken up into two areas:
- /home is on the University's NetApp NFS platform, backed by SSD
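To check how much space is available in a given area, standard filesystem tools work as usual:

```
# Report total, used, and available space for the filesystem holding $HOME
df -h ~
```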
If you use Spartan to obtain results for a publication, we'd appreciate it if you'd cite our service, including the DOI below. This makes it easy for us to demonstrate research impact, helping to secure ongoing funding for expansion and user support.
Lev Lafayette, Greg Sauter, Linh Vu, Bernard Meade, "Spartan Performance and Flexibility: An HPC-Cloud Chimera", OpenStack Summit, Barcelona, October 27, 2016. https://doi.org/10.4225/49/58ead90dceaaa
If you are using the LIEF GPGPU cluster for a publication, please include the following citation in the acknowledgements section of your paper:
This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200.
Spartan is just one of many research IT resources offered by The University of Melbourne, or available from other institutions.
Nectar is a national initiative to provide cloud-based Infrastructure as a Service (IaaS) resources to researchers. It's based on OpenStack and gives researchers on-demand access to compute instances, storage, and a variety of application platforms and Virtual Laboratories.
Melbourne Research Cloud
MRC is a University of Melbourne OpenStack cloud similar to the Nectar cloud.
Spartan runs some of its computation resources in the Melbourne Research Cloud.
Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE)
MASSIVE is an HPC system at Monash University and the Australian Synchrotron which is optimized for imaging and visualization. It can run batch jobs, as well as provide a desktop environment for interactive work.