RELION
Example
To use RELION version 5 through Open OnDemand, select 5.0 from the Relion_Testing app.
In most cases the RELION GUI itself does not need a GPU. Start RELION on a CPU node, then submit compute jobs to the GPU partitions from within the GUI.
$ ssh -X username@spartan.hpc.unimelb.edu.au # log in to Spartan with X11 forwarding
$ sinteractive --x11 --time=06:00:00 --cpus-per-task=4 # start an interactive job
$ module load intel-compilers/2022.1.0 # load dependencies
$ module load Relion/5.0 # load relion
$ cd /data/gpfs/projects/punim????/ProjectDirectory # navigate to your RELION project directory
$ relion # open the RELION GUI
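If the GUI does not appear, check that the environment is set up correctly:
$ module list # should include intel-compilers/2022.1.0 and Relion/5.0
$ echo $DISPLAY # should print a display (e.g. localhost:10.0) if X11 forwarding is working
$ which relion # should resolve to the Relion/5.0 installation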
Submission script template (saved e.g. as relion5_slurm.sh and set as the "Standard submission script" in the GUI):
#!/bin/bash
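# The XXX...XXX placeholders below are filled in by the RELION GUI at submission time:
#   XXXmpinodesXXX = Number of MPI procs, XXXthreadsXXX = Number of threads,
#   XXXqueueXXX = Queue name, and on this template XXXextra1XXX = Number of gpus,
#   XXXextra2XXX = Wall time, XXXextra3XXX = Memory (per CPU), XXXextra4XXX = QoS.
#   XXXerrfileXXX, XXXoutfileXXX and XXXcommandXXX are generated by RELION for each job.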
#SBATCH --ntasks=XXXmpinodesXXX
#SBATCH --partition=XXXqueueXXX
#SBATCH --qos=XXXextra4XXX
#SBATCH --gres=gpu:XXXextra1XXX
#SBATCH --cpus-per-task=XXXthreadsXXX
#SBATCH --time=XXXextra2XXX
#SBATCH --mem-per-cpu=XXXextra3XXX
#SBATCH --error=XXXerrfileXXX
#SBATCH --output=XXXoutfileXXX
#SBATCH --tmp=300G
#INFO
echo "Starting at `date`"
echo "Running on hosts:$SLURM_NODELIST"
echo "Running on $SLURM_NNODES nodes."
srun XXXcommandXXX
RUNNING (CPU-only job):
Number of MPI procs: 12 # between 1-12 should be fine for most jobs
Submit to queue: Yes
Queue name: cascade
Queue submit command: sbatch
Number of gpus: 0
Wall time: 0-20:00:00 # days-hours:mins:secs (here 20 hours) - shorter will run sooner in the queue
Memory: 12G # per CPU. Total = memory * MPI procs - lower will run sooner in the queue
QoS: normal
Standard submission script: /path/to/relion5_slurm.sh
Minimum dedicated cores per node: 1 # leave as default
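For illustration only: with the settings above, RELION would fill in the template so that the generated job script begins roughly as follows (the error/output files and the srun command are generated by RELION for each job, and the thread count is assumed to be 1 since it is not set above):
#!/bin/bash
#SBATCH --ntasks=12
#SBATCH --partition=cascade
#SBATCH --qos=normal
#SBATCH --gres=gpu:0
#SBATCH --cpus-per-task=1
#SBATCH --time=0-20:00:00
#SBATCH --mem-per-cpu=12G
#SBATCH --tmp=300G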
COMPUTE (GPU job):
Use parallel disc I/O? Yes
Number of pooled particles: 3
Skip padding? No
Pre-read all particles into RAM? No
Copy particles to scratch directory: /tmp/ # temporarily copies data to the node's local SSD for faster access during the run (see the note after this list)
Combine iterations through disc? No
Use GPU acceleration? Yes
Which GPUs to use: # leave blank
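If particles are copied to /tmp/, the data sits on the node's local SSD; the --tmp=300G line in the template above asks the scheduler for nodes with at least 300G of local temporary disk. To see how much local scratch is actually free, you can check from within a job or an interactive session on the node:
$ df -h /tmp # free space on the node-local scratch disk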
RUNNING (GPU job):
Number of MPI procs: 5 # Works best as the number of GPUs+1
Number of threads: 6 # MPI procs * threads = total CPUs (max 31 on a gpu-a100 node)
Submit to queue: Yes
Queue name: gpu-a100
Queue submit command: sbatch
Number of gpus: 4 # fewer will run sooner in the queue
Wall time: ?-??:??:?? # shorter will run sooner in the queue
Memory: 8G # per CPU. Total = memory * MPI procs * threads, max 495G (e.g. 16G per CPU for 30 CPUs) - lower will run sooner in the queue
QoS: normal
Standard submission script: /path/to/relion5_slurm.sh
Minimum dedicated cores per node: 1 # leave as default
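Once a job has been submitted from the GUI, its progress can be followed from the command line in the usual way, for example:
$ squeue -u $USER # list your queued and running jobs
$ scontrol show job <jobid> # details for a single job (allocated resources, or why it is still pending)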