Pittsburgh Supercomputing Center (PSC)
For planned outages see: https://www.psc.edu/calendar/
To log in to the supercomputer: ssh userid@bridges2.psc.edu
To see what resources you have access to, run the projects command.
You can copy files to/from rhea-PSC via rsync, for example:
rsync --size-only -avhi --exclude CuBIDS --exclude miniconda3 $software_dir $psc:${psc_destdir}
Check which files will be transferred before officially running the command by adding --dry-run to the rsync call.
Quick test for the interactive queue: salloc. See slurm for more.
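A typical interactive request looks something like the following; the partition, time, and core counts are illustrative choices, not requirements (exact defaults may vary on Bridges-2):

```shell
# Request an interactive allocation: RM-shared partition, 1 hour, 4 cores.
salloc -p RM-shared -t 01:00:00 --ntasks-per-node=4
# When the allocation is granted you get a shell inside it;
# type exit to release the allocation when done.
```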
Jobs are submitted on the PSC via sbatch. See sbatch options here https://www.psc.edu/resources/bridges-2/user-guide/#system-configuration. A description of some options is below:
- -p RM-shared : the partition you are requesting resources from. The most common one is RM-shared, but there are also RM, RM-512, and EM (extreme memory)
- --time hh:mm:ss : maximum run time for your job. On RM-shared, the max run time appears to be 48:00:00
- --nodes : the number of nodes to use. Typically 1 is sufficient (and appears to be the max you can request on RM-shared)
- --ntasks-per-node : the number of cores to use per node. Importantly, increasing the number of cores requested increases your job's memory (RAM) allocation. On RM-shared, each core comes with 1.95 GB of memory, so e.g. four cores will get you 7.81 GB.
- -n : number of cores requested in total
- -J "$subid-$script" : the name of your job. By default, jobs are named by their job id, but you can customize the job name via variables like $subid
- -o : output log file name
- -e : error log file name
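Putting the options above together, a submission script might look like the sketch below. The job name, log file names, and the job body are placeholders; adjust cores and time to your workload:

```shell
#!/bin/bash
#SBATCH -p RM-shared              # partition
#SBATCH --time 08:00:00           # max run time (<= 48:00:00 on RM-shared)
#SBATCH --nodes 1                 # one node (the max on RM-shared)
#SBATCH --ntasks-per-node 4       # 4 cores -> ~7.81 GB RAM on RM-shared
#SBATCH -J example-job            # job name (placeholder)
#SBATCH -o example-job.out        # output log (placeholder)
#SBATCH -e example-job.err        # error log (placeholder)

# Body of the job goes here, e.g.:
echo "Running on $(hostname)"
```

Submit it with sbatch myscript.sh; options given on the sbatch command line override the #SBATCH lines in the script.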
*When initially testing an sbatch submission, it is recommended to launch just one test participant (or one test run). If your bash script loops over a list of participants or a range of runs, you can accomplish this by adding a break statement at the end of the loop body in the script that launches the jobs.
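The break trick can be sketched as below; the sbatch call is replaced by an echo so the example runs anywhere, and the participant IDs are placeholders:

```shell
#!/bin/bash
# Launch one test job, then stop: break exits the loop after the first
# participant. Remove the break line to submit the full list.
for subid in sub-01 sub-02 sub-03; do
    echo "would submit: sbatch job.sh $subid"   # in real use: sbatch ...
    break                                        # stop after the first one
done
```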
If you need to run a script that requires command line arguments, you can export them, for example:
export bids_dir freesurfer_dir freesurfer_sif license acq_label
in your script, and then pass them through in your sbatch call with:
--export="ALL,SUBJECT_ID=$subject_id,ACQ=$acq_label,BIDS_DIR=$bids_dir,FS_DIR=$freesurfer_dir,FS_SIF=$freesurfer_sif,LIC=$license"
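The mechanism is ordinary environment-variable passing: --export sets variables in the job's environment, and the job script reads them like any shell variable. A minimal local sketch, with a plain child bash process standing in for sbatch and illustrative variable names:

```shell
#!/bin/bash
# Sketch of how --export delivers variables to a job script's environment.
subject_id=sub-01
job_script=$(mktemp)
cat > "$job_script" <<'EOF'
# Inside the job script, the exported variable is read directly:
echo "processing $SUBJECT_ID"
EOF
# Real submission would be: sbatch --export="ALL,SUBJECT_ID=$subject_id" job.sh
SUBJECT_ID="$subject_id" bash "$job_script"   # prints: processing sub-01
rm -f "$job_script"
```

Note that ALL in the --export list also forwards your current environment to the job, in addition to the named variables.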
- allocation hour calculator: TODO