NIH HPC
Biowulf environment modules
- module sets differ between helix and biowulf
- module load <name>, module avail
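Typical module usage on a login or compute node might look like the following sketch (the module name `python` is only an example; run `module avail` to see what is actually installed):

```shell
module avail              # list all available modules
module avail python       # list modules whose names match "python"
module load python        # load the default version of a module (example name)
module list               # show currently loaded modules
module unload python      # unload a module
```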
Data transfer
- Samba mount to a local Mac -- very slow
- web proxy via the proxy server - seems to be configured automatically
Slurm
- slurm.schedmd.com
commands
- sinfo - view info and status of Slurm nodes and partitions
- sbatch <job-file.batch> - submit a batch script to Slurm
- salloc <options> - request an interactive job
- scancel <job-id> - cancel a job
- squeue - view queued and running jobs
- squeue -u <username> - view one user's jobs
- squeue --start - display estimated start times
- sacct - display accounting data for all jobs
- scontrol show job <job-id> - display detailed job info
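A minimal batch script for sbatch might look like the sketch below (the job name, resource values, and the script being run are all made-up examples, not from the course):

```shell
#!/bin/bash
#SBATCH --job-name=example      # hypothetical job name
#SBATCH --time=01:00:00         # wall-time limit (1 hour)
#SBATCH --cpus-per-task=4       # CPUs for this task
#SBATCH --mem=8g                # memory for the job

module load python              # load whatever modules the job needs (example)
python myscript.py              # the actual work (hypothetical script)
```

Submit it with `sbatch job-file.batch`, then watch it with `squeue -u <username>`.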
nodes and partitions
- quick - 7 nodes, each with 16 cores
- norm, unlimited - partitions with different time limits
swarm on biowulf
- Randy Johnson, PhD
- submits many commands together under one job ID
- without swarm: create a run file for each task and submit them one by one; checking jobs then gives a long list of all the little jobs, plus the output files at the end; each job has its own job ID, and the IDs are not sequential, since other people submit jobs in between
- swarm -f runAll.swarm
- output files all share the same job ID
- swarm -f runAll.swarm -b 50 - bundle 50 commands per job
- -g - GB of memory per process (default 1.5 GB)
- -t - allocate additional threads per process
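The swarm workflow above can be sketched as follows: each line of the swarm file is one independent command, and the whole file is submitted as a single job. The file can be written by hand or generated in a loop (file name and sample commands here are made-up examples):

```shell
# Generate a swarm file: one independent command per line
# (the "sample" names are hypothetical placeholders)
for i in 1 2 3; do
    echo "echo processing sample$i"
done > runAll.swarm

cat runAll.swarm    # inspect the commands before submitting
# On Biowulf, submit with:  swarm -f runAll.swarm -g 4 -t 2
```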