Peloton is a research and teaching cluster for the College of Letters and Science. This page documents the hardware, software, and policies surrounding this resource. The announcement archives are available online.
Announcement notifications are sent to an internally maintained mailing list. If you are a user of this cluster you will be added to the list automatically.
All researchers in the College of Letters and Science are entitled to free access to the cluster; their share of available resources depends on their sponsor. Those with only an MPS affiliation share 4 compute nodes (128 CPUs) plus a share of any other free resources.
Those who contribute get immediate (within 1 minute) access to the resources they contribute and a larger share of any free resources. The minimum contribution is approximately $16,000.
Default storage is 1TB; extra storage can be purchased in 22TB chunks for approximately $3,000 each. These 22TB chunks do NOT include backups.
Ganglia is available at http://stats.cse.ucdavis.edu/ganglia/?c=peloton&m=load_one&r=hour&s=descending&hc=4&mc=2
The Peloton cluster runs Ubuntu 18.04 and uses the Slurm batch queue manager. System configuration and management are handled via Cobbler and Puppet. (Last updated: 03/2019)
Requests for any centrally installed software should go to firstname.lastname@example.org. Any software available in Ubuntu is either already installed or available for installation on this cluster. In many cases we compile and install our own software packages, including compilers, MPI layers, open-source packages, commercial packages, HDF, NetCDF, WRF, and others. We use Environment Modules to manage the environment. A quick intro:
module load <directory/application>
Documentation on some of the custom installed software is at HPC Software Documentation. An (outdated) list is at Custom Software. Use the "module avail" command for the current list of installed software.
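As a sketch, a typical Environment Modules session might look like the following; the module name "netcdf" here is only an illustrative assumption, so substitute a name from your own "module avail" output:

```shell
# List all software available through Environment Modules
module avail

# Load a package into your shell environment ("netcdf" is a hypothetical
# example name; pick one from the "module avail" listing)
module load netcdf

# Show which modules are currently loaded
module list

# Remove the module again when you are done
module unload netcdf
```

Loaded modules only affect the current shell session, so load commands are often placed in job scripts or in your shell startup file.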
Low priority means your job might be killed at any time. This tier is great for soaking up unused cycles with short jobs, and a particularly good fit for large array jobs with short run times.
Medium priority means your job might be suspended, but it will resume when the high priority job finishes. *NOT* recommended for MPI jobs. Up to 100% of idle resources can be used.
High priority - your job will kill or suspend lower priority jobs, and will keep its allocated hardware until it finishes or there is a system or power failure. Limited to the number of CPUs your group contributed. Recommended for MPI jobs.
GPU - If you contributed to the GPU nodes, you can access the GPU partition to run CUDA jobs that take advantage of the GPUs.
High2 - For contributors, allows running on twice your contribution for a week.
Med2 - For contributors, allows running on additional resources, but may be suspended by higher priority jobs.
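As a minimal sketch, a Slurm batch script targeting one of these tiers might look like the following. The partition name "low" is an assumption inferred from the tier names above; check `sinfo` for the actual partition names on Peloton:

```shell
#!/bin/bash
#SBATCH --job-name=example       # job name shown in the queue
#SBATCH --partition=low          # assumed partition name; verify with "sinfo"
#SBATCH --ntasks=1               # one task per array element
#SBATCH --time=00:30:00          # short run times suit the low tier
#SBATCH --array=1-100            # large array job, a good fit for low priority

# Each array element processes its own input, indexed by SLURM_ARRAY_TASK_ID
echo "Processing chunk ${SLURM_ARRAY_TASK_ID}"
```

Submit with `sbatch script.sh`. Because jobs in the low tier may be killed at any time, make each array element idempotent or checkpointed so killed elements can simply be resubmitted.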