This page outlines the various policies that we have implemented for our environment.


Accounts

Anyone who is sponsored by affiliated faculty or staff may have an account; this policy is how we adhere to the campus acceptable use policies. To verify that a person has an active sponsorship, we require two pieces of information:

  1. A current UCD account (accounts can be sponsored via the Temporary Affiliate Form)
  2. Notification (usually via email) of affiliation reported by the sponsor


Usernames

We have various services and internal processes that require that usernames match the person's UCD Kerberos username.

:!: Note: If you don't want to type your username at the shell prompt every time, please see these SSH notes.
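As a sketch of what those SSH notes cover, a Host entry in ~/.ssh/config lets ssh supply your username automatically. The hostname hpc.example.ucdavis.edu and the username jdoe below are placeholders; substitute the real server name and your own Kerberos username:

```shell
# Placeholder host "hpc.example.ucdavis.edu" and username "jdoe" --
# replace with the real hostname and your UCD Kerberos username.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host hpc
    HostName hpc.example.ucdavis.edu
    User jdoe
    IdentityFile ~/.ssh/id_ed25519
EOF
chmod 600 ~/.ssh/config
```

With this in place, `ssh hpc` connects as jdoe without typing the username each time.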


Authentication

This section describes the authentication policy for all services we support.


Web

We use the UCD Central Authentication Service (CAS) for all web sites we run. This requires that you have a UCD Kerberos account.


Subversion

We use Kerberos for authentication on our Subversion server. This requires that you have a UCD Kerberos account.


Git

We use SSH public key authentication for Git repositories.

Secure SHell

For secure shell access we require that your username match your UCD Kerberos account and that you provide us with a passphrase-protected SSH public key. We use public key authentication on all our SSH servers; no passwords or other personal data are stored on our machines.
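Generating a passphrase-protected key pair looks roughly like this. The -N value and the comment email are placeholders so the sketch runs non-interactively; in practice, omit -N and type a strong passphrase at the prompt:

```shell
# Generate an ed25519 key pair. In real use, omit -N and enter a
# passphrase when prompted; the values below are placeholders.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519 -N 'choose-a-real-passphrase' \
    -f ~/.ssh/id_ed25519_demo -C 'jdoe@ucdavis.edu'
# Send us only the public half (.pub); the private key never
# leaves your machine.
cat ~/.ssh/id_ed25519_demo.pub
```

Only the .pub file should ever be shared; the private key stays on your workstation.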

Wireless Network

CSE space now has good campus wifi coverage.

Tech Support

If you use our services, you can send a message to help for technical support. Your request will enter a priority queue managed by the support staff. You will get a response when your issue has been resolved or when we have follow-up questions.

Cluster Computing

Here are the policies surrounding cluster computing on machines that we manage.

Physical Access

The only people who have physical access to the cluster hardware are system administrators.

Remote Access

All remote access to the clusters is over SSL or SSH. In most cases we allow only SSH access and require passphrase-protected public key authentication.

Batch Queue

All of our clusters use a batch queue system, typically Slurm. If a job is found running outside of the batch queue, it will be killed and your account may be locked. Contact help to have your account unlocked.
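A minimal Slurm batch script might look like the sketch below. The job name, resource limits, and submission commands are illustrative; partitions and limits vary per cluster, so check the local documentation first:

```shell
# Write a hypothetical minimal Slurm batch script; the resource
# limits shown here are placeholders.
cat > demo_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --output=demo-%j.out
echo "running on $(hostname)"
EOF
# Submit through the scheduler (never launch the workload by hand):
#   sbatch demo_job.sh
# For interactive work, request a node instead of using the frontend:
#   srun --ntasks=1 --time=00:30:00 --pty bash
```

Everything compute-intensive goes through sbatch or srun so the scheduler can account for it; anything launched outside the queue is subject to the policy above.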

Scratch Space

We provide scratch space on the compute nodes at /scratch. Do not use /tmp or your home directory for intense I/O operations. You are responsible for cleaning up after yourself: remove any temporary files or directories that you create. If you do not have control over the entire node, you are expected to create a subdirectory, /scratch/$USER, so your program does not collide with other users'.
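Inside a job script, the scratch hygiene above can be sketched as follows. /scratch is the path named in the policy; the per-job suffix using the shell PID ($$) is an illustrative way to keep concurrent jobs from colliding:

```shell
# Stage I/O-heavy work in a per-user scratch directory, then clean up.
# "$$" (the shell's PID) is one way to separate concurrent jobs.
scratch_dir="/scratch/$USER/job-$$"
mkdir -p "$scratch_dir"
# ... run your I/O-intensive work with files under "$scratch_dir",
#     not in /tmp or your home directory ...
rm -rf "$scratch_dir"    # always remove what you created
```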

Jobs on the Frontend

Do not run compute jobs on the frontend (login) node; doing so slows down the cluster for everyone. If we see a compute job on the frontend, it will be killed. The frontend is to be used for compiling, serving files, and other utility functions, not general-purpose processing.


Backups

There are no backups of user data on our clusters. We provide a certain level of redundancy at the disk level and we back up the cluster configuration, but in general we do not back up user data. You are responsible for backing up your own data. In some cases your PI may have purchased additional equipment for backups; if you aren't sure, ask them or send an inquiry to help.

support/administrative/policies.txt · Last modified: 2021/05/07 09:44 by omen