
Python

Ubuntu comes with Python 2.7.6 preinstalled in /usr/bin, and it is available by default; no module is required to use it.

Python 2.7.15 and many additional Python packages are available via conda in the bio/1.0 module. If you need other packages not already installed, please contact farm-hpc@ucdavis.edu with your request.

Python 3.6.8 is also available as a module.

Use Notes

To use the conda-based Python 2.7.15, load:

module load bio/1.0

Or, to use Python 3:

module load python/3.6.8
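After loading a module, it can be worth confirming which interpreter is actually active. A generic check from within Python (not Farm-specific):

```python
import sys

# Report the running interpreter's version and location.
# Useful to verify that "module load" switched the python in use.
version = sys.version.split()[0]   # e.g. "2.7.15" or "3.6.8"
print(version)
print(sys.executable)
```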

Python on Farm II

Batch files run Python scripts using the default version of Python provided by the current Ubuntu release on Farm; this installation can be found at /usr/bin/python. If a "module load" line for one of the Python modules above is added to a batch file, Python scripts are instead run using a custom build of Python maintained on Farm for bioinformatics applications.
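To use a module-provided interpreter inside a batch job, the load line goes in the batch script itself, before the srun line. A sketch (the module name is taken from the Use Notes above; the script name is a placeholder):

```
#!/bin/bash -l
#SBATCH --job-name=check_python

# Without this line, jobs use the Ubuntu default at /usr/bin/python.
module load bio/1.0

# Confirm which interpreter the job will actually use.
which python
python --version

srun python my_script.py   # "my_script.py" is a placeholder name
```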

A simple example of running a python script as a batch job:

user@agri:~/examples/hello_world$ more hello_world.py
print "Hello, World! \n"


user@agri:~/examples/hello_world$ more hello_world.sh
#!/bin/bash -l
#SBATCH --job-name=hello_world

# Specify the name and location of i/o files.
# "%j" places the job number in the names of those files.
# Here, the i/o files will be saved to the current directory under /home/user.
#SBATCH --output=hello_world_%j.out
#SBATCH --error=hello_world_%j.err

# Send email notifications.  
#SBATCH --mail-type=END # other options are ALL, NONE, BEGIN, FAIL
#SBATCH --mail-user=user@ucdavis.edu

# Specify the partition.
#SBATCH --partition=hi # other options are low, med, bigmem, serial.

# Specify the number of requested nodes.
#SBATCH --nodes=1

# Specify the number of tasks per node, 
# which may not exceed the number of processor cores on any of the requested nodes.
#SBATCH --ntasks-per-node=1 

hostname # Prints the name of the compute node to the output file.
srun python hello_world.py # Runs the job.


user@agri:~/examples/hello_world$ sbatch hello_world.sh
Submitted batch job X


user@agri:~/examples/hello_world$ more hello_world_X.err
Module BUILD 1.6 Loaded.
Module slurm/2.6.2 loaded 


user@agri:~/examples/hello_world$ more hello_world_X.out
c8-22
Hello, World! 
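Note that the print statement in hello_world.py is Python 2 syntax; under the python/3.6.8 module the same script must call print as a function:

```python
# Python 3 form of hello_world.py: print is a function, not a statement.
msg = "Hello, World! \n"
print(msg)
```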

A simple example of an array job:

user@agri:~/examples/hello_world_sarray$ more hello_world.py
import sys

i = int(sys.argv[1])

print "Hello, World", str(i) + "! \n"


user@agri:~/examples/hello_world_sarray$ more hello_world.sh
#!/bin/bash -l
#SBATCH --job-name=hello_world

# Specify the name and location of i/o files.
#SBATCH --output=hello_world_%j.out
#SBATCH --error=hello_world_%j.err

# Send email notifications.  
#SBATCH --mail-type=END # other options are ALL, NONE, BEGIN, FAIL
#SBATCH --mail-user=user@ucdavis.edu

# Specify the partition.
#SBATCH --partition=hi # other options are low, med, bigmem, serial.

# Specify the number of requested nodes.
#SBATCH --nodes=1

# Specify the number of tasks per node, 
# which may not exceed the number of processor cores on any of the requested nodes.
#SBATCH --ntasks-per-node=1 

# Specify the number of jobs to be run,
# each indexed by an integer taken from the interval given by "array".
#SBATCH --array=0-1

hostname
echo "SLURM_NODELIST = $SLURM_NODELIST"
echo "SLURM_NODE_ALIASES = $SLURM_NODE_ALIASES"
echo "SLURM_NNODES = $SLURM_NNODES"
echo "SLURM_TASKS_PER_NODE = $SLURM_TASKS_PER_NODE"
echo "SLURM_NTASKS = $SLURM_NTASKS"
echo "SLURM_JOB_ID = $SLURM_JOB_ID"
echo "SLURM_ARRAY_TASK_ID = $SLURM_ARRAY_TASK_ID"

srun python hello_world.py $SLURM_ARRAY_TASK_ID


user@agri:~/examples/hello_world_sarray$ sbatch hello_world.sh
Submitted batch job X


user@agri:~/examples/hello_world_sarray$ more *.err
::::::::::::::
hello_world_X+0.err
::::::::::::::
Module BUILD 1.6 Loaded.
Module slurm/2.6.2 loaded 
::::::::::::::
hello_world_X+1.err
::::::::::::::
Module BUILD 1.6 Loaded.
Module slurm/2.6.2 loaded 

user@agri:~/examples/hello_world_sarray$ more *.out
::::::::::::::
hello_world_X+0.out
::::::::::::::
c8-22
SLURM_NODELIST = c8-22
SLURM_NODE_ALIASES = (null)
SLURM_NNODES = 1
SLURM_TASKS_PER_NODE = 1
SLURM_NTASKS = 1
SLURM_JOB_ID = 76109
SLURM_ARRAY_TASK_ID = 0
Hello, World 0! 

::::::::::::::
hello_world_X+1.out
::::::::::::::
c8-22
SLURM_NODELIST = c8-22
SLURM_NODE_ALIASES = (null)
SLURM_NNODES = 1
SLURM_TASKS_PER_NODE = 1
SLURM_NTASKS = 1
SLURM_JOB_ID = 76110
SLURM_ARRAY_TASK_ID = 1
Hello, World 1! 
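A common use of SLURM_ARRAY_TASK_ID is to select one input per array element, so that a single script processes a different file in each job. A minimal Python 3 sketch (the input file names here are hypothetical):

```python
import os

# One array element processes one entry from this list.
# These file names are placeholders for illustration only.
inputs = ["sample_a.txt", "sample_b.txt", "sample_c.txt"]

# SLURM sets SLURM_ARRAY_TASK_ID for each array element; fall back to 0
# so the script can also be run by hand outside of SLURM.
task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))

print("Task %d processing %s" % (task_id, inputs[task_id]))
```

The matching batch script would set #SBATCH --array=0-2 so the indices cover the whole list.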
support/hpc/software/python.txt · Last modified: 2021/05/07 09:49 by omen