h1. User instructions for Dione cluster

University of Turku
Åbo Akademi
Jussi Salmi (jussi.salmi@utu.fi)

h2. 1. Resources
h3. 1.1. Computation nodes

<pre>
PARTITION NODES NODELIST  MEMORY
normal    36    di[1-36]  192GB
gpu       6     di[37-42] 384GB
</pre>

Dione has 6 GPU nodes on which the user can run computations that benefit from very fast, highly parallel number crunching, for example neural networks. The other 36 nodes are general-purpose compute nodes. The nodes are connected by a fast Infiniband network, which enables the use of MPI (Message Passing Interface) in the cluster. In addition, the cluster is connected to the EGI grid (European Grid Infrastructure) and NORDUGRID, which are allowed to use a part of the computational resources. The website

https://p55cc.utu.fi/

contains information on the cluster, provides a cluster monitor, and gives instructions on getting access to and using the cluster.
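
To run on one of the GPU nodes, a batch job (see section 2) requests the gpu partition shown in the table above. The following is only a minimal sketch; whether an explicit GPU request such as --gres is needed, and its exact name, depend on the cluster configuration:

<pre>
#SBATCH -p gpu        # run in the gpu partition (nodes di[37-42])
#SBATCH --gres=gpu:1  # request one GPU; the GRES name "gpu" is an assumption, check the cluster documentation
</pre>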
h3. 1.2. Disk space

The system has an NFS4 file system with 100 TB of capacity on the home partition. The file system is not backed up, so users are responsible for backing up their own data.
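
One way to keep a copy of important data elsewhere is to pull it over SSH with rsync. This is only a sketch: the host name dione.utu.fi and the paths are placeholders, so use the login address and directories you normally use.

<pre>
# Run on your own computer; host name and paths are placeholders
rsync -av username@dione.utu.fi:~/results/ ~/dione-backup/results/
</pre>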
h3. 1.3. Software

The system uses the SLURM workload manager (Simple Linux Utility for Resource Management) for scheduling jobs.

The cluster uses the module system for loading software modules, including different versions of the same software, for execution.
h2. 2. Executing jobs in the cluster

The user may not execute jobs on the login node. All jobs must be dispatched to the cluster using SLURM commands. Normally a script is used to define the job and its parameters for SLURM. There is a large number of parameters and environment variables that control how jobs are executed; see the SLURM manual for a complete list.

A typical script for starting a job can look as follows (name: batch-submit.job):

<pre>
#!/bin/bash
#SBATCH --job-name=test
#SBATCH -o result.txt
#SBATCH --workdir=<Workdir path>
#SBATCH -c 1
#SBATCH -t 10:00
#SBATCH --mem=10M
module purge # Purge modules for a clean start
module load <desired modules if needed> # You can either inherit the module environment or load modules here

srun <executable> # Run your program as a job step
srun sleep 60     # Example job step that just sleeps for 60 seconds
</pre>

The script is submitted with

sbatch batch-submit.job

The script defines several parameters that will be used for the job.

<pre>
--job-name    defines the name of the job
-o result.txt redirects the standard output to result.txt
--workdir     defines the working directory
-c 1          sets the number of CPUs per task to 1
-t 10:00      sets the time limit of the task to 10 minutes, after which it is stopped
--mem=10M     sets the memory required by the task to 10 MB
</pre>

srun starts a task. When the task is started, SLURM gives it a job id that can be used to track its execution, e.g. with the squeue command.
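
A typical submit-and-monitor sequence could look like the following sketch; the job id 12345 is a made-up example value.

<pre>
$ sbatch batch-submit.job
Submitted batch job 12345
$ squeue -u $USER           # list your own jobs
$ scontrol show job 12345   # detailed information about the job (hypothetical id)
</pre>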
h2. 3. The module system

Many of the software packages on Dione require you to load the corresponding environment modules before using the software. Different versions of the same software can be selected with the module command.

<pre>
module avail                Show available modules
module list                 Show loaded modules
module unload <module>      Unload a module
module load <module>        Load a module
module load <module>/10.0   Load version 10.0 of <module>
module purge                Unload all modules
</pre>
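
A typical session starts from a clean module environment and then loads the needed software. This is a sketch: the module name and version below are placeholders, check module avail for what is actually installed.

<pre>
module purge            # start from a clean environment
module avail            # see which modules exist
module load gcc/8.3.0   # placeholder module name and version
module list             # verify what is loaded
</pre>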
h2. 4. Useful commands in SLURM
92
93
<pre>
94
sinfo shows the current status of the cluster.
95
96
sinfo -p gpu Shows the status of the GPU-partition
97
sinfo -O all Shows a comprehensive status report node per node
98
99
sstat <job id> Shows information on your job
100
101
squeue The status of the job queue
102
squeue -u <username> Show only your jobs
103
104
srun <command> Dispatch jobs to the scheduler
105
106
sbatch <script> Run a script defining jobs to be run
107
108
scontrol Control your jobs in many aspects
109
scontrol show job <job id> Show details about the job
110
scontrol -u <username> Show only a certain users jobs
111
112
scancel <job id> Cancel a job
113
scancel -u <username> Cancel all your jobs
114
</pre>
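
For example, to check the gpu partition, list your own jobs, and cancel one of them (12345 is a made-up job id):

<pre>
sinfo -p gpu        # state of the gpu partition
squeue -u $USER     # your own jobs in the queue
scancel 12345       # cancel a job; 12345 is a hypothetical job id
</pre>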
h2. 5. Further information

Further information can be requested from the administrators (fgi-admins@lists.utu.fi).