HPC Cluster @ IISER Bhopal
Cluster is Up and Running
Queue Policy
Kanad runs the PBS job scheduler, and jobs are assigned to two major user pools:
Core user pool: consists of users with a heavy and regular need for computational resources. There are a total of 127 nodes available for core users, of which 4 are high-memory nodes.
Peripheral user pool: consists of users with low resource requirements. There are a total of 5 nodes currently available for this pool.

Queues available for Core users
queue name           maximum nodes   maximum walltime   run limit / max. nodes   relative priority
interactive          1               30 mins            4 jobs                   1
debug                1-8             30 mins            4 jobs                   1
short1               9-16            24 hours           90 nodes                 2
short2               9-16            48 hours           64 nodes                 3
medium1              4-8             48 hours           32 nodes                 3
medium2              4-8             72 hours           64 nodes                 4
long1                1-3             72 hours           15 nodes                 4
long2                1-4             120 hours          8 nodes                  5
hmq (High Memory)    1-4             72 hours           4 jobs                   3
Queues Available for Peripheral Users
queue name   range of cores   maximum walltime   run limit   relative priority
p-queue      8-16 cores       48 hours           16 cores    1

The run limit refers to the maximum total number of jobs (or nodes) that can be running in a queue at one time. The queue name is the keyword to be used in a PBS job script to direct a job to the appropriate queue. Jobs are scheduled according to the relative priority indicated in the last column above. IISER Bhopal students, researchers and faculty members can apply for accounts, mentioning the user pool required along with a justification of the resources needed. Click on the Application Form to apply online.
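As an illustration, a minimal PBS job script for the short1 queue might look like the following sketch. The job name, module setup, processors-per-node count, and executable are placeholders, not site-specific values; adjust them to your own application and to the limits in the table above.

```shell
#!/bin/bash
#PBS -N my_job              # job name (placeholder)
#PBS -q short1              # queue keyword from the table above
#PBS -l nodes=10:ppn=16     # node count must lie in short1's range (9-16); ppn=16 is an assumption
#PBS -l walltime=24:00:00   # must not exceed the queue's maximum walltime (24 hours)

# Run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"

# Route heavy I/O to the parallel file-system, as the usage policy requires
OUTDIR="$SCRATCH/my_job_output"
mkdir -p "$OUTDIR"

# Launch the application (placeholder executable)
mpirun ./my_application > "$OUTDIR/run.log" 2>&1
```

The script would be submitted with `qsub script.pbs`, and `qstat -u $USER` shows the state of your queued and running jobs.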

Usage Policy
  • Users will submit jobs only through the queue. Running long production runs interactively on any node without using the queue is strictly forbidden.
  • Chaining of jobs on the interactive and debug queues is strictly forbidden.
  • Users must not place jobs on hold for more than 4 hours. Jobs held beyond this limit will be deleted by the systems administrator.
  • A single user may use at most 50 nodes across all running jobs put together.
  • The High Memory Queue (hmq) is dedicated to jobs requiring more than 64 GB per node and should only be used for such jobs.
  • A single user can have no more than 3 jobs waiting in the queue at any point in time.
  • Users should (as far as possible) route all runtime I/O operations to the parallel file-system (scratch area).
The filesystems accessible to the users are divided into two main parts: Home (comprising /home1 and /home2) and Scratch (/scratch). All areas are completely backed up on our tape library (TSM-based).
Home Area
Every user will be allocated a home area by default: 1 TB for Core users and 500 GB for Peripheral users. Additional space can be allocated based on adequate justification. The location of this area will be, e.g., /home1/<userid>, and it can also be accessed via the environment variable $HOME.

Scratch Area
The Scratch area is mounted on GPFS (a parallel file-system) and should be preferred over the NFS-mounted home area for I/O-intensive activity. Every user will be assigned their own directory in the scratch area, e.g. /scratch/<userid>, which is also accessible via the environment variable $SCRATCH. Currently, there is no quota limit for any user in the scratch area. However, please be advised that files in the scratch area older than 14 days will be purged, so users are advised to back up older files.
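Since the purge is age-based, users can periodically check which of their scratch files are at risk. A minimal sketch, assuming the $SCRATCH variable is set as described above:

```shell
# List regular files under $SCRATCH whose contents have not been
# modified in the last 14 days -- candidates for the purge.
find "$SCRATCH" -type f -mtime +14 -print
```

Any files listed can then be copied back to the home area (or archived with `tar`) before they are removed.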

For more information on the software and hardware available on the cluster, kindly contact the HPC Users Committee.