Linux Cluster User Guide

This user guide covers the basic commands needed to use the cluster.

Accessing the Cluster

To log in to the sirius cluster:

     ssh -l user-name -p Port-Number sirius

where user-name is your BC user name. Enter your password when prompted.

From on campus, you do not need to specify the port number (i.e., you can omit the "-p Port-Number" option). From off campus, you must enter a port number; contact Research Services for the port number.

If you are using an X11 client,

   ssh -Y -l user-name -p Port-Number sirius

will allow you to run graphical applications on sirius from your workstation. The pleiades cluster is accessed the same way.

You can also connect using NoMachine, which gives you a desktop on the cluster. Ask Research Services for instructions.

Changing your password

Please change your initial password after your first login. To change your password, type:

    passwd

then follow the instructions.

User Environment

We use "Environment Modules" to keep the environment clean. For application software, there is a module to load before you can use the software; the module sets all paths and environment variables necessary to use the software. Environment Modules are simple to use. For example, to use the software called matlab, first load the matlab module by typing:

    module load matlab

If you use a module frequently, you can add it to the existing "module load" command in your .tcshrc or your .bash_profile, depending on which shell you are using. The default shell is tcsh.
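For example, for bash users the relevant line might look like this (matlab is just the example module from above; substitute the modules you actually use):

```shell
# ~/.bash_profile (bash) or ~/.tcshrc (tcsh, the default shell):
# add your frequently used modules to the existing "module load" line
module load matlab
```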

There are some basic commands:

   module avail                 - list all currently available modules
   module list                  - list all modules currently loaded
   module load modulefile       - add/load one or more modulefiles
   module unload modulefile     - unload a modulefile
   module switch mod1 mod2      - replace mod1 with mod2

There should be no need for you to set a path yourself to run application software that is available to all users of the system.
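Putting the commands above together, a typical session on the cluster might look like this (matlab is just the example module used earlier):

```shell
module avail            # see which modules are installed
module load matlab      # set up the MATLAB environment
module list             # confirm which modules are loaded
module unload matlab    # remove it when you are done
```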


The gnu, Intel, and other compilers are installed on scorpio. For Intel, the C, C++, and Fortran 77/90/95 compilers are icc, icpc, and ifort, respectively. To compile and link a C program with the GNU compiler, for example, you may type at your shell prompt:

    gcc -o hello hello.c

The pathscale module needs to be loaded to use the PathScale compilers (module load pathscale).

File Systems

Each account has a home directory. Home directories are backed up nightly, so every file in your home directory has a copy on the backup system. For files that change, we save the current version and, for 15 days, the previous version. We can restore either the current file or, if the request is made within 15 days of the last change to the file, the previous version. If a file is deleted, we can recover it for up to 30 days from the day it was deleted.

Home directories have a quota; the default quota is 10 TB. If you need more space, keep in mind there is a file system called /scratch that can be used for temporary files. You may also request a larger quota by sending email to Research Services.

For temporary files, please create a directory for your work in /scratch and put your temporary files there.  Files in /scratch are not backed up.
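For example (the directory name is up to you; $USER expands to your user name):

```shell
# create a personal work area on the scratch file system
mkdir -p /scratch/$USER/myjob
cd /scratch/$USER/myjob
```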

Running a program (Queues)

Other than short test jobs, all jobs must be submitted to the queuing system. For information on the queue structure, see the Cluster Queue web page. We use PBS (Torque), along with the Moab scheduler, to dispatch jobs to the compute nodes. For more information and instructions, see the Torque User Guide. The most common PBS and Moab commands are as follows:
    qsub          submit a job
    qdel          delete a job from the queue
    showq         show the jobs waiting to run, and the jobs running
    showstart     display the estimated start time of a job waiting to run

Parameters such as memory, the number of cores, and wall-clock time are specified in a command file. Here is an example of a command file.

#PBS -l mem=500mb,nodes=1:ppn=1,walltime=1:00:00
#PBS -m abe -M your-email-address

cd work-directory
./your-program        # replace with the command that runs your program

This will request 500 MB of memory and one core for 1 hour. The -m abe option sends you email when the job begins, ends, or aborts.

To submit the job via the script file sample.pbs, you may type

    qsub  sample.pbs

Specifying the maximum wall-clock time (walltime=hh:mm:ss) helps the scheduler start your job promptly. Wall-clock time is the elapsed time from when your job starts running to when it completes. We have one queue; the scheduler determines where to run your job so that it starts as soon as possible. We have reserved some nodes for short jobs. Having one queue and letting the scheduler place the job means you cannot submit your job to the wrong queue (that is, a queue that is full while processors are available in another). Likewise, we have nodes with different amounts of memory, and the scheduler guarantees that the memory you requested is reserved for your job alone.

Unfortunately, both the memory and wall-clock time parameters require you to overestimate; if you underestimate, your job may be killed. Use the resource usage reported when a job finishes to make better estimates on future submissions. For assistance, contact Research Services.
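One way to see what a job actually used, assuming Torque's qstat, is to look at the resources_used fields in the full job status (901 is the example job id used below):

```shell
# show full status for job 901, filtered to cput, mem, and walltime used
qstat -f 901 | grep resources_used
```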

To view all jobs in the system, type:

    showq

To kill your job with job id 901, you may type:

    qdel 901

To view the estimated start time of job id 901, type:

    showstart 901

This is only an estimate of the start time, and the start time may change as other jobs are submitted.


The following PathScale options may generate faster code:

   -O3 -OPT:Ofast
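For example, with the PathScale C compiler (pathcc; the program name is illustrative):

```shell
# compile and link with aggressive optimization
pathcc -O3 -OPT:Ofast -o myprog myprog.c
```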


In order to use OpenMP, your program must be compiled and linked with the

   -mp

option.


For assistance, please contact Research Services.