

Linux Cluster


The Linux cluster provides computational resources for Boston College faculty members and their research groups. This page explains how to get an account on the cluster and how to use it.


The use of the cluster is guided by the Cluster Policy Committee. The members of this committee are:

  • Stefano Anzellotti (Psychology)
  • David Broido (Physics)
  • Fredrik Haeffner (Chemistry)
  • Michelle Meyer, Chair (Biology)
  • Sam Ransbotham (CSOM)

If you have comments on cluster policy, please contact Barry Schaudt or one of the committee members.

Active Research Projects

We ask each research group for a short abstract of the work being done on the Linux Cluster. These abstracts are published at:


Sirius. The Sirius cluster was installed in the Summer of 2017; additional nodes were added in the Summer of 2018 and in January 2019. In addition to 390 TB of storage and an interactive node, it has 134 nodes with a total of 4,488 cores.

  • 66 compute nodes. Each node has two 14-core Intel Xeon E5-2680 v4 processors (2.40 GHz) sharing 128 GB of memory.
  • 66 compute nodes. Each node has two 20-core Intel Xeon Gold 6148 processors (2.40 GHz) sharing 192 GB of memory.
  • 1 node with two 12-core Intel Xeon E5-2680 v3 processors (2.5 GHz), 128 GB of memory, and two Nvidia K40 GPUs.
  • 2 nodes with two 20-core Intel Xeon Gold 6148 processors (2.40 GHz) sharing 192 GB of memory and four Nvidia V100 GPUs.

Pleiades. Acquired in the Fall of 2012, it has been upgraded twice and now has 165 nodes plus an interactive (login) node.

  • 64 compute nodes have two 8-core Intel Xeon E5-2680 processors (2.70 GHz).
  • 14 compute nodes have two 8-core Intel Xeon E5-2660 processors (2.2 GHz).
  • 36 nodes have two 12-core Intel Xeon E5-2697 processors (2.70 GHz).
  • 52 nodes have two 12-core Intel Xeon E5-2680 v3 processors (2.50 GHz) with 128 GB of memory.
  • 1 node has two 12-core Intel Xeon E5-2680 v3 processors (2.5 GHz) with 128 GB of memory and two Nvidia K80 GPUs.

Unless otherwise specified, each node has 64 GB of memory. There is 240 TB of shared file space.


In addition to compilers, many public-domain and commercial application software packages are installed. To request software for the cluster, contact Barry Schaudt (617-552-0242). The Cluster Software Web Pages describe the installed software, with brief instructions on how to use it.
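Many Linux clusters make installed software available through the environment modules system. Assuming this cluster does as well (an assumption; the Cluster Software Web Pages are authoritative), a typical session might look like:

```
# These commands assume the "environment modules" system is in use;
# the package name "gcc" is a hypothetical example.
module avail        # list all software packages available on the cluster
module load gcc     # add a package (here, a compiler) to your environment
module list         # show which modules are currently loaded
module unload gcc   # remove the package from your environment
```

If the cluster uses a different mechanism, the Cluster Software Web Pages will give the equivalent instructions for each package.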

Getting an Account

The Linux Cluster is available to all faculty members at Boston College and their research groups. There is a simple application process to get an account on the cluster.

User Guide

The User Guide explains how to use the clusters: how to log in, how to compile, how to run jobs, and more.


Except for short test runs, all jobs must be submitted to the queues. For general information on the queues, such as how jobs are scheduled, see: For information on how to submit jobs, see the Torque (PBS) User Guide.
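The Torque (PBS) User Guide has the full details; as a minimal sketch, a batch job is a shell script with `#PBS` directives at the top, submitted with `qsub`. The job name, walltime, and the executable `my_program` below are hypothetical placeholders, and `ppn=28` assumes one of the 28-core (two 14-core processor) nodes described above:

```shell
# Write a minimal Torque (PBS) batch script to a file, then submit it.
cat > myjob.pbs <<'EOF'
#!/bin/bash
#PBS -N example_job
#PBS -l nodes=1:ppn=28
#PBS -l walltime=01:00:00
#PBS -j oe

# Torque starts jobs in your home directory; change to the
# directory the job was submitted from before running.
cd "$PBS_O_WORKDIR"
./my_program
EOF

# Submit the job and check its status:
#   qsub myjob.pbs
#   qstat -u $USER
```

The `-l` lines request resources (nodes, cores per node, wall-clock limit) and `-j oe` merges standard output and error into one file; consult the Torque (PBS) User Guide for the options supported on this cluster.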

Purchase of Equipment and adding it to the Cluster

We encourage researchers to write grants to purchase nodes (or other appropriate hardware) that can be added to the cluster. Buying equipment and adding it to the cluster gives researchers access to a larger system than they could buy independently, counts as part of Boston College's match to the grant, and removes system administration responsibilities from researchers, allowing them to focus on their work. It also gives the entire Boston College community access to a larger resource. For more information, see

Tape Backup

Home directories are backed up about once every two weeks. Each backup takes more than a week to complete; when one finishes, the next begins. Any file that exists in your home directory will have a copy on the backup system. For files that change, we keep the current version and, for 15 days, the previous version. We can restore either the current version or, if the request is made within 15 days of the last change to the file, the previous version. If a file is deleted, we can recover it for up to 30 days from the day it was deleted. Files in /scratch are not backed up.

Policies/Proper Use Guidelines

Users of the Linux Cluster are expected to abide by the Cluster Policies, which include Boston College's Computing Policies and Guidelines.

Course Work

We will create accounts to accommodate the use of the cluster in courses. As always, there will be no shared accounts. These accounts exist only for the semester. If an account is needed after the course has been completed, the student should join a research group.

If you plan to use the cluster to support your classes, please contact Barry Schaudt (617-552-0242).

Scheduled Downtimes

Downtimes are scheduled as needed. Announcements are sent out one week in advance to everyone who has used the cluster in the previous 30 days. Downtimes are used only when necessary, to perform routine maintenance and to apply patches and upgrades. Emergency downtimes may occasionally be required.