The Linux cluster provides computational resources for BC faculty members and their research groups. This page describes the cluster and provides links on how to get an account and how to use the system.
The use of the cluster is guided by the Cluster Policy Committee. The members of this committee are:
- David Broido (Physics)
- Fredrik Haeffner (Chemistry)
- Elizabeth Kensinger (Psychology)
- Michelle Meyer, Chair (Biology)
- Sam Ransbotham (CSOM)
If you have comments on cluster policy, please contact Barry Schaudt or one of the committee members.
Active Research Projects
We ask each research group for a short abstract of work being done on the Linux Cluster. These abstracts are published at:
Sirius. Installed in the Summer of 2017. In addition to 390 TB of storage and an interactive node, it has:
- 66 compute nodes. Each node has two 14-core Intel Xeon E5-2680 v4 processors (2.40 GHz).
We plan to add nodes to this system in late 2017 or early 2018.
Pleiades. Acquired in the Fall of 2012. It has been upgraded twice and now has 165 nodes plus an interactive (login) node.
- 64 compute nodes have two 8-core Intel Xeon E5-2680 processors (2.70 GHz).
- 14 compute nodes have two 8-core Intel Xeon E5-2660 processors (2.20 GHz).
- 36 compute nodes have two 12-core Intel Xeon E5-2697 processors (2.70 GHz).
- 52 compute nodes have two 12-core Intel Xeon E5-2680 v3 processors (2.50 GHz) with 128 GB of memory.
- 1 node has two 12-core Intel Xeon E5-2680 v3 processors (2.50 GHz) with 128 GB of memory and two NVIDIA K40 GPUs.
- 1 node has two 12-core Intel Xeon E5-2680 v3 processors (2.50 GHz) with 128 GB of memory and two NVIDIA K80 GPUs.
Unless otherwise specified, each node has 64 GB of memory. There is 150 TB of shared file space.
In addition to compilers, many public domain and commercial application software packages are installed. To request software for the cluster, contact Barry Schaudt (firstname.lastname@example.org, 617-552-0242). The Cluster Software web pages describe the installed software, with brief instructions on how to use it.
Getting an Account
The Linux Cluster is available to all faculty members at Boston College and their research groups. There is a simple application process to get an account on the cluster.
The User Guide explains how to use the clusters (how to log in, how to compile, how to run jobs, and more).
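To illustrate the basic login-and-compile workflow the User Guide covers, here is a minimal sketch. The hostname, module name, and file names below are placeholders rather than the actual values for the BC cluster; consult the User Guide for the real ones.

    # Log in to the interactive (login) node; the hostname is a placeholder.
    ssh username@cluster.example.edu

    # If environment modules are available, list them and load a compiler.
    # The module name "gcc" is an assumption.
    module avail
    module load gcc

    # Compile a small C program on the login node (file names are placeholders).
    gcc -O2 -o myprog myprog.c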
Except for short test runs, all jobs must be submitted through the queues. For general information on the queues, such as how jobs are scheduled, see: www.bc.edu/offices/researchservices/cluster/queues.html. For information on how to submit jobs, see the Torque (PBS) User Guide.
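To give a flavor of what submitting a job through the queues looks like with Torque (PBS), here is a minimal sketch of a batch script. The job name, queue name, resource requests, and program name are placeholders; see the Torque (PBS) User Guide and the queues page for the values appropriate to this cluster.

    #!/bin/bash
    #PBS -N example_job            # job name (placeholder)
    #PBS -l nodes=1:ppn=16         # request 1 node with 16 cores (adjust to the node type)
    #PBS -l walltime=01:00:00      # request one hour of wall-clock time
    #PBS -q batch                  # queue name is an assumption; see the queues page
    #PBS -j oe                     # merge standard output and standard error

    cd $PBS_O_WORKDIR              # run from the directory the job was submitted from
    ./myprog                       # your program (placeholder name)

Save the script (for example, as example_job.pbs), submit it with "qsub example_job.pbs", check its status with "qstat -u $USER", and remove it with "qdel <jobid>".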
Purchase of Equipment and Adding It to the Cluster
We encourage researchers to write grants to purchase nodes (or other appropriate hardware) that can be added to the cluster. Buying equipment and adding it to the cluster gives researchers access to a larger system than they could buy independently, counts toward Boston College's match to the grant, and removes system administration responsibilities from researchers, allowing them to focus on their work. It also gives the entire Boston College community access to a larger resource. For more information, see www.bc.edu/offices/researchservices/cluster/purchase.html.
Home directories are backed up nightly. Every file in your home directory has a copy on the backup system. For files that change, we keep the current version and, for 15 days, the previous version. We can restore either the current file or, if the request is made within 15 days of the last change to the file, the previous version. If a file is deleted, we can recover it for up to 30 days from the day it was deleted. Files in /scratch are not backed up.
Policies/Proper Use Guidelines
We will create accounts to accommodate the use of the cluster in courses. As usual, there will be no shared accounts. These accounts are created only for the semester. If an account is needed after the course has been completed, the student should become part of a research group.
If you plan to use the cluster to support your classes, please contact Barry Schaudt (617-552-0242, email@example.com).
Downtimes are scheduled as needed. Announcements are sent out one week in advance to all users who have used the cluster in the previous 30 days. Scheduled downtimes are used only when necessary, to perform routine maintenance and apply patches and upgrades. Emergency downtimes may also be required.