Linux Cluster

Overview

The Linux cluster provides computational resources for BC faculty members and their research groups.  This page contains information, including links, on how to get an account on the cluster and how to use it.

Governance

The use of the cluster is guided by the Cluster Policy Committee.  The members of this committee are:

  • Andrew Beauchamp (Economics)
  • Patricia Doherty (Institute for Scientific Research)
  • Evan Kantrowitz, Chair (Chemistry)
  • Elizabeth Kensinger (Psychology)
  • Sam Ransbotham (CSOM)

If you have comments on cluster policy, please contact Barry Schaudt or one of the committee members.

Active Research Projects

We ask each research group for a short abstract of work being done on the Linux Cluster.  These abstracts are published at www.bc.edu/offices/researchservices/cluster/research.html.

Hardware

There are now two Clusters:  pleiades and scorpio.

Pleiades.  In the fall of 2012, BC acquired a new Linux cluster.  It has 78 compute nodes plus an interactive (login) node: 64 compute nodes have two 8-core Intel Xeon E5-2680 processors (2.70 GHz), and 14 compute nodes have two 8-core Intel Xeon E5-2660 processors (2.20 GHz).  Each node has 64 GB of memory, and there is 150 TB of shared file space.  The name of the cluster is pleiades.bc.edu.

Scorpio.  The older cluster has 110 nodes available for use.  There are four types of nodes:

  • Dual-core nodes (from Rackable, Inc.).  26 nodes.  Each has two dual-core AMD processors (2.6 GHz).  25 of these nodes have 8 GB of memory; one has 32 GB.
  • Quad-core nodes (from Rackable, Inc.).  32 nodes.  Each has two quad-core AMD processors (2.0 GHz) and 16 GB of memory.
  • Quad-core nodes (from Hewlett-Packard).  44 nodes.  Each has two quad-core Intel Xeon processors (2.26 GHz).  Two nodes have 48 GB of memory; the remaining 42 have 24 GB.
  • 6-core nodes (from Hewlett-Packard, 3.33 GHz).  12 nodes.  Each node has 36 GB of memory.

One node of each cluster is for interactive use (ssh to pleiades.bc.edu or to scorpio.bc.edu).  From this node, jobs can be submitted to the job scheduler to run on the compute nodes.  In addition to the interactive and compute nodes, each cluster has a file server with about 50 TB of disk space available. For more information, see the User Guide.
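As a sketch, logging in and working from the interactive node might look like the following (shown for pleiades; substitute scorpio.bc.edu as needed, and use your own BC username in place of the placeholder):

```shell
# Log in to the interactive (login) node -- not directly to a compute node
ssh username@pleiades.bc.edu

# From the login node, check the current state of the job queues
qstat -a

# Compile and do short test runs on the login node; submit real work
# to the scheduler so it runs on the compute nodes
qsub myjob.pbs
```

The qsub and qstat commands are the standard Torque (PBS) tools referenced under Queues below; see the Torque (PBS) User Guide for the options supported on these clusters.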

Software

In addition to compilers, many public domain and commercial application software packages are installed.  To request software for the cluster, contact Barry Schaudt (barry.schaudt@bc.edu, 617-552-0242).  The Cluster Software Web Pages describe the installed software, with brief instructions on how to use it.

Getting an Account

The Linux Cluster is available to all faculty members at Boston College and their research groups.  There is a simple application process to get an account on the cluster.

User Guide

The User Guide gives information on how to use the clusters (how to login, how to compile, how to run jobs, and more).

Queues

Except for short test runs, all jobs must be submitted to the queues.  For general information on the queues, such as how jobs are scheduled, see www.bc.edu/offices/researchservices/cluster/queues.html.  For information on how to submit jobs, see the Torque (PBS) User Guide.
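A minimal Torque (PBS) job script has the following shape.  The job name, resource limits, and program name below are placeholders; consult the Torque (PBS) User Guide for the queue names and limits in effect on these clusters.

```shell
#!/bin/bash
#PBS -N example_job          # job name (placeholder)
#PBS -l nodes=1:ppn=8        # request 1 node with 8 cores (adjust as needed)
#PBS -l walltime=01:00:00    # maximum run time of 1 hour
#PBS -j oe                   # merge stdout and stderr into one output file

cd $PBS_O_WORKDIR            # start in the directory the job was submitted from
./my_program                 # replace with your own executable
```

Submit the script from the login node with qsub (e.g., qsub myjob.pbs) and monitor its progress with qstat.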

Purchase of Equipment and adding it to the Cluster

We encourage researchers to write grants to purchase nodes (or other appropriate hardware) that can be added to the cluster.  Buying equipment and adding it to the cluster gives researchers access to a larger system than they could buy independently, counts as part of Boston College's match to the grant, and removes system administration responsibilities from researchers, allowing them to focus on their work.  It also gives the entire Boston College community access to a larger resource.  For more information, see www.bc.edu/offices/researchservices/cluster/purchase.html.

Tape Backup

Home directories are backed up nightly.  If a file exists in your home directory, there will be a copy on the backup system.  For files that exist and change, we save the current file and, for 15 days, the previous version.  We can restore either the current file or, if the request is made within 15 days of the last change to the file, the previous version.  If a file is deleted, we can recover it for up to 30 days from the day it was deleted.  Files in /scratch are not backed up.

Policies/Proper Use Guidelines

Users of the Linux Cluster are expected to abide by the Cluster Policies, which include Boston College's Computing Policies and Guidelines.

Course Work

We will create accounts to accommodate the use of the cluster in courses.  As usual, there will be no shared accounts.  These accounts are created only for the semester.  If an account is needed after the course has been completed, the student should join a research group.

If you plan to use the cluster to support your classes, please contact Barry Schaudt (617-552-0242, barry.schaudt@bc.edu).

Scheduled Downtimes

Downtimes are scheduled for the second Tuesday of every even-numbered month.  The scheduled downtimes will be used only if necessary, to perform routine maintenance and apply patches and upgrades.  Emergency downtimes may be required.