
FBRI Clusters - Getting Started



The FBRI currently has two clusters available for high-performance computing (HPC) research. Each cluster consists of compute and/or GPU nodes plus one login node and one head node running the Slurm Workload Manager. Each cluster has many software modules ready for you to use, such as MATLAB, SPM, Python, and R.

OUR CLUSTERS:

HAWKING: (https://en.wikipedia.org/wiki/Stephen_Hawking)

  • Dell / Intel HPC Cluster - SLURM (Hawking):
    • 4 compute nodes totaling 192 cores, 768 GB RAM, 40Gb Interconnect
    • 2 GPU nodes totaling 4 NVIDIA Tesla V100 16 GB passive GPUs, 56 cores, 768 GB RAM, 40Gb Interconnect

HNL has a cluster available to approved individuals, allowing you to offload some of your work to an HPC cluster.

DIRAC: (https://en.wikipedia.org/wiki/Paul_Dirac)

  • Dell / Intel HPC Cluster - SLURM (Dirac):
    • 20 compute nodes, 960 cores, 3,840 GB RAM, 40Gb Interconnect

[NOTE: Virginia Tech also offers HPC systems for research through Advanced Research Computing (ARC)]


GETTING STARTED: 

LOGGING IN:
Be sure you are connected to the FBRI network locally or via the VPN. Using an SSH client of your choice, connect to a login node via:

ssh <username>@<clustername>-login.vtc.vt.edu

...for example, if your FBRI user ID is 'mgordon' use: 

ssh mgordon@<clustername>-login.vtc.vt.edu
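The login hostname always follows the pattern <clustername>-login.vtc.vt.edu. As a minimal sketch (the cluster name 'hawking' and user ID 'mgordon' are just the example values above, not requirements), the full SSH target can be assembled like this:

```shell
# Build the login hostname from a cluster name and an FBRI user ID.
# Both values are illustrative placeholders taken from the examples above.
cluster="hawking"
user="mgordon"
host="${cluster}-login.vtc.vt.edu"

echo "ssh ${user}@${host}"
# → ssh mgordon@hawking-login.vtc.vt.edu
```

The same pattern works for Dirac by setting cluster="dirac".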


HAWKING: 

ssh <user>@hawking-login.vtc.vt.edu

...for example, if your FBRI user ID is 'mgordon' use: 

ssh mgordon@hawking-login.vtc.vt.edu


DIRAC: 

ssh <user>@dirac-login.vtc.vt.edu

...for example, if your FBRI user ID is 'mgordon' use: 

ssh mgordon@dirac-login.vtc.vt.edu


ACCESSING DATA: 
You can access your network home, lab, and project shares from the clusters and from any FBRI-provided Linux system:

Project Data: 

/mnt/nfs/proj

Lab Shares: 

/mnt/nfs/labs

Home Directories:

/home/<username>/rihome

NOTE: Each cluster also has a local, shared home directory found in /home/<username>. This directory is required for the operation of the clusters and may be used; however, data stored there is only available from that cluster, not from any other network host. Best practice is to use your network home directory, which is mapped to /home/<username>/rihome.


USEFUL COMMANDS:
To list and load modules and perform other tasks within the cluster, please see: Useful HPC Commands.

For a list of Slurm-related commands, please see: HPC Slurm Commands.
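As a starting point before consulting those lists, here is a minimal, hypothetical Slurm batch script. The job name, resource requests, and module line are illustrative assumptions, not site requirements; the exact module names available on each cluster may differ:

```shell
#!/bin/bash
#SBATCH --job-name=demo        # hypothetical job name
#SBATCH --ntasks=1             # run a single task
#SBATCH --cpus-per-task=4      # example CPU request
#SBATCH --mem=8G               # example memory request
#SBATCH --time=01:00:00        # one-hour wall-clock limit

# Load software provided as a module (MATLAB, SPM, Python, and R are
# available as modules; check the exact name with 'module avail' first).
# module load python

# Keep job data on your network home (/home/<username>/rihome) so
# results remain visible from other FBRI systems after the job ends.
echo "Job started on $(hostname) at $(date)"
```

Submit the script with sbatch and check its status with squeue, e.g. sbatch myjob.sh followed by squeue -u <username>.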

For a list of scheduler commands for multiple schedulers, please see: Scheduler Commands (PDF).


MONITORING PERFORMANCE: 
To remotely monitor the system performance of the nodes running your jobs, please see: Monitoring HPC Graphically.



Chris Bateson