BioHPC Cloud

[Image: biohpc_cloud]

Compared to traditional HPC facilities, BioHPC aims to be easy to use for all of our users, regardless of computing experience. Though the heart of our systems is a high-performance compute cluster with fast storage, we offer a wide range of services. The BioHPC Cloud encompasses all of our systems and services, with access to:

  • Simple web-based tools via the BioHPC Portal.
  • Specialized, research-focused web applications, e.g., Astrocyte.
  • Thin-client and workstation computers that are directly connected to the Nucleus compute cluster.
  • Virtualized servers providing access to a multitude of research software.
  • Traditional remote access using a command-line terminal and file transfer clients.

Nucleus Compute Cluster

[Image: nucleus.jpg]

Our main compute facility is Nucleus, a 500-node heterogeneous compute cluster consisting of the following partitions:

Partition    Nodes   Physical (Logical) Cores   Memory Capacity (GB)   GPU
32GB         280     16 (32)                    32                     N/A
128GB        24      16 (32)                    128                    N/A
256GB        78      24 (48)                    256                    N/A
256GBv1      48      28 (56)                    256                    N/A
384GB        2       16 (32)                    384                    N/A
GPU          8       28 (56)                    256                    Tesla K20/K40
GPUp4        16      36 (72)                    384                    Tesla P4
GPUp40       16      36 (72)                    384                    Tesla P40
GPUp100      12      28 (56)                    256                    Tesla P100 (2x)
GPUv100      2       36 (72)                    384                    Tesla V100 (2x)
PHG          8       24 (48)                    256                    N/A
webDesktop   6       24 (48)                    256                    Tesla K80

Across these systems, a total of 28,000+ CPU cores and 65 TB+ of RAM are available.

Access to the cluster is available through the BioHPC Portal or via SSH login to the head node, nucleus.biohpc.swmed.edu, with a BioHPC username and password.
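For example, logging in and submitting a batch job looks like the following sketch (the job name, time limit, and program names are hypothetical; the partition names come from the table above):

    # Log in to the head node with your BioHPC username and password
    ssh username@nucleus.biohpc.swmed.edu

A minimal SLURM batch script, here called example_job.sh, requests a partition and is then submitted with sbatch:

    #!/bin/bash
    #SBATCH --job-name=example_job    # hypothetical job name
    #SBATCH --partition=256GB         # any partition from the table above
    #SBATCH --nodes=1
    #SBATCH --time=02:00:00           # wall-time limit for the job
    ./my_analysis                     # replace with your own program

Submit it with sbatch example_job.sh, and monitor it with squeue -u $USER.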


Lysosome Storage System

[Image: lysosome.jpg]

Lysosome is our primary high-performance bulk storage system, consisting of 5.4 PB of raw storage provided by a Data Direct Networks SFA12K system and an additional 960 TB of raw storage provisioned on Dell PowerVault RAID hardware.

Our Data Direct Networks SFA12K system uses dual active-active storage controllers connected to multiple storage enclosures. Providing raw speeds of 35-40 GB/s, the storage is configured to host a Lustre parallel filesystem and is connected to the Nucleus cluster with multiple InfiniBand and 10 Gb Ethernet links. 40 Object Storage Targets (OSTs), backed by the DDN storage, are aggregated into a single high-performance filesystem, with metadata operations directed by a Metadata Target (MDT). This architecture is well suited to typical BioHPC workloads, which operate on large image and sequence datasets.
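Because files on Lustre can be striped across several OSTs, throughput for large files often improves with a wider stripe. As a minimal sketch using Lustre's standard lfs utility (the directory path here is hypothetical):

    # Show the current stripe layout of a file or directory
    lfs getstripe /project/mygroup/dataset

    # Stripe new files in this directory across 4 OSTs with a 1 MB stripe size
    lfs setstripe -c 4 -S 1M /project/mygroup/dataset

Striping mainly benefits large sequential reads and writes; directories full of small files are usually better left at the default stripe count.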

Endosome Storage System

[Image: gpfs.png]

Endosome is our off-site high-performance bulk storage, consisting of 5.4 PB of raw storage provided by an IBM ESS system. Endosome uses the IBM GPFS file system and is located at the Clements University Hospital site. It is connected to our Nucleus cluster and Lysosome storage by high-speed fiber links across Harry Hines Blvd. Currently it provides the /work and /archive file spaces for users, as well as off-site backups of /home2 mirrors, /project incrementals, and other web services for disaster recovery (DR). The system comprises ~720 8 TB hard drives in 12 enclosures, providing 3.4 PB of usable space. Aggregate I/O throughput is ~40 GB/s, and the maximum throughput per compute node is ~3.4 GB/s.
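As a rough sketch, per-node write throughput to /work can be checked with a simple dd test (the path below is hypothetical, and results vary with system load):

    # Write a 4 GB test file with direct I/O, bypassing the page cache
    dd if=/dev/zero of=/work/mygroup/ddtest bs=1M count=4096 oflag=direct

    # Clean up the test file afterwards
    rm /work/mygroup/ddtest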

Lamella Cloud Storage Gateway

[Image: lamella.png]

Lamella is our cloud storage gateway, providing easy access to files stored on BioHPC from Windows, Mac, or Linux clients, over the web, or on a smartphone. The Lamella web interface at lamella.biohpc.swmed.edu provides an easy-to-use site, similar to Dropbox, to upload, download, manage, and share files. Each user has an allocation of cloud storage and can also directly access their home, project, and work allocations. The service also supports file synchronization between computers using the ownCloud client.

Direct access to files in home, project, and work locations is possible by mounting shared drives under Windows, Mac OS X, or Linux. Lamella shares these directories using SMB, with transfer rates up to 100 MB/s for users on the campus 1 Gbps network and higher speeds available via the campus 10 Gbps network. FTP access to storage is also available via the Lamella gateway.
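As an illustration, a Linux client can mount a Lamella SMB share using CIFS (the share name and mount point here are assumptions; check the BioHPC Portal for the exact share paths):

    # Create a mount point and mount the share with your BioHPC credentials
    sudo mkdir -p /mnt/biohpc
    sudo mount -t cifs //lamella.biohpc.swmed.edu/username /mnt/biohpc \
        -o username=your_biohpc_user,vers=3.0

On Windows, the same share can be mapped as a network drive using the equivalent UNC path.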

BioHPC Cloud Client

[Image: workstation_thinclient.png]

BioHPC Cloud Clients are computers running Linux, integrated into the BioHPC Cloud. These systems provide a graphical Linux desktop and have direct access to BioHPC storage over the campus 10 Gbps network. All software available on the BioHPC cluster can be used on the cloud clients, which are ideal for developing and testing code and analysis workflows. Scripts and programs can be run locally on the client or submitted directly to the Nucleus cluster using the sbatch command. Access to Windows applications is possible with virtualization.
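For instance, a workflow can be tested locally on the client and then submitted to the cluster unchanged (the module and script names below are hypothetical; BioHPC software is typically loaded through environment modules):

    # Load the software environment and test the analysis locally
    module load python
    python my_analysis.py --input small_test_set/

    # Once it works, submit the same workflow to the Nucleus cluster
    sbatch run_analysis.sh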

Two models of cloud client are available. The thin-client is a small desktop device, while the workstation is a larger tower PC. The workstation has a more powerful CPU than the thin-client and has an internal GPU card. The workstation is recommended for users running local analyses regularly or developing GPU code.

IB Network

[Image: ibnetwork2.png]

BioHPC compute nodes are interconnected via an InfiniBand network in a fat-tree topology. The 6 central EDR switches provide up to 43.2 Tbps of total bandwidth. FDR and EDR cables connect the nodes to the 23 Mellanox switches, with fiber cables between switches; these links support port speeds of up to 56 Gbps and 100 Gbps respectively.
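As a quick sketch, the active link rate of a node's InfiniBand adapter can be inspected with the standard ibstat tool from the infiniband-diags package:

    # Show local HCA ports; the Rate field reports the link speed in Gbps
    ibstat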