Community Cluster Templates

The descriptions below are templates that may be used in a proposal or publication to acknowledge the use of, or describe, UD's Information Technologies (IT) HPC resources. Contact us if you need help modifying these templates to suit your specific needs.

HPC Acknowledgement

This research was supported in part through the use of Information Technologies (IT) resources at the University of Delaware, specifically the high-performance computing resources.
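For example, in a LaTeX manuscript this acknowledgement is typically placed in an unnumbered acknowledgements section; the section name and placement below are illustrative only, and you should follow your venue's template:

```latex
% Illustrative placement only; acknowledgement section conventions vary by venue.
\section*{Acknowledgments}
This research was supported in part through the use of Information
Technologies (IT) resources at the University of Delaware, specifically
the high-performance computing resources.
```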

HPC Brief Grant Description

Caviness Cluster

The first generation of the University of Delaware’s Caviness community cluster consists of 127 compute nodes containing 4536 Intel E5 family processor cores and 25TB RAM; a 200TB Lustre filesystem; and an Omni-Path high-speed network.¹

Notes

¹ If you have purchased systems with NVIDIA Pascal cards and would like help augmenting this description, please contact us for more information.

Farber Cluster

The University of Delaware’s Farber community cluster consists of 190 compute nodes containing 3800 Intel E5 family processor cores and 14TB RAM; a 256TB Lustre filesystem; and an FDR InfiniBand high-speed network.¹

Notes

¹ If you have purchased systems with NVIDIA Tesla or Intel Xeon Phi cards and would like help augmenting this description, please contact us for more information.

Mills Cluster

The University of Delaware’s Mills community cluster consists of 200 compute nodes totaling 5160 AMD “Interlagos” cores and 14.5TB RAM; a 180TB Lustre filesystem; and a QDR InfiniBand network backplane.

Detailed Grant Description

Caviness Cluster

The Caviness community cluster is located in the University’s core data center; it is supported by the central IT department, with redundant power, cooling, and network connectivity. Its Open Compute Project (OCP) design optimizes both space and power usage, and provides more reusable infrastructure over time. The cluster is designed to grow with the addition of new racks, and to be upgraded by replacing nodes in existing racks. First-generation compute nodes feature dual Intel E5-2695v4 (18-core “Broadwell”) processors, at least 128 GiB RAM,¹ and a 960 GB SSD scratch disk. Some nodes include dual NVIDIA “Pascal” P100 GPUs. Each first-generation rack also includes 100 TB of Lustre storage and 40 TB of NFS storage. This yields a total of 4536 CPU cores, 25 TiB of RAM, 35840 CUDA cores, 200 TB of Lustre scratch storage, and 80 TB of NFS workgroup and longer-term storage.² Nodes are connected to a 100 Gbps Intel Omni-Path high-speed network.

Notes

¹ A limited set of 256GB and 512GB RAM compute nodes are available on Caviness for applications which require more memory.  If you have purchased such nodes and would like help augmenting this description, please contact us for more information.

² If you have purchased additional storage and would like help augmenting this description, please contact us for more information.

Farber Cluster

The University of Delaware’s Farber community cluster is located in the University’s core data center; it is supported by the central IT department, with redundant power, cooling, and network connectivity.  The 190 compute nodes contain dual Intel E5 family 10-core processors, 64 GB or 128 GB RAM,¹ and a 500 GB local scratch disk, for a total of 3800 CPU cores and 14.3 TB RAM.  A 256 TB Lustre filesystem is available for on-line processing, and a 150 TB NFS filesystem is provided for workgroup and long-term storage.²  The cluster has a 56 Gbps (FDR) InfiniBand network for MPI and Lustre fast disk access, a 1 Gbps TCP/IP network for scheduling & NFS, and two 10 Gbps links to the core UD network.

Notes

¹ A limited set of 128GB RAM compute nodes are available on Farber for applications which require more memory.  If you have purchased such nodes and would like help augmenting this description, please contact us for more information.

² If you have purchased additional storage and would like help augmenting this description, please contact us for more information.

Mills Cluster

The University of Delaware’s Mills community cluster is located in the University’s core data center; it is supported by the central IT department, with redundant power, cooling, and network connectivity.  Compute nodes contain either dual or quad AMD Opteron 6234 (12-core “Interlagos”) processors, 64 GB RAM per processor, and 1.7 TB of local scratch space, for a total of 5160 cores and 14.5 TB RAM.  Storage options include a 180 TB Lustre filesystem for on-line processing, and a 72 TB NFS filesystem for workgroup and long-term storage.  The cluster has a 40 Gbps (QDR) InfiniBand network for MPI and Lustre fast disk access, a 1 Gbps TCP/IP network for scheduling & NFS, and a 10 Gbps uplink to the core UD network.