
The Caviness cluster, UD’s third Community Cluster, is a distributed-memory Linux cluster deployed in July 2018. It is based on a rolling-upgradeable model for expansion and replacement of hardware over time. The first generation consists of 126 compute nodes (4,536 cores, 24.6 TB memory). Each node is built around Intel “Broadwell” 18-core processors in a dual-socket configuration, for 36 cores per node. An OmniPath network fabric supports high-speed communication and the Lustre filesystem (approximately 200 TiB of usable space). Gigabit and 10-Gigabit Ethernet networks provide access to additional filesystems and the campus network. The cluster was purchased with a proposed five-year life for the first-generation hardware, putting its refresh in the April to June 2023 time frame.

This cluster was designed to pack more computing power into less physical space, use power more efficiently, and leverage reusable infrastructure for a longer overall lifespan. It uses Penguin Computing’s Tundra Extreme Scale (ES) design, which follows the specifications of the Open Compute Project.

This cluster is currently unavailable for purchase.

You can request access from an existing stakeholder on Caviness.

Provided Infrastructure

Basic details
  • Installation in a secure data center
  • Racks, floor space, cooling and power
  • Five-year warranty on nodes
Network
  • 2 x 10 Gbps uplink to campus network
  • 12 x 100 Gbps Intel OmniPath uplinks between racks
Storage

Aggregate storage across the cluster:

  • 320 TB (raw) Lustre high-speed scratch
  • 160 TB (raw) NFS for home and workgroup directories

Workgroups have unlimited access to Lustre, plus:

  • Unlimited UD- and guest-user accounts with 20 GB home directory
  • Workgroup directory quotas start at 1 TB and scale in proportion to investment
Login nodes

Users connect to the cluster through two login nodes:

  • 2 x 18C Intel E5-2695 v4 (36 cores)
  • 128 GB DDR4 memory (8 x 16 GB)
  • 1 x 10 Gbps uplink to campus network, Internet
  • 1 x 100 Gbps Intel OmniPath cluster network
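
For example, users typically reach a login node over SSH from the campus network. A minimal sketch, assuming the conventional login hostname for the cluster (confirm the actual address in the IT documentation):

    ssh <your_udel_username>@caviness.hpc.udel.edu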

Stakeholder Purchases

Each option below lists its description, the number of items, and the cost:

Standard architecture (Items: 60, Cost: $5000)
  • 2 x 18C Intel E5-2695 v4 (36 cores)
  • 128 GB DDR4 memory (8 x 16 GB)
  • 960 GB SSD (local scratch, swap)
  • 1 x 100 Gbps Intel OmniPath cluster network

Memory upgrade
  • 256 GB DDR4 memory (8 x 32 GB): Items: 48, Cost: $1000+
  • 512 GB DDR4 memory (16 x 32 GB): Items: 6, Cost: $3500+

Coprocessors (Items: 6, Cost: $3500+)
  • 2 x nVidia P100 “Pascal” GPU

Other options (Items: 10, Cost: $7000+), subject to discussion with IT:
  • Additional workgroup storage quota
  • Accelerated local scratch storage (per-node NVMe)

What's in the name?

[Portrait of David L. Mills]

The Mills cluster was named in honor of David L. Mills, UD professor emeritus and a pioneer of the early Internet and its precursor networks. Mills was a professor in the University’s Department of Electrical and Computer Engineering from 1986 to 2008 and continues to teach and lead research sponsored by agencies such as the NASA Jet Propulsion Laboratory, the Defense Advanced Research Projects Agency, and the National Science Foundation.