The Caviness cluster, UD’s third Community Cluster, is a distributed-memory Linux cluster deployed in July 2018. It is based on a rolling-upgradeable model for the expansion and replacement of hardware over time. The first generation consists of 126 compute nodes (4536 cores, 24.6 TB memory). Each node is built from two Intel “Broadwell” 18-core processors in a dual-socket configuration, for 36 cores per node. An OmniPath network fabric supports high-speed communication and the Lustre filesystem (approximately 200 TiB of usable space), while Gigabit and 10-Gigabit Ethernet networks provide access to additional filesystems and the campus network. The cluster was purchased with a proposed five-year life for the first-generation hardware, putting its refresh in the April to June 2023 period.
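
As a quick arithmetic check, the per-node and aggregate core counts quoted above are consistent:

```shell
# Cross-check the generation 1 aggregate figures quoted above:
# 126 nodes x 36 cores per node should match the 4536 cores cited.
nodes=126
cores_per_node=36
echo "$((nodes * cores_per_node)) total cores"   # 4536 total cores
```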

The cluster was designed to pack more computing power into less physical space, use power more efficiently, and leverage reusable infrastructure for a longer overall lifespan. It uses Penguin Computing’s Tundra Extreme Scale (ES) design, which follows the specifications of the Open Compute Project.

Provided Infrastructure

Basic details
  • Installation in a secure data center
  • Racks, floor space, cooling and power
  • Five-year warranty on nodes
  • 2 x 10 Gbps uplink to campus network
  • 12 x 100 Gbps Intel OmniPath uplinks between racks

Aggregate storage across the cluster:

  • 320 TB (raw) Lustre high-speed scratch
  • 160 TB (raw) NFS for home and workgroup directories

Workgroups have unlimited access to Lustre, plus:

  • Unlimited UD- and guest-user accounts, each with a 20 GB home directory
  • Workgroup directory quotas start at 1 TB and scale in proportion to investment
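
To see how much of these quotas you are using, standard Linux and Lustre tools apply. The sketch below assumes typical mount points and tooling; your workgroup name and paths may differ:

```shell
# Check filesystem usage against the quotas described above.
# The workgroup name and Lustre mount point are assumptions; adjust
# them for your own account.
df -h "$HOME"                             # home directory (20 GB quota)
# For Lustre workgroup usage, the standard Lustre client tool is:
#   lfs quota -gh my_workgroup /lustre
```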

Login nodes

Users connect to the cluster through two login nodes:

  • 2 x 18C Intel E5-2695 v4 (36 cores)
  • 128 GB DDR4 memory (8 x 16 GB)
  • 1 x 10 Gbps uplink to campus network, Internet
  • 1 x 100 Gbps Intel OmniPath cluster network
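
Access to the login nodes is over SSH. The hostname below is an assumption based on UD’s usual cluster naming, so confirm it against your account-setup instructions before connecting:

```shell
# Construct the SSH command for reaching a Caviness login node.
# The hostname follows UD's cluster naming convention and is an
# assumption, not confirmed here; "your_udel_username" is a placeholder.
login_host="caviness.hpc.udel.edu"
printf 'ssh %s@%s\n' "your_udel_username" "$login_host"
```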

Stakeholder Purchases

Standard architecture

Generation 1 ($5000):

  • 2 x 18C Intel E5-2695 v4 (36 cores)
  • 128 GB DDR4 memory (8 x 16 GB)

Generation 2 and 2.1 ($6500):

  • 2 x 20C Intel Xeon Gold 6230 (40 cores)
  • 192 GB DDR4 memory (12 x 16 GB)

Generation 1, 2 and 2.1 (included in the base price):

  • 960 GB SSD (local scratch, swap)
  • 1 x 100 Gbps Intel OmniPath cluster network

Memory upgrade

Generation 1:

  • 256 GB DDR4 memory (8 x 32 GB): $1000+
  • 512 GB DDR4 memory (16 x 32 GB): $3500+

Generation 2 and 2.1:

  • 384 GB DDR4 memory (12 x 32 GB): $500+
  • 768 GB DDR4 memory (12 x 64 GB): $2500+
  • 1024 GB DDR4 memory (16 x 64 GB): $4500+

GPU options

Generation 1:

  • 2 x NVIDIA P100 “Pascal”: $3500+

Generation 2 and 2.1:

  • NVIDIA T4: $2000+
  • 2 x NVIDIA V100: $12500+
  • 4 x NVIDIA V100 + NVLink: $23500+
Other options

Subject to discussion with IT:

  • Additional workgroup storage quota
  • Accelerated local scratch storage (per-node NVMe)
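
As a worked example of the pricing above (all “+” figures are starting prices, so an actual quote may be higher):

```shell
# Minimum list price for one generation 2/2.1 node with a single
# NVIDIA T4 GPU, using the starting prices in the table above.
base=6500   # standard architecture, generation 2/2.1
t4=2000     # NVIDIA T4 starting price
echo "\$$((base + t4))+"   # $8500+
```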

What's in the name?

The cluster is named in honor of Jane Caviness, former director of Academic Computing Services at the University of Delaware. In the 1980s, Caviness led a ground-breaking expansion of UD’s computing resources and network infrastructure that laid the foundation for UD’s current research computing capabilities. After leaving UD, she joined the National Science Foundation (NSF) as program director for NSFNET. Her activity in the Association for Computing Machinery (ACM) and EDUCOM highlights her strong advocacy for collaboration in the research computing community.

The naming of UD’s third HPC community cluster continues the practice of naming our HPC clusters in honor of UD faculty and staff who have played a key role in the development of the internet: David Mills, David Farber, and now Jane Caviness.