Computing Resources

The CS Dept. deploys general purpose compute servers, GPU compute servers, and specialized GPU servers. We also have database, NX/NoMachine (graphical desktop), and Windows servers. This section describes the general use and research servers (see other sections for database and other server information).

portal load-balanced servers

The portal servers are general purpose servers into which anyone can log in, and they are the “jump-off” point for off-Grounds connections to the CS network. The servers form a load-balanced cluster and are accessed through ssh to portal.cs.virginia.edu. See: The portal cluster

Use these servers to code, compile, test, etc. However, they are not meant for long-running processes that tie up resources; computationally expensive processes should be run on the other servers listed below.
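
For example, an off-Grounds connection looks like the following (mst3k is a placeholder for your CS username):

  # connect to one of the portal nodes through the load balancer
  ssh mst3k@portal.cs.virginia.edu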

All GPU, Memory, CPU, etc. counts are per node.

Hostname          Memory (GB)  CPU Type  CPUs  Cores/CPU  Threads/Core  Total Cores
portal[01-04]     132          Intel     1     8          2             16

GPU servers

The gpusrv* servers are general purpose servers containing GPUs, into which anyone can log in (via ssh). They are intended for code development, testing, and short computations. Long-running computations are discouraged and are better suited to one of the GPU servers controlled by the job scheduler (see the next section).
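
For example, after logging in you can list a node's GPUs with nvidia-smi (a minimal sketch; the gpusrv01 hostname below assumes the same cs.virginia.edu domain used by portal):

  # log in to one of the GPU development servers and list its GPUs
  ssh mst3k@gpusrv01.cs.virginia.edu
  nvidia-smi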

All GPU, Memory, CPU, etc. counts are per node.

Hostname          Memory (GB)  CPU Type  CPUs  Cores/CPU  Threads/Core  Total Cores  GPUs  GPU Type
gpusrv[01-08]     256          Intel     2     10         2             20           4     Nvidia RTX 2080Ti

Nodes controlled by the SLURM Job Scheduler

See our main article on Slurm for more information.

These servers are available by submitting a job through the SLURM job scheduler.
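
As a minimal sketch, a batch job can be submitted with sbatch; the script name and resource requests below are illustrative only (see the Slurm article for the full set of options):

  #!/bin/bash
  # hello.slurm -- illustrative batch script; all values are examples
  #SBATCH --job-name=hello
  #SBATCH --ntasks=1
  #SBATCH --mem=4G
  #SBATCH --time=00:10:00
  hostname

Submit the script and check its status with:

  sbatch hello.slurm
  squeue -u mst3k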

Per CS Dept. policy, these servers are placed into the SLURM scheduling queues and are available for general use. If a user in the research group that originally purchased a piece of hardware requires exclusive use of it, that user can be given a reservation for a specified period of time; otherwise, the systems are open to anyone with a CS account. This policy was approved by the CS Dept. Computing Committee, which is made up of CS faculty.

This policy allows servers to be used when the owning group is not using them. Instead of sitting idle while still consuming power and cooling, these systems can benefit other Dept. users.

All GPU, Memory, CPU, etc. counts are per node.

Hostname          Memory (GB)  CPU Type  CPUs  Cores/CPU  Threads/Core  Total Cores  GPUs  GPU Type
affogato[01-10]   128          Intel     2     8          2             32           0
affogato[11-15]   128          Intel     2     8          2             32           4     Nvidia GTX1080Ti
ai[01-06]         64           Intel     2     8          2             32           4     Nvidia GTX1080Ti
cheetah01         256          AMD       2     8          2             32           4     Nvidia A100
cheetah[02-03]    1024(2)      Intel     2     18         2             72           2     Nvidia RTX 2080Ti
cortado[01-10]    512          Intel     2     12         2             48           0
doppio[01-05]     128          Intel     2     16         2             64           0
falcon[1-10]      128          Intel     2     6          2             24           0
hermes[1-4]       256          AMD       4     16         1             64           0
lynx[01-04]       64           Intel     4     8          2             32           4     Nvidia GTX1080Ti
lynx[05-07]       64           Intel     4     8          2             32           4     Nvidia P100
lynx[08-09]       64           Intel     4     8          2             32           3     ATI FirePro W9100
lynx[10]          64           Intel     4     8          2             32           0     Altera FPGA
lynx[11-12]       64           Intel     4     8          2             32           0
nibbler[1-4]      64           Intel     2     10         2             20           0
optane01          512(1)       Intel     2     16         2             64           0
ristretto[01-04]  128          Intel     2     6          1             12           8     Nvidia GTX1080Ti
slurm[1-5]        512          Intel     2     12         2             24           0
trillian[1-3]     256          AMD       4     8          2             64           0

(1) Intel Optane memory

(2) In addition to 1TB of DDR4 RAM, these servers also house a 900GB Optane NVMe SSD and a 1.6TB standard NVMe SSD.

Job Scheduler Queues

See our main article on Slurm for more information on using queues (“partitions”).

Queue  Nodes
main   cortado[01-10], doppio[01-05], falcon[1-10], granger[1-8], hermes[1-4], lynx[10-12], nibbler[1-4], optane01, slurm[1-5], trillian[1-3]
gpu    affogato[11-15], ai[01-06], cheetah[01-03], lynx[01-09], ristretto[01-04]
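
For example, a job can be directed to a particular partition with -p; on the gpu partition a GPU is typically also requested with --gres (an illustrative sketch, assuming GPUs are exposed through the standard gres mechanism):

  # run an interactive shell on a node in the gpu partition with one GPU
  srun -p gpu --gres=gpu:1 --pty bash -i

  # or select the main partition inside a batch script
  #SBATCH -p main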

Group-specific computing resources

Several groups in CS deploy servers that are used exclusively by the owning group. Approximately 40 servers are deployed in this fashion, ranging from traditional CPU servers to specialized servers containing GPU accelerators.
