Computing Resources
The CS Dept. deploys many servers. We have general purpose compute and GPU compute servers, as well as specialized GPU and FPGA servers. We also have database, NX/NoMachine (graphical desktop), and Windows servers. This section describes the general-use and research-use servers (see other sections for database and other server information).
Per CS Dept. policy, CS Dept. servers that are not interactive-login servers are placed into the SLURM job scheduling queues and made available for general use. Also per that policy, if a user in a research group that originally purchased the hardware requires exclusive use of it, they can be given a reservation for that exclusive use for a specified time. Otherwise, the systems are open for use by anyone with a CS account. This policy was approved by the CS Dept. Computing Committee, which is composed of CS faculty.
This policy allows servers to be used when the purchasing group is not using them. Instead of sitting idle while still consuming power and cooling, these systems can be put to use by other Dept. users.
General Purpose Nodes
The portal nodes are general purpose servers into which anyone can log in. They are the “jump off” point for off-Grounds connections to the CS network. The servers are in a load-balanced cluster and are accessed through ssh to portal.cs.virginia.edu.
Feel free to use these servers to code, compile, test, etc. However, they are not meant for long-running processes that tie up resources. Computationally expensive processes should be run on other nodes.
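For example, a minimal connection from off Grounds looks like the following; the username mst3k is a placeholder for your own CS computing ID:

```bash
# Connect to the load-balanced portal cluster over ssh.
ssh mst3k@portal.cs.virginia.edu
```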
All GPU, Memory, CPU, etc. counts are per node.
Hostname | Memory (GB) | CPU Type | CPUs | Cores/CPU | Threads/Core | Total Cores |
---|---|---|---|---|---|---|
portal[01-04] | 132 | Intel | 1 | 8 | 2 | 16 |
gpusrv nodes
The gpusrv* servers are general purpose, interactive login nodes intended for code development and testing. Long-running processes are discouraged; they are better suited to one of the SLURM-controlled GPU nodes.
All GPU, Memory, CPU, etc. counts are per node.
Hostname | Memory (GB) | CPU Type | CPUs | Cores/CPU | Threads/Core | Total Cores | GPUs | GPU Type |
---|---|---|---|---|---|---|---|---|
gpusrv[01-08] | 256 | Intel | 2 | 10 | 2 | 20 | 4 | Nvidia RTX 2080Ti |
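As a sketch, an interactive GPU test session might look like the following; the exact hostname form (gpusrv01) is an assumption based on the table above, and mst3k is a placeholder computing ID:

```bash
# Log in to one of the gpusrv nodes (hostname form assumed from the table above).
ssh mst3k@gpusrv01.cs.virginia.edu

# List the node's GPUs and their current utilization before starting a test run.
nvidia-smi
```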
Nodes controlled by the SLURM Job Scheduler
See our main article on Slurm for more information.
These servers are available by submitting a job through the SLURM job scheduler. All GPU, Memory, CPU, etc. counts are per node.
Hostname | Memory (GB) | CPU Type | CPUs | Cores/CPU | Threads/Core | Total Cores | GPUs | GPU Type |
---|---|---|---|---|---|---|---|---|
hermes[1-4] | 256 | AMD | 4 | 16 | 1 | 64 | 0 | |
slurm[1-5] | 512 | Intel | 2 | 12 | 2 | 24 | 0 | |
nibbler[1-4] | 64 | Intel | 2 | 10 | 2 | 20 | 0 | |
trillian[1-3] | 256 | AMD | 4 | 8 | 2 | 64 | 0 | |
granger[1-6] | 64 | Intel | 2 | 10 | 2 | 40 | 0 | |
granger[7-8] | 64 | Intel | 2 | 2 | 4 | 16 | 0 | |
ai[01-06] | 64 | Intel | 2 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
lynx[01-04] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
lynx[05-07] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia P100 |
lynx[08-09] | 64 | Intel | 4 | 8 | 2 | 32 | 3 | ATI FirePro W9100 |
lynx[10] | 64 | Intel | 4 | 8 | 2 | 32 | 0 | Altera FPGA |
lynx[11-12] | 64 | Intel | 4 | 8 | 2 | 32 | 0 | |
ristretto[01-04] | 128 | Intel | 2 | 6 | 1 | 12 | 8 | Nvidia GTX1080Ti |
affogato[01-15] | 128 | Intel | 2 | 8 | 2 | 32 | 0 | |
affogato[11-15] | 128 | Intel | 2 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
cortado[01-10] | 512 | Intel | 2 | 12 | 2 | 48 | 0 | |
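As a minimal sketch, an interactive session on one of these nodes can be requested through SLURM as shown below; the partition name comes from the queue table in the next section, and the resource values are illustrative only:

```bash
# Request an interactive shell on a node in the main partition.
# CPU, memory, and time values are placeholders; size them to your job.
srun --partition=main --cpus-per-task=4 --mem=16G --time=01:00:00 --pty bash -i
```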
SLURM Queues
See our main article on Slurm for more information on using partitions.
Queue | Nodes |
---|---|
main | hermes[1-4], artemis[1-3], slurm[1-5], nibbler[1-4], trillian[1-3], granger[1-6], granger[7-8], lynx[08-12], cortado[01-10] |
share | lynx[01-12] |
gpu | artemis[4-7], ai[01-06], lynx[01-07], affogato[11-15], ristretto[01-04] |
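For batch work on the GPU nodes, a job script along the following lines can be submitted with sbatch. This is a sketch only: the partition name gpu is taken from the table above, and the job name, output path, and resource requests are placeholders to adjust for your own workload.

```bash
#!/bin/bash
#SBATCH --job-name=gpu-example        # placeholder job name
#SBATCH --partition=gpu               # GPU queue from the table above
#SBATCH --gres=gpu:1                  # request one GPU on the assigned node
#SBATCH --cpus-per-task=4             # placeholder CPU count
#SBATCH --mem=32G                     # placeholder memory request
#SBATCH --time=04:00:00               # placeholder time limit
#SBATCH --output=gpu-example-%j.out   # %j expands to the job ID

# Show which GPU was allocated, then run the actual workload in its place.
nvidia-smi
```

Submit the script with `sbatch gpu-example.slurm` and check its status with `squeue -u $USER`.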