==== Computing Resources ====
The CS Dept. deploys many servers. We have general purpose compute and GPU compute servers, as well as specialized GPU servers. We also have database servers, [[nx_lab|NX/Nomachine (graphical desktop)]] servers, and [[windows_server|Windows servers]]. This section describes the general use and research use servers; see other sections for database and other server information.

Per CS Dept. policy, CS Dept. servers that are not interactive login servers are placed into the SLURM job scheduling queues and are available for general use. Also, per that policy, if a user in a research group that originally purchased the hardware requires exclusive use of that hardware, they can be given a reservation for that exclusive use for a specified time. Otherwise, the systems are open for use by anyone with a CS account. This policy was approved by the CS Dept. Computing Committee, which comprises CS faculty.

This policy allows servers to be used when the purchasing project group is not using them. Instead of sitting idle while consuming power and cooling, these systems can benefit other Dept. users.
 + 
=== portal load balanced servers ===
The ''%%portal%%'' nodes are general purpose servers into which anyone can log in. They are available for general use. They are also the "jump off" point for off Grounds connections to the CS network. The servers are in a load balanced cluster, and are accessed through //ssh// to ''%%portal.cs.virginia.edu%%''.
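
For example, a minimal sketch of an off Grounds connection (the username ''%%mst3k%%'' is a placeholder; use your own CS computing ID):

<code bash>
# SSH to the load balanced portal cluster (replace mst3k with your CS username)
ssh mst3k@portal.cs.virginia.edu
</code>
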
Feel free to use these servers to code, compile, test, etc. However, these are //not// meant for long running processes that will tie up resources. Computationally expensive processes should be run on the other servers listed below.

**All GPU, Memory, CPU, etc. counts are per node**.
  
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^
| portal[01-04] | 132 | Intel | 1 | 8 | 2 | 16 |
  
=== GPU servers ===
The gpusrv* servers are general purpose, interactive login servers that are intended for code development and testing. Long running processes are discouraged, and are better suited to one of the GPU servers controlled by the job scheduler (see the next section).

**All GPU, Memory, CPU, etc. counts are per node**.
  
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^ GPUs ^ GPU Type ^
| gpusrv[01-08] | 256 | Intel | 2 | 10 | 2 | 20 | 4 | Nvidia RTX 2080Ti |
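
For interactive GPU testing, a sketch of a typical session (the hostname is one example from the table above, and the full domain name is assumed to follow the same pattern as portal):

<code bash>
# Log into one of the interactive GPU servers...
ssh mst3k@gpusrv01.cs.virginia.edu
# ...then check current GPU utilization before starting a test run
nvidia-smi
</code>
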
  
=== Nodes controlled by the SLURM Job Scheduler ===
  
//See our main article on [[compute_slurm|Slurm]] for more information.//
  
These servers are available by submitting a job through the SLURM job scheduler (a minimal submission example follows the table below).

**All GPU, Memory, CPU, etc. counts are per node**.
 + 
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^ GPUs ^ GPU Type ^
| hermes[1-4] | 256 | AMD | 4 | 16 | 1 | 64 | 0 |  |
| slurm[1-5] | 512 | Intel | 2 | 12 | 2 | 24 | 0 |  |
| nibbler[1-4] | 64 | Intel | 2 | 10 | 2 | 20 | 0 |  |
| trillian[1-3] | 256 | AMD | 4 | 8 | 2 | 64 | 0 |  |
| ai[01-06] | 64 | Intel | 2 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| lynx[01-04] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| lynx[05-07] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia P100 |
| cortado[01-10] | 512 | Intel | 2 | 12 | 2 | 48 | 0 |  |
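
As a usage sketch, a minimal batch script for these nodes might look like the following (the job name, resource values, and workload are placeholders, not site-specific requirements):

<code bash>
#!/bin/bash
#SBATCH --job-name=example        # placeholder job name
#SBATCH --partition=main          # one of the queues listed in the next section
#SBATCH --cpus-per-task=4         # illustrative resource request
#SBATCH --mem=16G
#SBATCH --time=01:00:00

# Replace with the actual workload
hostname
</code>

Save it as, e.g., ''%%myjob.slurm%%'' and submit it with ''%%sbatch myjob.slurm%%'', typically from one of the portal nodes; see the [[compute_slurm|Slurm]] article for details.
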
  
=== Job Scheduler Queues ===
  
//See our [[compute_slurm#partitions|main article on Slurm]] for more information on using queues ("partitions").//
  
^ Queue ^ Nodes ^
| main | hermes[1-4], artemis[1-3], slurm[1-5], nibbler[1-4], trillian[1-3], granger[1-6], granger[7-8], lynx[10-12], cortado[01-10] |
| gpu | ai0[1-6], lynx[01-09], affogato[11-15], ristretto[01-04] |
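
To see these queues and the state of their nodes, or to request a GPU node interactively, something like the following should work (the resource request is illustrative):

<code bash>
# List queues ("partitions") and the state of their nodes
sinfo

# Request an interactive shell on a gpu-queue node with one GPU
srun -p gpu --gres=gpu:1 --pty bash -i
</code>
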
  
  