====== Computing Resources ======
Per CS Dept. policy, CS Dept. servers are placed into the SLURM job scheduling queues and are available for general use. Also per that policy, if a user in the research group that originally purchased the hardware requires exclusive use of it, they can be given a reservation for that exclusive use for a specified time. Otherwise, the systems are open for use by anyone with a CS account. This policy was approved by the CS Dept. Computing Committee, which is composed of CS Faculty.
This policy allows servers to be used when the owning project group is not using them. Instead of sitting idle and consuming power and cooling, these systems can benefit other Dept. users.
====== General Purpose Nodes ======
  
The ''%%portal%%'' nodes are general purpose servers that anyone can log into. They are the "jump off" point for off Grounds connections to the CS network. The servers are in a load balanced cluster, and are accessed through //ssh// to ''%%portal.cs.virginia.edu%%''.
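
As a quick sketch (the username ''mst3k'' below is only a placeholder for your own CS account name), the cluster can be reached with any standard //ssh// client:
<code bash>
# Log in to the load-balanced portal cluster; replace "mst3k" with your CS username
ssh mst3k@portal.cs.virginia.edu
</code>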
  
Feel free to use these servers to code, compile, test, etc. However, these are //not// meant for long running processes that will tie up resources. Computationally expensive processes should be run on other nodes. **All GPU, Memory, CPU, etc. counts are per node**.

^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^
| portal[01-04] | 132 | Intel | 1 | 8 | 2 | 16 |
  
====== gpusrv nodes ======
The gpusrv* servers are general purpose, interactive login nodes intended for code development and testing. Long-running processes are discouraged; they are better suited to one of the SLURM-controlled GPU nodes. **All GPU, Memory, CPU, etc. counts are per node**.
  
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^ GPUs ^ GPU Type ^
| gpusrv[01-08] | 256 | Intel | 2 | 10 | 2 | 20 | 4 | Nvidia RTX 2080Ti |
  
====== Nodes controlled by the SLURM Job Scheduler ======
  
//See our main article on [[compute_slurm|Slurm]] for more information.//
  
These servers are available by submitting a job through the SLURM job scheduler. **All GPU, Memory, CPU, etc. counts are per node**.
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^ GPUs ^ GPU Type ^
| hermes[1-4] | 256 | AMD | 4 | 16 | 1 | 64 | 0 | |
| slurm[1-5] | 512 | Intel | 2 | 12 | 2 | 24 | 0 | |
| nibbler[1-4] | 64 | Intel | 2 | 10 | 2 | 20 | 0 | |
| granger[1-6] | 64 | Intel | 2 | 10 | 2 | 40 | 0 | |
| granger[7-8] | 64 | Intel | 2 | 2 | 4 | 16 | 0 | |
| ai[01-06] | 64 | Intel | 2 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| lynx[01-04] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| lynx[05-07] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia P100 |
| lynx[08-09] | 64 | Intel | 4 | 8 | 2 | 32 | 3 | ATI FirePro W9100 |
| lynx[10] | 64 | Intel | 4 | 8 | 2 | 32 | 0 | Altera FPGA |
| lynx[11-12] | 64 | Intel | 4 | 8 | 2 | 32 | 0 | |
| ristretto[01-04] | 128 | Intel | 2 | 6 | 1 | 12 | 8 | Nvidia GTX1080Ti |
| affogato[01-10] | 128 | Intel | 2 | 8 | 2 | 32 | 0 | |
| affogato[11-15] | 128 | Intel | 2 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| cortado[01-10] | 512 | Intel | 2 | 12 | 2 | 48 | 0 | |
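
As a minimal sketch of how a job reaches these nodes (the script name, program, and resource numbers below are placeholders, not site defaults), a batch job is described in a script and handed to the scheduler with ''sbatch'':
<code bash>
#!/bin/bash
# example_job.sh -- minimal SLURM batch script with placeholder values
#SBATCH --job-name=example      # name shown in the queue
#SBATCH --nodes=1               # run on a single node
#SBATCH --ntasks=1              # one task
#SBATCH --cpus-per-task=4       # four cores for that task
#SBATCH --mem=16G               # 16 GB of memory on the node
#SBATCH --time=01:00:00         # one hour wall-clock limit

./my_program                    # placeholder for your own executable
</code>
Submit it with ''sbatch example_job.sh'' and check on it with ''squeue -u $USER''; see the [[compute_slurm|Slurm]] article for the full workflow.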
  
====== SLURM Queues ======
  
^ Queue ^ Nodes ^
| main | hermes[1-4], artemis[1-3], slurm[1-5], nibbler[1-4], trillian[1-3], granger[1-6], granger[7-8], lynx[08-12], cortado[01-10] |
| share | lynx[01-12] |
| gpu | artemis[4-7], ai0[1-6], lynx[01-07], affogato[11-15], ristretto[01-04] |
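
For example (assuming your account has access to the queue; the resource counts are placeholders), an interactive session with one GPU can be requested from the ''gpu'' queue with ''srun'':
<code bash>
# Ask the "gpu" queue for one GPU and an interactive shell on whichever node is free
srun -p gpu --gres=gpu:1 --mem=8G --pty bash
</code>
''sinfo'' lists the queues (partitions) and the current state of their nodes.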
  
  