====== Computing Resources ======

====== General Purpose Nodes ======
  
The ''%%portal%%'' nodes are general purpose servers that anyone can log into. They are the "jump off" point for off Grounds connections to the CS network. The servers are in a load-balanced cluster and are accessed through //ssh// to ''%%portal.cs.virginia.edu%%''.
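For example, to connect from off Grounds (''mst3k'' is only a placeholder; use your own computing ID):

<code bash>
# Connect to a portal node; the load balancer places you on one of portal01-04
ssh mst3k@portal.cs.virginia.edu
</code>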
  
Feel free to use these servers to code, compile, test, etc. However, these are //not// meant for long-running processes that will tie up resources. Computationally expensive processes should be run on the SLURM nodes. **All GPU, Memory, CPU, etc. counts are per node**.
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^
| portal[01-04] | 132 | Intel | 2 | 8 | 2 | 16 |
  
====== gpusrv nodes ======
The gpusrv* servers are general purpose, interactive login nodes intended for code development and testing. Long-running processes are discouraged; they are better suited to one of the SLURM-controlled GPU nodes. **All GPU, Memory, CPU, etc. counts are per node**.
  
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^ GPUs ^ GPU Type ^
| gpusrv[01-08] | 256 | Intel | 2 | 10 | 2 | 20 | 4 | Nvidia RTX 2080Ti |
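For example, after logging in you can confirm the GPUs are visible (a minimal sketch; ''gpusrv01'' stands in for any of the eight nodes and assumes the standard Nvidia driver tools are installed):

<code bash>
# From the CS network, log into one of the gpusrv machines
ssh gpusrv01
# List the node's GPUs and their current utilization
nvidia-smi
</code>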
  
====== Nodes controlled by the SLURM Job Scheduler ======
  
//See our main article on [[compute_slurm|Slurm]] for more information.//
  
These servers are available by submitting a job through the SLURM job scheduler. **All GPU, Memory, CPU, etc. counts are per node**.

^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^ GPUs ^ GPU Type ^
| hermes[1-4] | 256 | AMD | 4 | 16 | 1 | 64 | 0 | |
| artemis[1-3] | 128 | AMD | 2 | 16 | 1 | 32 | 0 | |
| artemis[4-7] | 128 | AMD | 2 | 16 | 1 | 32 | 3 | Nvidia Tesla K20c |
| slurm[1-5] | 512 | Intel | 1 | 12 | 2 | 24 | 0 | |
| nibbler[1-4] | 64 | Intel | 1 | 10 | 2 | 20 | 0 | |
| trillian[1-3] | 256 | AMD | 4 | 8 | 2 | 64 | 0 | |
| granger[1-6] | 64 | Intel | 2 | 10 | 2 | 40 | 0 | |
| granger[7-8] | 64 | Intel | 2 | 4 | 2 | 16 | 0 | |
| ai[01-06] | 64 | Intel | 2 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| lynx[01-04] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| lynx[05-07] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia P100 |
| lynx[08-09] | 64 | Intel | 4 | 8 | 2 | 32 | 3 | ATI FirePro W9100 |
| lynx[10] | 64 | Intel | 4 | 8 | 2 | 32 | 0 | Altera FPGA |
| lynx[11-12] | 64 | Intel | 4 | 8 | 2 | 32 | 0 | |
| ristretto[01-04] | 128 | Intel | 2 | 6 | 1 | 12 | 8 | Nvidia GTX1080Ti |
| affogato[01-10] | 128 | Intel | 2 | 8 | 2 | 32 | 0 | |
| affogato[11-15] | 128 | Intel | 2 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| cortado[01-10] | 512 | Intel | 2 | 12 | 2 | 48 | 0 | |
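As a minimal sketch of a batch submission (the file name ''example.slurm'', the program ''./my_program'', and all resource values are illustrative; see the [[compute_slurm|Slurm]] article for the options actually in use here):

<code bash>
#!/bin/bash
# example.slurm -- illustrative batch script; adjust the values to your job
#SBATCH --job-name=myjob      # name shown by squeue
#SBATCH --partition=main      # queue to submit to (see SLURM Queues below)
#SBATCH --ntasks=1            # a single task
#SBATCH --mem=4G              # memory requested on the node
#SBATCH --time=01:00:00       # wall-clock limit of one hour

./my_program                  # placeholder for the real workload
</code>

The script is submitted with ''sbatch example.slurm'' and can be monitored with ''squeue''.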
  
====== SLURM Queues ======
  
^ Queue ^ Nodes ^
| main | hermes[1-4], artemis[1-3], slurm[1-5], nibbler[1-4], trillian[1-3], granger[1-6], granger[7-8], lynx[08-12], cortado[01-10] |
| share | lynx[01-12] |
| gpu | artemis[4-7], ai0[1-6], lynx[01-07], affogato[11-15], ristretto[01-04] |
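Jobs are directed to a queue with SLURM's partition option. For instance, an interactive session on a GPU node might be requested as follows (a sketch; the queue name comes from the table above and the GPU count is illustrative):

<code bash>
# Request an interactive shell on a gpu-queue node with one GPU allocated
srun -p gpu --gres=gpu:1 --pty bash -i -l
</code>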
  
  