====== General Purpose Nodes ======
  
The ''%%power%%'' and ''%%portal%%'' nodes are meant to be used as general-purpose systems that anyone can log into. Feel free to use these servers to code, compile, test, etc.; however, these are //not// meant for running jobs that will tie up resources. Computationally expensive jobs should be run on the SLURM nodes.
  
^ Hostname ^ Node Count ^ GPUs ^ Memory (GB) ^ CPU Type ^ CPUs ^ Sockets ^ Cores/Socket ^ Threads/Core ^
| power[1-6] | 6 | 0 | 96 | AMD | 16 | 2 | 8 | 2 |
| portal[01-04] | 4 | 0 | 132 | Intel | 16 | 2 | 8 | 2 |
  
====== gpusrv Nodes ======

The gpusrv* nodes are general-purpose, interactive login nodes that contain a variety of GPU cards and are intended for code development and testing. Long-running processes are discouraged and are better suited to one of the SLURM-controlled GPU nodes.
  
^ Hostname ^ GPUs ^ Memory (GB) ^ CPU Type ^ CPUs ^ Sockets ^ Cores/Socket ^ Threads/Core ^
| gpusrv01 | 2 | 32 | Intel | 12 | 1 | 6 | 2 |
| gpusrv02 | 2 | 64 | Intel | 12 | 1 | 6 | 2 |
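Once logged into one of these nodes, a quick way to see which GPU cards are present is `nvidia-smi` (assuming the NVIDIA driver is installed on the node, which this page does not state explicitly):

```shell
# List the installed GPUs with their current utilization and memory use
nvidia-smi

# Print just the card model names, one per line
nvidia-smi --query-gpu=name --format=csv,noheader
```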
====== SLURM Nodes ======
  
//See our main article on [[compute_slurm|Slurm]] for more information.//
  
The following nodes are available via SLURM.

^ Hostname ^ Node Count ^ GPUs ^ Memory (GB) ^ CPU Type ^ CPUs ^ Sockets ^ Cores/Socket ^ Threads/Core ^ GPU Model ^
| hermes[1-4] | 4 | 0 | 256 | AMD | 64 | 4 | 8 | 2 | |
| artemis[1-3] | 3 | 0 | 128 | AMD | 32 | 2 | 8 | 2 | |
| artemis[4-7] | 4 | 1 | 128 | AMD | 32 | 2 | 8 | 2 | |
| slurm[1-5] | 5 | 0 | 512 | Intel | 24 | 1 | 12 | 2 | |
| nibbler[1-4] | 4 | 0 | 64 | Intel | 20 | 1 | 10 | 2 | |
| granger[1-6] | 6 | 0 | 64 | Intel | 40 | 2 | 10 | 2 | |
| granger[7-8] | 2 | 0 | 64 | Intel | 16 | 2 | 4 | 2 | |
| ai0[1-6] | 6 | 4 | 64 | Intel | 32 | 2 | 8 | 2 | GTX1080Ti |
| lynx[01-07] | 7 | 4 | 64 | Intel | 32 | 4 | 8 | 2 | P100 |
| lynx[08-12] | 5 | 0 | 64 | Intel | 32 | 4 | 8 | 2 | |
| ristretto[01-04] | 4 | 8 | 128 | Intel | 12 | 2 | 6 | 1 | GTX1080Ti |
| affogato[01-15] | 15 | 0 | 128 | Intel | 16 | 2 | 8 | 2 | |
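As a minimal sketch of how a job might be sent to one of these nodes (the script name, program, and resource values are illustrative assumptions, not taken from this page), a SLURM batch script looks like:

```shell
#!/bin/bash
#SBATCH --job-name=example       # name shown in squeue (illustrative)
#SBATCH --ntasks=1               # run a single task
#SBATCH --cpus-per-task=4        # request four CPU cores
#SBATCH --mem=8G                 # request 8 GB of memory
#SBATCH --time=01:00:00          # one-hour wall-clock limit

# Replace with your actual program (hypothetical placeholder)
./my_program
```

Submit the script with `sbatch job.sh` and check its status with `squeue -u $USER`.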
====== SLURM Queues ======

//See our [[compute_slurm#partitions|main article on Slurm]] for more information on using partitions.//

^ Queue ^ Nodes ^
| main | hermes[1-4], artemis[1-3], slurm[1-5], nibbler[1-4], trillian[1-3], granger[1-6], granger[7-8], lynx[08-12] |
| intel | artemis7, slurm[1-5], granger[1-6], granger[7-8], nibbler[1-4], ai0[1-6], ristretto[01-04], lynx[01-12] |
| amd | hermes[1-4], artemis[1-6], trillian[1-3] |
| share | lynx[01-12] |
| gpu | artemis[4-7], ai0[1-6], lynx[01-07], ristretto[01-04] |
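To target one of these queues, pass the partition name with `-p`. A hedged example (the `--gres=gpu:1` syntax assumes GPUs are configured as generic resources on this cluster, which the page does not confirm):

```shell
# Interactive shell on a node in the "gpu" partition, with one GPU allocated
srun -p gpu --gres=gpu:1 --pty bash

# Batch job submitted to the "main" partition
sbatch -p main job.sh
```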
• compute_resources
• Last modified: 2019/04/03 21:27
• by ktm5j