====== Computing Resources ======
Per CS Dept. policy, CS Dept. servers are placed into the SLURM job scheduling queues and made available for general use. If a user in the research group that originally purchased the hardware requires exclusive use of it, that group can be given a reservation for a specified period; otherwise, the systems are open to anyone with a CS account. This policy was approved by the CS Dept. Computing Committee, which is composed of CS faculty.

This policy allows servers to be used when the owning project group is not using them: instead of sitting idle while consuming power and cooling, these systems can benefit other department users.
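
As a rough sketch of how such a reservation is used from SLURM (the reservation name ''%%my_group_resv%%'' and the script name below are placeholders, not actual names):

<code bash>
# List the reservations currently defined in SLURM
scontrol show reservation

# Submit a batch job against your group's reservation (placeholder name)
sbatch --reservation=my_group_resv job.sh
</code>
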
====== General Purpose Nodes ======
  
The ''%%portal%%'' nodes are general-purpose servers that anyone can log into. They are the "jump off" point for off-Grounds connections to the CS network. The servers are in a load-balanced cluster and are accessed through //ssh// to ''%%portal.cs.virginia.edu%%''.
  
Feel free to use these servers to code, compile, test, etc. However, these are //not// meant for long-running processes that will tie up resources. Computationally expensive processes should be run on other nodes. **All GPU, Memory, CPU, etc. counts are per node**.
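
For example, a typical pattern is to //ssh// to ''%%portal%%'' and hand heavier work off to the job scheduler (the username ''%%mst3k%%'' and the script name below are placeholders):

<code bash>
# Log into the load-balanced portal cluster (replace mst3k with your CS username)
ssh mst3k@portal.cs.virginia.edu

# From portal, check which nodes are available and submit compute-heavy work to SLURM
sinfo
sbatch my_job.sh
</code>
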
  
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^ GPUs ^ GPU Type ^
| hermes[1-4] | 256 | AMD | 4 | 16 | 1 | 64 | 0 | |
| slurm[1-5] | 512 | Intel | 2 | 12 | 2 | 24 | 0 | |
| nibbler[1-4] | 64 | Intel | 2 | 10 | 2 | 20 | 0 | |
| granger[1-6] | 64 | Intel | 2 | 10 | 2 | 40 | 0 | |
| granger[7-8] | 64 | Intel | 2 | 2 | 4 | 16 | 0 | |
| ai[01-06] | 64 | Intel | 2 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| lynx[01-04] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia GTX1080Ti |
| lynx[05-07] | 64 | Intel | 4 | 8 | 2 | 32 | 4 | Nvidia P100 |
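
As a minimal sketch of requesting one of the GPUs listed above through SLURM (standard ''%%--gres%%'' syntax; any partition or node names are site-specific and not assumed here):

<code bash>
# Ask SLURM for an interactive shell with one GPU allocated
srun --gres=gpu:1 --pty bash -i

# Once on the node, confirm which GPU was assigned
nvidia-smi
</code>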