==== Computing Resources ====
The CS Dept. deploys general purpose compute and GPU compute servers, as well as specialized GPU servers. We also have database, [[nx_lab|NX/Nomachine (remote Linux desktop)]], [[windows_server|Windows desktop server]], and Docker servers. This section describes the general use and research use servers (see other sections for database and other server information).
  
=== portal load balanced servers ===
  
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^
| portal[01-06] | 132 | Intel | 1 | 8 | 2 | 16 |
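As a sketch of how these nodes are typically reached, a user can connect with ''ssh'' to the load-balanced alias or to a specific node. The ''portal.cs.virginia.edu'' domain and the ''mst3k'' userid below are illustrative assumptions, not prescribed by this page; substitute your own CS account and your site's actual hostnames.

```shell
# Log in through the load-balanced alias, which picks one of the
# portal[01-06] nodes for you (hostname/userid are examples):
ssh mst3k@portal.cs.virginia.edu

# Or target a specific node directly, e.g. portal03:
ssh mst3k@portal03.cs.virginia.edu
```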
  
=== GPU servers ===
**All GPU, Memory, CPU, etc. counts are per node**.
  
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^ GPUs ^ GPU Type ^ GPU RAM (GB) ^
| gpusrv[01-08] | 256 | Intel | 1 | 10 | 2 | 20 | 4 | Nvidia RTX 2080Ti | 11 |
| gpusrv[09-16] | 512 | Intel | 2 | 10 | 2 | 40 | 4 | Nvidia RTX 4000   | 8  |
| gpusrv17      | 128 | Intel | 1 | 10 | 2 | 20 | 4 | Nvidia RTX 2080Ti | 11 |

=== Docker servers ===
The Docker server allows users to instantiate a Docker container without needing super-user (root) privileges. After logging in to the server via ''ssh'', the user runs //sudo /usr/bin/docker// to create a Docker container. Note that the //slurm[1-5]// nodes, available through the SLURM job scheduler, also allow general users to create Docker containers (see below).

**All GPU, Memory, CPU, etc. counts are per node**.

^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^
| docker01 | 256 | Intel | 2 | 10 | 2 | 40 |
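A minimal sketch of the workflow described above. The ''ubuntu:22.04'' image is only an example; the one fixed point, per this page, is that ''docker'' is invoked through ''sudo /usr/bin/docker''.

```shell
# Log in to the docker server first:
ssh docker01

# Start an interactive, throwaway container
# (--rm removes it on exit; the image name is an example):
sudo /usr/bin/docker run --rm -it ubuntu:22.04 /bin/bash

# When finished, check for and stop any containers left running:
sudo /usr/bin/docker ps
sudo /usr/bin/docker stop <container-id>
```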
  
=== Nodes controlled by the SLURM Job Scheduler ===
//See our main article on [[compute_slurm|Slurm]] for more information.//
  
These servers are available by submitting a job through the SLURM job scheduler. They are not available for direct logins via ''ssh''. However, users can log in to these servers without a job script by using the ''srun -i'' direct login command. See the SLURM section for more information.
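A hedged sketch of requesting such an interactive session with stock SLURM. The partition names match the queue table on this page, but the memory, time, and GPU values are illustrative assumptions; ''--pty'' is standard SLURM's way of allocating a pseudo-terminal for an interactive shell.

```shell
# Interactive shell on one node in the main queue
# (resource values are examples; adjust to your needs):
srun -p main --mem=8G --time=01:00:00 --pty bash -i

# Interactive shell on a GPU node, with one GPU allocated:
srun -p gpu --gres=gpu:1 --pty bash -i
```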
  
Per CS Dept. policy, servers are placed into the SLURM job scheduling queues and are available for general use. Also, per that policy, if a user in a research group that originally purchased the hardware requires exclusive use of it, they can be given a reservation for that exclusive use for a specified time. Otherwise, the systems are open for use by anyone with a CS account. This policy was approved by the CS Dept. Computing Committee, which is composed of CS faculty.
This policy allows servers to be used when the project group is not using them: instead of sitting idle while still consuming power and cooling, these systems can benefit other Dept. users.
  
**All GPU, Memory, CPU, etc. counts are per node. This list may not be up to date. Use the ''sinfo'' command for a current server inventory.**
    
^ Hostname ^ Mem (GB) ^ CPU ^ #CPUs ^ Cores ^ Threads ^ Total Threads ^ GPUs ^ GPU Type ^ GPU RAM (GB) ^ OS ^
| adriatic[01-06]  | 1024    | Intel | 2 | 8  | 2  | 32 | 4 | Nvidia RTX 4000   | 8  | Centos Linux 7 |
| affogato[11-15]  | 128     | Intel | 2 | 8  | 2  | 32 | 4 | Nvidia GTX1080Ti  | 11 | Centos Linux 7 |
| ai[01-06]        | 64      | Intel | 2 | 8  | 2  | 32 | 4 | Nvidia GTX1080Ti  | 11 | Centos Linux 7 |
| ai[07-08]        | 128     | Intel | 2 | 8  | 2  | 32 | 4 | Nvidia GTX1080Ti  | 11 | Centos Linux 7 |
| cheetah01        | 256     | AMD   | 2 | 8  | 2  | 32 | 4 | Nvidia A100       | 40 | Centos Linux 7 |
| cheetah[02-03]   | 1024(2) | Intel | 2 | 18 | 2  | 72 | 2 | Nvidia RTX 2080Ti | 11 | Centos Linux 7 |
| cortado[01-10]   | 512     | Intel | 2 | 12 | 2  | 48 | 0 | | | Centos Linux 7 |
| doppio[01-05]    | 128     | Intel | 2 | 16 | 2  | 64 | 0 | | | Centos Linux 7 |
| hydro            | 256     | Intel | 2 | 16 | 2  | 64 | 0 | | | Centos Linux 7 |
| lotus            | 256     | Intel | 2 | 20 | 2  | 80 | 8 | Nvidia RTX 6000   | 24 | Centos Linux 7 |
| lynx[01-04]      | 64      | Intel | 4 | 8  | 2  | 32 | 4 | Nvidia GTX1080Ti  | 11 | Centos Linux 7 |
| lynx[05-07]      | 64      | Intel | 4 | 8  | 2  | 32 | 4 | Nvidia P100       | 16 | Centos Linux 7 |
| lynx[08-09]      | 64      | Intel | 4 | 8  | 2  | 32 | 0 | | | Centos Linux 7 |
| lynx10           | 64      | Intel | 4 | 8  | 2  | 32 | 3 | Nvidia GTX1080    | 8  | Centos Linux 7 |
| lynx[11-12]      | 64      | Intel | 4 | 8  | 2  | 32 | 4 | Nvidia Titan X    | 12 | Ubuntu Server 22.04 |
| optane01         | 1024(1) | Intel | 2 | 16 | 2  | 64 | 0 | | | Centos Linux 7 |
| ristretto[01-04] | 128     | Intel | 2 | 6  | 1  | 12 | 8 | Nvidia GTX1080Ti  | 11 | Centos Linux 7 |
| sds[01-02]       | 512     | Intel | 2 | 10 | 2  | 40 | 4 | Nvidia RTX A4000  | 16 | Ubuntu Server 22.04 |
| slurm[1-5](3)    | 512     | Intel | 2 | 12 | 2  | 24 | 0 | | | Centos Linux 7 |
| titanx[01-03]    | 256     | Intel | 1 | 8  | 16 | 16 | 1 | Nvidia Titan X    | 12 | Ubuntu Server 22.04 |
| titanx[04-06]    | 64      | Intel | 1 | 6  | 12 | 12 | 1 | Nvidia Titan X    | 12 | Ubuntu Server 22.04 |
| pegasusboots     | 192     | Intel | 2 | 10 | 20 | 40 | 0 | | | Ubuntu Server 22.04 |
| heartpiece       | 160     | Intel | 2 | 10 | 20 | 40 | 0 | | | Ubuntu Server 22.04 |
| epona            | 64      | Intel | 1 | 4  | 8  | 8  | 0 | | | Ubuntu Server 22.04 |
  
(1) 512GB Intel Optane memory, 512GB DDR4 memory
  
(2) In addition to 1TB of DDR4 RAM, these servers also house a 900GB Optane NVMe SSD and a 1.6TB regular NVMe SSD.
  
^ Queue ^ Nodes ^
| main | cortado[01-10], doppio[01-05], hydro, optane01, slurm[1-5] |
| gpu | adriatic[01-06], affogato[11-15], ai[01-08], cheetah[01-03], lotus, lynx[01-12], ristretto[01-04], sds[01-02] |
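The queues above are SLURM partitions, so batch work is submitted against them by name. A minimal sketch, in which the ''job.sh'' filename and the resource values are illustrative assumptions:

```shell
# List partitions and node states for the current inventory:
sinfo

# Submit a batch script to the gpu queue, requesting one GPU:
sbatch -p gpu --gres=gpu:1 job.sh

# Check your queued and running jobs:
squeue -u $USER
```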
  
=== Group specific computing resources ===
Several groups in CS deploy servers that are used exclusively by that group. Approximately 40 servers are deployed in this fashion, ranging from traditional CPU servers to specialized servers containing GPU accelerators.

=== Docker Server ===

//See our main article on [[compute_docker|Docker]] for more information.//

The Docker server on hostname ''docker01'' is currently available to all department members. Docker containers can also be created on nodes //slurm[1-5]//.
  
  
  • compute_resources.txt
  • Last modified: 2022/09/23 19:28
  • (external edit)