====== Computing Resources ======

The CS Dept. deploys many servers. We have general purpose compute and GPU compute servers, as well as specialized GPU servers. We also have database, NX/Nomachine (graphical desktop), and Windows servers. This section describes the general use and research use servers (see other sections for database and other server information).

Per CS Dept. policy, CS Dept. servers that are not interactive login servers are placed into the SLURM job scheduling queues and are available for general use. Also, per that policy, if a user in the research group that originally purchased the hardware requires exclusive use of that hardware, they can be given a reservation for that exclusive use for a specified time. Otherwise, the systems are open for use by anyone with a CS account. This policy was approved by the CS Dept. Computing Committee, which is composed of CS faculty.

====== General Purpose Nodes ======

====== portal load balanced nodes ======

The ''%%portal%%'' nodes are general purpose servers into which anyone can log in. They are the "jump-off" point for off-Grounds connections to the CS network. The servers are in a load-balanced cluster and are accessed through //ssh// to ''%%portal.cs.virginia.edu%%''.
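
For example, an off-Grounds connection might look like the sketch below; ''%%mst3k%%'' is a placeholder for your own computing ID, not a real account:

<code bash>
# Connect to the load balanced portal cluster (replace mst3k with your computing ID)
ssh mst3k@portal.cs.virginia.edu
</code>
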
//See our main article on [[compute_slurm|Slurm]] for more information.//

These servers are available by submitting a job through the SLURM job scheduler.
**All GPU, Memory, CPU, etc. counts are per node**.

^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Cores ^ GPUs ^ GPU Type ^
| hermes[1-4] | 256 | AMD | 4 | 16 | 1 | 64 | 0 | |