 ====== Computing Resources ======
 The CS Dept. deploys many servers. We have general purpose compute and GPU compute servers, as well as specialized GPU servers. We also have database, NX/NoMachine (graphical desktop), and Windows servers. This section describes the general-use and research-use servers (see other sections for database and other server information).
  
 Per CS Dept. policy, non-interactive-login CS Dept. servers are placed into the SLURM job scheduling queues and are available for general use. Also per that policy, if a user in a research group that originally purchased the hardware requires exclusive use of that hardware, they can be given a reservation for that exclusive use for a specified time. Otherwise, the systems are open for use by anyone with a CS account. This policy was approved by the CS Dept. Computing Committee, which is composed of CS Faculty.
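 Work on the SLURM-scheduled nodes is submitted as batch jobs. A minimal job-script sketch (the job name, partition name ''%%main%%'', and time limit below are illustrative assumptions, not necessarily the department's actual queue names):

<code bash>
#!/bin/bash
#SBATCH --job-name=example       # illustrative job name
#SBATCH --partition=main         # assumed partition; list the real ones with: sinfo
#SBATCH --time=00:10:00          # ten-minute wall-clock limit
#SBATCH --output=example-%j.out  # %j expands to the SLURM job ID

hostname    # prints the node SLURM allocated to the job
</code>

 Submit the script with ''%%sbatch job.sh%%'' and check its status with ''%%squeue -u $USER%%''.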
 ====== General Purpose Nodes ======
  
 ====== portal load balanced nodes ======
 The ''%%portal%%'' nodes are general purpose servers into which anyone can log in. They are the "jump off" point for off-Grounds connections to the CS network. The servers are in a load-balanced cluster and are accessed through //ssh// to ''%%portal.cs.virginia.edu%%''.
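 A sketch of an SSH client configuration (''%%~/.ssh/config%%'') for reaching ''%%portal%%''; the username ''%%mst3k%%'' is a placeholder assumption, not a real account:

<code>
# ~/.ssh/config -- "portal" becomes shorthand for the CS jump-off host
Host portal
    HostName portal.cs.virginia.edu
    User mst3k        # replace with your own CS account name
</code>

 With this in place, ''%%ssh portal%%'' opens a session on one of the load-balanced nodes.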
  
  • compute_resources.txt
  • Last modified: 2021/02/16 20:52
  • by pgh5a