===== CS Computing Resources =====
The CS Dept. deploys general purpose compute and GPU compute servers, as well as specialized GPU servers, all running Linux. We also have database, [[nx_lab|NX/NoMachine remote desktop]], and [[windows_server|Windows remote desktop]] servers.

----

===== General Purpose Servers =====
General purpose servers are servers that can be accessed directly via SSH logins with CS credentials.
+ | |||
+ | ==== Load Balanced Servers (portal) ==== | ||
+ | The //portal// servers are general purpose servers, running Linux, into which anyone can login. They are available for general use. They are also the "jump off" point for off Grounds connections to the CS network. The servers are in a load balanced cluster, and are accessed through //ssh// to '' | ||
+ | |||
+ | Use these servers to code, compile, test, etc.. However these are //not// meant for long running processes that will use excessive resources. | ||
+ | |||
+ | //(all GPU, Memory, CPU, etc. counts are per node)//. | ||
^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/Core ^ Total Threads ^
| portal[01-12] | 132-256 | Intel | 1 | 8 | 2 | 16 |
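
A minimal login sketch is below; the username and the fully qualified hostname are placeholders, so substitute your CS username and the ''portal'' address published by CS systems staff.

<code bash>
# Hypothetical example: replace "username" and the hostname placeholder
# with your CS username and the department's published portal address.
ssh username@portal.cs.example.edu

# The load balancer drops you onto one of the portal[01-12] nodes;
# "hostname" shows which one you landed on.
hostname
</code>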
+ | |||
+ | ==== GPU servers ==== | ||
+ | The gpusrv* servers are general purpose servers, running Linux, that contain GPUs into which anyone can login (via ' | ||
+ | |||
+ | //(all GPU, Memory, CPU, etc. counts are per node)//. | ||
+ | |||
+ | ^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/ | ||
+ | | gpusrv[01-02, | ||
+ | | gpusrv03 | 256 | Intel | 1 | 10 | 2 | 20 | 3 | Nvidia RTX 2080Ti | 11 | | ||
+ | | gpusrv[09-16] | 512 | Intel | 2 | 10 | 2 | 40 | 4 | Nvidia RTX 4000 | 8 | | ||
+ | | gpusrv[17-18] | 128 | Intel | 1 | 10 | 2 | 20 | 4 | Nvidia RTX 2080Ti | 11 | | ||
+ | | gpusrv19 | 128 | Intel | 2 | 20 | 2 | 80 | 4 | Nvidia RTX 2080Ti | 11 | | ||
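
Before starting work on a ''gpusrv'' node, it is worth checking whether its GPUs are already in use. A minimal sketch is below; the node name is only an example, and ''nvidia-smi'' is the standard NVIDIA utility shipped with the GPU driver.

<code bash>
# Hypothetical example: log into one of the GPU servers listed above.
ssh username@gpusrv09

# List the GPUs, their memory usage, and any running compute processes.
nvidia-smi
</code>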
+ | |||
+ | |||
+ | ==== Apptainer servers ==== | ||
+ | The Apptainer servers, running Linux, allow users to instantiate and test containers. Once a user uses ' | ||
+ | |||
+ | ^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/ | ||
+ | | apptainer01 | 256 | Intel | 2 | 10 | 2 | 40 | | ||
+ | | apptainer02 | 64 | Intel | 1 | 4 | 2 | 8 | | ||
+ | | apptainer03 | 20 | Intel | 1 | 2 | 2 | 4 | | ||
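
A minimal container sketch is below, assuming a standard Apptainer installation on these servers; the image is just a public example pulled from Docker Hub.

<code bash>
# Hypothetical example on one of the apptainer servers:
# pull a public image into a local SIF file...
apptainer pull docker://ubuntu:22.04

# ...then run a command inside the resulting container.
apptainer exec ubuntu_22.04.sif cat /etc/os-release
</code>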
+ | |||
+ | |||
+ | |||
+ | ==== ARM Architecture Servers ==== | ||
+ | |||
+ | These are servers that have the ARM64 Architecture and support direct SSH logins. | ||
+ | |||
+ | ^ Hostname ^ Memory (GB) ^ CPU Type ^ CPUs ^ Cores/CPU ^ Threads/ | ||
+ | | arm01 | 64 | aarch64 | 1 | 64 | 1 | 64 | | ||
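
To confirm that you are on the ARM node rather than one of the x86 servers, a quick check is sketched below; the node name matches the table above.

<code bash>
# Hypothetical example: log into the ARM server and verify the architecture.
ssh username@arm01

# Prints the machine hardware name; "aarch64" indicates ARM64,
# while the x86 general purpose servers report "x86_64".
uname -m
</code>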
+ | |||
+ | |||
+ | ==== Group specific computing resources ==== | ||
+ | Several groups in CS deploy servers that are used exclusively by that group. Approximately 60 servers are deployed in this fashion, ranging from traditional CPU servers to specialized servers containing GPU accelerators. | ||
+ | |||
+ | ---- | ||
+ | |||
+ | |||
+ | ===== Remote Desktop ===== | ||
+ | ==== Linux ==== | ||
+ | The Department provides graphical linux desktop sessions using an application called “NoMachine”. This app runs on Windows, Mac, or Linux and allows you to access our Linux servers from laptops or desktops. You get a full Linux desktop environment, | ||
+ | |||
+ | Please see our article about using NoMachine: ([[nx_lab|NX/ | ||
+ | |||
+ | ==== Windows ==== | ||
+ | The Department provides general purpose Windows server(s) that allow for multiple simultaneous **Remote Desktop (RDP)** sessions. Users can create unique RDP desktop sessions on this server and run Windows applications. This server is useful for students or faculty who need to run a Windows application but do not have a PC desktop or laptop. | ||
+ | |||
+ | Please see our article about using RDP for these servers: ([[windows_server|Windows Desktop Server]]). | ||
+ | |||
+ | ---- | ||
+ | |||
+ | ===== Nodes Controlled by the SLURM Job Scheduler ===== | ||
+ | |||
Servers controlled by the job scheduler are running Linux and are available by submitting a job through the SLURM job scheduler. **They are not available for direct logins via ''ssh''.** A minimal submission sketch is shown below.
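
A minimal batch job sketch follows, assuming the standard SLURM client tools on a submit host; the job name, resource requests, and output file are placeholders, and the real partition and account names are documented in the SLURM article linked at the end of this section.

<code bash>
#!/bin/bash
# Hypothetical example batch script (example.slurm); adjust the resource
# requests for your job and see the SLURM article for partition names.
#SBATCH --job-name=example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00
#SBATCH --output=example-%j.out

# The commands below run on whichever compute node SLURM assigns.
hostname
</code>

Submit the script with ''sbatch example.slurm'' and check its status with ''squeue -u $USER''.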
+ | |||
+ | Per CS Dept. policy, servers are placed into the SLURM job scheduling queues and are available for general use. Also, per that policy, if a user in a research group that originally purchased the hardware requires exclusive use of that hardware, they can be given a reservation for that exclusive use for a specified time. Otherwise, the systems are open for use by anyone with a CS account. This policy was approved by the CS Dept. Computing Committee comprised of CS Faculty. | ||
+ | |||
+ | This policy allows servers to be used when the project group is not using them. So instead of sitting idle and consuming power and cooling, other Dept. users can benefit from the use of these systems. | ||
+ | |||
+ | **See our main article on (__[[compute_slurm|SLURM]]__) for more information about available resources.** | ||
+ | |||
+ | ---- | ||