CS SLURM Cluster Report - 1 week

Report generated for jobs run on the CS SLURM cluster from 2026-01-25 through 2026-01-31.

Job total during this query range: 25,033

Job total since August 1st, 2024: 5,351,187

This page is updated every Sunday at 5:00pm EST.


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2026-01-25T00:00:00 - 2026-01-31T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                  Down         PLND Down                    Idle            Planned                Reported 
--------- -------------- ---------------------- --------------------- ----------------- ----------------------- ------------------ ----------------------- 
       cs            cpu         155400(22.32%)           9750(1.40%)          0(0.00%)          467457(67.14%)       63586(9.13%)         696192(100.00%) 
       cs            mem     1126862530(15.25%)       29509298(0.40%)          0(0.00%)      6230588172(84.35%)           0(0.00%)     7386960000(100.00%) 
       cs       gres/gpu           7260(22.86%)           1163(3.66%)          0(0.00%)           23329(73.47%)           0(0.00%)          31752(100.00%) 
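
The utilization block above is standard output from Slurm's sreport utility. A command along the lines of "sreport -t hourper --tres=cpu,mem,gres/gpu cluster utilization start=2026-01-25 end=2026-02-01" would produce this format; the exact invocation used by the report generator is not shown on this page, so treat it as an assumption.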

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1596,mem=17562000M,node=40
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2066,mem=22254000M,node=42,gres/gpu=162
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=226,mem=2528000M,node=7
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=256,mem=1626000M,node=10,gres/gpu=27
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
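
The TRESBillingWeights lines above control how a job's CPU, memory, and GPU requests combine into one billing number per job. The following Python sketch illustrates Slurm's default behavior, where billing is the sum of each TRES amount multiplied by its weight (with PriorityFlags=MAX_TRES it would be the maximum weighted TRES instead). The job request is made up for illustration, and the sketch assumes the Mem weight applies per MB of requested memory, which is how stock Slurm interprets a weight given with no unit suffix:

  # Billing weights from the gpu partition definition above.
  GPU_PARTITION_WEIGHTS = {"cpu": 1.0, "mem": 0.15, "gres/gpu": 2.0}

  def billing(request, weights):
      """Sum of weighted TRES amounts (Slurm's default billing calculation)."""
      return sum(weights.get(tres, 0.0) * amount for tres, amount in request.items())

  # Hypothetical job request: 4 CPUs, 16384 MB of memory, 1 GPU.
  job = {"cpu": 4, "mem": 16384, "gres/gpu": 1}
  print(billing(job, GPU_PARTITION_WEIGHTS))  # 4*1.0 + 16384*0.15 + 1*2.0 = 2463.6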

SLURM Usage by Partition

PartitionName | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
cpu | 20819 | 73215:30:16 | 16532 | 1085 | 0 | 2906 | 0 | 0 | 0 | 170 | 126 | 0 | 0 | 0 | 0 | 0 | 0
gpu | 2056 | 65447:21:06 | 1540 | 217 | 0 | 285 | 0 | 0 | 0 | 10 | 4 | 0 | 0 | 0 | 0 | 0 | 0
nolim | 2143 | 11824:48:32 | 1346 | 222 | 0 | 574 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
gnolim | 15 | 4653:14:24 | 1 | 5 | 0 | 4 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0
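
A per-partition breakdown like the table above can be reproduced from Slurm's accounting database with sacct. The Python sketch below is one plausible way to do it, not necessarily how this report is generated; it assumes sacct is on PATH, counts one record per job allocation over the same date range, and omits the cputime column (which could be summed from the CPUTimeRAW field):

  import subprocess
  from collections import Counter, defaultdict

  # One record per job allocation (-X), all users (-a), pipe-delimited output.
  cmd = [
      "sacct", "-a", "-X", "--noheader", "--parsable2",
      "-S", "2026-01-25", "-E", "2026-02-01",
      "--format=Partition,State",
  ]
  out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

  counts = defaultdict(Counter)
  for line in out.splitlines():
      partition, state = line.split("|")
      # sacct prints e.g. "CANCELLED by 12345"; keep only the leading state word.
      counts[partition][state.split(" ")[0]] += 1

  for partition, states in sorted(counts.items()):
      print(partition, states.most_common())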

SLURM Usage by Advisor Group

  • slurm-cs-undefined: users who have CS accounts but are not CS students
  • slurm-cs-unassigned: users who are CS students but do not have a listed CS advisor
GroupName | total_jobs | cputime(HH:MM:SS) | cpu | gpu | nolim | gnolim | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
slurm-cs-shangtong-zhang | 330 | 77811:33:36 | 156 | 108 | 51 | 15 | 10 | 215 | 0 | 90 | 0 | 0 | 0 | 15 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-henry-kautz | 6507 | 18758:36:32 | 4398 | 17 | 2092 | 0 | 4231 | 527 | 0 | 1491 | 0 | 0 | 0 | 157 | 101 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-ashish-venkat | 15959 | 14161:47:36 | 15959 | 0 | 0 | 0 | 13412 | 635 | 0 | 1908 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yue-cheng | 23 | 13840:41:52 | 0 | 23 | 0 | 0 | 8 | 10 | 1 | 2 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-tianhao-wang | 210 | 8730:03:44 | 102 | 108 | 0 | 0 | 152 | 22 | 0 | 34 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yu-meng | 3 | 6600:27:00 | 0 | 3 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-lu-feng | 1620 | 6380:11:06 | 0 | 1620 | 0 | 0 | 1372 | 70 | 0 | 178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yenling-kuo | 11 | 5979:01:20 | 0 | 11 | 0 | 0 | 0 | 10 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-undefined | 147 | 1072:04:00 | 22 | 125 | 0 | 0 | 87 | 35 | 0 | 20 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-kevin-skadron | 34 | 875:19:58 | 12 | 22 | 0 | 0 | 9 | 6 | 0 | 16 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-briana-morrison | 19 | 790:20:14 | 0 | 19 | 0 | 0 | 6 | 4 | 0 | 8 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-madhur-behl | 170 | 140:47:20 | 170 | 0 | 0 | 0 | 132 | 10 | 1 | 1 | 0 | 0 | 0 | 0 | 26 | 0 | 0 | 0 | 0 | 0 | 0

SLURM Usage by NodeName

Nodename | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
cheetah04 | 10 | 7077:54:32 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
puma01 | 656 | 6257:24:40 | 438 | 73 | 0 | 135 | 0 | 0 | 0 | 2 | 8 | 0 | 0 | 0 | 0 | 0 | 0
serval06 | 38 | 5523:25:28 | 11 | 10 | 0 | 14 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0
serval07 | 29 | 5229:46:00 | 12 | 11 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat01 | 2179 | 4899:39:58 | 1707 | 131 | 0 | 270 | 0 | 0 | 0 | 31 | 40 | 0 | 0 | 0 | 0 | 0 | 0
bigcat03 | 1348 | 4859:45:06 | 1143 | 41 | 0 | 164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat02 | 2118 | 4616:01:40 | 1782 | 79 | 0 | 232 | 0 | 0 | 0 | 11 | 14 | 0 | 0 | 0 | 0 | 0 | 0
serval08 | 17 | 4589:28:40 | 12 | 3 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat05 | 1239 | 4532:27:06 | 1046 | 61 | 0 | 132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat04 | 1612 | 4362:11:52 | 1382 | 41 | 0 | 189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval09 | 23 | 4214:19:00 | 15 | 7 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah08 | 137 | 3704:59:32 | 106 | 16 | 0 | 15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah09 | 108 | 3290:23:16 | 73 | 20 | 0 | 14 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm2 | 561 | 3225:30:10 | 304 | 68 | 0 | 189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
heartpiece | 259 | 3210:05:24 | 169 | 21 | 0 | 68 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
struct01 | 435 | 3001:46:44 | 318 | 21 | 0 | 91 | 0 | 0 | 0 | 2 | 3 | 0 | 0 | 0 | 0 | 0 | 0
affogato05 | 302 | 2840:10:14 | 245 | 21 | 0 | 33 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx08 | 169 | 2689:26:16 | 114 | 28 | 0 | 25 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
jaguar01 | 17 | 2660:44:08 | 9 | 2 | 0 | 5 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato06 | 297 | 2581:46:42 | 222 | 13 | 0 | 56 | 0 | 0 | 0 | 4 | 2 | 0 | 0 | 0 | 0 | 0 | 0
serval03 | 9 | 2535:58:54 | 5 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
affogato08 | 345 | 2527:26:58 | 278 | 15 | 0 | 49 | 0 | 0 | 0 | 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0
affogato11 | 15 | 2460:03:20 | 0 | 11 | 0 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah01 | 107 | 2453:18:26 | 89 | 14 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato04 | 371 | 2418:26:54 | 279 | 26 | 0 | 61 | 0 | 0 | 0 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0
bigcat06 | 2034 | 2335:33:48 | 1875 | 76 | 0 | 83 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
hydro | 384 | 2209:46:58 | 273 | 30 | 0 | 80 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
lotus | 99 | 2175:29:44 | 67 | 12 | 0 | 20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato01 | 439 | 2153:10:40 | 321 | 29 | 0 | 82 | 0 | 0 | 0 | 4 | 3 | 0 | 0 | 0 | 0 | 0 | 0
slurm1 | 809 | 2114:55:26 | 578 | 32 | 0 | 199 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm5 | 310 | 1980:43:30 | 221 | 20 | 0 | 69 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah02 | 198 | 1979:38:14 | 156 | 18 | 0 | 24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct03 | 333 | 1974:10:26 | 237 | 26 | 0 | 65 | 0 | 0 | 0 | 2 | 3 | 0 | 0 | 0 | 0 | 0 | 0
struct02 | 423 | 1928:59:02 | 326 | 11 | 0 | 82 | 0 | 0 | 0 | 1 | 3 | 0 | 0 | 0 | 0 | 0 | 0
ai03 | 35 | 1906:55:12 | 29 | 4 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
lynx10 | 5 | 1897:45:52 | 0 | 3 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct04 | 346 | 1876:11:20 | 247 | 24 | 0 | 71 | 0 | 0 | 0 | 1 | 3 | 0 | 0 | 0 | 0 | 0 | 0
ai04 | 40 | 1875:39:48 | 33 | 4 | 0 | 2 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
affogato02 | 380 | 1864:18:24 | 252 | 31 | 0 | 94 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0
ai06 | 19 | 1839:53:24 | 9 | 8 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
nekomata01 | 111 | 1760:16:28 | 92 | 7 | 0 | 11 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct05 | 385 | 1698:00:58 | 283 | 26 | 0 | 72 | 0 | 0 | 0 | 1 | 3 | 0 | 0 | 0 | 0 | 0 | 0
struct06 | 349 | 1636:39:16 | 245 | 31 | 0 | 68 | 0 | 0 | 0 | 2 | 3 | 0 | 0 | 0 | 0 | 0 | 0
ai02 | 38 | 1633:44:10 | 28 | 7 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct09 | 297 | 1617:11:16 | 229 | 17 | 0 | 47 | 0 | 0 | 0 | 1 | 3 | 0 | 0 | 0 | 0 | 0 | 0
struct08 | 320 | 1616:47:50 | 249 | 15 | 0 | 52 | 0 | 0 | 0 | 1 | 3 | 0 | 0 | 0 | 0 | 0 | 0
affogato13 | 60 | 1561:48:48 | 0 | 12 | 0 | 48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
panther01 | 661 | 1458:15:20 | 461 | 42 | 0 | 152 | 0 | 0 | 0 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0
slurm3 | 181 | 1197:50:22 | 74 | 58 | 0 | 49 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato14 | 19 | 1137:20:40 | 0 | 6 | 0 | 13 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx09 | 205 | 935:14:36 | 154 | 21 | 0 | 27 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0
struct07 | 445 | 903:43:12 | 334 | 24 | 0 | 72 | 0 | 0 | 0 | 12 | 3 | 0 | 0 | 0 | 0 | 0 | 0
ai05 | 4 | 846:30:24 | 1 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah03 | 78 | 840:04:24 | 62 | 9 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar03 | 186 | 834:38:24 | 168 | 9 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato15 | 16 | 767:15:52 | 0 | 5 | 0 | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jinx01 | 4 | 759:55:20 | 0 | 2 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jinx02 | 3 | 734:41:44 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic01 | 49 | 630:00:34 | 27 | 4 | 0 | 18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic02 | 42 | 584:55:40 | 25 | 4 | 0 | 13 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai07 | 1 | 578:01:44 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx02 | 1 | 578:01:44 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx03 | 1 | 578:01:44 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx05 | 1 | 578:01:44 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato09 | 301 | 540:32:08 | 245 | 13 | 0 | 39 | 0 | 0 | 0 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0
affogato10 | 393 | 539:20:52 | 315 | 27 | 0 | 49 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato07 | 336 | 492:40:26 | 269 | 18 | 0 | 47 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0
cortado01 | 429 | 428:36:34 | 294 | 32 | 0 | 65 | 0 | 0 | 0 | 23 | 15 | 0 | 0 | 0 | 0 | 0 | 0
jaguar02 | 328 | 362:00:34 | 307 | 6 | 0 | 15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato03 | 316 | 357:21:48 | 240 | 19 | 0 | 56 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado02 | 342 | 324:46:40 | 267 | 5 | 0 | 47 | 0 | 0 | 0 | 23 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado03 | 213 | 225:48:58 | 133 | 5 | 0 | 52 | 0 | 0 | 0 | 23 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado04 | 130 | 222:49:32 | 95 | 5 | 0 | 22 | 0 | 0 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado07 | 80 | 168:44:44 | 54 | 4 | 0 | 22 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado05 | 79 | 163:27:54 | 63 | 4 | 0 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic04 | 4 | 148:49:24 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar06 | 86 | 143:53:08 | 79 | 3 | 0 | 3 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
adriatic06 | 4 | 136:25:52 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic03 | 14 | 130:10:52 | 4 | 3 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic05 | 4 | 111:56:28 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm4 | 23 | 95:43:40 | 0 | 23 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado06 | 65 | 92:21:54 | 56 | 2 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado10 | 26 | 35:56:04 | 26 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado09 | 19 | 27:01:44 | 17 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado08 | 20 | 26:10:54 | 18 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar05 | 84 | 07:42:36 | 83 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai01 | 19 | 01:50:30 | 19 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai08 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai09 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai10 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
epona | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx01 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx02 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx03 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx04 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx05 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx06 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx07 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx11 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
