
CS SLURM Cluster Report - 1 week

Report generated for jobs run on the CS SLURM cluster from 2025-11-02 through 2025-11-08.

Job total during this query range: 9,079

Job total since August 1, 2024: 5,155,792

This page is updated every Sunday at 5:00pm EST.
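The job totals above come from the SLURM accounting database. As a rough sketch (the exact flags and the local accounting configuration are assumptions), a count for the query range could be reproduced with sacct:

  # Count job allocations (one record per job, all users) in the report window
  # -a = all users, -X = allocations only, -n = suppress the header
  sacct -a -X -n -S 2025-11-02 -E 2025-11-08T23:59:59 --format=JobID | wc -l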


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2025-11-02T00:00:00 - 2025-11-08T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                 Down         PLND Down                    Idle            Planned                Reported 
--------- -------------- ---------------------- -------------------- ----------------- ----------------------- ------------------ ----------------------- 
       cs            cpu         114679(15.67%)          6562(0.90%)          0(0.00%)          557958(76.23%)       52704(7.20%)          731903(99.38%) 
       cs            mem      802591903(10.73%)      26880071(0.36%)          0(0.00%)      6649157644(88.91%)           0(0.00%)      7478629618(99.40%) 
       cs       gres/gpu           4929(15.14%)           392(1.20%)          0(0.00%)           27242(83.66%)           0(0.00%)           32563(99.84%) 
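The utilization table above is the kind of output produced by SLURM's sreport utility. A query along these lines would request the same window reported as TRES hours and percentage of total (the exact end-time handling and flag spelling on this cluster are assumptions):

  # Cluster utilization for the report week, in TRES hours / percent of total
  sreport cluster utilization start=2025-11-02 end=2025-11-09 -t hourper --tres=cpu,mem,gres/gpu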

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1626,mem=17690000M,node=41
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2106,mem=22382000M,node=42,gres/gpu=164
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=226,mem=2528000M,node=7
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=256,mem=1626000M,node=10,gres/gpu=27
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
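The per-partition TRES totals and TRESBillingWeights listed above are fields from the partition definitions. A filtered scontrol listing similar to this section could be produced with something like the following (the grep pattern is illustrative, and line grouping in the output varies by SLURM version):

  # Show partition definitions, keeping the name, TRES totals, and billing weights
  scontrol show partition | grep -E 'PartitionName=|TRES'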

SLURM Usage by Partition

PartitionName | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
gpu279782445:45:386908400724000141402000000
cpu621038033:27:263835656015320005182000000
gnolim7276:44:086500700000000000
nolim000:00:00000000000000000
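Breakdowns like the one above (and the advisor-group and node tables that follow) can be assembled from per-job accounting records. One possible sketch for the per-partition case, with field names and window handling as assumptions:

  # Per-partition job counts by final state
  # (CANCELLED states include the cancelling UID and may need normalizing)
  sacct -a -X -n -P -S 2025-11-02 -E 2025-11-08T23:59:59 --format=Partition,State | sort | uniq -c

  # Per-partition CPU time, summing CPUTimeRAW (seconds) and reporting hours
  sacct -a -X -n -P -S 2025-11-02 -E 2025-11-08T23:59:59 --format=Partition,CPUTimeRAW \
    | awk -F'|' '{cpu[$1] += $2} END {for (p in cpu) printf "%s %.2f hours\n", p, cpu[p]/3600}'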

SLURM Usage by Advisor Group

  • slurm-cs-undefined, users that have CS accounts but are not CS students
  • slurm-cs-unassigned, users that are CS students but do not have a listed CS advisor
GroupName | total_jobs | cputime(HH:MM:SS) | cpu | gpu | nolim | gnolim | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
slurm-cs-unassigned196338965:43:222151748001116220576000127527000000
slurm-cs-undefined18726972:02:424183009927046000123000000
slurm-cs-adwait-jog23223341:19:542266007110700000153000000
slurm-cs-madhur-behl8911130:32:5257707205201500011000000
slurm-cs-tianhao-wang2403865:21:381923801022380800010000000
slurm-cs-geoffrey-fox83748:39:000800110500010000000
slurm-cs-yenling-kuo63464:28:120600050100000000000
slurm-cs-ashish-venkat48033359:29:40480300030202650151800000000000
slurm-cs-yangfeng-ji382335:55:583350061002100010000000
slurm-cs-kevin-skadron231458:55:54122001260400010000000
slurm-cs-hadi-daneshmand12361075:28:24706530008403920400000000000
slurm-cs-shangtong-zhang170384:36:24556005516900100000000000
slurm-cs-charles-reiss11258:17:0801100610400000000000
slurm-cs-wajih-hassan9194:06:240900800000010000000
slurm-cs-chen-yu-wei6200:31:32062002006000000000000
slurm-cs-lu-feng200:28:080200200000000000000

SLURM Usage by NodeName

Nodename | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
jaguar0384621447:37:303642602580001260000000
cheetah04313027:26:24100100010000000
serval07256763:52:441510900000000000
serval09726330:26:0025104600000000000
hydro5074983:46:1036037086000123000000
puma019674590:11:446161380168000045000000
jaguar041914368:05:006125010500000000000
jaguar01713955:56:04301302700010000000
serval08223813:24:041500500020000000
serval06383735:55:2013302100010000000
serval03252859:40:501510900000000000
cortado03142814:30:182120000000000000
cortado02202522:11:420180000002000000
sdscpu012952439:21:5015831096000010000000
cortado01202177:13:146120000002000000
cheetah01741916:13:58372001500020000000
lotus141662:10:521010100020000000
panther01961624:04:16561601800006000000
cheetah081111557:35:502125020000045000000
cheetah021631467:58:544751025000139000000
affogato03801419:09:08401102500004000000
jaguar061021393:56:08382404000000000000
jaguar02691392:14:2246901000031000000
affogato025461162:33:58349740108000114000000
cheetah091231004:32:443537019000032000000
cortado0410999:41:14460000000000000
cheetah0370829:10:1284206000014000000
affogato04378813:19:5418619016600007000000
affogato05304745:14:5017517010700005000000
lynx08402728:00:282604009300009000000
cortado066727:53:36060000000000000
cortado106727:53:36060000000000000
cortado056727:53:20060000000000000
cortado076727:53:20060000000000000
cortado086727:53:20060000000000000
cortado096727:53:20060000000000000
affogato10175610:23:501021704900034000000
affogato01403582:22:442814407600002000000
lynx10142568:02:083828015000061000000
struct02198558:55:221201306100004000000
adriatic0192558:23:043124011000026000000
struct01165554:54:061221302500005000000
struct03173532:53:201071304900004000000
struct05183531:31:501071305700006000000
struct04157531:17:461041303300007000000
adriatic0373505:18:101625011000021000000
struct06177502:25:3283808200004000000
adriatic0281490:41:482125011000024000000
struct07100489:13:1868202800002000000
struct0970484:45:066020600002000000
struct0895481:49:5057203200004000000
lynx11145466:35:483728015000065000000
adriatic0458447:04:1222909000018000000
bigcat0113364:19:12030500005000000
affogato06224141:48:401401606800000000000
ai0694140:12:181510013000056000000
affogato08213139:56:481331506500000000000
affogato07169136:06:521241502900001000000
adriatic0519131:05:501510300000000000
adriatic0619130:43:281510300000000000
jaguar051662:34:381210100020000000
affogato11638:13:28120300000000000
ai052927:43:282600300000000000
jinx011423:56:481400000000000000
jinx021115:03:521100000000000000
ai07905:28:16600300000000000
lynx092004:03:521500000005000000
ai08703:02:40600100000000000
ai09201:29:04200000000000000
affogato13400:33:52000400000000000
affogato09000:00:00000000000000000
affogato14000:00:00000000000000000
affogato15000:00:00000000000000000
ai01000:00:00000000000000000
ai02000:00:00000000000000000
ai03000:00:00000000000000000
ai04000:00:00000000000000000
ai10000:00:00000000000000000
bigcat02000:00:00000000000000000
bigcat03000:00:00000000000000000
bigcat04000:00:00000000000000000
bigcat05000:00:00000000000000000
bigcat06000:00:00000000000000000
epona000:00:00000000000000000
heartpiece000:00:00000000000000000
lynx01000:00:00000000000000000
lynx02000:00:00000000000000000
lynx03000:00:00000000000000000
lynx04000:00:00000000000000000
lynx05000:00:00000000000000000
lynx06000:00:00000000000000000
lynx07000:00:00000000000000000
slurm1000:00:00000000000000000
slurm2000:00:00000000000000000
slurm3000:00:00000000000000000
slurm4000:00:00000000000000000
slurm5000:00:00000000000000000
titanx02000:00:00000000000000000
titanx03000:00:00000000000000000
titanx05000:00:00000000000000000
