
CS SLURM Cluster Report - 1 week

Report generated for jobs run on the CS SLURM cluster from 2026-02-01 through 2026-02-07.

Job total during this query range: 26,370

Job total since August 1, 2024: 5,379,393

This page is updated every Sunday at 5:00pm EST.


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2026-02-01T00:00:00 - 2026-02-07T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                  Down         PLND Down                    Idle            Planned                Reported 
--------- -------------- ---------------------- --------------------- ----------------- ----------------------- ------------------ ----------------------- 
       cs            cpu         169704(24.38%)           9367(1.35%)          0(0.00%)          407273(58.50%)     109848(15.78%)         696192(100.00%) 
       cs            mem     1088573677(14.74%)       28752654(0.39%)          0(0.00%)      6269633668(84.87%)           0(0.00%)     7386960000(100.00%) 
       cs       gres/gpu           9008(28.37%)           1111(3.50%)          0(0.00%)           21633(68.13%)           0(0.00%)          31752(100.00%) 
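
As a quick sanity check on the utilization table, the Allocated, Down, PLND Down, Idle and Planned TRES hours in each row add up to the Reported total, and the percentages add up to 100%. A minimal sketch of that check in Python, using the cpu row above:

  # Sanity check: Allocated + Down + PLND Down + Idle + Planned == Reported
  cpu_hours = {
      "allocated": 169704,
      "down": 9367,
      "planned_down": 0,
      "idle": 407273,
      "planned": 109848,
  }
  reported = 696192

  assert sum(cpu_hours.values()) == reported

  # Reproduce the percentages shown in the report.
  for name, hours in cpu_hours.items():
      print(f"{name:>12}: {hours:8d} TRES hours ({hours / reported:.2%})")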

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1596,mem=17562000M,node=40
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2066,mem=22254000M,node=42,gres/gpu=162
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=226,mem=2528000M,node=7
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=256,mem=1626000M,node=10,gres/gpu=27
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
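
The TRESBillingWeights above determine how a job's allocation is converted into a single billing figure on each partition. By default Slurm bills the weighted sum of the allocated TRES (with PriorityFlags=MAX_TRES it would instead take only the largest weighted term). A minimal sketch of that arithmetic for the gpu partition, assuming the default summing behaviour and assuming the Mem weight is applied per GB of allocated memory (the memory unit is not stated on this page):

  # Hypothetical example: billing value for one job on the gpu partition
  # (TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0).
  # Assumptions: weights are summed (no MAX_TRES flag) and Mem=0.15 is per GB.
  WEIGHTS = {"cpu": 1.0, "mem_per_gb": 0.15, "gpu": 2.0}

  def billing(cpus: int, mem_gb: float, gpus: int) -> float:
      """Weighted-sum billing for a single job allocation."""
      return (WEIGHTS["cpu"] * cpus
              + WEIGHTS["mem_per_gb"] * mem_gb
              + WEIGHTS["gpu"] * gpus)

  # e.g. 8 CPUs, 64 GB of memory and 2 GPUs: 1.0*8 + 0.15*64 + 2.0*2 = 21.6
  print(billing(cpus=8, mem_gb=64, gpus=2))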

SLURM Usage by Partition

PartitionName | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
cpu | 21307 | 83648:15:24 | 17840 | 1703 | 0 | 1656 | 0 | 0 | 0 | 88 | 20 | 0 | 0 | 0 | 0 | 0 | 0
gpu | 4407 | 74391:30:14 | 2339 | 535 | 0 | 1407 | 0 | 2 | 0 | 11 | 113 | 0 | 0 | 0 | 0 | 0 | 0
nolim | 656 | 10597:32:59 | 352 | 65 | 0 | 239 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
gnolim | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
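
In each row above, the per-state counts (completed through revoked) add up to that row's total_jobs, and the four partition totals add up to the 26,370 jobs reported for this query range. A small consistency check with the values from the table:

  # Per-partition totals and job-state counts from the table above.
  rows = {
      "cpu":    (21307, [17840, 1703, 0, 1656, 0, 0, 0, 88, 20, 0, 0, 0, 0, 0, 0]),
      "gpu":    (4407,  [2339, 535, 0, 1407, 0, 2, 0, 11, 113, 0, 0, 0, 0, 0, 0]),
      "nolim":  (656,   [352, 65, 0, 239, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
      "gnolim": (0,     [0] * 15),
  }

  for name, (total_jobs, state_counts) in rows.items():
      assert sum(state_counts) == total_jobs, name

  assert sum(total for total, _ in rows.values()) == 26370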

SLURM Usage by Advisor Group

  • slurm-cs-undefined: users who have CS accounts but are not CS students
  • slurm-cs-unassigned: users who are CS students but do not have a listed CS advisor
GroupName | total_jobs | cputime(HH:MM:SS) | cpu | gpu | nolim | gnolim | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
slurm-cs-undefined | 1597 | 89931:53:16 | 1096 | 317 | 184 | 0 | 593 | 771 | 0 | 233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-ashish-venkat | 12920 | 20287:08:44 | 12920 | 0 | 0 | 0 | 11817 | 1030 | 0 | 23 | 0 | 0 | 0 | 50 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-lu-feng | 2147 | 19566:11:52 | 208 | 1939 | 0 | 0 | 766 | 265 | 0 | 1006 | 0 | 0 | 0 | 2 | 108 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-henry-kautz | 1530 | 10024:25:47 | 1058 | 0 | 472 | 0 | 1332 | 10 | 0 | 188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yen-ling-kuo | 258 | 6724:52:16 | 230 | 28 | 0 | 0 | 231 | 8 | 0 | 14 | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-unassigned | 7744 | 5204:34:36 | 5750 | 1994 | 0 | 0 | 5688 | 189 | 0 | 1812 | 0 | 0 | 0 | 35 | 20 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-kevin-skadron | 86 | 3988:14:42 | 43 | 43 | 0 | 0 | 47 | 21 | 0 | 11 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-tianhao-wang | 52 | 3878:12:32 | 0 | 52 | 0 | 0 | 42 | 8 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yu-meng | 3 | 3368:56:32 | 2 | 1 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-madhur-behl | 11 | 3099:39:12 | 0 | 11 | 0 | 0 | 2 | 0 | 0 | 7 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-zezhou-cheng | 8 | 1990:16:00 | 0 | 8 | 0 | 0 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-ferdinando-fioretto | 10 | 570:22:04 | 0 | 10 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-briana-morrison | 4 | 02:31:04 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

SLURM Usage by NodeName

Nodename | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
puma01 | 743 | 12843:24:36 | 594 | 105 | 0 | 41 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah02 | 83 | 7194:17:52 | 5 | 47 | 0 | 23 | 0 | 0 | 0 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0
bigcat01 | 2118 | 5917:02:12 | 1896 | 176 | 0 | 37 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat03 | 782 | 5791:56:38 | 696 | 60 | 0 | 15 | 0 | 0 | 0 | 6 | 5 | 0 | 0 | 0 | 0 | 0 | 0
bigcat04 | 1304 | 5675:12:18 | 904 | 81 | 0 | 311 | 0 | 0 | 0 | 4 | 4 | 0 | 0 | 0 | 0 | 0 | 0
bigcat02 | 1394 | 5130:16:22 | 1247 | 114 | 0 | 14 | 0 | 0 | 0 | 14 | 5 | 0 | 0 | 0 | 0 | 0 | 0
cheetah08 | 53 | 4816:47:48 | 13 | 21 | 0 | 14 | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0
cheetah04 | 45 | 4736:00:06 | 28 | 4 | 0 | 10 | 0 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat05 | 1857 | 4658:08:02 | 1310 | 53 | 0 | 493 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval06 | 126 | 4551:23:20 | 119 | 6 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat06 | 1882 | 4199:38:36 | 1473 | 84 | 0 | 311 | 0 | 0 | 0 | 13 | 1 | 0 | 0 | 0 | 0 | 0 | 0
jaguar01 | 137 | 3983:53:12 | 71 | 24 | 0 | 30 | 0 | 0 | 0 | 0 | 12 | 0 | 0 | 0 | 0 | 0 | 0
heartpiece | 133 | 3855:48:46 | 38 | 25 | 0 | 70 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato13 | 51 | 3723:07:28 | 5 | 27 | 0 | 19 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato11 | 36 | 3721:21:20 | 4 | 27 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato14 | 35 | 3690:43:28 | 4 | 26 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx08 | 185 | 3233:56:04 | 119 | 25 | 0 | 40 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
hydro | 31 | 3131:49:10 | 20 | 8 | 0 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar03 | 186 | 3031:52:50 | 31 | 52 | 0 | 61 | 0 | 0 | 0 | 0 | 42 | 0 | 0 | 0 | 0 | 0 | 0
affogato05 | 286 | 2872:48:16 | 237 | 45 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah03 | 26 | 2769:38:36 | 4 | 13 | 0 | 5 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0
affogato04 | 343 | 2594:36:14 | 279 | 40 | 0 | 21 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm2 | 192 | 2523:17:28 | 116 | 13 | 0 | 63 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lotus | 138 | 2427:53:12 | 34 | 32 | 0 | 62 | 0 | 0 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0
serval07 | 487 | 2402:44:52 | 468 | 4 | 0 | 13 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah09 | 516 | 2388:17:56 | 130 | 27 | 0 | 357 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato01 | 436 | 2347:21:06 | 388 | 43 | 0 | 4 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
serval09 | 24 | 2230:21:00 | 3 | 4 | 0 | 15 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0
slurm5 | 75 | 1994:29:04 | 52 | 12 | 0 | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm1 | 138 | 1980:18:17 | 92 | 13 | 0 | 33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval03 | 14 | 1901:25:46 | 3 | 7 | 0 | 2 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato02 | 354 | 1888:20:46 | 294 | 59 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
lynx10 | 16 | 1869:43:44 | 2 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato15 | 31 | 1866:57:28 | 3 | 14 | 0 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah01 | 224 | 1864:11:08 | 161 | 33 | 0 | 19 | 0 | 0 | 0 | 0 | 11 | 0 | 0 | 0 | 0 | 0 | 0
ai03 | 11 | 1849:10:24 | 2 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai02 | 11 | 1844:58:40 | 2 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx09 | 207 | 1789:45:08 | 141 | 19 | 0 | 46 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic01 | 64 | 1680:00:20 | 22 | 15 | 0 | 27 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic03 | 74 | 1668:01:16 | 16 | 19 | 0 | 39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic02 | 55 | 1548:19:28 | 16 | 10 | 0 | 29 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval08 | 409 | 1490:34:02 | 380 | 6 | 0 | 19 | 0 | 0 | 0 | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0
ai04 | 18 | 1459:04:12 | 1 | 17 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar06 | 341 | 1383:27:24 | 303 | 10 | 0 | 21 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 0 | 0 | 0
cortado01 | 496 | 1382:43:34 | 440 | 19 | 0 | 37 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
panther01 | 437 | 1356:36:08 | 383 | 40 | 0 | 11 | 0 | 0 | 0 | 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0
cortado03 | 442 | 1322:45:00 | 397 | 22 | 0 | 23 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
nekomata01 | 96 | 1305:25:28 | 11 | 17 | 0 | 61 | 0 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0
cortado02 | 457 | 1256:01:26 | 392 | 20 | 0 | 45 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado04 | 379 | 1229:09:30 | 337 | 16 | 0 | 25 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct07 | 399 | 1111:30:02 | 360 | 32 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct05 | 364 | 1015:41:26 | 330 | 31 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado05 | 339 | 994:09:14 | 297 | 17 | 0 | 24 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato06 | 292 | 994:03:12 | 256 | 27 | 0 | 2 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct02 | 343 | 990:36:56 | 303 | 33 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato09 | 387 | 933:04:38 | 342 | 38 | 0 | 3 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0
struct01 | 423 | 918:15:08 | 374 | 39 | 0 | 9 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
affogato03 | 285 | 878:15:18 | 219 | 28 | 0 | 37 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar02 | 185 | 848:35:54 | 22 | 16 | 0 | 143 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0
cortado06 | 279 | 833:55:08 | 249 | 16 | 0 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct03 | 369 | 793:04:02 | 320 | 44 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct04 | 413 | 741:51:04 | 368 | 42 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato07 | 351 | 731:12:26 | 311 | 33 | 0 | 0 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato10 | 377 | 726:09:06 | 337 | 33 | 0 | 3 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0
struct06 | 503 | 722:04:58 | 426 | 35 | 0 | 42 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato08 | 412 | 701:07:58 | 373 | 34 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct08 | 440 | 626:52:42 | 391 | 47 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct09 | 427 | 582:02:28 | 384 | 40 | 0 | 2 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
adriatic04 | 91 | 447:35:32 | 16 | 18 | 0 | 57 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic05 | 64 | 366:45:16 | 16 | 13 | 0 | 35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai06 | 110 | 359:36:22 | 63 | 7 | 0 | 40 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado07 | 266 | 346:45:52 | 234 | 29 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic06 | 31 | 321:11:00 | 16 | 5 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai01 | 9 | 289:46:20 | 0 | 1 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm3 | 118 | 243:39:24 | 54 | 2 | 0 | 62 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar05 | 612 | 215:41:06 | 357 | 3 | 0 | 251 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado10 | 217 | 208:16:20 | 187 | 25 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado08 | 165 | 164:33:42 | 137 | 27 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado09 | 125 | 113:50:14 | 96 | 25 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx01 | 6 | 00:00:08 | 0 | 0 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai05 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai07 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai08 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai09 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai10 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
epona | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jinx01 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jinx02 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx02 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx03 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx04 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx05 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx06 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx07 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx11 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm4 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx02 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx03 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx05 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
