
CS SLURM Cluster Report - 4 weeks

Report generated for jobs run on the CS SLURM cluster from 2025-11-23 through 2025-12-20.

Job total during this query range: 28,079

Job total since August 1st 2024: 5,219,078

This page is updated every Sunday at 5:00pm EST.


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2025-11-23T00:00:00 - 2025-12-20T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                  Down         PLND Down                    Idle            Planned                 Reported 
--------- -------------- ---------------------- --------------------- ----------------- ----------------------- ------------------ ------------------------ 
       cs            cpu          230089(8.20%)           7168(0.26%)          0(0.00%)         2446925(87.21%)      121559(4.33%)          2805741(99.98%) 
       cs            mem      2113036312(7.13%)      244439396(0.82%)          0(0.00%)     27277904452(92.05%)           0(0.00%)      29635380160(99.98%) 
       cs       gres/gpu          20592(16.10%)            956(0.75%)          0(0.00%)          106372(83.15%)           0(0.00%)          127920(100.19%) 
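Each percentage in the sreport block above is that category's TRES-hours divided by the total reported TRES-hours for the window. A quick sketch, using the cpu row's figures, shows how the Allocated and Idle percentages are derived:

```python
# Values transcribed from the cpu row of the utilization table above
# (TRES hours for 2025-11-23 through 2025-12-20).
allocated, down, idle, planned, reported = 230089, 7168, 2446925, 121559, 2805741

def pct(hours, total):
    """Share of total reported TRES-hours, as a percentage."""
    return 100 * hours / total

print(f"allocated: {pct(allocated, reported):.2f}%")  # ~8.20%, matching the table
print(f"idle:      {pct(idle, reported):.2f}%")       # ~87.21%, matching the table
```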

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1596,mem=17562000M,node=40
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2066,mem=22254000M,node=42,gres/gpu=162
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=226,mem=2528000M,node=7
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=256,mem=1626000M,node=10,gres/gpu=27
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
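The TRESBillingWeights above determine how much a job is "billed" for each resource it requests. A minimal sketch of the idea, assuming the default behavior where the weighted TRES values are summed (with PriorityFlags=MAX_TRES Slurm takes the maximum instead), and treating the Mem weight as per-gigabyte for illustration (Slurm memory weights take a unit suffix, and the unit used in this cluster's config is not shown here). The job shape below is a made-up example, not taken from the report:

```python
# Weights from the gpu partition above: CPU=1.0, Mem=0.15, GRES/gpu=2.0.
# Assumption: Mem weight applied per GB, purely for illustration.
weights_gpu_partition = {"cpu": 1.0, "mem_gb": 0.15, "gpu": 2.0}

def billing(job, weights):
    """Sum each requested TRES multiplied by its partition's billing weight."""
    return sum(job[tres] * w for tres, w in weights.items())

# Hypothetical job: 4 cores, 16 GB of memory, 1 GPU.
job = {"cpu": 4, "mem_gb": 16, "gpu": 1}
print(round(billing(job, weights_gpu_partition), 2))  # 4*1.0 + 16*0.15 + 1*2.0 = 8.4
```

Note how the cpu and nolim partitions bill CPUs at twice the rate (CPU=2.0) of the GPU partitions, where GPUs carry the extra weight instead.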

SLURM Usage by Partition

PartitionName | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
gpu | 3755 | 168073:17:30 | 965 | 1181 | 0 | 1359 | 0 | 0 | 0 | 75 | 175 | 0 | 0 | 0 | 0 | 0 | 0
cpu | 18749 | 43060:20:48 | 15258 | 1341 | 0 | 1360 | 0 | 0 | 0 | 661 | 129 | 0 | 0 | 0 | 0 | 0 | 0
nolim | 5289 | 5585:31:00 | 5150 | 2 | 0 | 132 | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0
gnolim | 286 | 2373:45:46 | 49 | 27 | 0 | 210 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
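The per-partition figures are internally consistent: every job lands in exactly one state, so a row's state counts sum to its total_jobs, and the four partition totals sum to the 28,079 jobs reported for this query range. A short sanity-check sketch, with values transcribed from the table (only the nonzero state columns are listed):

```python
# partition: (total_jobs, [completed, cancelled, running, failed, timeout, out_of_memory])
rows = {
    "gpu":    (3755,  [965, 1181, 0, 1359, 75, 175]),
    "cpu":    (18749, [15258, 1341, 0, 1360, 661, 129]),
    "nolim":  (5289,  [5150, 2, 0, 132, 0, 5]),
    "gnolim": (286,   [49, 27, 0, 210, 0, 0]),
}

for name, (total, states) in rows.items():
    assert total == sum(states), name  # each job is counted in exactly one state

print(sum(total for total, _ in rows.values()))  # 28079, the query-range job total
```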

SLURM Usage by Advisor Group

  • slurm-cs-undefined, users that have CS accounts but are not CS students
  • slurm-cs-unassigned, users that are CS students but do not have a listed CS advisor
GroupName | total_jobs | cputime(HH:MM:SS) | cpu | gpu | nolim | gnolim | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
slurm-cs-lu-feng | 4994 | 57886:16:08 | 2199 | 2565 | 0 | 230 | 2344 | 1048 | 0 | 1424 | 0 | 0 | 0 | 33 | 145 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yenling-kuo | 3642 | 55163:01:38 | 3575 | 67 | 0 | 0 | 3550 | 72 | 0 | 19 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yu-meng | 7 | 19396:28:32 | 0 | 7 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-undefined | 1387 | 13949:07:44 | 1033 | 354 | 0 | 0 | 1001 | 91 | 0 | 137 | 0 | 0 | 0 | 70 | 88 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yangfeng-ji | 272 | 13931:00:18 | 4 | 268 | 0 | 0 | 96 | 49 | 0 | 114 | 0 | 0 | 0 | 5 | 8 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-kevin-skadron | 71 | 12162:29:14 | 22 | 48 | 0 | 1 | 23 | 12 | 0 | 25 | 0 | 0 | 0 | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-tianhao-wang | 4660 | 10439:35:48 | 4587 | 73 | 0 | 0 | 3009 | 1007 | 0 | 455 | 0 | 0 | 0 | 139 | 50 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-adwait-jog | 33 | 8318:26:56 | 0 | 33 | 0 | 0 | 18 | 9 | 0 | 3 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-henry-kautz | 5293 | 6358:55:16 | 0 | 4 | 5289 | 0 | 5150 | 3 | 0 | 133 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-ashish-venkat | 4904 | 6032:16:44 | 4904 | 0 | 0 | 0 | 3892 | 194 | 0 | 652 | 0 | 0 | 0 | 166 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-madhur-behl | 22 | 4637:19:18 | 12 | 10 | 0 | 0 | 14 | 4 | 0 | 2 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-samira-khan | 2267 | 3093:59:44 | 2267 | 0 | 0 | 0 | 1973 | 0 | 0 | 16 | 0 | 0 | 0 | 278 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-chen-yu-wei | 34 | 2355:34:00 | 0 | 34 | 0 | 0 | 22 | 6 | 0 | 5 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-unassigned | 74 | 2027:40:32 | 39 | 35 | 0 | 0 | 55 | 10 | 1 | 1 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-wajih-hassan | 43 | 1900:47:28 | 0 | 43 | 0 | 0 | 10 | 22 | 0 | 1 | 0 | 0 | 0 | 8 | 2 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-miaomiao-zhang | 42 | 628:16:00 | 17 | 25 | 0 | 0 | 20 | 10 | 0 | 9 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-shangtong-zhang | 170 | 537:54:40 | 55 | 60 | 0 | 55 | 133 | 0 | 0 | 37 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-seongkook-heo | 34 | 150:47:24 | 34 | 0 | 0 | 0 | 16 | 16 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-charles-reiss | 62 | 61:45:18 | 0 | 62 | 0 | 0 | 44 | 1 | 0 | 7 | 0 | 0 | 0 | 3 | 7 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-raymond-pettit | 2 | 45:37:20 | 0 | 2 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-angela-orebaugh | 53 | 07:46:44 | 0 | 53 | 0 | 0 | 45 | 2 | 0 | 5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-xiaozhu-lin | 2 | 03:01:38 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-aidong-zhang | 5 | 02:46:56 | 0 | 5 | 0 | 0 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-thomas-horton | 1 | 01:30:34 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-derrick-stone | 4 | 00:27:40 | 0 | 4 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-ferdinando-fioretto | 1 | 00:01:30 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

SLURM Usage by NodeName

Nodename | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
cheetah04 | 63 | 33212:53:44 | 17 | 32 | 0 | 10 | 0 | 0 | 0 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0
serval08 | 88 | 15530:13:58 | 25 | 21 | 0 | 32 | 0 | 0 | 0 | 1 | 9 | 0 | 0 | 0 | 0 | 0 | 0
serval09 | 162 | 15493:01:38 | 56 | 21 | 0 | 79 | 0 | 0 | 0 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0
jaguar01 | 52 | 12808:04:46 | 25 | 12 | 0 | 12 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval06 | 87 | 9926:02:42 | 31 | 28 | 0 | 24 | 0 | 0 | 0 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0
hydro | 12 | 8941:47:00 | 0 | 4 | 0 | 1 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval07 | 82 | 8369:16:46 | 34 | 29 | 0 | 14 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 0 | 0
lotus | 134 | 8290:20:04 | 32 | 33 | 0 | 54 | 0 | 0 | 0 | 5 | 10 | 0 | 0 | 0 | 0 | 0 | 0
cheetah01 | 145 | 6831:52:12 | 42 | 64 | 0 | 28 | 0 | 0 | 0 | 6 | 5 | 0 | 0 | 0 | 0 | 0 | 0
adriatic01 | 240 | 6051:51:00 | 27 | 122 | 0 | 72 | 0 | 0 | 0 | 4 | 15 | 0 | 0 | 0 | 0 | 0 | 0
jaguar03 | 247 | 5763:32:00 | 87 | 59 | 0 | 99 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar02 | 239 | 5108:03:14 | 113 | 39 | 0 | 83 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic03 | 140 | 4637:53:48 | 14 | 43 | 0 | 65 | 0 | 0 | 0 | 4 | 14 | 0 | 0 | 0 | 0 | 0 | 0
adriatic06 | 70 | 4576:07:34 | 10 | 24 | 0 | 30 | 0 | 0 | 0 | 4 | 2 | 0 | 0 | 0 | 0 | 0 | 0
puma01 | 2634 | 4525:55:40 | 2240 | 121 | 0 | 157 | 0 | 0 | 0 | 67 | 49 | 0 | 0 | 0 | 0 | 0 | 0
adriatic05 | 86 | 4515:30:50 | 13 | 25 | 0 | 42 | 0 | 0 | 0 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0
adriatic04 | 103 | 4477:43:10 | 9 | 31 | 0 | 53 | 0 | 0 | 0 | 4 | 6 | 0 | 0 | 0 | 0 | 0 | 0
bigcat01 | 1452 | 3853:03:12 | 1129 | 94 | 0 | 197 | 0 | 0 | 0 | 32 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah08 | 236 | 3355:20:56 | 78 | 92 | 0 | 54 | 0 | 0 | 0 | 1 | 11 | 0 | 0 | 0 | 0 | 0 | 0
cheetah09 | 190 | 3017:00:06 | 41 | 83 | 0 | 54 | 0 | 0 | 0 | 0 | 12 | 0 | 0 | 0 | 0 | 0 | 0
jaguar06 | 89 | 2668:53:02 | 49 | 30 | 0 | 7 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic02 | 159 | 2120:42:22 | 16 | 48 | 0 | 80 | 0 | 0 | 0 | 0 | 15 | 0 | 0 | 0 | 0 | 0 | 0
cheetah02 | 116 | 2080:36:56 | 37 | 38 | 0 | 19 | 0 | 0 | 0 | 6 | 16 | 0 | 0 | 0 | 0 | 0 | 0
bigcat02 | 857 | 1936:48:08 | 743 | 15 | 0 | 80 | 0 | 0 | 0 | 19 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar05 | 113 | 1911:03:06 | 29 | 41 | 0 | 40 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato04 | 416 | 1910:05:12 | 308 | 64 | 0 | 36 | 0 | 0 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0
panther01 | 1107 | 1891:38:28 | 921 | 57 | 0 | 94 | 0 | 0 | 0 | 25 | 10 | 0 | 0 | 0 | 0 | 0 | 0
bigcat03 | 295 | 1829:39:14 | 249 | 8 | 0 | 28 | 0 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai06 | 218 | 1733:08:34 | 55 | 100 | 0 | 51 | 0 | 0 | 0 | 3 | 9 | 0 | 0 | 0 | 0 | 0 | 0
nekomata01 | 160 | 1518:58:20 | 35 | 22 | 0 | 84 | 0 | 0 | 0 | 1 | 18 | 0 | 0 | 0 | 0 | 0 | 0
slurm1 | 2331 | 1427:26:04 | 2296 | 1 | 0 | 34 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm5 | 1151 | 1418:40:06 | 1129 | 0 | 0 | 21 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct03 | 767 | 1398:06:54 | 600 | 73 | 0 | 50 | 0 | 0 | 0 | 38 | 6 | 0 | 0 | 0 | 0 | 0 | 0
struct02 | 763 | 1363:56:22 | 613 | 64 | 0 | 49 | 0 | 0 | 0 | 31 | 6 | 0 | 0 | 0 | 0 | 0 | 0
lynx10 | 103 | 1325:02:12 | 14 | 49 | 0 | 30 | 0 | 0 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0
slurm2 | 870 | 1305:36:06 | 837 | 0 | 0 | 33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct01 | 877 | 1208:56:10 | 747 | 49 | 0 | 50 | 0 | 0 | 0 | 25 | 6 | 0 | 0 | 0 | 0 | 0 | 0
heartpiece | 672 | 1203:38:34 | 636 | 0 | 0 | 32 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato02 | 523 | 1176:45:02 | 449 | 28 | 0 | 21 | 0 | 0 | 0 | 25 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct07 | 731 | 1166:30:04 | 609 | 39 | 0 | 51 | 0 | 0 | 0 | 31 | 1 | 0 | 0 | 0 | 0 | 0 | 0
struct06 | 725 | 1124:12:44 | 607 | 39 | 0 | 51 | 0 | 0 | 0 | 24 | 4 | 0 | 0 | 0 | 0 | 0 | 0
struct08 | 710 | 1115:52:48 | 598 | 27 | 0 | 58 | 0 | 0 | 0 | 25 | 2 | 0 | 0 | 0 | 0 | 0 | 0
struct09 | 715 | 1091:04:36 | 606 | 34 | 0 | 52 | 0 | 0 | 0 | 22 | 1 | 0 | 0 | 0 | 0 | 0 | 0
serval03 | 64 | 1047:30:08 | 26 | 10 | 0 | 26 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct04 | 688 | 1040:49:30 | 548 | 66 | 0 | 42 | 0 | 0 | 0 | 25 | 7 | 0 | 0 | 0 | 0 | 0 | 0
struct05 | 722 | 1029:35:40 | 597 | 40 | 0 | 54 | 0 | 0 | 0 | 28 | 3 | 0 | 0 | 0 | 0 | 0 | 0
lynx11 | 104 | 1019:00:38 | 11 | 38 | 0 | 46 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0
affogato01 | 770 | 1017:08:20 | 597 | 114 | 0 | 27 | 0 | 0 | 0 | 30 | 2 | 0 | 0 | 0 | 0 | 0 | 0
lynx08 | 715 | 873:00:50 | 595 | 41 | 0 | 36 | 0 | 0 | 0 | 29 | 14 | 0 | 0 | 0 | 0 | 0 | 0
affogato05 | 425 | 856:13:12 | 276 | 63 | 0 | 68 | 0 | 0 | 0 | 17 | 1 | 0 | 0 | 0 | 0 | 0 | 0
bigcat05 | 108 | 654:57:56 | 78 | 0 | 0 | 1 | 0 | 0 | 0 | 29 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx09 | 737 | 604:23:28 | 632 | 38 | 0 | 29 | 0 | 0 | 0 | 22 | 16 | 0 | 0 | 0 | 0 | 0 | 0
bigcat06 | 122 | 549:14:04 | 108 | 0 | 0 | 0 | 0 | 0 | 0 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai05 | 84 | 482:10:32 | 14 | 5 | 0 | 65 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai08 | 47 | 417:22:24 | 6 | 6 | 0 | 35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jinx02 | 12 | 406:40:48 | 4 | 6 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato03 | 417 | 398:28:58 | 316 | 49 | 0 | 28 | 0 | 0 | 0 | 23 | 1 | 0 | 0 | 0 | 0 | 0 | 0
ai10 | 29 | 373:14:06 | 0 | 0 | 0 | 29 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat04 | 80 | 366:26:28 | 61 | 3 | 0 | 0 | 0 | 0 | 0 | 16 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai07 | 48 | 306:14:12 | 10 | 4 | 0 | 34 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah03 | 53 | 289:05:48 | 25 | 16 | 0 | 4 | 0 | 0 | 0 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0
affogato10 | 244 | 281:35:04 | 184 | 26 | 0 | 20 | 0 | 0 | 0 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato09 | 257 | 251:10:20 | 198 | 30 | 0 | 20 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0
epona | 265 | 230:10:10 | 252 | 10 | 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato08 | 286 | 214:08:42 | 213 | 42 | 0 | 20 | 0 | 0 | 0 | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato06 | 310 | 209:34:24 | 222 | 61 | 0 | 22 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jinx01 | 14 | 201:44:08 | 8 | 2 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai09 | 40 | 184:14:20 | 4 | 4 | 0 | 32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato07 | 282 | 157:50:26 | 213 | 51 | 0 | 18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato11 | 30 | 107:02:24 | 6 | 8 | 0 | 16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai03 | 7 | 67:30:24 | 0 | 4 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai04 | 7 | 67:30:24 | 0 | 4 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai02 | 7 | 67:30:20 | 0 | 4 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai01 | 6 | 64:50:12 | 0 | 3 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato13 | 101 | 29:51:20 | 3 | 3 | 0 | 95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado01 | 1 | 21:21:52 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx01 | 1 | 03:10:24 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx04 | 1 | 02:59:44 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx03 | 2 | 02:13:52 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx05 | 1 | 02:12:48 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx02 | 1 | 01:48:32 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx07 | 1 | 01:21:36 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx06 | 1 | 01:14:08 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato14 | 12 | 01:01:52 | 0 | 2 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato15 | 12 | 01:01:40 | 0 | 2 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx05 | 4 | 00:42:44 | 1 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx03 | 3 | 00:42:32 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx02 | 5 | 00:40:00 | 1 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado02 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado03 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado04 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado05 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado06 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado07 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado08 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado09 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado10 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm3 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm4 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

slurm_report_four_weeks.txt · Last modified: 2025/12/28 17:00 (external edit)