
CS SLURM Cluster Report - 4 weeks

Report generated for jobs run on the CS SLURM cluster from 2025-11-30 through 2025-12-27.

Job total during this query range: 53,082

Job total since August 1st, 2024: 5,248,729

This page is updated every Sunday at 5:00pm EST.


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2025-11-30T00:00:00 - 2025-12-27T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                  Down         PLND Down                    Idle            Planned                 Reported 
--------- -------------- ---------------------- --------------------- ----------------- ----------------------- ------------------ ------------------------ 
       cs            cpu          255596(9.16%)           8207(0.29%)          0(0.00%)         2347774(84.14%)      178708(6.40%)          2790285(99.98%) 
       cs            mem      2305165715(7.80%)      118616818(0.40%)          0(0.00%)     27147085627(91.80%)           0(0.00%)      29570868160(99.98%) 
       cs       gres/gpu          23308(18.32%)           1094(0.86%)          0(0.00%)          102846(80.82%)           0(0.00%)          127248(100.19%) 
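
The percentage columns above appear to be each TRES-hours figure divided by the Reported total for that TRES. The following is a minimal Python sketch of that arithmetic, using the hour values copied from the table; the exact convention and rounding used by the report generator are assumptions inferred from the numbers.

# Recompute the percentage columns as (TRES-hours / Reported) * 100.
reported  = {"cpu": 2790285, "mem": 29570868160, "gres/gpu": 127248}
allocated = {"cpu": 255596,  "mem": 2305165715,  "gres/gpu": 23308}
idle      = {"cpu": 2347774, "mem": 27147085627, "gres/gpu": 102846}

for tres in reported:
    alloc_pct = 100 * allocated[tres] / reported[tres]
    idle_pct  = 100 * idle[tres] / reported[tres]
    print(f"{tres}: allocated {alloc_pct:.2f}%, idle {idle_pct:.2f}%")
    # cpu: allocated 9.16%, idle 84.14%  -- matches the table above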

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1596,mem=17562000M,node=40
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2066,mem=22254000M,node=42,gres/gpu=162
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=226,mem=2528000M,node=7
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=256,mem=1626000M,node=10,gres/gpu=27
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
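
The TRESBillingWeights lines above determine how a job's allocation is converted into billing units. Below is a hedged Python sketch of that calculation, assuming SLURM's default behavior of summing the weighted TRES amounts (i.e., PriorityFlags=MAX_TRES is not set) and expressing memory in MB to match the TRES= lines above; the example job is hypothetical.

# Sum of weight * allocated amount for each billed TRES, per partition.
WEIGHTS = {
    "cpu":    {"CPU": 2.0, "Mem": 0.15},
    "gpu":    {"CPU": 1.0, "Mem": 0.15, "GRES/gpu": 2.0},
    "nolim":  {"CPU": 2.0, "Mem": 0.15},
    "gnolim": {"CPU": 1.0, "Mem": 0.15, "GRES/gpu": 2.0},
}

def billing(partition: str, cpus: int, mem_mb: int, gpus: int = 0) -> float:
    """Weighted sum of a job's allocated TRES under the partition's weights."""
    w = WEIGHTS[partition]
    total = w.get("CPU", 0.0) * cpus + w.get("Mem", 0.0) * mem_mb
    total += w.get("GRES/gpu", 0.0) * gpus
    return total

# Hypothetical 4-CPU / 16000 MB / 1-GPU job on the gpu partition:
print(billing("gpu", cpus=4, mem_mb=16000, gpus=1))  # 4*1.0 + 16000*0.15 + 1*2.0 = 2406.0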

SLURM Usage by Partition

PartitionName | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
gpu | 3884 | 189083:58:52 | 1142 | 1140 | 0 | 1358 | 0 | 0 | 0 | 73 | 171 | 0 | 0 | 0 | 0 | 0 | 0
cpu | 38115 | 46488:17:32 | 24936 | 5409 | 0 | 6711 | 0 | 0 | 0 | 933 | 126 | 0 | 0 | 0 | 0 | 0 | 0
nolim | 10797 | 7975:00:16 | 7161 | 3 | 0 | 3628 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0
gnolim | 286 | 2001:21:08 | 50 | 27 | 0 | 209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
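
A table like the one above can be assembled from SLURM accounting data. The sketch below shows one assumed way to do so with sacct and Python; the actual command and options used to generate this report are not shown in the report itself.

# Aggregate per-partition job counts by state and total CPU time with sacct.
import subprocess
from collections import Counter, defaultdict

cmd = [
    "sacct", "-a", "-X", "-n", "-P",
    "-S", "2025-11-30T00:00:00", "-E", "2025-12-27T23:59:59",
    "--format=Partition,State,CPUTimeRAW",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

jobs = defaultdict(Counter)   # partition -> state -> job count
cpu_seconds = Counter()       # partition -> total CPU seconds

for line in out.splitlines():
    if not line.strip():
        continue
    partition, state, cputime_raw = line.split("|")[:3]
    # Normalize states such as "CANCELLED by 12345" to a single word.
    state = state.split()[0].lower() if state else "unknown"
    jobs[partition][state] += 1
    cpu_seconds[partition] += int(cputime_raw or 0)

for partition, states in jobs.items():
    secs = cpu_seconds[partition]
    hhmmss = f"{secs // 3600}:{secs % 3600 // 60:02d}:{secs % 60:02d}"
    print(partition, sum(states.values()), hhmmss, dict(states))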

SLURM Usage by Advisor Group

GroupName | total_jobs | cputime(HH:MM:SS) | cpu | gpu | nolim | gnolim | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
slurm-cs-lu-feng500569501:24:20199527790231242010120139500033145000000
slurm-cs-yenling-kuo319453699:35:303115790030907902400001000000
slurm-cs-yu-meng1233644:09:0401200030000090000000
slurm-cs-yangfeng-ji24416222:58:102242008743010100058000000
slurm-cs-tianhao-wang585314548:42:225767860034591573063200013950000000
slurm-cs-kevin-skadron256311060:37:282512510015668120400001450000000
slurm-cs-ashish-venkat80679861:08:0880670005860115108260002300000000
slurm-cs-henry-kautz108018748:24:3204107970716140362900070000000
slurm-cs-adwait-jog428323:32:18834002690300040000000
slurm-cs-samira-khan155746913:24:321557400082991798051020003750000000
slurm-cs-undefined12504846:03:4094630400978660750004883000000
slurm-cs-madhur-behl192321:46:46109001240200001000000
slurm-cs-unassigned681824:10:303137005520900020000000
slurm-cs-chen-yu-wei101719:45:0401000910000000000000
slurm-cs-wajih-hassan17997:18:4001700830100050000000
slurm-cs-miaomiao-zhang40612:54:3815250018100900021000000
slurm-cs-shangtong-zhang170537:54:405560055133003700000000000
slurm-cs-seongkook-heo1765:15:16170001340000000000000
slurm-cs-raymond-pettit1254:27:1201200610500000000000
slurm-cs-charles-reiss5627:18:42056003810700037000000
slurm-cs-angela-orebaugh5307:46:44053004520500001000000
slurm-cs-xiaozhu-lin405:12:520400010000030000000
slurm-cs-aidong-zhang502:46:560500200300000000000
slurm-cs-thomas-horton101:30:341000000000010000000
slurm-cs-derrick-stone400:27:400400400000000000000
slurm-cs-ferdinando-fioretto100:01:300100000100000000000

SLURM Usage by NodeName

Nodename | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
cheetah046438594:30:42173201100031000000
serval088515801:41:42231903100039000000
serval0916114653:21:36572207800031000000
serval076213615:31:22222201200042000000
serval068211050:30:44292602300031000000
lotus15110137:20:105132053000510000000
hydro20068954:28:4667326401038000310000000
jaguar01408029:32:0017901100030000000
adriatic012567261:48:4846119072000415000000
cheetah011255952:20:28336002500043000000
adriatic031605808:38:403243067000414000000
adriatic041195487:10:50253105300046000000
adriatic06835438:08:54232403000042000000
adriatic051005403:28:10272504200051000000
jaguar032195186:28:34854708400030000000
jaguar022235148:46:381153506900040000000
serval03704788:05:12231602800030000000
cheetah082394638:37:529990039000011000000
cheetah092064162:33:565982053000012000000
adriatic021783271:01:503548080000015000000
puma0145863190:16:00333642606960007949000000
bigcat0120363164:09:1414502390301000460000000
jaguar061332825:09:44713902100020000000
bigcat0214782441:52:3011171380186000370000000
cheetah021192298:46:244238018000516000000
bigcat0310082273:45:427071490131000210000000
nekomata011702063:14:164420087000118000000
slurm149112031:27:10339510151500000000000
slurm525051988:23:2216200088400010000000
heartpiece14201935:58:287290068700040000000
ai062081843:06:26608805000037000000
affogato048891629:42:225031420222000220000000
slurm214751627:55:5011530032200000000000
jaguar051091580:35:46283904000020000000
struct0711241550:49:508201150145000431000000
struct0810721539:18:38799990131000412000000
struct0610501507:08:12815118077000364000000
affogato0210791430:19:546781410231000290000000
struct0410831420:51:027461490145000367000000
struct0911611387:43:02788960244000321000000
struct0311491361:43:528091500139000456000000
struct0112921356:31:528961310222000376000000
struct0510981342:30:468081150139000333000000
lynx101031325:02:121449030000010000000
struct0211831303:35:568271370181000326000000
affogato0113701273:24:068662000263000392000000
affogato058761068:56:564971340221000231000000
lynx111041019:00:38113804600009000000
bigcat051124941:11:12710290092000320000000
panther011439886:46:0810031260271000309000000
bigcat061159873:01:54702345095000170000000
bigcat04890756:30:04545234085000260000000
lynx081130746:52:346717703410002813000000
lynx091161735:54:287239003130002015000000
affogato03677507:17:20451990101000251000000
ai0585482:23:1615506500000000000
ai0847417:22:246603500000000000
jinx0212406:40:48460200000000000
affogato10435398:21:5833445039000170000000
epona485391:15:262641022000000000000
ai0411383:53:48440300000000000
cheetah0355383:24:3227160400008000000
ai0312376:04:08540300000000000
affogato09438372:49:1832765035000110000000
ai0240359:35:044403200000000000
affogato06545342:43:5835010308400080000000
affogato08498333:28:5235882046000120000000
ai0748306:14:1210403400000000000
cortado01279300:36:00113125023000180000000
affogato07500288:13:143388607300030000000
cortado03542227:10:3426719407400070000000
cortado02564217:41:242691690112000140000000
cortado04493213:25:0629713905400030000000
jinx0114201:44:08820400000000000
ai0940184:14:204403200000000000
cortado05663144:22:36325177016100000000000
affogato1131111:33:226801600010000000
ai01664:50:12030300000000000
affogato1310129:51:203309500000000000
cortado062303:23:224190000000000000
lynx01103:10:24100000000000000
lynx04102:59:44100000000000000
lynx03202:13:52200000000000000
lynx05102:12:48100000000000000
lynx02101:48:32100000000000000
lynx07101:21:36100000000000000
cortado071401:18:501400000000000000
lynx06101:14:08100000000000000
affogato141201:01:520201000000000000
affogato151201:01:400201000000000000
titanx05400:42:44100300000000000
titanx03300:42:32100200000000000
titanx02500:40:00100400000000000
ai102800:36:440002800000000000
cortado08000:00:00000000000000000
cortado09000:00:00000000000000000
cortado10000:00:00000000000000000
slurm3000:00:00000000000000000
slurm4000:00:00000000000000000