
CS SLURM Cluster Report - 1 week

Report generated for jobs run on the CS SLURM cluster from 2026-01-04 through 2026-01-10.

Job total during this query range: 22,690

Job total since August 1st 2024: 5,298,103

This page is updated every Sunday at 5:00pm EST.


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2026-01-04T00:00:00 - 2026-01-10T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                  Down         PLND Down                    Idle            Planned                Reported 
--------- -------------- ---------------------- --------------------- ----------------- ----------------------- ------------------ ----------------------- 
       cs            cpu           49771(7.15%)          10235(1.47%)          0(0.00%)          474986(68.23%)     161200(23.15%)         696192(100.00%) 
       cs            mem       507827978(6.87%)       80583716(1.09%)          0(0.00%)      6798548306(92.03%)           0(0.00%)     7386960000(100.00%) 
       cs       gres/gpu           3776(11.89%)           1219(3.84%)          0(0.00%)           26758(84.27%)           0(0.00%)          31752(100.00%) 
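The percentage columns in the sreport block above are simply each TRES-hour figure divided by the Reported total. A minimal sketch recomputing them for the cpu row (the dictionary below just restates the numbers printed above; the helper is illustrative, not part of the report pipeline):

```python
# Recompute the percentage columns of the sreport block from the raw
# TRES-hour figures in the cpu row above (49771 allocated, 10235 down,
# 474986 idle, 161200 planned, 696192 reported).
cpu_row = {"Allocated": 49771, "Down": 10235, "PLND Down": 0,
           "Idle": 474986, "Planned": 161200, "Reported": 696192}

def pct(hours: int, reported: int) -> float:
    """Share of the total reported TRES hours, as a percentage."""
    return round(100 * hours / reported, 2)

shares = {name: pct(h, cpu_row["Reported"]) for name, h in cpu_row.items()}
# Allocated -> 7.15, Down -> 1.47, Idle -> 68.23, Planned -> 23.15
```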

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1596,mem=17562000M,node=40
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2066,mem=22254000M,node=42,gres/gpu=162
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=226,mem=2528000M,node=7
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=256,mem=1626000M,node=10,gres/gpu=27
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
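The TRESBillingWeights lines above determine how a job's "billing" TRES is computed: by default Slurm takes the weighted sum of the job's allocated TRES (with PriorityFlags=MAX_TRES it takes the maximum weighted TRES instead). A sketch under assumptions: the weights come from PartitionName=gpu above, the job size is hypothetical, and treating the Mem weight as per-gigabyte is a guess, since in slurm.conf the memory unit is set by a suffix on the weight itself:

```python
# Sketch of Slurm's default "billing" TRES: the weighted sum of a job's
# allocated TRES, using the TRESBillingWeights of PartitionName=gpu above.
# ASSUMPTION: the Mem weight is applied per GB here for illustration only.
GPU_PARTITION_WEIGHTS = {"cpu": 1.0, "mem_gb": 0.15, "gres/gpu": 2.0}

def billing(cpus: int, mem_gb: float, gpus: int,
            w: dict = GPU_PARTITION_WEIGHTS) -> float:
    """Weighted-sum billing for a hypothetical job allocation."""
    return w["cpu"] * cpus + w["mem_gb"] * mem_gb + w["gres/gpu"] * gpus

# A hypothetical 8-CPU, 64 GB, 2-GPU job:
# 8*1.0 + 64*0.15 + 2*2.0 = 21.6 billing units
```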

SLURM Usage by Partition

Partition    total_jobs   cputime(HH:MM:SS)   completed   cancelled   failed   timeout   out_of_memory
gpu                8727         34866:38:54        8565          94       54         6               8
cpu               11196         11384:55:36        7737         338     1967        34            1120
nolim              2752          7660:29:05        1385         157     1111        99               0
gnolim               15           859:02:48          13           0        2         0               0

(All other state columns - running, preempted, requeued, pending, suspended,
boot_fail, deadline, node_fail, resizing, revoked - are zero for every
partition this week.)
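The four partition job totals above add up to the 22,690 jobs quoted at the top of the report. A small sketch verifying that and deriving a simple per-partition failure rate (counts restated from the table):

```python
# Per-partition job-state counts from the table above; check that the
# partition totals add up to the 22,690 jobs quoted for this week, and
# compute a failed-jobs share per partition.
partitions = {
    "gpu":    {"total": 8727,  "failed": 54},
    "cpu":    {"total": 11196, "failed": 1967},
    "nolim":  {"total": 2752,  "failed": 1111},
    "gnolim": {"total": 15,    "failed": 2},
}

assert sum(p["total"] for p in partitions.values()) == 22690

fail_rate = {name: round(100 * p["failed"] / p["total"], 1)
             for name, p in partitions.items()}
# nolim has the highest failure share at about 40.4%
```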

SLURM Usage by Advisor Group

  • slurm-cs-undefined, users that have CS accounts but are not CS students
  • slurm-cs-unassigned, users that are CS students but do not have a listed CS advisor
GroupName                   total_jobs   cputime(HH:MM:SS)    cpu    gpu   nolim   gnolim   completed   cancelled   failed   timeout   out_of_memory
slurm-cs-henry-kautz              9680         14057:36:01   6928      0    2752        0        5129         408     2892       131            1120
slurm-cs-yenling-kuo                19         10823:43:00      0     19       0        0           1          10        5         0               3
slurm-cs-yu-meng                     3          9240:33:04      0      3       0        0           0           0        0         3               0
slurm-cs-tianhao-wang               45          5374:06:56      0     45       0        0          15          22        5         0               3
slurm-cs-shangtong-zhang            49          4624:55:36     15     19       0       15          41           0        8         0               0
slurm-cs-unassigned               1753          3736:37:12      0   1753       0        0        1752           0        1         0               0
slurm-cs-ashish-venkat            1915          2428:13:02   1915      0       0        0        1812          85       18         0               0
slurm-cs-lu-feng                  9115          1730:20:22   2329   6786       0        0        8879          39      195         0               2
slurm-cs-yangfeng-ji                 1          1046:44:16      0      1       0        0           0           0        0         1               0
slurm-cs-kevin-skadron              10           781:46:22      9      1       0        0           2           2        4         2               0
slurm-cs-undefined                  62           757:59:52      0     62       0        0          43          13        6         0               0
slurm-cs-charles-reiss              38           168:30:40      0     38       0        0          26          10        0         2               0

(The cpu, gpu, nolim and gnolim columns give each group's job count per
partition.  All other state columns - running, preempted, requeued, pending,
suspended, boot_fail, deadline, node_fail, resizing, revoked - are zero for
every group this week.)
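The cputime column uses a cumulative HH:MM:SS form in which the hour field keeps growing rather than rolling over into days. A small illustrative helper (not part of the report pipeline) converting it to fractional hours, applied to the largest group entry above:

```python
# Convert the report's cumulative HH:MM:SS cputime strings to float hours.
def cputime_hours(hms: str) -> float:
    """Parse 'HH:MM:SS' where HH may exceed 24 (no day field)."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h + m / 60 + s / 3600

total = cputime_hours("14057:36:01")   # slurm-cs-henry-kautz, from above
# -> about 14057.6 hours
```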

SLURM Usage by NodeName

Nodename     total_jobs   cputime(HH:MM:SS)   completed   cancelled   failed   timeout   out_of_memory
serval08            217          7708:04:42         206           7        1         2               1
serval09            360          6721:14:18         355           1        3         0               1
serval06            683          6355:16:24         673           6        3         0               1
jaguar01            994          3286:08:28         988           5        0         0               1
heartpiece          782          2936:10:00         242          61      440        39               0
cheetah04           309          2799:06:16         296           6        3         0               4
slurm5              622          2456:49:32         317          34      253        18               0
serval03            139          2217:22:28         138           1        0         0               0
puma01              720          2040:49:50         433          35      143         6             103
slurm1              973          1212:44:09         555          50      332        36               0
lotus                 9          1116:38:00           2           6        1         0               0
bigcat01           1111          1083:58:48         976          32       31         0              72
jaguar06            520           985:33:14         501          14        5         0               0
hydro               305           958:55:32         147           3      123         5              27
serval07            378           780:55:50         357           2       19         0               0
cheetah08           554           767:02:50         550           3        1         0               0
lynx09              258           735:39:30         104           8      108         2              36
epona               213           735:09:12         109          12       86         6               0
lynx08              265           661:11:44         109          10      109         1              36
struct01            446           600:23:10         228          17      122         4              75
struct03            425           589:33:44         209          17      121         3              75
struct02            433           531:08:28         218          18      120         2              75
bigcat04            140           511:55:36         140           0        0         0               0
panther01           511           469:13:58         407          22       46         7              29
bigcat02            787           457:44:20         656           8       20         0             103
ai02                 77           353:56:06          74           2        1         0               0
ai04                 67           347:34:52          67           0        0         0               0
ai03                 62           347:15:38          62           0        0         0               0
ai05                  4           340:58:24           3           0        1         0               0
jinx01                4           339:17:28           4           0        0         0               0
slurm2              162           319:36:12         162           0        0         0               0
struct04            414           312:31:50         247          20      121         1              25
lynx10                2           305:42:40           2           0        0         0               0
bigcat03            152           296:35:56         152           0        0         0               0
bigcat05            141           291:32:46         141           0        0         0               0
bigcat06            139           290:34:54         139           0        0         0               0
struct05            495           206:39:06         284          18      122         1              70
ai06                 51           185:25:50          40           8        1         2               0
jinx02                3           178:08:32           2           0        1         0               0
jaguar02            900           168:41:56         887           8        3         0               2
affogato01          377           144:38:08         325          18       22         0              12
struct07            505           132:44:40         297          18      123         1              66
struct06            478           131:27:12         271          19      121         1              66
struct09            662           117:53:00         356          20      221         0              65
struct08            690           116:25:42         372          16      237         0              65
cheetah01            78           115:38:18          71           7        0         0               0
affogato02          240           110:20:24         209          12       14         0               5
affogato07          187            98:07:20         168           3        4         0              12
affogato10          185            97:48:18         166           3        4         0              12
cheetah02          1004            90:34:42        1000           4        0         0               0
jaguar05            619            82:12:48         615           4        0         0               0
affogato09          183            74:37:06         164           3        4         0              12
affogato06          189            69:47:46         171           2        4         0              12
affogato04          182            66:44:44         165           3        4         0              10
affogato05          186            66:25:14         169           3        4         0              10
affogato08          175            65:45:22         156           3        4         0              12
affogato03          195            48:35:26         140           7       13         0              35
jaguar03            672            48:12:14         666           6        0         0               0
nekomata01          615            41:27:18         604           2        9         0               0
cheetah03           409            40:51:54         407           2        0         0               0
cortado01            18            05:06:02          18           0        0         0               0
affogato11            1            00:26:56           1           0        0         0               0
affogato13            1            00:26:24           1           0        0         0               0
lynx11                1            00:25:52           1           0        0         0               0
affogato14            1            00:22:56           1           0        0         0               0
ai07                  1            00:11:44           1           0        0         0               0
titanx02              1            00:08:56           1           0        0         0               0
titanx05              1            00:08:56           1           0        0         0               0
titanx03              1            00:08:48           1           0        0         0               0

(All other state columns - running, preempted, requeued, pending, suspended,
boot_fail, deadline, node_fail, resizing, revoked - are zero for every node
this week.

The following nodes ran no jobs during this period (0 jobs, 00:00:00 cputime):
adriatic01-adriatic06, affogato15, ai01, ai08-ai10, cheetah09,
cortado02-cortado10, lynx01-lynx07, slurm3, slurm4.)

slurm_report_one_week.txt · Last modified: 2026/01/18 17:00 by 127.0.0.1