
CS SLURM Cluster Report - 1 week

Report generated for jobs run on the CS SLURM cluster from 2026-02-08 through 2026-02-14.

Job total during this query range: 48,602

Job total since August 1st 2024: 5,429,688

This page is updated every Sunday at 5:00pm EST.
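
The job totals above come out of the scheduler's accounting database. A count over the same window can be pulled with sacct; this is a sketch of one way to do it, not necessarily the exact command behind this page:

  # Count job records (one line per job, steps excluded) in the query range.
  sacct --allusers --allocations --noheader \
        --starttime=2026-02-08T00:00:00 --endtime=2026-02-14T23:59:59 \
        --format=JobID | wc -l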


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2026-02-08T00:00:00 - 2026-02-14T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                   Down         PLND Down                    Idle            Planned                Reported 
--------- -------------- ---------------------- ---------------------- ----------------- ----------------------- ------------------ ----------------------- 
       cs            cpu         162912(23.40%)           27796(3.99%)          0(0.00%)          358375(51.48%)     147110(21.13%)         696192(100.00%) 
       cs            mem      969137411(13.12%)       338585207(4.58%)        142(0.00%)      6079237240(82.30%)           0(0.00%)     7386960000(100.00%) 
       cs       gres/gpu           9170(28.88%)             659(2.08%)          0(0.00%)           21923(69.04%)           0(0.00%)          31752(100.00%) 
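
The utilization block above is standard sreport output. A command along the following lines reproduces it; the exact invocation used for this page is an assumption:

  # TRES-hours and percent of total per TRES for the report window.
  sreport cluster utilization Clusters=cs Start=2026-02-08T00:00:00 \
          End=2026-02-14T23:59:59 -t hourper --tres=cpu,mem,gres/gpu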

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1534,mem=17306000M,node=39
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2036,mem=22190000M,node=41,gres/gpu=158
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=220,mem=2464000M,node=6
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=256,mem=1626000M,node=10,gres/gpu=27
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
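
The TRESBillingWeights above determine how much each job is "billed" for fair-share and TRES-limit purposes: each allocated TRES is multiplied by its weight and, under Slurm's default behavior, the weighted terms are summed (with PriorityFlags=MAX_TRES only the largest single term would count). As a hypothetical example, a gpu-partition job allocating 8 CPUs and 2 GPUs, ignoring the memory term (whose contribution depends on the unit the 0.15 weight is applied to), would bill roughly as:

  billing = 8 * 1.0 (CPU) + 2 * 2.0 (GRES/gpu) = 12, plus the weighted memory term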

SLURM Usage by Partition

PartitionName | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
gpu | 6063 | 65899:08:56 | 5473 | 233 | 0 | 302 | 0 | 0 | 0 | 51 | 4 | 0 | 0 | 0 | 0 | 0 | 0
cpu | 42517 | 50592:45:42 | 40999 | 758 | 0 | 281 | 0 | 0 | 0 | 197 | 282 | 0 | 0 | 0 | 0 | 0 | 0
nolim | 20 | 4257:08:16 | 1 | 0 | 0 | 19 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
gnolim | 2 | 69:55:52 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0
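
Per-partition state counts like these can be tallied from the accounting records; a rough sketch (the report script itself may differ):

  # One line per job: partition and final state, then count each pair.
  sacct --allusers --allocations --noheader --parsable2 \
        --starttime=2026-02-08T00:00:00 --endtime=2026-02-14T23:59:59 \
        --format=Partition,State | sort | uniq -c

Note that sacct reports cancelled jobs as "CANCELLED by <uid>", so some normalization of the State field is needed to match the buckets above.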

SLURM Usage by Advisor Group

  • slurm-cs-undefined, users that have CS accounts but are not CS students
  • slurm-cs-unassigned, users that are CS students but do not have a listed CS advisor
GroupName | total_jobs | cputime(HH:MM:SS) | cpu | gpu | nolim | gnolim | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
slurm-cs-undefined | 931 | 43257:41:24 | 876 | 35 | 20 | 0 | 240 | 404 | 0 | 287 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-madhur-behl | 267 | 21604:18:20 | 2 | 265 | 0 | 0 | 108 | 33 | 0 | 82 | 0 | 0 | 0 | 44 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-lu-feng | 32603 | 21246:59:20 | 27006 | 5595 | 0 | 2 | 32289 | 176 | 0 | 132 | 0 | 0 | 0 | 2 | 4 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-ashish-venkat | 14040 | 14222:35:26 | 14040 | 0 | 0 | 0 | 13252 | 314 | 0 | 0 | 0 | 0 | 0 | 196 | 278 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yen-ling-kuo | 385 | 11651:14:04 | 384 | 1 | 0 | 0 | 351 | 31 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-mark-floryan | 1 | 2887:41:30 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-kevin-skadron | 63 | 2559:14:00 | 33 | 30 | 0 | 0 | 56 | 5 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-tianhao-wang | 20 | 1176:15:12 | 0 | 20 | 0 | 0 | 5 | 9 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-zezhou-cheng | 2 | 625:42:00 | 0 | 2 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-unassigned | 3 | 455:01:20 | 0 | 3 | 0 | 0 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-wei-kai-lin | 65 | 357:26:50 | 9 | 56 | 0 | 0 | 24 | 8 | 0 | 31 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-ferdinando-fioretto | 15 | 268:45:32 | 0 | 15 | 0 | 0 | 12 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-henry-kautz | 150 | 241:53:22 | 150 | 0 | 0 | 0 | 120 | 0 | 0 | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-rich-nguyen | 19 | 122:36:16 | 0 | 19 | 0 | 0 | 5 | 0 | 0 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-hyojoon-kim | 18 | 74:08:44 | 0 | 18 | 0 | 0 | 1 | 9 | 0 | 3 | 0 | 0 | 0 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-chen-chen | 10 | 58:09:18 | 10 | 0 | 0 | 0 | 4 | 0 | 0 | 5 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yue-cheng | 1 | 06:46:40 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-shangtong-zhang | 1 | 01:20:44 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-matheus-xavier-ferreira | 6 | 01:07:44 | 6 | 0 | 0 | 0 | 2 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yu-meng | 2 | 00:01:00 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
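
The slurm-cs-* group names correspond to advisor-based groupings of users. Assuming they exist as Slurm accounts, the user-to-group mapping can be listed from the accounting database (a sketch; if the groups are actually Unix groups, getent group would be the lookup instead):

  # User-to-account associations, used to roll per-user job records up into groups.
  sacctmgr show associations format=Account%32,User%16 --parsable2 --noheader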

SLURM Usage by NodeName

Nodename | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
jaguar01 | 27 | 5768:46:12 | 7 | 6 | 0 | 12 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar03 | 108 | 5527:41:16 | 74 | 26 | 0 | 7 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval07 | 7 | 5176:58:32 | 0 | 3 | 0 | 1 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval09 | 48 | 4802:40:54 | 37 | 5 | 0 | 4 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval06 | 8 | 4679:09:58 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
puma01 | 2709 | 3411:06:24 | 2542 | 120 | 0 | 7 | 0 | 0 | 0 | 15 | 25 | 0 | 0 | 0 | 0 | 0 | 0
jaguar06 | 27 | 3078:38:28 | 7 | 6 | 0 | 10 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat01 | 5833 | 3048:23:54 | 5678 | 100 | 0 | 0 | 0 | 0 | 0 | 27 | 28 | 0 | 0 | 0 | 0 | 0 | 0
lotus | 72 | 2748:28:48 | 6 | 22 | 0 | 35 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah08 | 621 | 2747:35:18 | 554 | 17 | 0 | 50 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah01 | 50 | 2296:19:34 | 18 | 19 | 0 | 12 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat02 | 5216 | 2130:13:22 | 5157 | 26 | 0 | 5 | 0 | 0 | 0 | 14 | 14 | 0 | 0 | 0 | 0 | 0 | 0
adriatic03 | 12 | 2076:08:32 | 0 | 1 | 0 | 8 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic02 | 13 | 2056:20:12 | 1 | 1 | 0 | 8 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct07 | 477 | 2044:25:02 | 451 | 11 | 0 | 8 | 0 | 0 | 0 | 1 | 6 | 0 | 0 | 0 | 0 | 0 | 0
lynx08 | 211 | 2019:38:18 | 182 | 9 | 0 | 11 | 0 | 0 | 0 | 5 | 4 | 0 | 0 | 0 | 0 | 0 | 0
adriatic01 | 14 | 1968:55:20 | 2 | 1 | 0 | 8 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah02 | 338 | 1825:35:08 | 318 | 14 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
heartpiece | 2 | 1789:46:08 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah09 | 778 | 1780:08:44 | 769 | 6 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato13 | 2 | 1741:15:28 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato11 | 3 | 1733:10:14 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato14 | 2 | 1668:34:56 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval08 | 116 | 1594:34:44 | 99 | 11 | 0 | 4 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato01 | 1123 | 1521:17:16 | 1074 | 22 | 0 | 9 | 0 | 0 | 0 | 8 | 10 | 0 | 0 | 0 | 0 | 0 | 0
affogato04 | 933 | 1437:58:24 | 887 | 23 | 0 | 10 | 0 | 0 | 0 | 5 | 8 | 0 | 0 | 0 | 0 | 0 | 0
affogato05 | 889 | 1436:35:18 | 850 | 17 | 0 | 10 | 0 | 0 | 0 | 7 | 5 | 0 | 0 | 0 | 0 | 0 | 0
jaguar02 | 1792 | 1399:25:32 | 1745 | 21 | 0 | 21 | 0 | 0 | 0 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0
cortado02 | 466 | 1336:58:12 | 448 | 3 | 0 | 0 | 0 | 0 | 0 | 7 | 8 | 0 | 0 | 0 | 0 | 0 | 0
affogato08 | 672 | 1315:53:34 | 652 | 7 | 0 | 3 | 0 | 0 | 0 | 6 | 4 | 0 | 0 | 0 | 0 | 0 | 0
affogato06 | 806 | 1310:22:18 | 784 | 4 | 0 | 4 | 0 | 0 | 0 | 6 | 8 | 0 | 0 | 0 | 0 | 0 | 0
cortado01 | 532 | 1307:14:08 | 486 | 30 | 0 | 0 | 0 | 0 | 0 | 9 | 7 | 0 | 0 | 0 | 0 | 0 | 0
cheetah03 | 379 | 1288:09:58 | 356 | 16 | 0 | 5 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado03 | 422 | 1261:33:48 | 395 | 6 | 0 | 0 | 0 | 0 | 0 | 7 | 14 | 0 | 0 | 0 | 0 | 0 | 0
struct06 | 735 | 1234:20:00 | 714 | 14 | 0 | 1 | 0 | 0 | 0 | 2 | 4 | 0 | 0 | 0 | 0 | 0 | 0
struct09 | 754 | 1208:15:22 | 712 | 22 | 0 | 14 | 0 | 0 | 0 | 1 | 5 | 0 | 0 | 0 | 0 | 0 | 0
cortado04 | 324 | 1183:00:52 | 305 | 7 | 0 | 1 | 0 | 0 | 0 | 4 | 7 | 0 | 0 | 0 | 0 | 0 | 0
cortado07 | 297 | 1176:42:24 | 282 | 7 | 0 | 0 | 0 | 0 | 0 | 4 | 4 | 0 | 0 | 0 | 0 | 0 | 0
struct01 | 824 | 1176:09:32 | 785 | 17 | 0 | 14 | 0 | 0 | 0 | 3 | 5 | 0 | 0 | 0 | 0 | 0 | 0
struct05 | 696 | 1168:29:52 | 668 | 18 | 0 | 5 | 0 | 0 | 0 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0
cortado05 | 327 | 1168:05:58 | 316 | 7 | 0 | 1 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0
struct08 | 412 | 1164:27:40 | 388 | 13 | 0 | 5 | 0 | 0 | 0 | 1 | 5 | 0 | 0 | 0 | 0 | 0 | 0
cortado08 | 276 | 1160:08:54 | 260 | 5 | 0 | 0 | 0 | 0 | 0 | 4 | 7 | 0 | 0 | 0 | 0 | 0 | 0
cortado06 | 270 | 1159:46:52 | 260 | 5 | 0 | 0 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 0 | 0
struct04 | 655 | 1148:27:46 | 626 | 18 | 0 | 6 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 0 | 0
lynx09 | 266 | 1147:35:26 | 221 | 15 | 0 | 20 | 0 | 0 | 0 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0
adriatic05 | 11 | 1143:49:04 | 1 | 1 | 0 | 6 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct03 | 765 | 1139:58:36 | 725 | 17 | 0 | 15 | 0 | 0 | 0 | 2 | 6 | 0 | 0 | 0 | 0 | 0 | 0
adriatic04 | 11 | 1138:10:08 | 0 | 1 | 0 | 7 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct02 | 775 | 1128:12:14 | 735 | 15 | 0 | 13 | 0 | 0 | 0 | 6 | 6 | 0 | 0 | 0 | 0 | 0 | 0
cortado09 | 277 | 1124:59:44 | 256 | 6 | 0 | 1 | 0 | 0 | 0 | 3 | 11 | 0 | 0 | 0 | 0 | 0 | 0
cortado10 | 329 | 1123:29:24 | 319 | 6 | 0 | 0 | 0 | 0 | 0 | 1 | 3 | 0 | 0 | 0 | 0 | 0 | 0
panther01 | 1531 | 1004:50:14 | 1490 | 20 | 0 | 8 | 0 | 0 | 0 | 8 | 5 | 0 | 0 | 0 | 0 | 0 | 0
adriatic06 | 20 | 909:27:08 | 1 | 5 | 0 | 11 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0
ai06 | 59 | 909:02:16 | 59 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato15 | 1 | 907:08:00 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai04 | 58 | 901:47:46 | 57 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx10 | 1 | 899:26:08 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai03 | 60 | 897:37:30 | 59 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai02 | 60 | 895:06:18 | 59 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat03 | 2726 | 884:58:50 | 2701 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 23 | 0 | 0 | 0 | 0 | 0 | 0
slurm1 | 1 | 858:01:36 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm5 | 1 | 806:44:48 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm2 | 16 | 802:35:44 | 0 | 0 | 0 | 16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah04 | 137 | 792:06:52 | 65 | 13 | 0 | 57 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
affogato07 | 947 | 661:17:50 | 909 | 11 | 0 | 13 | 0 | 0 | 0 | 3 | 11 | 0 | 0 | 0 | 0 | 0 | 0
affogato09 | 842 | 648:20:14 | 811 | 15 | 0 | 9 | 0 | 0 | 0 | 5 | 2 | 0 | 0 | 0 | 0 | 0 | 0
nekomata01 | 214 | 621:28:54 | 205 | 4 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato02 | 1046 | 616:48:42 | 972 | 64 | 0 | 0 | 0 | 0 | 0 | 3 | 7 | 0 | 0 | 0 | 0 | 0 | 0
bigcat06 | 2226 | 553:09:10 | 2182 | 25 | 0 | 9 | 0 | 0 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0
affogato03 | 736 | 553:03:18 | 697 | 21 | 0 | 7 | 0 | 0 | 0 | 4 | 7 | 0 | 0 | 0 | 0 | 0 | 0
bigcat04 | 1854 | 543:28:58 | 1797 | 10 | 0 | 31 | 0 | 0 | 0 | 7 | 9 | 0 | 0 | 0 | 0 | 0 | 0
serval03 | 55 | 523:00:10 | 40 | 10 | 0 | 3 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
bigcat05 | 2193 | 522:47:00 | 2146 | 16 | 0 | 18 | 0 | 0 | 0 | 0 | 13 | 0 | 0 | 0 | 0 | 0 | 0
ai01 | 220 | 197:47:16 | 206 | 7 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato10 | 128 | 144:12:32 | 126 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai05 | 2 | 69:55:52 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0
jaguar05 | 735 | 54:40:34 | 723 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai07 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai08 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai09 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai10 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jinx01 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jinx02 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx01 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx02 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx03 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx04 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx05 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx06 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx07 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm3 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm4 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx02 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx03 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx05 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
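
A per-node breakdown like the one above can be approximated from each job's node list (a sketch; multi-node jobs would need their NodeList expanded first, for example with scontrol show hostnames, and jobs that never started have no node at all):

  # Jobs per node for the report window.
  sacct --allusers --allocations --noheader --parsable2 \
        --starttime=2026-02-08T00:00:00 --endtime=2026-02-14T23:59:59 \
        --format=NodeList | sort | uniq -c | sort -rn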
