
CS SLURM Cluster Report - 1 week

Report generated for jobs run on the CS SLURM cluster from 2026-04-05 through 2026-04-11.

Job total during this query range: 53,975

Job total since August 1st, 2024: 5,786,952

This page is updated every Sunday at 5:00 pm Eastern Time.


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2026-04-05T00:00:00 - 2026-04-11T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated               Down         PLND Down                    Idle             Planned                Reported 
--------- -------------- ---------------------- ------------------ ----------------- ----------------------- ------------------- ----------------------- 
       cs            cpu         257910(38.15%)          20(0.00%)          0(0.00%)          113610(16.81%)      304493(45.04%)         676032(100.00%) 
       cs            mem     1293631836(17.77%)      227465(0.00%)          0(0.00%)      5986588699(82.23%)            0(0.00%)     7280448000(100.00%) 
       cs       gres/gpu           9519(30.79%)           0(0.00%)          0(0.00%)           21393(69.21%)            0(0.00%)          30912(100.00%) 
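Each row of the sreport table above satisfies an accounting identity: Allocated + Down + PLND Down + Idle + Planned equals Reported (up to rounding of each cell to whole TRES-hours), and Reported is the TRES count times the 168 hours in the one-week window. A quick Python check against the gres/gpu row (the 158 + 26 GPU counts come from the partition definitions below):

```python
# gres/gpu row of the utilization table above, in TRES-hours.
allocated, down, plnd_down, idle, planned, reported = 9519, 0, 0, 21393, 0, 30912

# The five usage buckets partition the reported capacity.
assert allocated + down + plnd_down + idle + planned == reported

# Reported = (158 GPUs in 'gpu' + 26 GPUs in 'gnolim') * 168 hours in the week.
assert reported == (158 + 26) * 168

# Each percentage in the table is the cell divided by Reported.
assert round(100 * allocated / reported, 2) == 30.79
assert round(100 * idle / reported, 2) == 69.21
```

The cpu row is off by one TRES-hour (257910 + 20 + 113610 + 304493 = 676033 vs. 676032 reported) because each cell is rounded independently.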

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources.)
PartitionName=cpu
   TRES=cpu=1534,mem=17306000M,node=39
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2036,mem=22190000M,node=41,gres/gpu=158
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=220,mem=2464000M,node=6
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=234,mem=1376000M,node=9,gres/gpu=26
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
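The TRESBillingWeights lines control how a job's allocation is converted into billing units: by default SLURM bills the weighted sum of the job's allocated TRES (with PriorityFlags=MAX_TRES it would take the maximum weighted TRES instead). A minimal sketch using the gpu partition's weights; the per-GB memory unit and the example job are assumptions for illustration, not taken from this report:

```python
# Billing weights from the gpu partition definition above.
# Assumption: the Mem weight applies per GB in this sketch; slurm.conf
# can attach unit suffixes (e.g. "Mem=0.15G"), so check the cluster config.
GPU_WEIGHTS = {"cpu": 1.0, "mem": 0.15, "gres/gpu": 2.0}

def billing(allocated: dict, weights: dict) -> float:
    """Default SLURM billing: weighted sum of allocated TRES.

    TRES without a configured weight contribute nothing.
    """
    return sum(weights.get(tres, 0.0) * amount for tres, amount in allocated.items())

# Hypothetical job: 8 CPUs, 64 GB of RAM, 2 GPUs.
job = {"cpu": 8, "mem": 64, "gres/gpu": 2}
print(round(billing(job, GPU_WEIGHTS), 2))  # 8*1.0 + 64*0.15 + 2*2.0 = 21.6
```

Under these weights a GPU costs as much as two CPUs, which is why the cpu-only partitions double the CPU weight to 2.0.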

SLURM Usage by Partition

PartitionName | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
cpu | 34184 | 141494:52:34 | 25892 | 4212 | 0 | 1777 | 0 | 0 | 0 | 1219 | 1084 | 0 | 0 | 0 | 0 | 0 | 0
gpu | 16761 | 68303:44:54 | 15067 | 361 | 0 | 1200 | 0 | 0 | 0 | 113 | 20 | 0 | 0 | 0 | 0 | 0 | 0
gnolim | 1509 | 42804:58:18 | 1085 | 68 | 0 | 259 | 0 | 0 | 0 | 80 | 17 | 0 | 0 | 0 | 0 | 0 | 0
nolim | 1521 | 40822:06:55 | 1218 | 58 | 0 | 170 | 0 | 0 | 0 | 67 | 8 | 0 | 0 | 0 | 0 | 0 | 0
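In each partition row the per-state counts (completed through revoked) sum to total_jobs, so the table can be sanity-checked mechanically; the nolim row as an example:

```python
# Per-state job counts from the nolim partition row above.
nolim = {"completed": 1218, "cancelled": 58, "failed": 170,
         "timeout": 67, "out_of_memory": 8}  # all other states are 0
assert sum(nolim.values()) == 1521  # equals nolim's total_jobs
```

The same check holds for the other three rows, and the four partition totals (34184 + 16761 + 1509 + 1521) sum to the 53,975 jobs in this query range.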

SLURM Usage by Advisor Group

  • slurm-cs-undefined: users who have CS accounts but are not CS students
  • slurm-cs-unassigned: users who are CS students but do not have a listed CS advisor
GroupName | total_jobs | cputime(HH:MM:SS) | cpu | gpu | nolim | gnolim | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
slurm-cs-undefined | 3475 | 140869:33:20 | 3406 | 48 | 10 | 11 | 916 | 1920 | 0 | 567 | 0 | 0 | 0 | 46 | 26 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-ashish-venkat | 23233 | 45414:02:53 | 19278 | 1104 | 1511 | 1340 | 20661 | 797 | 0 | 747 | 0 | 0 | 0 | 879 | 149 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-lu-feng | 15341 | 29303:12:40 | 1757 | 13426 | 0 | 158 | 13254 | 179 | 0 | 1898 | 0 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-hadi-daneshmand | 14 | 16884:46:20 | 0 | 14 | 0 | 0 | 6 | 0 | 0 | 7 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-wei-kai-lin | 5784 | 16489:41:24 | 5782 | 2 | 0 | 0 | 3607 | 1628 | 0 | 0 | 0 | 0 | 0 | 501 | 48 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-madhur-behl | 102 | 12727:29:22 | 0 | 102 | 0 | 0 | 82 | 8 | 0 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-henry-kautz | 551 | 11945:38:00 | 449 | 102 | 0 | 0 | 471 | 0 | 0 | 77 | 0 | 0 | 0 | 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-chen-yu-wei | 2785 | 4280:14:12 | 1328 | 1457 | 0 | 0 | 2762 | 7 | 0 | 16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-mircea-stan | 1975 | 4257:27:08 | 1975 | 0 | 0 | 0 | 1010 | 71 | 0 | 2 | 0 | 0 | 0 | 16 | 876 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-unassigned | 11 | 2857:06:40 | 4 | 7 | 0 | 0 | 2 | 0 | 0 | 4 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yen-ling-kuo | 120 | 2126:23:40 | 0 | 120 | 0 | 0 | 83 | 8 | 0 | 23 | 0 | 0 | 0 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-kevin-skadron | 28 | 1838:38:40 | 27 | 1 | 0 | 0 | 14 | 2 | 0 | 11 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-tianhao-wang | 188 | 1203:01:24 | 116 | 72 | 0 | 0 | 137 | 4 | 0 | 22 | 0 | 0 | 0 | 0 | 25 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-shangtong-zhang | 8 | 1083:31:12 | 8 | 0 | 0 | 0 | 1 | 3 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-ferdinando-fioretto | 23 | 604:30:44 | 0 | 23 | 0 | 0 | 16 | 4 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-yu-meng | 46 | 548:23:36 | 0 | 46 | 0 | 0 | 14 | 16 | 0 | 3 | 0 | 0 | 0 | 12 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-zezhou-cheng | 105 | 451:23:34 | 44 | 61 | 0 | 0 | 98 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-rich-nguyen | 33 | 311:00:08 | 1 | 32 | 0 | 0 | 26 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-aidong-zhang | 147 | 222:50:22 | 4 | 143 | 0 | 0 | 100 | 38 | 0 | 8 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-sebastian-elbaum | 1 | 04:30:00 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm-cs-matheus-xavier-ferreira | 5 | 02:17:22 | 5 | 0 | 0 | 0 | 2 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
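Each advisor-group row satisfies two identities: the per-partition counts (cpu, gpu, nolim, gnolim) and the per-state counts each sum to the group's total_jobs. The slurm-cs-wei-kai-lin row as an example:

```python
# slurm-cs-wei-kai-lin row from the table above.
total_jobs = 5784
by_partition = {"cpu": 5782, "gpu": 2, "nolim": 0, "gnolim": 0}
by_state = {"completed": 3607, "cancelled": 1628,
            "timeout": 501, "out_of_memory": 48}  # all other states are 0

assert sum(by_partition.values()) == total_jobs
assert sum(by_state.values()) == total_jobs
```

The group totals in turn sum to the 53,975 jobs in the query range, so the advisor table is a complete partition of the week's workload.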

SLURM Usage by NodeName

Nodename | total_jobs | cputime(HH:MM:SS) | completed | cancelled | running | failed | preempted | requeued | pending | timeout | out_of_memory | suspended | boot_fail | deadline | node_fail | resizing | revoked
puma01 | 1784 | 14816:54:42 | 1019 | 524 | 0 | 128 | 0 | 0 | 0 | 61 | 52 | 0 | 0 | 0 | 0 | 0 | 0
serval03 | 7 | 13943:28:34 | 4 | 0 | 0 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
slurm4 | 328 | 8258:28:30 | 263 | 12 | 0 | 35 | 0 | 0 | 0 | 15 | 3 | 0 | 0 | 0 | 0 | 0 | 0
slurm3 | 304 | 8243:55:40 | 249 | 14 | 0 | 26 | 0 | 0 | 0 | 14 | 1 | 0 | 0 | 0 | 0 | 0 | 0
slurm2 | 325 | 8219:22:12 | 269 | 12 | 0 | 33 | 0 | 0 | 0 | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0
heartpiece | 165 | 7929:03:48 | 136 | 5 | 0 | 16 | 0 | 0 | 0 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0
ai07 | 51 | 7769:07:50 | 33 | 3 | 0 | 11 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat01 | 1843 | 7295:06:44 | 1314 | 117 | 0 | 222 | 0 | 0 | 0 | 86 | 104 | 0 | 0 | 0 | 0 | 0 | 0
bigcat02 | 2319 | 7065:31:32 | 1816 | 132 | 0 | 156 | 0 | 0 | 0 | 87 | 128 | 0 | 0 | 0 | 0 | 0 | 0
bigcat04 | 1609 | 6852:18:24 | 1292 | 115 | 0 | 129 | 0 | 0 | 0 | 29 | 44 | 0 | 0 | 0 | 0 | 0 | 0
jaguar01 | 270 | 6741:31:14 | 199 | 17 | 0 | 40 | 0 | 0 | 0 | 11 | 3 | 0 | 0 | 0 | 0 | 0 | 0
bigcat06 | 1980 | 6404:04:04 | 1572 | 161 | 0 | 99 | 0 | 0 | 0 | 74 | 74 | 0 | 0 | 0 | 0 | 0 | 0
jaguar06 | 145 | 6237:04:28 | 124 | 4 | 0 | 17 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
bigcat05 | 830 | 6117:06:22 | 594 | 99 | 0 | 79 | 0 | 0 | 0 | 17 | 41 | 0 | 0 | 0 | 0 | 0 | 0
bigcat03 | 1028 | 6096:39:42 | 754 | 83 | 0 | 104 | 0 | 0 | 0 | 17 | 70 | 0 | 0 | 0 | 0 | 0 | 0
serval07 | 145 | 5577:05:24 | 131 | 9 | 0 | 2 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
serval08 | 155 | 5218:46:22 | 149 | 4 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai09 | 273 | 5201:56:08 | 206 | 15 | 0 | 30 | 0 | 0 | 0 | 14 | 8 | 0 | 0 | 0 | 0 | 0 | 0
serval09 | 288 | 4716:38:50 | 281 | 6 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai10 | 299 | 4658:15:06 | 225 | 16 | 0 | 41 | 0 | 0 | 0 | 15 | 2 | 0 | 0 | 0 | 0 | 0 | 0
ai05 | 276 | 4653:03:08 | 206 | 7 | 0 | 43 | 0 | 0 | 0 | 16 | 4 | 0 | 0 | 0 | 0 | 0 | 0
lotus | 95 | 4338:23:52 | 53 | 23 | 0 | 8 | 0 | 0 | 0 | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0
struct02 | 626 | 4334:21:28 | 448 | 109 | 0 | 4 | 0 | 0 | 0 | 33 | 32 | 0 | 0 | 0 | 0 | 0 | 0
cheetah01 | 130 | 4324:32:54 | 109 | 7 | 0 | 12 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct09 | 684 | 4253:48:34 | 553 | 73 | 0 | 5 | 0 | 0 | 0 | 36 | 17 | 0 | 0 | 0 | 0 | 0 | 0
jinx01 | 153 | 4206:28:32 | 107 | 6 | 0 | 33 | 0 | 0 | 0 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0
lynx09 | 647 | 4203:08:06 | 392 | 127 | 0 | 78 | 0 | 0 | 0 | 35 | 15 | 0 | 0 | 0 | 0 | 0 | 0
jinx02 | 166 | 4188:50:46 | 90 | 6 | 0 | 62 | 0 | 0 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct07 | 869 | 4162:50:54 | 734 | 72 | 0 | 7 | 0 | 0 | 0 | 26 | 30 | 0 | 0 | 0 | 0 | 0 | 0
ai08 | 93 | 4140:31:08 | 72 | 7 | 0 | 4 | 0 | 0 | 0 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0
struct04 | 647 | 4124:07:44 | 514 | 64 | 0 | 20 | 0 | 0 | 0 | 27 | 22 | 0 | 0 | 0 | 0 | 0 | 0
slurm1 | 245 | 4090:42:39 | 176 | 10 | 0 | 42 | 0 | 0 | 0 | 15 | 2 | 0 | 0 | 0 | 0 | 0 | 0
slurm5 | 154 | 4080:34:06 | 125 | 5 | 0 | 18 | 0 | 0 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0
titanx03 | 153 | 4077:55:26 | 103 | 8 | 0 | 35 | 0 | 0 | 0 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0
struct08 | 505 | 4003:43:02 | 379 | 84 | 0 | 2 | 0 | 0 | 0 | 19 | 21 | 0 | 0 | 0 | 0 | 0 | 0
struct01 | 655 | 3997:12:44 | 471 | 116 | 0 | 9 | 0 | 0 | 0 | 32 | 27 | 0 | 0 | 0 | 0 | 0 | 0
struct06 | 604 | 3974:59:16 | 463 | 63 | 0 | 2 | 0 | 0 | 0 | 41 | 35 | 0 | 0 | 0 | 0 | 0 | 0
struct05 | 766 | 3928:32:34 | 631 | 75 | 0 | 4 | 0 | 0 | 0 | 30 | 26 | 0 | 0 | 0 | 0 | 0 | 0
titanx05 | 45 | 3908:50:14 | 43 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
struct03 | 512 | 3865:45:28 | 379 | 69 | 0 | 10 | 0 | 0 | 0 | 31 | 23 | 0 | 0 | 0 | 0 | 0 | 0
lynx08 | 679 | 3841:35:06 | 434 | 107 | 0 | 94 | 0 | 0 | 0 | 33 | 11 | 0 | 0 | 0 | 0 | 0 | 0
affogato05 | 1406 | 3323:47:10 | 1087 | 178 | 0 | 53 | 0 | 0 | 0 | 44 | 44 | 0 | 0 | 0 | 0 | 0 | 0
affogato01 | 1314 | 3275:41:22 | 1023 | 184 | 0 | 35 | 0 | 0 | 0 | 41 | 31 | 0 | 0 | 0 | 0 | 0 | 0
affogato02 | 1107 | 3173:09:34 | 789 | 188 | 0 | 66 | 0 | 0 | 0 | 30 | 34 | 0 | 0 | 0 | 0 | 0 | 0
cortado01 | 1288 | 3035:12:10 | 1088 | 108 | 0 | 81 | 0 | 0 | 0 | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0
affogato04 | 1330 | 2932:51:44 | 1048 | 175 | 0 | 48 | 0 | 0 | 0 | 25 | 34 | 0 | 0 | 0 | 0 | 0 | 0
cortado02 | 1146 | 2923:57:58 | 911 | 168 | 0 | 47 | 0 | 0 | 0 | 18 | 2 | 0 | 0 | 0 | 0 | 0 | 0
cheetah04 | 7 | 2526:23:04 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
panther01 | 449 | 2142:45:02 | 267 | 82 | 0 | 51 | 0 | 0 | 0 | 14 | 35 | 0 | 0 | 0 | 0 | 0 | 0
cortado03 | 1274 | 2049:48:44 | 1138 | 101 | 0 | 6 | 0 | 0 | 0 | 23 | 6 | 0 | 0 | 0 | 0 | 0 | 0
cortado04 | 1096 | 1980:06:26 | 945 | 116 | 0 | 3 | 0 | 0 | 0 | 30 | 2 | 0 | 0 | 0 | 0 | 0 | 0
jaguar02 | 3696 | 1951:10:12 | 3367 | 15 | 0 | 295 | 0 | 0 | 0 | 17 | 2 | 0 | 0 | 0 | 0 | 0 | 0
affogato08 | 514 | 1879:00:52 | 360 | 79 | 0 | 25 | 0 | 0 | 0 | 32 | 18 | 0 | 0 | 0 | 0 | 0 | 0
affogato09 | 502 | 1847:45:32 | 366 | 78 | 0 | 26 | 0 | 0 | 0 | 17 | 15 | 0 | 0 | 0 | 0 | 0 | 0
affogato10 | 582 | 1846:01:36 | 422 | 87 | 0 | 30 | 0 | 0 | 0 | 23 | 20 | 0 | 0 | 0 | 0 | 0 | 0
affogato07 | 611 | 1818:35:00 | 449 | 88 | 0 | 32 | 0 | 0 | 0 | 24 | 18 | 0 | 0 | 0 | 0 | 0 | 0
affogato06 | 605 | 1801:27:08 | 433 | 86 | 0 | 43 | 0 | 0 | 0 | 24 | 19 | 0 | 0 | 0 | 0 | 0 | 0
affogato03 | 579 | 1671:17:34 | 442 | 82 | 0 | 17 | 0 | 0 | 0 | 18 | 20 | 0 | 0 | 0 | 0 | 0 | 0
cortado06 | 661 | 1622:11:46 | 544 | 60 | 0 | 27 | 0 | 0 | 0 | 27 | 3 | 0 | 0 | 0 | 0 | 0 | 0
cortado05 | 442 | 1614:29:16 | 322 | 81 | 0 | 11 | 0 | 0 | 0 | 24 | 4 | 0 | 0 | 0 | 0 | 0 | 0
cheetah09 | 1302 | 1386:43:30 | 1081 | 24 | 0 | 192 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah08 | 1583 | 1179:08:26 | 1476 | 28 | 0 | 72 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah02 | 1892 | 1079:12:02 | 1724 | 59 | 0 | 100 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cheetah03 | 887 | 1045:56:14 | 823 | 31 | 0 | 27 | 0 | 0 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar05 | 1918 | 1020:01:00 | 1822 | 10 | 0 | 68 | 0 | 0 | 0 | 11 | 7 | 0 | 0 | 0 | 0 | 0 | 0
affogato11 | 166 | 981:59:24 | 116 | 7 | 0 | 40 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic01 | 110 | 963:51:58 | 81 | 14 | 0 | 4 | 0 | 0 | 0 | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 0
cortado07 | 209 | 948:45:16 | 148 | 30 | 0 | 15 | 0 | 0 | 0 | 15 | 1 | 0 | 0 | 0 | 0 | 0 | 0
cortado09 | 156 | 943:51:24 | 95 | 26 | 0 | 5 | 0 | 0 | 0 | 28 | 2 | 0 | 0 | 0 | 0 | 0 | 0
cortado10 | 148 | 727:26:40 | 113 | 7 | 0 | 0 | 0 | 0 | 0 | 27 | 1 | 0 | 0 | 0 | 0 | 0 | 0
cortado08 | 176 | 598:53:54 | 143 | 18 | 0 | 2 | 0 | 0 | 0 | 11 | 2 | 0 | 0 | 0 | 0 | 0 | 0
serval06 | 40 | 589:51:48 | 19 | 4 | 0 | 17 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic04 | 66 | 524:53:10 | 62 | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
jaguar03 | 40 | 511:15:12 | 27 | 5 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai06 | 654 | 497:06:32 | 589 | 24 | 0 | 28 | 0 | 0 | 0 | 6 | 7 | 0 | 0 | 0 | 0 | 0 | 0
lynx10 | 178 | 496:15:52 | 143 | 17 | 0 | 17 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic03 | 105 | 434:33:54 | 88 | 7 | 0 | 9 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato15 | 103 | 369:09:32 | 60 | 4 | 0 | 39 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
nekomata01 | 194 | 327:37:26 | 129 | 18 | 0 | 45 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato14 | 97 | 324:38:32 | 60 | 4 | 0 | 33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
affogato13 | 88 | 310:35:52 | 60 | 4 | 0 | 24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic02 | 122 | 304:06:36 | 107 | 6 | 0 | 8 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic05 | 67 | 167:32:08 | 65 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adriatic06 | 86 | 129:01:20 | 50 | 6 | 0 | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai03 | 536 | 21:24:02 | 521 | 0 | 0 | 15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai04 | 531 | 21:16:30 | 516 | 0 | 0 | 15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai01 | 523 | 21:15:28 | 511 | 0 | 0 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai02 | 533 | 21:13:32 | 518 | 0 | 0 | 15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx01 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx02 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx03 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx04 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx05 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx06 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
lynx07 | 0 | 00:00:00 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

slurm_report_one_week.txt · Last modified: 2026/04/19 17:00 by 127.0.0.1