CS SLURM Cluster Report - 1 week

Report generated for jobs run on the CS SLURM cluster from 2026-03-29 through 2026-04-04.

Job total during this query range: 19,790

Job total since August 1, 2024: 5,729,897

This page is updated every Sunday at 5:00 PM Eastern Time.


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2026-03-29T00:00:00 - 2026-04-04T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                 Down         PLND Down                    Idle             Planned                Reported 
--------- -------------- ---------------------- -------------------- ----------------- ----------------------- ------------------- ----------------------- 
       cs            cpu         183888(27.13%)          1833(0.27%)          0(0.00%)          186570(27.52%)      305574(45.08%)          677865(99.97%) 
       cs            mem     1098716716(15.05%)      20827222(0.29%)          0(0.00%)      6181731284(84.67%)            0(0.00%)      7301275222(99.96%) 
       cs       gres/gpu          10557(34.06%)            83(0.27%)          0(0.00%)           20355(65.67%)            0(0.00%)          30995(100.27%) 
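Each percentage in the utilization table is that TRES state's share of the Reported column, rounded to two decimal places. A minimal Python sketch (hard-coding the CPU TRES-hour figures from the cpu row above) reproduces the arithmetic:

```python
# Recompute the percentage columns of the sreport utilization table.
# The literals below are the CPU TRES-hour figures from the "cpu" row.
allocated, down, idle, planned, reported = 183888, 1833, 186570, 305574, 677865

def share(tres_hours: int, reported_total: int) -> float:
    """Return a TRES state's share of the Reported total, in percent."""
    return round(100 * tres_hours / reported_total, 2)

print(share(allocated, reported))  # 27.13, matching the Allocated column
print(share(idle, reported))       # 27.52
print(share(planned, reported))    # 45.08
```

Because each state is measured and rounded independently, the columns need not sum to exactly 100%, as the gres/gpu row's 100.27% Reported figure shows.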

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1534,mem=17306000M,node=39
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2036,mem=22190000M,node=41,gres/gpu=158
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=220,mem=2464000M,node=6
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=234,mem=1376000M,node=9,gres/gpu=26
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
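By default, Slurm computes a job's billable usage by multiplying each TRES by its partition weight and summing the results. A hedged sketch of that arithmetic, assuming the unsuffixed Mem=0.15 weight applies per GB of requested memory (a site convention, not stated in the report) and using a hypothetical job, not one from the tables below:

```python
# Sketch of Slurm's default TRESBillingWeights arithmetic:
#   billing = sum(weight * usage) over the TRES listed for the partition.
# Assumption: the Mem=0.15 weight is applied per GB of requested memory.
gpu_weights = {"cpu": 1.0, "mem_gb": 0.15, "gres/gpu": 2.0}  # gpu partition above

job = {"cpu": 4, "mem_gb": 16, "gres/gpu": 1}  # hypothetical 4-CPU, 16 GB, 1-GPU job
billing = sum(gpu_weights[tres] * amount for tres, amount in job.items())
print(billing)  # 4*1.0 + 16*0.15 + 1*2.0 = 8.4
```

Slurm also supports a MAX_TRES priority flag that bills by the largest weighted TRES rather than the sum; the sum shown here is the default behavior.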

SLURM Usage by Partition

PartitionName  total_jobs  cputime(HH:MM:SS)  completed  cancelled  running  failed  preempted  requeued  pending  timeout  out_of_memory  suspended  boot_fail  deadline  node_fail  resizing  revoked
gpu  6239  77554:07:14  5068  240  0  898  0  0  0  20  13  0  0  0  0  0  0
cpu  12746  26075:51:48  11485  817  0  143  0  0  0  121  180  0  0  0  0  0  0
gnolim  405  836:23:58  375  4  0  19  0  0  0  0  7  0  0  0  0  0  0
nolim  400  414:33:28  400  0  0  0  0  0  0  0  0  0  0  0  0  0  0
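The cputime column is cumulative CPU time in HH:MM:SS, where the hours field grows without wrapping at 24. A small helper (using the gpu partition's 6,239 jobs and 77554:07:14 of CPU time from the table) converts it to hours and a per-job average:

```python
def cputime_to_hours(hms: str) -> float:
    """Convert a cumulative Slurm HH:MM:SS string (hours may exceed 24) to hours."""
    hours, minutes, seconds = (int(part) for part in hms.split(":"))
    return hours + minutes / 60 + seconds / 3600

# gpu partition: 6,239 jobs consumed 77554:07:14 of CPU time during the week
average = cputime_to_hours("77554:07:14") / 6239
print(round(average, 2))  # roughly 12.43 CPU-hours per job
```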

SLURM Usage by Advisor Group

  • slurm-cs-undefined: users who have CS accounts but are not CS students
  • slurm-cs-unassigned: users who are CS students but do not have a listed CS advisor
GroupName  total_jobs  cputime(HH:MM:SS)  cpu  gpu  nolim  gnolim  completed  cancelled  running  failed  preempted  requeued  pending  timeout  out_of_memory  suspended  boot_fail  deadline  node_fail  resizing  revoked
slurm-cs-hadi-daneshmand  14  2461:00:18  0  14  0  0  0  0  0  10  0  0  0  4  0  0  0  0  0  0  0
slurm-cs-henry-kautz  832  15779:36:46  716  116  0  0  452  204  0  24  0  0  0  3  149  0  0  0  0  0  0
slurm-cs-zezhou-cheng  2458  9943:35:36  845  1613  0  0  2267  63  0  121  0  0  0  2  5  0  0  0  0  0  0
slurm-cs-ashish-venkat  7574  9515:58:18  6386  405  400  383  7244  192  0  20  0  0  0  77  41  0  0  0  0  0  0
slurm-cs-brad-campbell  38  9322:25:20  8  30  0  0  15  13  0  6  0  0  0  4  0  0  0  0  0  0  0
slurm-cs-undefined  350  8563:15:18  167  183  0  0  136  99  0  106  0  0  0  6  3  0  0  0  0  0  0
slurm-cs-chen-yu-wei  5282  6350:49:26  2608  2674  0  0  5277  0  0  4  0  0  0  1  0  0  0  0  0  0  0
slurm-cs-lu-feng  672  5477:39:54  0  650  0  22  27  89  0  555  0  0  0  0  1  0  0  0  0  0  0
slurm-cs-mircea-stan  816  4463:30:00  816  0  0  0  504  267  0  44  0  0  0  0  1  0  0  0  0  0  0
slurm-cs-unassigned  6  2706:08:52  4  2  0  0  1  0  0  1  0  0  0  4  0  0  0  0  0  0  0
slurm-cs-tianhao-wang  26  2109:30:40  0  26  0  0  21  3  0  2  0  0  0  0  0  0  0  0  0  0  0
slurm-cs-yu-meng  28  1946:08:30  0  28  0  0  14  14  0  0  0  0  0  0  0  0  0  0  0  0  0
slurm-cs-madhur-behl  420  1556:23:08  0  420  0  0  186  106  0  127  0  0  0  1  0  0  0  0  0  0  0
slurm-cs-yen-ling-kuo  926  1451:37:02  920  6  0  0  921  1  0  4  0  0  0  0  0  0  0  0  0  0  0
slurm-cs-kevin-skadron  61  1447:01:56  25  36  0  0  30  6  0  24  0  0  0  1  0  0  0  0  0  0  0
slurm-cs-wei-kai-lin  259  916:59:12  244  15  0  0  219  0  0  3  0  0  0  37  0  0  0  0  0  0  0
slurm-cs-ferdinando-fioretto  12  773:04:44  0  12  0  0  7  1  0  3  0  0  0  1  0  0  0  0  0  0  0
slurm-cs-nada-basit  5  93:21:20  0  5  0  0  0  0  0  5  0  0  0  0  0  0  0  0  0  0  0
slurm-cs-yue-cheng  2  01:31:28  0  2  0  0  1  0  0  1  0  0  0  0  0  0  0  0  0  0  0
slurm-cs-sebastian-elbaum  2  00:40:20  0  2  0  0  0  2  0  0  0  0  0  0  0  0  0  0  0  0  0
slurm-cs-matheus-xavier-ferreira  7  00:38:20  7  0  0  0  6  1  0  0  0  0  0  0  0  0  0  0  0  0  0

SLURM Usage by NodeName

Nodename  total_jobs  cputime(HH:MM:SS)  completed  cancelled  running  failed  preempted  requeued  pending  timeout  out_of_memory  suspended  boot_fail  deadline  node_fail  resizing  revoked
serval03  4  13665:00:00  0  0  0  4  0  0  0  0  0  0  0  0  0  0  0
jaguar06  62  5598:29:34  57  1  0  3  0  0  0  1  0  0  0  0  0  0  0
cheetah03  338  5375:18:00  333  0  0  1  0  0  0  3  1  0  0  0  0  0  0
lotus  249  4536:31:38  120  46  0  82  0  0  0  0  1  0  0  0  0  0  0
cheetah01  121  4043:55:18  102  6  0  12  0  0  0  1  0  0  0  0  0  0  0
jaguar01  220  3648:52:10  210  4  0  6  0  0  0  0  0  0  0  0  0  0  0
serval08  414  3403:18:42  413  1  0  0  0  0  0  0  0  0  0  0  0  0  0
cheetah02  392  2852:09:34  369  9  0  11  0  0  0  3  0  0  0  0  0  0  0
puma01  1332  2836:43:30  1271  37  0  0  0  0  0  10  14  0  0  0  0  0  0
cheetah09  165  2750:08:32  129  9  0  27  0  0  0  0  0  0  0  0  0  0  0
affogato11  74  2748:30:54  40  9  0  24  0  0  0  0  1  0  0  0  0  0  0
cheetah08  201  2588:25:30  162  8  0  31  0  0  0  0  0  0  0  0  0  0  0
ai01  13  2583:19:44  7  4  0  1  0  0  0  1  0  0  0  0  0  0  0
ai03  7  2509:10:30  3  2  0  1  0  0  0  1  0  0  0  0  0  0  0
ai02  20  2422:32:20  15  3  0  1  0  0  0  1  0  0  0  0  0  0  0
cheetah04  2  2312:10:08  0  0  0  0  0  0  0  2  0  0  0  0  0  0  0
ai04  37  2055:16:28  28  2  0  6  0  0  0  1  0  0  0  0  0  0  0
bigcat01  818  1923:53:32  682  130  0  3  0  0  0  1  2  0  0  0  0  0  0
serval09  34  1919:17:26  25  3  0  0  0  0  0  0  6  0  0  0  0  0  0
affogato03  308  1876:02:02  271  20  0  4  0  0  0  4  9  0  0  0  0  0  0
jaguar02  558  1761:53:06  471  19  0  65  0  0  0  1  2  0  0  0  0  0  0
lynx09  381  1641:24:14  337  29  0  4  0  0  0  8  3  0  0  0  0  0  0
affogato01  605  1607:38:24  552  32  0  7  0  0  0  9  5  0  0  0  0  0  0
serval07  405  1530:12:58  401  3  0  0  0  0  0  0  1  0  0  0  0  0  0
lynx08  489  1313:13:38  415  26  0  27  0  0  0  15  6  0  0  0  0  0  0
ai06  294  1224:06:52  275  4  0  15  0  0  0  0  0  0  0  0  0  0  0
panther01  473  898:48:42  388  48  0  16  0  0  0  11  10  0  0  0  0  0  0
affogato10  327  881:10:38  289  29  0  0  0  0  0  0  9  0  0  0  0  0  0
affogato09  312  871:49:42  272  28  0  1  0  0  0  2  9  0  0  0  0  0  0
affogato06  374  864:25:28  327  37  0  2  0  0  0  0  8  0  0  0  0  0  0
jaguar05  287  834:08:26  236  6  0  43  0  0  0  1  1  0  0  0  0  0  0
lynx10  216  825:57:58  199  4  0  13  0  0  0  0  0  0  0  0  0  0  0
bigcat02  476  783:55:40  432  43  0  0  0  0  0  1  0  0  0  0  0  0  0
lynx01  13  700:10:16  9  1  0  2  0  0  0  1  0  0  0  0  0  0  0
affogato07  377  673:41:38  337  26  0  5  0  0  0  0  9  0  0  0  0  0  0
bigcat03  386  655:49:06  367  0  0  6  0  0  0  13  0  0  0  0  0  0  0
adriatic01  266  651:29:08  232  6  0  27  0  0  0  0  1  0  0  0  0  0  0
affogato02  414  647:29:58  397  16  0  1  0  0  0  0  0  0  0  0  0  0  0
affogato05  487  592:17:46  447  31  0  3  0  0  0  0  6  0  0  0  0  0  0
cortado01  318  553:53:32  312  0  0  2  0  0  0  3  1  0  0  0  0  0  0
struct01  322  538:03:38  304  17  0  0  0  0  0  1  0  0  0  0  0  0  0
struct04  239  531:05:44  207  16  0  7  0  0  0  8  1  0  0  0  0  0  0
adriatic02  184  525:04:50  150  6  0  28  0  0  0  0  0  0  0  0  0  0  0
struct02  329  507:24:16  309  17  0  0  0  0  0  1  2  0  0  0  0  0  0
adriatic03  156  506:34:32  114  7  0  34  0  0  0  1  0  0  0  0  0  0  0
struct05  223  502:28:32  189  16  0  8  0  0  0  7  3  0  0  0  0  0  0
affogato08  355  495:16:44  311  23  0  9  0  0  0  11  1  0  0  0  0  0  0
cortado02  350  489:31:24  330  1  0  0  0  0  0  10  9  0  0  0  0  0  0
affogato04  395  478:17:20  351  37  0  1  0  0  0  1  5  0  0  0  0  0  0
struct03  301  466:28:06  272  16  0  7  0  0  0  3  3  0  0  0  0  0  0
affogato13  117  458:54:36  89  8  0  20  0  0  0  0  0  0  0  0  0  0  0
struct06  228  448:06:40  209  15  0  1  0  0  0  2  1  0  0  0  0  0  0
struct08  266  436:36:04  234  17  0  6  0  0  0  2  7  0  0  0  0  0  0
bigcat06  292  433:34:18  277  13  0  0  0  0  0  2  0  0  0  0  0  0  0
struct07  287  420:41:28  255  17  0  6  0  0  0  1  8  0  0  0  0  0  0
affogato14  132  413:33:58  106  4  0  22  0  0  0  0  0  0  0  0  0  0  0
struct09  250  397:15:04  221  16  0  13  0  0  0  0  0  0  0  0  0  0  0
adriatic04  102  385:03:42  68  6  0  28  0  0  0  0  0  0  0  0  0  0  0
adriatic05  90  362:57:50  55  6  0  29  0  0  0  0  0  0  0  0  0  0  0
serval06  7  355:20:32  7  0  0  0  0  0  0  0  0  0  0  0  0  0  0
nekomata01  532  344:23:34  321  0  0  210  0  0  0  1  0  0  0  0  0  0  0
affogato15  116  343:06:22  87  0  0  29  0  0  0  0  0  0  0  0  0  0  0
adriatic06  89  328:08:22  51  6  0  32  0  0  0  0  0  0  0  0  0  0  0
bigcat05  174  324:37:56  150  24  0  0  0  0  0  0  0  0  0  0  0  0  0
bigcat04  192  315:00:50  161  31  0  0  0  0  0  0  0  0  0  0  0  0  0
ai05  146  312:34:28  134  2  0  10  0  0  0  0  0  0  0  0  0  0  0
cortado04  407  276:03:56  382  0  0  0  0  0  0  1  24  0  0  0  0  0  0
lynx03  15  253:50:50  12  0  0  3  0  0  0  0  0  0  0  0  0  0  0
lynx02  18  247:56:14  14  0  0  4  0  0  0  0  0  0  0  0  0  0  0
lynx04  17  239:45:44  17  0  0  0  0  0  0  0  0  0  0  0  0  0  0
jinx02  38  185:44:24  31  1  0  4  0  0  0  0  2  0  0  0  0  0  0
cortado10  117  170:16:26  98  7  0  3  0  0  0  3  6  0  0  0  0  0  0
jinx01  50  150:26:24  41  1  0  5  0  0  0  0  3  0  0  0  0  0  0
slurm2  92  122:37:52  92  0  0  0  0  0  0  0  0  0  0  0  0  0  0
slurm3  85  101:34:12  85  0  0  0  0  0  0  0  0  0  0  0  0  0  0
lynx07  38  100:11:56  33  0  0  5  0  0  0  0  0  0  0  0  0  0  0
slurm4  84  88:47:16  84  0  0  0  0  0  0  0  0  0  0  0  0  0  0
lynx05  16  58:45:52  14  0  0  2  0  0  0  0  0  0  0  0  0  0  0
titanx03  44  56:31:38  42  0  0  0  0  0  0  0  2  0  0  0  0  0  0
lynx06  16  56:29:32  15  0  0  1  0  0  0  0  0  0  0  0  0  0  0
cortado09  22  52:37:10  21  0  0  0  0  0  0  1  0  0  0  0  0  0  0
heartpiece  42  44:22:38  42  0  0  0  0  0  0  0  0  0  0  0  0  0  0
slurm1  83  43:22:50  83  0  0  0  0  0  0  0  0  0  0  0  0  0  0
ai07  33  40:15:28  33  0  0  0  0  0  0  0  0  0  0  0  0  0  0
cortado05  24  36:04:28  21  1  0  0  0  0  0  0  2  0  0  0  0  0  0
cortado07  22  35:51:26  20  0  0  0  0  0  0  0  2  0  0  0  0  0  0
ai10  34  35:18:36  34  0  0  0  0  0  0  0  0  0  0  0  0  0  0
cortado03  30  34:27:58  27  1  0  1  0  0  0  0  1  0  0  0  0  0  0
jaguar03  151  33:33:36  79  4  0  68  0  0  0  0  0  0  0  0  0  0  0
ai08  34  32:31:26  34  0  0  0  0  0  0  0  0  0  0  0  0  0  0
cortado08  20  32:12:40  18  0  0  0  0  0  0  0  2  0  0  0  0  0  0
cortado06  24  31:32:10  22  0  0  0  0  0  0  0  2  0  0  0  0  0  0
slurm5  14  13:48:40  14  0  0  0  0  0  0  0  0  0  0  0  0  0  0
titanx05  13  13:40:46  13  0  0  0  0  0  0  0  0  0  0  0  0  0  0
ai09  13  09:20:48  13  0  0  0  0  0  0  0  0  0  0  0  0  0  0

slurm_report_one_week.txt · Last modified: 2026/04/12 17:00 by 127.0.0.1