
CS SLURM Cluster Report - 1 week

Report generated for jobs run on the CS SLURM cluster from 2026-04-12 through 2026-04-18.

Job total during this query range: 55,183

Job total since August 1st 2024: 5,846,128

This page is updated every Sunday at 5:00 PM Eastern time.


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2026-04-12T00:00:00 - 2026-04-18T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                  Down         PLND Down                    Idle             Planned                Reported 
--------- -------------- ---------------------- --------------------- ----------------- ----------------------- ------------------- ----------------------- 
       cs            cpu         257584(38.10%)           2952(0.44%)          0(0.00%)          115026(17.01%)      300471(44.45%)         676032(100.00%) 
       cs            mem     1401596504(19.25%)       39756663(0.55%)          0(0.00%)      5839094832(80.20%)            0(0.00%)     7280448000(100.00%) 
       cs       gres/gpu          11948(38.65%)            308(1.00%)          0(0.00%)           18656(60.35%)            0(0.00%)          30912(100.00%) 
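
The percentage columns above are each bucket's share of the Reported TRES-hours, and the five buckets account for all reported hours up to integer rounding. A quick sketch recomputing the cpu row, with the values copied from the table:

```python
# Recompute the percentage columns of the sreport block for the cpu row.
# Values are TRES-hours copied from the utilization table above.
cpu_row = {
    "Allocated": 257584, "Down": 2952, "PLND Down": 0,
    "Idle": 115026, "Planned": 300471, "Reported": 676032,
}

def share(bucket_hours: int, reported_hours: int) -> float:
    """A bucket's percentage of the reported capacity, rounded to two places."""
    return round(100.0 * bucket_hours / reported_hours, 2)

for name in ("Allocated", "Down", "PLND Down", "Idle", "Planned"):
    print(f"{name}: {share(cpu_row[name], cpu_row['Reported'])}%")

# The buckets sum back to Reported, up to integer-hour rounding.
buckets = sum(v for k, v in cpu_row.items() if k != "Reported")
assert abs(buckets - cpu_row["Reported"]) <= 1
```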

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1534,mem=17306000M,node=39
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2036,mem=22190000M,node=41,gres/gpu=158
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=220,mem=2464000M,node=6
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=234,mem=1376000M,node=9,gres/gpu=26
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
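
The TRESBillingWeights lines above control how jobs are charged: by default Slurm bills the sum of each allocated TRES multiplied by its weight (with PriorityFlags=MAX_TRES it takes the maximum weighted TRES instead, and the memory unit depends on the suffix used in the weight). A minimal sketch for a hypothetical job on the gpu partition, using the weights listed above:

```python
# Weighted-sum billing sketch using the gpu partition's TRESBillingWeights.
gpu_partition_weights = {"CPU": 1.0, "Mem": 0.15, "GRES/gpu": 2.0}

def billing(alloc: dict, weights: dict) -> float:
    """Billable TRES as the weighted sum of the job's allocation."""
    return sum(weights.get(tres, 0.0) * amount for tres, amount in alloc.items())

# Hypothetical job: 8 CPUs, 64 memory units, and 2 GPUs.
job = {"CPU": 8, "Mem": 64, "GRES/gpu": 2}
print(billing(job, gpu_partition_weights))  # 8*1.0 + 64*0.15 + 2*2.0
```

Note that the cpu and nolim partitions weight CPU at 2.0, so the same core count bills twice what it does on the GPU partitions.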

SLURM Usage by Partition

Partition | total_jobs | cputime(HH:MM:SS) | completed | cancelled | failed | timeout | out_of_memory
cpu | 37754 | 115769:43:06 | 29179 | 2543 | 5481 | 319 | 232
gpu | 14306 | 95629:40:36 | 12170 | 411 | 1620 | 81 | 24
gnolim | 2529 | 16693:41:34 | 1518 | 68 | 928 | 8 | 7
nolim | 594 | 13722:28:56 | 563 | 5 | 16 | 7 | 3

(The running, preempted, requeued, pending, suspended, boot_fail, deadline, node_fail, resizing, and revoked counts were zero for every row and are omitted from this table.)
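
In each of these usage tables, the per-state job counts add up to the row's total_jobs. A quick consistency check against the cpu partition's counts for the week (states with zero jobs skipped):

```python
# Consistency check: per-state job counts partition the row's total_jobs.
# Values are the cpu partition's counts for this week's report.
cpu_states = {"completed": 29179, "cancelled": 2543, "failed": 5481,
              "timeout": 319, "out_of_memory": 232}
cpu_total_jobs = 37754
assert sum(cpu_states.values()) == cpu_total_jobs
```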

SLURM Usage by Advisor Group

  • slurm-cs-undefined: users with CS accounts who are not CS students
  • slurm-cs-unassigned: CS students who do not have a listed CS advisor
GroupName | total_jobs | cputime(HH:MM:SS) | cpu | gpu | nolim | gnolim | completed | cancelled | failed | timeout | out_of_memory
slurm-cs-zezhou-cheng | 3503 | 80777:08:46 | 1619 | 1884 | 0 | 0 | 2934 | 146 | 364 | 56 | 3
slurm-cs-undefined | 421 | 45811:17:02 | 350 | 41 | 21 | 9 | 244 | 70 | 26 | 39 | 42
slurm-cs-lu-feng | 23098 | 42876:32:28 | 13473 | 7709 | 0 | 1916 | 15180 | 601 | 7277 | 11 | 29
slurm-cs-ashish-venkat | 11315 | 17391:47:36 | 8430 | 1710 | 571 | 604 | 11010 | 46 | 45 | 155 | 59
slurm-cs-wei-kai-lin | 9216 | 12076:31:18 | 9199 | 17 | 0 | 0 | 7053 | 1902 | 9 | 132 | 120
slurm-cs-hadi-daneshmand | 21 | 8045:29:56 | 0 | 21 | 0 | 0 | 0 | 0 | 21 | 0 | 0
slurm-cs-yu-meng | 49 | 6371:04:36 | 0 | 49 | 0 | 0 | 4 | 28 | 1 | 11 | 5
slurm-cs-chen-yu-wei | 4316 | 5084:27:36 | 2204 | 2112 | 0 | 0 | 4315 | 0 | 1 | 0 | 0
slurm-cs-kevin-skadron | 92 | 3591:07:04 | 23 | 69 | 0 | 0 | 31 | 11 | 47 | 2 | 1
slurm-cs-madhur-behl | 2 | 3369:19:36 | 0 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | 0
slurm-cs-unassigned | 119 | 3110:00:46 | 2 | 117 | 0 | 0 | 45 | 12 | 60 | 2 | 0
slurm-cs-ferdinando-fioretto | 208 | 3094:29:04 | 0 | 208 | 0 | 0 | 155 | 16 | 32 | 3 | 2
slurm-cs-henry-kautz | 800 | 3043:49:10 | 779 | 21 | 0 | 0 | 657 | 79 | 61 | 1 | 2
slurm-cs-tianhao-wang | 38 | 2917:19:20 | 20 | 18 | 0 | 0 | 10 | 10 | 15 | 3 | 0
slurm-cs-shangtong-zhang | 14 | 2109:07:12 | 12 | 2 | 0 | 0 | 10 | 1 | 3 | 0 | 0
slurm-cs-yen-ling-kuo | 1859 | 1496:35:48 | 1634 | 225 | 0 | 0 | 1724 | 97 | 38 | 0 | 0
slurm-cs-aidong-zhang | 49 | 425:13:16 | 2 | 47 | 0 | 0 | 4 | 6 | 38 | 0 | 1
slurm-cs-matheus-xavier-ferreira | 12 | 71:55:48 | 6 | 6 | 0 | 0 | 6 | 0 | 5 | 0 | 1
slurm-cs-adwait-jog | 2 | 66:09:50 | 0 | 0 | 2 | 0 | 1 | 0 | 0 | 0 | 1
slurm-cs-jack-davidson | 8 | 56:19:12 | 0 | 8 | 0 | 0 | 5 | 2 | 1 | 0 | 0
slurm-cs-mircea-stan | 31 | 25:37:50 | 1 | 30 | 0 | 0 | 30 | 0 | 1 | 0 | 0
slurm-cs-rich-nguyen | 9 | 04:10:56 | 0 | 9 | 0 | 0 | 9 | 0 | 0 | 0 | 0
slurm-cs-hyojoon-kim | 1 | 00:00:02 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0

(The cpu, gpu, nolim, and gnolim columns give each group's job count per partition. The running, preempted, requeued, pending, suspended, boot_fail, deadline, node_fail, resizing, and revoked counts were zero for every row and are omitted from this table.)

SLURM Usage by NodeName

Nodename | total_jobs | cputime(HH:MM:SS) | completed | cancelled | failed | timeout | out_of_memory
bigcat01 | 1255 | 9387:28:48 | 1119 | 33 | 66 | 20 | 17
bigcat02 | 1371 | 9358:49:16 | 1124 | 79 | 138 | 14 | 16
serval08 | 271 | 8998:21:14 | 222 | 7 | 39 | 2 | 1
bigcat03 | 1505 | 8136:06:02 | 1340 | 29 | 102 | 21 | 13
bigcat04 | 2028 | 7702:53:24 | 1654 | 178 | 160 | 21 | 15
serval07 | 114 | 7089:31:02 | 83 | 7 | 23 | 1 | 0
bigcat06 | 1851 | 6786:38:24 | 1534 | 161 | 130 | 12 | 14
bigcat05 | 2180 | 6290:38:44 | 1861 | 159 | 133 | 15 | 12
jaguar01 | 901 | 6000:07:48 | 802 | 4 | 88 | 6 | 1
lotus | 384 | 5879:44:48 | 253 | 36 | 78 | 12 | 5
heartpiece | 59 | 5252:06:52 | 55 | 2 | 0 | 1 | 1
puma01 | 1316 | 4818:57:02 | 1059 | 145 | 87 | 14 | 11
cheetah08 | 670 | 4148:23:42 | 572 | 26 | 62 | 7 | 3
affogato05 | 957 | 3730:53:54 | 796 | 66 | 83 | 5 | 7
affogato04 | 1114 | 3646:41:42 | 911 | 95 | 89 | 6 | 13
cheetah02 | 1188 | 3525:13:16 | 1057 | 24 | 102 | 5 | 0
affogato01 | 1122 | 3506:21:36 | 909 | 54 | 143 | 11 | 5
serval06 | 19 | 3205:03:54 | 10 | 2 | 6 | 1 | 0
ai05 | 418 | 3082:55:26 | 264 | 9 | 143 | 1 | 1
jaguar02 | 983 | 3020:46:38 | 848 | 31 | 97 | 4 | 3
cheetah01 | 209 | 3020:27:44 | 158 | 15 | 31 | 3 | 2
adriatic05 | 251 | 2935:52:50 | 162 | 19 | 68 | 1 | 1
adriatic02 | 262 | 2876:14:42 | 187 | 21 | 53 | 0 | 1
jaguar03 | 22 | 2848:06:54 | 15 | 0 | 7 | 0 | 0
jinx01 | 164 | 2821:08:08 | 106 | 4 | 53 | 1 | 0
slurm2 | 130 | 2811:20:58 | 121 | 1 | 4 | 4 | 0
jinx02 | 162 | 2798:41:40 | 106 | 4 | 51 | 0 | 1
adriatic01 | 363 | 2791:03:26 | 276 | 24 | 62 | 1 | 0
affogato02 | 1227 | 2782:39:00 | 959 | 69 | 187 | 6 | 6
cortado01 | 1031 | 2752:15:10 | 541 | 69 | 418 | 2 | 1
adriatic03 | 256 | 2710:26:52 | 173 | 23 | 59 | 1 | 0
titanx03 | 139 | 2690:18:32 | 86 | 1 | 51 | 1 | 0
slurm1 | 114 | 2651:11:20 | 113 | 1 | 0 | 0 | 0
slurm5 | 52 | 2648:11:38 | 51 | 1 | 0 | 0 | 0
titanx05 | 23 | 2570:44:26 | 22 | 1 | 0 | 0 | 0
jaguar06 | 1055 | 2474:38:36 | 1011 | 5 | 33 | 5 | 1
cortado07 | 392 | 2431:14:58 | 251 | 46 | 92 | 3 | 0
adriatic06 | 318 | 2362:15:36 | 228 | 21 | 69 | 0 | 0
adriatic04 | 245 | 2339:36:38 | 160 | 25 | 57 | 3 | 0
struct01 | 1227 | 2337:54:22 | 921 | 47 | 248 | 7 | 4
nekomata01 | 573 | 2328:34:32 | 538 | 12 | 16 | 4 | 3
struct04 | 852 | 2206:42:52 | 550 | 85 | 208 | 4 | 5
struct09 | 903 | 2191:56:52 | 722 | 36 | 134 | 6 | 5
struct03 | 1166 | 2189:59:46 | 917 | 57 | 182 | 7 | 3
affogato07 | 496 | 2124:43:52 | 405 | 37 | 42 | 5 | 7
affogato09 | 597 | 1970:41:30 | 502 | 34 | 50 | 8 | 3
affogato11 | 410 | 1967:51:54 | 364 | 11 | 34 | 1 | 0
cortado02 | 1047 | 1966:37:18 | 593 | 50 | 401 | 0 | 3
lynx09 | 885 | 1961:23:56 | 732 | 70 | 72 | 5 | 6
cortado03 | 796 | 1937:32:54 | 502 | 68 | 222 | 0 | 4
struct08 | 1047 | 1919:56:24 | 851 | 41 | 145 | 7 | 3
struct07 | 894 | 1881:53:34 | 659 | 39 | 179 | 9 | 8
affogato06 | 555 | 1872:34:14 | 450 | 37 | 52 | 11 | 5
struct06 | 1239 | 1827:56:50 | 950 | 46 | 233 | 9 | 1
struct05 | 941 | 1825:38:50 | 713 | 41 | 183 | 1 | 3
cheetah03 | 1309 | 1803:26:14 | 1287 | 5 | 11 | 3 | 3
lynx08 | 876 | 1793:39:04 | 691 | 101 | 72 | 8 | 4
cortado08 | 596 | 1746:09:32 | 412 | 66 | 102 | 13 | 3
affogato03 | 422 | 1742:28:30 | 331 | 29 | 46 | 9 | 7
cortado09 | 550 | 1732:48:00 | 396 | 68 | 75 | 10 | 1
affogato10 | 647 | 1644:32:06 | 511 | 42 | 86 | 5 | 3
affogato08 | 732 | 1627:16:12 | 602 | 48 | 70 | 7 | 5
affogato15 | 501 | 1620:35:40 | 420 | 11 | 69 | 1 | 0
cortado10 | 596 | 1609:22:38 | 498 | 50 | 30 | 15 | 3
cheetah04 | 5 | 1534:54:24 | 1 | 2 | 1 | 1 | 0
struct02 | 1165 | 1524:30:26 | 933 | 41 | 183 | 3 | 5
affogato13 | 513 | 1481:48:36 | 428 | 14 | 71 | 0 | 0
ai06 | 121 | 1448:56:36 | 93 | 4 | 22 | 2 | 0
jaguar05 | 489 | 1301:47:52 | 359 | 16 | 111 | 3 | 0
affogato14 | 373 | 1268:17:04 | 321 | 11 | 41 | 0 | 0
lynx03 | 107 | 1261:58:22 | 71 | 0 | 35 | 1 | 0
lynx02 | 29 | 1231:50:58 | 26 | 0 | 2 | 1 | 0
ai03 | 67 | 1209:34:58 | 57 | 2 | 6 | 2 | 0
panther01 | 428 | 1206:19:44 | 318 | 59 | 45 | 3 | 3
lynx10 | 377 | 1100:28:24 | 330 | 4 | 41 | 2 | 0
cheetah09 | 538 | 1093:41:48 | 485 | 7 | 44 | 2 | 0
lynx01 | 124 | 1030:39:22 | 112 | 3 | 7 | 2 | 0
lynx04 | 30 | 987:34:24 | 28 | 0 | 2 | 0 | 0
ai04 | 230 | 965:11:08 | 183 | 1 | 45 | 1 | 0
ai01 | 123 | 961:38:42 | 104 | 8 | 10 | 1 | 0
ai02 | 142 | 939:33:22 | 119 | 6 | 17 | 0 | 0
lynx07 | 147 | 924:59:48 | 138 | 0 | 9 | 0 | 0
lynx06 | 177 | 854:59:44 | 134 | 0 | 43 | 0 | 0
lynx05 | 153 | 836:39:42 | 142 | 0 | 11 | 0 | 0
serval09 | 171 | 803:01:40 | 158 | 0 | 13 | 0 | 0
ai10 | 339 | 751:15:20 | 304 | 12 | 18 | 2 | 3
ai09 | 310 | 750:43:28 | 280 | 13 | 14 | 2 | 1
cortado04 | 692 | 736:51:48 | 401 | 74 | 212 | 1 | 4
ai08 | 531 | 637:48:10 | 223 | 11 | 296 | 0 | 1
ai07 | 442 | 590:06:24 | 127 | 12 | 302 | 1 | 0
cortado06 | 483 | 508:04:36 | 271 | 38 | 158 | 12 | 4
serval03 | 62 | 445:39:42 | 55 | 3 | 2 | 2 | 0
cortado05 | 572 | 350:29:16 | 290 | 56 | 222 | 4 | 0
slurm4 | 118 | 212:04:44 | 111 | 0 | 4 | 1 | 2
slurm3 | 121 | 147:33:24 | 112 | 0 | 8 | 1 | 0

(The running, preempted, requeued, pending, suspended, boot_fail, deadline, node_fail, resizing, and revoked counts were zero for every row and are omitted from this table.)

slurm_report_one_week.txt · Last modified: 2026/04/26 17:00 by 127.0.0.1