CS SLURM Cluster Report - 4 weeks

Report generated for jobs run on the CS SLURM cluster from 2026-01-11 through 2026-02-07.

Job total during this query range: 78,242

Job total since August 1st 2024: 5,379,393

This page is updated every Sunday at 5:00pm EST.
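
A weekly refresh of this kind is normally scheduled with cron. A minimal sketch, assuming a hypothetical report-generation script (the actual script name and path are not shown on this page):

  # Hypothetical crontab entry: rebuild the report every Sunday at 5:00pm server time
  0 17 * * 0 /usr/local/sbin/build_slurm_report.sh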


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2026-01-11T00:00:00 - 2026-02-07T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name               Allocated                  Down         PLND Down                    Idle             Planned                 Reported 
--------- -------------- ----------------------- --------------------- ----------------- ----------------------- ------------------- ------------------------ 
       cs            cpu          495875(17.81%)          23594(0.85%)          0(0.00%)         1673524(60.10%)      591775(21.25%)         2784768(100.00%) 
       cs            mem      3783262487(12.80%)       72285180(0.24%)          0(0.00%)     25692292333(86.95%)            0(0.00%)     29547840000(100.00%) 
       cs       gres/gpu           25781(20.30%)           2802(2.21%)          0(0.00%)           98425(77.49%)            0(0.00%)          127008(100.00%) 
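
The utilization summary above is standard sreport output. A query along the following lines (an assumption; the exact command used to generate this page is not recorded here) produces a report of this shape for the same window:

  # Cluster-wide TRES utilization in hours and percent of total for the four-week window
  sreport -t hourper --tres=cpu,mem,gres/gpu cluster utilization \
          start=2026-01-11T00:00:00 end=2026-02-07T23:59:59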

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1596,mem=17562000M,node=40
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2066,mem=22254000M,node=42,gres/gpu=162
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=226,mem=2528000M,node=7
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=256,mem=1626000M,node=10,gres/gpu=27
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
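
The partition definitions above follow the layout of scontrol output, and the TRESBillingWeights scale each job's allocated CPU, memory, and GPU into its billing value. Assuming the block was taken from the controller, the same information can be listed with:

  # Show configured TRES totals and TRESBillingWeights for every partition (or a single one)
  scontrol show partition
  scontrol show partition gpu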

SLURM Usage by Partition

PartitionName  total_jobs  cputime (HH:MM:SS)  completed  cancelled  running  failed  preempted  requeued  pending  timeout  out_of_memory  suspended  boot_fail  deadline  node_fail  resizing  revoked
cpu                 52754        225753:36:48      42325       3486        0    6492          0         0        0      305            146          0          0         0          0         0        0
gpu                 20359        215972:22:34      16829        999        0    2223          0         2        0      182            124          0          0         0          0         0        0
nolim                5084         32359:42:39       2650        371        0    2062          0         0        0        0              1          0          0         0          0         0        0
gnolim                 45          7487:57:44          2         10        0      28          0         0        0        5              0          0          0         0          0         0        0
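
Per-partition job-state counts of this kind come from the accounting database. A minimal sketch with sacct (an assumption, not necessarily the exact query behind this table):

  # One record per job allocation in the window (partition and final state),
  # then count jobs per partition/state pair
  sacct -a -X -S 2026-01-11 -E 2026-02-08 -n -P --format=Partition,State |
    awk -F'|' '{count[$1"|"$2]++} END {for (k in count) print k, count[k]}'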

SLURM Usage by Advisor Group

  • slurm-cs-undefined: users who have CS accounts but are not CS students
  • slurm-cs-unassigned: users who are CS students but do not have a listed CS advisor
GroupName  total_jobs  cputime (HH:MM:SS)  cpu  gpu  nolim  gnolim  completed  cancelled  running  failed  preempted  requeued  pending  timeout  out_of_memory  suspended  boot_fail  deadline  node_fail  resizing  revoked
slurm-cs-undefined2490182933:05:1217434672354575511680552000150000000
slurm-cs-henry-kautz1837893457:36:43134001294849012253113104716000177101000000
slurm-cs-ashish-venkat3087937632:36:283087900027092173301975000790000000
slurm-cs-yu-meng1536753:23:2821300250300050000000
slurm-cs-tianhao-wang27030992:18:240270001785004000011000000
slurm-cs-lu-feng1073426378:58:4620810526009099335011900002108000000
slurm-cs-yen-ling-kuo55024540:16:42490600049226022000010000000
slurm-cs-yue-cheng2313840:41:52023008101200020000000
slurm-cs-unassigned1386810099:25:105750811800113122670208600018221000000
slurm-cs-kevin-skadron1596147:01:047188007134040000131000000
slurm-cs-zezhou-cheng2785130:20:08027800173409200090000000
slurm-cs-madhur-behl1854939:17:1217411001381018020026000000
slurm-cs-ferdinando-fioretto3234558:06:4415308001959802700030000000
slurm-cs-adwait-jog202939:18:0420000970200020000000
slurm-cs-hyojoon-kim19790:20:1401900640800010000000
slurm-cs-briana-morrison26383:45:44026001520600012000000
slurm-cs-mircea-stan251:48:002000200000000000000
slurm-cs-haiying-shen1705:15:44017001001600000000000
slurm-cs-sebastian-elbaum600:04:060600500000001000000
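
The advisor groups are accounting groupings; assuming they map onto SLURM accounts named slurm-cs-*, a comparable per-group summary can be sketched with sacct:

  # Job count and total CPU-seconds per account over the window
  sacct -a -X -S 2026-01-11 -E 2026-02-08 -n -P --format=Account,CPUTimeRAW |
    awk -F'|' '{jobs[$1]++; cpu[$1]+=$2} END {for (a in jobs) print a, jobs[a], cpu[a]}'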

SLURM Usage by NodeName

NodeName  total_jobs  cputime (HH:MM:SS)  completed  cancelled  running  failed  preempted  requeued  pending  timeout  out_of_memory  suspended  boot_fail  deadline  node_fail  resizing  revoked
puma01270225545:18:0420712360376000118000000
serval0753019095:41:324861702300040000000
serval0617518106:55:081351801800031000000
cheetah04254317107:41:56245814026020421000000
bigcat01611516291:53:48460144209880004440000000
bigcat02481413997:05:28387927806100002819000000
serval0925813785:23:242181402300012000000
serval0844813390:22:024031302700032000000
serval0364312563:29:426151401000031000000
bigcat03252512556:15:442115165023400065000000
jaguar0118611953:29:369930044000112000000
bigcat04317711602:01:002497132054000044000000
cheetah0847611169:04:563634905600035000000
bigcat05334510362:18:502539160064400020000000
cheetah0283010314:29:3065671090000310000000
affogato02114710015:51:44926112010100044000000
heartpiece8179804:02:5241281032300001000000
lynx085439128:22:4036665010800031000000
slurm211789010:43:4064485044900000000000
cheetah098888485:23:1042153040400091000000
lotus2568025:20:0411147083000015000000
affogato057187666:14:145967104700040000000
affogato048217632:56:0265173086000101000000
affogato0110497618:01:028528909700074000000
hydro6097522:32:3843459011400020000000
bigcat0641466977:44:2635441740414000131000000
jaguar0317056465:54:44151362088000042000000
slurm118536416:33:15100659078800000000000
affogato11866384:23:4444403100070000000
cheetah0111636307:31:461006690440003311000000
struct0110396141:45:2085166011300063000000
lynx096055592:30:5843951011100031000000
slurm58825545:52:3245261036900000000000
affogato131365393:47:2053908600060000000
jaguar0623255029:34:3221718905600036000000
struct029785004:47:108175609900033000000
affogato14794936:55:1243203700060000000
struct049494931:53:187897208300023000000
struct059764874:37:5275571014500023000000
struct039714853:03:387998108400043000000
struct069974663:25:3879973011900033000000
lynx10494402:03:1222002000070000000
struct098514382:20:367236205900034000000
struct088964373:30:487626706200023000000
struct079684244:33:2479962089000153000000
affogato067864215:14:3065651066000112000000
panther0115504124:54:041149120027000038000000
ai023674040:15:463192002200060000000
affogato089803890:37:068496006200072000000
ai033693886:32:543271601900070000000
cheetah033043801:40:342562801600004000000
ai043733465:12:323321402000070000000
ai064593347:29:523762305600040000000
nekomata0112223209:51:4410852809600067000000
affogato15722743:04:2431904400060000000
jaguar0215022545:07:30127833018700004000000
adriatic011382375:06:54492805800030000000
adriatic022252181:28:521472305300020000000
affogato099022165:04:167856304600071000000
cortado0110241998:08:508175501140002315000000
affogato078901912:55:547676205000092000000
affogato109931901:46:148546906400060000000
ai05121816:20:00140700000000000
adriatic034211811:57:203383105200000000000
affogato038461740:22:5062559016000020000000
cortado028751738:22:14719290104000230000000
cortado037321712:42:1459131087000230000000
jinx01111681:17:20040700000000000
cortado045401511:46:304602204900090000000
slurm32991441:29:4612860011100000000000
jinx0281363:50:24020500010000000
cortado054341170:25:563762103600010000000
cortado06344926:17:023051802100000000000
ai073867:59:44100100010000000
adriatic04106602:41:12202406200000000000
titanx054586:10:16000300010000000
titanx024586:10:08000300010000000
titanx033586:09:52000200010000000
cortado07346515:30:362883302500000000000
adriatic0574483:27:20201603800000000000
adriatic0639461:07:1620801100000000000
jaguar051830300:41:1615724025300010000000
ai0128291:36:501910800000000000
cortado10243244:12:24213250500000000000
cortado08185190:44:36155270300000000000
cortado09144140:51:58113250600000000000
lynx1129127:35:360302000060000000
slurm42395:43:400230000000000000
epona3145:16:548102200000000000
lynx01600:00:08000600000000000
ai08000:00:00000000000000000
ai09000:00:00000000000000000
ai10000:00:00000000000000000
lynx02000:00:00000000000000000
lynx03000:00:00000000000000000
lynx04000:00:00000000000000000
lynx05000:00:00000000000000000
lynx06000:00:00000000000000000
lynx07000:00:00000000000000000
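
Per-node figures require attributing each job to the node(s) it ran on. A minimal sketch using the NodeList field of sacct, leaving multi-node lists unexpanded (for exact per-node totals they would need to be expanded, e.g. with scontrol show hostnames):

  # Count job allocations per NodeList value over the window, busiest first
  sacct -a -X -S 2026-01-11 -E 2026-02-08 -n -P --format=NodeList |
    sort | uniq -c | sort -rn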
