
CS SLURM Cluster Report - 1 week

Report generated for jobs run on the CS SLURM cluster from 2026-03-08 through 2026-03-14.

Job total during this query range: 14,307

Job total since August 1, 2024: 5,609,642

This page is updated every Sunday at 5:00 PM Eastern.


SLURM Scheduler System Output

--------------------------------------------------------------------------------
Cluster Utilization 2026-03-08T00:00:00 - 2026-03-14T23:59:59
Usage reported in TRES Hours/Percentage of Total
--------------------------------------------------------------------------------
  Cluster      TRES Name              Allocated                  Down         PLND Down                    Idle            Planned                Reported 
--------- -------------- ---------------------- --------------------- ----------------- ----------------------- ------------------ ----------------------- 
       cs            cpu         153009(22.65%)           1928(0.29%)          0(0.00%)          396877(58.74%)     123868(18.33%)         675682(100.00%) 
       cs            mem     1046754476(14.38%)       47242751(0.65%)          0(0.00%)      6184864773(84.97%)           0(0.00%)     7278862000(100.00%) 
       cs       gres/gpu           6423(20.79%)            185(0.60%)          0(0.00%)           24286(78.61%)           0(0.00%)          30895(100.00%) 
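Each percentage in the utilization table is that TRES's hours in the given category divided by the Reported total, and the five categories partition the Reported total exactly. A quick check of the cpu row:

```python
# CPU-hours from the cpu row of the utilization table above.
reported = 675682
categories = {
    "Allocated": 153009,
    "Down": 1928,
    "PLND Down": 0,
    "Idle": 396877,
    "Planned": 123868,
}

# The five categories account for every reported CPU-hour.
assert sum(categories.values()) == reported

for name, hours in categories.items():
    # Reproduces the 22.65 / 0.29 / 0.00 / 58.74 / 18.33 figures above.
    print(f"{name:>9}: {100 * hours / reported:.2f}%")
```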

* Total Cluster Resources Available by Partition
 (Note: TRES is short for Trackable RESources)
PartitionName=cpu
   TRES=cpu=1534,mem=17306000M,node=39
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gpu
   TRES=cpu=2036,mem=22190000M,node=41,gres/gpu=158
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
PartitionName=nolim
   TRES=cpu=220,mem=2464000M,node=6
   TRESBillingWeights=CPU=2.0,Mem=0.15
PartitionName=gnolim
   TRES=cpu=256,mem=1626000M,node=10,gres/gpu=27
   TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
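The TRESBillingWeights above determine how many billing units a job consumes for fairshare accounting. By default Slurm sums the weighted TRES (with PriorityFlags=MAX_TRES it bills only the largest single component), and an unsuffixed Mem weight applies per megabyte (a weight like 0.25G would apply per gigabyte). A minimal sketch under those assumptions, for a hypothetical 4-CPU, 32,000 MB, 1-GPU job on the gpu partition:

```python
# gpu partition: TRESBillingWeights=CPU=1.0,Mem=0.15,GRES/gpu=2.0
# Assumes Slurm's default summed billing (no PriorityFlags=MAX_TRES)
# and a per-MB Mem weight, since the weight above carries no unit suffix.
CPU_W, MEM_W, GPU_W = 1.0, 0.15, 2.0

cpus, mem_mb, gpus = 4, 32000, 1  # hypothetical job request
billing = cpus * CPU_W + mem_mb * MEM_W + gpus * GPU_W
print(billing)  # 4*1.0 + 32000*0.15 + 1*2.0 = 4806.0
```

With these weights the memory term dominates, which is why billing weights are usually tuned (or given unit suffixes) so CPU, memory, and GPU requests bill on a comparable scale.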

SLURM Usage by Partition

PartitionName  total_jobs  cputime(HH:MM:SS)  completed  cancelled  running  failed  preempted  requeued  pending  timeout  out_of_memory  suspended  boot_fail  deadline  node_fail  resizing  revoked
-------------  ----------  -----------------  ---------  ---------  -------  ------  ---------  --------  -------  -------  -------------  ---------  ---------  --------  ---------  --------  -------
cpu                 10035        67831:53:02       8033        602        0    1053          0         0        0      270             77          0          0         0          0         0        0
gpu                  3989        58798:03:28       3524         87        0     190          0         0        0      172             16          0          0         0          0         0        0
nolim                 135        12094:32:23        120         10        0       0          0         0        0        5              0          0          0         0          0         0        0
gnolim                148         6511:27:00        128         10        0       0          0         0        0       10              0          0          0         0          0         0        0
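As a sanity check, the per-state counts in each partition row add up exactly to that partition's job total, and the four totals add up to the 14,307 jobs reported for this query range:

```python
# Job-state counts per partition from the table above
# (states not listed were 0 for every partition this week).
partitions = {
    "cpu":    {"total": 10035, "completed": 8033, "cancelled": 602,
               "failed": 1053, "timeout": 270, "out_of_memory": 77},
    "gpu":    {"total": 3989, "completed": 3524, "cancelled": 87,
               "failed": 190, "timeout": 172, "out_of_memory": 16},
    "nolim":  {"total": 135, "completed": 120, "cancelled": 10, "timeout": 5},
    "gnolim": {"total": 148, "completed": 128, "cancelled": 10, "timeout": 10},
}

# Every partition's state counts reconcile with its job total.
for name, row in partitions.items():
    assert sum(v for k, v in row.items() if k != "total") == row["total"], name

# The partition totals account for every job in the query range.
assert sum(row["total"] for row in partitions.values()) == 14307
print("state counts reconcile with job totals")
```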

SLURM Usage by Advisor Group

  • slurm-cs-undefined, users that have CS accounts but are not CS students
  • slurm-cs-unassigned, users that are CS students but do not have a listed CS advisor
(The cpu/gpu/nolim/gnolim columns give each group's job count per partition. The running, preempted, requeued, pending, suspended, boot_fail, deadline, node_fail, resizing, and revoked state counts were 0 for every group this week and are omitted.)

GroupName                         total_jobs  cputime(HH:MM:SS)   cpu   gpu  nolim  gnolim  completed  cancelled  failed  timeout  out_of_memory
--------------------------------  ----------  -----------------  ----  ----  -----  ------  ---------  ---------  ------  -------  -------------
slurm-cs-undefined                       149        66021:46:48    77    37     15      20         16         63      12       58              0
slurm-cs-ashish-venkat                  6444        26294:23:17  4878  1318    120     128       5959         99     164      182             40
slurm-cs-unassigned                       58        11969:59:44    29    29      0       0         21          5      29        2              1
slurm-cs-madhur-behl                     250        10468:37:42     2   248      0       0         68         23      20      139              0
slurm-cs-adwait-jog                     1024         9511:23:44  1024     0      0       0        505        464       0       55              0
slurm-cs-henry-kautz                     110         4588:54:16    78    32      0       0         23          7      73        7              0
slurm-cs-mark-floryan                      3         2887:42:30     0     3      0       0          2          0       0        1              0
slurm-cs-chen-yu-wei                    4857         2813:28:08  2808  2049      0       0       4074          0     744        0             39
slurm-cs-lu-feng                          80         2762:30:30     9    71      0       0         36         10      30        4              0
slurm-cs-tianhao-wang                     26         2529:02:40     0    26      0       0          6          6      11        0              3
slurm-cs-mircea-stan                     120         1535:56:06   120     0      0       0         38          0      82        0              0
slurm-cs-yen-ling-kuo                   1006         1456:04:40   958    48      0       0        970         12      16        4              4
slurm-cs-ferdinando-fioretto              10          850:30:12     0    10      0       0          0          5       4        0              1
slurm-cs-kevin-skadron                    81          780:45:58    52    29      0       0         60         12       5        4              0
slurm-cs-brad-campbell                     1          241:45:36     0     1      0       0          0          0       1        0              0
slurm-cs-yue-cheng                        20          169:28:56     0    20      0       0         16          0       2        1              1
slurm-cs-zezhou-cheng                      8          151:27:30     0     8      0       0          0          0       8        0              0
slurm-cs-hadi-daneshmand                  19          125:44:48     0    19      0       0          1          3      13        0              2
slurm-cs-nada-basit                        3           52:50:16     0     3      0       0          0          0       1        0              2
slurm-cs-matheus-xavier-ferreira          22           18:06:40     0    22      0       0          7          0      15        0              0
slurm-cs-sebastian-elbaum                 16           05:25:52     0    16      0       0          3          0      13        0              0

SLURM Usage by NodeName

(The running, preempted, requeued, pending, suspended, boot_fail, deadline, node_fail, resizing, and revoked state counts were 0 for every node this week and are omitted.)

Nodename    total_jobs  cputime(HH:MM:SS)  completed  cancelled  failed  timeout  out_of_memory
----------  ----------  -----------------  ---------  ---------  ------  -------  -------------
puma01            1026        14795:48:36        666         36     283       18             23
cheetah04           13        10925:24:18         10          2       1        0              0
affogato02         522         5040:58:34        473         19      16       13              1
cheetah02           14         4767:00:44          3          4       2        4              1
heartpiece          15         4630:44:06          9          4       0        2              0
jaguar03            91         3850:31:08         23          9       9       50              0
cheetah01           44         3645:47:32         15         15       8        3              3
lynx08             209         3579:47:34        175         22       0       12              0
affogato01         306         3310:25:58        284          8       6        6              2
serval09           196         3307:26:22        189          0       5        2              0
jaguar02            47         3099:01:36         23          6      13        3              2
bigcat01           616         3056:34:22        464         72      74        5              1
cheetah08           49         2853:20:54         34          7       3        4              1
lynx09             287         2779:25:36        253         20       0       14              0
struct01           262         2545:31:02        227         12       9       13              1
cheetah03           30         2518:42:46         21          6       1        2              0
slurm2              48         2497:45:22         45          2       0        1              0
struct02           267         2497:34:48        230         12       9       15              1
struct09           694         2453:57:08        213         12     448       15              6
slurm5              12         2356:29:32          9          2       0        1              0
slurm1              21         2355:12:03         18          2       0        1              0
affogato05         288         2348:24:28        251         21       6        9              1
struct08           276         2338:08:02        237         11      11       14              3
struct03           259         2326:08:04        221         12       9       16              1
lotus               59         2263:44:04          9          8      14       28              0
affogato03         160         2225:03:24        122         15      13       10              0
jaguar01            93         2066:25:00         73          3       7        9              1
bigcat02           530         2044:07:56        438         21      56        3             12
affogato04         247         2042:17:30        231          5       5        6              0
serval06           186         1961:20:32        164          4       8        9              1
panther01          152         1934:29:06        122         11      12        7              0
bigcat03           666         1834:24:50        538         58      59       11              0
serval03           538         1688:27:36        522          3       7        5              1
struct04           296         1495:28:00        250         27       4       14              1
struct05           278         1475:12:20        236         27       4       10              1
affogato11         174         1446:03:50        167          3       3        1              0
struct07           289         1444:17:24        237         32       5       14              1
cheetah09           68         1412:27:38         60          3       2        2              1
affogato14         117         1372:21:22        114          1       1        1              0
ai05                43         1370:36:44         39          2       0        2              0
adriatic06         147         1364:28:18        133          1       5        8              0
affogato13         135         1363:04:42        131          1       2        1              0
affogato15         148         1305:06:40        140          1       6        1              0
jinx01              29         1300:18:18         25          2       0        2              0
jinx02              32         1297:13:02         28          2       0        2              0
jaguar06            79         1247:27:20         37          5      25       12              0
ai07                10         1206:37:14          6          2       0        2              0
struct06           266         1163:35:10        231         21       0       14              0
affogato07         212         1092:01:28        177         22       2        9              2
affogato08         202         1066:03:54        171         22       3        5              1
affogato06         195         1059:24:54        159         22       4        9              1
serval08           149         1042:39:40        144          0       3        2              0
affogato10         210         1034:30:12        176         21       5        7              1
serval07           149         1003:16:30        145          2       1        1              0
ai06               120          963:50:48        107          0       9        2              2
affogato09         201          963:07:48        170         15       7        8              1
adriatic01         158          860:00:10        151          1       0        6              0
adriatic04         147          840:05:04        136          1       4        6              0
adriatic02         132          827:04:42        124          1       1        6              0
adriatic03         128          818:58:54        119          1       2        6              0
ai01               123          665:55:08        118          1       4        0              0
titanx03             9          648:07:42          7          1       0        1              0
titanx05            13          632:22:38         11          1       0        1              0
bigcat04           253          511:59:16        232         13       0        2              6
jaguar05            20          490:55:22         15          2       3        0              0
nekomata01         142          367:03:12        135          0       7        0              0
bigcat05           208          341:07:26        195         11       0        2              0
lynx10             110          283:07:54        110          0       0        0              0
slurm3              39          254:21:20         39          0       0        0              0
adriatic05         105          240:02:16        104          0       0        0              1
ai03                99          125:59:48         99          0       0        0              0
bigcat06           179          103:07:42        176          3       0        0              0
ai02                87           95:46:54         86          0       0        0              1
ai04                70           91:27:24         63          0       6        0              1
titanx02            10           46:26:40         10          0       0        0              0
cortado01           97           29:37:12         97          0       0        0              0
cortado02           97           22:08:36         97          0       0        0              0
cortado03           86           21:07:42         86          0       0        0              0
cortado05           65           19:16:24         65          0       0        0              0
cortado04           76           18:54:42         76          0       0        0              0
ai08                 2           09:44:42          2          0       0        0              0
cortado06           35           03:23:06         35          0       0        0              0
cortado07           22           02:05:12         22          0       0        0              0
lynx01               1           00:42:24          0          1       0        0              0
ai09                 0           00:00:00          0          0       0        0              0
ai10                 0           00:00:00          0          0       0        0              0
cortado08            0           00:00:00          0          0       0        0              0
cortado09            0           00:00:00          0          0       0        0              0
cortado10            0           00:00:00          0          0       0        0              0
lynx02               0           00:00:00          0          0       0        0              0
lynx03               0           00:00:00          0          0       0        0              0
lynx04               0           00:00:00          0          0       0        0              0
lynx05               0           00:00:00          0          0       0        0              0
lynx06               0           00:00:00          0          0       0        0              0
lynx07               0           00:00:00          0          0       0        0              0
slurm4               0           00:00:00          0          0       0        0              0

slurm_report_one_week.txt · Last modified: 2026/03/22 17:00 by 127.0.0.1