Changelog Updates

Memex Login Hung on 11/17/21

by Floyd Fayton
Initial incident and probable cause: a broken pipe on the login node, caused by a file update pushed to the login node. The master node and the login node were both rebooted, and the “wwsh file sync” commands were automated via memex_routecheck.sh (run from crontab and cron.hourly).
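
The post does not include the contents of memex_routecheck.sh, so the following is only a minimal sketch of how such a cron-driven sync might look on a Warewulf-managed cluster, where "wwsh file sync" pushes managed file updates out to nodes. The script logic, the login hostname (memex-login), and the install path are assumptions, not details from the post.

    #!/bin/bash
    # memex_routecheck.sh -- hypothetical sketch, not the actual script from the post.
    # Intended to run hourly, e.g. from cron.hourly or a root crontab entry such as:
    #   0 * * * * /usr/local/sbin/memex_routecheck.sh

    # Verify the login node (hostname assumed) still answers before syncing files to it.
    if ! ping -c 1 -W 5 memex-login >/dev/null 2>&1; then
        logger -t memex_routecheck "login node unreachable, skipping wwsh file sync"
        exit 1
    fi

    # Re-sync Warewulf-managed files (passwd, group, hosts, etc.) to provisioned nodes.
    wwsh file sync
    logger -t memex_routecheck "wwsh file sync completed"
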
Tags: New, System Failure, Maintenance, Announcement

Login hangs after kernel message...

by Floyd Fayton, HPC Admin
Update (6/29/20): The login hang was caused by high I/O load, which is returning after a weekend hiatus. Unfortunately, limiting I/O on the login node is not yet feasible due to the design of the system. The issue is not due to a lack of...
Tags: New, System Failure

Master Node Rebooted

by Floyd Fayton, HPC Admin
Incident (5/21/20): While fixing issues with the GPU nodes, the master node became unstable because several of its mount points were damaged. All of the affected mounts were runtime filesystems, so a reboot was requested and fulfilled by SRCF...
Tags: System Failure, Fix

SLURM's Default Memory Per CPU Increased (1GB --> 2GB)

by Floyd Fayton, HPC Admin
Hi All, Based on previous usage, the default allocation of 1GB of memory per CPU is too low. I have now increased this default to 2GB of memory per CPU. If you are not setting memory requirements in your job(s), this change will...
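
The post does not show the configuration change itself; a common way to set this default is the DefMemPerCPU parameter in slurm.conf (value in MB), which jobs can still override per submission. Treat the exact lines below as an illustrative sketch, not the actual Memex configuration.

    # slurm.conf (sketch) -- default memory per allocated CPU, in MB.
    # 2048 MB = 2 GB; jobs that do not request memory receive this default.
    DefMemPerCPU=2048

    # Jobs can still override the default explicitly, for example:
    #   sbatch --mem-per-cpu=4G job.sh    # 4 GB per CPU
    #   sbatch --mem=16G job.sh           # 16 GB total for the job
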
Tags: Announcement, Improvement

SLURM Priority Adjustment

by Floyd Fayton, HPC Admin
Since priorities were not working for users who use Memex less frequently and submit jobs in smaller batches, these parameters were adjusted: As a result, gres/gpu was added to: This functionality changes in SLURM 19+, but...
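
The excerpt truncates the actual parameter lists, so the snippet below is only a guess at the kind of slurm.conf multifactor-priority settings such an adjustment typically touches. The specific weights, and which setting gres/gpu was added to on Memex, are assumptions rather than values from the post.

    # slurm.conf (sketch) -- multifactor priority tuning; all values are illustrative.
    PriorityType=priority/multifactor
    PriorityWeightFairshare=10000     # increase fairshare influence for infrequent users
    PriorityWeightAge=1000            # queued jobs gain priority as they wait
    PriorityWeightJobSize=500

    # One plausible place "gres/gpu" was added: track GPUs as a TRES (appended to the
    # default TRES of cpu, mem, etc.) and weight them in the priority calculation.
    AccountingStorageTRES=gres/gpu
    PriorityWeightTRES=cpu=1000,mem=2000,gres/gpu=4000
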
Tags: Announcement, Maintenance, Improvement

Did You Know ... Slack Edition

by Floyd Fayton, HPC Admin
Did you know we have a Slack channel for HPC/Research Computing? Sign up for our Carnegie Institution for Science workspace (click here) and then join the #hpc channel. Please use your Google login ("@carnegiescience.edu" email)...
Tags: Tips, Announcement, Welcome Guide