# Changelog Updates (hpc-internal.carnegiescience.edu)

## SLURM's Default Memory Per CPU Increased (1GB --> 2GB)

Published 2020-05-07 by Floyd Fayton ([email protected])

Hi All,

Based on previous usage, the default allocation of 1GB of memory per CPU is too low. I have now increased this default to 2GB of memory per CPU.

If you are **not** setting memory requirements in your job(s), this change **will** affect them.

If you are already setting memory requirements in your job(s), this change will **not** affect you.

Best practice is to specify memory in all jobs. If you haven't been doing so and are now running into issues (waiting in partitions for longer than usual), you'll now need to specify memory per CPU in your jobs:

`--mem-per-cpu=1G` # when passed on the command line,
or
`#SBATCH --mem-per-cpu=1G` # when added to a submission script.

Adding either option should set your memory per CPU back to 1GB.

**Again**, if you are already using the `--mem=` or `--mem-per-*=` options, no changes are required for your job(s).

Thank You

## SLURM Priority Adjustment

Published 2020-05-07 by Floyd Fayton ([email protected])

Since priorities were not working well for users who use Memex less frequently and submit jobs in smaller batches, these parameters were adjusted:

`PriorityWeightFairShare=20000`
`PriorityWeightTRES=CPU=1000,Mem=2000,GRES/gpu=3000`

As a result, `gres/gpu` was added to:

`AccountingStorageTRES=cpu,mem,energy,node,billing,fs/disk,vmem,pages,gres/gpu`

This functionality changes in SLURM 19+, but our current version is 18.08, which is the latest version packaged with OpenHPC 1.3.

**Update:** AcctGatherFilesystemType was also enabled for Lustre.
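If you want to see how these weights affect your own jobs, the standard SLURM client tools can report them. A minimal sketch, assuming `sprio` and `sshare` are available on the login node (they ship with SLURM 18.08); the `$USER` filter is only an illustration:

```bash
# Show the configured priority weights (FairShare, TRES, etc.)
sprio --weights

# Show the per-factor priority breakdown for your pending jobs
sprio --long --user="$USER"

# Show your fair-share usage, which feeds the FairShare factor
sshare --users="$USER"
```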
## New Nodes Added - memex-c[117-124]

Published 2019-02-20 by Floyd Fayton ([email protected])

Nodes memex-c[117-124] were added to Memex on 2/13/19. These nodes are identical to memex-c[109-116]; each has 256GB of raw memory and up to 250GB of free/unused memory per node.

Users can request any of these nodes by adding the SLURM option `-C 250G` to sbatch scripts, srun, or salloc. For instance, adding:

> #SBATCH -C 250G

to any sbatch script restricts the job to these servers, memex-c[109-124]. They are available in the SHARED and DTM partitions on Memex.
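For reference, here is a minimal submission script that puts these announcements together. This is a sketch, not a site-provided template: the job name, task count, and walltime are hypothetical placeholders, while the `-C 250G` constraint, the SHARED partition, and the 2GB-per-CPU default come from the entries above.

```bash
#!/bin/bash
#SBATCH --job-name=bigmem-test    # hypothetical job name
#SBATCH --partition=SHARED        # these nodes sit in the SHARED and DTM partitions
#SBATCH -C 250G                   # constrain the job to the 250GB nodes, memex-c[109-124]
#SBATCH --ntasks=1                # placeholder task count
#SBATCH --mem-per-cpu=2G          # set memory explicitly (best practice; 2GB is now the default)
#SBATCH --time=01:00:00           # placeholder walltime

# Report where the job landed; it should be one of memex-c[109-124]
srun hostname
```

Submit it with `sbatch <script>` and check placement with `squeue -u $USER`.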