Hi Tongtong,

I noticed you're still specifying the "centos7" constraint for simulation jobs under the hps priority account. Have you tested HPS simulations on the new nodes? They make up about 20% of the farm and are currently idle, while the centos7 queue is backlogged. Ideally we should submit jobs with the "general" constraint rather than more restrictive ones (e.g. centos7 or centos77). See the email from Bryan below.
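As a sketch of what that change looks like, here is a minimal Slurm batch script using the "general" constraint. The job name, script name, and resource numbers are placeholders for illustration, not our actual production settings:

```shell
#!/bin/bash
#SBATCH --job-name=hps-sim        # hypothetical job name
#SBATCH --constraint=general      # any AMD or Xeon production node, per Bryan's list below
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=2G          # matches the 2GB-per-slot sizing of the new nodes

# run-hps-sim.sh stands in for the actual simulation driver script
./run-hps-sim.sh
```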


Begin forwarded message:

From: Bryan Hess <[log in to unmask]>
Subject: [Jlab-scicomp-briefs] Scientific Computing farm additions: new AMD farm nodes, AMD ifarm1901.
Date: December 12, 2019 at 09:36:55 EST
To: "[log in to unmask]" <[log in to unmask]>

The new farm19 nodes are available for use. This adds over 5000 job slots to the farm. 

A new development machine, ifarm1901, is also available for interactive work. 

The new machines are 64 core (128 thread) AMD "Rome" processors with 256GB of memory each, or 2GB of memory per job slot. The nodes run CentOS 7.7 and now include support for CVMFS, XROOTD, and slurm interactive sessions. 

The remainder of the farm will be upgraded to CentOS 7.7 in a series of steps. Please migrate your work to the new CentOS 7.7 nodes as soon as possible to facilitate the upgrade of the remaining nodes. 

CPU constraints for farm node selection:
  • amd - will select AMD processors only
  • xeon - will select Intel Xeon processors only
  • general - will select a node with either AMD or Xeon processors, on any CentOS system that is part of the production batch farm. 
  • farm19 - will select only farm19 nodes. 

Operating System constraints for farm node selection:
  • centos77 - the new nodes are tagged with the centos77 constraint, but purposefully not the less-specific centos7 constraint. 
  • centos72 - this selects the old nodes; this set of nodes will shrink. 
  • centos7 - this tag is deprecated and will not be carried forward as a constraint. 

A complete list of feature constraints/tags can always be shown using the slurm command sinfo -o "%30N %f"

Submitting batch jobs is supported using both jsub (Auger as a slurm front end) and sbatch (using slurm directly).
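For the sbatch route, constraints can also be given on the command line. A sketch, assuming a hypothetical script job.sh (Slurm's --constraint flag accepts & to AND multiple tags):

```shell
# select only the new farm19 nodes
sbatch --constraint=farm19 job.sh

# or combine a CPU constraint with an OS constraint
sbatch --constraint="amd&centos77" job.sh
```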

Documentation is available online at

