Hi all,

Multiple people have been wondering about the status of the Pass2 jobs that were submitted just before the Christmas break.

During the break a significant number of jobs failed.
The main issue is the so-called "No space left on disk" exception.

I filed a CCPR and was in contact with the Computer Center before and after the break.

Briefly, what happened:
Their response was that the Auger scheduler assigns more jobs to a node than the node's local disk can hold. After
about 7-8 hours of running, the local disk of the node becomes completely full, crashing all the jobs on that node.

The Computer Center advised that the problem is related to the so-called "farm18" nodes, so before the break I directed
jobs to "farm16" nodes to keep them from being run on any farm18 node. However, during the Christmas break we saw the
same issue on farm16 nodes as well.
In addition, a number of nodes went offline (restarted) during the break for some reason, again causing many jobs to fail.
Given the high failure rate, I didn't submit additional jobs during the break, until we understand the cause of the failures.

After the break I asked the Computer Center about it, and they suggested using Slurm instead of Auger.
Half of the nodes are now running under Slurm, and in the near future all jobs will be submitted through Slurm instead of Auger.
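For anyone who hasn't used Slurm yet, a minimal submission script might look like the sketch below. The partition name, resource requests, and the `run_pass2.sh` worker script are placeholders for illustration, not the actual farm configuration. The `--tmp` request is the relevant difference from our Auger setup: it tells Slurm how much local scratch disk the job needs, so the scheduler should not pack more jobs onto a node than its local disk can hold.

```shell
#!/bin/bash
# Sketch of a Slurm batch script for a Pass2 job.
# Partition, limits, and the worker script name are assumed, not real farm values.
#SBATCH --job-name=pass2-recon
#SBATCH --partition=production   # assumed partition name
#SBATCH --time=12:00:00          # wall-time limit
#SBATCH --mem=2G                 # memory per node
#SBATCH --tmp=20G                # minimum local scratch disk required,
                                 # so nodes aren't oversubscribed on disk
srun ./run_pass2.sh              # hypothetical worker script
```

Submitting is then just `sbatch pass2.slurm`, and `squeue -u $USER` shows the job state.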

Over the past week I ran tests on different nodes to check whether the same problem still exists.

It happened again on farm18 nodes, but on the others a single run worked well.

I promised Maurik that I would present the details at the coming Wednesday meeting.

Details of the CCPR are at the following link:
https://misportal.jlab.org/mis/apps/ccpr/ccpr_user/ccpr_user_request_history.cfm?ccpr_number=249598

Rafo


On 1/7/19 12:26 PM, Graf, Norman A. wrote:

Hello Rafo,


Can you please bring me up to speed on where we are with the Pass2 recon?


I just got back and have not yet gone through all of my email, so apologies if you've posted a progress report already.


Norman




Use REPLY-ALL to reply to list

To unsubscribe from the HPS-SOFTWARE list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1