Hi Nathan,

It doesn't ignore the request, but as they described, the scheduler
doesn't know that part of the node's local disk is reserved for swap,
so it does its calculation with the full disk size, while in reality the
usable space is tot_disk_size - swap_size.
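
In case it helps anyone double-check a node, here is a minimal shell
sketch of that accounting (the /scratch mount point is a placeholder;
the real local scratch path on the farm nodes may differ):

    # Minimal sketch: compare what the disk reports vs. what is usable.
    # Assumes swap is carved out of the same local disk the scheduler
    # counts; /scratch is a placeholder for the node's local scratch mount.
    df -B G /scratch     # total/used/available on the scratch filesystem
    swapon --show        # size of the swap reservation on the node
    # usable scratch ~= tot_disk_size - swap_size, not the raw disk size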

Initially I increased the request from 9 GB to 12 GB, but the only
noticeable effect was that the jobs ran a little bit longer.

Yes, over-requesting by a factor of a few is an option: if I ask for
e.g. 25 GB, I think we will not see this problem. But I am hesitant
to do this, since I might get complaints from the computer center,
and in addition our job priority might get decreased.
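
If over-requesting does turn out to be acceptable, and once jobs move to
Slurm, the padded request could go straight into the batch script. A
minimal sketch, assuming the farm's Slurm honors the standard --tmp
option (the job name and recon script below are made up for
illustration):

    #!/bin/bash
    #SBATCH --job-name=hps_pass2_recon  # hypothetical job name
    #SBATCH --tmp=25G                   # request 25 GB of node-local scratch (padded)
    #SBATCH --time=10:00:00             # example wall time
    srun ./run_pass2_recon.sh           # placeholder for the actual recon command

The idea is simply that padding --tmp keeps the scheduler from packing
more jobs onto a node than its real (swap-reduced) disk can hold.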

I will discuss this with the computer center, to see if this would be
an acceptable way to continue.

Rafo


On 1/7/19 2:25 PM, Nathan Baltzell wrote:
> Hi Rafo,
>
> Is it just completely ignoring the requests and/or the real disk resources, or could exaggerating your request by a factor of a few help alleviate the problem?
>
> -Nathan
>
>
>> On Jan 7, 2019, at 14:17, Graf, Norman A. <[log in to unmask]> wrote:
>>
>> Hello Rafo,
>>   
>> Thanks for this update, and for all the time and effort you put in over the break. I’m sorry to hear that you encountered such difficulties, but it sounds like a solution might be imminent.
>>   
>> While this is being resolved, I want to again strongly encourage everyone to start analyzing the existing output from pass2. The DST format has changed, several new collections have been added, and the content of some other collections has changed, so there is a lot to be investigated. Enough run partitions have been reconstructed that sufficient statistics exist to find even subtle bugs in the software, so please look at the data and report any results back to these mailing lists or to Slack.
>>   
>> More information on the pass2 reconstruction can be found here:
>>   
>> https://confluence.slac.stanford.edu/display/hpsg/Pass2
>>   
>> Norman
>>   
>> From: Rafayel Paremuzyan <[log in to unmask]>
>> Sent: Monday, January 7, 2019 10:58 AM
>> To: Graf, Norman A. <[log in to unmask]>; hps-software <[log in to unmask]>; [log in to unmask]
>> Subject: Re: Pass2
>>   
>> Hi all,
>>
>> Multiple people were wondering about the status of pass2, which was submitted just before the Christmas break.
>>
>> During the break a significant number of jobs failed.
>> The main issue is the so-called "No space left on disk" exception.
>>
>> I filed a CCPR, and I was in contact with the computer center before and after the break.
>>
>> Briefly, what happened:
>> The response was that the Auger scheduler assigns more jobs to a node than the node's local disk can accommodate; after
>> about 7-8 hours of running, the local disk of the node becomes completely full, crashing all the jobs on that node.
>>
>> The computer center advised that the problem is related to the so-called "farm18" nodes, so before the break I sent jobs
>> to "farm16" nodes to avoid having jobs run on any farm18 node; however, during the Christmas break we saw a similar issue
>> with farm16 nodes too.
>> In addition, a number of nodes for some reason went offline (restarted) during the break, again causing a lot of jobs to fail.
>> Given the high rate of failures, I didn't submit additional jobs over the break, until we understand the reason for the failures.
>>
>> After the break I asked the computer center about it, and they suggested using Slurm instead of Auger.
>> Now half of the nodes are running under Slurm, and in the near future all jobs will be submitted through Slurm instead of Auger.
>>
>> Over the past week I ran some tests on different nodes to check whether the same problem still exists.
>>
>> It happened again on farm18 nodes, but on the other node types a single run worked well.
>>
>> I promised Maurik that I would present details about this at the coming Wednesday meeting.
>>
>> Details of the CCPR are at the following link:
>> https://misportal.jlab.org/mis/apps/ccpr/ccpr_user/ccpr_user_request_history.cfm?ccpr_number=249598
>>
>> Rafo
>>
>>
>> On 1/7/19 12:26 PM, Graf, Norman A. wrote:
>> Hello Rafo,
>>   
>> Can you please bring me up to speed on where we are with the Pass2 recon?
>>   
>> I just got back and have not yet gone through all of my email, so apologies if you've posted a progress report already.
>>   
>> Norman
>>   
>>

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the HPS-SOFTWARE list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1