Hello Tongtong,


Thanks for that clarification. I would suggest a review of the plots we are creating. I note that there has already been an effort to resolve some duplication, but we might want to consider pruning more histograms, or reducing the binning of some of them.
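To make the binning suggestion concrete: a 1D histogram's heap cost scales roughly with its bin count, so coarser binning across thousands of plots adds up. A back-of-the-envelope sketch, assuming ~16 bytes per bin (an illustrative figure, not a measured hps-java number):

```java
// HistogramHeapEstimate.java
// Rough heap cost of a set of 1D histograms at different binnings.
// ASSUMPTION: ~16 bytes per bin (value + error as doubles); the real
// per-bin cost in the DQM framework may differ.
public class HistogramHeapEstimate {
    static final long BYTES_PER_BIN = 16;

    static long heapMB(int nHistograms, int binsPerHistogram) {
        return nHistograms * (long) binsPerHistogram * BYTES_PER_BIN / (1024 * 1024);
    }

    public static void main(String[] args) {
        // e.g. 5000 plots at 1000 bins each vs. the same plots at 250 bins
        System.out.println("1000 bins: ~" + heapMB(5000, 1000) + " MB");
        System.out.println(" 250 bins: ~" + heapMB(5000, 250) + " MB");
    }
}
```

Even under these assumed numbers the bin arrays alone are tens of MB, and per-histogram object overhead (titles, axes, entry buffers) typically dominates for small plots, so pruning whole histograms usually saves more than rebinning.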


Norman




From: Hps-analysis <[log in to unmask]> on behalf of Tongtong Cao <[log in to unmask]>
Sent: Monday, August 5, 2019 1:30 PM
To: Graf, Norman A. <[log in to unmask]>
Cc: [log in to unmask] <[log in to unmask]>; hps-software <[log in to unmask]>
Subject: Re: [Hps-analysis] Requests for data cooking
 
Hello Norman,

Thanks for the note.
We always set the memory request to 4 GB, and it works well.
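For reference, in an Auger-style XML submission file the request looks something like the fragment below. This is a sketch only: the exact tag names, attributes, and the jar name should be checked against the JLab farm documentation, and the `-Xmx` value shown is an illustrative choice kept under the 4 GB request.

```xml
<Request>
  <Project name="hps"/>
  <Track name="reconstruction"/>
  <!-- memory request: 4 GB, as used for these cooking jobs -->
  <Memory space="4" unit="GB"/>
  <Command><![CDATA[
    java -Xmx3500m -jar hps-distribution-bin.jar ...
  ]]></Command>
</Request>
```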

The problem is not actually memory. Instead, there is a maximum JVM heap limit on the batch machines; according to our tests, the limit is about 2 GB.
If we cannot increase the limit, we need to be careful about heap usage when writing code.

Recently, we tried to add DQM histograms for all of the different triggers, but the heap usage exceeded the limit (2 GB).
So we have temporarily shut down the drivers for some triggers, and will add them back after the issue is fixed.
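One quick way to confirm the ceiling a given batch node actually imposes is to have the job print the JVM's own view of it at startup, for example:

```java
// HeapLimitCheck.java
// Prints the maximum heap the running JVM will attempt to use
// (the effective -Xmx, whether set explicitly or by JVM defaults).
public class HeapLimitCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
    }
}
```

Running it as `java -Xmx2g HeapLimitCheck` should report roughly 2 GB; if a batch wrapper overrides or caps `-Xmx`, this prints the value that actually takes effect.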

Best regards,
Tongtong

On Aug 5, 2019, at 4:16 PM, Graf, Norman A. <[log in to unmask]> wrote:

One thing to note is that job requests for large amounts of memory may restrict the available queues/machines the jobs will run on. Most likely not an issue at the moment, but something to watch out for.

Norman



From: Hps-analysis <[log in to unmask]> on behalf of Tongtong Cao <[log in to unmask]>
Sent: Monday, August 5, 2019 11:41 AM
To: Nathan Baltzell <[log in to unmask]>
Cc: [log in to unmask] <[log in to unmask]>; hps-software <[log in to unmask]>
Subject: Re: [Hps-analysis] Requests for data cooking
 
Hello Nathan,

Thanks for the reminder.
The memory request has been specified as 4 GB, so memory is not a problem.

-Tongtong


> On Aug 5, 2019, at 1:49 PM, Nathan Baltzell <[log in to unmask]> wrote:
> 
> Tongtong,
> 
> The memory request is (or at least should be) specified in your job submission setup.
> 
> -Nathan
> 
>> On Aug 5, 2019, at 12:20, Tongtong Cao <[log in to unmask]> wrote:
>> 
>> Dear all,
>> 
>> Jobs for runs 10169 and 10170 triggered by hps_v10.cnf were submitted for cooking just now.
>> 
>> DQM histograms for runs 10136 and 10149, including separate plots for most of the triggers, have been produced.
>> You can browse them through https://hpsweb.jlab.org/dqm/dqm.php
>> 
>> We will add plots for the remaining triggers once the heap-usage issue is fixed.
>> There is a limit (apparently ~2 GB) on the maximum heap size on the JLab batch machines, and the heap usage exceeds it if too many DQM plots are included.
>> 
>> Best regards,
>> Tongtong
>> 
>>> On Aug 2, 2019, at 12:53 PM, Tongtong Cao <[log in to unmask]> wrote:
>>> 
>>> Dear all,
>>> 
>>> Jobs for runs 10136 and 10149 were submitted for cooking just now.
>>> 
>>> For DQM plots of cooked runs, you can browse through https://hpsweb.jlab.org/dqm/dqm.php
>>> 
>>> Best regards,
>>> Tongtong
>>> 
>>>> On Aug 1, 2019, at 11:04 AM, Tongtong Cao <[log in to unmask]> wrote:
>>>> 
>>>> Dear all,
>>>> 
>>>> Since yesterday afternoon, we have been taking data with the trigger configuration hps_v9_2.cnf.
>>>> Jobs for runs 10128, 10129 and 10130 were submitted for cooking just now. 
>>>> 
>>>> Best regards,
>>>> Tongtong
>>>> 
>>>>> On Jul 31, 2019, at 1:24 PM, Tongtong Cao <[log in to unmask]> wrote:
>>>>> 
>>>>> Dear all,
>>>>> 
>>>>> Since yesterday afternoon, we have been taking runs triggered by various trigger configurations.
>>>>> Jobs for runs 10096, 10097, 10101, 10103, 10104, 10115, 10117 and 10118 were submitted for cooking just now.
>>>>> 
>>>>> The run setup can be found at https://docs.google.com/spreadsheets/d/1Ru4weeIOqtcpebiXKxFGCbPXGWXZF-W2JAolHcpvIFU/edit#gid=43855609
>>>>> Trigger configuration files can be found at /usr/clas12/release/1.4.0/parms/trigger/HPS/Run2019/ on the clondaq3 machine.
>>>>> 
>>>>> Best regards,
>>>>> Tongtong
>>>>> 
>>>>>> On Jul 30, 2019, at 11:03 AM, Tongtong Cao <[log in to unmask]> wrote:
>>>>>> 
>>>>>> Dear all,
>>>>>> 
>>>>>> More production runs with trigger hps_v9.cnf have been taken since yesterday.
>>>>>> Jobs for runs 10084, 10085, 10089, 10090, 10091, and 10092 have been submitted for cooking.
>>>>>> 
>>>>>> As mentioned yesterday, DQM has been added to the cooking process.
>>>>>> Cooking outputs are saved in /volatile/hallb/hps/data/run2019/ConcurrentCook
>>>>>> Each run includes three folders to save outputs:
>>>>>> 1) .slcio files from reconstruction are saved in /volatile/hallb/hps/data/run2019/ConcurrentCook/${run}/recon
>>>>>> 2) .root files from DQM are saved in /volatile/hallb/hps/data/run2019/ConcurrentCook/${run}/dqm
>>>>>> 3) .root files from Make_Root.cc are saved in /volatile/hallb/hps/data/run2019/ConcurrentCook/${run}/rootTree
>>>>>> Additionally, log files are saved in /volatile/hallb/hps/data/run2019/ConcurrentCook/logs
>>>>>> 
>>>>>> Best regards,
>>>>>> Tongtong
>>>>>> 
>>>>>>> On Jul 29, 2019, at 7:41 PM, Tongtong Cao <[log in to unmask]> wrote:
>>>>>>> 
>>>>>>> /volatile/hallb/hps/data/run2019/ConcurrentCook/${run}/dqm
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>> _______________________________________________
>> Hps-analysis mailing list
>> [log in to unmask]
>> https://mailman.jlab.org/mailman/listinfo/hps-analysis
> 





Use REPLY-ALL to reply to list

To unsubscribe from the HPS-SOFTWARE list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1