We could probably dump them, IMO.  


> On Sep 1, 2015, at 9:11 AM, Nathan Baltzell <[log in to unmask]> wrote:
> 
> Another pass2 question:
> 
> Looks like we’re still saving all raw hits in LCIO.  Do people use/need/want them?
> 
> I know ECal doesn’t even save a pointer between raw and reconstructed hits.
> 
> In EVIO it would be ~0.4/1.2 GB per file for ECAL/SVT raw hits.  Not sure if/how that
> scales to LCIO.
> 
> 
> 
> On Sep 1, 2015, at 9:01, Graham, Mathew Thomas <[log in to unmask]> wrote:
> 
>> 
>> Yeah, Sho caught these and fixes are in already. 
>> 
>>> On Sep 1, 2015, at 6:45 AM, Nathan Baltzell <[log in to unmask]> wrote:
>>> 
>>> 
>>> tpass2.1 is finished.  Of the 49 jobs, 9 were killed by the batch farm for exceeding 15 hours.  
>>> We can increase that limit too.
>>> 
>>> I noticed no dqm ROOT files nor trigger diagnostics histograms were generated.
>>> But they were run with the same commands as pass1, with steering files:
>>> 
>>> /org/hps/steering/production/DataQualityRecon.lcsim
>>> and
>>> /org/hps/steering/recon/TriggerDiagnosticsAnalysis.lcsim
>>> 
>>> 
>>> Maybe the first one is related to this error:
>>> java.lang.NullPointerException
>>> 	at org.hps.analysis.dataquality.V0Monitoring.process(V0Monitoring.java:270)
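A minimal sketch of the kind of guard that usually fixes this sort of NPE in an lcsim-style process method. The class, method, and collection names here are hypothetical stand-ins, not the actual V0Monitoring code; the assumption is that line 270 dereferences a collection that is absent for some events.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: guard against a missing collection before using it,
// the usual fix for an NPE like the one reported at V0Monitoring.process(...:270).
public class NullGuardSketch {
    // Stand-in for an event lookup (e.g. EventHeader.get(...)), which can
    // return null when the collection is absent from the event.
    static List<String> getCollection(boolean present) {
        return present ? Collections.singletonList("v0candidate") : null;
    }

    // Returns the number of candidates processed; skips the event on null
    // instead of throwing NullPointerException.
    static int process(boolean present) {
        List<String> candidates = getCollection(present);
        if (candidates == null) {
            return 0; // collection missing for this event: skip it
        }
        return candidates.size();
    }

    public static void main(String[] args) {
        System.out.println(process(true));   // collection present
        System.out.println(process(false));  // collection absent, no NPE
    }
}
```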
>>> 
>>> 
>>> And the 2nd one is probably because DAQConfigDriver was moved but the
>>> steering file was not updated.
>>> 
>>> But I doubt we need to make the trigger diagnostics histograms again for pass2
>>> anyway, since shouldn't they be identical?
>>> 
>>> -Nathan
>>> 
>>> 
>>> On Aug 31, 2015, at 11:36 AM, Nathan Baltzell <[log in to unmask]> wrote:
>>> 
>>>> I submitted a second test pass with 8GB limit on job size (won’t be
>>>> finished for at least 12 hours):
>>>> 
>>>> /work/hallb/hps/data/engrun2015/tpass2.1 
>>>> 
>>>> 
>>>> Also, a pass2 question was raised again: is there a flag in each
>>>> event to tell whether the SVT bias/position is correct?
>>>> 
>>>> 
>>>> 
>>>> On Aug 31, 2015, at 10:48, Nathan Baltzell <[log in to unmask]> wrote:
>>>> 
>>>>> These farm jobs had a disk usage limit of 5 GB set in their xml submission file (same as pass1).
>>>>> I can raise it and see what happens.
>>>>> 
>>>>> 
>>>>> 
>>>>> On Aug 31, 2015, at 10:27, Maurik Holtrop <[log in to unmask]> wrote:
>>>>> 
>>>>>> Hi Matt,
>>>>>> 
>>>>>> I agree with Stepan that *eventually* we will need to pare down the amount of information in the output quite significantly, but for the pass2 trial this is less important. 
>>>>>> 
>>>>>> Until we figure out what it is we really want to keep in the output, you could split each job into two smaller ones, so we end up with 0a and 0b files, etc.
>>>>>> 
>>>>>> For paring down the output, we should also consider cutting events, not only the information in the events. I suspect we take a lot of events that aren’t all that useful.
>>>>>> 
>>>>>> Best,
>>>>>> 	Maurik
>>>>>> 
>>>>>> 
>>>>>>> On Aug 31, 2015, at 9:42 AM, Graham, Mathew Thomas <[log in to unmask]> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> Well, it looks like the jobs failed around event ~225000 (so, almost made it through) with "Caused by: java.io.IOException: File too large". They are ~5GB, so I guess that’s the ulimit. I guess we’re adding too much stuff: more track types (+associated recon particles) and the GBL output. We could ask to have the file size limit increased or start paring down our recon files, either by cutting out information or events. Thoughts?
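For anyone who wants to confirm the cap on a farm node, a quick check of the per-process file-size limit (this is a guess at the mechanism, not the actual farm configuration):

```shell
# Sketch: inspect the file-size ulimit that would produce
# "java.io.IOException: File too large" when an output file hits the cap.
ulimit -f
# In bash this prints "unlimited" or a limit in 1024-byte blocks,
# so a 5 GB cap would show up as 5242880.
```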
>>>>>>> 
>>>>>>>> On Aug 29, 2015, at 6:57 PM, Nathan Baltzell <[log in to unmask]> wrote:
>>>>>>>> 
>>>>>>>> FYI, here is a test pass on run 5772 with current trunk in preparation for pass2:
>>>>>>>> 
>>>>>>>> /work/hallb/hps/data/engrun2015/tpass2
>>>>>>>> 
>>>>>>>> Everyone should run their code on it to check for problems.
>>>>>>>> 
>>>>>>>> -Nathan
>>>>>>>> 
>>>>>>>> ########################################################################
>>>>>>>> Use REPLY-ALL to reply to list
>>>>>>>> 
>>>>>>>> To unsubscribe from the HPS-SOFTWARE list, click the following link:
>>>>>>>> https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
> 
