I guess you don't have a JLab computing account?

Is there any web server at UNH that you can upload these files to so I can 
grab them?

The easiest thing to do would be to use HPS2014ReadoutNoPileup.lcsim - 
that doesn't simulate pileup at all, so it's safe to use with unspaced 
events, and is otherwise a drop-in replacement for 
HPS2014ReadoutToLcio.lcsim. I'll add a note about that on the Confluence 
page.

You can also use the instructions for spacing photon beam data for readout 
simulation, given on Confluence - that hasn't been tested for A' events 
but should work. But I would only do that if you find there's something 
about HPS2014ReadoutNoPileup.lcsim that doesn't work for you; and let me 
know in that case.
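For reference, the spacing amounts to padding each real event with empty bunches so that signal events occur only once every N events (500 in production MC). A minimal generic sketch of the idea, purely illustrative (this is not the hps-java or LCIO API; `null` stands in for an empty bunch):

```java
import java.util.ArrayList;
import java.util.List;

public class EventSpacer {
    // Pad each real event with (spacing - 1) empty bunches (represented
    // here by null), so real events occur once every `spacing` slots.
    static <T> List<T> space(List<T> events, int spacing) {
        List<T> out = new ArrayList<>();
        for (T event : events) {
            out.add(event);                    // the real event
            for (int i = 1; i < spacing; i++) {
                out.add(null);                 // empty bunch
            }
        }
        return out;
    }
}
```

With spacing = 500, two input events expand to 1000 output slots, with the real events landing at indices 0 and 500.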

On Mon, 14 Oct 2013, Kyle McCarty wrote:

> I could upload the SLCIO files somewhere if you can suggest a good place. I
> am unsure of the best way to host them - they are between 0.25 and 2.5 GB,
> so they are rather large.
>
> I had not considered spacing. The background events I had been working with
> were already spaced with many empty events, so I suppose I assumed the A'
> files would be too. I'm happy to apply some spacing if you send me the
> instructions.
> On Oct 14, 2013 1:26 PM, "Sho Uemura" <[log in to unmask]> wrote:
>
>> Is there any way you can make your input LCIO files available?
>>
>> Your events have a lot of hits because you have no spacing between A'
>> events (we normally add empty bunches so that A' events only occur once
>> every 500 events) - that's probably exacerbating any memory problems. You
>> should add empty bunches, since without empty bunches your results are
>> going to be nonsense; in production MC this is done at the stdhep level,
>> but I can work out simpler instructions for you to do it with hps-java.
>>
>> But I agree that your symptoms suggest some memory leak that needs to be
>> looked at. We can help.
>>
>> On Mon, 14 Oct 2013, Kyle McCarty wrote:
>>
>>> Hello hps-software,
>>> I have been running some A' events through the lcsim software and have
>>> been
>>> running into memory problems.
>>>
>>> System Information:
>>> OS: Red Hat Enterprise Linux Server release 5.7 (Tikanga)
>>> RAM: 22 GB
>>>
>>> SLCIO File Generation Information:
>>> SLIC: 4.9.6
>>> GEANT4: 9.6.1
>>> Geometry: HPS-Proposal2014-v5-6pt6.lcdd
>>> Input Files: ap6.6gevXXXmev.stdhep
>>> where XXX = { 050, 100, 200, 300, 400, 500, 600 } are the A' masses.
>>>
>>> LCSim Information:
>>> hps-java: 1.8
>>> Drivers:
>>>     - EventMarkerDriver
>>>     - CalibrationDriver
>>>     - TestRunTriggeredReconToLcio
>>>     - FADCEcalReadoutDriver
>>>     - EcalRawConverterDriver
>>>     - CTPEcalClusterer
>>>     - FADCTriggerDriver
>>>     - SimpleSvtReadout
>>>     - HPSEcalTriggerPlotsDriver
>>>     - AidaSaveDriver
>>>     - ClockDriver
>>> The steering file is attached for more detailed reference. It is a
>>> modified
>>> version of Sho's HPS2014ReadoutToLcio.lcsim.
>>>
>>> Problem Manifestation:
>>> When I started running the A' events through LCSim, I got heap errors and
>>> OutOfMemoryErrors. These were initially resolved by including the
>>> -Xmx[Amount] option when running, but for the larger files (>1 GB,
>>> >100,000 events) I still received memory errors even when I allotted Java
>>> the entirety of the server's available memory. I was ultimately able to get all
>>> the files to run by downloading them to my personal machine (a Windows
>>> device) and running hps-java there, but it was necessary to allot Java
>>> approximately 55 GB of RAM to accomplish this.
>>>
>>> I ran some diagnostics while the LCSim software was running on my local
>>> machine and observed the memory footprint of the software. I found that it
>>> started low, but continually increased throughout the duration of the run.
>>> My guess from what I saw is that the Java virtual machine is not correctly
>>> cleaning old objects from memory, so they are building up and causing the
>>> memory footprint of the large event files to grow rapidly.
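A growth pattern like the one described above can also be confirmed from inside the JVM; a small sketch using the standard Runtime API (illustrative only, not part of hps-java) that could be invoked every few thousand events:

```java
public class HeapProbe {
    // Heap currently in use: total allocated heap minus the free part of it.
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        long usedMb = usedBytes() / (1024 * 1024);
        // In a leaking run, "used" climbs steadily toward "max" even
        // after garbage collections.
        System.out.println("used " + usedMb + " MB of max " + maxMb + " MB");
    }
}
```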
>>>
>>> I have attached two log files. The first is from the 2.5 GB A' file for
>>> the 600 MeV mass. I ultimately terminated this run because it reached the
>>> maximum amount of server memory that I could allot it and then froze while
>>> trying to get more memory. The second log file is from another run that
>>> did yield an OutOfMemoryError.
>>>
>>> Any ideas as to the cause of this?
>>>
>>> ########################################################################
>>> Use REPLY-ALL to reply to list
>>>
>>> To unsubscribe from the HPS-SOFTWARE list, click the following link:
>>> https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1
>>>
>>>
>>
>
>

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the HPS-SOFTWARE list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1