To be clear, there is only one FilterMCBunches command. We are using the 
same tool, but probably not in quite the same way:

You are taking a signal SLIC output file of 888 MB (I would guess ~180k 
events), adding some number of empty events after each event to get a 26 
GB file (probably ~50 million events), then running that through the
readout simulation. You are not using beam background at any point; for 
that you would need to use a separate SLIC output file containing beam 
background.
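A quick back-of-the-envelope check of the spacing implied by those numbers (all of which are rough estimates from this email, not measured values):

```python
# Estimated event counts from the email above (both are rough guesses).
signal_events = 180_000        # events in the 888 MB SLIC signal file
spaced_events = 50_000_000     # events in the 26 GB spaced file

# Implied spacing: total output bunches per input signal event.
spacing = spaced_events / signal_events
print(f"implied spacing: roughly {spacing:.0f} bunches per signal event")
```

That works out to a spacing of roughly 280, i.e. a couple hundred empty bunches inserted after each signal event.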


I take a 61 MB signal file (tritrig.slcio) of 12000 events; this is a 
"trigger tridents" SLIC output file as described in the Confluence page 
(linked below). I use command-line arguments to FilterMCBunches to tell it 
to accept only events leaving at least 100 MeV in each half of the ECal 
(this filter accepts roughly 19% of the trigger trident events), to stop 
after it accepts 2000 events, and to space events by 250 (i.e. add 249 
empty events after each accepted event).

This is the actual command that does that:

java -cp ${hps-java} org.hps.users.meeg.FilterMCBunches -e250 
tritrig.slcio tritrig_filt.slcio -d -E0.1 -w2000

The output of this command (tritrig_filt.slcio) is 0.5 million events, 90 
MB.
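Functionally, that filter-and-space step amounts to the loop below. This is a Python sketch of the logic, not the actual hps-java FilterMCBunches code; the filter predicate and all names here are hypothetical stand-ins.

```python
def filter_and_space(events, passes_filter, spacing=250, max_accepted=2000):
    """Keep events passing the filter, pad each accepted event with
    (spacing - 1) empty bunches, and stop after max_accepted accepts.
    Sketch only -- not the hps-java implementation."""
    out = []
    accepted = 0
    for event in events:
        if not passes_filter(event):
            continue
        out.append(event)
        out.extend([None] * (spacing - 1))  # None stands in for an empty bunch
        accepted += 1
        if accepted >= max_accepted:
            break
    return out
```

With 2000 accepted events spaced by 250, the output is 2000 * 250 = 500,000 events, matching the 0.5 million quoted above.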

Then I merge this file, event by event, with a 0.5 million event 
beam-background file (1.2 GB). The merged output is also 1.2 GB.
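The event-by-event merge pairs up the two files bunch by bunch and overlays their contents. A minimal sketch, assuming a hypothetical representation where each event is just a list of hits (the real files are LCIO, and the real merge tool is part of hps-java):

```python
def merge_events(signal_events, background_events):
    """Overlay two equal-length event streams bunch by bunch by
    concatenating their hit lists. Sketch only; hypothetical data model."""
    assert len(signal_events) == len(background_events), \
        "event-by-event merge requires files of equal length"
    return [sig + bkg for sig, bkg in zip(signal_events, background_events)]
```

Because both inputs are 0.5 million events, the output is also 0.5 million events, and the mostly-empty signal file adds little to the background file's size, consistent with 1.2 GB in and 1.2 GB out.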

https://confluence.slac.stanford.edu/display/hpsg/Finding+Monte+Carlo+data+at+JLab


ECal readout window: I don't know what FADC settings are being used for 
the new run, but in the test run the FADCs were set to read out and search 
for pulses in a 400 ns window of the readout pipeline. As you note, this 
is much longer than the pulse width, but there is a separate setting that 
determines how much of each pulse is integrated. So it is good for the 
readout window to be significantly wider than the integration window.
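The distinction between the two windows can be illustrated like this: the FADC scans the full readout window for a threshold crossing, then integrates only a shorter window around the pulse. All numbers below (4 ns sampling, thresholds, window lengths) are illustrative, not the real DAQ settings.

```python
SAMPLE_NS = 4                      # FADC sampling period (illustrative)

def integrate_pulse(samples, threshold, n_integrate):
    """Scan the readout window for the first threshold crossing, then
    integrate n_integrate samples starting there. Illustrative sketch
    of window-vs-integration logic, not the real FADC firmware."""
    for i, adc in enumerate(samples):
        if adc >= threshold:
            return sum(samples[i:i + n_integrate])
    return None                    # no pulse anywhere in the window

# A 400 ns readout window = 100 samples; the pulse itself is much shorter.
window = [0] * 100
window[60:66] = [5, 40, 80, 50, 20, 5]   # a ~24 ns pulse, anywhere in the window
print(integrate_pulse(window, threshold=10, n_integrate=8))   # prints 195
```

The wide readout window only determines where the firmware may *find* a pulse; the integration window determines how much of it contributes to the energy sum.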

SVT readout: For each event, the SVT reads out 6 samples with a 24 ns (not 
25 ns) spacing. This is the same for every sensor and channel, though 
channels without hits are not saved in the data. So the readout covers a 
time of 6*24=144 ns.
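The sample timing above can be written out explicitly (constants taken directly from the description above):

```python
N_SAMPLES = 6
SAMPLE_SPACING_NS = 24             # per the SVT description above (not 25 ns)

# Sample times relative to the first sample.
sample_times = [i * SAMPLE_SPACING_NS for i in range(N_SAMPLES)]
print(sample_times)                # prints [0, 24, 48, 72, 96, 120]

# Total readout coverage, counting each sample as one 24 ns bin.
coverage_ns = N_SAMPLES * SAMPLE_SPACING_NS   # 144 ns, as quoted above
```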

On Mon, 6 Oct 2014, [log in to unmask] wrote:

> Hi Sho,
>
> sorry, I am not sure to understand: you have signal StdHep file of half 
> million events of 90 Mb, then 1.2 Gb is the file merged with beam background? 
> I though your "FilterMCBunches" does something similar, but apparently from a 
> similar signal file I get 26 Gb of "merged" (probably included empty bunches 
> only, and hopefully some detector noise?) events.
>
> Ether the simulated beam background appears only in 5% of beam bunches or 
> "FilterMCBunches" does something completely different.
>
> Why ECal reads 400 ns of data? I though shaping time was ~30 ns? One could 
> expect ~100 ns of sampling.
>
> Best Regards,
>            Mikhail.
>
>
>
>
> On 10/03/2014 05:50 PM, Sho Uemura wrote:
>> me, and I think (for the reasons I gave earlier) that for production MC the 
>> size of the spaced file is never significant relative to the other 
>> resources (the beam background file, CPU). For user work that is (as you've 
>> seen) not always the case, but I still think that this can be mitigated 
>> with better use of the filter tools built in to FilterMCBunches. (if you 
>> run FilterMCBunches with a -e argument but nothing else, it will print a 
>> des
>
>
