Hi Pelle,

I have a simple steering file here that can take in your non-spaced events:
org.hps.steering.users.holly.EcalReadoutNoPileUp

This will treat each event independently and can tell you whether you are
having problems in your readout of the data (it will show you where the
peaks are and whether you did the spacing correctly, etc.). This uses the
full Recon Clusterer, which works fine with Monte Carlo (at least I have
never had any problems with it).

-Holly


On Tue, May 26, 2015 at 5:50 PM, Kyle McCarty <[log in to unmask]> wrote:

> Hello Pelle,
>
> Your cluster energy distributions definitely look different from what I
> saw (see the attached PDF). What clustering algorithm are you using? You
> need to use the GTPClusterer algorithm for it to work properly with Monte
> Carlo; you might be using GTPOnlineClusterer, which only works for the EvIO
> readout. I'm not sure why your distributions would look different otherwise.
> When I run reconstruction for Monte Carlo, I use the following drivers (note
> that I didn't include tracking):
>
>    1. org.hps.conditions.ConditionsDriver
>    2. org.hps.readout.ecal.FADCEcalReadoutDriver
>    3. org.hps.recon.ecal.EcalRawConverterDriver
>    4. org.hps.recon.ecal.cluster.GTPClusterDriver
>    5. org.hps.readout.ecal.FADCPrimaryTriggerDriver
>
> You need the GTPClusterDriver to properly handle Monte Carlo hit readout,
> GTP clustering to make sure that you are simulating the hardware, and the
> FADCPrimaryTriggerDriver to simulate the hardware pair trigger. Are you
> using these same drivers?
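>
> (One quick way to check, assuming the steering XML is available on disk,
> e.g. the EngineeringRun2015FullRecon.lcsim from your recon command:
>
>     grep -o 'org\.hps\.[A-Za-z0-9.]*Driver' EngineeringRun2015FullRecon.lcsim
>
> will list the driver classes the file declares so you can compare them
> against the five above.)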
>
> - Kyle
>
> On Tue, May 26, 2015 at 5:42 PM, Hansson Adrian, Per Ola <
> [log in to unmask]> wrote:
>
>>  Hi Again,
>>
>>  following up on this topic.
>>
>>  Running recon:
>>
>>  java -jar target/hps-distribution-3.3.1-SNAPSHOT-bin.jar
>> /org/hps/steering/recon/EngineeringRun2015FullRecon.lcsim
>> -DoutputFile=outfile -i bunched-readout.slcio
>>
>>  I see ECal energies that don’t look right. Any idea?
>>
>>  I see about 180 tracks in these 1342 triggered events. They look ok but
>> the number seems low?
>>
>>  Am I forgetting something again?
>>
>>  /Pelle
>>
>>
>>
>>
>>  On May 26, 2015, at 1:43 PM, Hansson Adrian, Per Ola <
>> [log in to unmask]> wrote:
>>
>>
>>  Ok,
>>
>>  using 150 empty bunches between events, on 10k events I get ~13%
>> acceptance (1342 triggers / 10k).
>>
>>  Thanks,
>> pelle
>>
>>
>>  Trigger Processing Results
>> Single-Cluster Cuts
>> Total Clusters Processed     :: 20295
>> Passed Seed Energy Cut       :: 14119
>> Passed Hit Count Cut         :: 11938
>> Passed Total Energy Cut      :: 10991
>>
>>  Cluster Pair Cuts
>> Total Pairs Processed        :: 3168
>> Passed Energy Sum Cut        :: 2942
>> Passed Energy Difference Cut :: 2941
>> Passed Energy Slope Cut      :: 2755
>> Passed Coplanarity Cut       :: 1342
>>
>>  Trigger Count :: 1342
>>
>>  Trigger Module Cut Values:
>> Seed Energy Low        :: 0.050
>> Seed Energy High       :: 6.600
>> Cluster Energy Low     :: 0.060
>> Cluster Energy High    :: 0.630
>> Cluster Hit Count      :: 2
>> Pair Energy Sum Low    :: 0.200
>> Pair Energy Sum High   :: 0.860
>> Pair Energy Difference :: 0.540
>> Pair Energy Slope      :: 0.6
>> Pair Coplanarity       :: 30.0
>> FADCPrimaryTriggerDriver: Trigger count: 1342
>>
>>
>>
>>  On May 25, 2015, at 3:23 PM, Kyle McCarty <[log in to unmask]> wrote:
>>
>>    Hello Pelle,
>>
>>  If you are reconstructing everything from a SLiC file, you need to space
>> out the events because each one contains an A' event and will produce weird
>> pile-up otherwise. I usually insert around 150 empty events between each
>> real event to ensure that the hits and clusters (which are displaced by
>> around 60 events from the source) do not overlap at all. You can do this
>> with the command:
>>
>>  java -cp $HPS_JAVA org.hps.users.meeg.FilterMCBunches $INPUT $OUTPUT -e150 -a
>>
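>>  Concretely, a sketch with placeholder names (not from the original
>> message: $HPS_JAVA is assumed to point at the hps-java distribution jar,
>> and -e150 matches the 150 empty events mentioned above):
>>
>>  java -cp target/hps-distribution-3.3.1-SNAPSHOT-bin.jar \
>>      org.hps.users.meeg.FilterMCBunches slic-events.slcio spaced-events.slcio \
>>      -e150 -a
>>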
>>
>>  You should definitely get better acceptance than that as well; I got
>> somewhere between 15% - 20% acceptance for 40 MeV (I don't have the exact
>> value on hand).
>>
>>  Note also that there are (or at least were, last I checked) differences
>> in how the Monte Carlo simulation is stored versus the EvIO data. In the EvIO
>> data, all the hits and clusters for an event are stored in the same event,
>> but in Monte Carlo readout, they are spaced across several events with each
>> event representing 2 ns of time. As such, some drivers that work for EvIO
>> readout do not work for the Monte Carlo (this mainly affects clustering). Sho
>> can correct me if I am misrepresenting something here.
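>>
>>  (If you want to see the spacing directly, the LCIO dumpevent utility can
>> print a given event's collections, assuming you have the LCIO command line
>> tools built and the usual dumpevent <file> <event number> usage, which may
>> vary by version; the file name below is a placeholder:
>>
>>  dumpevent spaced-readout.slcio 2
>>
>>  Stepping through consecutive events should show one physics event's hits
>> split across several 2 ns readout events.)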
>>
>>  - Kyle
>>
>> On Mon, May 25, 2015 at 6:08 PM, Hansson Adrian, Per Ola <
>> [log in to unmask]> wrote:
>>
>>> Just realized, how is the bunch spacing simulated in the readout step? I
>>> suppose I need to add some amount of “time” between the events or do a
>>> “NoPileUp” simulation...
>>>
>>>  /Pelle
>>>
>>>  On May 25, 2015, at 3:04 PM, Hansson Adrian, Per Ola <
>>> [log in to unmask]> wrote:
>>>
>>>  Hi,
>>>
>>>  Have some issues getting the MC simulation to run.
>>>
>>>  I’m running readout and recon over some 1.1 GeV, 40 MeV A' MC files
>>> locally with the trunk. The SLIC file is here:
>>>
>>>  /nfs/slac/g/hps3/users/phansson/ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad_SLIC-v04-00-00_Geant4-v10-00-02_QGSP_BERT_HPS-EngRun2015-Nominal-v1.slcio
>>>
>>>
>>>  I’m using the Nominal-v1 detector and I simulate readout with:
>>>
>>>   java -jar target/hps-distribution-3.3.1-SNAPSHOT-bin.jar
>>> ../kepler2/hps-java/steering-files/src/main/resources/org/hps/steering/readout/EngineeringRun2015TrigPairs1.lcsim
>>> -Ddetector=HPS-EngRun2015-Nominal-v1 -Drun=2000
>>>
>>>
>>>  I see 245 accepted events out of 10k events (about 2.5% acceptance),
>>> which is much lower than I would naively expect?
>>>
>>>
>>>  I then run recon with:
>>>
>>>   java -jar
>>> target/hps-distribution-3.3.1-SNAPSHOT-bin.jar  ../kepler2/hps-java/steering-files/src/main/resources/org/hps/steering/recon/EngineeringRun2015FullRecon.lcsim
>>> -DoutputFile=outfiles/ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad_SLIC-v04-00-00_Geant4-v10-00-02_QGSP_BERT_HPS-EngRun2015-Nominal-v1-recon
>>> -i
>>> outfiles/ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad_SLIC-v04-00-00_Geant4-v10-00-02_QGSP_BERT_HPS-EngRun2015-Nominal-v1-readout.slcio
>>>
>>>   And I basically get to the 2nd event and then it just hangs there for
>>> 5-10 minutes.
>>>
>>>
>>>
>>>  Are these the right steps with the latest and greatest software? Any ideas?
>>>
>>>  /pelle

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the HPS-SOFTWARE list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1