Hello Pelle,

Looking at your Monte Carlo steering file, it definitely should not be
using the GTPOnline driver. That only works properly for EvIO readout.
However, it also isn't actually using the GTP clusters for anything
directly, so if you are looking at the recon cluster distributions, those
oddities are not related to the GTP driver. I would guess some manner of
ADC-to-energy issue, since you seem to have a distribution centered well
above 1.1 GeV. Maybe the running pedestal is the problem? I didn't think
we needed that for Monte Carlo.

This will treat each event independently and can tell you if you are having
problems in your readout of the data (it will show you where the peaks are
and if you did spacing correctly, etc.). This uses the full Recon Clusterer,
which works fine with Monte Carlo (at least I have never had any problems
with it).

The difference between the Monte Carlo and EvIO readout is that the Monte
Carlo treats each event as a 2 ns time interval, so when it writes out hits
from an event, they will be spaced out over several events (higher energy
ones will come first and then lower energy ones later), so if your driver
only looks for hits in one event, you'll miss a bunch of the cluster. The
GTP clustering algorithm holds a buffer of events corresponding to the
temporal clustering window that it collects hits across so that it can
account for this. The GTPOnline just looks at the current event, since EvIO
readout places all the hits in the same event, and each event is independent
of the others.
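The buffering idea can be sketched roughly like this. This is only an illustration of the concept, not the actual GTPClusterer code; the class name, the 8 ns window, and the energy-sum "clustering" are all invented for the example:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Rough sketch: hold the last few 2 ns Monte Carlo event slices in a
// buffer and cluster across all of them, so hits spread over several
// events still end up in the same cluster. Illustrative only.
public class SlicedHitBuffer {
    static final double WINDOW_NS = 8.0;  // assumed temporal clustering window
    static final double SLICE_NS = 2.0;   // one MC event = 2 ns of time
    static final int SLICES = (int) (WINDOW_NS / SLICE_NS);

    private final Deque<List<Double>> buffer = new ArrayDeque<>();

    // Push one event's hit energies; evict the oldest slice when the
    // buffer exceeds the clustering window.
    public void addEvent(List<Double> hitEnergies) {
        buffer.addLast(hitEnergies);
        if (buffer.size() > SLICES) {
            buffer.removeFirst();
        }
    }

    // "Cluster" across the whole buffered window; here we just sum the
    // hit energies as a stand-in for the real clustering logic.
    public double clusterEnergy() {
        double sum = 0.0;
        for (List<Double> slice : buffer) {
            for (double e : slice) {
                sum += e;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        SlicedHitBuffer buf = new SlicedHitBuffer();
        buf.addEvent(List.of(0.8));       // higher-energy hits arrive first
        buf.addEvent(List.of(0.2, 0.1));  // lower-energy hits trail in later events
        // A driver looking only at the latest event would see 0.3 GeV;
        // the buffered window recovers the full cluster energy.
        System.out.println(buf.clusterEnergy());
    }
}
```

A driver like GTPOnline, in this picture, is the degenerate case with a one-slice buffer, which is why it only works when the readout puts everything into a single event.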

- Kyle

On Tue, May 26, 2015 at 6:30 PM, Hansson Adrian, Per Ola <
[log in to unmask]> wrote:

>
>  Hi Kyle, Sho,
>
>  thanks for the replies.
>
>  Kyle,
>
>  for the readout simulation I was using the drivers defined in
> EngineeringRun2015TrigPairs1.lcsim. That seems fine.
>
>  When I run recon to make the *offline* ECal cluster plots that I
> attached, I’m using those defined in EngineeringRun2015FullReconMC.lcsim
> (attached below).
>
>  Maybe an ecal expert can tell me or modify/commit that steering file to
> do what we want (or tell me which one to use)?
>
>  Sho, I updated the EngineeringRun2015FullReconMC.lcsim file and removed
> the time cuts. I’m not 100% sure that’s what we always want, so have a look.
> Perhaps we need to have more than one default file depending on what we are
> simulating (overlaid events or not for example).
>
>  It would be good to keep some of these steering files on an “official”
> page somewhere that is maintained by people who actually know when things
> change that should propagate to users like me who are doing low-level stuff
> myself.
>
>  Should I put them on e.g. :
> https://confluence.slac.stanford.edu/display/hpsg/HPS+Java+Instructions
> if these are correct?
>
>
>  Thanks,
> Pelle
>
>
>
>           <driver name="EcalRunningPedestal"/>
>         <driver name="EcalRawConverter" />
>         <driver name="ReconClusterer" />
>         <driver name="GTPOnlineClusterer" />
>
>
>
>          <driver name="EcalRunningPedestal"
> type="org.hps.recon.ecal.EcalRunningPedestalDriver">
>             <logLevel>CONFIG</logLevel>
>             <minLookbackEvents>10</minLookbackEvents>
>             <maxLookbackEvents>50</maxLookbackEvents>
>         </driver>
>         <driver name="EcalRawConverter"
> type="org.hps.recon.ecal.EcalRawConverterDriver">
>             <ecalCollectionName>EcalCalHits</ecalCollectionName>
>             <use2014Gain>false</use2014Gain>
>             <useTimestamps>false</useTimestamps>
>             <useTruthTime>false</useTruthTime>
>             <useRunningPedestal>true</useRunningPedestal>
>             <useTimeWalkCorrection>true</useTimeWalkCorrection>
>             <emulateFirmware>true</emulateFirmware>
>             <emulateMode7>true</emulateMode7>
>             <leadingEdgeThreshold>12</leadingEdgeThreshold>
>             <nsa>100</nsa>
>             <nsb>20</nsb>
>             <windowSamples>50</windowSamples>
>             <nPeak>3</nPeak>
>         </driver>
>         <driver name="ReconClusterer"
> type="org.hps.recon.ecal.cluster.ReconClusterDriver">
>             <logLevel>WARNING</logLevel>
>
> <outputClusterCollectionName>EcalClusters</outputClusterCollectionName>
>             <hitEnergyThreshold>0.01</hitEnergyThreshold>
>             <seedEnergyThreshold>0.100</seedEnergyThreshold>
>             <clusterEnergyThreshold>0.200</clusterEnergyThreshold>
>             <minTime>0.0</minTime>
>             <timeWindow>25.0</timeWindow>
>             <useTimeCut>true</useTimeCut>
>             <writeRejectedHitCollection>false</writeRejectedHitCollection>
>         </driver>
>         <driver name="GTPOnlineClusterer"
> type="org.hps.recon.ecal.cluster.ClusterDriver">
>             <logLevel>WARNING</logLevel>
>             <clustererName>GTPOnlineClusterer</clustererName>
>
> <outputClusterCollectionName>EcalClustersGTP</outputClusterCollectionName>
>             <!-- seedMinEnergy -->
>             <cuts>0.100</cuts>
>         </driver>
>
>  On May 26, 2015, at 2:50 PM, Kyle McCarty <[log in to unmask]> wrote:
>
>   Hello Pelle,
>
>  Your cluster energy distributions definitely look different from what I
> saw (see the attached PDF). What clustering algorithm are you using? You
> need to use the GTPClusterer algorithm for it to work right with Monte
> Carlo; you might be using GTPOnlineClusterer, which only works for the
> EvIO readout. When I run reconstruction for Monte Carlo, I use the
> following drivers (note that I didn't include tracking):
>
>    1. org.hps.conditions.ConditionsDriver
>    2. org.hps.readout.ecal.FADCEcalReadoutDriver
>    3. org.hps.recon.ecal.EcalRawConverterDriver
>    4. org.hps.recon.ecal.cluster.GTPClusterDriver
>    5. org.hps.readout.ecal.FADCPrimaryTriggerDriver
>
>  You need the GTPClusterDriver to properly handle Monte Carlo hit readout,
> GTP clustering to make sure that you are simulating the hardware, and then
> the FADCPrimaryTriggerDriver simulates the hardware pair trigger. Are you
> using these same drivers?
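In steering-file form, that driver list would look roughly like the fragment below. This is a sketch only: the per-driver parameters are left out, the short driver names are placeholders, and the exact layout should be checked against the actual steering files in hps-java:

```xml
<execute>
    <driver name="ConditionsDriver"/>
    <driver name="EcalReadout"/>
    <driver name="EcalRawConverter"/>
    <driver name="GTPClusterer"/>
    <driver name="PairTrigger"/>
</execute>
<drivers>
    <driver name="ConditionsDriver" type="org.hps.conditions.ConditionsDriver"/>
    <driver name="EcalReadout" type="org.hps.readout.ecal.FADCEcalReadoutDriver"/>
    <driver name="EcalRawConverter" type="org.hps.recon.ecal.EcalRawConverterDriver"/>
    <driver name="GTPClusterer" type="org.hps.recon.ecal.cluster.GTPClusterDriver"/>
    <driver name="PairTrigger" type="org.hps.readout.ecal.FADCPrimaryTriggerDriver"/>
</drivers>
```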
>
>  - Kyle
>
> On Tue, May 26, 2015 at 5:42 PM, Hansson Adrian, Per Ola <
> [log in to unmask]> wrote:
>
>>  Hi Again,
>>
>>  following up on this topic.
>>
>>  Running recon:
>>
>>  java -jar target/hps-distribution-3.3.1-SNAPSHOT-bin.jar
>> /org/hps/steering/recon/EngineeringRun2015FullRecon.lcsim
>> -DoutputFile=outfile -i bunched-readout.slcio
>>
>>  I see ECal energies that don’t look right. Any idea?
>>
>>  I see about 180 tracks in these 1342 triggered events. They look ok but
>> the number seems low?
>>
>>  Am I forgetting something again?
>>
>>  /Pelle
>>
>>
>>  <Screen Shot 2015-05-26 at 2.22.35 PM.png>
>> <Screen Shot 2015-05-26 at 2.31.44 PM.png>
>>
>>
>>  On May 26, 2015, at 1:43 PM, Hansson Adrian, Per Ola <
>> [log in to unmask]> wrote:
>>
>>
>>  Ok,
>>
>>  using 150 bunches on 10k events I get ~13% acceptance.
>>
>>  Thanks,
>> pelle
>>
>>
>>  Trigger Processing Results
>> Single-Cluster Cuts
>> Total Clusters Processed     :: 20295
>> Passed Seed Energy Cut       :: 14119
>> Passed Hit Count Cut         :: 11938
>> Passed Total Energy Cut      :: 10991
>>
>>  Cluster Pair Cuts
>> Total Pairs Processed        :: 3168
>> Passed Energy Sum Cut        :: 2942
>> Passed Energy Difference Cut :: 2941
>> Passed Energy Slope Cut      :: 2755
>> Passed Coplanarity Cut       :: 1342
>>
>>  Trigger Count :: 1342
>>
>>  Trigger Module Cut Values:
>> Seed Energy Low        :: 0.050
>> Seed Energy High       :: 6.600
>> Cluster Energy Low     :: 0.060
>> Cluster Energy High    :: 0.630
>> Cluster Hit Count      :: 2
>> Pair Energy Sum Low    :: 0.200
>> Pair Energy Sum High   :: 0.860
>> Pair Energy Difference :: 0.540
>> Pair Energy Slope      :: 0.6
>> Pair Coplanarity       :: 30.0
>> FADCPrimaryTriggerDriver: Trigger count: 1342
>>
>>
>>
>>  On May 25, 2015, at 3:23 PM, Kyle McCarty <[log in to unmask]> wrote:
>>
>>    Hello Pelle,
>>
>>  If you are reconstructing everything from a SLiC file, you need to space
>> out the events because each one contains an A' event and will produce weird
>> pile-up otherwise. I usually insert around 150 empty events between each
>> real event to ensure that the hits and clusters (which are displaced by
>> around 60ish events from the source) do not overlap at all. You can do this
>> with the command:
>>
>>  java -cp $HPS_JAVA org.hps.users.meeg.FilterMCBunches $INPUT $OUTPUT
>>> -e150 -a
>>>
>>
>>  You should definitely get better acceptance than that as well; I got
>> somewhere between 15% - 20% acceptance for 40 MeV (I don't have the exact
>> value on hand).
>>
>>  Note also that there are (or at least were last I checked) differences
>> in how the Monte Carlo simulation is stored versus the EvIO. In the EvIO
>> data, all the hits and clusters for an event are stored in the same event,
>> but in Monte Carlo readout, they are spaced across several events with each
>> event representing 2 ns of time. As such, some drivers that work for EvIO
>> readout do not work for the Monte Carlo (this mainly affects clustering). Sho
>> can correct me if I am misrepresenting something here.
>>
>>  - Kyle
>>
>> On Mon, May 25, 2015 at 6:08 PM, Hansson Adrian, Per Ola <
>> [log in to unmask]> wrote:
>>
>>> Just realized, how is the bunch spacing simulated in the readout step? I
>>> suppose I need to add some amount of “time” between the events or do a
>>> “NoPileUp” simulation...
>>>
>>>  /Pelle
>>>
>>>  On May 25, 2015, at 3:04 PM, Hansson Adrian, Per Ola <
>>> [log in to unmask]> wrote:
>>>
>>>  Hi,
>>>
>>>  Have some issues getting the MC simulation to run.
>>>
>>>  I’m running readout and recon over some 1.1GeV 40MeV A' MC files
>>> locally with the trunk. Slic file is here:
>>>
>>>  /nfs/slac/g/hps3/users/phansson/ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad_SLIC-v04-00-00_Geant4-v10-00-02_QGSP_BERT_HPS-EngRun2015-Nominal-v1.slcio
>>>
>>>
>>>  I’m using Nominal-v1 detector and I simulate readout with:
>>>
>>>   java -jar target/hps-distribution-3.3.1-SNAPSHOT-bin.jar
>>> ../kepler2/hps-java/steering-files/src/main/resources/org/hps/steering/readout/EngineeringRun2015TrigPairs1.lcsim
>>> -Ddetector=HPS-EngRun2015-Nominal-v1 -Drun=2000
>>>
>>>
>>>  I see 245 accepted events out of 10k events which is much lower than I
>>> would naively expect?
>>>
>>>
>>>  I then run recon with:
>>>
>>>   java -jar
>>> target/hps-distribution-3.3.1-SNAPSHOT-bin.jar  ../kepler2/hps-java/steering-files/src/main/resources/org/hps/steering/recon/EngineeringRun2015FullRecon.lcsim
>>> -DoutputFile=outfiles/ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad_SLIC-v04-00-00_Geant4-v10-00-02_QGSP_BERT_HPS-EngRun2015-Nominal-v1-recon
>>> -i
>>> outfiles/ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad_SLIC-v04-00-00_Geant4-v10-00-02_QGSP_BERT_HPS-EngRun2015-Nominal-v1-readout.slcio
>>>
>>>   And I basically get to the 2nd event and then it just hangs there for
>>> 5-10 mins.
>>>
>>>
>>>
>>>  Are these the right steps with the latest and greatest sw? Any ideas?
>>>
>>>  /pelle
>>>
>>> ------------------------------
>>>
>>> Use REPLY-ALL to reply to list
>>>
>>> To unsubscribe from the HPS-SOFTWARE list, click the following link:
>>> https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1
>>>
>>>
>>>
>
>  <v5_1-hit.pdf>
>
>
>
