Hello Pelle,

For SLiC, you probably want to talk to Matt or Sho. I think that's more of
a Monte Carlo simulation expert question than an Ecal one.

If you have an SLCIO file, I can check it, but I can't really get at it
easily on the SLAC system. I've CC'd Matt on this email; maybe he can help
with both of these. If I can get the base SLiC output file, I can try
running readout, see if I reproduce the errors, and look into where they
come from.

- Kyle
On Jun 3, 2015 1:30 PM, "Hansson Adrian, Per Ola" <
[log in to unmask]> wrote:

>
>  Hi All,
>
>  here is the topic that I brought up during the meeting.
>
>  Here is a stdhep file that you can use if you don't have one (at SLAC;
> not sure where to find them at JLab):
>
>  /u/br/mgraham/hps/DarkPhoton/SignalEvents/
> ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad.stdhep
>
>  If you want to avoid slic and bunching there are files here with that
> done:
>
>  /nfs/slac/g/hps3/users/phansson/
>
>
>  I think it would be best if an Ecal expert could simply start from a
> stdhep file and run slic -> bunching -> readout sim -> recon on some well-known
> process on the HPS-EngRun2015-Nominal-v1 detector, like the A' file above.
>
>  I didn't try Holly's no-pile-up setup, since I would like the full
> treatment of the Ecal. It would still be good to verify how to run that
> too, but to me it has lower priority than having something that works with
> the standard readout simulation that we are supposed to use unless I know
> what I'm doing.
>
>  The steering files I use are:
> EngineeringRun2015TrigPairs1.lcsim
> EngineeringRun2015FullReconMC.lcsim
>
>  I added a section to this existing page, which is supposed to tell us
> which steering files are recommended:
>
>
> https://confluence.slac.stanford.edu/display/hpsg/Steering+files+in+hps-java
>
>  I suggest each subsystem take the responsibility of keeping this page
> (and files) updated.
>
>  I hope this helps track down the issue.
>
>  Thanks,
> Pelle
>
>
>  On Jun 2, 2015, at 11:29 AM, Omar Moreno <[log in to unmask]> wrote:
>
>  Hi,
>
>  Was the issue with the Ecal energies resolved?
>
>  --Omar Moreno
>
> On Tue, May 26, 2015 at 4:35 PM, Kyle McCarty <[log in to unmask]> wrote:
>
>>  Hello Nathan,
>>
>>  I had one a long time ago, but I don't think it exists anymore. It's
>> kind of a pain to do: you have to insert a bunch of empty events between
>> each Monte Carlo event (around 150 is safe) and then run the reconstruction
>> as normal, stopping before clustering. Because the FADC hits are never
>> separated by more than 150 events, you know that every FADC hit belongs to
>> the last event containing raw hits, so you can collect them all and write
>> them out as a single event.
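[The bookkeeping Kyle describes can be sketched in Python. This is
illustrative only: the dict-based "events", the field names, and the
function are assumptions of this sketch, not the hps-java/LCIO data model.]

```python
# Re-associate FADC hits with their source MC event, assuming real events
# were spaced out with ~150 empty events between them so their readout
# windows never overlap.

def regroup_fadc_hits(events):
    """Assign every FADC hit to the most recent event that had raw hits."""
    grouped = {}           # source event index -> list of FADC hits
    current_source = None  # index of the last event containing raw hits
    for i, event in enumerate(events):
        if event.get("raw_hits"):       # a real MC event starts a new group
            current_source = i
            grouped.setdefault(i, [])
        if current_source is not None:
            grouped[current_source].extend(event.get("fadc_hits", []))
    return grouped
```

Because the spacing (150 empty events) exceeds the readout latency, every FADC hit seen before the next raw-hit event can safely be attributed to the previous one.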
>>
>>  - Kyle
>>
>> On Tue, May 26, 2015 at 7:30 PM, Nathan Baltzell <[log in to unmask]>
>> wrote:
>>
>>> Very nice picture explanation!
>>>
>>> Does a driver exist in hps-java to take care of this and merge it back
>>> into a
>>> format like real data, independently of the clustering algorithm?
>>> Shouldn't it?
>>>
>>>
>>>
>>> On May 26, 2015, at 7:02 PM, Kyle McCarty <[log in to unmask]> wrote:
>>>
>>> > Hello Holly,
>>> >
>>> > You aren't using running pedestals in MC- it's not even using this
>>> driver
>>> >
>>> > Do you mean that it's not used by anything in the driver (and could
>>> therefore just be removed entirely), or that it isn't present in the driver?
>>> It is definitely in my version of that steering file.
>>> >
>>> > Holly, what does your driver actually look at for hits? If it looks at
>>> raw hits, you are fine with Monte Carlo. If it looks at FADC hits, though,
>>> you would need to be able to look across multiple events to form your
>>> clusters. Consider the graphic below:
>>> >
>>> > <HitDist.png>
>>> >
>>> > The raw hits (the unprocessed SLiC output) are converted into FADC
>>> hits, but the FADC hits that correspond to the cluster are actually spread
>>> across six different events (FADC hits are only written out every other
>>> event to simulate a clock cycle). Thus, to correctly form this cluster,
>>> you would need to retain all six of those events. Otherwise, you are
>>> probably forming three different clusters from the same cluster (or
>>> possibly just losing energy, since the later hits may be below threshold).
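[A toy model of the timing described above, in Python. The 2 ns per event
and the every-other-event write-out come from the thread; which slots are
the write-out slots, and the function names, are assumptions for
illustration, not the hps-java implementation.]

```python
NS_PER_EVENT = 2  # each readout event represents 2 ns of time (per the thread)

def readout_event_index(hit_time_ns):
    """Event slot that a hit's sample time falls into."""
    return int(hit_time_ns // NS_PER_EVENT)

def is_written_out(event_index):
    """FADC hits are only written out every other event, simulating a
    4 ns clock cycle. Treating the even slots as write-out slots is an
    assumption of this sketch."""
    return event_index % 2 == 0
```

Under this model a cluster whose samples span roughly 12 ns of time really does land in six consecutive readout events, which is why all of them must be retained to rebuild the cluster.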
>>> >
>>> > - Kyle
>>> >
>>> > On Tue, May 26, 2015 at 6:48 PM, Holly Vance <[log in to unmask]> wrote:
>>> > Hi Pelle,
>>> >
>>> > A few things to check:
>>> > What does the timing of the hits in the cluster look like? There is a
>>> time cut set in the ReconClusterer, but it is probably going to be slightly
>>> different for MC (not sure).
>>> >
>>> > In general, these problems tend to arise when making the hits into
>>> readout hits. (You aren't using running pedestals in MC- it's not even
>>> using this driver). I suspect the issue is in EcalRawConverter.
>>> >
>>> > On Tue, May 26, 2015 at 6:32 PM, Hansson Adrian, Per Ola <
>>> [log in to unmask]> wrote:
>>> > Thanks Holly,
>>> >
>>> >
>>> >
>>> > On May 26, 2015, at 3:11 PM, Holly Vance <[log in to unmask]> wrote:
>>> >
>>> >> Hi Pelle,
>>> >>
>>> >> I have a simple steering file here that can take in your non-spaced
>>> events:
>>> >> org.hps.steering.users.holly.EcalReadoutNoPileUp
>>> >>
>>> >
>>> > Cool. That is good to have.
>>> >
>>> >> This will treat each event independently and can tell you if you are
>>> having problems in your readout of the data (it will show you where the
>>> peaks are and whether you did the spacing correctly, etc.). This uses the
>>> full ReconClusterer, which works fine with Monte Carlo (at least I have
>>> never had any problems with it).
>>> >>
>>> >
>>> > Right now I'm trying to use our official simulation (see other email;
>>> I didn't see your reply before sending it), which includes spaced-out
>>> events. The energies of the recon clusters are a little weird. The
>>> positions don't seem crazy, though. Any ideas what could go wrong?
>>> >
>>> > /pelle
>>> >
>>> >
>>> >> -Holly
>>> >>
>>> >>
>>> >> On Tue, May 26, 2015 at 5:50 PM, Kyle McCarty <[log in to unmask]>
>>> wrote:
>>> >> Hello Pelle,
>>> >>
>>> >> Your cluster energy distributions definitely look different from what
>>> I saw (see the attached PDF). What clustering algorithm are you using? You
>>> need to use the GTPClusterer algorithm for it to work right with Monte
>>> Carlo; you might be using GTPOnlineClusterer, which only works for the
>>> EvIO readout. When I run reconstruction for Monte Carlo, I use the
>>> following drivers (note that I didn't include tracking):
>>> >>      • org.hps.conditions.ConditionsDriver
>>> >>      • org.hps.readout.ecal.FADCEcalReadoutDriver
>>> >>      • org.hps.recon.ecal.EcalRawConverterDriver
>>> >>      • org.hps.recon.ecal.cluster.GTPClusterDriver
>>> >>      • org.hps.readout.ecal.FADCPrimaryTriggerDriver
>>> >> You need the FADCEcalReadoutDriver to properly handle Monte Carlo hit
>>> readout, the GTPClusterDriver to make sure that you are simulating the
>>> hardware clustering, and the FADCPrimaryTriggerDriver to simulate the
>>> hardware pair trigger. Are you using these same drivers?
>>> >>
>>> >> - Kyle
>>> >>
>>> >> On Tue, May 26, 2015 at 5:42 PM, Hansson Adrian, Per Ola <
>>> [log in to unmask]> wrote:
>>> >> Hi Again,
>>> >>
>>> >> following up on this topic.
>>> >>
>>> >> Running recon:
>>> >>
>>> >> java -jar target/hps-distribution-3.3.1-SNAPSHOT-bin.jar
>>> /org/hps/steering/recon/EngineeringRun2015FullRecon.lcsim
>>> -DoutputFile=outfile -i bunched-readout.slcio
>>> >>
>>> >> I see ECal energies that don’t look right. Any idea?
>>> >>
>>> >> I see about 180 tracks in these 1342 triggered events. They look OK,
>>> but the number seems low?
>>> >>
>>> >> Am I forgetting something again?
>>> >>
>>> >> /Pelle
>>> >>
>>> >>
>>> >> <Screen Shot 2015-05-26 at 2.22.35 PM.png>
>>> >> <Screen Shot 2015-05-26 at 2.31.44 PM.png>
>>> >>
>>> >>
>>> >> On May 26, 2015, at 1:43 PM, Hansson Adrian, Per Ola <
>>> [log in to unmask]> wrote:
>>> >>
>>> >>>
>>> >>> Ok,
>>> >>>
>>> >>> using 150 bunches on 10k events I get ~13% acceptance.
>>> >>>
>>> >>> Thanks,
>>> >>> pelle
>>> >>>
>>> >>>
>>> >>> Trigger Processing Results
>>> >>> Single-Cluster Cuts
>>> >>> Total Clusters Processed     :: 20295
>>> >>> Passed Seed Energy Cut       :: 14119
>>> >>> Passed Hit Count Cut         :: 11938
>>> >>> Passed Total Energy Cut      :: 10991
>>> >>>
>>> >>> Cluster Pair Cuts
>>> >>> Total Pairs Processed        :: 3168
>>> >>> Passed Energy Sum Cut        :: 2942
>>> >>> Passed Energy Difference Cut :: 2941
>>> >>> Passed Energy Slope Cut      :: 2755
>>> >>> Passed Coplanarity Cut       :: 1342
>>> >>>
>>> >>> Trigger Count :: 1342
>>> >>>
>>> >>> Trigger Module Cut Values:
>>> >>> Seed Energy Low        :: 0.050
>>> >>> Seed Energy High       :: 6.600
>>> >>> Cluster Energy Low     :: 0.060
>>> >>> Cluster Energy High    :: 0.630
>>> >>> Cluster Hit Count      :: 2
>>> >>> Pair Energy Sum Low    :: 0.200
>>> >>> Pair Energy Sum High   :: 0.860
>>> >>> Pair Energy Difference :: 0.540
>>> >>> Pair Energy Slope      :: 0.6
>>> >>> Pair Coplanarity       :: 30.0
>>> >>> FADCPrimaryTriggerDriver: Trigger count: 1342
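[The cut-flow printout above can be checked with a few lines of arithmetic.
The counts are copied from the email; the script itself is just an
illustration of how the ~13% acceptance figure falls out.]

```python
# Cut-flow arithmetic from the trigger printout above.
n_events = 10_000        # MC events processed (from the email)
counts = {
    "clusters":     20295,
    "seed_energy":  14119,
    "hit_count":    11938,
    "total_energy": 10991,
    "pairs":         3168,
    "energy_sum":    2942,
    "energy_diff":   2941,
    "energy_slope":  2755,
    "coplanarity":   1342,
}
acceptance = counts["coplanarity"] / n_events   # triggered / generated
print(f"acceptance = {acceptance:.1%}")         # ~13%, matching the email
```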
>>> >>>
>>> >>>
>>> >>>
>>> >>> On May 25, 2015, at 3:23 PM, Kyle McCarty <[log in to unmask]> wrote:
>>> >>>
>>> >>>> Hello Pelle,
>>> >>>>
>>> >>>> If you are reconstructing everything from a SLiC file, you need to
>>> space out the events, because each one contains an A' event and will
>>> produce weird pile-up otherwise. I usually insert around 150 empty events
>>> between each real event to ensure that the hits and clusters (which are
>>> displaced by around 60 events from the source) do not overlap at all. You
>>> can do this with the command:
>>> >>>>
>>> >>>> java -cp $HPS_JAVA org.hps.users.meeg.FilterMCBunches $INPUT
>>> $OUTPUT -e150 -a
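[What FilterMCBunches does with -e150 can be approximated as follows. This
is a toy model: the real tool operates on LCIO files and has more options,
and whether the padding goes before or between real events is an
implementation detail assumed here.]

```python
def space_out_events(real_events, n_empty=150, empty=None):
    """Insert n_empty empty events ahead of each real event so that readout
    hits from consecutive MC events cannot overlap."""
    spaced = []
    for event in real_events:
        spaced.extend([empty] * n_empty)  # padding between real events
        spaced.append(event)
    return spaced
```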
>>> >>>>
>>> >>>> You should definitely get better acceptance than that as well; I
>>> got somewhere between 15% and 20% acceptance for 40 MeV (I don't have the
>>> exact value on hand).
>>> >>>>
>>> >>>> Note also that there are (or at least were, last I checked)
>>> differences in how the Monte Carlo simulation is stored versus the EvIO
>>> data. In the EvIO data, all the hits and clusters for an event are stored
>>> in the same event, but in Monte Carlo readout they are spaced across
>>> several events, with each event representing 2 ns of time. As such, some
>>> drivers that work for EvIO readout do not work for the Monte Carlo (this
>>> mainly affects clustering). Sho can correct me if I am misrepresenting
>>> something here.
>>> >>>>
>>> >>>> - Kyle
>>> >>>>
>>> >>>> On Mon, May 25, 2015 at 6:08 PM, Hansson Adrian, Per Ola <
>>> [log in to unmask]> wrote:
>>> >>>> Just realized, how is the bunch spacing simulated in the readout
>>> step? I suppose I need to add some amount of “time” between the events or
>>> do a “NoPileUp” simulation...
>>> >>>>
>>> >>>> /Pelle
>>> >>>>
>>> >>>> On May 25, 2015, at 3:04 PM, Hansson Adrian, Per Ola <
>>> [log in to unmask]> wrote:
>>> >>>>
>>> >>>>> Hi,
>>> >>>>>
>>> >>>>> I have some issues getting the MC simulation to run.
>>> >>>>>
>>> >>>>> I’m running readout and recon over some 1.1GeV 40MeV A' MC files
>>> locally with the trunk. Slic file is here:
>>> >>>>>
>>> >>>>>
>>> /nfs/slac/g/hps3/users/phansson/ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad_SLIC-v04-00-00_Geant4-v10-00-02_QGSP_BERT_HPS-EngRun2015-Nominal-v1.slcio
>>> >>>>>
>>> >>>>> I’m using Nominal-v1 detector and I simulate readout with:
>>> >>>>>
>>> >>>>> java -jar target/hps-distribution-3.3.1-SNAPSHOT-bin.jar
>>> ../kepler2/hps-java/steering-files/src/main/resources/org/hps/steering/readout/EngineeringRun2015TrigPairs1.lcsim
>>> -Ddetector=HPS-EngRun2015-Nominal-v1 -Drun=2000
>>> >>>>>
>>> >>>>>
>>> >>>>> I see 245 accepted events out of 10k events, which is much lower
>>> than I would naively expect.
>>> >>>>>
>>> >>>>>
>>> >>>>> I then run recon with:
>>> >>>>>
>>> >>>>> java -jar target/hps-distribution-3.3.1-SNAPSHOT-bin.jar
>>> ../kepler2/hps-java/steering-files/src/main/resources/org/hps/steering/recon/EngineeringRun2015FullRecon.lcsim
>>> -DoutputFile=outfiles/ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad_SLIC-v04-00-00_Geant4-v10-00-02_QGSP_BERT_HPS-EngRun2015-Nominal-v1-recon
>>> -i
>>> outfiles/ap1.1gev40mevall_1_200ux40u_beamspot_gammactau_0cm_30mrad_SLIC-v04-00-00_Geant4-v10-00-02_QGSP_BERT_HPS-EngRun2015-Nominal-v1-readout.slcio
>>> >>>>>
>>> >>>>> And I basically get to the 2nd event, and then it just hangs there
>>> for 5-10 minutes.
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>> Are these the right steps with the latest and greatest sw? Any
>>> ideas?
>>> >>>>>
>>> >>>>> /pelle
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>
>>> >>>
>>> >>>
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>>
>>>
>>
>>
>
>
>

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the HPS-SOFTWARE list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1