The need for such a list was identified as an action item at the software meeting of 11/30. That action item was completed shortly thereafter and documented at

https://confluence.slac.stanford.edu/pages/viewpage.action?pageId=326518394

Note that this list is intended for the task of skimming the FEE, Møller, di-muon and random triggers and is not intended to be a final list of runs to be used for "physics" analyses.

Norman

________________________________
From: Hps-analysis <[log in to unmask]> on behalf of Bravo, Cameron B. <[log in to unmask]>
Sent: Wednesday, December 8, 2021 8:15 AM
To: Nathan Baltzell <[log in to unmask]>; [log in to unmask] <[log in to unmask]>; hps-software <[log in to unmask]>
Subject: [Hps-analysis] [EXTERNAL] Re: 2021 trigger skims

Hello,

I never saw a run list from Norman. Can you forward that info so the collaboration has access to it?

Thanks,
Cameron
________________________________
From: [log in to unmask] <[log in to unmask]> on behalf of Nathan Baltzell <[log in to unmask]>
Sent: Tuesday, December 7, 2021 7:03 PM
To: [log in to unmask] <[log in to unmask]>; hps-software <[log in to unmask]>
Subject: Re: [Hps-analysis] 2021 trigger skims

Hello All,

After some further preparations, the 2021 trigger skims are launched.

Outputs will be going to /cache/hallb/hps/physrun2021/production/evio-skims.

I broke the run list from Norman into 5 lists, and started with the first 20% in one batch, all submitted.  I'll proceed to the other 4 batches over the holidays, assessing tape usage as we go.

-Nathan

> On Nov 29, 2021, at 3:39 PM, Nathan Baltzell <[log in to unmask]> wrote:
>
> The 10x larger test is done at /volatile/hallb/hps/baltzell/trigtest3
>
> -Nathan
>
>
>> On Nov 29, 2021, at 2:52 PM, Nathan Baltzell <[log in to unmask]> wrote:
>>
>> Hello All,
>>
>> Before running over the entire 2021 data set, I ran some test jobs using Maurik’s EVIO trigger bit skimmer. Here’s the fraction of events kept in run 14750 for each skim:
>>
>> fee 2.0%
>> moll 3.3%
>> muon 1.9%
>> rndm 2.9%
>>
>> In each case, it’s inclusive of all such types, e.g., moll=moll+moll_pde+moll_pair, rndm=fcup+pulser.
>>
>> Are those numbers in line with expectations?  The total is 10%, which is not a problem if these skims are expected to be useful. The outputs are at /volatile/hallb/hps/baltzell/trigtest2 if people are interested in checking things.
>>
>> A 10x larger test is running now and going to /volatile/hallb/hps/baltzell/trigtest3 and should be done in the next couple hours.
>>
>> ************
>>
>> Note, it would be prudent to do this *only* for production runs, those that would be used in physics analysis, to avoid unnecessary tape access.  By that I mean removing junk runs, keeping only those with some significant number of events, and keeping only those with physics trigger settings (not special runs).  For that we need a run list.  I think we have close to a PB, but I remember hearing at the collaboration meeting that at least 20% is not useful for the purposes of trigger bit skimming.
>>
>> -Nathan_______________________________________________
>> Hps-analysis mailing list
>> [log in to unmask]
>> https://mailman.jlab.org/mailman/listinfo/hps-analysis
>


########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the HPS-SOFTWARE list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1
