[The following is a kind-of consensus between HD/RHUL/SLAC, or wishful
thinking that would provide benefits on the short time scale.]


Hoi,

after a few phone calls, it seems that a new option is emerging.
Below you will find a proposal; please let me know if you disagree.
If I hear nothing from you, I'll send this to vub-recoil after Easter.


The situation
-------------

  o Currently, analysis-20 is being built and is close to release. The
    executable is expected to be about as fast as an OBJY exe ("a bit
    slower"). It should be capable of running on OBJY micro.

  o At the  end of next week, a substantial amount  of skimmed data is
    expected to become available (as  of now it's only 60/fb). By then
    it might be something like 130/fb. Maybe/hopefully.

  o Clare needs (or would like to use) more than just 80/fb for her
    thesis. 

  o The same is more or less true for the b2ulnu analysis (and Ed's
    thesis).

  o The  projection for skimmed  MC CM2 availability is  slipping.  At
    the Wednesday physics meeting we heard "June".


The idea
--------

  o Produce new ntuples with minimal (=no) changes with respect to the
    old "big" ntuples. Except: just produce ROOT files. This allows us
    to use them for analysis immediately and is probably the only way
    to use RUN3/4 data for public results before the end of this year.
 
    This is explicitly NOT a CM2 analysis, just an attempt to get more
    data for the short time scale.

  o Run on RUN4 CM2 data as more becomes available.

  o Run on  MC as soon as possible. We should  test whether running on
    SP5 OBJY is viable.


The plan
--------

  o Ed will provide a set of tags to build IslBrecoilUserApp, based on
    analysis-20. He will do basic validation, i.e., check that it runs.
    This also includes tcl (steering) files for CM2 and OBJY running.

  o Urs will do a bit more validation, looking at all variables in
    the "h1" tree.

  o Royal  Holloway will  organize the production.  I think  this will
    involve the following:

      - Back up the current HBOOK ntuples of Henning/Oliver to mstore
        and/or RAL or somewhere else.

      - Create tcl files for skimmed data. In the following I detail
        what issues must be considered in a low-tech approach (based
        on "run", the run-script with built-in bookkeeping and
        optimized queue saturation :-) Other possibilities exist, of
        course; I just don't know them. Whoever organizes the
        production is free to choose whatever works!

        + The naming scheme for the "basename" should be well
          designed. In the last production we had a bit of a mess and
          it made life difficult. A possible solution is something
          like the following:

             genbch-run1-.....
             genbnu-run1-.....
             genccb-run1-.....
             genuds-run1-.....

             cktbch-run1-.....
             cktbnu-run1-.....
             cktb2u-run1-.....

             b2unre-run1-.....
             b2ures-run1-.....
             b2umix-run1-.....


          NOTE: The total number of files will likely exceed 100,000.
          (We had something like 30k for the previous production.)

          NOTE: We used to have something like 2k events per file. We
          have to think about whether we want to merge the ROOT files
          to reflect the merged CM2 files. Another possibility is to
          have in the filename (in the ..... part above) the start and
          end events of the merged CM2 files (see next item if this
          NOTE is not clear).
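
          As an illustration only (the field layout and widths here
          are made up, not a decision), the basenames could be built
          from a mode tag, run period, file index and, if we go that
          way, the event range in the merged CM2 file:

             # Illustration: gives e.g. "genbch-run1-000123" or, with
             # an event range,
             # "cktb2u-run3-000045-000002000-000003999".
             def basename(mode, run, index, first=None, last=None):
                 name = "%s-run%d-%06d" % (mode, run, index)
                 if first is not None and last is not None:
                     name += "-%09d-%09d" % (first, last)
                 return name

             print(basename("genbch", 1, 123))
             print(basename("cktb2u", 3, 45, first=2000, last=3999))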

        + The  size of  the tcl  files needs to  be optimized  for the
          queue length. (kanga?)

          NOTE: I think this could mean that we cannot run one job per
          CM2 merged skim file. This needs to be studied!!! 
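
          Just to illustrate the kind of splitting that might be
          needed (all numbers below are made up; the real limit
          depends on the queue), one could group collections into
          jobs by a target event count:

             # Made-up numbers: pack (collection, n_events) pairs
             # into jobs so no job exceeds a target event count.
             MAX_EVENTS_PER_JOB = 200000   # assumption

             collections = [("coll-%03d" % i, 150000)
                            for i in range(10)]   # placeholder input

             jobs, current, total = [], [], 0
             for name, nev in collections:
                 if current and total + nev > MAX_EVENTS_PER_JOB:
                     jobs.append(current)
                     current, total = [], 0
                 current.append(name)
                 total += nev
             if current:
                 jobs.append(current)

             for i, job in enumerate(jobs):
                 print("job %d:" % i, job)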
 
        + The tool of choice is probably "BbkDatasetTcl".

        + I think that the tcl files should be in a logical directory
          structure, to avoid too many files per directory:

             $BASE/tcl/SemiExclBreco-2004a/data
             $BASE/tcl/SemiExclBreco-2004a/data/run1
             $BASE/tcl/SemiExclBreco-2004a/data/run2
             $BASE/tcl/SemiExclBreco-2004a/data/run3
             $BASE/tcl/SemiExclBreco-2004a/data/run4

             $BASE/tcl/SemiExclBreco-2004a/mc
             $BASE/tcl/SemiExclBreco-2004a/mc/run1
             $BASE/tcl/SemiExclBreco-2004a/mc/run1/bch
             $BASE/tcl/SemiExclBreco-2004a/mc/run1/bnu
             $BASE/tcl/SemiExclBreco-2004a/mc/run1/ccb
             $BASE/tcl/SemiExclBreco-2004a/mc/run1/uds
             $BASE/tcl/SemiExclBreco-2004a/mc/run1/sig
             $BASE/tcl/SemiExclBreco-2004a/mc/run1/ckt

      - The output root files should  be stored in a way that reflects
        this structure:

             $BASE/output/SemiExclBreco-2004a/data
             $BASE/output/SemiExclBreco-2004a/data/run1
             $BASE/output/SemiExclBreco-2004a/data/run2

             $BASE/output/SemiExclBreco-2004a/mc/run1
             $BASE/output/SemiExclBreco-2004a/mc/run1/bch


      - A few notes on the directories: 

         + It does not  really matter  what names  we choose,  but it
           should be  something that  is consistent and  extensible to
           new productions, which could end up in, e.g.

             $BASE/SemiExclBreco-2004b/

         + We  should avoid too  many subdirectories, but  should make
           sure  that not  too many  files  end up  in one  directory.
           (Note: In the old production,  80/fb data and 240/fb MC, we
           had 11000 gen B+ files in total.)

         + Not all directories need to be physically below $BASE; they
           could be symbolic links to a different disk. But everything
           should be visible from one base location.


      - Of course, the logfiles should be stored similarly: 

             $BASE/log/SemiExclBreco-2004a/data
             $BASE/log/SemiExclBreco-2004a/mc

        and the corresponding subdirectories. If we use "run", this is
        essential.
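
        A minimal sketch for creating the parallel tcl, output and log
        trees (the paths just mirror the layout above; whether they
        live physically under $BASE or are symbolic links to other
        disks does not matter here, and all names are examples):

           import os

           # Sketch: mirror the tcl/output/log layout sketched above.
           # $BASE is taken from the environment; names are examples.
           base  = os.environ.get("BASE", ".")
           prod  = "SemiExclBreco-2004a"
           runs  = ["run1", "run2", "run3", "run4"]
           modes = ["bch", "bnu", "ccb", "uds", "sig", "ckt"]

           leaves  = [os.path.join("data", r) for r in runs]
           leaves += [os.path.join("mc", r, m)
                      for r in runs for m in modes]

           for top in ("tcl", "output", "log"):
               for leaf in leaves:
                   path = os.path.join(base, top, prod, leaf)
                   if not os.path.isdir(path):
                       os.makedirs(path)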

      - The jobs will be run by a bunch of people, organized (and
        tabulated) by someone. "Volunteers" so far are:

             Clare
             Ed
             Henning
             Rolf
             Urs
             Oliver ('s account, at least)
             Other GradStudents

        Given this  amount of manpower, we might  actually get through
        the  unskimmed SP5  OBJY  (700/fb!) on  a relatively(?)  short
        timescale.

  o Disk space might be sufficient once we delete the old HBOOK files
    (from the previous production); we can then ask for more when it
    becomes critical and we have enough momentum.


Cheers,
--U.