Hi Daniele,
I believe that a sensible request is 1 TB, since in a month or so we should
be able to get the hbooks out. I still vote for filtering the events.
	ciao
	ric

______________________________________________________
Riccardo Faccini
Universita' "La Sapienza" & I.N.F.N. Roma
Tel: +39/06/49914798  Fax: +39/06/4957697
http://www.slac.stanford.edu/~rfaccini
Dipartimento di Fisica, Univ. La Sapienza, P.le Aldo Moro 2, I-00185 Roma

"I don't understand what you say, but I believe I disagree"

On Wed, 14 Apr 2004, Daniele del Re wrote:

>
> Hi,
>
>  I talked to Jeff. I told him that recoil production has usually run on
> ISL resources and that it would be nice to have new "recoil" disk
> space for the new production (run1-3 sp5).
>  At the moment there are many requests to Jeff and Howard. He said
> that we can have something now but not the full request. He suggested
> sending him a timescale (200 GB now, 200 GB in three weeks, the
> remainder in one month) and they will manage to satisfy our requests.
>
>  I would propose asking for 200 GB now, more in two weeks, and the
> remainder later (~1 month). What is the new total disk space we need
> (after removing the useless stuff)? 1 TB?
>
>  Let me know
>
>  Daniele
>
>
>
> >
> > Hoi,
> >
> > modulo any errors in my computations, the 2004 production for 150/fb
> > of data and 500/fb of BB MC will need
> >
> >   1.3TB     disk space (+/- 150GB)
> >   O(60000)  CPU hours (assuming that the tagbits are functional in SP5)
> >   O(150000) CPU hours (worst case)
> >
> > Deleting  the moments-analysis (and  other) hbook  files will  free up
> > O(750GB) of AWG diskspace, i.e., we'd need to find about 600GB.
> >
> > Neither  scenario  is a  serious  problem  to  achieve in  two  months
> > (barring catastrophic SCS/batch breakdowns). See below for details.
> >
> > Seems  like a good  deal to  get two  theses out  by fall  and produce
> > publication-ready results before  BELLE publishes their combination of
> > b->sg and b->ulnu as another "first" ...
> >
> >
> > Cheers,
> > --U.
> >
> >
> > ----------------------------------------------------------------------
> > 1. Current ISL disks
> > ====================
> >
> > We have the following AWG disks:
> >
> >  shire01>df -k  /nfs/farm/babar/AWG37/.  /nfs/farm/babar/AWG23/.  /nfs/farm/babar/AWG18/.
> >  Filesystem            kbytes    used   avail capacity  Mounted on
> >  sulky25:/AWG18       619788288 616803143 2800045   100%    /a/sulky25/AWG18
> >  sulky26:/AWG23       619788288 613374726 6013098   100%    /a/sulky26/AWG23
> >  sulky13:/AWG37       619788288 607730751 11310159    99%    /a/sulky13/AWG37
> >
> > They contain a mixture of files:
> > -----------------------------------------------------------------------------
> > Disk       User            Size (kB)     What
> > -----------------------------------------------------------------------------
> > AWG18      vub-recoil      515948923     ISL/sx-080702: gen BB MC
> >                                                         data
> >                                                         skimmed data rfiles
> >            moments         -
> >            miscellaneous
> >
> > AWG23      vub-recoil      107038654     ISL/sx-080702: signal MC
> >                                                         cocktail MC
> >                                                         skimmed MC rfiles
> >                                                         hbook_gen_mar ???
> >                                          ISL/VubAna-out: DELETE ???
> >            moments         498613341
> >
> > AWG37      vub-recoil      262869337     ISL/sx-080702: skimmed MC rfiles
> >                                          ISL/sx-080702/newgenbb: gen BB MC
> >            vub-combo       152832512
> >            moments         191845657
> > -----------------------------------------------------------------------------
> >
> > In vub-recoil space the following HBOOK files are around (why??):
> > ----------------------------------------------------------------------
> >     AWG23:  ca 50GB in ISL/sx-080702/hbook_gen_mar/ and
> >                        ISL/sx-080702/newsig/output/outputdir/
> >
> >     AWG18:  ca 12GB in ISL/sx-080702/newsignal/output/outputdir/ and
> >                        ISL/sx-080702/signalMC/FIXED/
> > ----------------------------------------------------------------------
> >
> >
> > There is  a minor problem  in that many  of the files  and directories
> > belong to Alessio,  and his account is gone. Will have  to send a mail
> > to SCS.
> >
> >
> > We also occupy diskspace of group EC:
> >
> >  shire01>du -ks /nfs/farm/g/ec/u05/users/*
> >  2126409         /nfs/farm/g/ec/u05/users/asarti
> >  509917864       /nfs/farm/g/ec/u05/users/henning
> >
> > vub-recoil does not  use this diskspace, I think  (I found no symlinks
> > to this disk  and nothing in our chains).  The  story is different for
> > the moments-analysis, I guess.
> >
> >
> > -> There  are hbook files  to be  deleted, and  some logfiles  are not
> >    gzip'ed. We can free up >50 GB on relatively short notice, I guess.
> >
> >
> >
> > 2. CPU and Disk Usage for old production
> > ========================================
> >
> > I assume that we dump the same ntuples (root format) as before.
> >
> >  o ca. 25% of the generic BB allEvents make it into the ntuples.
> >  o Without tagbits, the jobs are CPU limited:
> >
> >         barb: 1.4sec CPU/allEvent
> >         noma: 0.7sec CPU/allEvent
> >
> >    The wall clock  time is only marginally (<5%  .. 10%) higher. We'll
> >    have to see how this is with a CM2 executable on OBJY.
> >
> >  o  Running with  tagbits  was mostly  relevant  in the  old (K0S  bug
> >    affected) MC. I  cannot find tcl, log and  rootfiles for this case.
> >    Since  the jobs  are CPU  limited I  assume that  the time  gain is
> >    proportional to the events skipped.
> >
> >    The skim  fractions are 20% (somewhat  less than what  I see above,
> >    but this may be due to updated(?) purity tables), see the full list
> >    in http://www.slac.stanford.edu/BFROOT/www/Physics/Analysis/AWG/EHBDOC/skims/allskims.html
> >
> >  o Diskspace: 7.5 kB/tupleEvent
> >
> >
> > The above was generic BB MC. Now for data:
> >
> >  o CPU:       3.6sec CPU/filterEvent, 0.16sec CPU/allEvent
> >  o Disk:      4.5 kB/tupleEvent
> >
> >
> >
> > 3. Projection for new production
> > ================================
> >
> > Data: assume 150/fb skimmed CM2
> > -------------------------------
> >
> > There are 1.5e7 allEvents/fb-1 (see http://www.slac.stanford.edu/BFROOT/www/Computing/DataQuality/SkimmedData.html)
> >
> > This implies for the semiExcl skim (with a rate of 4% on data)
> >
> >   -> 150 * 1.5e7 * 0.04  = 9.0e7 events
> >   -> 9e7*4.5kB = 4.1e8kB = 410 GB
> >
> > Assuming 1sec CPU/event, this is 25000 CPU hours. (This assumption may
> > be somewhat optimistic with CM2 ...)  This is what a user can get from
> > the compute farm in about ONE month (if "run" is running 24/7).
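> >
> > As a quick cross-check, the same arithmetic in a short Python sketch
> > (variable names are mine; the 1sec/event is the assumption above):
> >
> >     lumi_fb     = 150      # /fb of skimmed CM2 data
> >     all_per_fb  = 1.5e7    # allEvents per /fb (SkimmedData page)
> >     skim_rate   = 0.04     # semiExcl skim rate on data
> >     kb_per_evt  = 4.5      # kB per tuple event (Sec. 2)
> >     sec_per_evt = 1.0      # assumed CPU sec per event with CM2
> >
> >     n_evt = lumi_fb * all_per_fb * skim_rate        # 9.0e7 events
> >     print(n_evt * kb_per_evt / 1e6, "GB")           # ~410 GB
> >     print(n_evt * sec_per_evt / 3600, "CPU hours")  # 25000 h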
> >
> >
> > generic BB MC: first assume 400/fb unskimmed SP5 OBJY
> > -----------------------------------------------------
> >
> > This is ca. 420e6 allEvents. Assume 20% with semiExcl BRECO skim.
> >
> >   -> 0.2 * 420e6 = 8.4e7 events
> >   -> 8.4e7*7.5kB = 630GB
> >
> > Assuming again 1sec CPU/event and the availability of the tag bits,
> > this is 23000 CPU hours. If the "skim rate" is higher (because the tag
> > bits are based on older purity tables(?)), the disk space could
> > increase by up to 25%, to 780GB.
> >
> > If the tagbits are not available, the total time would be 125k hours,
> > doable for a team in one month.
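> >
> > The corresponding sketch for the generic BB MC (again, 1sec/event and
> > the 25% skim-rate inflation are the assumptions above):
> >
> >     n_all  = 420e6                     # allEvents in 400/fb SP5
> >     n_skim = 0.20 * n_all              # 8.4e7 with semiExcl BRECO skim
> >     print(n_skim * 7.5 / 1e6, "GB")    # 630 GB (up to ~780 GB if the
> >                                        # skim rate comes out 25% higher)
> >     print(n_skim / 3600, "CPU hours")  # ~23000 h with tagbits
> >     print(n_all  / 3600, "CPU hours")  # ~117000 h without, i.e. the
> >                                        # 125k hours quoted above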
> >
> >
> > Now add 120/fb of skimmed CM2 SP6
> > ---------------------------------
> > This is ca. 24e6 events for the ntuple:
> >
> >   -> 24e6 * 7.5kB = 180GB
> >
> > The time to run on this is small, O(6000h).
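> >
> > And the SP6 piece in the same style:
> >
> >     n_sp6 = 24e6                      # tuple events, 120/fb skimmed CM2 SP6
> >     print(n_sp6 * 7.5 / 1e6, "GB")    # 180 GB
> >     print(n_sp6 / 3600, "CPU hours")  # ~6700 h, i.e. O(6000h)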
> >