Hi Ric,

> > In looking at all this, I am coming back to the earlier suggestion that we should
> > perform a combined fit to the enhanced and depleted samples.  The depleted
> > sample should fix the b --> c background, the enriched sample to extract the b --> u
> > signal. One would need to be careful to treat the errors and correlations correctly.
> > Experienced users of MINUIT could assist here.
> > This should allow us to understand the correlations between C_u and C_c!
> > This should also help to estimate and limit the uncertainty on the s.l. branching ratios by checking the fit quality for different assumptions on the BR, not just the change in fit values.
> >
> this would introduce a dependency on the knowledge of the kaonID
> efficiencies and misidentification rates which would be much larger than
> the SL branching fraction (and worse determined)


I am not sure I follow your argument.
The analysis already depends on the kaon ID,
because it is used to define the two samples
(u-enriched and u-depleted). Therefore, accurate
knowledge of the kaon-tag efficiency is important
for correcting the measured fraction of b->u
(or b->c) events - correct?!

Vera's proposal of extracting Ru (Rc) using both
samples (a parallel extraction with independent PDFs
for each sample) might even have the advantage of being less
sensitive to the kaon efficiency, because you are only
interested in the relative kaon contribution in each sample
(remember: in this approach all events are used to extract
Ru, and NOT only a subset).
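To make the point concrete, here is a minimal sketch of the idea with purely illustrative numbers (the tag probabilities and yields below are hypothetical, not measured values): the kaon tag splits events into a depleted and an enriched sample, and solving the count equations for both samples simultaneously determines the b->u and b->c yields from the relative kaon contribution in each sample.

```python
import numpy as np

# Assumed kaon-tag probabilities (illustrative only): fraction of each
# component that ends up in the kaon-depleted sample.
eps_dep_u = 0.80   # b->u events mostly survive the kaon veto
eps_dep_c = 0.20   # b->c events mostly carry a kaon and are removed

# True yields used to build pseudo-data (hypothetical).
N_u_true, N_c_true = 1000.0, 5000.0

# Expected counts in the (depleted, enriched) samples as a linear
# combination of the two components.
A = np.array([[eps_dep_u,     eps_dep_c],
              [1 - eps_dep_u, 1 - eps_dep_c]])
observed = A @ np.array([N_u_true, N_c_true])

# Using BOTH samples at once, the two yields are fully determined;
# a real analysis would do this as a simultaneous MINUIT fit with
# independent PDFs per sample, but the counting version shows the
# mechanism.
N_u_fit, N_c_fit = np.linalg.solve(A, observed)
print(N_u_fit, N_c_fit)
```

The same system restricted to the depleted sample alone is underdetermined, which is why the single-sample extraction needs the absolute efficiency as an external input.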

In the present extraction, which uses only the kaon-depleted sample,
one would also have to know the overall efficiency in order to
correct back to the full sample - not needed in Vera's
approach.

I have to digest this a bit more, but so far I only
see advantages and no disadvantages in Vera's proposal.

Am I missing something?

HNY!

Oliver