Hi Oliver,
> > >
> >
> > I agree with you that the fixed beta option is not optimal, but it should
> > not create such big biases: if something is correctly reconstructed it
> > should not get screwed up.
> >
>
> But Riccardo, the scenario you are describing is actually
> a perfect example of the fact that
>
> E_fit = P_fit * E_reco / P_reco
>
> is the wrong energy definition for your EXCLUSIVE decays
> and will screw up the mass.
> Let's assume for a second that the errors on the Breco
> and the lepton are negligible and only the XSystem
> is varying in the fit. Then a large missing energy (missing mass)
> in the event will automatically be blamed on the XSystem
> (correct?!). Since the XSystem is described only as
> a 3Vector in the fit, this missing mass (energy) can only
> be absorbed by scaling the length of the fitted momentum vector
> of the XSystem, P_fit(X), by a factor SCALE. Therefore,
>
> E_fit(X) = P_fit(X) * E_reco / P_reco = SCALE * P_reco * E_reco / P_reco
>          = SCALE * E_reco
>
> will also be scaled by the same factor, leading to a screwed-up
> mass definition:
>
> M_fit**2 = E_fit(X)**2 - P_fit(X)**2
>          = SCALE**2 * (E_reco**2 - P_reco**2)
>          = SCALE**2 * M_reco**2 (!!!)
>
> Hence the reconstructed mass is just scaled by the factor SCALE.
> Of course, this scaling factor has to be large for events with large
> missing mass!
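The scaling argument above can be checked numerically in a few lines. The numbers below are purely illustrative (not from any real event); the point is only that with the fixed-beta definition E_fit = P_fit * E_reco / P_reco, stretching the 3-momentum by SCALE stretches the fitted mass by exactly the same factor:

```python
import math

# Toy check of the fixed-beta scaling argument (made-up numbers):
# the XSystem is described only by a 3-vector, and the fitted energy
# is defined as E_fit = P_fit * E_reco / P_reco.

E_reco, P_reco = 2.5, 2.0          # reconstructed energy and |p| of X (GeV)
M_reco = math.sqrt(E_reco**2 - P_reco**2)   # = 1.5 GeV

SCALE = 1.3                        # fit stretches |p| to absorb missing energy
P_fit = SCALE * P_reco
E_fit = P_fit * E_reco / P_reco    # fixed-beta energy definition

M_fit = math.sqrt(E_fit**2 - P_fit**2)

# The fitted mass comes out as exactly SCALE times the reconstructed mass:
print(M_fit / M_reco)              # ≈ 1.3 (= SCALE)
```

So any event that needs a large SCALE to absorb its missing energy gets its mass moved by that same large factor, independent of how well the mass itself was measured.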
I think that your example goes in the direction of what I am saying: if the
fit is done properly, then even under this assumption something that is well
reconstructed will not get screwed up.
If everything is correctly reconstructed, E_reco and M_nu will be
distributed according to just the resolution (which you account for in the
fit) around their expectation values. If E_reco is underestimated then
M_nu will be overestimated. This means that SCALE will be less than 1 and
E_reco will be brought towards the expectation value.
If E_reco is overestimated, SCALE will be more than 1 and again the resolution
will improve.
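The compensation claimed here is the standard behaviour of a kinematic fit when the resolution model is right. As a hedged toy (this treats the missing-mass constraint as an independent, resolution-limited estimate of E(X) that the fit combines with the direct reconstruction by an error-weighted average, which is an idealisation, not the actual fit; all numbers are invented):

```python
import random
import statistics

random.seed(1)

E_true = 2.0                        # true X energy (GeV), toy value
sig_reco, sig_con = 0.20, 0.30      # resolutions: direct reco / constraint

E_fit = []
for _ in range(100_000):
    e_reco = random.gauss(E_true, sig_reco)   # direct reconstruction
    e_con = random.gauss(E_true, sig_con)     # constraint-derived estimate
    # error-weighted average, as an idealised kinematic fit would do
    w1, w2 = 1 / sig_reco**2, 1 / sig_con**2
    E_fit.append((w1 * e_reco + w2 * e_con) / (w1 + w2))

# With a correct resolution model the fitted value has a SMALLER spread
# than the raw reconstruction (analytically ~0.166 vs 0.20 here):
print(statistics.stdev(E_fit))
```

In this idealisation the fit can only help a well-reconstructed event; the disagreement above is about what happens when the fixed-beta parameterization makes the constraint act on the wrong variable.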
I agree one could do better, but it should not harm, and I actually showed
that it does not and that the effect I reported was due to missing
particles that the fit correctly tries to account for.
>
> In reality the fit is more complicated, and the scenario given above
> is certainly a simple one. However, it clearly demonstrates that a
> 3Vector parameterization can indeed bias your fitted mass, especially
> if you have events with large missing mass.
My major point is that if the event is properly reconstructed and the
resolution functions assumed are correct, there is no such thing as a
"large missing mass".
> > I think I found an easier (and more comforting) solution: the events that
> > get moved around are actually far from 0 in M_nu^2 (see m_nu^2 vs fitted
> > Mx in
> > http://babar.roma1.infn.it/~faccini/resoMx/mnuxhadfit.eps
> > )
> > This means that I was looking at D0lnuX events that were reconstructed as
> > D0lnu and the kinematic fit was trying to recover the X on a statistical
> > basis.
> >
> > The only missing point is to understand why (actually, whether; the
> > statistics might confuse things) the data worsen more than the MC.
> > One point I could not get from any of the material you provided is what
> > the impact of the resolution on the Breco is and how you account for it.
> > At this point one useful test would be to smear the Breco in the cocktail
> > MC and see if we can achieve a resolution similar to the generic one.
> >
>
> I am not sure that I understand your statement that the cocktail MC
> has a better resolution than the generic MC. Attached to this mail
> you will find a comparison of the Mx resolution obtained from cocktail
> and generic for sp4run2 MC. After a 1-bin (!) sideband subtraction they
> seem to be pretty much identical. Am I missing something?
> By the way, the plots are made with a 0.5 GeV missing mass cut and
> P* > 1.0 GeV. It should match the cuts used for the Vub stuff.
I am sorry, but I disagree that the two plots are identical:
the generic is clearly biased on the high side with respect to the cocktail, and
maybe the resolution is also slightly different.
ciao
ric
