Hi all, 

I ran a test to estimate the systematic uncertainty due to the
uncertainty on S/P in the mES data fits, in the following way:

1) Take the S/P ratios determined as 

S/P(data_enriched) = [S/P(MC_enriched) / S/P(MC_depleted)] * S/P(data_depleted)

I have used the values I have been playing with recently, i.e.
#mx_l mx_h    corr  err_corr
 0.00 1.55   1.499 +-  0.495
 1.55 1.90   2.688 +-  0.655
 1.90 2.20   1.801 +-  0.296
 2.20 2.50   1.896 +-  0.611
 2.50 2.80   1.165 +-  0.468
 2.80 3.10   0.637 +-  0.311
 3.10 3.40  19.367 +- 34.585
 3.40 3.70   1.524 +-  1.610
 3.70 4.20   8.180 +- 31.833
 4.20 5.00   0.555 +-  6.639

No attempt to fit an n-th order polynomial: I just take the values as
they come out of the individual bin-by-bin fits on depleted data and on
enriched and depleted MC (a toy sketch of this double ratio follows the
list below).
Note that the relative errors are quite large (e.g. 33% on the first
bin, 25% on the second, 16% on the third, 32% on the 4th, and growing
as mX increases).

2) Randomize the 10 values above simultaneously, according to a Gaussian
distribution whose mean is the correction (column corr) and whose sigma
is the uncertainty (column err_corr). The random number is of course
different for each mX bin (also sketched below).

3) Fit with VVF, using the randomized S/P of point 2)

4) Go back to 2), change the random seed, and repeat 100 times
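
In code, step 1) is just a double ratio with the relative errors added
in quadrature. A minimal sketch in Python, with made-up S/P inputs for
one mX bin (the real numbers come out of the mES fits, not shown here):

import math

# hypothetical fitted S/P values for ONE mX bin -- illustrative
# numbers only, not the actual fit results
sp_mc_enr, e_mc_enr = 2.10, 0.15   # S/P(MC enriched)
sp_mc_dep, e_mc_dep = 1.40, 0.12   # S/P(MC depleted)
sp_da_dep, e_da_dep = 1.00, 0.20   # S/P(data depleted)

# S/P(data_enriched) = [S/P(MC_enriched) / S/P(MC_depleted)] * S/P(data_depleted)
corr = sp_mc_enr / sp_mc_dep * sp_da_dep

# relative errors in quadrature, inputs taken as uncorrelated
rel = math.sqrt((e_mc_enr / sp_mc_enr)**2 +
                (e_mc_dep / sp_mc_dep)**2 +
                (e_da_dep / sp_da_dep)**2)
print("corr = %.3f +- %.3f  (%.0f%% relative)" % (corr, corr * rel, 100 * rel))

Steps 2)-4) then amount to a toy loop like the one below. Again only a
sketch: numpy is assumed, and run_vvf_fit is a dummy stand-in for the
actual VVF fit job (which is what the 100 batch jobs really run):

import numpy as np

# central corrections and errors, from the table in step 1)
corr = np.array([1.499, 2.688, 1.801, 1.896, 1.165,
                 0.637, 19.367, 1.524, 8.180, 0.555])
err  = np.array([0.495, 0.655, 0.296, 0.611, 0.468,
                 0.311, 34.585, 1.610, 31.833, 6.639])

def run_vvf_fit(sp):
    # stand-in for the real VVF fit, which returns the fitted BRBR;
    # a constant dummy here, only so that the sketch runs end to end
    return 0.029

brbr = []
for seed in range(1, 101):            # step 4): 100 toys, a new seed each time
    rng = np.random.default_rng(seed)
    sp_toy = rng.normal(corr, err)    # step 2): one Gaussian draw per mX bin
    brbr.append(run_vvf_fit(sp_toy))  # step 3): refit with the randomized S/P

brbr = np.array(brbr)
print(brbr.mean(), brbr.std())        # width/mean gives the systematic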

Results of the 100 jobs are in 
http://www.slac.stanford.edu/~bozzi/scra/Ibu_SP_*
*=1,...,100

Take the (width/mean) ratio of the resulting 100 fits as the systematic
uncertainty:

yakut02(~:) grep "BRBR           " ~bozzi/scra/Ibu_SP_*/*dat |
  awk 'BEGIN{sum = 0; sum2 = 0}
       {sum += $3; sum2 += $3*$3; num++}
       END{print sum/num; print sqrt(sum2/num - sum*sum/num/num)}'
0.0291231
0.00188286

The relative uncertainty is 0.00188/0.02912 = 6.46%, i.e. 3.2% on Vub
(half the relative error on the BR, since Vub scales as its square
root). This is somewhat lower than the naive argument below, which
gives the error on the fitted Vub events in the first bin, and which
yields about twice as much (13.2%) on BRBR. I think the reason is that
the errors on the first 4 bins are comparable, which reduces the lever
arm and therefore the variation in the first bin.

Quite promising, isn't it? 

Concezio. 


PS: here is the naive argument on the uncertainty on the number of
signal events in the first bin, which translates into the uncertainty
on BRBR. We have

N_signal = N_data - N_argus - N_peaking 

N_peaking = N_signal / corr
(corr is the S/P correction factor)

Substituting and solving for N_signal:

N_signal * (1 + 1/corr) = N_data - N_argus
N_signal = [corr / (1 + corr)] * [N_data - N_argus]

Error propagation gives: 

delta(N_signal) / N_signal = [delta(corr) / corr] / (1 + corr)

Taking corr = 1.499 and delta(corr)/corr = 0.33, we get

delta(N_signal) / N_signal = 0.33 / 2.499 = 13.2%
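
The same arithmetic in two lines of Python, if you want to plug in the
other bins (a trivial check, nothing more):

corr, rel = 1.499, 0.33             # first mX bin: corr and delta(corr)/corr
print("%.3f" % (rel / (1 + corr)))  # -> 0.132, i.e. 13.2% on N_signal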