Hi,
on the web page where I posted results of systematics due to 
randomization of S/P:

http://www.slac.stanford.edu/~petrella/systsp.html

you can see that this systematic error is not stable, for example when 
cutting on the integrated purity.

Now I'm trying to look at other results from these jobs to see if I can 
find what makes these errors so large, but probably this is also due to 
the S/P ratio and its error.

For example if you look at the correction factors for IP > 0.50
(http://www.slac.stanford.edu/~petrella/tmp/SP_allrew/SPallweights/ip050_allrew/corrallwip050pol1.eps)

you can see that the first bin has a large error (the exact value of the 
correction factor for this bin is S/P = 5.67 +- 5.34)

These numbers (they're on the spreadsheet at
http://www.slac.stanford.edu/~petrella/tmp/SP_allrew/SPallweights/SoverPFullRew.sxc)
come from the double ratio of S/P on MC (0.74 +- 0.13) times the S/P 
ratio on data depleted sample

(http://www.slac.stanford.edu/~petrella/tmp/SP_allrew/SPallweights/ip050_allrew/data_depl_AC_intp0.50_0.001.55.eps)

On the data depleted sample the fitted signal component is 291 +- 31 and 
the fitted background component is 38 +- 35, so the error on the final 
S/P ratio is driven by the background component of the data depleted 
sample... and cutting harder on purity (i.e. keeping less background) 
will tend to make the relative error on the background component even 
larger (at least the statistical error).
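Just to make the error budget explicit, here is a quick check of the quoted numbers using standard uncorrelated Gaussian propagation for ratios/products (it reproduces the 5.67 +- 5.34 above to within rounding, and shows the background term dominating):

```python
from math import sqrt

# Double ratio of S/P on MC (from the spreadsheet)
r_mc, dr_mc = 0.74, 0.13

# Fitted yields on the data depleted sample
s, ds = 291.0, 31.0   # signal component
p, dp = 38.0, 35.0    # background component

# S/P on the data depleted sample, uncorrelated propagation
sp_data = s / p
dsp_data = sp_data * sqrt((ds / s) ** 2 + (dp / p) ** 2)

# Correction factor = (double ratio on MC) x (S/P on data depleted)
corr = r_mc * sp_data
dcorr = corr * sqrt((dr_mc / r_mc) ** 2 + (dsp_data / sp_data) ** 2)

print(f"corr = {corr:.2f} +- {dcorr:.2f}")
# Relative-error budget: the dp/p term is by far the largest
print(f"relative errors: MC {dr_mc / r_mc:.2f}, S {ds / s:.2f}, P {dp / p:.2f}")
```

The relative error on the background (35/38 ~ 0.92) swamps both the MC double-ratio term (~0.18) and the signal term (~0.11), which is why the correction factor error is essentially 100%.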

For the data depleted sample we take S +- dS and P +- dP as they come 
out of the fit and then compute the quantity S/P +- d(S/P). But these 
errors are correlated, aren't they?
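They generally are: in a two-component fit with a fixed total number of events S and P are usually anti-correlated, and the ratio error should then include the covariance term, d(S/P)/(S/P) = sqrt((dS/S)^2 + (dP/P)^2 - 2*rho*dS*dP/(S*P)). A minimal sketch, where the correlation coefficient rho is an assumed input (in practice it would be read off the fit's covariance matrix; the rho values below are illustrative only):

```python
from math import sqrt

def ratio_error(s, ds, p, dp, rho=0.0):
    """Error on S/P including the S-P correlation coefficient rho
    (rho would come from the fit covariance matrix)."""
    r = s / p
    rel2 = (ds / s) ** 2 + (dp / p) ** 2 - 2.0 * rho * ds * dp / (s * p)
    return r, r * sqrt(max(rel2, 0.0))  # guard against rounding below zero

# Fitted yields from the data depleted sample
s, ds, p, dp = 291.0, 31.0, 38.0, 35.0

for rho in (0.0, -0.3, -0.6):   # illustrative correlation values, not from the fit
    r, dr = ratio_error(s, ds, p, dp, rho)
    print(f"rho = {rho:+.1f}: S/P = {r:.2f} +- {dr:.2f}")
```

Note that with rho < 0 the covariance term enters with a plus sign, so an anti-correlation between S and P actually enlarges d(S/P) compared with the uncorrelated formula.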

ciao,
   Antonio