I've suggested this before, but I'll try it again:

Why segment the calorimeters at all?

If we change the Gismo philosophy to recording hit positions instead of
tower energies, then a later stage of processing can apply any segmentation
size it likes.  The compute-intensive step is generating the hits.  By
comparison, assigning the hits to towers is cheap, so a study of
segmentation size should be easy to carry out.
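
To make the point concrete, here is a minimal sketch of that cheap
second stage.  This is not Gismo code; the Hit record and the function
name are just illustrative.  Given raw hits stored as (x, y, energy),
re-binning into towers of any pitch is a single pass over the data:

// Minimal sketch (not Gismo code): re-bin raw calorimeter hits into
// towers of an arbitrary pitch.  All names are illustrative.
#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct Hit {          // one recorded energy deposit
    double x, y;      // position in the calorimeter face plane (cm)
    double e;         // deposited energy (GeV)
};

// Sum hit energies into towers of the requested pitch (cm).  The tower
// index is just the integer part of position / pitch.
std::map<std::pair<int,int>, double>
binIntoTowers(const std::vector<Hit>& hits, double pitch)
{
    std::map<std::pair<int,int>, double> towers;
    for (const Hit& h : hits) {
        int ix = static_cast<int>(std::floor(h.x / pitch));
        int iy = static_cast<int>(std::floor(h.y / pitch));
        towers[{ix, iy}] += h.e;
    }
    return towers;
}

Running this once per candidate pitch over the same stored hits is the
whole segmentation study; nothing has to be re-simulated.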

The standard objection is that dataset size will explode on us.  I
experimented with this by fudging Gismo into believing that all particle hits
were parentless, thus forcing each track's hit on each tower to be recorded.
It's certainly true that data size can increase by as much as a factor of 20
for single high-energy photons (100 GeV), but in the 'bread-and-butter'
region (2-10 GeV) the photon penalty ranged from 3 to 6.  For neutrons, the
penalty ranged from 1.2 to 2.5 across the energy range 1 GeV to 100 GeV.
This is not trivial, but then again, everyone keeps telling me how cheap
disk is these days...

It may be too late to do this for our current Gismo studies, but if we
migrate to Geant4 and have to rewrite our hit/digitization routines, this
is certainly worth considering.
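
For what it's worth, here is a rough sketch of what such a hit routine
might look like in Geant4.  Only the G4VSensitiveDetector interface and
the G4Step calls are real Geant4; the class itself, the RawHit record,
and the storage details are assumptions on my part:

// Rough sketch: a sensitive detector that records raw positions and
// energy deposits instead of accumulating tower sums.
#include "G4Step.hh"
#include "G4ThreeVector.hh"
#include "G4VSensitiveDetector.hh"
#include <vector>

struct RawHit {                // hypothetical persistent hit record
    G4ThreeVector pos;         // deposit position (global frame)
    G4double      edep;        // energy deposited in this step
};

class CalRawHitSD : public G4VSensitiveDetector {
public:
    explicit CalRawHitSD(const G4String& name)
        : G4VSensitiveDetector(name) {}

    G4bool ProcessHits(G4Step* step, G4TouchableHistory*) override {
        G4double edep = step->GetTotalEnergyDeposit();
        if (edep <= 0.) return false;
        // Record where the deposit happened; segmentation is applied
        // later, offline, at whatever pitch a given study wants.
        fHits.push_back({step->GetPreStepPoint()->GetPosition(), edep});
        return true;
    }

    const std::vector<RawHit>& hits() const { return fHits; }

private:
    std::vector<RawHit> fHits; // would be flushed to storage per event
};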

Taking this logic to its natural conclusion, the current round of
modifications to the S&L detectors (sounds like a bailout waiting to happen!)
should concentrate on the distribution of MASS in those detectors.  The truly
expensive part of full simulation is transporting particles through matter,
and any redistribution of matter forces another round of transport.
If we get the mass distribution right and we record only -exact- hits at
reference planes, then studies like calorimeter segmentation or the point
resolution of tracking devices become (computationally) trivial (which
is probably why the tracking guys have been doing this all along!).
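
The tracking case is even easier to illustrate.  Purely as a sketch
(the names and the smearing model are placeholders, not anyone's actual
code), a point-resolution study from stored reference-plane crossings
is just an offline smearing pass:

// Smear exact reference-plane crossings with a Gaussian of width sigma
// to emulate a detector of that point resolution; no re-transport of
// particles through matter is needed.
#include <random>
#include <vector>

struct PlaneHit { double x, y; };  // exact crossing point (cm)

std::vector<PlaneHit> smear(const std::vector<PlaneHit>& exact,
                            double sigma, unsigned seed = 12345)
{
    std::mt19937 gen(seed);
    std::normal_distribution<double> gauss(0.0, sigma);
    std::vector<PlaneHit> out;
    out.reserve(exact.size());
    for (const PlaneHit& h : exact)
        out.push_back({h.x + gauss(gen), h.y + gauss(gen)});
    return out;
}

Repeat with any sigma you like against the same stored crossings.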

My apologies to experienced simulators out there to whom I have just stated
the obvious.

Tony Waite.