VUB-RECOIL Archives

VUB-RECOIL@LISTSERV.SLAC.STANFORD.EDU

Subject: Re: questions regarding AWG computing resources
From: Heiko Lacker <[log in to unmask]>
Date: Fri, 3 Nov 2006 18:06:32 +0100 (CET)
Content-Type: TEXT/PLAIN
Parts/Attachments: TEXT/PLAIN (86 lines)

Hi Masahiro,

Here is the information I have collected so far from Roberto and Kerstin.
I have not received feedback from Sheila. Hence, I propose to scale
the disk space used by the current Run 1-4 analysis to Run 1-5.
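
For concreteness, such a scaling could go by integrated luminosity. A minimal
sketch in Python (the luminosity values are placeholders, not the actual Run
totals; the 782 GB is the Run 1-4 sum quoted further down):

    # Scale the current Run 1-4 disk usage to a Run 1-5 estimate, assuming
    # usage grows roughly linearly with integrated luminosity.
    lumi_run1_4 = 210.0   # fb^-1 -- placeholder, not the real number
    lumi_run5   = 130.0   # fb^-1 -- placeholder, not the real number
    disk_run1_4 = 782.0   # GB, VubRecoil sum from Roberto (see below)

    disk_run1_5 = disk_run1_4 * (lumi_run1_4 + lumi_run5) / lumi_run1_4
    print(f"Estimated Run 1-5 working space: {disk_run1_5:.0f} GB")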

Cheers,
Heiko


>>> 1) How much working disk space does your AWG have available at
>>> SLAC and/or at your AWG's Tier-A site? How much disk space is
>>> that per active analysis?

Roberto made a quick sum at the end of August, adding up the space 
taken in
/nfs/farm/babar/AWG12/ISL/
/nfs/farm/babar/AWG36/
/nfs/farm/babar/AWG38/
/nfs/farm/babar/AWGsemilep01/

and the result was 782 GB for VubRecoil.
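
The 782 GB came from adding up the usage of the four directories above; one
way to reproduce such a sum (a sketch, assuming ordinary read access to the
NFS paths):

    import os

    # Walk each AWG directory and add up the sizes of all regular files.
    dirs = [
        "/nfs/farm/babar/AWG12/ISL/",
        "/nfs/farm/babar/AWG36/",
        "/nfs/farm/babar/AWG38/",
        "/nfs/farm/babar/AWGsemilep01/",
    ]
    total = 0
    for d in dirs:
        for root, _, files in os.walk(d):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # skip files that vanish or are unreadable
    print(f"Total: {total / 1e9:.0f} GB")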

>>> 2) What fraction of the available disk space is your AWG typically
>>> using? If you are usually close to 100%, how often do you have to
>>> clean up to get running again?

VubRecoil:
In recent times, the fraction has always been close to 100%. Personally,
I (Roberto) had to clean up a couple of times in the last year, and in a
couple of cases Chukwudi and Michael had jobs crash because of disk-space
issues and had to find other space or clean up.

Unfolding:
Currently, about 10 GB of AWG disk space is used at SLAC. This is an amount
we can live with for evaluating systematic uncertainties, but it also means
cleaning up on a regular basis.
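
One way such a regular clean-up can be organized is to list the oldest and
largest files first as deletion candidates; a sketch (the path and the 90-day
age cut are assumptions, not what we actually use):

    import os
    import time

    # List files untouched for more than 90 days, largest first.
    base = "/nfs/farm/babar/AWGsemilep01/"      # example path from above
    cutoff = time.time() - 90 * 24 * 3600
    candidates = []
    for root, _, files in os.walk(base):
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue
            if st.st_mtime < cutoff:
                candidates.append((st.st_size, path))
    for size, path in sorted(candidates, reverse=True)[:20]:
        print(f"{size / 1e6:10.1f} MB  {path}")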

>>> 3) If your available working disk space (per inverse fb of data) at SLAC
>>> and/or your Tier-A site were reduced, how would your AWG cope with that?
>>> Assume the relative reduction would be 10% or 25%.

VubRecoil:
That would be a problem, as we are already running low.

Unfolding:
We would cope with it by moving systematic VVF results to LBL.

>>> 4) Which skims are being used by your AWG? Which of these skims
>>> correspond to currently active analyses (or were recently active,
>>> e.g. for ICHEP 2006 results)?

We use the BSemiExcl skim.

>>> 5) If the available storage space for skims (per inverse fb of data) at
>>> SLAC and/or your Tier-A site were reduced, how would your AWG cope with
>>> that? Assume the same reductions as for the working disk space.

Same as the previous answer. We are already running low and need to find
storage space for the whole of Run 5 (Michael's analysis - and I guess
Sheila's too).

>>> 6) If your AWG is using deep-copy skims, what fraction of the skims
>>> are analyzed at Tier-C sites? Could this fraction be increased?

None. Everything is analysed at SLAC. I (Roberto) tried to run at GridKa
some months ago, but the experience was not successful: too many glitches
and hang-ups, and eventually I gave up.
(Please note that GridKa does not officially host the BSemiExcl skim.
 The performance of GridKa has been significantly improved.)

>>> 7) What is your experience with running on skims at SLAC / your
>>> Tier-A site? Do you think the CPU power is sufficient? If the available
>>> CPU power were reduced, how would your AWG cope with that?

I (Roberto) think that CPU-wise we are fine. Our jobs run smoothly over 
the entire Run 1-4 dataset in a few days.


