ATLAS-SCCS-PLANNING-L Archives
ATLAS-SCCS-PLANNING-L@LISTSERV.SLAC.STANFORD.EDU


Subject: Minutes of ATLAS/SCCS Planning Meeting 9th May 2007
From: "Stephen J. Gowdy" <[log in to unmask]>
Date: Wed, 9 May 2007 18:53:54 +0200 (CEST)
Content-Type: TEXT/PLAIN
Parts/Attachments: TEXT/PLAIN (208 lines)

ATLAS SCCS Planning 09May2007
-----------------------------

  9am, SCCS Conf Rm A, to call in +1 510 665 5437, press 1, 3935#

Present: Charlie, Len, Wei, Renata, Stephen

Agenda:

1. DQ2 Status/Web Proxy

    Have the new SLACXRD working, had 90 jobs finish successfully
    yesterday. Each server has 8TB. There is another 9TB on each server
    for local users.

2. Tier-2 Hardware

    Should make the 9TB on each server available via xrootd. One
    advantage of moving is the larger space available. It isn't easy to
    do an "ls" on the xrootd though, so perhaps we should also make
    these available via NFS. Could also have another DQ2 instance to
    have a database of that area.
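
    For illustration, listing what is on the xrootd space means going
    through an xrootd client rather than a plain filesystem "ls"; a
    minimal sketch using the XRootD Python bindings (an assumption:
    these bindings post-date this meeting, and the server URL and path
    are hypothetical):

        from XRootD import client

        # Connect to the xrootd server (hypothetical redirector host).
        fs = client.FileSystem('root://atlas-xrd.slac.stanford.edu:1094')

        # Request a directory listing over the xroot protocol.
        status, listing = fs.dirlist('/atlas/local')
        if status.ok:
            for entry in listing:
                print(entry.name)
        else:
            print('dirlist failed:', status.message)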

    It seems that it is better to use DQ2 rather than dq2_get to bring
    files to SLAC.

    One issue with that xrootd is that there is only one owner of all
    the files. Can use the automatic backup via xrootd so that disk
    space isn't limited due to old unused data hanging around.

    If we had two instances of DQ2 it could bring two copies of the
    same file.

    For the moment make it available via xrootd and see if we should
    make it available via NFS also.

    Lance is using the third box for benchmarking and reliability
    tests. Need to also think about mirroring the system disk. This
    would take up two 500GB disks. Could make the other 800GB available
    somehow, but this would effectively be lost. Probably not worth
    mirroring.

    Not heard anything recently about the CPU, everything was on track
    at last update.

3. xrootd/srm

    No news.

4. AFS Problems causing jobs to intermittently fail

    From Renata about the BaBar problems:

    "First some AFS background.  An AFS file server keeps track of
     client requests with callbacks.  A callback is a promise by the
     file server to tell the client when a change is made to any of
     the data being delivered.  This can have an impact on server
     performance in the following ways:


     1.  The performance of an AFS server can become seriously impaired
     when many clients are all accessing the same read-write
     file/directory and that file/directory is being updated
     frequently.  Every time an update is made, the file server needs to
     notify each client.  So, a large number of clients can be a
     problem even if the number of updates is relatively small.

     2. The problem outlined above can be further exacerbated if a
     large number of requests for status are made on the file/directory
     as soon as the callbacks are broken.  A broken callback will tell
     the client to refetch information, so the larger the number of
     machines, the larger the number of status requests that will occur
     as a result of the broken callback.  And then any additional
     status requests that may be going on will cause further grief.

     The way to avoid callback problems is to avoid writing to the same
     file/directory in AFS from many clients.  The recommended
     procedure in batch is to write locally and copy once to AFS at the
     end of the job.


     The problems that we saw with BaBar:

     First I should say that the problems we saw with BaBar came after
     they started increasing the number of jobs being run as part of
     their skimming.  Before that, the problems were still there, but
     at a low enough level that they didn't have the same impact.

     1.  There was a problem with our TRS utility that was causing
     multiple updates to a file in one of their AFS directories.  This
     was causing the problem described above.  We have since changed
     the TRS utility to avoid making that update.

     2.  The BaBar folks were launching 1000s of batch jobs at once
     which were accessing the file(s) on one server in such a way that
     it caused a plunge in availability.  They have since changed the
     way they run by keeping the level of batch jobs up so that 1000s
     don't hit all at the same time, but are spread out.  We are still
     trying to figure out what the jobs are doing at startup that cause
     the problem (writing to AFS?), but the bypass has been working.  I
     have our AFS support people looking into it.

     3.  The BaBar folks also fixed a problem in their code that was
     launching 10s of 1000s of 1 minute batch jobs.  This was putting a
     heavy load on the batch system because it had to spend much/all of
     its time scheduling, in addition to the impact on AFS.

     4.  The BaBar code does huge numbers of accesses to files under
     /afs/slac/g/babar.  They suspect that their tcl files are part of
     the problem and they are going to move those files to readonly
     volumes.  This will spread the load across multiple machines.
     Unfortunately the BaBar group space has grown over time so that
     setting it up to be readonly now is a daunting task.  At the
     moment they have a parallel readonly volume that they will be
     using for the tcl space.  A little AFS background on readonly
     volumes....the readonly path through AFS requires that all volumes
     (mountpoints) along the way be readonly.  So, in the case of the
     atlas volume /afs/slac/g/atlas/AtlasSimulation for example,
     /afs/slac/g/atlas would have to be set up with readonlies in order
     for AtlasSimulation to be set up with readonlies.  So if you think
     some of your code would benefit from having the load spread across
     multiple fileservers in readonly volumes, it would be best to set
     up time to switch /afs/slac/g/atlas to be readonly now, before
     things get any more complicated."
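
     The "write locally, copy once to AFS" recommendation above,
     sketched as a minimal batch-job pattern (the scratch and AFS paths
     here are hypothetical, not actual SLAC paths):

         import os
         import shutil
         import tempfile

         # Hypothetical destination in AFS group space, and node-local scratch.
         AFS_DEST = '/afs/slac/g/atlas/work/results'
         scratch = tempfile.mkdtemp(prefix='job-', dir='/scratch')

         # All intermediate writes go to local disk, not AFS, so the AFS
         # file server never has to break callbacks while the job runs.
         out_path = os.path.join(scratch, 'job_output.txt')
         with open(out_path, 'w') as out:
             out.write('job results\n')

         # One copy to AFS at the very end of the job.
         shutil.copy(out_path, AFS_DEST)
         shutil.rmtree(scratch)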

    And from Len about read-only volumes:

    "I thought I should add some comments about why we have not pushed
     the use of read-only clones more heavily.

     The AFS command to update read-only clones from the read-write
     volume is 'vos release'.  This is a privileged AFS command and the
     privilege is global, that is, it is not attached to particular
     volumes: if you've got this privilege, you can vos release any
     cloned volume in the AFS cell.  (IIRC, the same privilege allows
     you to run other privileged vos commands.)

     We have a SLAC-written wrapper, 'vos_release', for the native AFS
     command that allows AFS package maintainers to do vos releases for
     the volumes in their packages.  The authorization scheme for this
     wrapper makes use of our naming conventions for package volumes
     and for the AFS groups in package space.  However, AFS group space
     is much less regular than package space, and our simple wrapper
     would not scale well if we tried to provide fine-grained authorization
     for vos releases in group space.  What we are currently looking
     into for BaBar is to define a single AFS group whose members would
     be able to do a vos release for any cloned BaBar volume (all such
     volume names begin with 'g.bbr').  We have also asked that BaBar
     keep the number of people in the AFS group small (e.g., 5-10).

     With this sort of scheme, you probably only want to clone volumes
     that change infrequently.  This, coupled with the need to have
     clones on all parent volumes, implies constraints on how the space
     is organized."

     With ATLAS we have seen some files that could not be read. Would
     have expected a job to wait a long time rather than conclude that
     the file doesn't exist. Have seen problems like this but not been
     able to track them down. The ATLAS problems did seem to correlate
     with when BaBar had problems.

     Could make the top level read-only. This will mean separating out
     some things from that volume as it should be small. Need to build
     in some sort of authorisation scheme to allow ATLAS folk to do the
     "vos release" on ATLAS space. Will provide a wrapper command that
     communicates with a privileged server to do the actual "vos
     release". Not talked to Alf yet about this, need to discuss with
     him to see what schemes are reasonable to implement.
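
     As a rough sketch of the kind of wrapper scheme being discussed
     (the volume prefix, AFS group name and privilege handling below
     are hypothetical, not the existing SLAC vos_release wrapper):

         import getpass
         import subprocess
         import sys

         ALLOWED_PREFIX = 'g.atlas'          # hypothetical ATLAS volume prefix
         RELEASE_GROUP = 'atlas:vosrelease'  # hypothetical AFS group

         def group_members(group):
             # "pts membership" lists the members of an AFS protection
             # group; the first output line is a header, the rest names.
             out = subprocess.run(['pts', 'membership', group],
                                  capture_output=True, text=True,
                                  check=True).stdout
             return [line.strip() for line in out.splitlines()[1:]]

         def release(volume):
             # Only volumes in the experiment's namespace, and only for
             # members of the designated group.
             if not volume.startswith(ALLOWED_PREFIX):
                 sys.exit('refusing: %s is not an %s volume'
                          % (volume, ALLOWED_PREFIX))
             if getpass.getuser() not in group_members(RELEASE_GROUP):
                 sys.exit('refusing: %s is not in %s'
                          % (getpass.getuser(), RELEASE_GROUP))
             # The release itself still needs AFS admin privilege, so in
             # practice the request would be handed to a privileged
             # server/helper rather than run directly.
             subprocess.run(['vos', 'release', volume], check=True)

         if __name__ == '__main__':
             release(sys.argv[1])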

     One issue might be that the ATLAS release remembers where it is
     installed so it might remember the read-write path instead of the
     read-only one.

     Could having the NFS space mapped to users via AFS be a problem?
     Don't believe so, but the fact that NFS opens and closes files for
     each access might cause some worry.

     Could replicate the top level volume three times.

     Will check if we're still seeing problems running ATLAS
     software. Will report it to unix-admin next time we see it.

5. AOB

    None.

Action Items:
-------------

070509 Stephen	Split up top-level AFS volume, requests to unix-admin

070502 Stephen	Email Gordon about his action item
        070509 Done.

070502 Stephen	Arrange meeting about ATLAS TAG data on PetaCache
        070509 Not done yet.

070502 Wei	Check CA certificate update mechanism
        070509 Not done yet but believe VDT is the right way.

070321 Gordon   Discuss perception of SLAC Tier-2 with external folk.
        070404 no info
        070411 no info
        070509 No info.


