ATLAS SCCS Planning 09May2007
-----------------------------
9am, SCCS Conf Rm A, to call in +1 510 665 5437, press 1, 3935#
Present: Charlie, Len, Wei, Renata, Stephen
Agenda:
1. DQ2 Status/Web Proxy
The new SLACXRD setup is working; 90 jobs finished successfully
yesterday. Each server has 8TB, and there is another 9TB on each
server for local users.
2. Tier-2 Hardware
Should make the 9TB on each server available via xrootd. One
advantage of moving is the larger space available. It isn't easy to
do an "ls" on the xrootd space though, so perhaps we should also
make these available via NFS. Could also run another DQ2 instance
to keep a catalogue of that area.
It seems better to use DQ2 rather than dq2_get to bring files to
SLAC.
One issue with that xrootd space is that there is only one owner of
all the files. Can use the automatic backup via xrootd so that disk
space isn't tied up by old unused data hanging around.
If we had two instances of DQ2, they could bring in two copies of
the same file.
For the moment, make it available via xrootd and see whether we
should make it available via NFS as well.
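As a rough sketch of what access to that space could look like (the
host name and paths here are hypothetical, and the dirlist command
assumes the xrd client of that era):

    # Copy a file from the xrootd space to local disk (hypothetical names):
    xrdcp root://slacxrd.slac.stanford.edu//atlas/data/file.root /tmp/file.root
    # Directory listing via the xrd client; this is the step that is less
    # convenient than a plain "ls", hence the interest in an NFS view:
    xrd slacxrd.slac.stanford.edu dirlist /atlas/data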
Lance is using the third box for benchmark and reliability tests.
Need to also think about mirroring the system disk; this would take
up two 500GB disks. Could make the other 800GB available somehow,
but it would effectively be lost. Probably not worth mirroring.
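For reference, if we did mirror the system disk with Linux software
RAID, the setup would look roughly like this (device names are
assumptions):

    # Build a RAID-1 mirror from two 500GB drives (hypothetical devices):
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # Check the progress of the initial resync:
    cat /proc/mdstat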
Not heard anything recently about the CPU; everything was on track
at the last update.
3. xrootd/srm
No news.
4. AFS Problems causing jobs to intermittently fail
From Renata about BaBar problems:
"First some AFS background. An AFS file server keeps track of
client requests with callbacks. A callback is a promise by the
file server to tell the client when a change is made to any of
the data being delivered. This can have an impact on server
performance in the following ways:
1. The performance of an AFS server can become seriously impaired
when many clients are all accessing the same read-write
file/directory and that file/directory is being updated
frequently. Every time an update is made, the file server needs to
notify each client. So, a large number of clients can be a
problem even if the number of updates is relatively small.
2. The problem outlined above can be further exacerbated if a
large number of requests for status are made on the file/directory
as soon as the callbacks are broken. A broken callback will tell
the client to refetch information, so the larger the number of
machines, the larger the number of status requests that will occur
as a result of the broken callback. And then any additional
status requests that may be going on will cause further grief.
The way to avoid callback problems is to avoid writing to the same
file/directory in AFS from many clients. The recommended
procedure in batch is to write locally and copy once to AFS at the
end of the job.
The problems that we saw with BaBar:
First I should say that the problems we saw with BaBar came after
they started increasing the number of jobs being run as part of
their skimming. Before that, the problems were still there, but
at a low enough level that they didn't have the same impact.
1. There was a problem with our TRS utility that was causing
multiple updates to a file in one of their AFS directories. This
was causing the problem described above. We have since changed
the TRS utility to avoid making that update.
2. The BaBar folks were launching 1000s of batch jobs at once
which were accessing the file(s) on one server in such a way that
it caused a plunge in availability. They have since changed the
way they run by keeping the level of batch jobs up so that 1000s
don't hit all at the same time, but are spread out. We are still
trying to figure out what the jobs are doing at startup that causes
the problem (writing to AFS?), but the bypass has been working. I
have our AFS support people looking into it.
3. The BaBar folks also fixed a problem in their code that was
launching 10s of 1000s of 1 minute batch jobs. This was putting a
heavy load on the batch system because it had to spend much/all of
its time scheduling, in addition to the impact on AFS.
4. The BaBar code does huge numbers of accesses to files under
/afs/slac/g/babar. They suspect that their tcl files are part of
the problem and they are going to move those files to readonly
volumes. This will spread the load across multiple machines.
Unfortunately the BaBar group space has grown over time so that
setting it up to be readonly now is a daunting task. At the
moment they have a parallel readonly volume that they will be
using for the tcl space. A little AFS background on readonly
volumes....the readonly path through AFS requires that all volumes
(mountpoints) along the way be readonly. So, in the case of the
atlas volume /afs/slac/g/atlas/AtlasSimulation for example,
/afs/slac/g/atlas would have to be set up with readonlies in order
for AtlasSimulation to be set up with readonlies. So if you think
some of your code would benefit from having the load spread across
multiple fileservers in readonly volumes, it would be best to set
up time to switch /afs/slac/g/atlas to be readonly now, before
things get any more complicated."
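To illustrate the write-locally-and-copy-once recommendation from
earlier in Renata's note, a minimal batch job sketch, assuming LSF
and with hypothetical paths and job command:

    #!/bin/bash
    # Do all I/O on node-local scratch while the job runs...
    WORKDIR=/scratch/$USER/$LSB_JOBID     # LSB_JOBID is set by LSF
    mkdir -p "$WORKDIR" && cd "$WORKDIR"
    run_my_job > output.log               # hypothetical job command
    # ...and touch AFS exactly once, at the end of the job.
    cp output.log /afs/slac/g/atlas/work/$USER/
    rm -rf "$WORKDIR"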
And from Len about read-only volumes:
"I thought I should add some comments about why we have not pushed
the use of read-only clones more heavily.
The AFS command to update read-only clones from the read-write
volume is 'vos release'. This is a privileged AFS command and the
privilege is global, that is, it is not attached to particular
volumes: if you've got this privilege, you can vos release any
cloned volume in the AFS cell. (IIRC, the same privilege allows
you to run other privileged vos commands.)
We have a SLAC-written wrapper, 'vos_release', for the native AFS
command that allows AFS package maintainers to do vos releases for
the volumes in their packages. The authorization scheme for this
wrapper makes use of our naming conventions for package volumes
and for the AFS groups in package space. However, AFS group space
is much less regular than package space, and our simple wrapper
would not scale well if we tried to provide fine-grained authorization
for vos releases in group space. What we are currently looking
into for BaBar is to define a single AFS group whose members would
be able to do a vos release for any cloned BaBar volume (all such
volume names begin with 'g.bbr'). We have also asked that BaBar
keep the number of people in the AFS group small (e.g., 5-10).
With this sort of scheme, you probably only want to clone volumes
that change infrequently. This, coupled with the need to have
clones on all parent volumes, implies constraints on how the space
is organized."
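For reference, the native AFS sequence for creating and releasing a
read-only clone looks roughly like this (server, partition, and
volume names here are hypothetical):

    # Define read-only replica sites for a volume on two file servers:
    vos addsite afssvr1 /vicepa g.atlas
    vos addsite afssvr2 /vicepb g.atlas
    # Push the current read-write contents out to all read-only sites;
    # this is the privileged step the wrappers above control:
    vos release g.atlas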
With ATLAS we have seen some files that could not be read. Would
have expected a job to wait a long time, not to think the file
doesn't exist. Have seen problems like this but not been able to
track them down. The ATLAS problems did seem to correlate with
when BaBar had problems.
Could make the top-level volume read-only. This will mean
separating out some things from that volume, as it should be small.
Need to build in some sort of authorisation scheme to allow ATLAS
folk to do the "vos release" on ATLAS space. Will provide a wrapped
command that communicates with a privileged server that does the
actual "vos release". Not talked to Alf yet about this; need to
discuss with him to see what schemes are reasonable to implement.
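A minimal sketch of the checks the unprivileged side of such a
wrapper might do before passing the request to the privileged
server (the group name, volume prefix, script, and helper are all
hypothetical):

    #!/bin/bash
    # Hypothetical wrapper: members of one AFS group may request a
    # "vos release" of any ATLAS volume (names beginning with "g.atlas").
    VOLUME=$1
    case "$VOLUME" in
        g.atlas*) ;;                                  # volume prefix check
        *) echo "not an ATLAS volume" >&2; exit 1 ;;
    esac
    # "pts membership" lists the members of an AFS group:
    if pts membership g.atlas.release | grep -qw "$(whoami)"; then
        send_release_request "$VOLUME"  # hypothetical call to privileged server
    else
        echo "not authorized" >&2; exit 1
    fi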
One issue might be that the ATLAS release remembers where it is
installed so it might remember the read-write path instead of the
read-only one.
Could having the NFS space mapped to users via AFS be a problem?
Don't believe so, but the issue of NFS opening and closing files
for each access might cause some worry.
Could replicate the top level volume three times.
Will check if we're still seeing problems running ATLAS
software. Will report it to unix-admin next time we see it.
5. AOB
None.
Action Items:
-------------
070509 Stephen Split up top-level AFS volume, requests to unix-admin
070502 Stephen Email Gordon about his action item
070509 Done.
070502 Stephen Arrange meeting about ATLAS TAG data on PetaCache
070509 Not done yet.
070502 Wei Check CA certificate update mechanism
070509 Not done yet but believe VDT is the right way.
070321 Gordon Discuss perception of SLAC Tier-2 with external folk.
070404 no info
070411 no info
070509 No info.