Hi, Pete!

Your proposal (d) is indeed very flexible and covers all feasible
use cases, but it is probably too complicated, because it requires developing
some messaging between SRM and the xrootd olb.

For example, this can be pretty inconvenient to implement:

          - when SRM updates a file it must tell xrootd to invalidate that
            entry in its redirector cache (and SRM must update all replicas)

since it requires hacking/wrapping the SRM tools. It also creates a cyclic
dependency: xrootd uses SRM, and SRM uses xrootd.

Another concern is that you'd probably have to use one redirector for all
your rw and ro data pools, and maybe even without a backup instance in
DNS, since those multiple olbs would need to stay in sync.

I wonder if it is enough to leave SRM for ro access only, and use xrootd
alone for rw. This would simplify things a lot, while giving up just a bit.
The opportunity to do ro with SRM would remain as a sort of back door, in
case you really want to do something special (like bulk replication) and
you really know what you are doing.


This is consistent with my very old proposal to use xrootd for
communicating with the underlying MSS, so that you have only one consistent
interface. Unfortunately, the administrative interface has been given the
lowest priority in this project. As a result we still use the
love-it-or-hate-it Pud, and still cannot ask such fundamental questions as:
on which host(s) a file is/should be.

Artem.

On Thu, 3 Mar 2005, Peter Elmer wrote:
>
>   (d) xrootd/olbd (ro or r/w) + SRM backend
>
>       - new files can be added either via the xrootd interface (if r/w) or
>         the SRM one
>          - when an xrootd (configured for r/w) creates a new file it must
>            ask/tell SRM after selecting a data server
>          - when SRM adds a new file to the disk cache (or underlying MSS)
>            xrootd can just discover it as in (a) above
>       - files can be updated via either interface
>          - when xrootd updates a file, it must tell SRM to allow internal
>            replicas to be removed
>          - when SRM updates a file it must tell xrootd to invalidate that
>            entry in its redirector cache (and SRM must update all replicas)
>       - files can be _read_ via xrootd with _no_ communication necessary
>         between xrootd/olbd and SRM if file is on disk
>       - files can be _read_ via SRM with _no_ communication necessary
>         between xrootd/olbd and SRM
>       - files can be _read_ via xrootd with a mssgwcmd-style communication
>         with SRM if file is not currently on disk (and MSS exists)
>       - xrootd/olbd decides when files should be replicated and chooses a
>         server based on load info and free space info from SRM, SRM on the
>         back end arranges for the replication (using whatever mechanism it
>         likes, e.g.  restaging from MSS or disk-to-disk copy)
>       - while (a) can be used for most of the bulk data in HEP, this solution
>         has some advantages for some methods of managing user data, etc.
>
>   Anything I've missed or gotten wrong?
>
>                                    Pete
>
>
> On Wed, Mar 02, 2005 at 12:10:47PM -0800, Andrew Hanushevsky wrote:
> > Hi JY,
> >
> > Hmmm, it works out in practice that checking for the file every time in
> > HPSS significantly slows down the open process. That also would mean that
> > if HPSS is down, the disk cache becomes inaccessible since xrootd
> > can't check the hpss copy. Something we've decided to avoid here for
> > robustness reasons (there is already an option to check during file
> > creation).
> >
> > The real problem here is that the person is using a backdoor to modify
> > the file. Had the user modified the file on the disk cache and let it
> > migrate back down, there would not be a problem. Using backdoors opens
> > a whole range of problems, only one of which the user experienced.
> >
> > I'm not opposed to putting in an option to do the check but I think
> > serious consideration needs to be given whether this is *really* the
> > mode of operation you want or will be happy with.
> >
> > Andy
> >
> > On Wed, 2 Mar 2005, Jean-Yves Nief wrote:
> >
> > > hello all,
> > >
> > >           one of the D0 users accessing files via xrootd encountered the
> > > following issue: after having accessed a file via xrootd (so after it was
> > > staged from the "master" copy stored in HPSS), he modified the master
> > > copy in HPSS and wanted to access the modified file via xrootd. But as
> > > the old version of the file was already in the disk cache, no staging
> > > occurred (which is of course the expected behavior) and he
> > > grabbed the old version, which is not what he wanted. As an
> > > emergency solution, and as it was the first time this happened, I
> > > deleted the old version from the disk cache so he could proceed.
> > > However, I think it would be nice to have some control on the validity
> > > of the cache: one solution would be to add the following test: in case
> > > the file is already in the cache, compare the creation time on the cache
> > > disk (t1) with the last modified time of the file stored in HPSS (t2):
> > > if t1<t2 then restage the file.
> > > it would add only a very small overhead to the mechanism each time a
> > > file is accessed: it just has to issue a "statx" request to the MSS.
> > > or maybe there is a simpler solution.
> > > cheers,
> > > JY
> > >
> > >
>
>
>
> -------------------------------------------------------------------------
> Peter Elmer     E-mail: [log in to unmask]      Phone: +41 (22) 767-4644
> Address: CERN Division PPE, Bat. 32 2C-14, CH-1211 Geneva 23, Switzerland
> -------------------------------------------------------------------------
>