Hi Artem,

We are actively pursuing integration with SRM. As it stands now, if one
wants a quick and dirty approach relative to the SRM, one can use the
srmcopy command to bring files into a local disk cache. While in some
implementations that implies a double copy, that isn't as bad as it seems
since the SRM disk cache need not be large and generally you want to
separate the two anyway. Some implementations of the SRM also bundle in
(some loosely, some strongly) the DRM (Disk Resource Manager), which
becomes more problematic, depending on whether the implementation allows
the cache to be shared with other access providers or not. As you can see,
other than the srmcopy route, there isn't an immediate solution here. Have
you looked into using srmcopy?
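
For illustration, here is a minimal sketch of that quick-and-dirty route:
stage a file out of the SRM into a local cache directory that xrootd
exports. The SURL, the cache path, and the exact srmcopy arguments are
assumptions (they vary by SRM client), so treat the invocation as a
placeholder rather than the real command line.

    #!/usr/bin/env python3
    # Hedged sketch: use srmcopy to pull a file into a local disk cache
    # that xrootd serves.  The SURL, cache path, and srmcopy argument
    # order are assumptions -- check your SRM client's documentation.
    import os, subprocess, sys

    SRM_SURL  = "srm://srm.example.org:8443/star/data/file.root"  # hypothetical
    CACHE_DIR = "/xrootd/cache"                                   # hypothetical

    def stage(surl, cache_dir):
        dest = os.path.join(cache_dir, os.path.basename(surl))
        if os.path.exists(dest):
            return dest                    # already cached, nothing to do
        rc = subprocess.call(["srmcopy", surl, "file://" + dest])
        if rc != 0:
            sys.exit("srmcopy failed with exit code %d" % rc)
        return dest

    if __name__ == "__main__":
        print(stage(SRM_SURL, CACHE_DIR))

Once the file lands in the cache directory, xrootd can serve it like any
other local file; keeping that cache small and cleaning it is a separate
policy decision.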

Andy

On Wed, 14 Sep 2005, Artem Trunov wrote:

> Hi, Andy!
>
> You didn't actually mention xrootd+SRM in this mail. Could you give
> some comments on this? Looking from inside the LHC experiments, it seems
> that without SRM xrootd has no future in LHC, and its prospects dim
> further as sites pass the point of no return in deploying other solutions.
>
> artem.
>
>
>
> On Tue, 13 Sep 2005, Andrew Hanushevsky wrote:
>
> > Hi Pavel,
> >
> >
> > On Tue, 13 Sep 2005, Pavel Jakl wrote:
> > > unusable for us. Many thanks to Andy for recognizing this issue as a
> > > show stopper for us and for providing quick fixes and extensions.
> > You're welcome!
> >
> > > 2) The other problem was with enabling the mss interface (our MSS is
> > > HPSS) and the possibility of having distributed disk dynamically
> > > populated from HPSS files:
> > Yes, Jerome and I talked about this. I'm sorry I forgot to give you our
> > solution to this problem, called hpss_throttle, which limits the number
> > of parallel clients to HPSS. Please download
> > http://www.slac.stanford.edu/~abh/hpss_throttle
> >
> > and take a look at what we do. The program is what inetd launches instead
> > of pftp. The program then makes sure that no more than the allowed number
> > of pftp's are running at any one time (or substitute any other scheduling
> > parameters).
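
To make the idea concrete, here is a rough sketch of such a throttle.
This is NOT the actual hpss_throttle program from the URL above, just an
illustration, and the slot count, lock directory, and pftp path are
assumptions: inetd launches the wrapper, the wrapper grabs one of a fixed
number of slot locks, and only then execs the real pftp daemon on the
inherited socket.

    #!/usr/bin/env python3
    # Hedged sketch of a pftp throttle in the spirit of hpss_throttle.
    # inetd launches this instead of pftp; it holds one of MAX_PFTP slot
    # locks and then execs the real daemon.  Paths and limit are assumed.
    import fcntl, os, sys, time

    MAX_PFTP  = 4                        # assumed limit on parallel pftp's
    LOCK_DIR  = "/var/run/pftp-slots"    # hypothetical
    REAL_PFTP = "/usr/sbin/pftpd"        # hypothetical path to real daemon

    def acquire_slot():
        """Try each slot file; return an fd holding an exclusive lock."""
        os.makedirs(LOCK_DIR, exist_ok=True)
        while True:
            for i in range(MAX_PFTP):
                fd = os.open(os.path.join(LOCK_DIR, "slot%d" % i),
                             os.O_CREAT | os.O_RDWR, 0o600)
                try:
                    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                    return fd            # got a free slot
                except OSError:
                    os.close(fd)         # slot busy, try the next one
            time.sleep(1)                # all slots busy; wait and retry

    if __name__ == "__main__":
        fd = acquire_slot()
        os.set_inheritable(fd, True)     # keep the lock across exec
        # Replace ourselves with the real pftp daemon; the inherited
        # flock holds the slot for the daemon's lifetime.
        os.execv(REAL_PFTP, [REAL_PFTP] + sys.argv[1:])

Because the lock is tied to an inherited file descriptor, the slot frees
itself automatically when the pftp process exits, however it dies.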
> >
> > > After configuring the MPS scripts, I bumped into a problem which
> > > relates to the absent plugin for LFN - PFN conversion:
> > Yes, Jerome and I talked about this as well. The solution was to provide a
> > plugin mechanism where you could put in any mapping algorithm you wanted.
> > There was some philosophical disagreement with this approach from some
> > members of the xrootd collaboration, which caused it not to rise up the
> > priority scale as quickly as one would want. It's still on track to get
> > done, however.
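
As a toy illustration of what such a mapping plugin would encapsulate
(the real xrootd plugin interface is C++ and is not shown here; the
prefixes below are made up), the algorithm can be as simple as a prefix
substitution from the logical namespace to the physical HPSS namespace:

    #!/usr/bin/env python3
    # Hedged illustration of an LFN -> PFN mapping; prefixes are made-up
    # examples, not the actual STAR or SLAC layout.
    LFN_PREFIX = "/star/data"             # logical namespace (assumed)
    PFN_PREFIX = "/hpss/rhic/star/data"   # physical HPSS namespace (assumed)

    def lfn2pfn(lfn: str) -> str:
        """Map a logical file name to the physical name used for staging."""
        if not lfn.startswith(LFN_PREFIX):
            raise ValueError("not a managed logical name: " + lfn)
        return PFN_PREFIX + lfn[len(LFN_PREFIX):]

    print(lfn2pfn("/star/data/2005/run1234/file.root"))
    # -> /hpss/rhic/star/data/2005/run1234/file.root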
> >
> > > Andy, can you please prepare for us the new directive and everything
> > > we concluded at the XROOTD+SRM meeting? Thank you.
> > Yes, see above.
> >
> > > 4) The next problem was the script for measuring the load of data
> > > servers, called XrdOlbMonPerf. I found some bugs and repaired them.
> > >  Changes:
> > >      I repaired a bug related to the network result and added the missing
> > > paging I/O result. I also made some small changes such as the paths to
> > > unix commands, etc.
> > >
> > >      See the attachment.
> > Thank you for the fixes. As soon as I review them, they will be included
> > in the xrootd rpm.
> >
> > >  I still have a problem with stopping this command after the
> > > xrootd server is stopped. For some reason, the command keeps running
> > > and doesn't die when xrootd is killed.
> > >  Any ideas? Thank you
> > Yes, this is a stupidity on Linux's part. The way the program realizes
> > that the olbd has gone away is that its stdout pipe fills up. The latest
> > releases of Linux define a fully buffered pipe, which means that even if
> > the receiving end goes away, you never know it until the pipe fills to
> > capacity. The solution is to have the monitor listen on stdin; when that
> > shows an error (i.e., the olbd went away), it knows right away. This also
> > means launching yet another process, because multithreaded perl is not
> > generally available everywhere, sigh.
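
A minimal sketch of that stdin trick (in Python rather than perl, and not
the actual XrdOlbMonPerf script; the reporting interval is an assumption):

    #!/usr/bin/env python3
    # Hedged sketch: report on stdout periodically, but also watch our
    # own stdin; when the parent (the olbd) dies, stdin hits EOF at once
    # and we exit instead of waiting for the stdout pipe to fill.
    import select, sys, time

    INTERVAL = 30          # assumed reporting interval in seconds

    while True:
        # Wait for either the next reporting tick or activity on stdin.
        ready, _, _ = select.select([sys.stdin], [], [], INTERVAL)
        if ready and sys.stdin.readline() == "":
            break          # EOF: the olbd went away, exit right away
        # ... gather and print the load figures here ...
        print("load report placeholder", flush=True)

select() returns as soon as stdin reaches end-of-file, so the monitor
exits the moment its parent disappears rather than when the pipe fills.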
> >
> > > 5) Monitoring issue:
> > >     we are monitoring the xrootd daemon and the olbd daemon from the
> > > ganglia view (reporting cpu and memory usage via ganglia metrics). See
> > > the attachment.
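
For readers without the attachment, a hedged illustration of the idea:
sample each daemon's cpu and memory with ps and push the figures into
ganglia with the standard gmetric tool. The metric names and the ps-based
sampling are assumptions, not necessarily what the attached script does.

    #!/usr/bin/env python3
    # Hedged sketch: publish per-daemon %cpu and %mem to ganglia via
    # gmetric.  Metric names and the use of `ps` are assumptions.
    import subprocess

    def sample(daemon):
        """Return (%cpu, %mem) for the first process matching `daemon`."""
        out = subprocess.check_output(
            ["ps", "-C", daemon, "-o", "%cpu=,%mem="], text=True)
        cpu, mem = out.split()[:2]
        return cpu, mem

    def push(name, value, units):
        subprocess.check_call(
            ["gmetric", "--name", name, "--value", value,
             "--type", "float", "--units", units])

    for daemon in ("xrootd", "olbd"):
        cpu, mem = sample(daemon)       # raises if the daemon isn't running
        push(daemon + "_cpu", cpu, "%")
        push(daemon + "_mem", mem, "%")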
> > >
> > > Now that the basic functionalities are in place and working, I am
> > > looking forward to the next steps and improvements along the lines of
> > > Xrootd+SRM technology merging and development.
> > So would all of us!
> >
> > Andy
> >
>