	I am personally not aware of Alice working in this
area. Alice knows the plans from STAR well, however, as we
discussed our computing model with them in the past and many
times (it could have been an opportunity for common work). In
that regard, I do not feel compelled to contact them any
further on the issue, but as you mention, all being on the
same page may be a good thing.

	I agree with your statements about the "hacks" and
the need for a full SRM front-end. We (SLAC & LBL/BNL-STAR)
are going for an integrated SRM/DRM solution. The SRM interface
however would allow for other SRM integrations ... Many issues
need to be resolved of course (it is not only a question of
plugging in an API and off we go), including mundane issues
about who manages the space (currently both do), how to handle
AA, policies, dealing with pinning, pre-staging etc ... or even
bringing in or accessing files from remote sites.

	Not sure I fully agree with your reference to catalogs
and with seeing this as a problem, but perhaps I have not fully
understood it either.


Artem Trunov wrote:
> Hi, Andy!
> 
> I agree with you that srmcopy is, for now, enough to claim srm-compatibility. I
> must admit I am not fully familiar with the srm idea and tools, but my
> understanding is that srmcopy talks to the srm server on the receiving end,
> and it gives back a transfer URL for the destination. If one uses dCache, for
> example, then the srm server you run is the one made by dCache, and it knows how
> to transfer files in dCache only. So, to srm-enable a storage solution, i.e.
> xrootd, one has to write an srm server implementation that is able to respond to "foreign"
> srmcopy requests and supply a TURL, and probably xrootd must be able to follow
> some defined transfer protocol (otherwise I don't know how a foreign
> srmcopy will talk to the xrootd server).
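For readers less familiar with SRM, here is a minimal sketch of the flow
described above: the client negotiates with the SRM server fronting the
destination storage, receives a transfer URL (TURL), moves the data over the
protocol named in that TURL, and then commits the request. Every function,
host name and URL below is an invented stand-in, not a real SRM client API.

def srm_prepare_to_put(dest_surl, protocols):
    """Ask the destination SRM to allocate space; returns a request handle."""
    print("SRM request: put", dest_surl, "offering", protocols)
    return "req-0001"                              # hypothetical request id

def srm_get_turl(request_id):
    """The SRM answers with a transfer URL (TURL) in a protocol it supports."""
    return "gsiftp://se.example.org/pool3/file"    # hypothetical TURL

def do_transfer(source, turl):
    """Move the bytes over the protocol named in the TURL; this is the step a
    storage system such as xrootd would have to serve for foreign clients."""
    print("copying", source, "->", turl)

def srm_put_done(request_id):
    """Tell the SRM the transfer finished so it can commit the file."""
    print("transfer committed for", request_id)

def srm_copy(source, dest_surl):
    req = srm_prepare_to_put(dest_surl, protocols=["gsiftp"])
    turl = srm_get_turl(req)
    do_transfer(source, turl)
    srm_put_done(req)

srm_copy("/local/data/file.root", "srm://se.example.org/star/file.root")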
> 
> If you propose that a site uses an srm front end, and then copies files into
> xrootd space as a separate process, this is some sort of solution, but
> I am afraid that it will not be taken seriously... There are many other
> hacks one would have to do in this case. Remember that the srm SE (not
> xrootd) is recorded in various grid catalogs, and hence even jobs will
> have to go directly to the SE, and one has to make another hack to direct jobs
> to xrootd. Then one also has to provide export from srm. Sigh.
> 
> So, it looks like the best option is still to make a real srm front end for
> xrootd...
> 
> But I also understand that Alice is working on something in this area,
> and there is someone else that JY told me about? Are you all on the same page?
> 
> Artem.
> 
> On Wed, 14 Sep 2005, Andrew Hanushevsky wrote:
> 
> 
>>Hi Artem,
>>
>>We are actively pursuing integration with SRM. As it stands now, if one
>>wants to do something quick and dirty relative to the SRM, one can use the
>>srmcopy command to bring files into a local disk cache. While in some
>>implementations that implies a double copy, that isn't as bad as it seems,
>>since the srm disk cache need not be large and generally you want to
>>separate the two anyway. Some implementations of the SRM also bundle in (some
>>loosely, some strongly) the DRM (Disk Resource Manager); that becomes more
>>problematic, depending on whether the implementation allows the cache to
>>be shared with other access providers or not. As you can see, other than
>>the srmcopy route, there isn't an immediate solution here. Have you looked
>>into using srmcopy?
>>
>>Andy
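As a concrete picture of the "double copy" route Andy mentions, here is a
minimal sketch: a file is first pulled into a small SRM-managed disk cache and
then moved into the path the xrootd data servers export. The cache and export
paths are invented for the example, and the srmcopy invocation is only
schematic; the real options depend on the SRM client installed at the site.

import os, shutil, subprocess

SRM_CACHE   = "/srm/cache"            # hypothetical SRM disk cache
XROOTD_PATH = "/data/xrootd/star"     # hypothetical xrootd-exported path

def stage_in(surl, name):
    """Pull a file from a remote SRM into the local cache, then hand it to xrootd."""
    cached = os.path.join(SRM_CACHE, name)
    # First copy: schematic srmcopy call; consult the SRM client's own usage.
    subprocess.run(["srmcopy", surl, "file://" + cached], check=True)
    # Second copy (a cheap move if cache and data path share a filesystem).
    final = os.path.join(XROOTD_PATH, name)
    shutil.move(cached, final)
    return final

stage_in("srm://se.example.org/star/run2005/file.root", "file.root")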
>>
>>On Wed, 14 Sep 2005, Artem Trunov wrote:
>>
>>
>>>Hi, Andy!
>>>
>>>You didn't actually mention xrootd+SRM in this mail. Could you give
>>>some comments on this? Looking from inside the LHC experiments, it seems that
>>>without SRM xrootd has no future in the LHC, and the outlook grows dimmer as
>>>sites pass the point of no return in deploying other solutions.
>>>
>>>artem.
>>>
>>>
>>>
>>>On Tue, 13 Sep 2005, Andrew Hanushevsky wrote:
>>>
>>>
>>>>Hi Pavel,
>>>>
>>>>
>>>>On Tue, 13 Sep 2005, Pavel Jakl wrote:
>>>>
>>>>>unusable for us. Many thanks to Andy for recognizing this issue as a
>>>>>show-stopper for us and providing quick fixes and extensions.
>>>>
>>>>You're welcome!
>>>>
>>>>
>>>>>2) The other problem was enabling the mss interface (our MSS is
>>>>>HPSS) and the possibility of having the distributed disk dynamically
>>>>>populated from HPSS files:
>>>>
>>>>Yes, Jerome and I talked about this. I'm sorry I forgot to give you our
>>>>solution to this problem, called hpss_throttle, which limits the number of
>>>>parallel clients to hpss. Please download
>>>>http://www.slac.stanford.edu/~abh/hpss_throttle
>>>>
>>>>and take a look at what we do. The program is what inetd launches instead
>>>>of pftp. The program then makes sure that no more than the allowed number of
>>>>pftp's are running at any one time (or substitute any other scheduling
>>>>parameters).
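The idea is simply a wrapper that inetd launches in place of pftp and that
refuses to start more than a fixed number of transfers at once. Here is a
minimal sketch of such a throttle, assuming flock-based slot files; the real
hpss_throttle may work quite differently, and the limit, slot directory and
pftp path below are invented for the example.

#!/usr/bin/env python3
import fcntl, os, sys, time

MAX_PFTP = 4                                 # assumed concurrency limit
SLOT_DIR = "/var/run/hpss_throttle"          # hypothetical slot directory
PFTP     = "/usr/local/bin/pftp_client"      # hypothetical pftp path

os.makedirs(SLOT_DIR, exist_ok=True)

def acquire_slot():
    """Grab one of MAX_PFTP slot files; if all are busy, wait and retry."""
    while True:
        for i in range(MAX_PFTP):
            fd = os.open(os.path.join(SLOT_DIR, "slot.%d" % i),
                         os.O_CREAT | os.O_RDWR, 0o644)
            try:
                fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                os.set_inheritable(fd, True)  # keep the lock across exec
                return fd
            except BlockingIOError:
                os.close(fd)                  # slot taken by another pftp
        time.sleep(1)

acquire_slot()
# Replace ourselves with the real pftp; the flock is released automatically
# when that process finally exits and the inherited descriptor is closed.
os.execv(PFTP, [PFTP] + sys.argv[1:])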
>>>>
>>>>
>>>>>After configuring the MPS scripts, I bumped into a problem which
>>>>>relates to the absent plugin for LFN - PFN conversion:
>>>>
>>>>Yes, Jerome and I talked about this as well. The solution was to provide a
>>>>plugin mechanism where you could put in any mapping algorithm you wanted.
>>>>There was some philosophical disagreement with this approach from some
>>>>members of the xrootd collaboration, which caused this not to rise up the
>>>>priority scale as quickly as one would want. It's still on track to get
>>>>done, however.
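To make the idea of such a plugin concrete, here is a minimal sketch of the
kind of LFN -> PFN mapping a site might supply. The namespaces and the mapping
rule (a simple prefix substitution, plus a second mapping toward the HPSS path
used for staging) are invented for the example and are not the STAR scheme.

import os

LOCAL_PREFIX = "/data/xrootd"        # assumed local disk namespace
HPSS_PREFIX  = "/hpss/star"          # assumed HPSS namespace

def lfn2pfn(lfn):
    """Map a logical file name to the physical path on the data server."""
    if not lfn.startswith("/star/"):
        raise ValueError("unknown namespace: " + lfn)
    return os.path.join(LOCAL_PREFIX, lfn.lstrip("/"))

def lfn2mss(lfn):
    """Map the same logical name to the path the MSS (HPSS) staging uses."""
    return os.path.join(HPSS_PREFIX, lfn.lstrip("/"))

print(lfn2pfn("/star/data01/run2005/file.root"))
print(lfn2mss("/star/data01/run2005/file.root"))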
>>>>
>>>>
>>>>>Andy, can you please prepare for us the new directive and everything
>>>>>around it, as we concluded at the XROOTD+SRM meeting? Thank you.
>>>>
>>>>Yes, see above.
>>>>
>>>>
>>>>>4) The next problem was the script for measuring the load of data servers,
>>>>>called XrdOlbMonPerf. I found some bugs and repaired them.
>>>>> Changes:
>>>>>     I repaired a bug related to the network result and added the missing
>>>>>paging I/O result. I also made some small changes such as the paths to unix
>>>>>commands etc.
>>>>>
>>>>>     See the attachment.
>>>>
>>>>Thank you for the fixes. As soon as I review them, they will be included
>>>>in the xrootd rpm.
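For readers who do not have the attachment, the following is only an
illustration of the kind of measurements such a load probe gathers on Linux:
network traffic from /proc/net/dev and paging I/O from /proc/vmstat, sampled
over an interval and reported as rates. It is not the XrdOlbMonPerf script,
and the output format the olbd actually expects is not reproduced here.

import time

def net_bytes():
    """Total bytes received + transmitted over all interfaces."""
    total = 0
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:                 # skip the two header lines
            fields = line.split(":", 1)[1].split()
            total += int(fields[0]) + int(fields[8])   # rx bytes + tx bytes
    return total

def paging_kb():
    """Kilobytes paged in + out since boot (pgpgin/pgpgout are in KB)."""
    kb = 0
    with open("/proc/vmstat") as f:
        for line in f:
            key, val = line.split()
            if key in ("pgpgin", "pgpgout"):
                kb += int(val)
    return kb

INTERVAL = 30                                          # seconds between samples
n0, p0 = net_bytes(), paging_kb()
time.sleep(INTERVAL)
n1, p1 = net_bytes(), paging_kb()
print("net KB/s %.0f  paging KB/s %.0f"
      % ((n1 - n0) / 1024.0 / INTERVAL, (p1 - p0) / float(INTERVAL)))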
>>>>
>>>>
>>>>> I still have a problem with stopping this running command after the
>>>>>xrootd server is stopped. For some reason, the command keeps running
>>>>>and doesn't die when xrootd is killed.
>>>>> Any ideas? Thank you
>>>>
>>>>Yes, this is a stupidity on Linux's part. The way the program
>>>>realizes that the olbd has gone away is that the stdout pipe fills up. The
>>>>latest releases of Linux define a fully buffered pipe, which means that even
>>>>if the receiving end goes away, you never know it until the pipe fills to
>>>>capacity. The solution is to have the monitor listen on stdin, and when
>>>>that shows an error (i.e., the olbd went away) it knows right away. This
>>>>also means launching yet another process, because multithreaded perl is not
>>>>generally available everywhere, sigh.
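A minimal sketch of that stdin trick, in Python rather than perl and with a
thread standing in for the extra process purely to keep the example short: one
part of the monitor blocks on stdin and exits the program as soon as it sees
EOF or an error (i.e., the olbd is gone), while the main loop keeps producing
reports. The report line is a placeholder, not the real format.

import os, sys, threading, time

def watch_stdin():
    """Block on stdin; EOF or an error means the parent (olbd) went away."""
    try:
        data = sys.stdin.buffer.read(1)
    except OSError:
        data = b""
    if not data:
        os._exit(0)                    # stop reporting immediately

threading.Thread(target=watch_stdin, daemon=True).start()

while True:                            # main loop: keep writing load reports
    print("load-report placeholder")   # hypothetical output, not the real format
    sys.stdout.flush()
    time.sleep(30)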
>>>>
>>>>>5) Monitoring issue:
>>>>
>>>>>    We are monitoring the xrootd daemon and olbd daemon from the ganglia
>>>>>view (reporting cpu and memory usage using ganglia metrics). See
>>>>>the attachment.
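As an illustration of how such values can be pushed into ganglia, here is a
small sketch that reads each daemon's cpu and memory percentage with ps and
publishes them with the gmetric command; the metric names and the use of ps
are assumptions for the example, not the setup in the attachment.

import subprocess

def report(daemon):
    """Publish the daemon's cpu and memory percentage as ganglia metrics."""
    out = subprocess.run(["ps", "-C", daemon, "-o", "%cpu=,%mem="],
                         capture_output=True, text=True).stdout.split()
    if len(out) < 2:
        return                          # daemon not running; publish nothing
    cpu, mem = out[0], out[1]
    for name, value in ((daemon + "_cpu", cpu), (daemon + "_mem", mem)):
        subprocess.run(["gmetric", "--name", name, "--value", value,
                        "--type", "float", "--units", "%"], check=True)

for d in ("xrootd", "olbd"):
    report(d)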
>>>>>
>>>>>Now that the basic functionality is in place and working, I am
>>>>>looking forward to the next steps and improvements along the lines of
>>>>>Xrootd+SRM technology merging and development.
>>>>
>>>>So would all of us!
>>>>
>>>>Andy
>>>>
>>>

-- 
              ,,,,,
             ( o o )
          --m---U---m--
              Jerome