Well, nothing prevents you guys from going that way. But this is just to
say that, at least from my perspective, a more productive path would be to
write one or two plugins for xrootd and attach them to a vanilla dCache. Then
it would become just a question of bundling, and we now have a lot of
experience with that kind of story (ROOT, AliEn, AliRoot, Castor, PROOF,
...): we have typically seen what is good to do and what is not. From a
technical point of view, of course.
As for the plugins, there are already some people doing dCache-xrootd
interaction with the mps scripts. The ALICE computing model itself could
steer towards such possibilities. A plugin-based solution would be much more
efficient and self-contained. After all, there are people out there writing
plugins for many things, so why not a serious one for dCache?
By the way, PROOF itself is based on xrootd+some plugins which make the
servers do completely different things with respect to handling a massive
Obviously this choice is not up to me, and you can count on my support
anyway in using the xrootd protocol. It is just a matter of efficiency,
manpower and experience, which we could eventually pool.
Martin Radicke wrote:
> Hi all,
> this is clearly a misbehaviour of the xrootd door of dCache.
> Several problems with the current implementation of dCache's xrootd
> server engine came up recently, which make us dCache people think about
> rewriting the engine (mostly from scratch).
> The most important goal of the new engine will be to be as compatible as
> possible with XrdClient (thanks Fabrizio for the help you offered, it
> is and will be very useful for improving compatibility). The engine will
> be written in Java and be mostly decoupled from dCache. It will run
> embedded in dCache doors as well as dCache pools, communicating with
> the dCache backend via callbacks.
> About PROOF: since some ATLAS guys (Munich and Brookhaven) are trying to
> integrate dCache with PROOF, this topic appeared on my radar and I will
> certainly look into it.
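
The decoupling Martin describes (one protocol engine, embedded in both doors and pools, talking to the backend only through callbacks) could be sketched roughly like this. A toy Python illustration only, not actual dCache code; every class and callback name here is made up:

```python
# Toy illustration of a protocol engine decoupled from its backend via
# callbacks. Nothing here is real dCache or xrootd code; all names invented.

class XrootdEngine:
    """Speaks the protocol; knows nothing about the storage backend."""

    def __init__(self, on_open, on_stat):
        # Backend behaviour is injected at construction time, so the
        # same engine can be embedded in a door or in a pool.
        self.on_open = on_open
        self.on_stat = on_stat

    def handle(self, request):
        if request["type"] == "open":
            return self.on_open(request["path"])
        if request["type"] == "stat":
            return self.on_stat(request["path"])
        return {"error": "unsupported request type"}

# A door might answer opens with a redirect to a pool, while a pool would
# serve the data itself: same engine, different callbacks.
door = XrootdEngine(on_open=lambda path: {"redirect": "some-pool-node"},
                    on_stat=lambda path: {"size": 0, "flags": 0})
```

The point of the design, as I read it, is exactly this injection: the protocol code never imports anything dCache-specific, so it can be reused unchanged in every component.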
> Pablo Fernandez wrote:
>> I've seen different behaviour between the dCache xrootd door and the
>> normal xrootd daemon, and before raising a ticket with dCache I wanted
>> to ask you about what is normal and what is not.
>> I've used an example PROOF job that tries to access a set of files.
>> First the files were stored in dCache (and accessible via xrdcp
>> commands), and I got errors accessing the files with the PROOF job.
>> Afterwards I moved the files to the olbd storage instead, and that
>> did not return errors accessing the files; it worked.
>> The error returned by root for every file is:
>> Srv err: Stat request requires open file.
>> 01:42:23 3789 Mst-0 | Error in <TDSetElement::Lookup>: couldn't
>> (as you may have guessed, grid017 is the dCache xrootd door, and I'm
>> sure it works because it can copy back that same file with xrdcp)
>> And the error returned by the dCache xrootd door is (again for each file):
>> LogicalStream(localPort=1094 SID=256) got new request from dispatcher
>> Xrootd-Error-Response: ErrorNr=3004 ErrorMsg=Stat request requires
>> open file.
>> Xrootd-Response-Thread: sending response
>> So, the error means that a "stat" operation was sent before opening
>> the file. If the same access attempt succeeds on a normal xrootd
>> daemon and fails on dCache, what do you think? Is the xrootd daemon
>> too permissive, or dCache too restrictive?
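
For what it's worth, the two behaviours Pablo contrasts can be pictured with a toy sketch (Python, purely illustrative; this is not xrootd or dCache source code; the 3004 error number and message are taken from the log above, while the path-lookup variant and its error number are my assumptions):

```python
# Toy model of the two stat behaviours described in the thread;
# not real server code. NAMESPACE and OPEN_FILES are made up.

NAMESPACE = {"/store/file1.root": {"size": 1024}}  # invented catalogue
OPEN_FILES = {}                                    # handle -> path

def stat_restrictive(handle):
    """dCache-door-like: stat only answers for an already-open file."""
    if handle not in OPEN_FILES:
        # 3004 / "Stat request requires open file." is exactly what the
        # dCache door logged above.
        return (3004, "Stat request requires open file.")
    return (0, NAMESPACE[OPEN_FILES[handle]])

def stat_permissive(path):
    """Vanilla-xrootd-like: stat resolves the path, no prior open needed."""
    if path not in NAMESPACE:
        return (3011, "file not found")  # error number assumed, not from log
    return (0, NAMESPACE[path])

# A PROOF lookup that stats before opening fails against the first
# behaviour and succeeds against the second:
err, msg = stat_restrictive(handle=42)
ok, info = stat_permissive("/store/file1.root")
```

So the short answer to Pablo's question would be: a client that stats by path before opening is legal against vanilla xrootd, and the dCache door is the more restrictive of the two.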