Hi Gregory,

Actually, you will be able to do this relatively easily in the next release,
since you will be able to run multiple xrootd servers on a single machine
(you can do it now, but it's a real pain to set up -- though Jean-Yves at
IN2P3 does this somewhat regularly).
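For what it's worth, the multi-server workaround amounts to starting one
xrootd per filesystem, each with its own config file and port (something
like `xrootd -c gpfs1.cfg`). The directive names, ports, and paths below are
illustrative only -- check them against the configuration reference for your
release:

```
# gpfs1.cfg -- first instance, serving the first gpfs filesystem
xrd.port 1094
oss.localroot /gpfs1

# gpfs2.cfg -- second instance, on its own port
xrd.port 1095
oss.localroot /gpfs2
```

Clients (or the redirector) then have to be pointed at the right host:port
pair, which is part of what makes this painful to set up by hand today.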


----- Original Message ----- 
From: "Gregory Schott" <[log in to unmask]>
To: "Peter Elmer" <[log in to unmask]>
Cc: "Andrew Hanushevsky" <[log in to unmask]>; "Stephen J. Gowdy" 
<[log in to unmask]>; "Langston, Matthew David" 
<[log in to unmask]>; "xrootd-l" <[log in to unmask]>
Sent: Thursday, March 24, 2005 9:26 AM
Subject: Re: How do we export multiple directories from

> Hello Peter,
>>  We just went through some of these things today at CNAF WRT gpfs (I
>> recognize this is what you are using here). The solutions are:
>>  a) use the cache filesystem to tie the three gpfs filesystems together
>>     into one "namespace" to export (This is what is done at SLAC and most
>>     of the other sites that have more than one filesystem/server.) See:
>>     (I _really_ need to provide a simple example of this.)
> Yes, it would be nice to have an example. I'll try to use mps on Tuesday.
>>  b) Run one xrootd per gpfs filesystem (since they are o(8TB) each, this
>>     probably isn't a disaster.)
> Unfortunately I have only 2 GPFS fileservers and 3 disks, so I cannot use
> this as a solution without losing a disk.
>>  The fundamental issue here is that gpfs is trying to do one of the
>> things that xrootd is trying to do, namely load balancing across the
>> fileservers. They aren't incompatible, but the part that gpfs doesn't do
>> (allowing the separate filesystems to be treated as a single logical
>> file space) has to be dealt with in some way. The choices are really
>> (a) or (b) above. (Or _really_ ugly solutions, like the current one
>> being used by some of the LHC experiments, where the location of the
>> file on a particular filesystem is managed by an external catalog....)
> -- Gregory
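
P.S. Since Peter notes above that a simple example of option (a) is
overdue: a cache-filesystem setup that ties the three gpfs filesystems into
one exported namespace would look roughly like the fragment below. I am
writing the directive names and paths from memory, so treat this as a
sketch and check the exact syntax against the OSS reference for your
release:

```
# Tie the three gpfs filesystems into one logical namespace.
# Cache directory paths are illustrative.
oss.cache public /gpfs1/cache
oss.cache public /gpfs2/cache
oss.cache public /gpfs3/cache
```

New files created under the exported path are then physically placed on one
of the cache partitions, with a symlink in the logical name space pointing
at the actual location -- which is how SLAC and the other multi-filesystem
sites present a single namespace per server.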