Hi Patrick,

OK. It works out that we'll likely need the oss caching mechanism in order 
to satisfy the new space token requirement (the oss provides a mechanism 
to use a space token in a meaningful way). So, it looks like this one has 
moved up in priority.
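
Roughly speaking (this is only a sketch; the details are still to be 
worked out), the idea would be to tie each space token to a named cache 
group, along these lines:

# hypothetical group names and partition paths
oss.cache ATLASDATADISK /data1/datadisk
oss.cache ATLASDATADISK /data2/datadisk
oss.cache ATLASUSERDISK /data1/userdisk

so that a request carrying a given space token gets steered to the 
partitions assigned to the matching group.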

Andy

P.S. I'll take you up on the offer :-)

On Mon, 3 Mar 2008, Patrick McGuigan wrote:

> Hi Andy,
>
> I'll let you know how the r/o migration attempt works when I try it later 
> tonight.
>
> I think it would be useful to remove the pathname limit under caching, 
> although I am biased ;)  I expect that ATLAS Tier3 sites will take a long 
> look at Scalla because of the integration with root, and the more flexible 
> the software is, the more likely it will fit the perceived need.  I am about 
> to help set up a Tier3 site where the existing hardware provides three disks per 
> data server and would be willing to test implementations there.
>
> Patrick
>
> Andrew Hanushevsky wrote:
>> Hi Patrick,
>> 
>> Yes, you can do what you want to do. The most generic way is to export the 
>> paths as r/o (read/only). See the xrootd.export directive. You can do what 
>> you want, though we have never tried it. That is, having a server export 
>> a read/only path that you essentially want to duplicate on read/write 
>> server(s). It probably will work. If not, there are a couple of other 
>> things you can do to force it to work. BTW, to keep a single config file, 
>> simply bracket the exceptional export in an if/fi construct referencing 
>> that host. For instance,
>> 
>> if <problem host>
>> xrootd.export /thepath r/o
>> else
>> xrootd.export /thepath
>> fi
>> 
>> I understand your concern about granularity. That problem seems to plague 
>> many LVMs. The newest ones (e.g., zfs) try to address that issue. So, does 
>> that mean you think it's worth spending time removing the 255 path limit?
>> 
>> Andy
>> 
>> 
>> On Mon, 3 Mar 2008, Patrick McGuigan wrote:
>> 
>>> Hi Andy,
>>> 
>>> Is there a way to enforce a read-only rule for particular data servers? 
>>> If this is possible, I can ensure that newly written data avoids the 
>>> systems that need to be "LVM'ed", while allowing the existing data to be 
>>> read.  I am curious whether I can replicate data under this scenario using 
>>> xrdcp, or whether I will still have to take the read-only systems off-line 
>>> to move the data.
>>> 
>>> I am concerned about the granularity of recovery actions in our systems 
>>> under LVM, but I need to support larger pathnames now.  The pathnames are 
>>> being driven by the physics users and the use of metadata in the path 
>>> components.
>>> 
>>> Patrick
>>> 
>>> Andrew Hanushevsky wrote:
>>>> Hi Patrick,
>>>> 
>>>> You are quite right that the design of the cache does impose limits on 
>>>> file names. The oss was designed almost 10 years ago, when there were no 
>>>> integrated LVMs, few of those available worked really well, and users kept 
>>>> path names to less than 200 characters. Over the years, as LVMs became 
>>>> common, the oss turned into the "poor man's LVM". In general, we don't 
>>>> recommend using it if you have ready access to a suitable LVM. While, 
>>>> yes, you do give up some features (like fine-grained recoverability and 
>>>> application-directed partition selection), the other limitations may be 
>>>> even more annoying. We'll make sure that this restriction is prominently 
>>>> mentioned in the manual.
>>>> 
>>>> The path you've chosen is about the only one that will work (i.e., 
>>>> copying off the data and creating a single filesystem using an LVM).
>>>> 
>>>> Now, we do have some ideas on how to remove the pathname length 
>>>> restriction but wonder if it's really worth the trouble of doing it, 
>>>> given that LVMs provide practically the same basic features. Any 
>>>> thoughts?
>>>> 
>>>> Andy
>>>> 
>>>> P.S. What's the driving force for very long path names at your site?
>>>> 
>>>> 
>>>> On Sat, 1 Mar 2008, Patrick McGuigan wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> Some of our data servers have two disks and I am using the oss.cache 
>>>>> directive to use both disks to support a single namespace.  However, it 
>>>>> looks like the users of our applications have already run into a 
>>>>> problem. All of the files are stored in one directory (for a single 
>>>>> cache) and the filename is the full namespace path with each "/" replaced 
>>>>> by "%".  Our problem arises from the fact that the full namespace path is 
>>>>> now limited to the leafname length of the filesystem (255 characters) 
>>>>> when writing to the cache directory.
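>>>>>
>>>>> To make the problem concrete (the group name and paths here are 
>>>>> hypothetical), with a two-disk cache configured as
>>>>>
>>>>> # two partitions serving one (hypothetical) cache group
>>>>> oss.cache public /disk1/cache
>>>>> oss.cache public /disk2/cache
>>>>>
>>>>> a file with logical path /atlas/mc08/evgen/EVNT.012345._00001.pool.root 
>>>>> is stored in the selected cache directory under the single leafname 
>>>>> %atlas%mc08%evgen%EVNT.012345._00001.pool.root, so the entire logical 
>>>>> path, not just its final component, must fit in 255 characters.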
>>>>> 
>>>>> I see a couple of ways to mitigate the problem: removing one disk, or 
>>>>> using an LVM to present the disks as one drive to the OS.  I am curious 
>>>>> if there are other alternatives.
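>>>>>
>>>>> For the LVM route, I assume the standard Linux LVM2 recipe would apply, 
>>>>> something like this (device and volume names hypothetical):
>>>>>
>>>>> # combine both disks into a single volume group and filesystem
>>>>> pvcreate /dev/sdb1 /dev/sdc1
>>>>> vgcreate xrd_vg /dev/sdb1 /dev/sdc1
>>>>> lvcreate -l 100%FREE -n xrd_lv xrd_vg
>>>>> mkfs.ext3 /dev/xrd_vg/xrd_lv
>>>>> mount /dev/xrd_vg/xrd_lv /data
>>>>>
>>>>> after which the data server exports a single partition and no oss cache 
>>>>> is needed on that node.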
>>>>> 
>>>>> If I have to move to one disk, I would like to migrate the data in the 
>>>>> existing caches to other data servers while I rework the existing 
>>>>> system. What is the best way to migrate this data?  I am planning on 
>>>>> taking the "problem" data server off-line and using xrdcp to move the 
>>>>> data to the other servers.
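>>>>>
>>>>> Concretely, I have something like this in mind (host and file names 
>>>>> hypothetical):
>>>>>
>>>>> # read each file from the detached server and write it back through the
>>>>> # redirector, which places it on one of the remaining servers
>>>>> xrdcp root://server3.example.org:1094//atlas/mc08/file0001.root \
>>>>>       root://redirector.example.org:1094//atlas/mc08/file0001.root
>>>>>
>>>>> looped over the list of files held on the problem server.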
>>>>> 
>>>>> Regards,
>>>>> 
>>>>> Patrick
>>>>> 
>>> 
>