Hi Sam,

Fair enough :-) I will add that to the documentation. Do you think adding 
a mention of the inplace export option would further clarify this?
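
For instance, the docs could show something along these lines. This is a
hedged sketch, not a tested configuration - the space name and paths are
illustrative, taken from the examples further down in this thread:

```
# Sketch: with the cinfo files confined to a distinguished path that is
# exported "inplace", no separate "meta" space is needed.
oss.space data /data1/xcache
oss.space data /data2/xcache
all.export /a/b/cinfo/ inplace
```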

Andy

On Fri, 15 Nov 2019, Sam Skipsey wrote:

> hi Andrew,
>
> with respect, I think you're slightly overegging the pudding in the
> opposite direction.
>
> The issue I had understanding the documentation is that the oss.spaces
> configuration description never says anything about the "path" needing
> to be writable on the actual system filesystem (outside of the spaces
> directories themselves).
> This is easily fixed by adding something like:
>
> "Even if you are using oss.spaces, the xrootd server will still need
> write access into the system filesystem in the paths that are accessed
> via the server. (These are used as mappings from the filesystem
> namespace into the spaces themselves.) As a result, you may need to
> set oss.localroot, depending on the paths exported, and your system
> configuration."
>
> to the oss.spaces entry.
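
Yes - and perhaps an illustrative fragment right next to that paragraph,
something like the following (paths hypothetical, borrowed from the
examples in this thread):

```
# localroot is the namespace: it holds one symlink per cached file,
# pointing into whichever space partition the file actually landed in.
oss.localroot /xcache-root

# the real storage partitions, aggregated under one space name
oss.space public /storage0
oss.space public /storage1
```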
>
> Sam
>
> On Thu, 14 Nov 2019 at 23:19, Andrew Hanushevsky
> <[log in to unmask]> wrote:
>>
>> Hi Sam,
>>
>> I suppose it isn't explicitly documented because we didn't think the
>> implementation mattered at the level of what the config directives do.
>>
>> a) You create arbitrary partitions and provide a space name for each. So,
>> you have a list of multiple oss.space directives.
>>
>> b) After that you don't have to do much of anything. In fact, technically,
>> you don't need localroot as it defaults to a simple '/' (I know Matevz
>> said you need one, but that was only because of his particular space
>> arrangement - see below).
>>
>> The real issue here is that you had a bunch of "data" spaces but no
>> "meta" space. It would appear that you wanted anything that wasn't "data"
>> to default to "localroot". There is no general way to do that when
>> the allocator specifies a named space. You can only do it this way if you
>> can differentiate spaces by path.
>>
>> So, if the cinfo files always go into a distinguished path like
>> /a/b/cinfo/ then you can force files to be placed directly into that
>> directory (i.e. ignore the space name) by using the "inplace" option in the
>> export directive. For instance,
>>
>> all.export /a/b/cinfo/ inplace
>>
>> Then you would not need to have a "meta" space. Since that is
>> relatively difficult to document, we live with the extra symlinks as they
>> pose little overhead.
>>
>> Back to documentation, I do have a set of slides that describes all of
>> this. I can create a pdf file and link to it from the documentation for
>> those who want the nitty-gritty implementation details.
>>
>> Andy
>>
>>   On Thu, 14 Nov 2019, Sam Skipsey wrote:
>>
>>> Ah, okay, so when you're using spaces, the localroot is a "fake" (but
>>> real filesystem directory) which just symlinks into the spaces, as a
>>> sort of "lookup table" for which space to actually find a given file?
>>> That's interesting. (And this is definitely not explained *at all* in
>>> the documentation, so that needs fixing. The oss.spaces stuff just says
>>> that it is a means of aggregating storage spaces and doesn't say that
>>> you need a localroot too, and the oss.localroot stuff doesn't mention
>>> how it relates to spaces at all.)
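
Exactly right. As a rough illustration (a sketch, not xrootd code; the
helper name and paths are hypothetical), resolving where a cached file
actually lives amounts to following that symlink:

```python
import os

def physical_location(localroot: str, lfn: str) -> str:
    """Return where the data for logical file name `lfn` really lives."""
    entry = os.path.join(localroot, lfn.lstrip("/"))
    if os.path.islink(entry):
        # e.g. /xcache-root/store/f.root -> /data3/xcache/public/store/f.root
        return os.readlink(entry)
    return entry  # no space involved: data sits directly under localroot
```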
>>>
>>> On Thu, 14 Nov 2019 at 21:13, Matevž Tadel <[log in to unmask]> wrote:
>>>>
>>>>
>>>>
>>>> On 2019-11-14 12:39, Sam Skipsey wrote:
>>>>> On Thu, 14 Nov 2019 at 20:27, Matevž Tadel <[log in to unmask]> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> You are missing oss.localroot ... this always has to be defined. That's
>>>>>> where symlinks to the various oss.space instances go.
>>>>>>
>>>>>
>>>>> ...how does this work (and why isn't it described in any detail how
>>>>> this applies to oss.spaces in the documentation?)
>>>>
>>>> I think Andy should explain this (and fix the docs) ... as it might be
>>>> that my mental model is incomplete.
>>>>
>>>>>> Looking at your config, you don't seem to need oss.space and pfc.spaces
>>>>>> at all, you just need oss.localroot /storage0/
>>>>>>
>>>>>
>>>>> No, we have /storage0 through /storage4.
>>>>>
>>>>>> It seems the disks are aggregated in some other ways, not by being added
>>>>>> to any space.
>>>>>>
>>>>>
>>>>> That's... partly true. But the /storage* filesystem locations are all
>>>>> distinct, and associated with different devices.
>>>>>
>>>>>> public is the default space used by xcache, for both data and metadata
>>>>>> (so what you specify are actually the defaults).
>>>>>>
>>>>>
>>>>> I know, but since it wasn't working, I thought I'd put it in
>>>>> explicitly just in case.
>>>>>
>>>>>> xcache allows you to separate spaces for data and metadata, often it is
>>>>>> useful to have metadata and localroot on ssd, or at least on a separate
>>>>>> disk / controller.
>>>>>>
>>>>>
>>>>> Indeed... as noted, the problem is that there's no documentation about
>>>>> how to associate a space with the localroot, as the documentation
>>>>> doesn't talk about spaces in this context.
>>>>
>>>> All spaces are associated with the localroot implicitly, I believe ...
>>>> localroot just holds symlinks to instantiated files in the various spaces.
>>>> You can think of localroot as the namespace representation ... with the
>>>> actual data being stored in the other spaces.
>>>>
>>>> Now, the question that arises for me is how you associate parts of the
>>>> namespace (exports) with an oss.space -- and I believe this is also what
>>>> confuses you.
>>>>
>>>> In xcache we do it explicitly by writing data files to the "data" space
>>>> and cinfo files to the "meta" space. This way they can land on different
>>>> storage types and can be kept separate, as cinfo files get accessed often
>>>> during the purge cycle.
>>>>
>>>>>> See the relevant lines from a config we use at UCSD below.
>>>>>>
>>>>>> It turns out that for a cache it is better to use raw disks than raid5/6
>>>>>> or zfs, as you get independent streams going to specific disks for each
>>>>>> file being opened, both on input and output. Data loss due to a lost
>>>>>> disk is not a problem for caches.
>>>>>>
>>>>>> I hope this is clearer now :)
>>>>>>
>>>>>
>>>>> Not really: I still need to know how you associate the localroot with a space.
>>>>
>>>> See above + I hope Andy can further clarify it for you.
>>>>
>>>>>> Matevz
>>>>>>
>>>>>> oss.localroot /xcache-root  # On SSD
>>>>>>
>>>>>> oss.space data /data1/xcache # Spinning disks (we use xfs)
>>>>>> oss.space data /data2/xcache
>>>>>> oss.space data /data3/xcache
>>>>>> oss.space data /data4/xcache
>>>>>> oss.space data /data5/xcache
>>>>>> oss.space data /data6/xcache
>>>>>> oss.space data /data7/xcache
>>>>>> oss.space data /data8/xcache
>>>>>> oss.space data /data9/xcache
>>>>>> oss.space data /data10/xcache
>>>>>> oss.space data /data11/xcache
>>>>>> oss.space data /data12/xcache
>>>>>>
>>>>>> oss.space meta /xcache-meta # On SSD
>>>>>>
>>>>>> # Use space "data" for data, "meta" for meta-data/cinfo files
>>>>>> pfc.spaces data meta
>>>>>
>>>>> Yes. So... is /xcache-root a real path? Is /xcache-meta a real path?
>>>>
>>>> Yes, both are real paths you can write to :)
>>>>
>>>>> This doesn't help me with what I should point oss.localroot at.
>>>>
>>>> To a directory, preferably on an SSD. And the same for oss.space meta.
>>>> localroot will only hold symlinks, so it need not be very large. meta
>>>> will only hold cinfo files, so it comes to something like 1 kB per file
>>>> plus 1 bit per block of the data files, also rather small.
>>>>
>>>> E.g., for blocksize of 512 kB:
>>>>
>>>> [1311] root@xcache-00 /# du -sch xcache-root xcache-meta /data{1..12}
>>>> 3.8M    xcache-root
>>>> 31M     xcache-meta
>>>> 1.8T    /data1
>>>> 1.8T    /data2
>>>> 1.8T    /data3
>>>> 1.8T    /data4
>>>> 1.8T    /data5
>>>> 1.8T    /data6
>>>> 1.8T    /data7
>>>> 1.8T    /data8
>>>> 1.8T    /data9
>>>> 1.8T    /data10
>>>> 1.8T    /data11
>>>> 1.8T    /data12
>>>> 22T     total
>>>>
>>>> Matevz
>>>>
>>>>> Sam
>>>>>
>>>>>>
>>>>>> On 2019-11-14 11:50, Sam Skipsey wrote:
>>>>>>> Hello,
>>>>>>> So, I've tried various interpretations of the documentation, but I'm
>>>>>>> obviously missing something.
>>>>>>>
>>>>>>> I'm trying to configure an Xrootd Proxy Cache, backed by storage
>>>>>>> aggregated into an oss.space.
>>>>>>>
>>>>>>> The cache is configured like:
>>>>>>>
>>>>>>>
>>>>>>> oss.space public /storage0*
>>>>>>> #oss.space public default /
>>>>>>>
>>>>>>> ofs.osslib libXrdPss.so
>>>>>>> pss.cachelib libXrdFileCache.so
>>>>>>> pfc.ram 16g
>>>>>>> all.trace all
>>>>>>> all.log all
>>>>>>> pfs.diskusage 0.90 0.95
>>>>>>>
>>>>>>> all.export /xroot:/ outplace
>>>>>>> all.export /root:/ outplace
>>>>>>>
>>>>>>> all.export / outplace
>>>>>>>
>>>>>>> pss.origin svr01.beowulf.cluster:1094
>>>>>>> xrd.allow host *.beowulf.cluster
>>>>>>>
>>>>>>> pfc.blocksize 256k
>>>>>>>
>>>>>>> pfc.spaces public public
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> where the /storage0* spaces are all writable and owned by the xrootd
>>>>>>> user (and when we start up the server, it creates "public" directories
>>>>>>> in each of them).
>>>>>>>
>>>>>>> However, whenever I try to access a file through the proxy, it logs
>>>>>>> that it couldn't write the file to the disk (specifying its path as if
>>>>>>> it was going to write it to the storage with no prefix - that is,
>>>>>>> outside of the specified "public" space, in the literal root
>>>>>>> filesystem).
>>>>>>> If this was a non-spaces based system, I'd specify oss.localroot, but
>>>>>>> that doesn't seem to apply to a space?
>>>>>>> I tried adding the outplace directives to the all.exports, but this
>>>>>>> hasn't changed anything.
>>>>>>>
>>>>>>> What am I missing? None of the example configurations I can see with
>>>>>>> oss.spaces seem to need any additional configuration directives...
>>>>>>>
>>>>>>> Sam
>>>>>>>
>>>>>>
>>>
>>
>

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the XROOTD-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1