Hi Andrew,
sorry for the late reply; I couldn't test it earlier.
The issue still seems to be present in v4.4.0 [1]: it gives the same result as v4.3.0 [2].
Cheers,
Max
[1]# echo -ne "query config version\nspaceinfo /\nexit\n" |/usr/bin/xrdfs 127.0.0.1:1094
[127.0.0.1:1094] / > query config version
v4.4.0
[127.0.0.1:1094] / > spaceinfo /
Path: /
Total: 28627434496589824
Free: 16285336187177248
Used: 882392942
Largest free chunk: 1017833511698578
[2]# echo -ne "query config version\nspaceinfo /\nexit\n" |/usr/bin/xrdfs f01-101-136-e:1094
[f01-101-136-e:1094] / > query config version
v4.3.0
[f01-101-136-e:1094] / > spaceinfo /
Path: /
Total: 28627434496589824
Free: 16286951590068224
Used: 183845045898
Largest free chunk: 1017934474379264
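For what it's worth, the reported Total is still exactly the 16-fold value. A quick sanity check of the arithmetic (plain shell; the 1747279937536 kB figure is the df size from my original mail below):

```shell
# Filesystem size as reported by df (in 1K-blocks) and the number of
# oss.space directories xrootd expanded the wildcard into.
fs_kb=1747279937536
dirs=16

# Convert to bytes, then multiply by the directory count: this reproduces
# the Total that `xrdfs spaceinfo` reports.
fs_bytes=$(( fs_kb * 1024 ))
echo "actual filesystem size: $fs_bytes"            # 1789214656036864
echo "reported by spaceinfo:  $(( fs_bytes * dirs ))"  # 28627434496589824
```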
> On 30.08.2016 at 21:50, Andrew Hanushevsky <[log in to unmask]> wrote:
>
> Hi Max,
>
> I believe this has been fixed in 4.4.0 -- I don't know why a note was not included in the release notes, but documentation of things like this does sometimes fall through the cracks.
>
> Andy
>
> On Wed, 24 Aug 2016, Fischer, Max (SCC) wrote:
>
>> Hi all,
>>
>> We run a cluster of xrootd servers that serve data from multiple directories, all of which reside on *the same* filesystem. When querying the available (free, total, etc.) space, we get the filesystem volume *multiplied by the number of directories*.
>>
>> Basically, we've got this situation:
>> o A single filesystem mounted at `/export/gka6201` [1].
>> o Inside, we have multiple sub-directories `/export/gka6201/xrootd/data-00` to `/export/gka6201/xrootd/data-15`, i.e. 16 directories.
>> o Xrootd server is configured to use them via `oss.space public /export/gka6201/xrootd/data*`.
>> o Xrootd expands this to the actual directories upon startup [2].
>> o When querying the space using `xrdfs spaceinfo` [3], we get 28627434496589824 B - that's precisely 16 times the actual volume of 1747279937536 kB.
>>
>> I'm fairly sure it's not an artifact of serving the filesystem from multiple servers: even with just one server active, the factor is the same.
>> The docs [4] mention using `oss.space` specifically for different directories on the same partition. Do I have to make additional adjustments to the configuration for such a setup? Is this potentially a bug?
>> Xrootd is v4.3.0, but I didn't find any such issue in the v4.4 release notes or bug tracker.
>>
>> Cheers,
>> Max
>>
>> [1]$ df -h
>> Filesystem 1K-blocks Used Available Use% Mounted on
>> ...
>> /dev/gka6201 1747279937536 500460019712 1246819917824 29% /export/gka6201
>>
>> [2]$ cat /var/log/xrootd/logs/server/xrdlog
>> Config effective /etc/xrootd/server/xrootd.cf oss configuration:
>> ...
>> oss.space public /export/gka6201/xrootd/data-06
>> oss.space public /export/gka6201/xrootd/data-09
>> oss.space public /export/gka6201/xrootd/data-11
>> oss.space public /export/gka6201/xrootd/data-14
>> oss.space public /export/gka6201/xrootd/data-08
>> oss.space public /export/gka6201/xrootd/data-13
>> oss.space public /export/gka6201/xrootd/data-15
>> oss.space public /export/gka6201/xrootd/data-01
>> oss.space public /export/gka6201/xrootd/data-00
>> oss.space public /export/gka6201/xrootd/data-03
>> oss.space public /export/gka6201/xrootd/data-02
>> oss.space public /export/gka6201/xrootd/data-05
>> oss.space public /export/gka6201/xrootd/data-10
>> oss.space public /export/gka6201/xrootd/data-04
>> oss.space public /export/gka6201/xrootd/data-07
>> oss.space public /export/gka6201/xrootd/data-12
>> ...
>>
>> [3] $ xrdfs localhost:1094 spaceinfo / | grep Total
>> Total: 28627434496589824
>>
>> [4] http://xrootd.org/doc/dev41/ofs_config.htm#_Toc401930733
>> ...
>> 2) File systems need not be physical partitions. When different directories on the same physical partition are specified, they are treated as different logical partitions from a space management viewpoint. This allows you to create arbitrary views of available space (e.g., by SRM static space token).
>> ########################################################################
>> Use REPLY-ALL to reply to list
>>
>> To unsubscribe from the XROOTD-L list, click the following link:
>> https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1