On 10/11/2013 11:08 AM, Fabrizio Furano wrote:
> Hi Adrian,
Hi Fabrizio!

>  ... hadn't you exported the same disks in both servers at some
> point in time? Maybe Andy is referring to that.
this is only one server, and the commands are run directly on it.
It is true that the naming scheme we use is confusing :) (the servers are
named storage0{1..} and the data partitions are named the same), but in this
case there is only one server and the xrd commands are directed to localhost.

Thanks!!
Adrian


> 
> Fabrizio
> 
> On 10/11/2013 08:20 AM, Adrian Sevcenco wrote:
>> On 10/11/2013 12:57 AM, Andrew Hanushevsky wrote:
>>> Hi Adrian,
>> Hi!
>>
>>> Looks to me that the double space problem comes from the fact that
>>> /storage01 has been included twice in the configuration. Additionally,
>> in the configuration I have nothing included twice ..
>>
>>> it would appear that /storage02 and /storage03 have not been included in
>>> the configuration. Could you send me your config file? I suspect that
>>> this is the problem here.
>> all config files are attached .. xrootd.xrootd.cf is (re)created based on
>> the system.cnf file
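>>
>> For reference, the layout I would expect the generated xrootd.xrootd.cf to
>> end up with is one oss.space line per data partition, roughly like the
>> sketch below (the paths are only illustrative; the real values come from
>> system.cnf):
>>
>>   # each data partition registered exactly once in the "public" space group
>>   all.export /
>>   oss.space public /storage01/data
>>   oss.space public /storage02/data
>>   oss.space public /storage03/data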
>>
>> Thanks for helping me with this!
>> Adrian
>>
>>>
>>> Andy
>>>
>>> -----Original Message----- From: Adrian Sevcenco
>>> Sent: Thursday, October 10, 2013 12:04 AM
>>> To: Andrew Hanushevsky
>>> Cc: [log in to unmask]
>>> Subject: Re: xrootd :: server 3.3.2 bug :: double size reporting
>>>
>>> On 10/10/2013 08:52 AM, Adrian Sevcenco wrote:
>>>> On 10/10/2013 03:26 AM, Andrew Hanushevsky wrote:
>>>>> Hi Adrian,
>>>> Hi!
>>>>
>>>>> Could you try to use xrdfs (the new client-based xrd replacement) to
>>>>> see what you get there?
>>>> it reports double space also:
>>>> aliprod@storage03: ~ $ xrdfs localhost statvfs /
>>>> Path:                             /
>>>> Nodes with RW space:              1
>>>> Size of RW space (MB):            7978
>>> this does not seem very informative .. it reports only ~8 GB of space!
>>>
>>> aliprod@storage03: ~ $ xrdfs localhost query space /
>>> oss.cgroup=public&oss.space=86636386320384&oss.free=11850958060&oss.maxf=2089872388&oss.used=10885940771&oss.quota=-1
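>>>
>>> (A quick sanity check, assuming oss.space here is in bytes: converting it
>>> to MB reproduces the same doubled total that queryspace reports,
>>>
>>>   $ echo $((86636386320384 / 1024 / 1024))
>>>   82622896
>>>
>>> i.e. roughly twice the ~41 TB that df actually sees.)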
>>>
>>>
>>>
>>> aliprod@storage03: ~ $ xrdfs localhost query stats total
>>> <statistics tod="1381388316" ver="v3.3.2" src="storage03.spacescience.ro:1094" tos="1381231940" pgm="xrootd" ins="anon" pid="12987" site="">
>>> <stats id="info"><host>storage03.spacescience.ro</host><port>1094</port><name>anon</name></stats>
>>> <stats id="buff"><reqs>105006</reqs><mem>137090048</mem><buffs>180</buffs><adj>0</adj></stats>
>>> <stats id="link"><num>10</num><maxn>73</maxn><tot>28885</tot><in>18096047509</in><out>1510999236517</out><ctime>1995692</ctime><tmo>45893</tmo><stall>0</stall><sfps>0</sfps></stats>
>>> <stats id="poll"><att>10</att><en>49466</en><ev>45888</ev><int>0</int></stats>
>>> <stats id="proc"><usr><s>40</s><u>428853</u></usr><sys><s>2509</s><u>239537</u></sys></stats>
>>> <stats id="xrootd"><num>28884</num><ops><open>40772</open><rf>0</rf><rd>19695905</rd><pr>0</pr><rv>931313</rv><rs>10051282</rs><wr>4169</wr><sync>0</sync><getf>0</getf><putf>0</putf><misc>86049</misc></ops><aio><num>0</num><max>0</max><rej>0</rej></aio><err>201</err><rdr>0</rdr><dly>0</dly><lgn><num>28882</num><af>0</af><au>28880</au><ua>0</ua></lgn></stats>
>>> <stats id="ofs"><role>server</role><opr>3</opr><opw>0</opw><opp>0</opp><ups>0</ups><han>3</han><rdr>0</rdr><bxq>0</bxq><rep>0</rep><err>0</err><dly>0</dly><sok>0</sok><ser>0</ser><tpc><grnt>0</grnt><deny>0</deny><err>0</err><exp>0</exp></tpc></stats>
>>> <stats id="oss" v="2"><paths>2
>>> <stats id="0"><lp>"/"</lp><rp>"/storage01/xrdnamespace/home/aliprod/data/"</rp><tot>13959964628</tot><free>7213124</free><ino>1772814336</ino><ifr>1769450005</ifr></stats>
>>> <stats id="1"><lp>"/"</lp><rp>"/storage01/xrdnamespace/home/aliprod/data/"</rp><tot>13959964628</tot><free>7213124</free><ino>1772814336</ino><ifr>1769450005</ifr></stats>
>>> </paths><space>2
>>> <stats id="0"><name>public</name><tot>84605846016</tot><free>2010157</free><maxf>443841</maxf><fsn>6</fsn><usg>10693215</usg></stats>
>>> </space></stats>
>>> <stats id="sched"><jobs>78587</jobs><inq>0</inq><maxinq>2</maxinq><threads>28</threads><idle>27</idle><tcr>28</tcr><tde>0</tde><tlimr>0</tlimr></stats>
>>> <stats id="sgen"><as>1</as><et>9</et><toe>1381388316</toe></stats>
>>> </statistics>
>>>
>>> the problem is much clearer in this output..
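>>>
>>> In case it helps, a one-liner to pull out just the registered real paths
>>> from the stats (output shortened to the matches):
>>>
>>>   $ xrdfs localhost query stats total | grep -o '<rp>[^<]*</rp>'
>>>   <rp>"/storage01/xrdnamespace/home/aliprod/data/"</rp>
>>>   <rp>"/storage01/xrdnamespace/home/aliprod/data/"</rp>
>>>
>>> the same /storage01 path is registered twice, while /storage02 and
>>> /storage03 never show up.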
>>>
>>> Thanks!
>>> Adrian
>>>
>>>
>>>
>>>
>>>> Utilization of RW space (%):      c
>>>> Nodes with staging space:         0
>>>> Size of staging space (MB):       0
>>>> Utilization of staging space (%):
>>>>
>>>>
>>>> Thanks!
>>>> Adrian
>>>>
>>>>
>>>>>
>>>>> Andy
>>>>>
>>>>> -----Original Message----- From: Adrian Sevcenco
>>>>> Sent: Tuesday, October 08, 2013 4:58 AM
>>>>> To: [log in to unmask]
>>>>> Subject: xrootd :: server 3.3.2 bug :: double size reporting
>>>>>
>>>>> Hi! I have a nagging problem with the reporting of size in xrootd:
>>>>>
>>>>> aliprod@storage03: ~ $ echo exit | ~/xrdserver/bin/xrd localhost queryspace / -
>>>>> Disk space approximations (MB):
>>>>> Total         : 82622896
>>>>> Free          : 39192
>>>>> Used          : 0
>>>>> Largest chunk : 7190
>>>>>
>>>>> aliprod@storage03: ~ $ df -B M | grep storage
>>>>> /dev/sdc1            13632778M 13626433M     6346M 100% /storage01
>>>>> /dev/sdc2            13632778M 13625588M     7191M 100% /storage02
>>>>> /dev/sdc3            14045893M 14039832M     6061M 100% /storage03
>>>>>
>>>>> aliprod@storage03: ~ $ df -BM | grep storage | awk 'BEGIN {total=0} {gsub("M",""); total+= $2;} END {print total}'
>>>>> 41311449
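>>>>>
>>>>> (For what it's worth, the xrd total is almost exactly twice this sum, so
>>>>> it looks like the whole pool is being counted twice rather than some
>>>>> random corruption:
>>>>>
>>>>>   $ echo $((2 * 41311449))
>>>>>   82622898
>>>>>
>>>>> vs. the 82622896 MB reported above, the small difference presumably
>>>>> being per-partition rounding.)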
>>>>>
>>>>> Given that the problem shows up in the queryspace result, I imagine it
>>>>> is a problem internal to xrd. (It seems that the 3.2.6 version is OK;
>>>>> these are the ALICE-packaged versions.)
>>>>>
>>>>> Any idea what the problem is and how I can investigate it further?
>>>>> Thanks a lot!
>>>>> Adrian
> 


-- 
----------------------------------------------
Adrian Sevcenco, Ph.D.                       |
Institute of Space Science - ISS, Romania    |
adrian.sevcenco at {cern.ch,spacescience.ro} |
----------------------------------------------


########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the XROOTD-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1