Hi Jerome,

Pavel was already very helpful yesterday; having your configuration 
files at hand as well would be very useful. :)

cheers
alessandra

Jerome LAURET wrote:
> 
>    Would it be useful if we provided our configurations as examples to
> be added to the Xrootd site (??!!). 
> http://xrootd.slac.stanford.edu/examples/
> We could put together a picture and some text and pass them to Pete, 
> something like
> http://xrootd.slac.stanford.edu/examples/slac/index.html
> 
> 
> Andrew Hanushevsky wrote:
>> Hi Alessandra,
>>
>> He needs supervisors because he has more than 64 data nodes. You will 
>> need them too. Everything is explained in
>>
>> http://xrootd.slac.stanford.edu/doc/olb_config_050601/olb_config.htm
>>
>> in the introduction section.
>>
>> Andy
>>
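(For anyone following along: a minimal sketch of the three-tier layout 
Andy describes, with one shared configuration file and the role chosen 
per host. The hostnames below are hypothetical, and the directives use 
the current all.* syntax purely for illustration; the exact olbd 
directives for this release are in the olb_config document linked above.)

    # One config file for every node; each host picks its role.
    all.manager redirector.example.org:3121

    if redirector.example.org
       all.role manager       # the top-level redirector
    else if sup*.example.org
       all.role supervisor    # supervisors fan out beyond 64 data servers
    else
       all.role server        # ordinary data server on a WN
    fi
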
>> ----- Original Message ----- From: "Alessandra Forti" 
>> <[log in to unmask]>
>> To: <[log in to unmask]>
>> Cc: <[log in to unmask]>
>> Sent: Wednesday, May 24, 2006 7:12 AM
>> Subject: Re: New xrootd development version 20060523-1741 available
>>
>>
>>> Hi,
>>>
>>> I see you have a three-tier hierarchy: redirector, supervisors and 
>>> worker nodes. What is a supervisor? Other configurations don't seem 
>>> to have this layer. Can supervisors be specialised worker nodes? 
>>> What is olbd? And finally, is the software for the redirector, 
>>> supervisors and data servers the same (i.e. do I install the same 
>>> rpm, just configured differently)? Why are you using 2 xrootd 
>>> instances per WN? Have you bound each to a different partition, or 
>>> do they both serve all the available partitions?
>>>
>>> Apologies for all these questions; you'll have to bear with me until 
>>> I get up to speed.
>>>
>>> thanks
>>>
>>> cheers
>>> alessandra
>>>
>>> Pavel Jakl wrote:
>>>> Hello,
>>>>
>>>> Yes, it is. But the plot shows the sum of the usage on the 
>>>> production and development clusters. I will have to split the 
>>>> statistics into 2 plots; sorry about not giving you a proper plot.
>>>> I can say that I couldn't see what Fabrizio is talking about, 
>>>> because we still don't have that many users on xrootd. We wanted to 
>>>> move away from our old solution very slowly and watch the behavior 
>>>> of xrootd as the number of users increases over time. Users are 
>>>> switching from rootd to xrootd by their own choice, and I can say 
>>>> that this process is completely unclear to me :-)  But the number 
>>>> of users keeps growing as xrootd offers more advantages.
>>>>
>>>> Cheers
>>>> Pavel
>>>>
>>>> Alessandra Forti wrote:
>>>>> Hi Pavel,
>>>>>
>>>>> thanks, this is very useful. Is this the STAR production cluster? 
>>>>> I also read the talk given at CHEP.
>>>>>
>>>>> cheers
>>>>> alessandra
>>>>>
>>>>>
>>>>> Pavel Jakl wrote:
>>>>>> Hello Alessandra,
>>>>>>
>>>>>> Andy talked mostly about scalability and performance tests of 
>>>>>> xrootd, trying to show the behavior of xrootd at its upper 
>>>>>> limits, I believe. I don't know anything about xrootd using a lot 
>>>>>> of memory.
>>>>>> We are using xrootd on WNs, and we have deployed it on 320 nodes 
>>>>>> with about 130 TB. These nodes are heavily loaded by users' jobs, 
>>>>>> mostly ROOT jobs.
>>>>>> To give you an idea, I am sending you plots produced by a really 
>>>>>> home-made monitoring setup based on the Ganglia toolkit.
>>>>>> I also need to mention that you have to divide these numbers by a 
>>>>>> factor of two, because we are running two instances of xrootd per 
>>>>>> node (we have a development and a production cluster). You can 
>>>>>> see that the memory usage is really, really small.
>>>>>>
>>>>>> Cheers
>>>>>> Pavel
>>>>>>
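(Side note on the two instances: each xrootd is normally started with 
its own instance name, configuration file, log, and port. A minimal 
sketch, with hypothetical paths and port numbers; -n, -c, -l and -p 
are the standard xrootd command-line options:)

    # production instance on the default port
    xrootd -n prod -c /etc/xrootd/prod.cfg -l /var/log/xrootd/prod.log -p 1094 &
    # development instance on its own port
    xrootd -n dev -c /etc/xrootd/dev.cfg -l /var/log/xrootd/dev.log -p 1095 &
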
>>>>>> Alessandra Forti wrote:
>>>>>>> Apologies, it is using xrootd on the WNs that worries me, not 
>>>>>>> dcache... I'm a bit confused right now :(
>>>>>>>
>>>>>>> Alessandra Forti wrote:
>>>>>>>> Hi Peter,
>>>>>>>>
>>>>>>>> after yesterday I also remembered that the second thing that 
>>>>>>>> worried me about using dcache on the WNs is a comment Andy made 
>>>>>>>> about memory use, in his talk at CHEP I believe. He said that 
>>>>>>>> xrootd uses a lot of memory, but no numbers were specified.
>>>>>>>>
>>>>>>>> My systems are dual-CPU with 2 GB of memory per CPU. 
>>>>>>>> Considering that an ATLAS job can use more than 1 GB (we are 
>>>>>>>> now at 1.1, I think), will that be enough? I think so, but I 
>>>>>>>> just wanted to check.
>>>>>>>>
>>>>>>>> cheers
>>>>>>>> alessandra
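(Working that out under the stated numbers, assuming one job per CPU: 
2 CPUs x 2 GB = 4 GB per node; two ATLAS jobs at ~1.1 GB each take 
~2.2 GB, leaving roughly 1.8 GB for the OS plus the xrootd server, 
which, going by the Ganglia numbers Pavel quotes above, needs far less 
than that.)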
>>>>>>>>
>>>>>>>>
>>>>>>>> Peter Elmer wrote:
>>>>>>>>>    Hi All,
>>>>>>>>>
>>>>>>>>>   There is now a new xrootd development version: xrootd 
>>>>>>>>> 20060523-1741, please see:
>>>>>>>>>   http://xrootd.slac.stanford.edu/download/20060523-1741/
>>>>>>>>>
>>>>>>>>> for downloads.
>>>>>>>>>
>>>>>>>>>   Relative to the last development build (20060418-0404) this 
>>>>>>>>> includes a variety of small bug fixes, plus one important one 
>>>>>>>>> for the redirector. See
>>>>>>>>> the xrootd.History file for more details. I've included a link 
>>>>>>>>> to the
>>>>>>>>> SL3 debuginfo rpm on the rpm page. (Although I've not tried it 
>>>>>>>>> myself,
>>>>>>>>> so I have no idea if it works! Feedback is welcome.)
>>>>>>>>>
>>>>>>>>>   Gerri, if it is still possible, you could add this to the 
>>>>>>>>> next ROOT
>>>>>>>>> build, too.
>>>>>>>>>
>>>>>>>>>    For the full set of changes and links to rpms/tarballs to 
>>>>>>>>> download, see the xrootd web page and/or version history:
>>>>>>>>>
>>>>>>>>>     http://xrootd.slac.stanford.edu
>>>>>>>>>     http://xrootd.slac.stanford.edu/xrootd.History
>>>>>>>>>
>>>>>>>>>  Let us know if there are problems.
>>>>>>>>>
>>>>>>>>>                                    Pete
>>>>>>>>>
>>>>>>>>> -------------------------------------------------------------------------
>>>>>>>>> Peter Elmer     E-mail: [log in to unmask]      Phone: +41 (22) 767-4644
>>>>>>>>> Address: CERN Division PPE, Bat. 32 2C-14, CH-1211 Geneva 23, Switzerland
>>>>>>>>> -------------------------------------------------------------------------
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> [Ganglia plots attached here]
>>>>>
>>>>
>>>
>>> -- 
>>> *******************************************
>>> * Dr Alessandra Forti                     *
>>> * Technical Coordinator - NorthGrid Tier2 *
>>> * http://www.hep.man.ac.uk/u/aforti       *
>>> *******************************************
>>>
> 

-- 
*******************************************
* Dr Alessandra Forti			  *
* Technical Coordinator - NorthGrid Tier2 *
* http://www.hep.man.ac.uk/u/aforti	  *
*******************************************