Hi Fabrizio,

so the memory depends on how well the files and the replicas are 
distributed?
I have 1000 nodes (2000 CPUs). I can well believe it will never happen 
that 1999 processes try to access 1 xrootd instance, but it might be 
over 100. I guess it is a matter of tuning.
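
To make the tuning concrete, here is a rough back-of-the-envelope 
sketch in Python. Only the node RAM and ATLAS job figures come from 
this thread; the per-client memory cost is just a guess extrapolated 
from your PIII test, not a measured xrootd number:

  # Rough capacity estimate for one worker node also running xrootd.
  ram_per_node_gb = 4.0    # 2 CPUs x 2 GB per CPU (from this thread)
  atlas_job_gb    = 1.1    # current ATLAS job footprint (from this thread)
  jobs_per_node   = 2      # one job per CPU

  # Guessed per-client server cost: a PIII with 256 MB survived heavy
  # stress tests, so assume a few MB per client at most.
  mb_per_client = 2.5      # assumption, not a measured figure
  peak_clients  = 100      # plausible worst case out of 2000 CPUs

  server_gb   = peak_clients * mb_per_client / 1024.0
  headroom_gb = ram_per_node_gb - jobs_per_node * atlas_job_gb

  print("xrootd overhead at %d clients: ~%.2f GB" % (peak_clients, server_gb))
  print("headroom left by ATLAS jobs:  ~%.2f GB" % headroom_gb)

With those guesses the server overhead stays well under the ~1.8 GB the 
jobs leave free, so the nodes look safe on paper, but the per-client 
figure would need measuring before I trust it.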

cheers
alessandra

Fabrizio Furano wrote:
> Hi Alessandra,
> 
>  typically the memory needs of a server are proportional to the number 
> of clients and to the ongoing activity rates. For a small number of 
> clients (let's say 1-100) the memory needs are quite low. I've done 
> many, many stress tests on a PIII machine with 256 MB with no problems 
> at all. Memory becomes an important parameter when you plan for 
> thousands of clients per server.
> 
> Fabrizio
> 
> Alessandra Forti wrote:
>> Apologies; that worries me about using xrootd on the WNs....
>> I'm a bit confused right now :(
>>
>> Alessandra Forti wrote:
>>> Hi Peter,
>>>
>>> after yesterday I also remembered that the second thing that worried 
>>> me about using dcache on the WN is a comment Andy made about memory 
>>> use in his talk at CHEP, I believe. He said that xrootd uses a lot 
>>> of memory, but no numbers were specified.
>>>
>>> My system is dual-CPU with 2 GB of memory per CPU. Considering that 
>>> an ATLAS job can use more than 1 GB (we are now at 1.1 GB, I think), 
>>> will it be enough? I think so, but I just wanted to check.
>>>
>>> cheers
>>> alessandra
>>>
>>>
>>> Peter Elmer wrote:
>>>>    Hi All,
>>>>
>>>>   There is now a new xrootd development version, xrootd 
>>>> 20060523-1741. For downloads, please see:
>>>>
>>>>   http://xrootd.slac.stanford.edu/download/20060523-1741/
>>>>
>>>>   Relative to the last development build (20060418-0404) this 
>>>> includes a variety of small bug fixes, plus one important one for 
>>>> the redirector. See the xrootd.History file for more details. I've 
>>>> included a link to the SL3 debuginfo rpm on the rpm page. (I've not 
>>>> tried it myself, so I have no idea if it works! Feedback is welcome.)
>>>>
>>>>   Gerri, if it is still possible, you could add this to the next ROOT
>>>> build, too.
>>>>
>>>>   For the full set of changes and links to rpms/tarballs to 
>>>> download, see the xrootd web page and/or the version history:
>>>>
>>>>     http://xrootd.slac.stanford.edu
>>>>     http://xrootd.slac.stanford.edu/xrootd.History
>>>>
>>>>  Let us know if there are problems.
>>>>
>>>>                                    Pete
>>>>
>>>> -------------------------------------------------------------------------
>>>> Peter Elmer     E-mail: [log in to unmask]      Phone: +41 (22) 767-4644
>>>> Address: CERN Division PPE, Bat. 32 2C-14, CH-1211 Geneva 23, Switzerland
>>>> -------------------------------------------------------------------------
>>>
>>

-- 
*******************************************
* Dr Alessandra Forti                     *
* Technical Coordinator - NorthGrid Tier2 *
* http://www.hep.man.ac.uk/u/aforti       *
*******************************************