Ah, another thing.
I committed an experimental but working (I hope!) version of the new
xfer/caching algorithm. It would be very nice to have some feedback
on its performance, since one of its by-products (together with the
concurrent xfers) is that the requests arrive almost sorted. I wonder
whether this increases or decreases performance in your case.
BTW: what is your case? Histograms from ROOT files? Reconstruction?
Analysis?
Fabrizio
Fabrizio Furano wrote:
> Hi Derek,
>
> Derek Feichtinger wrote:
>> Hi,
>>
>> This is slightly off-topic, but nonetheless important for the setup
>> of large direct-attached storage systems typically used with xrootd.
>> Maybe some of you have good suggestions or experiences.
>>
>
> Well, I don't know your exact requirements, but wouldn't it be
> sufficient to look at the traffic by averaging the data volume seen
> by each client after the file close?
>
> Another (better) way would be to set up XrdMon. Why not?
>
>
> Fabrizio
>
>
>> For the next upgrade of our Tier2 I need a benchmark that measures
>> whether I can satisfy a given I/O requirement per worker node (WN,
>> or CPU core). This has to be tested while all WNs are reading in
>> parallel from all file servers. I want to assume that the clients
>> on the WNs read in a nicely distributed fashion from the file
>> servers: e.g. with 10 file servers and 150 WNs, on average 15 WNs
>> would be reading from any given file server at the same time. But
>> any combination of 15 WNs must be able to yield the desired
>> bandwidth.
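>>
>> As an illustration, a minimal per-node read driver could look like
>> the sketch below (the mount point and file name are made up, and it
>> assumes the servers are reachable via a POSIX mount rather than the
>> xrootd protocol; each WN would run one instance, and the per-server
>> numbers would be aggregated afterwards):
>>
>>     #!/usr/bin/env python
>>     # Hypothetical sketch: read one file sequentially and report
>>     # the throughput this client saw.
>>     import sys, time
>>
>>     CHUNK = 1024 * 1024   # read in 1 MB blocks
>>
>>     def read_file(path):
>>         start = time.time()
>>         nbytes = 0
>>         f = open(path, 'rb')
>>         while True:
>>             data = f.read(CHUNK)
>>             if not data:
>>                 break
>>             nbytes += len(data)
>>         f.close()
>>         elapsed = time.time() - start
>>         return nbytes / elapsed / 1e6   # MB/s
>>
>>     if __name__ == '__main__':
>>         # e.g.  python readbench.py /mnt/server03/testfile.dat
>>         print('%.1f MB/s' % read_file(sys.argv[1]))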
>>
>> Naturally, this benchmark is targeted at mimicking a cluster running
>> analysis applications.
>>
>> A primitive test (though not exactly matching the use case) could
>> use netperf or iperf in UDP mode: the file servers would receive
>> packets from the required fraction of worker nodes (netperf lets
>> you set the sending intervals and packet sizes). One would
>> gradually increase the sending rate per worker node until UDP
>> packet loss is observed.
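>>
>> A rough sketch of a wrapper that steps up the UDP rate, assuming an
>> 'iperf -s -u' instance is already listening on the target (the
>> hostname and the rate ladder below are made up):
>>
>>     #!/usr/bin/env python
>>     # Step up the per-WN UDP sending rate with iperf and watch the
>>     # client report for datagram loss.
>>     import subprocess
>>
>>     SERVER = 'fileserver01'     # hypothetical target host
>>     RATES = ['100M', '200M', '400M', '600M', '800M']
>>
>>     for rate in RATES:
>>         # -u: UDP, -b: target bandwidth, -t: duration in seconds
>>         out = subprocess.check_output(
>>             ['iperf', '-c', SERVER, '-u', '-b', rate, '-t', '10'])
>>         # the UDP client report ends with a loss summary such as
>>         # '... 0/ 85470 (0%)'; stop raising the rate once loss
>>         # appears
>>         print('rate %s:\n%s' % (rate, out.decode()))
>>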
>> I'd be glad for any suggestions.
>>
>> Cheers,
>> Derek
>>