Hello,

I've been reading the documentation on xrootd, and I'm wondering how it 
might work in practice for a particular workload I'm trying to optimize.

xrootd seems to perform very well at providing high aggregate read 
performance for one or many clients. Can someone provide information, or 
point me towards published results, on write-intensive and concurrent 
read/write-intensive applications? How are large data sets normally 
loaded into xrootd?

I'm trying to understand how well xrootd would perform if there were, 
say, a stream of data (about 25 GB over 8 hours each day, with peak 
write rates of 125,000 messages a second) that had to be written while 
a number of clients simultaneously attempted to analyse the data - 
either running large historical analyses or perhaps only interested in 
the last 10 minutes' worth of events.
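
For scale, here is a quick back-of-envelope sketch of the average ingest 
rate those figures imply (it assumes the 25 GB arrives evenly over the 
8-hour window, which the peak message rate suggests it does not):

    # Back-of-envelope only: assumes the 25 GB is spread evenly over 8 hours.
    total_bytes = 25e9           # ~25 GB per day
    window_s = 8 * 3600          # 8-hour ingest window
    print(f"average ingest ~ {total_bytes / window_s / 1e6:.2f} MB/s")  # ~0.87 MB/s

So the sustained average is fairly modest; my concern is really the 
bursts and the concurrent readers.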

Any pointers?

Regards,
Niall