I've tested an upgrade from xrootd 5.3.3 to 5.4.3 on a machine with a Ceph storage backend. This caused an immediate drop in performance, which I've tracked down to the fact that on recent xrootd versions pgRead/pgWrite (pgrw) is the default transfer operation on the client side. Changes to the XrdCeph plugin (such as the features it advertises) didn't seem to be picked up correctly: by the time the plugin converts a request into an aio_read, it has already been split into 64 KB chunks, which is the main reason for the slowdown.
Since small read sizes may not be optimal for some storage backends (Ceph included), would it be possible to make this configurable server-side, with a configuration parameter determining whether the server supports pgrw, rather than tying it to the protocol version as is currently the case? A sketch of what I have in mind follows.
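For illustration only, something along these lines is what I'm suggesting (the directive name below is made up, it does not exist today): a server-side switch that stops the server from advertising pgRead/pgWrite support, so clients fall back to plain read/write regardless of protocol version.

    # Hypothetical directive, not an existing XRootD option:
    # "off" would make the server not advertise pgRead/pgWrite support,
    # so clients fall back to plain read/write even on new protocol versions.
    xrootd.pgrw off

The admin running a Ceph-backed (or similar) server could then turn pgrw off for that instance, while servers on storage that handles small page-sized reads well would leave it on.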

-- 
Reply to this email directly or view it on GitHub:
https://github.com/xrootd/xrootd/issues/1740
