> This comes from @bbockelm's comment that the code breaks up the vector read into 128KB chunks, which is far smaller than what the server allows. The maximum read per element is 2MB-16, which comes from the largest buffer allowed less the vector read header length that is echoed back in the response. So, the complaint is that 128KB is arbitrary and far too small for certain reads, especially ones that come from Xcache. So, the question is why that value was chosen and what the blocking length should be. Finally, at some point the http side should simply not use vector read if the range length is greater than some size, but what should that be?

Thanks for your reply. I now understand the problem: 128KB is too small. The maximum read per element comes from the size of the buffer, which is now configurable via `maxbsz` and can be anywhere between 2MB and 1GB. So I believe I can take that value minus 16?
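For concreteness, here is a minimal C++ sketch of the arithmetic I have in mind (illustrative names only, not the actual XrdHttp/XrdCl code). It assumes the per-element cap is the configured `maxbsz` (clamped to the 2MB..1GB range) minus the 16-byte per-element header echoed back in the vector-read response, and compares how many elements a 64MB range would need under the current 128KB cap versus a `maxbsz`-based cap:

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative sketch only -- these names are not the actual XRootD code.
// Assumed per-element cap: maxbsz (largest allowed buffer, 2MB..1GB)
// minus the 16-byte per-element header echoed back in the response.
static uint64_t elementCap(uint64_t maxbsz)
{
    const uint64_t headerLen = 16;
    const uint64_t lo = 2ULL * 1024 * 1024;        // maxbsz lower bound: 2MB
    const uint64_t hi = 1ULL * 1024 * 1024 * 1024; // maxbsz upper bound: 1GB
    if (maxbsz < lo) maxbsz = lo;
    if (maxbsz > hi) maxbsz = hi;
    return maxbsz - headerLen;
}

// Number of vector-read elements needed to cover `length` bytes
// when each element may carry at most `cap` bytes.
static uint64_t elementsFor(uint64_t length, uint64_t cap)
{
    return (length + cap - 1) / cap;               // ceiling division
}

int main()
{
    const uint64_t range = 64ULL * 1024 * 1024;    // example: a 64MB request
    std::printf("128KB cap       : %llu elements\n",
                (unsigned long long)elementsFor(range, 128 * 1024));
    std::printf("maxbsz 2MB - 16 : %llu elements\n",
                (unsigned long long)elementsFor(range, elementCap(2ULL * 1024 * 1024)));
    std::printf("maxbsz 1GB - 16 : %llu elements\n",
                (unsigned long long)elementsFor(range, elementCap(1ULL << 30)));
    return 0;
}
```

For a 64MB range this gives 512 elements at the 128KB cap, 33 elements at 2MB-16, and a single element with a 1GB buffer, which is the kind of gap the comment above is pointing at.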

-- 
Reply to this email directly or view it on GitHub:
https://github.com/xrootd/xrootd/issues/1976#issuecomment-1578046262
