This is log10 of the number of sub-requests within a vector read; most fall between 400 and 1000:
This is log10 of the total extent of a vector read; as you can see there are two peaks, at 10 MB and 100 MB:
This is log10 of the size of individual sub-requests within a vector read; most are around 1 kB, ranging up to 100 kB:
And this is log10 of the gaps between sub-requests; again two peaks, at 1 kB and between 100 kB and 200 kB:
To summarize, if this sample is representative (52 k file accesses, 2.4 M vector-read requests), this should actually work rather well with xcache and a 64 MB block size: a typical vector read will need about two blocks to serve it.
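To make the "about two blocks" arithmetic concrete, here is a minimal sketch (my own illustration, not xcache code): a contiguous range of 10-100 MB served from 64 MB blocks touches one to three blocks, depending on where it starts relative to a block boundary.

```python
# Hypothetical sketch: count how many 64 MB cache blocks a contiguous
# byte range touches. Numbers mirror the histograms above.
BLOCK = 64 * 1024 * 1024  # assumed xcache block size, 64 MB

def blocks_touched(offset: int, extent: int, block: int = BLOCK) -> int:
    """Number of cache blocks the range [offset, offset+extent) touches."""
    first = offset // block
    last = (offset + extent - 1) // block
    return last - first + 1

# A 100 MB vector read starting mid-block touches 3 blocks;
# a 10 MB one starting at the same offset touches just 1.
print(blocks_touched(30 * 1024 * 1024, 100 * 1024 * 1024))  # → 3
print(blocks_touched(30 * 1024 * 1024, 10 * 1024 * 1024))   # → 1
```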
Now, if these vector reads do reach the ceph server, they will wreak havoc: the same block will be read about 500 times to get only 1 kB to 200 kB of data out. Fixing vector reads in xrootd-ceph will indeed help.
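The fix amounts to coalescing: instead of one backing-store read per sub-request, group the sub-requests by the block they land in and read each block once. A hedged illustration (not the xrootd-ceph API, just the idea):

```python
# Hypothetical illustration: compare one backing read per sub-request
# (current behaviour) against one read per distinct block touched.
from collections import defaultdict

BLOCK = 64 * 1024 * 1024  # assumed 64 MB block size

def reads_naive(subrequests):
    """One backing read per sub-request."""
    return len(subrequests)

def reads_coalesced(subrequests):
    """Group sub-requests by block; one backing read per distinct block."""
    by_block = defaultdict(list)
    for offset, length in subrequests:
        for b in range(offset // BLOCK, (offset + length - 1) // BLOCK + 1):
            by_block[b].append((offset, length))
    return len(by_block)

# 500 sub-requests of 1 kB, spaced 20 kB apart (~10 MB span),
# all land in a single 64 MB block.
reqs = [(i * 20 * 1024, 1024) for i in range(500)]
print(reads_naive(reqs), reads_coalesced(reqs))  # → 500 1
```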
@rajanandakumar Are there two different types of jobs being run here? Or maybe some of them fail?
My analysis program reports that there are some request offsets that go beyond the end-of-file.
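For reference, the check behind that report is simple; a minimal sketch of it (my own reconstruction, not the actual analysis program):

```python
# Hypothetical check: flag sub-requests whose byte range extends
# past end-of-file.
def past_eof(subrequests, file_size):
    """Return the (offset, length) pairs that read beyond file_size."""
    return [(o, l) for o, l in subrequests if o + l > file_size]

print(past_eof([(0, 1024), (10_000, 2048)], file_size=11_000))
# → [(10000, 2048)]
```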
Are these jobs succeeding now? I assume that if they fail we won't get the full picture, so we really want them to run to the end.