This is log10 of the number of sub-requests within a vector read -- most likely between 400 and 1000:
vrd_num_subreq_cum_log

This is log10 of the total extent of the vector read; as you can see there are two peaks, at 10 MB and 100 MB:
vrd_tot_extent_cum_B

This is log10 of the sizes of individual sub-requests in a vector read, most likely 1 kB, ranging up to 100 kB:
vrd_extent_cum_B

And log10 of the individual gaps between sub-requests, with two peaks, one at 1 kB and one between 100 kB and 200 kB:
vrd_inner_offs_cum_B
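
For concreteness, here is a minimal sketch of how the four log10 samples behind these plots could be derived from a trace of vector reads, assuming each request is recorded as an offset-sorted list of (offset, length) sub-requests. The trace format and function name are illustrative, not the actual analysis code:

```python
import math

def vread_stats(requests):
    """requests: list of vector reads, each an offset-sorted list of
    (offset, length) sub-requests. Returns the four log10 samples
    corresponding to the plots above."""
    num_subreq_log, tot_extent_log = [], []
    subreq_extent_log, inner_gap_log = [], []
    for subreqs in requests:
        num_subreq_log.append(math.log10(len(subreqs)))
        # total extent: from start of first sub-request to end of last
        first_off = subreqs[0][0]
        last_off, last_len = subreqs[-1]
        tot_extent_log.append(math.log10(last_off + last_len - first_off))
        # individual sub-request sizes
        for _, length in subreqs:
            subreq_extent_log.append(math.log10(length))
        # gaps between the end of one sub-request and the start of the next
        for (o1, l1), (o2, _) in zip(subreqs, subreqs[1:]):
            gap = o2 - (o1 + l1)
            if gap > 0:
                inner_gap_log.append(math.log10(gap))
    return num_subreq_log, tot_extent_log, subreq_extent_log, inner_gap_log
```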

To summarize, if this sample is representative (52 k file accesses, 2.4 M vector-read requests), this should actually work rather well with xcache and a 64 MB block size -- a typical vector read will require about 2 blocks to serve.
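
A quick back-of-envelope check of the 2-block estimate, using the 10 MB and 100 MB extent peaks read off the plots above (the helper below is just illustration):

```python
# A vector read with total extent E, landing at an arbitrary offset, touches
# ceil(E / block) 64 MB blocks, or one more if it straddles a block boundary.
BLOCK = 64 * 1024**2

def blocks_touched(offset, extent, block=BLOCK):
    first = offset // block
    last = (offset + extent - 1) // block
    return last - first + 1

print(blocks_touched(0, 100 * 1024**2))             # 2 (aligned 100 MB read)
print(blocks_touched(60 * 1024**2, 100 * 1024**2))  # 3 (worst-case alignment)
```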

Now, if these vector reads indeed get to the ceph server, they will wreak havoc, as they will be reading the same block about 500 times to get about 1 kB to 200 kB of data out each time. Fixing vector reads in the xrootd-ceph plugin will indeed help.
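
For illustration, the fix amounts to coalescing sub-requests per block, so each block region is read once instead of hundreds of times. A minimal sketch, assuming 64 MB blocks and (offset, length) sub-requests -- hypothetical code, not the actual xrootd-ceph API:

```python
from collections import defaultdict

BLOCK = 64 * 1024**2

def coalesce_by_block(subreqs, block=BLOCK):
    """subreqs: iterable of (offset, length). Returns one merged
    {block_index: (lo, hi)} read range per block touched, instead of
    one backend read per sub-request."""
    spans = defaultdict(lambda: [float("inf"), 0])
    for off, length in subreqs:
        # a sub-request may straddle a block boundary; split it per block
        for blk in range(off // block, (off + length - 1) // block + 1):
            lo = max(off, blk * block)
            hi = min(off + length, (blk + 1) * block)
            spans[blk][0] = min(spans[blk][0], lo)
            spans[blk][1] = max(spans[blk][1], hi)
    return {blk: tuple(span) for blk, span in spans.items()}
```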

@rajanandakumar Are there two different types of jobs being run here? Or maybe some of them fail?

My analysis program reports that there are some request offsets that go beyond the end of the file.

Are these jobs succeeding now? I assume that if they fail, we won't get the full picture ... so we really want them to make it to the end.

