Here are the plots for jobs that read more than 98% of the file, with additional data that came in over the weekend (179k jobs total, 10.6k pass the cut -- about 6%):
[plot: full_file_read]
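For reference, the ">98% of the file read" cut can be sketched as below. This is a toy illustration with invented field names and values, not the actual monitoring schema:

```python
# Hypothetical per-job records: total bytes read and file size
# (field names and numbers invented for illustration).
jobs = [
    {"bytes_read": 2_050_000_000, "file_size": 2_080_000_000},
    {"bytes_read":   400_000_000, "file_size": 4_000_000_000},
]

def passes_full_read_cut(job, threshold=0.98):
    """True when the job read more than `threshold` of the file."""
    return job["bytes_read"] / job["file_size"] > threshold

selected = [j for j in jobs if passes_full_read_cut(j)]
print(f"{len(selected)}/{len(jobs)} jobs pass the cut")
```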

Now, see how drastically different they are:

  1. the number of subrequests per vector read is below 100, compared to up to 1000 before.
  2. the total extent of a vector read (first to last byte) is smaller.
  3. the sum of subrequest extents is far on the large side.
  4. offsets within the vector read are much smaller.
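The four quantities above can be computed directly from the (offset, length) subrequests of a single vector read. A minimal sketch, with example values invented:

```python
# One vector read as a list of (offset, length) subrequests (values invented).
subreqs = [(0, 65536), (131072, 65536), (1048576, 32768)]

n_subreqs = len(subreqs)                                 # 1. subrequest count
first = min(off for off, _ in subreqs)
last = max(off + length for off, length in subreqs)
total_extent = last - first                              # 2. first to last byte
sum_extents = sum(length for _, length in subreqs)       # 3. sum of subreq extents
rel_offsets = [off - first for off, _ in subreqs]        # 4. offsets within the read
```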

This clearly makes it easier for the cache to keep up (fewer blocks to request at the same time), and when the vector read hits ceph, there are ten times fewer vector reads to process (and each one is larger individually, too).
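The effect of fewer-but-larger reads can be illustrated by coalescing subrequests whose gaps fall below some threshold into single larger reads. This is a toy sketch of the idea, not the actual XRootD or XrdCeph logic:

```python
def coalesce(subreqs, gap=65536):
    """Merge (offset, length) subrequests separated by at most `gap` bytes
    into larger reads (illustration only, not the real implementation)."""
    merged = []
    for off, length in sorted(subreqs):
        if merged and off - (merged[-1][0] + merged[-1][1]) <= gap:
            prev_off, _ = merged[-1]
            merged[-1] = (prev_off, off + length - prev_off)  # extend previous read
        else:
            merged.append((off, length))
    return merged
```

With smaller in-read offsets, more subrequests fall within the merge window, so the backend sees far fewer, larger requests.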

Now, assuming the jobs are all the same ... it looks like you have files with different basket sizes and event-cluster sizes in the mix. I tried to see a pattern in the LFNs, but there doesn't seem to be one. File size doesn't seem to matter either; the sizes are typically between 2 and 8 GB, with some files below 100 MB.

