Here are the plots for jobs that read more than 98% of the file, including additional data that came in over the weekend (179k jobs in total; 10.6k pass the cut, about 6%):
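Just to make the cut explicit, a quick sanity check of the quoted fraction (variable names are mine, not from the monitoring code):

```python
# Sanity-check the ~6% figure quoted above.
total_jobs = 179_000    # all jobs in the sample
passing_jobs = 10_600   # jobs reading >98% of the file
pass_rate = passing_jobs / total_jobs
print(f"{pass_rate:.1%}")  # prints "5.9%", i.e. about 6%
```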
Now, see how drastically different they are:
This clearly makes it easier for the cache to keep up (fewer blocks requested at the same time), and when the vector read hits Ceph it has roughly ten times fewer vector-read elements to process (and each element is larger, too).
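The effect comes from coalescing: many small basket reads at adjacent offsets collapse into a few larger chunks before they reach the storage layer. This is only an illustrative sketch of that idea (not the actual XRootD/cache implementation); `coalesce` and `max_gap` are names I made up:

```python
def coalesce(requests, max_gap=0):
    """Merge contiguous or near-contiguous (offset, length) requests
    into fewer, larger chunks -- the reason the vector reads that
    reach Ceph are both fewer and individually larger."""
    merged = []
    for off, length in sorted(requests):
        if merged and off <= merged[-1][0] + merged[-1][1] + max_gap:
            # Overlaps or abuts the previous chunk: extend it.
            last_off, last_len = merged[-1]
            merged[-1] = (last_off, max(last_len, off + length - last_off))
        else:
            merged.append((off, length))
    return merged

# Three small reads become two chunks; the first two are contiguous.
print(coalesce([(0, 10), (10, 10), (30, 5)]))  # [(0, 20), (30, 5)]
```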
Now, assuming the jobs are all the same ... it looks like you have files with different basket sizes and event-cluster sizes in the mix. I tried to find a pattern in the LFNs, but there doesn't seem to be one. File size doesn't seem to matter either; the sizes are typically between 2 and 8 GB, with some files below 100 MB.