@abh3 In this specific case the XCache passthrough seems to be actively making things worse, hence my suggestion, but I understand that in the majority of cases it is saner to pass the read through as is rather than block and possibly cause a failure. Making the xrd-ceph layer more intelligent, rather than what I suggested, does make more sense.

@osschar:

  1. No, we had not disabled prefetch... does it really default to 10? That might make a significant difference with our block sizes and RAM allocation. Trying it now (a sketch of the relevant pfc directives follows this list).
  2. I doubt there is much appetite to retrofit SSDs into old workernodes, but it might be something to bring up. Annoyingly/interestingly, the non-SSD workernodes do not have universally high rates of this failure and of vector reads getting passed through; we have one generation from 2014(!) that is nearly as good as the new SSD nodes, so there is definitely a bit more digging to be done at our end to understand the variance in XCache performance.
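
For anyone following along, here is a minimal sketch of the pfc directives being discussed, assuming the directive names from the XRootD proxy file cache (XCache) documentation; the values are illustrative placeholders, not a recommendation, and the plugin library name varies between releases, so check the docs for the version you run:

    # XCache / proxy file cache sketch -- directive names per the XRootD pfc docs;
    # values are illustrative only and should be tuned per site.
    pss.cachelib libXrdPfc.so       # cache plugin (older releases ship it as libXrdFileCache.so)
    pfc.blocksize 1M                # cache block size
    pfc.ram       16g               # RAM budget for in-flight blocks
    pfc.prefetch  0                 # max prefetching blocks per file; 0 disables (default is 10)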

