Hi all,
We have a cluster "CMS T2_US_Wisconsin" on which (almost) every
compute node is also an xrootd server and has uniform access to shared
storage (HDFS).
When a compute job requests a file via xrootd, it contacts an xrootd
redirector, which chooses an xrootd server based on default settings.
(AFAIK, one is chosen randomly from the least-loaded servers.)
Since every compute node is also an xrootd server, there would be
less network traffic if a job requested files from the local xrootd
server. Files would then go directly to the compute node that needs
them, rather than first to an xrootd server on a randomly chosen node
and then on to the requesting compute node.
I was hoping someone would have an idea of how to make jobs request
files from the localhost xrootd server first.
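To illustrate what I'm after, a job would effectively do something like
the following, bypassing the redirector (the path is made up, and 1094
is just the default xrootd port):

```
# Hypothetical: fetch a file from the xrootd server running on the
# same node, instead of going through the redirector.
xrdcp root://localhost:1094//store/user/example/file.root /tmp/file.root
```

The question is how to get this localhost-first behavior without
hard-coding it into every job.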
One idea is to set in the xrootd server config file:
xrd.allow host <FQDN_or_IPs_of_local_host>
I believe this would cause the redirector to tell the xrootd client
to fetch the file from the same server that the requesting compute job
is running on, as desired.
Unfortunately, these xrootd servers must also serve computers
outside the local cluster. Since xrd.allow doesn't take IP ranges,
and there is no xrd.deny, satisfying this requirement doesn't appear
easy.
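Concretely, the per-node config I'd want would look something like this
(hostnames are hypothetical; the xrd.allow host form is the only one
I'm aware of, which is exactly the problem):

```
# Hypothetical per-node config: admit the local host...
xrd.allow host node001.hep.wisc.edu
# ...but every external client would then need its own allow line,
# since xrd.allow accepts no IP ranges and has no deny counterpart:
xrd.allow host client1.example.edu
xrd.allow host client2.example.edu
```

Enumerating every external host this way is not practical for us.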
I am also investigating how to do this using LFN to PFN rules as
implemented in CERN's storage.xml files. They are loaded like this:
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=hadoop
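For reference, a minimal storage.xml in the CMS trivial-file-catalog
style might look like this (the path-match and result values are
guesses at an HDFS layout, not our actual rules):

```
<storage-mapping>
  <!-- Hypothetical rule: map LFNs under /store to HDFS paths.
       protocol="hadoop" matches the ?protocol=hadoop suffix
       in the oss.namelib line above. -->
  <lfn-to-pfn protocol="hadoop"
              path-match="/+store/(.*)"
              result="/hdfs/store/$1"/>
</storage-mapping>
```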
I don't know whether (or how) a fallback mechanism is implemented if
the first matching rule does not result in a successful read.
If anyone has any ideas, let me know!
Thanks,
Chad.
########################################################################
Use REPLY-ALL to reply to list
To unsubscribe from the XROOTD-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1