XROOTD-L Archives

Support use of xrootd by HEP experiments

XROOTD-L@LISTSERV.SLAC.STANFORD.EDU

Subject:
From: Adrian Sevcenco <[log in to unmask]>
Reply To: Support use of xrootd by HEP experiments <[log in to unmask]>
Date: Fri, 14 Jun 2019 01:02:43 +0300
Content-Type: multipart/signed
Parts/Attachments: text/plain (1381 bytes), smime.p7s (2353 bytes)

Hi! What would be the recommended settings for an xrootd installation
that uses a distributed file system like Gluster?
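
From what I understand of the cms configuration reference there is a
cms.dfs directive meant exactly for the case where all data servers see
the same shared filesystem; below is a minimal sketch of what I had in
mind for the cmsd config (the option values are only my guess, please
correct me if the syntax is off):

   # declare that every server exports the same shared namespace, so the
   # manager can resolve file lookups centrally instead of polling each
   # data server, and verify the file on the chosen server before redirect
   cms.dfs lookup central redirect verify

Would that be the right starting point for gluster?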

Besides the DFS itself, what other tweaks will make sure that the load is
distributed equally among the file servers?
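
For instance, I was thinking of tuning the selection weights on the
redirector; if I read the cms reference right the scheduler can be told
what to weigh (the percentages below are made-up values, not a
recommendation):

   # weigh server selection by load rather than by free space, since on
   # a shared filesystem the free space is the same everywhere anyway
   cms.sched cpu 50 io 50 space 0

Does something like that make sense, or are the defaults already fine
for this case?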

It would seem that the redirector remembers the last data server that
answered a file query, so the same server is used again... how can this
be disabled?
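
I was wondering whether this is the file-location cache at work; if I
understood the reference correctly there are knobs along these lines
(the values are invented by me, just to show what I mean):

   # treat servers whose load differs by less than 20% as equal, so the
   # redirector round-robins among them instead of picking the same one
   cms.sched fuzz 20
   # keep cached file-location answers only briefly
   cms.fxhold 1m

Is shortening the hold time / raising the fuzz the intended way to avoid
the stickiness, or is there a cleaner switch for it?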

Also, given the uniformity of the storage space, is there a need for a
redirector (or cmsd) at all? Could a simple xrootd (with DNS aliasing)
distribute the load equally?
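
In other words, I am trying to decide between the usual pair of roles,
roughly like this (hostname, port and export path are just examples):

   # on the redirector host
   all.role manager
   all.manager redirector.example.org:3121

   # on every data server
   all.role server
   all.manager redirector.example.org:3121
   all.export /gluster/data

and dropping the cmsd entirely in favour of a round-robin DNS alias that
points at plain xrootd servers. Is the second option considered sane
when the namespace is identical everywhere?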

Also, for really large installations (and not only) I have seen that
there is a mechanism for keeping a metadata cache of all namespace(s) -
XrdCnsd.

Can this be used to bypass the metadata query of files and reduce the
load on the actual block device?
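
(For context, what I have in mind follows the pattern from the cluster
name space write-up, more or less - the event list and paths below are
only my attempt at reproducing it from memory:

   # xrootd launches the name space daemon and feeds it namespace events
   ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -l /var/log/xrootd/cnsd.log

so every create/rename/remove would end up mirrored in the composite
name space.)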

Also, I have seen in the documentation that XrdCnsd is started by xrootd,
which I do not think is a safe model (a service started by a service).
Is there a way for ofs.notify to send the information to an already
started service?
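
For example, could the same events be written to a FIFO that a
separately managed daemon reads? Something along these lines, assuming I
read the ofs reference correctly that a ">path" target is accepted (the
FIFO path is made up):

   # hand the notifications to an already running consumer via a named pipe
   ofs.notify closew create mkdir mv rm rmdir trunc >/var/run/xrootd/cns.fifo

If that form is not supported, is there another recommended way to
decouple the two services?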

Also, this inventory is a collection of what? On the data servers the
namespace is a collection of symlinks.

Thank you!
Adrian



