Hi Matevz,
OK, I figured out the issue here. You have given this configuration a
proxy role (i.e. all.role proxy server). The profile used for that
type of configuration is "distributed file system". That means each server
is equal opportunity and any server will do, no matter what the operation.
Of course, a caching proxy is really somewhere in between. While it's a
proxy, it does have file locality. The way to get the essence of this
configuration is to declare it as a regular cluster....
a) each proxy actually gets "all.role server"
b) you then override the osslib default here by forcibly loading
libXrdPss.so (i.e. ofs.osslib libXrdPss.so).
c) keep the "stage" option on the exports; that is essential.
d) The redirector for the proxy cluster is actually a regular manager
(i.e. all.role manager).
e) The all.manager directives never mention the word "proxy" as the
managers work like regular managers.
f) Of course, you keep all the "pfc" directives and load the proxy cache
plugin on the servers.
This will then use a JBOD profile, which is actually what you have, as the
proxying is really incidental here.
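Putting (a)-(f) together, a minimal per-proxy config sketch could look like
the following (the redirector hostname and port are placeholders; the origin
is taken from your config; adapt the pfc settings to your setup):

```
# --- sketch only; redirector.example.org:1213 is a placeholder ---

# Each caching proxy joins the cluster as a regular server:
all.role server
all.manager redirector.example.org:1213

# Force-load the proxy storage system over the default osslib:
ofs.osslib libXrdPss.so
pss.origin xrootd.t2.ucsd.edu:1094
pss.cachelib libXrdFileCache-4.so

# Keep the stage option on the export:
all.export /store stage

# ... keep your existing pfc.* directives here ...

# The redirector itself is just a regular manager, i.e. it gets:
#   all.role manager
```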
Try that and I guarantee it will work the way you want.
Andy
On Fri, 5 Dec 2014, Matevz Tadel wrote:
> Hi,
>
> Alja and I have been testing a caching-proxy cluster, the idea is that there
> are several independent caching-proxies all reporting to a common redirector
> so that one gets more disk space, better performance and redundancy. Here are
> the scripts we've been using:
>
> http://uaf-2.t2.ucsd.edu/~matevz/xrd/proxy-cluster/
>
> pooxy-klus.cfg is the config and start-pooxy.sh the startup script.
> cabinet-10-10-10 is the redirector.
>
> All servers export /store with the stage option and there is a trivial
> oss.stagecmd /opt/stage-fake.pl that basically just touches the file,
> assuming that proxy will then pull in the parts of the file that are actually
> needed.
>
> On proxy servers, we configure xrootd to use
> ofs.osslib /opt/xrootd/lib64/libXrdPss-4.so
> pss.origin xrootd.t2.ucsd.edu:1094
> pss.cachelib libXrdFileCache-4.so
> while cmsd just gets
> xrootd.fslib /opt/xrootd/lib64/libXrdOfs-4.so
> so that it is able to see what files already exist on disk.
>
> The whole thing works ... but apparently the stage option is too strong. When
> opening a file that exists on proxy machine A, it also happens that the client
> gets redirected to machine X that does not have it -- so it has to be pulled
> in from the remote location one more time.
>
> In particular, this happens after a cluster restart. xrdcp is used to
> access the files. xrdcp does a stat before the open. Both are redirected
> randomly the first time after restart ... after that, the server that was
> selected for open keeps getting selected consistently.
>
> Any ideas what we could do? Or should we be looking for a completely
> different solution?
>
> Could it be that the proxy cmsds are not seeing the files right? I tried
> bumping the trace level to debug, but there is no info there about
> found/not-found cmsd queries.
>
> Cheers,
> Matevz
>
> ########################################################################
> Use REPLY-ALL to reply to list
>
> To unsubscribe from the XROOTD-DEV list, click the following link:
> https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-DEV&A=1
>