Hi Andy,

  We just hit a weird config issue, which should now be fixed. But we 
think there is a little more to do on the cache file system side, so 
that we can build clusters which serve more than one purpose (e.g. prod 
and skim for BaBar).

  Issue 1:
suppose you have a cache file system with two directories you want to 
write to:

oss.cache public /kanga/prd1
oss.cache public /kanga/prd2

It seems that if one directory of the cache file system is not writable 
(e.g. prd1, by mistake), the data server may choose to use it anyway, 
causing problems. It should detect the unwritable directory and pick 
the other one instead.

  Issue 2:
suppose we want two cache file systems in a data server, to avoid the 
"flat" allocation the server makes by default. Here is an example:

oss.cache skm /kanga/skm1
oss.cache skm /kanga/skm2
oss.cache prd /kanga/prd1
oss.cache prd /kanga/prd2
xrootd.export /prod
xrootd.export /skim

The config documentation says (or at least that is how I read it) that 
you can pass the oss.cgroup directive via opaque information in the 
open request, specifying the cache group you want to write to. Well, we 
were unable to make writes to /prod go to the cache group "prd". The 
server seems to always pick a cache directory arbitrarily from among 
all the entries.
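
For the record, what we tried was along these lines (host and filename 
are placeholders; the oss.cgroup opaque syntax is our reading of the 
documentation, which may be where we went wrong):

```
root://dataserver//prod/somefile.root?oss.cgroup=prd
```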

Anyway, even if we make it work, or eventually understand why we could 
not, the best possible solution imho would be to associate a cache 
group with an exported directory, maybe in the xrootd.export directive.

e.g. with directives like:

xrootd.export /prod prd
xrootd.export /skim skm

it would be possible to tell the server to use the cache group "prd" 
when writing /prod data and the cache group "skm" when writing /skim 
data, instead of lumping all the files in the cache file systems together.



What do you think?

Fabrizio