Hello Pavel

Let me try to answer your questions.

1) The stage command 'stagecmd':

If a file is not on disk, xrootd tries to stage the file onto disk. It
calls the stagecmd with two arguments:
   stagecmd <remoteFileName> <localFileName>
where remoteFileName is the name in HPSS and localFileName is the one on
disk. Xrootd forms these names from the file name provided by the client
and the two prefixes oss.remoteroot and oss.localroot.
For example:
the xrootd config file contains
oss.remoteroot /fr
oss.localroot  /fl
a client requests a file like:
    xrdcp root://dataserver//my/test.file  test.file
in this case the stage command is called:
  stagecmd  /fr/my/test.file  /fl/my/test.file

If oss.remoteroot and oss.localroot are not specified, the arguments to the
stagecmd are just the file name specified by the client.
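
To make the prefixing explicit, here is a minimal sketch (not the real
xrootd code, just an illustration in shell) of how the two stagecmd
arguments are formed from the client's file name and the oss.remoteroot /
oss.localroot prefixes from the example above:

```shell
# Prefixes as set in the example xrootd config file.
remoteroot=/fr
localroot=/fl

# File name as specified by the client in the xrdcp command.
clientpath=/my/test.file

# xrootd simply prepends each prefix to the client's path.
remote="${remoteroot}${clientpath}"
localcopy="${localroot}${clientpath}"

# xrootd would then invoke the stage command with both names:
echo "stagecmd $remote $localcopy"
```

Running this prints `stagecmd /fr/my/test.file /fl/my/test.file`, matching
the invocation shown above.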

As you can see, the files will always be staged onto the same disk if you
use oss.localroot. If you have more than one disk on an xrootd server,
you will want to use the cache system and a stagecmd that is aware of the
cache system.

Xrootd actually comes with a cache-aware stage command; it is called
mps_Stage and you can find it in the utils directory of an xrootd release.
I haven't used it myself, but I will find out how to use it.
The utils dir contains a few mps_XXX utils that are used to manage a cache
system. On the xrootd web site there is a document that describes the mps
system: http://xrootd.slac.stanford.edu/doc/mps_config/mps_config.htm


2) The cache file system:

A file that is staged into a cache file system is physically put into one
of the specified caches, and a link between this file and the proper file
name is created. For example:

Let's assume you have the following file:
 /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
and it is in the cache /data3:

> ls -l /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
would show:
/home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root ->
 /data3/%home%starreco%reco%productionHigh%FullField%P04ik%2004%f1.root

As I mentioned above, if you want to use a cache system the stagecmd
has to be aware of it.
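
The name translation itself is easy to reproduce. Here is a sketch in
shell, matching the example above; the variable names are my own, and the
real mps_Stage script may do this differently:

```shell
# Logical file name and cache partition from the example above.
logical=/home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
cache=/data3

# Every '/' in the logical name becomes '%', giving the flat file
# name stored inside the cache partition.
flat=$(printf '%s' "$logical" | tr '/' '%')
target="$cache/$flat"

echo "$target"
# A cache-aware stager would copy the file to $target and then
# create the link back to the logical name:
#   ln -s "$target" "$logical"
```

This prints `/data3/%home%starreco%reco%productionHigh%FullField%P04ik%2004%f1.root`,
the link target shown in the ls output above.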

3) Your setup:

In the configuration that you described below you export the whole cache
system (/dataN) via the olbd.path directive, and that is maybe something
you don't want to do. It also means the client has to access files with
the name /data3/home/..., making your system setup visible to the client.

Instead, it seems to me that you would like to make files with the name
'/home/....' accessible to users, while on the xrootd server these files
are stored in /dataN/... .

You could configure your system  with:

oss.path /home  dread nomig stage

oss.cache /data*

xrootd.export /home

You also have to specify the stagecmd and the mssgwd command. The stage
command you obviously need in order to get a file out of HPSS, and the
mssgwd command is needed because xrootd first checks whether a file is
present in HPSS. If you don't want xrootd to do this, you could provide
an implementation that returns dummy data.
I could provide you with a little dummy script that implements some of the
mssgwd commands.
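
As a rough idea of what such a dummy gateway could look like, here is a
hypothetical sketch written as a shell function so it is easy to test. The
request format (an operation name followed by a path) and the reply
strings are assumptions for illustration only; the real protocol is
described in the mps_config document linked above:

```shell
# Hypothetical dummy MSS gateway logic. The operation names and reply
# format here are assumptions, not the real mssgwd protocol.
mssgw_dummy() {
    op="$1"
    path="$2"
    case "$op" in
        statx)
            # Pretend every file exists in HPSS, so that staging is
            # always attempted: return "ok" plus dummy attribute fields.
            echo "0 -r--r--r-- 1024 0"
            ;;
        *)
            # Any other request: succeed with no data.
            echo "0"
            ;;
    esac
}

mssgw_dummy statx /home/starreco/reco/some.file
```

The point is only that every request gets a harmless, well-formed answer,
so xrootd's HPSS presence check always passes.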


I don't know why all your paths are migratable (mig). I suspect that one
of the options forces mig to be turned on, but I don't know which one. I
will have to look into this.


The line
  oss.path / .....
is always there by default.

I hope I clarified some of your questions. Let me know if not.

Cheers,
   Wilko



On Wed, 13 Jul 2005, Pavel Jakl wrote:

> Hi Wilko,
>
> many thanks for the advice. With the oss.check directive the client is
> redirected to another node and staging of the requested file is started.
> I have some other questions, if you can help me, it will be super.
>
> I want to stage a file from HPSS for example with path like:
> /home/starreco/reco/productionHigh/FullField/P04ik/2004/040/st_physics_
> 5040131_raw_2010003.MuDst.root
>
> and copy to path /data0 or /data1 or /data2 or /data3 which I've
> specified with directives:
>
> oss.path /data0 dread nomig stage
> oss.path /data1 dread nomig stage
> oss.path /data2 dread nomig stage
> oss.path /data3 dread nomig stage
>
> My question is if my thoughts are right and a file with HPSS path:
>
> /home/starreco/reco/productionHigh/FullField/P04ik/2004/040/st_physics_
> 5040131_raw_2010003.MuDst.root
>
> will be staged to local path:
>
> /data0/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/st_ph
> ysics_5040131_raw_2010003.MuDst.root
> i.e. that /data0 is the path to be pre-pended to the local path. Or is
> there some other directive, which can help me to do that ?
>
> My second question is if I need to have specified oss.mssgwcmd command
> when I don't plan to migrate any files from local disk to HPSS ?
>
> The third question is about oss.cache directive. Can I have the location
> of cache in the same directory as I have the files to export ?
> For example, I have all files in directories:
> /data0, /data1, /data2 or /data3 (i export with directive xrootd.export)
> and can I have a oss.cache /data*.
>
> My fourth problem is with oss.path again, I've specified this:
> oss.path /data0 dread nomig stage
> oss.path /data1 dread nomig stage
> oss.path /data2 dread nomig stage
> oss.path /data3 dread nomig stage
>
> but in log file is this:
> oss.path /data3 r/o  check dread mig nomkeep nomlock nommap norcreate
> stage
> oss.path /data2 r/o  check dread mig nomkeep nomlock nommap norcreate
> stage
> oss.path /data1 r/o  check dread mig nomkeep nomlock nommap norcreate
> stage
> oss.path /data0 r/o  check dread mig nomkeep nomlock nommap norcreate
> stage
>
> as you can see with the mig directive, and in addition there is a line:
>
> oss.path / r/o  check dread mig nomkeep nomlock nommap norcreate stage
>
> Is this correct ? Because I don't want to stage any files to path /. (Or
> is needed due to that / is parent directory ?)
>
> Thank you
> Pavel
>
> On Tue, 12 Jul 2005 20:46:20 -0700 (PDT)
> Wilko Kroeger <[log in to unmask]> wrote:
>
> >
> > Hello Pavel
> >
> > I suspect that this is a problem with the mss configuration. In the
> > dev. version the mss command is ignored if the path is only stage-able
> > and  nodread, nocheck, nomig are specified.
> > Do you see in the xrdlog file the line
> >   mssgwcmd ignored; no MSS paths present.
> > from the xrootd startup ?
> >
> > If this is the case you could add
> > oss.check
> > to your config which should cause xrootd to use the mssgwcmd command.
> > (oss.dread or oss.rcreate should also do the trick). But if you
> > specify any of these options, xrootd will behave a little bit
> > differently from how it is configured now.
> >
> > The problem has been fixed in CVS and will be in the next release.
> >
> >
> > Cheers,
> >    Wilko
> >
> >
> >
> >
> >
> > On Tue, 12 Jul 2005, Pavel Jakl wrote:
> >
> > > Hi for some time,
> > >
> > > Firstly, we've installed the latest development version of xrootd on
> > > 100 nodes (1 redirector + 1 supervisor + 98 dataservers) with the
> > > support of open load balancing and everything is working great and
> > > super. I would like to say, brilliant work.
> > >
> > > To my problem, I want to enable the MSS interface. We've implemented
> > > scripts for performing meta-data operations and staging files from
> > > HPSS. (directives oss.mssgwcmd, oss.stagecmd). I 've inserted
> > > everything in config file for dataserver and tried to obtain a file
> > > from HPSS. In redirector log I found this error message:
> > >
> > >  050712 19:35:48 18366 odc_Locate: starlib.10447:13@rcas6230 given
> > >  error msg 'No servers are available to read the file.' by xrdstar
> > >  path=/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/s
> > >  t_physics_5040131_raw_2010003.MuDst.root
> > >
> > > and found out that the scripts were not even called once. (This I
> > > know from the debug support of my script.) My question is whether I
> > > have something wrong in my configuration file, or forgot to add
> > > something, or skipped something when reading the documentation.
> > >
> > > I am enclosing a configuration file for dataserver.
> > >
> > > Thank you for any advice
> > > Pavel
> > >
> > >
> > >
>