Hello Pavel

The problem is that mps.basedir doesn't control where the file is
staged to. The stage command is called from xrootd with two arguments: the
first is the file's name in the MSS and the second is its name on disk.
These names are formed by xrootd from the client-supplied file name plus
an optional prefix (oss.localroot and oss.remoteroot). If you set
oss.localroot /home/starlib
The file
/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/file1.root
would be staged to
/home/starlib/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/file1.root
But be aware that the same is true for opening a file. xrootd will try to
open the file in
/home/starlib/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/file1.root
or if you have a file
/data1/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/file1.root
xrootd will open it in
/home/starlib/data1/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/file1.root
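A minimal sketch of this prefix mapping (the helper name is ours, not part of xrootd, which does this internally in C++):

```python
def with_localroot(client_path: str, localroot: str = "/home/starlib") -> str:
    # oss.localroot is simply prepended to the client-supplied path;
    # the same mapping applies both when staging and when opening a file.
    return localroot + client_path

# Both examples above reduce to this prepend:
staged = with_localroot(
    "/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/file1.root")
```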

Cheers,
   wilko


On Sun, 17 Jul 2005, Pavel Jakl wrote:

> Hello again Wilko,
>
> thank you for your answers (some of them were very useful for me). Following your advice I tried to set up MPS because we need to use more than one cache file system.
> I have a problem with the mps.basedir directive; let me explain what I want:
>
> I am requesting this file from HPSS:
>
> /home/starreco/reco/productionHigh/FullField/P04ik/2004/040/file1.root
>
> and want to stage it from HPSS to a cache file system (/data0 or /data1 or /data2 ...).
>
> First, MPS tries to create the directories for the symlink:
>
> /home/starreco/reco ...  ()
>
> but this is a problem, because I am running xrootd as user starlib and I don't have permission for this directory.
> So I decided to add the directive
>
> mps.basedir  /home/starlib
>
> so that every file will be staged (i.e. its symlink to the cache file system) like:
>
> /home/starlib/home/starreco/reco ....
>
> But as I mentioned above, the basedir directive is not working and xrootd still wants to stage the file as /home/starreco.
> Can you look at my configuration file to see if something is wrong?
>
> Many thanks
> Pavel
>
> On Fri, 15 Jul 2005 13:05:43 -0700 (PDT)
> Wilko Kroeger <[log in to unmask]> wrote:
>
> >
> > Hello Pavel
> >
> > On Fri, 15 Jul 2005, Pavel Jakl wrote:
> >
> > > Hello again,
> > >
> > > 	In the MPS manual and mps_* scripts, there are things like
> > >
> > > $Config{xfrcmd}    = ($isMPS ? '/opt/xrootd/utils/xfrcmd'
> > >                              : '/usr/etc/ooss/pftp_client');
> > >
> > >   and a similar switch for port, nodes, account name ... while both
> > > seem to accept ftp-like commands. Could you please explain the
> > > philosophy behind having separate commands for oss/mps (but
> > > using the same scripts) and why different ports/nodes ?? This is
> > > not entirely clear to us ...
> >
> > The ooss scripts are used at SLAC to manage the cache file systems.
> > The mps scripts evolved out of the ooss scripts and are kept
> > backwards compatible, meaning that they work with the ooss scripts
> > installed in /usr/etc/ooss. The $isMPS flag basically decides whether the
> > MPS scripts that come with xrootd are used or the ooss scripts (installed
> > in /usr/etc/ooss). This allows sites that have ooss installed (like SLAC)
> > to use the mps scripts.
> >
> > >
> > > 	Since the stagecmd for oss is pftp and the mps stagecmd is
> > > xfrcmd, we are puzzled about what xfrcmd does in addition
> > > to pftp. This does not appear to be explained in the documentation.
> >
> > The  '/opt/xrootd/utils/xfrcmd' should do the same thing that
> > '/usr/etc/ooss/pftp_client' does for HPSS.
> > Different sites use different MSS systems, and even if they use the same
> > system the access methods can differ. Therefore the mps scripts were
> > written so that an xrootd site can provide its own implementation for
> > MSS access without modifying the mps scripts. If you are using pftp to access
> > HPSS you could just call it from within /opt/xrootd/utils/xfrcmd.
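For illustration, such a wrapper might look like the sketch below; the "get" subcommand and argument order for pftp_client are assumptions (check your site's pftp usage), and a real site would substitute its own MSS transfer command:

```python
import subprocess

# Hypothetical location, taken from the ooss default mentioned in the thread.
PFTP_CLIENT = "/usr/etc/ooss/pftp_client"

def build_transfer_command(remote_name: str, local_name: str) -> list:
    # xrootd invokes the stage command as: xfrcmd <remoteFileName> <localFileName>
    return [PFTP_CLIENT, "get", remote_name, local_name]

def xfrcmd(remote_name: str, local_name: str) -> None:
    # Run the transfer, failing loudly so xrootd sees a non-zero exit status.
    subprocess.run(build_transfer_command(remote_name, local_name), check=True)
```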
> >
> > I hope this clarified your issues a little bit.
> >
> > Cheers,
> >   Wilko
> >
> > >
> > >
> > > Jerome & Pavel
> > >
> > >
> > > On Thu, 14 Jul 2005 20:50:39 -0400
> > > Jerome LAURET <[log in to unmask]> wrote:
> > >
> > > >
> > > > 	Hello Wilko (possibly Andy),
> > > >
> > > > 	Thanks already for your many answers, which clarify
> > > > things quite a lot, but we are not entirely sure of some details.
> > > > What we are trying to do is the following (re-explained from
> > > > the start, it may not be possible but ..)
> > > >
> > > > - first, we already have files located as (no typo, note
> > > >    the path "massage" here)
> > > >
> > > >    /data?/starlib/reco/productionHigh/FullField/P04ik/2004/f1.root
> > > >
> > > >    where ? stands for any number between 0 and 3. Each file
> > > >    corresponds to its initial source (and continuing with our
> > > >    example)
> > > >
> > > >    /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
> > > >
> > > >    so, intrinsically, /data*/starlib/reco/... is equivalent to
> > > >    /home/starreco/reco/... (different PFN, same LFN; in our
> > > >    scheme, the path is slightly modified so that we land on our
> > > >    feet mapping PFN -> LFN)
> > > >
> > > >
> > > > - Our application takes advantage of already catalogued
> > > >    PFNs. We would like to do a smooth transition as there is
> > > >    about 50+ TB of data already placed ... We nicely access
> > > >    the files using rootd, referencing them as
> > > > root://node//data1/starlib/reco/productionHigh/FullField/P04ik/2004/f1.root
> > > >    as the files are a-priori known
> > > >
> > > >
> > > > - Xrootd would save us a lot, of course, for example by being able
> > > >    to reduce the name space to ONLY
> > > > root://redirector//home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
> > > >
> > > >    as if we were accessing HPSS files while Xrootd handles it
> > > >    would be great. BUT we would also like to be able to access
> > > >    the files the usual-current way i.e. within a syntax
> > > >    indicating //node/data1.
> > > >
> > > > - For this to happen, Xrootd would need to understand BOTH
> > > >    /home/starreco/reco using /data* as cache (for example via MPS)
> > > >    AND also be able to access /data*/starlib/reco in direct
> > > >    access mode.
> > > >
> > > >    If so, several questions come to mind
> > > >
> > > >    /home/starreco would be populated with soft-links to files named
> > > >    /data?/%home%starreco%... which share the space with the files
> > > >    already there.
> > > >
> > > >    * Will Xrootd be happy with that ??? [a priori, why not]
> > > >      Seems like a possible space management nightmare to me
> > > >      (shared space)
> > > >
> > > >    * What happens if in the /home/starreco/reco tree (local, not
> > > >      HPSS) a soft-link is accidentally removed ?? Is it re-created ??
> > > >
> > > >    * I assume the %XX%YY% files are managed, i.e. deleted along
> > > >      the lines of a policy. But following the above question, what
> > > >      happens if a file has disappeared but the soft-link remains ??
> > > >      Is Xrootd smart enough to find it and re-stage ??
> > > >
> > > >    * Is reading a lot of flat files in an identical /data?/ (i.e.
> > > >      files of the form %XX%YY% in a single dir) efficient ??
> > > >      Imagine that we get several 10k files in a single directory,
> > > >      will that cause a performance impact of some sort (directory
> > > >      lookup or otherwise) ??
> > > >
> > > >    * By the way, does "readonly" mean 'cannot open
> > > >      a new file' or really read-only (terminology which seems to
> > > >      be incompatible with writing files into /data*)? I assume
> > > >      the former.
> > > >
> > > >
> > > >    * Now for a controversial question. The stagecmd command accepts,
> > > >      in one of its forms, two arguments:
> > > >         HPSSPathAsInput  localPath
> > > >      If localPath were only a "suggested path" and stagecmd would
> > > >      RETURN to Xrootd (as a string) its final file placement,
> > > >      we could implement our own space management mechanism and
> > > >      policy. I can understand this is not an item for the wish list,
> > > >      but it WOULD have resolved our funny disk mapping and
> > > >      namespace issue [what it would not do is name space reduction
> > > >      at the end to a single namespace, but that is another story].
> > > >      Comments ??
> > > >
> > > > - And Pavel would like to know if someone has an example of the
> > > >    /opt/xrootd/utils/xfrcmd script (to be sent to list or privately).
> > > >
> > > >
> > > > 	Thanks & cheers,
> > > >
> > > >
> > > > Wilko Kroeger wrote:
> > > > > Hello Pavel
> > > > >
> > > > > Let me try to answer your questions.
> > > > >
> > > > > 1) The stage command 'stagecmd':
> > > > >
> > > > > If a file is not on disk, xrootd tries to stage the file onto disk. It
> > > > > calls the stagecmd with two arguments:
> > > > >    stagecmd <remoteFileName> <localFileName>
> > > > > where remoteFileName is the name in HPSS and localFileName is the one on
> > > > > disk. Xrootd forms these names from the file name provided by the client
> > > > > and the two prefixes oss.remoteroot and oss.localroot.
> > > > > For example:
> > > > > the xrootd config file contains
> > > > > oss.remoteroot /fr
> > > > > oss.localroot  /fl
> > > > > a client requests a file like:
> > > > >     xrdcp root://dataserver//my/test.file  test.file
> > > > > in this case the stage command is called:
> > > > >   stagecmd  /fr/my/test.file  /fl/my/test.file
> > > > >
> > > > > If oss.remoteroot and oss.localroot are not specified, the arguments to the
> > > > > stagecmd are just the file name specified by the client.
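The name-forming rule above can be sketched as follows (the helper name is ours; xrootd does this internally):

```python
def stagecmd_args(client_path: str,
                  remoteroot: str = "", localroot: str = "") -> tuple:
    # xrootd prepends oss.remoteroot / oss.localroot (if set) to the
    # client-supplied path to form the two stagecmd arguments; with no
    # prefixes configured, both arguments are just the client path.
    return (remoteroot + client_path, localroot + client_path)

# With oss.remoteroot /fr and oss.localroot /fl, a request for /my/test.file
# yields the invocation: stagecmd /fr/my/test.file /fl/my/test.file
args = stagecmd_args("/my/test.file", remoteroot="/fr", localroot="/fl")
```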
> > > > >
> > > > > As you can see, the files will always be staged onto the same disk if you
> > > > > use oss.localroot. If you have more than one disk on an xrootd server,
> > > > > you want to use the cache system and a stagecmd that is aware of the cache
> > > > > system.
> > > > >
> > > > > Xrootd actually comes with a cache-aware stage command. You can find
> > > > > it in the utils directory of an xrootd release; it is called mps_Stage.
> > > > > I haven't used it myself but I will find out how to use it.
> > > > > The utils dir contains a few mps_XXX utils that are used to manage a cache
> > > > > system. On the xrootd web site there is a document that describes the mps
> > > > > system: http://xrootd.slac.stanford.edu/doc/mps_config/mps_config.htm
> > > > >
> > > > >
> > > > > 2) The cache file system:
> > > > >
> > > > > A file that is staged into a cache file system is physically put in any of
> > > > > the specified caches and a link between this file and the proper file
> > > > > name is created. For example:
> > > > >
> > > > > Let's assume you have the following file:
> > > > >  /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
> > > > > and it is in the cache /data3:
> > > > >
> > > > >
> > > > >>ls -l /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
> > > > >
> > > > > would show:
> > > > > /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root ->
> > > > >  /data3/%home%starreco%reco%productionHigh%FullField%P04ik%2004%f1.root
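The physical name in the cache is derived from the logical path by replacing '/' with '%', as the listing above shows; a minimal sketch (the helper name is ours):

```python
def cache_physical_name(logical_path: str, cache: str = "/data3") -> str:
    # The physical file name in the cache encodes the full logical path,
    # with every '/' replaced by '%' (so the leading '/' becomes '%').
    return cache + "/" + logical_path.replace("/", "%")

name = cache_physical_name(
    "/home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root")
# name matches the symlink target shown in the ls output above
```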
> > > > >
> > > > > As I mentioned above, if you want to use a cache system the stagecmd
> > > > > has to be aware of it.
> > > > >
> > > > > 3) Your setup:
> > > > >
> > > > > In the configuration that you described below you export all the cache
> > > > > file systems (/dataN) due to the olbd.path, and that's maybe something
> > > > > you don't want to do. It also means the client has to access files with
> > > > > the name /data3/home/..., making your system setup visible to the client.
> > > > >
> > > > > Instead, it seems to me that you would like to make files with the name
> > > > > '/home/....' accessible to users, while on the xrootd server these files
> > > > > are stored in /dataN/... .
> > > > >
> > > > > You could configure your system  with:
> > > > >
> > > > > oss.path /home  dread nomig stage
> > > > >
> > > > > oss.cache /data*
> > > > >
> > > > > xrootd.export /home
> > > > >
> > > > > You also have to specify the stagecmd and the mssgwd command. The stage
> > > > > command you obviously need in order to get a file out of HPSS, and the
> > > > > mssgwd command is needed because xrootd first checks whether a file is
> > > > > present in HPSS. If you don't want xrootd to do this you could provide an
> > > > > implementation that returns dummy data.
> > > > > I could provide you with a little dummy script that implements some of the
> > > > > mssgwd commands.
> > > > >
> > > > >
> > > > > I don't know why all your paths are migratable (mig). I suspect that one
> > > > > of the options forces mig on, but I don't know which one. I have to
> > > > > look into this.
> > > > >
> > > > >
> > > > > The
> > > > > oss.path / .....
> > > > > is always there by default.
> > > > >
> > > > > I hope I clarified some of your questions. Let me know if not.
> > > > >
> > > > > Cheers,
> > > > >    Wilko
> > > > >
> > > > >
> > > > >
> > > > > On Wed, 13 Jul 2005, Pavel Jakl wrote:
> > > > >
> > > > >
> > > > >>Hi Wilko,
> > > > >>
> > > > >>many thanks for the advice. With the oss.check directive the client is
> > > > >>redirected to another node and staging of the requested file is started.
> > > > >>I have some other questions; if you can help me, that would be super.
> > > > >>
> > > > >>I want to stage a file from HPSS for example with path like:
> > > > >>/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/st_physics_
> > > > >>5040131_raw_2010003.MuDst.root
> > > > >>
> > > > >>and copy it to /data0, /data1, /data2 or /data3, which I've
> > > > >>specified with the directives:
> > > > >>
> > > > >>oss.path /data0 dread nomig stage
> > > > >>oss.path /data1 dread nomig stage
> > > > >>oss.path /data2 dread nomig stage
> > > > >>oss.path /data3 dread nomig stage
> > > > >>
> > > > >>My question is whether my thoughts are right and a file with HPSS path:
> > > > >>
> > > > >>/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/st_physics_
> > > > >>5040131_raw_2010003.MuDst.root
> > > > >>
> > > > >>will be staged to the local path:
> > > > >>
> > > > >>/data0/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/st_ph
> > > > >>ysics_5040131_raw_2010003.MuDst.root
> > > > >>i.e. that /data0 is the path to be pre-pended to the local path. Or is
> > > > >>there some other directive which can help me to do that ?
> > > > >>
> > > > >>My second question is whether I need to specify the oss.mssgwcmd command
> > > > >>when I don't plan to migrate any files from local disk to HPSS ?
> > > > >>
> > > > >>The third question is about the oss.cache directive. Can I have the
> > > > >>location of the cache in the same directories as the files I export ?
> > > > >>For example, I have all files in the directories
> > > > >>/data0, /data1, /data2 and /data3 (I export them with the xrootd.export
> > > > >>directive); can I then have oss.cache /data* ?
> > > > >>
> > > > >>My fourth problem is with oss.path again. I've specified this:
> > > > >>oss.path /data0 dread nomig stage
> > > > >>oss.path /data1 dread nomig stage
> > > > >>oss.path /data2 dread nomig stage
> > > > >>oss.path /data3 dread nomig stage
> > > > >>
> > > > >>but in the log file there is this:
> > > > >>oss.path /data3 r/o  check dread mig nomkeep nomlock nommap norcreate
> > > > >>stage
> > > > >>oss.path /data2 r/o  check dread mig nomkeep nomlock nommap norcreate
> > > > >>stage
> > > > >>oss.path /data1 r/o  check dread mig nomkeep nomlock nommap norcreate
> > > > >>stage
> > > > >>oss.path /data0 r/o  check dread mig nomkeep nomlock nommap norcreate
> > > > >>stage
> > > > >>
> > > > >>as you can see, with the mig directive, and in addition there is the line:
> > > > >>
> > > > >>oss.path / r/o  check dread mig nomkeep nomlock nommap norcreate stage
> > > > >>
> > > > >>Is this correct ? Because I don't want to stage any files to the path /.
> > > > >>(Or is it needed because / is the parent directory ?)
> > > > >>
> > > > >>Thank you
> > > > >>Pavel
> > > > >>
> > > > >>On Tue, 12 Jul 2005 20:46:20 -0700 (PDT)
> > > > >>Wilko Kroeger <[log in to unmask]> wrote:
> > > > >>
> > > > >>
> > > > >>>Hello Pavel
> > > > >>>
> > > > >>>I suspect that this is a problem with the mss configuration. In the
> > > > >>>dev. version the mss command is ignored if the path is only stage-able
> > > > >>>and  nodread, nocheck, nomig are specified.
> > > > >>>Do you see in the xrdlog file the line
> > > > >>>  mssgwcmd ignored; no MSS paths present.
> > > > >>>from the xrootd startup ?
> > > > >>>
> > > > >>>If this is the case you could add
> > > > >>>oss.check
> > > > >>>to your config which should cause xrootd to use the mssgwcmd command.
> > > > >>>(oss.dread or oss.rcreate should also do the trick). But if you
> > > > >>>specify any of these options xrootd will behave a little bit differently
> > > > >>>from how it is configured now.
> > > > >>>
> > > > >>>The problem has been fixed in CVS and will be in the next release.
> > > > >>>
> > > > >>>
> > > > >>>Cheers,
> > > > >>>   Wilko
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>>On Tue, 12 Jul 2005, Pavel Jakl wrote:
> > > > >>>
> > > > >>>
> > > > >>>>Hi for some time,
> > > > >>>>
> > > > >>>>Firstly, we've installed the latest development version of xrootd on
> > > > >>>>100 nodes (1 redirector + 1 supervisor + 98 dataservers) with
> > > > >>>>support for open load balancing, and everything is working great.
> > > > >>>>I would like to say: brilliant work.
> > > > >>>>
> > > > >>>>On to my problem: I want to enable the MSS interface. We've implemented
> > > > >>>>scripts for performing meta-data operations and staging files from
> > > > >>>>HPSS (directives oss.mssgwcmd, oss.stagecmd). I've inserted
> > > > >>>>everything in the config file for a dataserver and tried to obtain a
> > > > >>>>file from HPSS. In the redirector log I found this error message:
> > > > >>>>
> > > > >>>> 050712 19:35:48 18366 odc_Locate: starlib.10447:13@rcas6230 given
> > > > >>>> error msg 'No servers are available to read the file.' by xrdstar
> > > > >>>> path=/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/s
> > > > >>>> t_physics_5040131_raw_2010003.MuDst.root
> > > > >>>>
> > > > >>>>and found out that the scripts were not called even once (this I
> > > > >>>>know from the debug support of my script). My question is whether I
> > > > >>>>have something wrong in my configuration file, forgot to add something,
> > > > >>>>or skipped something in reading the documentation.
> > > > >>>>
> > > > >>>>I am enclosing a configuration file for dataserver.
> > > > >>>>
> > > > >>>>Thank you for any advice
> > > > >>>>Pavel
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>
> > > >
> > > > --
> > > >               ,,,,,
> > > >              ( o o )
> > > >           --m---U---m--
> > > >               Jerome
> > >
>