XROOTD-L Archives (XROOTD-L@LISTSERV.SLAC.STANFORD.EDU), July 2005

Subject: Re: Problem with enabling MSS interface
From: Pavel Jakl <[log in to unmask]>
Date: Sun, 17 Jul 2005 11:27:45 -0400
Content-Type: multipart/mixed
Parts/Attachments: text/plain (424 lines), dataserver.cf (424 lines)

Hello again Wilko,

thank you for your answers (some of them were very useful for me). Following your advice I tried to set up MPS, because we need to use more than one cache file system.
I have a problem with the mps.basedir directive; let me explain what I want:

I am requesting this file from HPSS:

/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/file1.root

and I want to stage it from HPSS into a cache file system (/data0 or /data1 or /data2 ...).

First, MPS tries to create the directories for the symlink:

/home/starreco/reco ...

but this is a problem, because I am running xrootd as user starlib and I don't have write permission for this directory.
So I decided to add the directive

mps.basedir  /home/starlib

so that every file will be staged (i.e. its symlink into the cache file system will be created) under a path like:

/home/starlib/home/starreco/reco ....

But as I mentioned above, the basedir directive is not working and MPS still wants to stage the file as /home/starreco.
Can you look at my configuration file and see if something is wrong?

Many thanks
Pavel
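
For reference, a minimal sketch of the configuration being described above (directive names are the ones used in this thread; whether mps.basedir is honoured together with the cache file systems is exactly the open question here):

# stage into any of the cache file systems
oss.cache /data*

# intended: create the staging symlinks under a directory writable by user starlib
mps.basedir  /home/starlib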

On Fri, 15 Jul 2005 13:05:43 -0700 (PDT)
Wilko Kroeger <[log in to unmask]> wrote:

> 
> Hello Pavel
> 
> On Fri, 15 Jul 2005, Pavel Jakl wrote:
> 
> > Hello again,
> >
> > 	In the MPS manual and mps_* scripts, there are things like
> >
> > $Config{xfrcmd}    = ($isMPS ? '/opt/xrootd/utils/xfrcmd'
> >                              : '/usr/etc/ooss/pftp_client');
> >
> >   and similar switches for the port, nodes, account name ... while both
> > seem to accept ftp-like commands. Could you please explain the
> > philosophy behind having separate commands for oss/mps (but
> > using the same scripts) and why the ports/nodes differ? This is
> > not entirely clear to us ...
> 
> The ooss scripts are used at SLAC to manage the cache file systems.
> The mps scripts evolved out of the ooss scripts and they are kept
> backwards compatible, meaning that they would work with the ooss scripts
> installed in /usr/etc/ooss. The $isMPS flag basically decides whether the MPS
> scripts that come with xrootd are used or the ooss scripts (installed in
> /usr/etc/ooss). This allows sites that have ooss installed (like SLAC) to
> use the mps scripts.
> 
> >
> > 	Since the stagecmd for oss is pftp and the mps stagecmd is
> > xfrcmd, we are puzzled as to what xfrcmd does in addition
> > to pftp. This does not appear to be explained in the documentation.
> 
> The '/opt/xrootd/utils/xfrcmd' script should do the same thing that
> '/usr/etc/ooss/pftp_client' does for HPSS.
> Different sites use different MSS systems, and even if they use the same
> system the access methods can differ. Therefore the mps scripts were
> written so that an xrootd site can provide its own implementation of
> MSS access without modifying the mps scripts. If you are using pftp to access
> HPSS you could just call it from within /opt/xrootd/utils/xfrcmd.
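
As a rough illustration of that last suggestion, a wrapper of the following shape might work, assuming a pftp client that reads ordinary ftp-style commands from stdin; the host, port, credentials and exact pftp invocation are placeholders, not taken from this thread:

#!/bin/sh
# Hypothetical sketch of /opt/xrootd/utils/xfrcmd wrapping pftp for HPSS.
# Called with the stagecmd convention described below:
#   xfrcmd <remoteFileName> <localFileName>
src="$1"
dst="$2"

# make sure the target directory exists on the local disk/cache
mkdir -p "$(dirname "$dst")" || exit 1

# drive pftp with standard ftp-style commands (adapt to the local HPSS setup)
pftp <<EOF || exit 1
open hpss.example.org 1234
user hpssuser hpsspass
binary
get $src $dst
quit
EOF

exit 0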
> 
> I hope this clarified your issues a little bit.
> 
> Cheers,
>   Wilko
> 
> >
> >
> > Jerome & Pavel
> >
> >
> > On Thu, 14 Jul 2005 20:50:39 -0400
> > Jerome LAURET <[log in to unmask]> wrote:
> >
> > >
> > > 	Hello Wilko (possibly Andy),
> > >
> > > 	Thanks already for your many answers, which clarify
> > > things quite a lot, but we are not entirely sure about some details.
> > > What we are trying to do is the following (re-explained from
> > > the start; it may not be possible, but ..)
> > >
> > > - first, we already have files located as (no typo, note
> > >    the path "massage" here)
> > >
> > >    /data?/starlib/reco/productionHigh/FullField/P04ik/2004/f1.root
> > >
> > >    where ? stands for any number between 0 and 3. Each file
> > >    corresponds to its initial source (and continuing with our
> > >    example)
> > >
> > >    /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
> > >
> > >    so, intrinsically, /data*/starlib/reco/... is equivalent to
> > >    /home/starreco/reco/... (different PFN, same LFN; in our
> > >    scheme the path is slightly modified so that we land back on our
> > >    feet, PFN -> LFN)
> > >
> > >
> > > - Our application takes advantage of already catalogued
> > >    PFNs. We would like to do a smooth transition, as there is
> > >    about 50+ TB of data already placed ... We access
> > >    the files just fine using rootd, referencing them as
> > > root://node//data1/starlib/reco/productionHigh/FullField/P04ik/2004/f1.root
> > >    as the files are a priori known
> > >
> > >
> > > - Xrootd saves us a lot of course, for example, being able
> > >    to reduce the name space to ONLY
> > > root://redirector//home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
> > >
> > >    as if we are accessing HPSS files while Xrootd handles it
> > >    would be great. BUT we would also like to be able to access
> > >    the files the usual/current way, i.e. with a syntax
> > >    indicating //node/data1.
> > >
> > > - For this to happen, Xrootd would need to understand BOTH
> > >    /home/starreco/reco using /data* as cache (for example via MPS)
> > >    AND also be able to access /data*/starlib/reco in direct
> > >    access mode
> > >
> > >    If so, several questions come to mind:
> > >
> > >    /home/starreco would be populated with soft-links to files named
> > >    /data?/%home%starreco%... which share the space with the files
> > >    already there.
> > >
> > >    * Will Xrootd be happy with that? [a priori, why not]
> > >      Seems like a possible space-management nightmare to me
> > >      (shared space)
> > >
> > >    * What happens if, in the /home/starreco/reco tree (local, not
> > >      HPSS), a soft-link is accidentally removed? Is it re-created?
> > >
> > >    * I assume the %XX%YY% files are managed, i.e. deleted along
> > >      the lines of a policy. But following the above question, what
> > >      happens if a file has disappeared but the soft-link remains?
> > >      Is Xrootd smart enough to notice this and re-stage?
> > >
> > >    * Is having a lot of flat files in a single /data?/ (i.e. files
> > >      of the form %XX%YY% in one dir) efficient to read?
> > >      Imagine that we get several tens of thousands of files in a single
> > >      directory; will that cause a performance impact of some sort
> > >      (directory lookup or otherwise)?
> > >
> > >    * By the way, does "readonly" mean 'cannot open
> > >      a new file' or really read-only (the latter seems
> > >      incompatible with writing files into /data*)? I assume
> > >      the former.
> > >
> > >
> > >    * Now for a controversial question. The stagecmd command accepts,
> > >      in one of its forms, two arguments:
> > >         HPSSPathAsInput  localPath
> > >      If localPath were only a "suggested path" and stagecmd would
> > >      RETURN to Xrootd (as a string) the final file placement,
> > >      we could implement our own space management mechanism and
> > >      policy. I can understand this is not an item for the wish list,
> > >      but it WOULD have resolved our funny disk mapping and
> > >      namespace issue [what it would not do is reduce the name space
> > >      at the end to a single namespace, but that is another story].
> > >      Comments?
> > >
> > > - And Pavel would like to know if someone has an example of the
> > >    /opt/xrootd/utils/xfrcmd script (to be sent to the list or privately).
> > >
> > >
> > > 	Thanks & cheers,
> > >
> > >
> > > Wilko Kroeger wrote:
> > > > Hello Pavel
> > > >
> > > > Let me try to answer your questions.
> > > >
> > > > 1) The stage command 'stagecmd':
> > > >
> > > > If a file is not on disk xrootd tries to stage the file onto disk. It
> > > > calls the stagecmd with two arguments:
> > > >    stagecmd <remoteFileName> <localFileName>
> > > > where remoteFileName is the name in HPSS and localFileName is the one on
> > > > disk. Xrootd forms these names from the file name provided by the client
> > > > and the two prefixes oss.remoteroot and oss.localroot.
> > > > For example:
> > > > the xrootd config file contains
> > > > oss.remoteroot /fr
> > > > oss.localroot  /fl
> > > > a client requests a file like:
> > > >     xrdcp root://dataserver//my/test.file  test.file
> > > > in this case the stage command is called:
> > > >   stagecmd  /fr/my/test.file  /fl/my/test.file
> > > >
> > > > If oss.remoteroot and oss.localroot are not specified, the arguments to the
> > > > stagecmd are just the file name specified by the client.
> > > >
> > > > As you can see, the files will always be staged onto the same disk if you
> > > > use oss.localroot. If you have more than one disk on an xrootd server
> > > > you want to use the cache system and a stagecmd that is aware of the cache
> > > > system.
> > > >
> > > > The xrootd actually comes with a cache-aware stage command. You can find
> > > > it in the utils directory of an xrootd release; it is called mps_Stage.
> > > > I haven't used it myself but I will find out how to use it.
> > > > The utils dir contains a few mps_XXX utils that are used to manage a cache
> > > > system. On the xrootd web site there is a document that describes the mps
> > > > system: http://xrootd.slac.stanford.edu/doc/mps_config/mps_config.htm
> > > >
> > > >
> > > > 2) The cache file system:
> > > >
> > > > A file that is staged into a cache file system is physically put into any of
> > > > the specified caches, and a link between this file and the proper file
> > > > name is created. For example:
> > > >
> > > > Let's assume you have the following file:
> > > >  /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
> > > > and it is in the cache /data3:
> > > >
> > > >
> > > >>ls -l /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root
> > > >
> > > > would show:
> > > > /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root ->
> > > >  /data3/%home%starreco%reco%productionHigh%FullField%P04ik%2004%f1.root
> > > >
> > > > As I mentioned above, if you want to use a cache system the stagecmd
> > > > has to be aware of it.
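
Incidentally, from the example above the cache file name is just the logical path with every '/' replaced by '%', prefixed by the chosen cache (e.g. /data3/). A one-liner to compute it, a sketch based on this example rather than one of the mps tools:

echo /home/starreco/reco/productionHigh/FullField/P04ik/2004/f1.root | tr '/' '%'
# -> %home%starreco%reco%productionHigh%FullField%P04ik%2004%f1.root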
> > > >
> > > > 3) Your setup:
> > > >
> > > > In the configuration that you described below you export all the cache
> > > > file systems (/dataN) via olbd.path, and that's maybe something you don't
> > > > want to do. This also means the client has to access files with the
> > > > name /data3/home/..., making your system setup visible to the client.
> > > >
> > > > Instead, it seems to me that you would like to make files with the name
> > > > '/home/....' accessible to users, while on the xrootd server these files are
> > > > stored in /dataN/... .
> > > >
> > > > You could configure your system  with:
> > > >
> > > > oss.path /home  dread nomig stage
> > > >
> > > > oss.cache /data*
> > > >
> > > > xrootd.export /home
> > > >
> > > > You also have to specify the stagecmd and the mssgwd command. The stage
> > > > command you obviously need in order to get a file out of HPSS, and the
> > > > mssgwd command is needed because xrootd first checks whether a file is present
> > > > in HPSS. If you don't want xrootd to do this you could provide an
> > > > implementation that returns dummy data.
> > > > I could provide you with a little dummy script that implements some of the
> > > > mssgwd commands.
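
Putting that suggestion together, the relevant part of a data-server configuration might look like the sketch below (the helper script paths are placeholders; the protocol the mssgwcmd script has to speak is not described in this thread):

# export only the logical /home namespace to clients
xrootd.export /home
oss.path /home dread nomig stage

# physical cache file systems the staged files land in
oss.cache /data*

# site-provided helper scripts (paths are placeholders)
oss.stagecmd /opt/xrootd/utils/xfrcmd
oss.mssgwcmd /opt/xrootd/utils/mssgwcmd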
> > > >
> > > >
> > > > I don't know why all your paths are migratable (mig). I suspect that one of
> > > > the options forces mig to be turned on, but I don't know which one. I have to
> > > > look into this.
> > > >
> > > >
> > > > The
> > > > oss.path / .....
> > > > is always there by default.
> > > >
> > > > I hope I clarified some of your questions. Let me know if not.
> > > >
> > > > Cheers,
> > > >    Wilko
> > > >
> > > >
> > > >
> > > > On Wed, 13 Jul 2005, Pavel Jakl wrote:
> > > >
> > > >
> > > >>Hi Wilko,
> > > >>
> > > >>many thanks for the advice. With the oss.check directive the client is redirected to
> > > >>another node and staging of the requested file is started.
> > > >>I have some other questions; if you can help me, that would be super.
> > > >>
> > > >>I want to stage a file from HPSS for example with path like:
> > > >>/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/st_physics_
> > > >>5040131_raw_2010003.MuDst.root
> > > >>
> > > >>and copy it to /data0, /data1, /data2 or /data3, which I've
> > > >>specified with the directives:
> > > >>
> > > >>oss.path /data0 dread nomig stage
> > > >>oss.path /data1 dread nomig stage
> > > >>oss.path /data2 dread nomig stage
> > > >>oss.path /data3 dread nomig stage
> > > >>
> > > >>My question is whether my thinking is right and a file with the HPSS path:
> > > >>
> > > >>/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/st_physics_
> > > >>5040131_raw_2010003.MuDst.root
> > > >>
> > > >>will be staged to the local path:
> > > >>
> > > >>/data0/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/st_ph
> > > >>ysics_5040131_raw_2010003.MuDst.root
> > > >>i.e. that /data0 is the path to be pre-pended to the local path. Or is there some
> > > >>other directive which can help me do that?
> > > >>
> > > >>My second question is whether I need to specify the oss.mssgwcmd command
> > > >>when I don't plan to migrate any files from local disk to HPSS.
> > > >>
> > > >>The third question is about the oss.cache directive. Can I have the cache located
> > > >>in the same directories as the files I export?
> > > >>For example, I have all the files in the directories
> > > >>/data0, /data1, /data2 and /data3 (which I export with the xrootd.export directive);
> > > >>can I then have oss.cache /data* ?
> > > >>
> > > >>My fourth problem is with oss.path again. I've specified this:
> > > >>oss.path /data0 dread nomig stage
> > > >>oss.path /data1 dread nomig stage
> > > >>oss.path /data2 dread nomig stage
> > > >>oss.path /data3 dread nomig stage
> > > >>
> > > >>but the log file shows this:
> > > >>oss.path /data3 r/o  check dread mig nomkeep nomlock nommap norcreate
> > > >>stage
> > > >>oss.path /data2 r/o  check dread mig nomkeep nomlock nommap norcreate
> > > >>stage
> > > >>oss.path /data1 r/o  check dread mig nomkeep nomlock nommap norcreate
> > > >>stage
> > > >>oss.path /data0 r/o  check dread mig nomkeep nomlock nommap norcreate
> > > >>stage
> > > >>
> > > >>as you can see, with the mig directive, and in addition there is this line:
> > > >>
> > > >>oss.path / r/o  check dread mig nomkeep nomlock nommap norcreate stage
> > > >>
> > > >>Is this correct? Because I don't want to stage any files to the path /. (Or
> > > >>is it needed because / is the parent directory?)
> > > >>
> > > >>Thank you
> > > >>Pavel
> > > >>
> > > >>On Tue, 12 Jul 2005 20:46:20 -0700 (PDT)
> > > >>Wilko Kroeger <[log in to unmask]> wrote:
> > > >>
> > > >>
> > > >>>Hello Pavel
> > > >>>
> > > >>>I suspect that this is a problem with the mss configuration. In the
> > > >>>dev. version the mss command is ignored if the path is only stage-able
> > > >>>and  nodread, nocheck, nomig are specified.
> > > >>>Do you see in the xrdlog file the line
> > > >>>  mssgwcmd ignored; no MSS paths present.
> > > >>>from the xrootd startup ?
> > > >>>
> > > >>>If this is the case you could add
> > > >>>oss.check
> > > >>>to your config which should cause xrootd to use the mssgwcmd command.
> > > >>>(oss.dread or oss.rcreate should also do the trick). But if you
> > > >>>specify any of these options xrootd will behave a little bit differently
> > > >>>from how it is configured now.
> > > >>>
> > > >>>The problem has been fixed in CVS and will be in the next release.
> > > >>>
> > > >>>
> > > >>>Cheers,
> > > >>>   Wilko
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>>On Tue, 12 Jul 2005, Pavel Jakl wrote:
> > > >>>
> > > >>>
> > > >>>>Hi again after some time,
> > > >>>>
> > > >>>>Firstly, we've installed the latest development version of xrootd on
> > > >>>>100 nodes (1 redirector + 1 supervisor + 98 dataservers) with
> > > >>>>support for open load balancing, and everything is working great.
> > > >>>>I would like to say: brilliant work.
> > > >>>>
> > > >>>>Now to my problem: I want to enable the MSS interface. We've implemented
> > > >>>>scripts for performing meta-data operations and staging files from
> > > >>>>HPSS (directives oss.mssgwcmd, oss.stagecmd). I've inserted
> > > >>>>everything into the config file for the dataserver and tried to obtain a file
> > > >>>>from HPSS. In the redirector log I found this error message:
> > > >>>>
> > > >>>> 050712 19:35:48 18366 odc_Locate: starlib.10447:13@rcas6230 given
> > > >>>> error msg 'No servers are available to read the file.' by xrdstar
> > > >>>> path=/home/starreco/reco/productionHigh/FullField/P04ik/2004/040/s
> > > >>>> t_physics_5040131_raw_2010003.MuDst.root
> > > >>>>
> > > >>>>and found out that the scripts were not called even once (this I
> > > >>>>know from the debug support in my scripts). My question is whether I have
> > > >>>>something wrong in my configuration file, forgot something to
> > > >>>>add, or skipped something when reading the documentation.
> > > >>>>
> > > >>>>I am enclosing the configuration file for the dataserver.
> > > >>>>
> > > >>>>Thank you for any advice
> > > >>>>Pavel
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>
> > >
> > > --
> > >               ,,,,,
> > >              ( o o )
> > >           --m---U---m--
> > >               Jerome
> >
