XROOTD-L Archives: XROOTD-L@LISTSERV.SLAC.STANFORD.EDU
September 2004

Subject: Re: xrootd/data management use cases from last year's Lyon workshop
From: Peter Elmer <[log in to unmask]>
Date: Fri, 10 Sep 2004 21:16:32 +0200
Content-Type: text/plain
Parts/Attachments: text/plain (178 lines)

  Hi Andy,

On Fri, Sep 10, 2004 at 11:49:15AM -0700, Andrew Hanushevsky wrote:
> >   I don't feel strongly about it. Andy should comment, though. I can perhaps
> > see that being able to turn on debugging in-situ without restarting the
> > server might have some value in some particular (theoretical) situation where
> > the server has begun to behave strangely.
> Actually, in this case I do agree with Artem. It would be nice if we could
> turn on/off debugging without restarting the servers. There's nothing in
> the code that prevents it. This may have to be relegated to the
> local admin interface as opposed to the remotely accessible one.

  What was the "local admin interface"?
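
  (Mechanically, a restart-free toggle is simple as long as the trace level
is checked at run time; a minimal Python sketch of such a local admin
listener, with the socket path and command names entirely made up:)

    import logging
    import os
    import socket

    SOCK = "/tmp/xrootd-admin.sock"            # hypothetical socket path

    def admin_listener():
        """Run in its own thread; flips the log level on command."""
        if os.path.exists(SOCK):
            os.unlink(SOCK)
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCK)
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            cmd = conn.recv(64).decode().strip()
            if cmd == "debug on":
                logging.getLogger().setLevel(logging.DEBUG)
            elif cmd == "debug off":
                logging.getLogger().setLevel(logging.INFO)
            conn.sendall(b"ok\n")
            conn.close()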

> > > > > Admin should be able to read server's log file(s) remotely, via a call. Log
> > > > >   files include main log file, error messages, trace output, pid file.
> > > >
> > > >   IIRC, in the original discussion we had about this some of us felt that
> > > > this would be useful, but overkill, since other tools could be used.
> > >
> > > I think overkill is to use 'ssh' and 'more' to examine up to 48 log files
> > > on 40 hosts. (48 is xrootd, olb, mlog, plog, slog, Slog x 8).
> Yes, overkill, but you can do that today by simply exporting the
> directories that have the log files.

  I assume you mean NFS export, here? (As opposed to xrootd export ;-) It 
wasn't clear to me that people wanted to do that for /var partitions on the
data movers, but it would eliminate Artem's problem.
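
  (For reference, a read-only export of a log directory would be a single
line in /etc/exports on each data server; the path and the admin host name
below are made up:)

    /var/adm/xrootd    admin-host(ro)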

> >   The whole thing starts to get rather ugly. In the end you will always have
> > some other log file (Ganglia?) that you want to look at which is on the
> > data server itself. This is a general problem so there must be some tool out
> > there to harvest or display log files like this. If not, presumably one can
> > make something simple, as you have undoubtedly already done.
> >
> >   (i.e. you can try to convince Andy, but I won't help you do it for this
> >    one...)
> Harvesting log files is a good thing and SCS does this for some of their
> own stuff. It's a pain even though the script is relatively easy to write.
> The idea is to collect all the log files, insert the host name between the
> timestamp and the text, and then sort the composite to get a full overview
> of events. Sometimes that's useful, mostly not.

  Everything was fine until you tried to "composite" them.
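
  (A minimal Python sketch of that composite; the collection layout and the
two-field "YYMMDD HH:MM:SS" timestamp format are assumptions:)

    import glob

    merged = []
    for path in glob.glob("/collected-logs/*/xrootd.log"):  # hypothetical layout
        host = path.split("/")[2]          # host name taken from the directory
        with open(path) as f:
            for line in f:
                parts = line.split(" ", 2) # non-timestamped lines are dropped
                if len(parts) == 3:
                    date, time, text = parts
                    merged.append(f"{date} {time} {host} {text}")

    merged.sort()                          # fixed-width stamps sort lexically
    print("".join(merged), end="")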

> >    o Doubling the size of the disk cache and putting more than one copy of
> >      every file on different servers in the disk cache (ok, I'm joking).
> Actually, this is a very interesting idea and theoretically significant.
> Suppose we had a dynamic mirroring system. When a server opens a file, it
> copies it to some other server with a retention period. That way, when that
> server goes down, any active uses of the file will be redirected to the
> server that has the file (which may or may not choose to
> replicate it as well). This handily solves access to actively used files.
> Unfortunately, it does nothing for future access to files that reside only
> on the server that has gone down.

  Well, this is just part of the general issue of how a file is replicated
when the load on one server is deemed too high. Bringing it in from the MSS
or another site might be overkill, but will always work at the expense of
some (probably not excessive) delay for the client.

  Copying the file to a 2nd server every time one is opened sounds like a 
lot of extra traffic. At a minimum it would double the I/O out of the data
servers (and if the implementation wasn't done in a sensible way it could
do more than that). 

  If the server has really/truly just gone down (something that shouldn't
happen often) it is probably okay to simply delay the client. If it is a load
issue, you can probably invent other ways to force the file replication only
when it matters (i.e. when some server is over the load threshold). Anybody
for a _2nd_ xrootd/olb system running on the same servers (without the
load threshold) dedicated just to file replication?
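
  (The decision logic for threshold-only replication is tiny; a Python
sketch, where the threshold value, the target picker and the copy mechanism
are all stand-ins:)

    LOAD_THRESHOLD = 0.8                      # made-up number

    def maybe_replicate(server, path, load, pick_target, copy_file):
        """Replicate `path` off `server` only when its load crosses the
        threshold, rather than on every open."""
        if load <= LOAD_THRESHOLD:
            return None                       # common case: no extra traffic
        target = pick_target(server)          # choose a lightly loaded peer
        copy_file(server, target, path)       # stage the second copy there
        return target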

> > > The ultimate goal of remote administration is not to log into any of the
> > > servers at all. If you need to log in to check what the config file is,
> > > this complicates your life.
> Again, simply export /opt/xrootd/etc

  Doesn't solve the problem, see next point. Also, exporting /opt (presumably
via NFS, right?) from all data servers is somewhat ugly.

> > Simply looking at the config file on disk won't
> > always tell you with what configuration the server was started as it may
> > have been overwritten with a newer version of the config file since the
> > server was started. You _might_ be able to backtrack through the log files
> > to when the server was started to look at the printout, but if it has
> > been running for many days that might not be so easy. (And in fact the
> > log files could even have been purged.) Andy should comment.
> Quite true. I can modify the start-up script to copy the config file that
> the server was started with under the name, say, "<cfn>.active" to get
> around the moving config file problem. What do people think?

  I really dislike these little "turd" files (feels like VMS). Can't it just
keep a full copy in memory when the config is first read in and allow that to
be dumped via the administrative interface?
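
  (A sketch of what I mean, with all the names made up:)

    class Server:
        def __init__(self, config_path):
            with open(config_path) as f:
                self._config_text = f.read()   # snapshot taken once, at startup
            self._apply(self._config_text)

        def admin_dumpconfig(self):
            """Return the configuration the server actually started with,
            even if the on-disk file has been overwritten since."""
            return self._config_text

        def _apply(self, text):
            pass                               # parse and apply directives here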

> > > > > Admin should be able to give a signal to dlb to rescan file system for
> > > > >   new/gone files.
> > > >
> > > >   The olb (once known as the "dlb") doesn't maintain any state on the data
> > > > servers, or have I misunderstood? I'm not sure what this means. The manager
> > > > olbd obviously does have a cache, but as I understand it, it also times
> > > > out entries older than 8 hours.
> > >
> > > So the proposal is to make it admin-induced in addition.
> By its intended use, the olbd is very resistant to forced additions. So,
> that's out of the question. I could see the need for a forced purge of the
> cache (either server-specific or complete). Of course, selective restarts
> would accomplish the same thing at lower cost.

  If a data server olbd goes away and then comes back, does the manager
olbd clear all entries in its cache from that server olbd?

> >   What exactly would the admin be trying to achieve after having removed
> > a file? If it is simply to save clients some time going through the
> > (ask/be-redirected/not-there/go-back-to-redirector-to-ask-and-refresh)
> > cycle, that could be perhaps be useful, but isn't critical as the client
> > will do the right thing. In practice you probably want some way to something
> > like kXR_refresh without actually opening the file (Kind of like a
> > kXR_forget...) The next client that comes in will then actually trigger
> > the system to find the file. Andy?
> You sort of have that since a server can always send a "file gone"
> notification to the olbd. We have the interface (even a command line
> script) to do this and it will be (albeit slowly) integrated into the MPS.
> However, Pete is right, all of these types of enhancements simply reduce
> the client overhead but really are not specifically needed for correct
> operation.

  Actually, what happens currently when a file is purged off disk because of 
the staging? (i.e. because space was needed to stage in some other file) I 
guess it is unlikely that this will happen within 8 hours of a file being
used, but theoretically what happens? The next client (assuming that is also
within the 8 hours) is redirected to where it was, doesn't find it and causes 
the refresh? i.e. the purging system doesn't actually propagate any info to the 
olbd cache? (Which would be fine, the system is robust without that...)

> > > No. A stopped xrootd needs to tell clients to come back in a certain time
> > > that the admin specifies (remotely). For example, if unix-admins need to
> > > reboot a machine to apply a patch, we'd rather have clients wait for 10
> > > minutes rather than redirecting them and restaging a file somewhere else.
> > > So we'd need to tell the redirector to hold those clients who need to access
> > > that host for 10 minutes.
> That actually happens now. When a server goes down, clients will be told
> to wait up to 10 minutes if that server is the only server available to
> serve the requested data. Of course, that's relevant for files we know
> about at the time the request is made. The only way you can prevent
> restaging the data elsewhere in all circumstances is to kill the manager
> olb's during the 10-minute outage (or perhaps have an admin function that
> says to hold all requests for 10 minutes).

   This would be an admin function to the xrootd on the redirector, correct?
Would it not be rather "hold all requests which would be redirected to 
server X for 10 minutes"?
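
  (The bookkeeping for such a per-server hold would be trivial; a Python
sketch, all names hypothetical:)

    import time

    holds = {}                                # host -> hold-until timestamp

    def admin_hold(host, minutes=10):
        """Admin function: hold requests bound for `host`."""
        holds[host] = time.time() + minutes * 60

    def defer_seconds(host):
        """Seconds a client should be told to wait; 0 if no hold is active."""
        return max(0, int(holds.get(host, 0) - time.time()))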

> > > I am talking about a system that reacts to errors, not one that just
> > > monitors them. So what if a filesystem went down at 2 am - I am less
> > > interested in receiving an alarm; I'd rather see xrootd reconfigured and
> > > restarted to avoid using that filesystem. Another example: a user job
> > > crashes and gives the message "file not existent". This leaves the user
> > > wondering why. If xrootd could not only print this error to the client
> > > and its log, but also pass it to some intelligent error-processing
> > > system, such a system could attempt to find out why the file is missing
> > > and send the user a more detailed explanation and suggestions.
> Typically, all of this is handled by an external monitoring system with
> scriptable alarms. It's standard industry practice now.

  Saying that something is "standard industry practice" doesn't convince me
of anything, of course, but obviously I agree with this particular statement...
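
  (As a sketch of the reactive half, an external watcher with a scriptable
action; the log path, the error pattern and the two site scripts below are
all inventions:)

    import re
    import subprocess

    LOG = "/var/adm/xrootd/xrootd.log"         # hypothetical log location

    def react_to_fs_errors():
        bad_fs = set()
        with open(LOG) as f:
            for line in f:
                m = re.search(r"I/O error .* on (/data/\d+)", line)  # assumed format
                if m:
                    bad_fs.add(m.group(1))
        for fs in bad_fs:
            # site-specific script that drops the failed filesystem from the config
            subprocess.run(["/opt/xrootd/bin/drop_fs.sh", fs], check=True)
        if bad_fs:
            # restart so the new configuration takes effect
            subprocess.run(["/opt/xrootd/bin/restart_xrootd.sh"], check=True)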

                                   Pete

-------------------------------------------------------------------------
Peter Elmer     E-mail: [log in to unmask]      Phone: +41 (22) 767-4644
Address: CERN Division PPE, Bat. 32 2C-14, CH-1211 Geneva 23, Switzerland
-------------------------------------------------------------------------

