Hi Pete,

>   I don't feel strongly about it. Andy should comment, though. I can perhaps
> see that being able to turn on debugging in-situ without restarting the
> server might have some value in some particular (theoretical) situation where
> the server has begun to behave strangely.
Actually, in this case I do agree with Artem. It would be nice if we could
turn on/off debugging without restarting the servers. There's nothing in
the code that prevents it. This may have to be relegated to the
local admin interface as opposed to the remotely accessible one.
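
As a rough illustration of the kind of in-situ toggle I mean (a sketch
only, not the actual xrootd admin interface; the signal choice and flag
name here are hypothetical):

    import signal

    # Hypothetical sketch: flip a global debug flag on SIGUSR1 so that
    # tracing can be turned on/off without restarting the server.
    DEBUG = False

    def toggle_debug(signum, frame):
        global DEBUG
        DEBUG = not DEBUG
        print("debug tracing", "on" if DEBUG else "off")

    signal.signal(signal.SIGUSR1, toggle_debug)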

> > > > Admin should be able to read server's log file(s) remotely, via a call. Log
> > > >   files include main log file, error messages, trace output, pid file.
> > >
> > >   IIRC, in the original discussion we had about this some of us felt that
> > > this would be useful, but overkill, since other tools could be used.
> >
> > I think overkill is to use 'ssh' and 'more' to examine up to 48 log files
> > on 40 hosts. (48 is xrootd, olb, mlog, plog, slog, Slog x 8).
Yes, overkill, but you can do that today by simply exporting the
directories that have the log files.

>    o The administrative interface will connect to the xrootd (and not the
>      olbd). I don't know if the olbd protocol would support providing _its_
>      log file to xrootd, etc. etc.
Actually, that may or may not be true. We're still in the discussion phase
for this. It's not clear what we want to do architecturally to get
olbd-specific information.

>   The whole thing starts to get rather ugly. In the end you will always have
> some other log file (Ganglia?) that you want to look at which is on the
> data server itself. This is a general problem so there must be some tool out
> there to harvest or display log files like this. If not, presumably one can
> make something simple, as you have undoubtedly already done.
>
>   (i.e. you can try to convince Andy, but I won't help you do it for this
>    one...)
Harvesting log files is a good thing and SCS does this for some of their
own stuff. It's a pain even though the script is relatively easy to write.
The idea is to collect all the log files, insert the host name between the
timestamp and the text, and then sort the composite to get a full overview
of events. Sometimes that's useful, mostly not.
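
The whole thing boils down to something like this (a rough sketch,
assuming each log line starts with a timestamp; the path pattern is made
up):

    import glob, os

    # Sketch: merge per-host log files, inserting the host name between
    # the timestamp and the message text, then sort the composite.
    def harvest(pattern="/export/logs/*/xrootd.log"):
        merged = []
        for path in glob.glob(pattern):
            host = os.path.basename(os.path.dirname(path))
            with open(path) as f:
                for line in f:
                    stamp, _, text = line.rstrip().partition(" ")
                    merged.append((stamp, host, text))
        for stamp, host, text in sorted(merged):
            print(stamp, host, text)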

>    o Doubling the size of the disk cache and putting more than one copy of
>      every file on different servers in the disk cache (ok, I'm joking).
Actually, this is a very interesting idea and theoretically significant.
Suppose we had a dynamic mirroring system. When a server opens a file, it
copies it to some other server with a retention period. That way, when
that server goes down, any clients actively using the file will be
redirected to the server that has the copy (which may or may not choose to
replicate it as well). This handily solves access to actively used files.
Unfortunately, it does nothing for future access to files that reside only
on the server that has gone down.
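
In toy form, the open path would look something like this (purely a
sketch of the idea; none of these classes exist in the code):

    import time

    class Server:
        def __init__(self, name):
            self.name = name
            self.files = {}      # path -> copy expiry (None = permanent)

    # Copy-on-open mirroring: replicate the file to a second server with
    # a retention period, so active clients can be redirected if the
    # primary goes down before the copy expires.
    def open_file(servers, path, retention=3600):
        primary = next(s for s in servers if path in s.files)
        mirror = next(s for s in servers if s is not primary)
        mirror.files[path] = time.time() + retention
        return primary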

> > The ultimate goal of remote administration is not to log into any of the
> > servers at all. If you need to log in to check what the config file is,
> > this complicates your life.
Again, simply export /opt/xrootd/etc.

> Simply looking at the config file on disk won't
> always tell you with what configuration the server was started as it may
> have been overwritten with a newer version of the config file since the
> server was started. You _might_ be able to backtrack through the log files
> to when the server was started to look at the printout, but if it has
> been running for many days that might not be so easy. (And in fact the
> log files could even have been purged.) Andy should comment.
Quite true. I can modify the start-up script to copy the config file that
the server was started with under the name, say, "<cfn>.active" to get
around the moving config file problem. What do people think?
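
Something along these lines in the start-up script (a sketch of the
proposal above; the path is illustrative):

    import shutil, sys

    # Preserve the exact config the server was started with, so that
    # "<cfn>.active" reflects the running server even if the original
    # file is edited later.
    def preserve_config(cfn):
        shutil.copy2(cfn, cfn + ".active")

    if __name__ == "__main__":
        preserve_config(sys.argv[1])   # e.g., /opt/xrootd/etc/xrootd.cf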

> > > > Admin should be able to give a signal to dlb to rescan file system for
> > > >   new/gone files.
> > >
> > >   The olb (once known as the "dlb") doesn't maintain any state on the data
> > > servers, or have I misunderstood? I'm not sure what this means. The manager
> > > olbd obviously does have a cache, but as I understand it it also times
> > > out entries older than 8 hours.
> >
> > So the proposal is to make it admin-induced in addition.
The olbd, by design, is very resistant to forced additions. So, that's out
of the question. I could see the need for a forced purge of the cache
(either server-specific or complete). Of course, selective restarts would
accomplish the same thing at lower cost.

>   What exactly would the admin be trying to achieve after having removed
> a file? If it is simply to save clients some time going through the
> (ask/be-redirected/not-there/go-back-to-redirector-to-ask-and-refresh)
> cycle, that could perhaps be useful, but isn't critical as the client
> will do the right thing. In practice you probably want some way to do something
> like kXR_refresh without actually opening the file (kind of like a
> kXR_forget...) The next client that comes in will then actually trigger
> the system to find the file. Andy?
You sort of have that since a server can always send a "file gone"
notification to the olbd. We have the interface (even a command line
script) to do this and it will be (albeit slowly) integrated into the MPS.
However, Pete is right, all of these types of enhancements simply reduce
the client overhead but really are not specifically needed for correct
operation.

> > No. Stopped xrootd needs to tell clients to come back in a certain time
> > that admin specifies (remotely). For example, if unix-admins need to
> > reboot a machine to apply a patch, we'd rather have clients wait for 10
> > minutes rather than redirecting them and restaging a file somewhere else.
> > So we'd need to tell the redirector to hold those clients who need to
> > access that host for 10 minutes.
That actually happens now. When a server goes down, clients will be told
to wait up to 10 minutes if that server is the only server available to
serve the requested data. Of course, that's relevant for files we know
about at the time the request is made. The only way you can prevent
restaging the data elsewhere in all circumstances is to kill the manager
olbds during the 10 minute outage (or perhaps have an admin function that
says to hold all requests for 10 minutes).
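
In sketch form, the redirector's decision amounts to this (an
illustrative model of the behavior described, not the actual olbd code):

    # Route a request for a file; hold the client if the only server(s)
    # holding the file are down, instead of restaging elsewhere.
    def route(path, servers, holdoff=600):
        holders = [s for s in servers if path in s.files]
        alive = [s for s in holders if s.alive]
        if alive:
            return ("redirect", alive[0])
        if holders:
            return ("wait", holdoff)   # tell the client to retry later
        return ("stage", None)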

> > >   How is it supposed to recognize that there is a problem with the filesystem?
> >
> > it gets distinct return code from open, seek, read etc.
>
>   Could you be more specific about which return codes it would get to know
> that it is a "filesystem problem"? (Which I read as "hardware problem", so
> perhaps you should be more specific about which filesystem problems you
> mean.)
Unfortunately, not always. Sometimes it just hangs. We can recover from
those conditions that are easily recognizable (i.e., not mounted and I/O
error). It's not clear, though, what you want to do at that point. The
current strategy is for the olbd to claim the file is no longer on the
server. That has potential pitfalls, as the devil is in the details here.
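
The "easily recognizable" cases amount to checking the error code on the
failed operation, roughly (a sketch; the errno values are the standard
POSIX ones):

    import errno

    # Sketch: classify a failed access. EIO (I/O error) and ENOENT on a
    # path that should exist (e.g., an unmounted filesystem) are the
    # recognizable cases; a hang raises nothing, so it can't be caught
    # this way.
    def filesystem_fault(path):
        try:
            with open(path, "rb") as f:
                f.read(1)
        except OSError as e:
            return e.errno in (errno.EIO, errno.ENOENT)
        return False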

> > I am talking about a system that reacts to errors, not one that monitors
> > them. So what if a filesystem went down at 2 am - I am less interested in
> > receiving an alarm, but I'd rather see xrootd reconfigured and restarted
> > to avoid using that filesystem. Another example: a user job crashes and
> > gives the message "file not existent". This leaves the user wondering
> > why. If xrootd could not only print this error to the client and its log,
> > but pass it to some intelligent error processing system, such a system
> > could attempt to find out why the file is missing and send the user a
> > more detailed explanation and suggestions.
Typically, all of this is handled by an external monitoring system with
scriptable alarms. It's standard industry practice now.

Andy