Hi, Pete!

I am commenting below, and have removed everything that is already clear or
irrelevant to xrootd/olb.

> > Admin should be able to stop the server remotely (a la oostopams).
>
>   Presumably needs the administrative interface. Hopefully we can (finally)
> provide this in the near future.
>

> > Admin should be able to audit server's (get it's state and debug info)
> >   remotely, i.e. via a call to the server.
>
> > Admin should be able to turn on debugging remotely, i.e via a call to the
> >   server.
>
>   Since the server can be restarted easily without crashing the clients,
> the server could also be restarted with extra options in the config file.
> I don't feel strongly that this has to be possible via the administrative
> interface and don't even know how easy it would be to add this. Andy?

Config files are tailor-controlled, which means you have to run tailor or
wait for tailor's scheduled run to update the config, and then restart.
That seems like a hassle; otherwise, drop this use case.

> > Admin should be able to read server's log file(s) remotely, via a call. Log
> >   files include main log file, error messages, trace output, pid file.
>
>   IIRC, in the original discussion we had about this some of us felt that
> this would be useful, but overkill, since other tools could be used.

I think the real overkill is using 'ssh' and 'more' to examine up to 48 log
files on 40 hosts. (48 is xrootd, olb, mlog, plog, slog, Slog x 8.)

> > Server should be able to log its host's and its process's cpu and memory
> >   utilization and other useful parameters.
>
>   At the time of the workshop SLAC had no useful system monitoring (despite
> having tens of millions of dollars of equipment). Since that time Yemi has
> deployed the Ganglia monitoring at SLAC. Using some external agent like
> Ganglia to monitor things like cpu and memory usage seems a better structure
> (and is what we have at SLAC and other places). Artem, are you happy with
> that?  (i.e. with no features in xrootd itself for this)

We don't have programmatic access to the monitoring info, and therefore
cannot manipulate all the figures at will. Also, other users of xrootd don't
use Ganglia.

> > Admin should be able to dump/load server's configuration remotely.
> > ===>24/7 availability is essential. Stopping 1000+ clients for some simple
> >   tasks like reconfiguration is bad bad bad.
>
>   Well, I think we've done pretty well at SLAC in terms of 24/7 availability.
> (CNAF in particular seems to have problems, though.)
>
>   I'm not so clear on why it is useful to dump the configuration remotely.
> Artem, do you still feel strongly about this?

The ultimate goal of remote administration is not to log into any of the kan
servers at all. If you need to log in just to check what the config file
says, that complicates your life.

> > Admin should be able to give a signal to dlb to rescan file system for
> >   new/gone files.
>
>   The olb (once known as the "dlb") doesn't maintain any state on the data
> servers, or have I misunderstood? I'm not sure what this means. The manager
> olbd obviously does have a cache, but as I understand it, it also times
> out entries older than 8 hours.

So the proposal is to allow the rescan to be triggered by an admin as well,
in addition to the timeout.

> > Admin wants to disable access to certain data sets, should the need
> >   arise. This means tcl files should not be generated for user jobs. ??Other
> >   ways to prevent users from accessing some data?? A la inhibit?? Via Xrootd??


>   o Is it possible to prevent access to classes of files?
>
>   Clearly specific portions of the name space can be exported while others
> excluded, but that is a very high granularity thing and dependent a bit
> on how people are constructing their name space. For example BaBar has file
> name spaces like:
>
>    /store/PR/R12/AllEvents/...
>    /store/SPskims/R14/14.4.0d/BSemiExcl/....
>
> so one could inhibit things like "/store/SPskims/R14" (release 14 Simulation
> Production skims) or "/store/PR", but other things like individual skims are
> more complicated. [I don't know why the release, 14.4.0d, was put before
> the skim name (BSemiExcl).]

It may be sufficient to do this in the xrootd config, but again, we need a
better means of propagating config files to xrootd, preferably without a
restart.
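
To be concrete, I mean something like the fragment below. This is only a
sketch: the directive spelling (all.export) is taken from current xrootd
documentation and may differ in the version we run, and the paths are just
the BaBar examples above.

    # Export only the parts of the name space users are allowed to touch;
    # anything not exported is effectively inhibited.
    all.export /store/PR
    # /store/SPskims/R14 deliberately not exported while it is inhibited

Even a fragment this small still has to be pushed out to every data server,
which is the propagation problem above.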

>   In practice, however, we've not found this necessary in BaBar. Artem, what
> was the use case for this in the past?

Inhibiting federations for maintenance, and preventing users from running on
bad data.

> > DLB should be dynamically and remotely configured not to redirect requests to
> >   specific hosts, either forever or for specified time.
>
>   I think this is just done by stopping the xrootd on the affected machines.
> The olb can be configured not to accept requests from the manager if there
> is no xrootd running. Is that sufficient?

No. A stopped xrootd needs to tell clients to come back after a time that
the admin specifies (remotely). For example, if the unix admins need to
reboot a machine to apply a patch, we'd rather have clients wait for 10
minutes than have them redirected and the file restaged somewhere else.
So we'd need to tell the redirector to hold the clients that need to access
that host for 10 minutes.
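
Roughly, the redirector-side logic I have in mind is something like this
(a sketch only; host_state, send_wait and redirect are made-up names, not
actual xrootd/olb internals, and the hold time would come from the remote
admin command):

    #include <time.h>

    struct host_state {
        const char *name;
        time_t held_until;          /* 0 means the host is not held */
    };

    /* Placeholders for whatever the redirector really uses. */
    extern void send_wait(int client, int seconds);
    extern void redirect(int client, struct host_state *host);

    void route_request(int client, struct host_state *host)
    {
        time_t now = time(NULL);
        if (host->held_until > now)
            /* e.g. a 10-minute reboot window: tell the client to retry later */
            send_wait(client, (int)(host->held_until - now));
        else
            redirect(client, host);
    }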

> > Xrootd should not stop working if hpss goes down.
>
>   Since it was just announced that HPSS is unavailable about 10 minutes before
> I got to writing these lines, we'll see how this goes. I'm not sure we've yet
> really gone through an extended HPSS outage, so we'll presumably learn some
> things this time.

If xrootd needs to stage in a file and gets an error other than "file does
not exist", it should handle this gracefully, holding the client for some
time (externally (and remotely! and dynamically!) configured).
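
In other words, something along these lines on the server side (a sketch
only; stage_from_mss, send_wait and send_error are placeholder names, and
the retry delay would be read from the remotely changeable config):

    #include <errno.h>

    extern int  stage_from_mss(const char *path);   /* 0 on success, -errno on failure */
    extern void send_wait(int client, int seconds);
    extern void send_error(int client, const char *msg);

    int hpss_retry_seconds = 600;    /* admin-configurable hold time */

    void handle_stage(int client, const char *path)
    {
        int rc = stage_from_mss(path);
        if (rc == 0)
            return;                                  /* staged, carry on with the open */
        if (rc == -ENOENT)
            send_error(client, "file does not exist");
        else
            send_wait(client, hpss_retry_seconds);   /* HPSS down etc.: hold the client */
    }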

> > When a data host is down, xrootd should automatically avoid this host. It
> >   should report to administrator, via some messaging mechanism,
> >   that a host is down.
>
>   I'm not sure what you meant by "avoid" here, but if a host (i.e. a data
> server) is down its olbd will not be subscribed to the manager and it will
> effectively be ignored.

That's what I meant.

>   As to the "reporting to the administrator" part of this use case, we
> decided to make the "alarm" mechanism external to the xrootd system. This
> should be handled by something else (e.g. like alarms with Ganglia, say).
> Artem, is that sufficient?

Maybe, but the idea is that xrootd will detect error conditions
immediately, while any external system will need some time. When xrootd
detects an error condition, it can react by adjusting something, e.g.
turning itself off, while an external system will merely notify someone.
Besides, we don't really have any kind of alarm system. This is actually a
good thing to brainstorm about, since we need to clearly define what we
monitor and react to, and what the unix admins' responsibilities are. I've
always wondered why we have to tell the unix admins about dead hosts and
file systems, and not vice versa.

> > When a file system on a host crashes, xrootd should automatically recover.
> >   It should report, that FS is down.
>
>   How is it supposed to recognize that there is a problem with the filesystem?

It gets a distinct return code from open, seek, read, etc.
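
For example (a sketch only; the errno classification below is my assumption
about what counts as "filesystem trouble", not xrootd's actual policy):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Distinguish "the file really isn't there" from "the filesystem itself
     * is in trouble" by looking at errno after a failed open(). */
    int classify_open_failure(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd >= 0) { close(fd); return 0; }        /* no failure at all */

        switch (errno) {
        case ENOENT:
        case ENOTDIR:
            return 1;                                /* genuinely missing */
        case EIO:
        case ENODEV:
        case ESTALE:
            fprintf(stderr, "filesystem trouble on %s: %s\n",
                    path, strerror(errno));
            return 2;                                /* report the FS as down */
        default:
            return 3;                                /* something else */
        }
    }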

> > DLB should be checking the xrootd "health" of a data server and its
> >   filesystems as part of the load measure. Should report if it finds something
> >   wrong. Anything that prevents xrootd or dlb from doing its job, like
> >   network problems, afs troubles, should be reported.
>
>   Again, it isn't clear to me what exactly should be monitored. Can you
> give examples?

If one file system is performing significantly worse than another, this
should be noticed. Just recently the disk on bbr-xfer05 was very slow; no
one would have noticed, except that Remi did, and I don't know how.
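
The kind of check I mean could be as simple as the sketch below: time a
small read on each served filesystem and flag any one that is much slower
than its peers. The probe file, buffer size and threshold are made up for
illustration.

    #include <fcntl.h>
    #include <time.h>
    #include <unistd.h>

    /* Seconds taken to read 64 KB from a probe file on one filesystem;
     * a negative result means the filesystem could not even be opened. */
    double probe_read_seconds(const char *probe_file)
    {
        char buf[64 * 1024];
        struct timespec t0, t1;
        int fd = open(probe_file, O_RDONLY);
        if (fd < 0)
            return -1.0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        read(fd, buf, sizeof buf);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

Comparing this number across the filesystems a server exports might have
flagged the bbr-xfer05 disk without anyone logging in.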

>   Artem, do you agree that this isn't the job of the xrootd system itself, but
> of something like Ganglia? (Or whatever, something designed to do monitoring
> and alarms of systems.) We shouldn't reinvent that wheel.

I don't care what does the monitoring and alarming. I gave you some
reasoning for close-coupling it with xrootd. Again: if there is an
application that provides data access, it should monitor data-access-related
performance and raise an alarm when data access has problems. Note again
that we don't have any even halfway convenient, let alone sophisticated,
alarm system.

> > Reporting: dlb should be able to send messages to some other application
> >   for further error handling.
> > ===> Reporting error conditions in a timely manner is essential. It doesn't
> >   make a lot of sense to build another monitoring system, if dlb is
> >   already doing so.

>   I disagree. A real (complete, full) monitoring system should be used. The
> olbd (once called "dlb") does a very limited set of things as part of its
> load balancing job.

I am talking about a system that reacts to errors, not one that merely
monitors them. So what if a filesystem went down at 2 am: I am less
interested in receiving an alarm; I'd rather see xrootd reconfigured and
restarted to avoid using that filesystem. Another example: a user job
crashes with the message "file does not exist". This leaves the user
wondering why. If xrootd could not only print this error to the client and
its log, but also pass it to some intelligent error-processing system, that
system could attempt to find out why the file is missing and send the user
a more detailed explanation and suggestions.

> > Testing usecase
> > ---------------
> >
> > Dlb sensors should be able to simulate various load conditions on a host in
> > order to test its functionality.
>
>   I'm not sure how we simulate load conditions _internal_ to the olbd in a
> way that isn't so artificial that it doesn't really test anything.
> What you want could however presumably be accomplished by providing dummy
> scripts to the "olb.perf" directive.

So, the scripts should be able to simulate the load.
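
A dummy perf program could then be as simple as the sketch below. I am
assuming here that the program handed to olb.perf periodically writes its
load figure(s) to standard out; check the olb documentation for the exact
format it expects.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Fake load reporter: pretend the host is as busy (0-100) as the first
     * argument says, reporting every <interval> seconds. */
    int main(int argc, char **argv)
    {
        int load     = (argc > 1) ? atoi(argv[1]) : 90;
        int interval = (argc > 2) ? atoi(argv[2]) : 60;

        for (;;) {
            printf("%d\n", load);     /* output format is an assumption */
            fflush(stdout);
            sleep(interval);
        }
    }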


> > Admin wants to do most of the file operations remotely, i.e. without logging
> > into a data server.
>
>   This requires the administrative interface, which is currently lacking. We
> should just add this.

> > Admin wants to list files with attributes, i.e. a,m,c- times, file size,
> > full path, file permissions,
> >
> > Admin wants to check whether a file is on disk or in hpss.
> > Admin wants to check whether file is backed up or needs back up.
> > Admin wants to pre-stage a file from hpss into disk cache, with confirmation
> > Admin wants to migrate a file to hpss, with confirmation.
> > Admin wants to remove a file from disk cache.
> > Admin wants to copy/relocate a file to another disk cache.
> > Admin wants to change file permissions.
> > Admin wants to pin files on disk for specified time.
> >
> > Admin wants to combine some operations into one, like: migrate+remove,
> > migrate+copy, stage+chmod,
>
>   Some of the above are presumably core functions of the administrative
> interface or of XTNetAdmin (or successor). The HPSS ones may or may not be...

Artem.