Hi Artem,

On Wed, Mar 02, 2005 at 05:06:00PM -0800, Artem Trunov wrote:
> Your proposal (d) is indeed extensively flexible and covers all feasible
> use cases, but is probably too complicated, because it requires developing
> some messaging between SRM and the xrootd olb.

I think the complexity in that case comes mostly from the fact that an SRM
implementation is needed for the backend. That is why solutions (a) and (b)
are attractive for some set of people: they are relatively simple to set up
and cover a large fraction of the things people want to do in a lightweight,
scalable manner.

> For example, this can be pretty inconvenient to implement:
>
>   - when SRM updates a file it must tell xrootd to invalidate that
>     entry in its redirector cache (and SRM must update all replicas)
>
> since it requires hacking/wrapping SRM tools. It just makes a cyclic
> dependency: xrootd uses SRM, SRM uses xrootd.

Actually this one is (I think) pretty simple, since the task of doing the
work (i.e. managing the cache) is delegated to SRM. My guess is that for
"update file" it could just do the following (or equivalent):

  o remove all existing replicas
  o send the xrootd redirector a kX_Refresh
  o put the updated file in place of the previous replicas

There is no need for synchronization of activities between the two pieces,
just a simple message to xrootd. The interactions can (intentionally) be
made to be zero for reads when the file is in the disk cache, which as you
know is something we want very much.

I should probably stop here, since I expect that the real details of the
interactions between xrootd and SRM will have to be worked out. Since people
are interested in it, I expect that they will figure this out systematically
at some point. (And, again, for the vast bulk of HEP data, the far more
interesting thing is actually making sure that updates to files _cannot_
happen, not creating mechanisms for updates to happen...)
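The three-step "update file" sequence above can be sketched in a few lines
of Python, using in-memory stand-ins for the SRM replica state and the
redirector's location cache. This is only an illustrative sketch of the
delegation idea, not real xrootd or SRM code: the class names, method names,
and the way the "kX_Refresh" message is delivered are all assumptions.

```python
class Redirector:
    """Stand-in for the xrootd/olbd redirector and its location cache."""
    def __init__(self):
        self.cache = {}          # lfn -> set of data servers (cached locations)

    def handle(self, msg, lfn):
        # "kX_Refresh" is the message named in the text; how it is actually
        # delivered to the redirector is an assumption here.
        if msg == "kX_Refresh":
            self.cache.pop(lfn, None)   # invalidate the cached entry

class SRM:
    """Stand-in for the SRM backend managing the disk cache."""
    def __init__(self, redirector):
        self.replicas = {}       # lfn -> dict of server -> file contents
        self.redirector = redirector

    def update_file(self, lfn, new_contents, target_server):
        # o remove all existing replicas
        self.replicas.pop(lfn, None)
        # o send the xrootd redirector a kX_Refresh
        self.redirector.handle("kX_Refresh", lfn)
        # o put the updated file in place of the previous replicas
        self.replicas[lfn] = {target_server: new_contents}

redirector = Redirector()
srm = SRM(redirector)
srm.replicas["/store/f1"] = {"srvA": "v1", "srvB": "v1"}
redirector.cache["/store/f1"] = {"srvA", "srvB"}

srm.update_file("/store/f1", "v2", "srvA")
print(srm.replicas["/store/f1"])        # {'srvA': 'v2'}
print("/store/f1" in redirector.cache)  # False
```

Note that the flow is strictly one-way: SRM does all the work and sends a
single fire-and-forget message, so no synchronization protocol between the
two pieces is needed, and reads of cached files involve no interaction at
all.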
> This is coherent with my very old proposal to use xrootd for communicating
> with the underlying MSS, so you have only one consistent interface.
> Unfortunately, the administrative interface has been given the lowest
> priority in this project. As a result we still use the love-it-or-hate-it
> Pud, and still cannot ask such fundamental questions as: on which host(s)
> a file is/should be.

Well, we made quite a bit of progress on defining further details of the
xrootd administrative interface (this was the meeting we scheduled
especially for you, and you came and fell asleep... ;-) Fabrizio has a
small fraction of the full needed set of functionality in the "client
admin", and I am looking for someone to systematically finish and package
the rest.

               Pete

> On Thu, 3 Mar 2005, Peter Elmer wrote:
>
> >   (d) xrootd/olbd (ro or r/w) + SRM backend
> >
> >     - new files can be added either via the xrootd interface (if r/w)
> >       or the SRM one
> >     - when an xrootd (configured for r/w) creates a new file it must
> >       ask/tell SRM after selecting a data server
> >     - when SRM adds a new file to the disk cache (or underlying MSS)
> >       xrootd can just discover it as in (a) above
> >     - files can be updated via either interface
> >     - when xrootd updates a file, it must tell SRM to allow internal
> >       replicas to be removed
> >     - when SRM updates a file it must tell xrootd to invalidate that
> >       entry in its redirector cache (and SRM must update all replicas)
> >     - files can be _read_ via xrootd with _no_ communication necessary
> >       between xrootd/olbd and SRM if the file is on disk
> >     - files can be _read_ via SRM with _no_ communication necessary
> >       between xrootd/olbd and SRM
> >     - files can be _read_ via xrootd with a mssgwcmd-style communication
> >       with SRM if the file is not currently on disk (and an MSS exists)
> >     - xrootd/olbd decides when files should be replicated and chooses a
> >       server based on load info and free-space info from SRM; SRM on the
> >       back end arranges for the replication (using whatever mechanism it
> >       likes, e.g. restaging from MSS or disk-to-disk copy)
> >     - while (a) can be used for most of the bulk data in HEP, this
> >       solution has some advantages for some methods of managing user
> >       data, etc.
> >
> >   Anything I've missed or gotten wrong?
> >
> >               Pete

-------------------------------------------------------------------------
Peter Elmer     E-mail: [log in to unmask]     Phone: +41 (22) 767-4644
Address: CERN Division PPE, Bat. 32 2C-14, CH-1211 Geneva 23, Switzerland
-------------------------------------------------------------------------