Hey Andy,

Talking with folks, we decided that we'd rather implement things at
the OFS layer.  The main driver is that we want this to be a simple
service that translates the internal protocols into something that
can be exported.  Using the OFS layer means (a) we don't need a
large disk cache to act as an "in-between" (which keeps setup and
configuration simpler, among other things) and (b) there's no added latency.

With regards to your various concerns:
a) Authorization: handled by the underlying file system.
b) Cluster management: we don't particularly want this at this point;
we're aiming at a far lower scale.
c) Extended NS functions: we don't want to expose these; we want a
read-only file system for authenticated users.
d) Persistence: again, read-only.
e) MSS coordination: we definitely don't want to support this.

Luckily, after closely examining the Sfs example, the HDFS
implementation is dead-simple, and I have a working version that
meets all my needs.
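For anyone following along: the read path of such a plug-in essentially wraps a handful of calls from the libhdfs header linked below. Here's a minimal sketch of that wrapping, not the actual plug-in code; the "default" connection string and the file path are placeholders, it needs libhdfs and a reachable namenode to build and run, and error handling is kept to the bare minimum:

```
/* Sketch of the libhdfs read path an xrootd file-system plug-in would
 * wrap.  Requires linking against libhdfs; the connection string and
 * path below are placeholders, not values from this thread. */
#include <fcntl.h>
#include <stdio.h>
#include "hdfs.h"

int main(void)
{
    char buf[4096];

    /* "default" picks up the namenode from the Hadoop configuration. */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) { fprintf(stderr, "hdfsConnect failed\n"); return 1; }

    /* Read-only open, matching the read-only export described above;
     * 0s mean "use the defaults" for buffer size, replication, block size. */
    hdfsFile f = hdfsOpenFile(fs, "/some/file", O_RDONLY, 0, 0, 0);
    if (!f) { fprintf(stderr, "hdfsOpenFile failed\n"); hdfsDisconnect(fs); return 1; }

    tSize n = hdfsRead(fs, f, buf, sizeof(buf));
    printf("read %d bytes\n", (int)n);

    hdfsCloseFile(fs, f);
    hdfsDisconnect(fs);
    return 0;
}
```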

Thanks!

Brian

On Jun 5, 2009, at 7:19 PM, Andrew Hanushevsky wrote:

> Hi Brian,
>
> Adding to what Fabrizio said...
>
> If for some reason you still think that a plug-in is more  
> appropriate for HDFS then you should really consider writing an OSS  
> plug-in as opposed to an OFS plug-in. If you write a plug-in at the  
> OFS layer then you are responsible for implementing all of the  
> logical functions performed by that layer. These include, among  
> others, authorization, cluster management, extended name space  
> functions, safe persistence, and MSS co-ordination. The OSS layer is  
> merely responsible for conveying data to/from the underlying storage  
> system as well as basic name space operations to support itself. To  
> me it sounds like HDFS integration is better suited at the OSS  
> layer. Plus there's much less to do at that layer.
>
> Andy
>
> On Fri, 5 Jun 2009, Fabrizio Furano wrote:
>
>> Hi Brian,
>>
>> if I understood correctly, it does not seem difficult. Just mount
>> your fs partitions on a server, install xrootd there, and make it
>> export those partitions by hiding their prefix through the
>> localroot setting.
>>
>> Imo the easiest way is to use the setup used by ALICE, which is
>> pretty generic and comes bundled. A pointer to the instructions:
>>
>> http://savannah.cern.ch/project/xrootd
>> or
>> http://alien.cern.ch/twiki/bin/view/AliEn/HowToInstallXrootdNew
>>
>> Of course, starting it manually and accessing the full
>> configuration is always an option. The docs are here:
>>
>> http://xrootd.slac.stanford.edu
>>
>> Fabrizio
>>
>> Brian Bockelman wrote:
>>> Hey all,
>>> I've been asked by a few folks about adding an Xrootd interface to  
>>> the file system we use locally, HDFS.  HDFS's security mechanism
>>> makes it so the only true way to be secure is to allow access only
>>> from within the local cluster; I'd like to be able to securely  
>>> export the file system to ROOT-based applications running on the  
>>> local campus (but not transfer it across the world).  HDFS has a  
>>> FUSE interface which implements most of the POSIX API, but  
>>> apparently not enough to have xrootd work directly on top of it :)
>>> However, HDFS has a pleasant, simple C interface:
>>> http://svn.apache.org/repos/asf/hadoop/core/trunk/src/c++/libhdfs/hdfs.h
>>> I'm trying to provide feedback to those who want it about how hard  
>>> this project would be.  Could someone help me determine:
>>> 1) What exactly would need to be implemented?  (I'm a bit new to  
>>> xrootd; I believe I'm looking at implementing a new subclass of  
>>> OFS?)
>>> 2) What would be needed for a minimal working prototype that I
>>> could show to someone?
>>> 3) Is there a sample "simplest implementation" that I could base a  
>>> prototype off of?
>>> 4) What documentation exists to help me along the way?
>>> Thanks!
>>> Brian
>>