Hi Andy,

Many thanks for your detailed reply, and also for explaining the design concept! 
Indeed, now that you mention it, I found some presentations from the early days showing that xrootdfs was not really designed as "mountable POSIX for accessing xrootd storage" but rather as an interoperability layer
with other transfer tools. 

> Now, all that said, we have had requests from other user groups to allow xrootd to track true file ownership and be able to export such ownership for existing file systems (your case here).
> So, this development effort is in our plan and you will see it; though it may not be on the timescale that would conveniently address your needs.
If I understand correctly, this would cover "only" the part for exporting existing filesystems. Would it also include the reverse direction, i.e., when a user creates a new file in xrootdfs,
the xrootd data server creates the file with the correct owner / group / permissions on the backing filesystem? Maybe even ACLs - or would that break the scope of xrootd? 

I believe the requests you receive are a result of xrootd's ease of use: it is really simple to set up a clustered system with a redirector, supporting HA, load balancing, and authentication,
with high performance out of the box. Re-using such a cluster to export a general purpose filesystem, for example to mount the HPC storage in containers which may run on other Grid sites or in the cloud,
would be a really cool feature. 
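Just to illustrate what I mean by "simple to set up" - a minimal sketch of the cluster configuration as I understand it from the documentation (hostnames and paths are placeholders, and directive details may of course differ between xrootd versions):

```
# shared config for redirector and data servers (rdr.example.org is a placeholder)
all.manager rdr.example.org:3121
all.role manager if rdr.example.org
all.role server
all.export /beegfs

# Kerberos V authentication plus path-based authorization
sec.protocol krb5
ofs.authorize
acc.authdb /etc/xrootd/authdb
```

With one such file on all nodes, adding a data server is essentially just starting another daemon pointed at the same manager.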

In any case, you are absolutely right - while we will certainly follow xrootd development on this end, we need a solution on a different timescale to address our current needs. 
I am now looking at NFS-Ganesha, which should support load balancing, HA, and Kerberos authentication for pNFS exports, but I have yet to learn how interoperable it is with BeeGFS. 
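In case it is useful to others on the list, here is a first sketch of such a Ganesha export as I currently understand it (untested; the export path, the krb5 security flavors, and the choice of the generic VFS FSAL on top of a local BeeGFS mount are my assumptions, since I am not aware of a BeeGFS-specific FSAL):

```
# /etc/ganesha/ganesha.conf (sketch, untested)
EXPORT {
    Export_Id   = 10;
    Path        = /mnt/beegfs;    # BeeGFS mounted locally on the Ganesha server
    Pseudo      = /beegfs;        # NFSv4 pseudo-fs path seen by clients
    Protocols   = 4;
    Access_Type = RW;
    SecType     = "krb5", "krb5i", "krb5p";
    FSAL {
        Name = VFS;               # re-export the local mount via the generic FSAL
    }
}
```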

Many thanks again, and all the best! 
Oliver


On 09.08.2017 at 23:12, Andrew Hanushevsky wrote:
> Hi Oliver,
> 
> Indeed, as you surmised, files are owned by xrootd, not the creating user. So, information returned would reflect that. Authorization (the default package) uses capability lists to control access to files based on file path and client multi-factor identity. It is this way because xrootd was never designed to be a general purpose file system, and many general purpose features were never implemented. As a special purpose storage system it has many features useful for exascale data management that simply don't exist in general purpose file systems. Even the FUSE mountable xrootd storage option was developed to address interoperability problems with other data transfer tools; not to provide a seamless general purpose filesystem experience. So, while xrootd overlaps a GPFS in many ways, the focus is very different.
> 
> Now, all that said, we have had requests from other user groups to allow xrootd to track true file ownership and be able to export such ownership for existing file systems (your case here). So, this development effort is in our plan and you will see it; though it may not be on the timescale that would conveniently address your needs.
> 
> Andy
> 
> On Wed, 9 Aug 2017, Oliver Freyermuth wrote:
> 
>> Dear experts,
>>
>> we have a common cluster filesystem (BeeGFS) we are exporting with xrootd for Grid access.
>> In addition, we are now considering exporting the filesystem (which itself is in a private network no user can access) for user access in our "desktop" network via xrootd,
>> i.e. "mounting" the xrootd-exported FS on normal desktop machines so users can access their data on the large BeeGFS storage directly.
>>
>> For this, xrootdfs seems to be the way to go. The scheme would be:
>> - xrootd data servers and redirector mount BeeGFS (i.e. the servers live in the private network and the internal desktop network).
>> - Desktops authenticate via KRB5 with the xrootd redirector / servers.
>> - Desktops and xrootd servers use sssd + Kerberos V + LDAP so they all "know" the same set of users and groups.
>>
>> Now my question is, before I delve deeper in the configuration manual and perform tests:
>> With a single xrootdfs mount (via fstab) on the desktop machines, does the user mapping work as expected, i.e. will users see "their" files with correct permissions, and can they use shared directories belonging to their groups?
>> It's crucial that also the files on BeeGFS will be created with the correct UID / GID - since users will also submit jobs working on BeeGFS natively, running with their UID.
>>
>> From a glance at the documentation, it sadly seems that all files would be handled by the single user which the xrootd servers are running as, and a mapping of users to directory permissions would only be possible via sss authentication + ACLs,
>> which would ignore any existing file permissions on BeeGFS.
>> Is this true?
>> If yes: Is there another way to achieve what we would like with xrootd?
>>
>> We are especially interested in xrootd here since it makes it easy to increase bandwidth by just adding more data servers, and we are running the machinery for Grid purposes in any case.
>> It would also allow mounting our BeeGFS from virtually anywhere (for example, from inside containers of jobs running on desktops, or even in the cloud if we open up our servers further).
>>
>> In case xrootdfs turns out not to be able to solve this, I am also very happy to accept any other options that may come to mind.
>>
>> Many thanks in advance and all the best,
>> Oliver
>>
>> -- 
>> Oliver Freyermuth
>> Universität Bonn
>> Physikalisches Institut
>> Nußallee 12
>> 53115 Bonn
>> -- 
>>
>> ########################################################################
>> Use REPLY-ALL to reply to list
>>
>> To unsubscribe from the XROOTD-L list, click the following link:
>> https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1
>>


-- 
Oliver Freyermuth
Universität Bonn
Physikalisches Institut, Raum 1.047
Nußallee 12
53115 Bonn
--
