Hi All

Just to give you some background: I am currently evaluating xrootd as a potential high-speed scratch disk and datastore for our site, but we do not wish to use GSI- or Kerberos-based authentication. The pwd authentication method seems adequate for our needs.

I've set up a small two-node toy cluster with this sample config:

#start config
xrd.protocol xrootd *
xrd.port 1094
xrd.allow host *.tchpc.tcd.ie

all.export /tmp/data
#oss.localroot /tmp

if manager.tchpc.tcd.ie
all.role manager
else
all.role server
xrootd.seclib libXrdSec.so
sec.protocol pwd -d:3 -dir:/home/jtang/.xrd/  -a:1
#sec.protocol unix
#sec.protocol sss
ofs.authorize 1
acc.authdb /home/jtang/Authfile
fi

all.manager manager.tchpc.tcd.ie 1213

cms.allow host *.tchpc.tcd.ie
# end config
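For reference, I start both daemons on each node with the same config file, roughly like this (the config and log paths here are just from my test setup, not anything canonical):

```
# on the manager and on each data server (paths are from my test setup)
xrootd -c /home/jtang/xrootd.cf -l /tmp/xrootd.log &
cmsd   -c /home/jtang/xrootd.cf -l /tmp/cmsd.log &
```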

I've followed the instructions at http://xrootd.slac.stanford.edu/doc/prod/sec_config.htm#_Toc248670305 but I seem to be at a loss. My small toy system doesn't start up properly. The manager node appears to be fine, but the xrootd on my 'slave' node, which has the /tmp disk that I want to export, displays this type of message:


=====> xrootd.seclib libXrdSec.so
Config exporting /tmp/data
110811 18:34:35 001 XrootdProtocol: Loading security library libXrdSec.so
++++++ Authentication system initialization started.
110811 18:34:35 001 secpwd_Init: using infodir: /home/jtang/.xrd/
110811 18:34:35 001 sut_Cache::Init: cache allocated for 4 entries
110811 18:34:35 001 sut_Cache::Rehash: Hash table updated (found 0 active entries)
110811 18:34:35 001 sut_Cache::Load: PF file /home/jtang/.xrd/pwdadmin loaded in cache (found 4 entries)
110811 18:34:35 001 sut_Cache::Rehash: Adding ID: +++SrvID; key: 0
110811 18:34:35 001 sut_Cache::Rehash: Adding ID: +++SrvPuk_1; key: 1
110811 18:34:35 001 sut_Cache::Rehash: Adding ID: +++SrvEmail; key: 2
110811 18:34:35 001 sut_Cache::Rehash: Adding ID: -host_1; key: 3
110811 18:34:35 001 sut_Cache::Rehash: Hash table updated (found 4 active entries)
110811 18:34:35 001 sut_Cache::Dump: //-----------------------------------------------------
110811 18:34:35 001 sut_Cache::Dump: //
110811 18:34:35 001 sut_Cache::Dump: //  Capacity:         4
110811 18:34:35 001 sut_Cache::Dump: //  Max index filled: 3
110811 18:34:35 001 sut_Cache::Dump: //
110811 18:34:35 001 sut_Cache::Dump: // #:1  st:4 cn:1  buf:6,0,0,0 mod:11Aug2011:18:30:24 name:+++SrvID
110811 18:34:35 001 sut_Cache::Dump: // #:2  st:4 cn:2  buf:126,0,0,0 mod:11Aug2011:18:30:24 name:+++SrvPuk_1
110811 18:34:35 001 sut_Cache::Dump: // #:3  st:4 cn:1  buf:19,0,0,0 mod:11Aug2011:18:30:24 name:+++SrvEmail
110811 18:34:35 001 sut_Cache::Dump: // #:4  st:2 cn:0  buf:8,24,0,0 mod:11Aug2011:18:30:46 name:-host_1
110811 18:34:35 001 sut_Cache::Dump: //
110811 18:34:35 001 sut_Cache::Dump: //-----------------------------------------------------
110811 18:34:35 001 sut_Cache::Get: locating entry for ID: +++SrvID


It just seems to hang and never connects to the cmsd on the slave node. I was wondering: do I need to keep the pwdadmin file/directory on a local disk, such that I need to run

xrdpwdadmin add -host master.tchpc.tcd.ie -email [log in to unmask]
xrdpwdadmin add -host slave.tchpc.tcd.ie -email [log in to unmask]

on the respective hosts? Also, the manual shows "xrdpwdadmin add usertag" as an example; is the usertag just an arbitrary name that I can select for a user, or is it something more specific, such as the "user@host" they are initially connecting from? With the above setup and log message from xrootd, I tried logging in with a client from a third machine, but I can never see the /tmp/data share as it hangs. If I comment out the following, my toy system works as a completely open system, which is not what I want:

xrootd.seclib libXrdSec.so
sec.protocol pwd -d:3 -dir:/home/jtang/.xrd/  -a:1
ofs.authorize 1
acc.authdb /home/jtang/Authfile
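For completeness, my Authfile currently contains only a couple of simple rules along these lines (the username is a placeholder and I'm going from my reading of the access-control docs, so treat this as a sketch):

```
# Authfile: full access for one test user, lookup+read for everyone else
u jtang /tmp/data a
u * /tmp/data lr
```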


My end goal is to set up a /site/scratch area, where I would create a rule for each user giving them their own personal space, and a /site/archive area with restricted read-only access to parts of the filesystem. It seems to me that xrootd would be able to do what I want; I just don't know whether I am headed in the right direction with my test configs. I assume my use case isn't a typical one, as I am not intending to use it for HEP applications but more for general HPC-type activities.
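If I read the authorization docs correctly, the kind of Authfile I have in mind would look something like the following (an untested sketch; the "u =" user template with @= path substitution is my reading of the manual, and the paths are just illustrative):

```
# per-user scratch: each user gets full rights only under their own directory
u = /site/scratch/@=/ a
# archive: lookup + read-only for everyone
u * /site/archive/ lr
```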


Regards,
Jimmy Tang

--
Trinity Centre for High Performance Computing,
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
http://www.tchpc.tcd.ie/