Hi Jacek,

The collector runs xrootd-20050417-0431 on Scientific Linux CERN release 
3.0.3 (SL).

The dataserver runs xrootd-20050417-0431 on Slackware 10.1.0. Here is a 
part of the xrootd log. I don't see anything strange, but maybe it helps 
you:

050428 00:00:00 17956 (c) 2004 Stanford University/SLAC xrd version 20050417-0431_dbg
050428 00:00:00 17956 XrdSched: scheduling midnight runner in 86400 seconds
050428 00:00:09 18092 XrdSched: running monitor window clock inq=0
050428 00:00:09 18092 XrdSched: scheduling monitor window clock in 15 seconds
050428 00:00:24 17956 XrdSched: running monitor window clock inq=0
050428 00:00:24 17956 XrdSched: scheduling monitor window clock in 15 seconds
050428 00:00:31 001 XrdInet: Accepted connection from pcardaab.cern.ch
050428 00:00:31 18092 XrdSched: running ?:16@pcardaab inq=0
050428 00:00:31 18092 ?:16@pcardaab XrdPoll: FD 16 attached to poller 0; num=1
050428 00:00:31 18092 XrootdXeq: catac.14795:16@pcardaab login
050428 00:00:39 17956 XrdSched: Now have 3 workers
050428 00:00:39 17956 XrdSched: running monitor window clock inq=0
050428 00:00:39 17956 XrdSched: scheduling monitor window clock in 15 seconds
050428 00:00:46 18092 XrootdXeq: catac.14795:16@pcardaab disc 0:00:15
050428 00:00:46 18092 catac.14795:16@pcardaab XrdPoll: sending poller 0 detach for link 16
050428 00:00:46 17957 XrdPoll: Poller 0 detached fd 16 entry 1 now at 1
050428 00:00:46 18092 catac.14795:16@pcardaab XrdPoll: FD 16 detached from poller 0; num=0
050428 00:00:54 19190 XrdSched: running monitor window clock inq=0
050428 00:00:54 19190 XrdSched: scheduling monitor window clock in 15 seconds
050428 00:01:09 17956 XrdSched: running monitor window clock inq=0
050428 00:01:09 17956 XrdSched: scheduling monitor window clock in 15 seconds
...

And its full configuration is:
# xrootd
xrootd.fslib /home/catac/mywork/xrootd/lib/arch/libXrdOfs.so
xrootd.export /tmp
xrootd.async off
xrootd.monitor all flush 30s window 15s dest files io info user pcardaab.cern.ch:9930
xrd.trace all -debug
oss.readonly
odc.manager pccil 3121

# olbd
olb.port 3121
olb.subscribe pccil 3121
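
Since the collector complains about "Invalid packet length 0", it can also help to look at the raw UDP monitor stream independently of xrootd. Below is a minimal sketch using plain Python sockets; port 9930 is taken from the monitor "dest" line above, and the `datagram_sizes` helper is purely illustrative, not part of xrootd:

```python
import socket

# Minimal UDP listener for the xrootd monitor stream (a debugging sketch).
# Port 9930 matches the "dest ... pcardaab.cern.ch:9930" directive above;
# adjust if the collector listens elsewhere.

def datagram_sizes(port, count, host=""):
    """Return the sizes of the first `count` datagrams received on `port`.

    A size of 0 means an empty datagram arrived -- the condition the
    collector reports as "Invalid packet length 0".
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sizes = []
    for _ in range(count):
        data, _addr = sock.recvfrom(65535)
        sizes.append(len(data))
    sock.close()
    return sizes

if __name__ == "__main__":
    for n in datagram_sizes(9930, count=5):
        print(n, "bytes")
```

If this listener (run in place of the collector) reports 0-byte datagrams, the empty packets originate on the sending side rather than in the collector's parsing.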

Cheers,
Catalin.

Jacek Becla wrote:
> Hmm, you do get packets with zero length. What version of xrootd are you 
> running and on what OS?
> 
> Jacek
> 
> 
> 
> Catalin Cirstoiu wrote:
> 
>> Hi Jacek,
>>
>> Thanks for the directions. However, I started the collector and when I 
>> do an xrdcp I get this:
>>
>> ...
>> RT locking.ok.0.unlocked
>> RT locking.ok.0.unlocked
>> RT locking.ok.0.unlocked
>> Caught exception 130 "Invalid packet length 0"
>> Caught exception 130 "Invalid packet length 0"
>> RT locking.ok.0.unlocked
>> Caught exception 130 "Invalid packet length 0"
>> RT locking.ok.0.unlocked
>> ...
>>
>> This is why I mentioned the 0-length packets in my previous mail.
>>
>> Cheers,
>> Catalin.
>>
>> Jacek Becla wrote:
>>
>>> Hi Catalin,
>>>
>>>> No problem. I'll be looking forward to the binary dump. Jacek, can 
>>>> you point Catalin to the binary collector and provide simple 
>>>> instructions on how to capture the information? Thanks.
>>>
>>> the program for dumping packets is not built by default, you will 
>>> have to tweak src/XrdMon/GNUmakefile to enable it (uncomment lines 
>>> 83-85), then build.
>>>
>>> Run the collector "xrdmonCollector", no arguments are necessary. It 
>>> will store collector's logs in ./logs/collector/<xrootdServer>/<port>/.
>>>
>>> You will need to have some activity on the server to trigger at least 
>>> one flush of the collector's buffers.
>>>
>>> Most likely you will not want to wait until it fills up the first log 
>>> file, so create the following link:
>>> ln -s logs/collector/<xrootdServer>/<port>/active.rcv 20050427_12:00:00.000_sender:1000.rcv
>>> Then run "xrdmonDumpPackets 20050427_12:00:00.000_sender:1000.rcv"
>>>
>>> This tool will produce a file for each packet received and store them 
>>> in the /tmp directory. The files will be called "ap.dump.<sequenceNo>".
>>>
>>> Let me know if you have any questions (the dump tool is not 
>>> productized yet, thus all the hassle with symlinks)
>>>
>>> cheers,
>>> Jacek