Hi,
the chain we are investigating here is xrootd->local disks. Other
comments follow:
Fabrizio Furano wrote:
> Hi all,
>
> from what I know there is no magic we can do for this. Kernel/TCP
> tuning is just a waste of time. IMHO, one of the main sources of
> trouble is that the whole chain
>
> xrootd->gpfs->xfs?->disks
>
> looks performant when it comes to huge bulk data xfer, but it can
> drain everything to crap when stressed with a high transaction rate.
> In general, you will never be able to get the same performance you
> were used to when you had local disks.
> Armando, have you tried to monitor how many requests per sec are
> being executed?
>
Looking at the xrdlog and netstat output I can see roughly 10 requests
(login and disc) per second.
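To make that measurement less rough, a small awk over the xrdlog can tally login/disc events per second. This is only a sketch: it assumes the default XrootdXeq line format visible in the log excerpts below ("YYMMDD HH:MM:SS pid XrootdXeq: client login|disc ...") and that the log sits wherever you point it.

```shell
# rate_per_sec LOGFILE: count login/disc events per second in an xrootd log.
# Assumes lines like: "080526 13:56:16 3356 XrootdXeq: user.pid:fd@host login"
rate_per_sec() {
  awk '$4 == "XrootdXeq:" && ($6 == "login" || $6 == "disc") { count[$2]++ }
       END { for (t in count) print t, count[t] }' "$1" | sort
}
```

Something like `rate_per_sec xrdlog | tail` then shows the rate over the most recent seconds.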
> This was an issue, the main one IMHO. So my suggestions are:
>
> Put "xrootd.async off" in the data servers. Depending on inscrutable
> factors the async disk read in the OS can give some problems
> under stress (see above!), and God only knows what happens with gpfs.
>
> The value you have for the request timeout is a very old default. The
> "new" default (for more than 1 year now) is 600. But I remind you that,
> under the same setup, to perform our (in)famous performance tests at
> CNAF we put 1200. This means that a lot of requests had to wait up to
> 20 min. Do you remember what a mess?
>
> So, my suggestion is 600. This will decrease the "recover" rate. But,
> again, you cannot expect magic. The clients will still recover,
> probably less often. But then you will try to increase the job rate to
> the historical one and will encounter the same problem....
OK, I added in KanAccess.conf:
rootenv XNet.RequestTimeout 600
and in /opt/xrootd/etc/xrootd.cf:
xrootd.async off
and restarted the xrootd daemon.
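As a sanity check before resubmitting jobs, it may be worth grepping both files after the restart to confirm the new directives are really in place. A trivial sketch (the paths are the ones from this setup):

```shell
# check_setting FILE PATTERN: print the matching config line, or warn if absent.
check_setting() {
  grep -H "$2" "$1" || echo "WARNING: '$2' not found in $1" >&2
}

# Example, with the files used in this setup:
# check_setting KanAccess.conf 'XNet.RequestTimeout'
# check_setting /opt/xrootd/etc/xrootd.cf 'xrootd.async'
```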
>
> One more thing: the latest servers contain a fix related to desperate
> kernel (?) situations, dealing with chunk sending. If you find (over a
> period of at least some hours) some complaints in the xrdlog mentioning
> "reset" or something else suspicious, then it's probably a good
> idea to upgrade the servers. For the record, it happened on PROOF
> under high stress, and was spotted by Gerri.
No suspicious messages about "reset"; just two of:
080526 13:49:11 3356 XrdProtocol: ?:1789@node132 terminated handshake
not received
and one of:
080526 13:56:16 3356 XrdPoll: Disabled event occured for
babarsgm.11367:5301@node149
The xrootd release:
[root@babarxrd ~]# rpm -qa | grep xro
xrootd-20071101-0808p1.slac
[root@babarxrd ~]#
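For future runs, a periodic grep along these lines would catch such messages automatically. The patterns are only my guess at what counts as suspicious here (resets, failed handshakes, poller trouble):

```shell
# suspicious_lines LOGFILE: print xrdlog lines hinting at connection trouble.
suspicious_lines() {
  grep -E 'reset|terminated handshake|Disabled event' "$1"
}
```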
>
> One more: the process you are debugging contains many threads, which
> spend 99.999% of their life in polling. The interesting thread is just
> the main one, not the pollers. In gdb you can recognize the main one
> because it executes ROOT calls. All the other ones are insignificant.
I'm asking the remote site manager (Pisa, Italy) to send me a more
complete strace.
>
> Last one: Andy and I very recently put a bugfix in the client for a
> rare but nasty problem related to recovering. That was causing a mess
> for GLAST. Wilko, has that problem been solved, even partially? I have
> not heard anything yet. (So I can check whether you read all the way
> down here.... test and report the magic lottery winning number
> 123456789! :-D )
>
> Well, but I do not believe that you can upgrade ROOT or recompile it
> with an external xrootd package. Let me know. Anyway, if you need it,
> just look in Savannah.cern.ch....
>
I restarted the daemon with xrootd.async off; on the client side the
timeout is still 180 seconds. The current number of jobs is 144 and the
connections are:
[root@babarxrd ~]# netstat -natup | grep xro|wc
7726 54082 842134
[root@babarxrd ~]#
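Raw socket counts like this hide what state the connections are in. A breakdown by TCP state usually tells whether clients are churning connections (a large TIME_WAIT or CLOSE_WAIT pile-up). A sketch, assuming xrootd listens on port 1094 as in this setup and that the state is netstat's 6th column:

```shell
# conn_states: read "netstat -nat"-style lines on stdin, count per TCP state.
conn_states() {
  awk '{ print $6 }' | sort | uniq -c | sort -rn
}

# Example: netstat -nat | grep ':1094' | conn_states
```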
the following is a typical second's worth of log messages:
080526 13:56:16 3356 XrootdXeq: babarsgm.5578:6835@node31 login
080526 13:56:16 3356 XrootdXeq: babarsgm.10409:6836@node114 login
080526 13:56:16 3356 XrootdXeq: babarsgm.20472:6837@node142 login
080526 13:56:16 3356 XrootdXeq: babarsgm.4028:6838@gridwn86 login
080526 13:56:16 3356 XrootdXeq: babarsgm.4028:4143@gridwn86 disc 0:01:09
(ended by babarsgm.4028:6838@gridwn86)
080526 13:56:16 3356 XrootdXeq: babarsgm.20472:6613@node142 disc 0:00:57
(ended by babarsgm.20472:6837@node142)
080526 13:56:17 3356 XrootdXeq: babarsgm.29861:4143@node141 login
080526 13:56:17 3356 XrootdXeq: babarsgm.9600:6613@node53 login
080526 13:56:17 3356 XrootdXeq: babarsgm.29861:6760@node141 disc 0:00:43
(ended by babarsgm.29861:4143@node141)
080526 13:56:17 3356 XrootdXeq: babarsgm.9600:6789@node53 disc 0:00:11
(ended by babarsgm.9600:6613@node53)
080526 13:56:17 3356 XrootdXeq: babarsgm.2092:6760@node66 login
080526 13:56:17 3356 XrootdXeq: babarsgm.2092:5999@node66 disc 0:00:17
(ended by babarsgm.2092:6760@node66)
080526 13:56:17 3356 XrootdXeq: babarsgm.12796:5999@node133 login
080526 13:56:17 3356 XrootdXeq: babarsgm.21691:6789@gridwn36 login
080526 13:56:17 3356 XrootdXeq: babarsgm.21691:4243@gridwn36 disc
0:00:20 (ended by babarsgm.21691:6789@gridwn36)
080526 13:56:17 3356 XrootdXeq: babarsgm.28937:4243@node35 login
080526 13:56:17 3356 XrootdXeq: babarsgm.28937:3895@node35 disc 0:00:20
(ended by babarsgm.28937:4243@node35)
080526 13:56:17 3356 XrootdXeq: babarsgm.6690:6839@node71 login
080526 13:56:17 3356 XrootdXeq: babarsgm.6690:6784@node71 disc 0:00:12
(ended by babarsgm.6690:6839@node71)
080526 13:56:18 3356 XrootdXeq: babarsgm.29861:3895@node141 login
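To see whether the 600-second timeout actually changes anything, the session lifetimes in the disc lines above can be summarised. Another sketch, assuming each log entry is a single line in the real log (they are only wrapped here by the mail client) with the H:MM:SS age as the 7th field:

```shell
# session_ages LOGFILE: print count, average, min and max session age
# taken from "disc H:MM:SS" entries in an xrootd log.
session_ages() {
  awk '$6 == "disc" {
         n++; split($7, t, ":"); s = t[1]*3600 + t[2]*60 + t[3]
         sum += s
         if (n == 1 || s < min) min = s
         if (s > max) max = s
       }
       END { if (n) printf "%d sessions, avg %.0fs, min %ds, max %ds\n",
                           n, sum/n, min, max }' "$1"
}
```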
Thanks everybody; in a day I'll submit new job runs with the timeout
set to 600 and I'll post the xrdlog behaviour.
In the meantime the situation is still critical.
Cheers, Armando
> Good luck!
> Fabrizio
>
>
>
>
>
>> Date: Sat, 24 May 2008 20:03:53 +0200
>> From: Armando Fella <[log in to unmask]>
>> To: Wilko Kroeger <[log in to unmask]>
>> Cc: Andrew Hanushevsky <[log in to unmask]>
>> Subject: Re: Xrootd problem in SP production
>>
>> Hi,
>>
>> the following is part of an strace of an SP job requesting a file from
>> xrootd; it hangs in polling. I hope this info can help in debugging
>> the problem:
>>
>>
>> connect(14, {sa_family=AF_INET, sin_port=htons(1094),
>> sin_addr=inet_addr("212.189.152.199")}, 16) = -1 EINPROGRESS (Operation
>> now in progress)
>> poll([{fd=14, events=POLLOUT|POLLWRNORM, revents=POLLOUT|POLLWRNORM}],
>> 1, 60000) = 1
>> getsockopt(14, SOL_SOCKET, SO_ERROR, [17179869184], [4]) = 0
>> fcntl64(14, F_SETFD, 0x2 /* FD_??? */) = 0
>> time(NULL) = 1210667903
>> time(NULL) = 1210667903
>> poll([{fd=14, events=POLLOUT|POLLERR|POLLHUP|POLLNVAL,
>> revents=POLLOUT}], 1, 1000) = 1
>> send(14, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\7\334", 20, 0) = 20
>> time(NULL) = 1210667903
>> time(NULL) = 1210667903
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
>> poll([{fd=14, events=POLLIN}], 1, 1000) =
>> ...
>> ...
>>
>> A piece of today's fresh xrdlog:
>>
>> 080524 19:29:49 30669 XrootdXeq: babarsgm.30980:780@theown2 disc
>> 0:00:04 (ended by babarsgm.30980:756@theown2)
>> 080524 19:29:50 30669 XrootdXeq: babarsgm.30615:780@theown5 login
>> 080524 19:29:50 30669 XrootdXeq: babarsgm.30615:737@theown5 disc
>> 0:00:05 (ended by babarsgm.30615:780@theown5)
>> 080524 19:29:52 30669 XrootdXeq: babarsgm.22625:737@theown16 login
>> 080524 19:29:52 30669 XrootdXeq: babarsgm.22625:741@theown16 disc
>> 0:00:05 (ended by babarsgm.22625:737@theown16)
>> 080524 19:29:55 30669 XrootdXeq: babarsgm.10828:741@node54 login
>> 080524 19:29:55 30669 XrootdXeq: babarsgm.10828:718@node54 disc
>> 0:00:50 (ended by babarsgm.10828:741@node54)
>> 080524 19:29:56 30669 XrootdXeq: babarsgm.32501:718@node8 login
>> 080524 19:29:56 30669 XrootdXeq: babarsgm.32501:109@node8 disc
>> 0:00:32 (ended by babarsgm.32501:718@node8)
>> 080524 19:29:58 30669 XrootdXeq: babarsgm.10828:109@node54 login
>> 080524 19:29:59 30669 XrootdXeq: babarsgm.32501:781@node8 login
>> 080524 19:29:59 30669 XrootdXeq: babarsgm.32501:718@node8 disc
>> 0:00:03 (ended by babarsgm.32501:781@node8)
>>
>>
>>
>> Cheers, Armando
>>
>>
>> Armando Fella wrote:
>>> Hi,
>>>
>>> I'm using the xrootd client embedded in the MooseApp SP 24.2.1l
>>> executable. The following is the KanAccess.conf:
>>>
>>> rootenv Root.XTNetFileAllowWanConnect 1
>>> rootenv Root.XTNetFileAllowWanRedirect 1
>>> rootenv XNet.RedirDomainAllowRE *.infn.it
>>> rootenv XNet.ConnectDomainAllowRE *.infn.it
>>> rootenv XNet.RequestTimeout 180
>>>
>>> read /store/cdb/* xrootd $XROOTD_HOST:1094/
>>> write /store/cdb/* error
>>>
>>> read /store/cfg/* xrootd $XROOTD_HOST:1094/
>>> write /store/cfg/* error
>>>
>>> read /store/* xrootd $XROOTD_HOST:1094/
>>>
>>> I'm using ROOT version 5.14-00e (but I'm not sure it is the BaBar
>>> one, I should check).
>>> The 20 sec is just one case; the login-disc interval ranges between
>>> 3 sec and 2 min.
>>>
>>> I tried to use the xrd client in /opt/xrootd/bin/ directly, on the
>>> server and on a client WN, and it seems to work fine: it transfers
>>> the file straight through.
>>>
>>> I have 2 xrootd servers with this problem; the user bbrmgr (the
>>> xrootd launcher) owns the .rootrc file on the first server, but on
>>> the second server the file is absent:
>>>
>>> [bbr-serv08] ~ > cat .rootrc
>>> Root.XTNetFileAllowWanConnect: 1
>>> Root.XTNetFileAllowWanRedirect:1
>>> [bbr-serv08] ~ >
>>> XrootD version:
>>>
>>> [bbrmgr@babarxrd ~]$ rpm -qa | grep xroo
>>> xrootd-20071101-0808p1.slac
>>> [bbrmgr@babarxrd ~]$
>>>
>>> The system.rootrc is attached.
>>>
>>> The problem is still there and the fail rate is 95% when we reach
>>> the limit of 70 jobs asking for cdb, cfg and bkg files on the same
>>> xrootd server.
>>>
>>> Please try to find the bug; we are essentially blocked by this
>>> problem. Please ask if you need more info.
>>>
>>> We did the following actions:
>>>
>>> reinstalled xrootd
>>> rebooted the machines
>>> modified the KanAccess.cfg
>>> modified the /etc/sysctl.conf
>>>
>>>
>>> Thanks, Armando
>>>
>>> Wilko Kroeger wrote:
>>>>
>>>> Hello Armando
>>>>
>>>> I cc'ed Andy.
>>>>
>>>> It is not clear what is going wrong. It looks like it might be
>>>> related to the client.
>>>> More comments are below.
>>>>
>>>>
>>>> On Thu, 22 May 2008, Armando Fella wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I'd add some information:
>>>>>
>>>>> 1) I tried to change the TCP rmem/wmem settings in
>>>>> /etc/sysctl.conf, adding these lines:
>>>>>
>>>>> [root@babarxrd ~]# sysctl -p
>>>>> net.ipv4.ip_forward = 0
>>>>> net.ipv4.conf.default.rp_filter = 1
>>>>> net.ipv4.conf.default.accept_source_route = 0
>>>>> kernel.sysrq = 0
>>>>> kernel.core_uses_pid = 1
>>>>> net.core.rmem_max = 16777216
>>>>> net.core.wmem_max = 16777216
>>>>> net.ipv4.tcp_rmem = 4096 87380 16777216
>>>>> net.ipv4.tcp_wmem = 4096 65536 16777216
>>>>> net.core.netdev_max_backlog = 250000
>>>>> [root@babarxrd ~]#
>>>>
>>>> I don't have any experience with tuning the network parameters for
>>>> Linux, so I can't comment on the changes. At SLAC we use xrootd very
>>>> little on Linux, but on the machine where we do use it (ER
>>>> processing, with ~500 clients) we didn't have to do any tuning.
>>>>
>>>>> but after restarting xrootd the wrong behaviour is still present:
>>>>>
>>>>> 080522 16:31:01 30669 XrootdXeq: babarsgm.4308:17@theown15 login
>>>>> 080522 16:31:01 30669 XrootdXeq: babarsgm.4308:15@theown15 disc
>>>>> 0:00:20 (ended by babarsgm.4308:17@theown15)
>>>>> 080522 16:31:03 30669 XrootdXeq: babarsgm.9392:15@node75 login
>>>>> 080522 16:31:03 30669 XrootdXeq: babarsgm.9392:16@node75 disc
>>>>> 0:00:20 (ended by babarsgm.9392:15@node75)
>>>>> 080522 16:31:04 30669 XrootdXeq: babarsgm.24721:16@node119 login
>>>>> 080522 16:31:04 30669 XrootdXeq: babarsgm.24721:18@node119 disc
>>>>> 0:00:20 (ended by babarsgm.24721:16@node119)
>>>>> 080522 16:31:17 30669 XrootdXeq: babarsgm.5109:18@gridwn99 login
>>>>> 080522 16:31:17 30669 XrootdXeq: babarsgm.5109:19@gridwn99 disc
>>>>> 0:00:20 (ended by babarsgm.5109:18@gridwn99)
>>>>> 080522 16:31:21 30669 XrootdXeq: babarsgm.4308:19@theown15 login
>>>>> 080522 16:31:21 30669 XrootdXeq: babarsgm.4308:17@theown15 disc
>>>>> 0:00:20 (ended by babarsgm.4308:19@theown15)
>>>>> 080522 16:31:23 30669 XrootdXeq: babarsgm.9392:17@node75 login
>>>>> 080522 16:31:23 30669 XrootdXeq: babarsgm.9392:15@node75 disc
>>>>> 0:00:20 (ended by babarsgm.9392:17@node75)
>>>>> 080522 16:31:24 30669 XrootdXeq: babarsgm.24721:15@node119 login
>>>>> 080522 16:31:24 30669 XrootdXeq: babarsgm.24721:16@node119 disc
>>>>> 0:00:20 (ended by babarsgm.24721:15@node119)
>>>>
>>>> It is very suspicious that all the disconnects happen after 20 sec.
>>>> Is this happening for all SP jobs or only for some of them?
>>>>
>>>> You could try to copy a file from xrootd with xrdcp and check if
>>>> that works fine or if you see a similar behavior.
>>>>
>>>> Do you know if anything changed on the client side: a different
>>>> ROOT version, a change of timeouts in ~/.rootrc or
>>>> $ROOTSYS/root/etc/rootd/system.rootrc?
>>>>
>>>> I guess you are using the babar root version 5.14-00e. Which xrootd
>>>> version are you using?
>>>>
>>>> Sorry that I don't have a more definite answer.
>>>>
>>>> Cheers,
>>>> Wilko
>>>>
>>>>>
>>>>> Cheers, Armando
>>>>>
>>>>>
>>>>>
>>>>> Wilko Kroeger wrote:
>>>>>>
>>>>>> Hello Armando
>>>>>>
>>>>>> Sorry for the late reply. I didn't manage to talk to Andy about
>>>>>> it but I will try on Wednesday.
>>>>>> I think all your config files and Start scripts look fine. One
>>>>>> thing you could do is to remove the
>>>>>> all.role server
>>>>>> option. By default xrootd starts up as a server and if this
>>>>>> directive is omitted the xrootd will not try to connect to an
>>>>>> olbd (
>>>>>> 080516 13:17:43 24822 odc_Open: Unable to connect socket to
>>>>>> /tmp/.olb/olbd.admin; connection refused)
>>>>>>
>>>>>> To make sure that your xrootd got started properly you can just
>>>>>> do "ps -ef | grep xrootd" and you should only see the -l
>>>>>> <logfile> and -c <xrootd.cf> options being used.
>>>>>>
>>>>>> What I don't understand is from your xrdlog file:
>>>>>>
>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.15816:3890@gridwn75 login
>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.15816:4155@gridwn75 disc
>>>>>> 0:00:40 (ended by babarsgm.15816:3890@gridwn75)
>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15816:4155@gridwn75 login
>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15816:3890@gridwn75 disc
>>>>>> 0:00:03 (ended by babarsgm.15816:4155@gridwn75)
>>>>>>
>>>>>> The client babarsgm.15816 was connected but some timeout happened
>>>>>> that caused the client to reconnect at 14:36:00 causing its
>>>>>> previous session
>>>>>> (babarsgm.15816:4155) to be closed. The strange thing now is that
>>>>>> after three seconds at 14:36:03 the client reconnects again which
>>>>>> causes the previous session (from 14:36:00) to be closed. Three
>>>>>> seconds is very short and I don't think that the client or server
>>>>>> has any timeout this short.
>>>>>> Therefore I also think that
>>>>>> rootenv XNet.RequestTimeout 180
>>>>>> would not help.
>>>>>>
>>>>>> I will talk with Andy maybe he has an idea.
>>>>>>
>>>>>> Cheers,
>>>>>> Wilko
>>>>>>
>>>>>>
>>>>>> On Tue, 20 May 2008, Armando Fella wrote:
>>>>>>
>>>>>>> Reminder
>>>>>>>
>>>>>>> Armando Fella wrote:
>>>>>>>> Here you can find the files I referred in the email:
>>>>>>>>
>>>>>>>> http://www.cnaf.infn.it/~afella/StartXRD.cf
>>>>>>>> http://www.cnaf.infn.it/~afella/StartXRD
>>>>>>>>
>>>>>>>> Cheers, Armando
>>>>>>>>
>>>>>>>> Armando Fella wrote:
>>>>>>>>> *** Discussion title: Simulation Production
>>>>>>>>>
>>>>>>>>> Hi Wilko,
>>>>>>>>>
>>>>>>>>> the machine has not been reinstalled or kernel upgraded, the
>>>>>>>>> following are the info about OS and architecture:
>>>>>>>>>
>>>>>>>>> [root@babarxrd ~]# uname -a
>>>>>>>>> Linux babarxrd.pi.infn.it 2.6.9-55.EL #1 Thu May 3 23:04:51
>>>>>>>>> CDT 2007 i686 athlon i386 GNU/Linux
>>>>>>>>> [root@babarxrd ~]# cat /etc/redhat-release
>>>>>>>>> Scientific Linux SL release 4.5 (Beryllium)
>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>> [root@babarxrd ~]# rpm -qa --queryformat
>>>>>>>>> '[("%{NAME}","%{VERSION}-%{RELEASE}","%{ARCH}")\n]' | grep xroo
>>>>>>>>> ("xrootd","20071101-0808p1.slac","i386")
>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>>
>>>>>>>>> I suspect that the problem could be in the correctness of the
>>>>>>>>> StartXRD.cf and StartXRD files that I got from the xrootd tgz
>>>>>>>>> package (can you check them in the attachment?): I installed
>>>>>>>>> the rpm and took the two files from the tgz.
>>>>>>>>>
>>>>>>>>> Is it possible to increase the client timeout so that a
>>>>>>>>> possible network bottleneck can be avoided?
>>>>>>>>> The KanAccess.cfg instruction to add is:
>>>>>>>>>
>>>>>>>>> rootenv XNet.RequestTimeout 180
>>>>>>>>>
>>>>>>>>> is 180 the right value?
>>>>>>>>>
>>>>>>>>> Right now I'm not able to increase the site job load, so the
>>>>>>>>> xrdlog and pstack are probably not very meaningful; in any
>>>>>>>>> case, they follow:
>>>>>>>>>
>>>>>>>>> [root@babarxrd ~]# pidof xrootd
>>>>>>>>> 24822
>>>>>>>>> [root@babarxrd ~]# pstack 24822
>>>>>>>>> Thread 16 (Thread -1208767584 (LWP 24823)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b8feac in pthread_cond_timedwait@@GLIBC_2.3.2 ()
>>>>>>>>> #2 0x08095948 in XrdSysCondVar::Wait ()
>>>>>>>>> #3 0x0807b04a in XrdBuffManager::Reshape ()
>>>>>>>>> #4 0x0807a8c0 in XrdReshaper ()
>>>>>>>>> #5 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #6 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #7 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 15 (Thread -1209558112 (LWP 24824)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b8feac in pthread_cond_timedwait@@GLIBC_2.3.2 ()
>>>>>>>>> #2 0x08095948 in XrdSysCondVar::Wait ()
>>>>>>>>> #3 0x0808477f in XrdScheduler::TimeSched ()
>>>>>>>>> #4 0x0808303e in XrdStartTSched ()
>>>>>>>>> #5 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #6 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #7 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 14 (Thread -1210348640 (LWP 24825)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b91b4f in [log in to unmask] () from
>>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>>> #2 0x08079b60 in XrdSysSemaphore::Wait ()
>>>>>>>>> #3 0x08083d1a in XrdScheduler::Run ()
>>>>>>>>> #4 0x08083070 in XrdStartWorking ()
>>>>>>>>> #5 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #6 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #7 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 13 (Thread -1211139168 (LWP 24826)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b91b4f in [log in to unmask] () from
>>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>>> #2 0x08079b60 in XrdSysSemaphore::Wait ()
>>>>>>>>> #3 0x08083d1a in XrdScheduler::Run ()
>>>>>>>>> #4 0x08083070 in XrdStartWorking ()
>>>>>>>>> #5 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #6 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #7 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 12 (Thread -1212376160 (LWP 24827)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00895e04 in poll () from /lib/tls/libc.so.6
>>>>>>>>> #2 0x0808130e in XrdPollPoll::Start ()
>>>>>>>>> #3 0x0807fb6c in XrdStartPolling ()
>>>>>>>>> #4 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #5 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #6 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 11 (Thread -1213346912 (LWP 24828)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00895e04 in poll () from /lib/tls/libc.so.6
>>>>>>>>> #2 0x0808130e in XrdPollPoll::Start ()
>>>>>>>>> #3 0x0807fb6c in XrdStartPolling ()
>>>>>>>>> #4 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #5 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #6 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 10 (Thread -1214317664 (LWP 24829)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00895e04 in poll () from /lib/tls/libc.so.6
>>>>>>>>> #2 0x0808130e in XrdPollPoll::Start ()
>>>>>>>>> #3 0x0807fb6c in XrdStartPolling ()
>>>>>>>>> #4 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #5 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #6 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 9 (Thread -1215108192 (LWP 24830)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b92db6 in __nanosleep_nocancel () from
>>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>>> #2 0x080963a8 in XrdSysTimer::Wait ()
>>>>>>>>> #3 0x00210c7d in XrdOdcFinderTRG::Hookup () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #4 0x00210a7c in XrdOdcFinderTRG::Start () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #5 0x00210846 in XrdOdcStartOlb () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #6 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #7 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #8 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 8 (Thread -1215898720 (LWP 24831)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b925cb in __read_nocancel () from
>>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>>> #2 0x08093259 in XrdOucStream::GetLine ()
>>>>>>>>> #3 0x001f1d85 in XrdOfsEvr::recvEvents () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #4 0x001f16d8 in XrdOfsEvRecv () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #5 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #6 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #7 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 7 (Thread -1216689248 (LWP 24832)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b91b4f in [log in to unmask] () from
>>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>>> #2 0x08079b60 in XrdSysSemaphore::Wait ()
>>>>>>>>> #3 0x001f1ac3 in XrdOfsEvr::flushEvents () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #4 0x001f170a in XrdOfsEvFlush () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #5 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #6 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #7 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 6 (Thread -1217479776 (LWP 24833)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x007ffb30 in do_sigwaitinfo () from /lib/tls/libc.so.6
>>>>>>>>> #2 0x007ffc0f in sigwaitinfo () from /lib/tls/libc.so.6
>>>>>>>>> #3 0x0020c8f2 in XrdOssAioWait () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #4 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #5 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #6 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 5 (Thread -1218270304 (LWP 24834)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x007ffb30 in do_sigwaitinfo () from /lib/tls/libc.so.6
>>>>>>>>> #2 0x007ffc0f in sigwaitinfo () from /lib/tls/libc.so.6
>>>>>>>>> #3 0x0020c8f2 in XrdOssAioWait () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #4 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #5 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #6 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 4 (Thread -1219060832 (LWP 24835)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b92db6 in __nanosleep_nocancel () from
>>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>>> #2 0x0020dcbe in XrdOssSys::CacheScan () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #3 0x001ff4ec in XrdOssCacheScan () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #4 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #5 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #6 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 3 (Thread -1219851360 (LWP 24836)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b92db6 in __nanosleep_nocancel () from
>>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>>> #2 0x001eba6f in XrdOfsIdleScan () from
>>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> #3 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #4 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #5 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 2 (Thread -1220641888 (LWP 24837)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b927d8 in accept () from /lib/tls/libpthread.so.0
>>>>>>>>> #2 0x0808d002 in XrdNetSocket::Accept ()
>>>>>>>>> #3 0x0805c833 in XrdXrootdAdmin::Start ()
>>>>>>>>> #4 0x0805c4bb in XrdXrootdInitAdmin ()
>>>>>>>>> #5 0x08095856 in XrdSysThread_Xeq ()
>>>>>>>>> #6 0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>>> #7 0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>>> Thread 1 (Thread -1208764736 (LWP 24822)):
>>>>>>>>> #0 0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>>> #1 0x00b927d8 in accept () from /lib/tls/libpthread.so.0
>>>>>>>>> #2 0x0808a1fe in XrdNet::do_Accept_TCP ()
>>>>>>>>> #3 0x080899de in XrdNet::Accept ()
>>>>>>>>> #4 0x080893c8 in XrdInet::Accept ()
>>>>>>>>> #5 0x0807f359 in mainAccept ()
>>>>>>>>> #6 0x0807f6d1 in main ()
>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>>
>>>>>>>>> The xrdlog contains the daemon start-up messages; no jobs are
>>>>>>>>> running now:
>>>>>>>>>
>>>>>>>>> 080516 13:16:33 001 Scalla is starting. . .
>>>>>>>>> Copr. 2007 Stanford University, xrd version 20071101-0808p1_dbg
>>>>>>>>> Config using configuration file /opt/xrootd/etc/xrootd.cf
>>>>>>>>> ++++++ xrootd [log in to unmask] initialization started.
>>>>>>>>> Config maximum number of connections restricted to 65535
>>>>>>>>> Copr. 2007 Stanford University, xrootd version 2.9.0 build
>>>>>>>>> 20071101-0808p1
>>>>>>>>> ++++++ xrootd protocol initialization started.
>>>>>>>>> =====> xrootd.fslib /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>> =====> xrootd.export /store
>>>>>>>>> Config warning: 'xrootd.seclib' not specified; strong
>>>>>>>>> authentication disabled!
>>>>>>>>> Copr. 2007 Stanford University, Ofs Version 20071101-0808p1_dbg
>>>>>>>>> ++++++ File system initialization started.
>>>>>>>>> =====> all.role server
>>>>>>>>> ++++++ Configuring server role. . .
>>>>>>>>> Config effective /opt/xrootd/etc/xrootd.cf ofs configuration:
>>>>>>>>> ofs.role server
>>>>>>>>> ofs.fdscan 9 120 1200
>>>>>>>>> ofs.maxdelay 60
>>>>>>>>> ofs.trace 0
>>>>>>>>> ------ File system server initialization completed.
>>>>>>>>> Copr. 2007, Stanford University, oss Version 20071101-0808p1_dbg
>>>>>>>>> ++++++ Storage system initialization started.
>>>>>>>>> =====> oss.localroot /data
>>>>>>>>> =====> oss.path /store r/o
>>>>>>>>> 080516 13:16:33 24822 odc_Open: Unable to connect socket to
>>>>>>>>> /tmp/.olb/olbd.admin; connection refused
>>>>>>>>> Config effective /opt/xrootd/etc/xrootd.cf oss configuration:
>>>>>>>>> oss.alloc 0 0 0
>>>>>>>>> oss.cachescan 600
>>>>>>>>> oss.compdetect *
>>>>>>>>> oss.fdlimit 32767 65535
>>>>>>>>> oss.maxdbsize 0
>>>>>>>>> oss.localroot /data
>>>>>>>>> oss.trace 0
>>>>>>>>> oss.xfr 1 9437184 30 10800
>>>>>>>>> oss.memfile off max 1062334464
>>>>>>>>> oss.defaults r/w nocheck nodread nomig norcreate nostage
>>>>>>>>> oss.path /store r/o nocheck nodread nomig norcreate
>>>>>>>>> nostage
>>>>>>>>> ------ Storage system initialization completed.
>>>>>>>>> Config warning: 'xrootd.prepare logdir' not specified; prepare
>>>>>>>>> tracking disabled.
>>>>>>>>> Config exporting /store
>>>>>>>>> ------ xrootd protocol initialization completed.
>>>>>>>>> ------ xrootd [log in to unmask]:1094 initialization
>>>>>>>>> completed.
>>>>>>>>> 080516 13:17:43 24822 odc_Open: Unable to connect socket to
>>>>>>>>> /tmp/.olb/olbd.admin; connection refused
>>>>>>>>> 080516 13:18:53 24822 odc_Open: Unable to connect socket to
>>>>>>>>> /tmp/.olb/olbd.admin; connection refused
>>>>>>>>> ....
>>>>>>>>> ....
>>>>>>>>> ....
>>>>>>>>>
>>>>>>>>> Thank you for all the help
>>>>>>>>>
>>>>>>>>> Cheers, Armando
>>>>>>>>>
>>>>>>>>> Wilko Kroeger wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Hello Armando
>>>>>>>>>>
>>>>>>>>>> The config certainly looks fine and also the load that you
>>>>>>>>>> describe should not cause problems. It is a little bit
>>>>>>>>>> strange that you suddenly see this problem. Do you know if
>>>>>>>>>> anything happened to the machine itself (kernel update, ...)?
>>>>>>>>>>
>>>>>>>>>> The disconnect messages you see are due to timeouts. The
>>>>>>>>>> client does not receive the response from the server within a
>>>>>>>>>> certain timeout and therefore it will reconnect to the server
>>>>>>>>>> and close (disconnect) its previous session.
>>>>>>>>>>
>>>>>>>>>> If you still have the problem you could take a pstack:
>>>>>>>>>> pstack <xrootd_pid> > outfile
>>>>>>>>>> and send the output to me and Andy (or put it somewhere at
>>>>>>>>>> SLAC), and maybe also the xrdlog file.
>>>>>>>>>> I assume the server is a Linux machine.
>>>>>>>>>>
>>>>>>>>>> Cheers,
>>>>>>>>>> Wilko
>>>>>>>>>>
>>>>>>>>>> On Thu, 15 May 2008, Armando Fella wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> we are experiencing unexpected problems on xrootd system
>>>>>>>>>>> dedicated to cdb cfg and bkg access in SP production
>>>>>>>>>>> (24.2.1l). Three days ago the xrootd server (no olbd) on the
>>>>>>>>>>> single machine serving around 100 SP jobs switched off, and
>>>>>>>>>>> since its restart the fail rate has been around 95%. The
>>>>>>>>>>> server had worked properly for 6 months with peaks of 200
>>>>>>>>>>> jobs asking for files.
>>>>>>>>>>>
>>>>>>>>>>> I checked the parameters in xrootd.cf, but they are the
>>>>>>>>>>> standard ones:
>>>>>>>>>>>
>>>>>>>>>>> [root@babarxrd ~]# cat /opt/xrootd/etc/xrootd.cf
>>>>>>>>>>> xrootd.fslib /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>>>> xrootd.export /store
>>>>>>>>>>>
>>>>>>>>>>> oss.localroot /data
>>>>>>>>>>> oss.path /store r/o
>>>>>>>>>>>
>>>>>>>>>>> all.role server
>>>>>>>>>>>
>>>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>>>>
>>>>>>>>>>> The xrdlog continuously shows sessions logging in and
>>>>>>>>>>> disconnecting:
>>>>>>>>>>>
>>>>>>>>>>> [root@babarxrd ~]# tail xrdlog
>>>>>>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.3924:4170@gridwn24 login
>>>>>>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.3924:3890@gridwn24 disc 0:01:49 (ended by babarsgm.3924:4170@gridwn24)
>>>>>>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.15816:3890@gridwn75 login
>>>>>>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.15816:4155@gridwn75 disc 0:00:40 (ended by babarsgm.15816:3890@gridwn75)
>>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15816:4155@gridwn75 login
>>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15816:3890@gridwn75 disc 0:00:03 (ended by babarsgm.15816:4155@gridwn75)
>>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15208:3890@gridwn65 login
>>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15208:4141@gridwn65 disc 0:00:36 (ended by babarsgm.15208:3890@gridwn65)
>>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.3924:4141@gridwn24 login
>>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.3924:4170@gridwn24 disc 0:00:03 (ended by babarsgm.3924:4141@gridwn24)
>>>>>>>>>>> [root@babarxrd ~]#
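[Editor's note: a minimal sketch of how the per-second login/disc rate could be tallied from log lines like those quoted above. The field layout is inferred from the excerpt, the sample lines are copied from it, and /tmp/xrdlog.sample is just an illustrative path, not something from the original post.]

```shell
# Minimal sketch: count login/disc events per second in xrdlog-style output.
# Assumed field layout (from the excerpt above):
#   date time pid XrootdXeq: client event ...
cat > /tmp/xrdlog.sample <<'EOF'
080513 14:36:00 13753 XrootdXeq: babarsgm.3924:4170@gridwn24 login
080513 14:36:00 13753 XrootdXeq: babarsgm.3924:3890@gridwn24 disc 0:01:49 (ended by babarsgm.3924:4170@gridwn24)
080513 14:36:03 13753 XrootdXeq: babarsgm.15816:4155@gridwn75 login
EOF
# Bucket events by the timestamp (field 2) and print events per second.
awk '$4 == "XrootdXeq:" && ($6 == "login" || $6 == "disc") { n[$2]++ }
     END { for (t in n) print t, n[t] }' /tmp/xrdlog.sample | sort
# prints:
# 14:36:00 2
# 14:36:03 1
```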
>>>>>>>>>>> The xrootd version is:
>>>>>>>>>>>
>>>>>>>>>>> [root@babarxrd ~]# rpm -qa | grep xroot
>>>>>>>>>>> xrootd-20071101-0808p1.slac
>>>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Another symptom is the rapid rise in connections (70 jobs):
>>>>>>>>>>>
>>>>>>>>>>> [root@babarxrd ~]# netstat -natup | grep xroot | wc
>>>>>>>>>>> 3715 26005 404935
>>>>>>>>>>> [root@babarxrd ~]# lsof -i | grep xroot | grep gridwn131.pi.infn.it | wc
>>>>>>>>>>> 104 936 12480
>>>>>>>>>>> [root@babarxrd ~]# lsof -i | grep xroot | grep gridwn84.pi.infn.it | wc
>>>>>>>>>>> 205 1845 24395
>>>>>>>>>>> [root@babarxrd ~]# lsof -i | grep xroot | grep gridwn76.pi.infn.it | wc
>>>>>>>>>>> 115 1035 13685
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> dstat output (not much throughput):
>>>>>>>>>>>
>>>>>>>>>>> [root@babarxrd ~]# ./dstat
>>>>>>>>>>> ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
>>>>>>>>>>> usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
>>>>>>>>>>>  40   1  58   2   0   0|  25k   27k|   0     0 |   0  3.7B |1699   265
>>>>>>>>>>>   0   4  96   0   0   0|   0    64k|  31k  520k|   0     0 |1937   391
>>>>>>>>>>>   0   5  95   0   0   0|   0    16k|  38k  619k|   0     0 |2109   480
>>>>>>>>>>>   0   4  96   0   0   0|  16k    0 |  34k  638k|   0     0 |2045   349
>>>>>>>>>>>   0   3  96   1   0   0|  24k  120k|  36k  592k|   0     0 |2068   419
>>>>>>>>>>>   1   3  96   0   0   0|   0   248k|  36k  547k|   0     0 |2068   482
>>>>>>>>>>>   0   3  96   1   0   0|  20k   24k|  33k  580k|   0     0 |2001   344
>>>>>>>>>>>   0   4  96   0   0   0|4096B 8192B|  39k  660k|   0     0 |2126   437
>>>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>>>> On the client side, the KanAccess.cfg file is:
>>>>>>>>>>>
>>>>>>>>>>> rootenv XNet.RedirDomainAllowRE *.infn.it
>>>>>>>>>>> rootenv XNet.ConnectDomainAllowRE *.infn.it
>>>>>>>>>>>
>>>>>>>>>>> read /store/cfg/* xrootd $XROOTD_HOST:1094/
>>>>>>>>>>> write /store/cfg/* error
>>>>>>>>>>>
>>>>>>>>>>> read /store/cdb/* xrootd $XROOTD_HOST:1094/
>>>>>>>>>>> write /store/cdb/* error
>>>>>>>>>>>
>>>>>>>>>>> read /store/* xrootd $XROOTD_HOST:1094/
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Any hints appreciated.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks, Armando
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> ===================================================
>>>>>>>>>>> Armando Fella (BaBar support at INFN-CNAF Tier-1)
>>>>>>>>>>> ---------------------------------------------------
>>>>>>>>>>> Viale Berti Pichat 6/2, 40127, Bologna
>>>>>>>>>>> office 3, via Ranzani 13/2
>>>>>>>>>>>
>>>>>>>>>>> Email: armando.fella at cnaf.infn.it
>>>>>>>>>>> armando.fella at pi.infn.it
>>>>>>>>>>> armando.fella at gmail.com
>>>>>>>>>>>
>>>>>>>>>>> Phone in Bologna: +39 051 6092 902
>>>>>>>>>>> Phone in Pisa: +39 050 2214 231
>>>>>>>>>>> ===================================================
>>>>>>>>>>>
>>>>>>>>>>> ---------------------------------------------------------------------
>>>>>>>>>>> Unless unavoidable, no Word, Excel or PowerPoint
>>>>>>>>>>> attachments, please.
>>>>>>>>>>> See http://www.gnu.org/philosophy/no-word-attachments.html
>>>>>>>>>>>
>>>>>>>>>>> ---------------------------------------------------------------------
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>
>>
--
====================================================
Armando Fella (BaBar support at INFN-CNAF Tier-1)
----------------------------------------------------
Viale Berti Pichat 6/2, 40127, Bologna
office 3, via Ranzani 13/2
Mail: armando.fella at cnaf.infn.it
Phone in Bologna: +39 051 6092 902
Phone in Pisa: +39 050 2214 330
====================================================