XROOTD-L Archives (XROOTD-L@LISTSERV.SLAC.STANFORD.EDU), May 2008

Subject: Re: Xrootd problem in SP production (fwd)
From: Fabrizio Furano <[log in to unmask]>
Date: Mon, 26 May 2008 10:49:09 +0200
Content-Type: text/plain
Parts/Attachments: text/plain (814 lines)

Hi all,

  from what I know, there is no magic we can do for this. Kernel/TCP 
tuning is just a waste of time. IMHO, one of the main sources of trouble 
is that the whole chain

xrootd->gpfs->xfs?->disks

  performs well for huge bulk data transfers, but can drag everything 
down when stressed with a high transaction rate. In general, you will 
never get the same performance you were used to when you had the local 
disks.
  Armando, have you tried to monitor how many requests per second are 
being executed?

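  A rough way to get that number (just a sketch: the xrdlog format you 
posted below starts each line with a date and a time, and logins per 
second is only a proxy for the request rate, but it is a start):

    # count login lines per second; the busiest seconds come out on top
    grep ' login$' xrdlog | awk '{print $1, $2}' | uniq -c | sort -rn | head
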
  This was an issue, the main one IMHO. So my suggestions are:

  Put "xrootd.async off" in the data servers. Depending on imper scrutable 
factors the async disk read in the OS can give some problems under stress 
(see up!), and God only knows what happens with gpfs.

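  Against the xrootd.cf you posted further down in this thread, that 
would be just one extra line (a sketch, untested):

    xrootd.fslib /opt/xrootd/lib/libXrdOfs.so
    xrootd.export /store
    xrootd.async off

    oss.localroot /data
    oss.path /store r/o

    all.role server
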
  The value you have for the requesttimeout is a very old default. The 
"new" default (for more than a year now) is 600. But I remind you that, 
under the same setup, to perform our (in)famous performance tests at 
CNAF we used 1200. That meant a lot of requests had to wait up to 20 
min. Do you remember what a mess that was?

  So, my suggestion is 600. This will decrease the "recover" rate. But, 
again, you cannot expect magic. The clients will still recover, probably 
less often. But then you will try to increase the job rate back to the 
historical one and will hit the same problem....

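  In the KanAccess.conf you posted, that would mean changing the 
timeout line to:

    rootenv XNet.RequestTimeout 600
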
  One more thing: the latest servers contain a fix related to desperate 
kernel (?) situations, dealing with chunk sending. If you find (over a 
period of at least some hours) complaints in the xrdlog mentioning 
"reset" or something else suspicious, then it's probably a good idea to 
upgrade the servers. For the record, it happened on PROOF under high 
stress, and was spotted by Gerri.

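  Something like this would surface those complaints (a sketch; the log 
path is whatever you pass to xrootd with -l):

    # scan the log for reset-style complaints and show the latest ones
    grep -iE 'reset|broken|refused' xrdlog | tail -50
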
  One more: the process you are debugging contains many threads, which 
spend 99.999% of their life polling. The interesting thread is just the 
main one, not the pollers. In gdb you can recognize the main one because 
it executes ROOT calls. All the others are insignificant.

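  A minimal way to find it, assuming you can attach gdb to the process:

    gdb -p <client_pid>
    (gdb) info threads   # most threads will be sitting in poll()
    (gdb) thread 1       # the initial thread is usually number 1
    (gdb) bt             # its backtrace should show the ROOT calls
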
  Last one: Andy and I very recently put a bugfix in the client for a 
rare but nasty problem related to recovery. That was causing a mess for 
GLAST. Wilko, has that problem been solved, even partially? I have not 
heard anything yet. (And so I can check whether you read all the way 
down to here: test and report the magic lottery winning number 
123456789! :-D )

  Well, I do not believe that you can upgrade ROOT or recompile it 
against an external xrootd package. Let me know. Anyway, if you need it, 
just look on Savannah.cern.ch....

  Good luck!
  Fabrizio





> Date: Sat, 24 May 2008 20:03:53 +0200
> From: Armando Fella <[log in to unmask]>
> To: Wilko Kroeger <[log in to unmask]>
> Cc: Andrew Hanushevsky <[log in to unmask]>
> Subject: Re: Xrootd problem in SP production
> 
> Hi,
> 
> the following is part of an strace of a SP job requesting a file from 
> xrootd; it hangs in polling. I hope this info could help in debugging 
> the problem:
> 
> 
> connect(14, {sa_family=AF_INET, sin_port=htons(1094),
> sin_addr=inet_addr("212.189.152.199")}, 16) = -1 EINPROGRESS (Operation
> now in progress)
> poll([{fd=14, events=POLLOUT|POLLWRNORM, revents=POLLOUT|POLLWRNORM}],
> 1, 60000) = 1
> getsockopt(14, SOL_SOCKET, SO_ERROR, [17179869184], [4]) = 0
> fcntl64(14, F_SETFD, 0x2 /* FD_??? */)  = 0
> time(NULL)                              = 1210667903
> time(NULL)                              = 1210667903
> poll([{fd=14, events=POLLOUT|POLLERR|POLLHUP|POLLNVAL,
> revents=POLLOUT}], 1, 1000) = 1
> send(14, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\7\334", 20, 0) = 20
> time(NULL)                              = 1210667903
> time(NULL)                              = 1210667903
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) = 0
> poll([{fd=14, events=POLLIN}], 1, 1000) =
> ...
> ...
> 
> A fresh piece of today's xrdlog:
> 
> 080524 19:29:49 30669 XrootdXeq: babarsgm.30980:780@theown2 disc 0:00:04 
> (ended by babarsgm.30980:756@theown2)
> 080524 19:29:50 30669 XrootdXeq: babarsgm.30615:780@theown5 login
> 080524 19:29:50 30669 XrootdXeq: babarsgm.30615:737@theown5 disc 0:00:05 
> (ended by babarsgm.30615:780@theown5)
> 080524 19:29:52 30669 XrootdXeq: babarsgm.22625:737@theown16 login
> 080524 19:29:52 30669 XrootdXeq: babarsgm.22625:741@theown16 disc 
> 0:00:05 (ended by babarsgm.22625:737@theown16)
> 080524 19:29:55 30669 XrootdXeq: babarsgm.10828:741@node54 login
> 080524 19:29:55 30669 XrootdXeq: babarsgm.10828:718@node54 disc 0:00:50 
> (ended by babarsgm.10828:741@node54)
> 080524 19:29:56 30669 XrootdXeq: babarsgm.32501:718@node8 login
> 080524 19:29:56 30669 XrootdXeq: babarsgm.32501:109@node8 disc 0:00:32 
> (ended by babarsgm.32501:718@node8)
> 080524 19:29:58 30669 XrootdXeq: babarsgm.10828:109@node54 login
> 080524 19:29:59 30669 XrootdXeq: babarsgm.32501:781@node8 login
> 080524 19:29:59 30669 XrootdXeq: babarsgm.32501:718@node8 disc 0:00:03 
> (ended by babarsgm.32501:781@node8)
> 
> 
> 
> Cheers,  Armando
> 
> 
> Armando Fella wrote:
>> Hi,
>>
>> I'm using the xrootd client embedded in the MooseApp SP 24.2.1l 
>> executable. The following is the KanAccess.conf:
>>
>> rootenv Root.XTNetFileAllowWanConnect 1
>> rootenv Root.XTNetFileAllowWanRedirect 1
>> rootenv XNet.RedirDomainAllowRE *.infn.it
>> rootenv XNet.ConnectDomainAllowRE *.infn.it
>> rootenv XNet.RequestTimeout 180
>>
>> read /store/cdb/* xrootd $XROOTD_HOST:1094/
>> write /store/cdb/* error
>>
>> read /store/cfg/* xrootd $XROOTD_HOST:1094/
>> write /store/cfg/* error
>>
>> read /store/* xrootd $XROOTD_HOST:1094/
>>
>> I'm using root version 5.14-00e (but I'm not sure it is the BaBar 
>> one, I should check).
>> The 20 sec is just one case; the login-to-disconnect interval ranges 
>> from 3 sec to 2 min.
>>
>> I tried to use the xrd client in /opt/xrootd/bin/ directly, on the 
>> server and on a client WN, and it seems to work fine: it transfers 
>> the file straight away.
>>
>> I have 2 xrootd servers with this problem. The user bbrmgr (the 
>> xrootd launcher) owns the .rootrc file on the first server, but on 
>> the second the file is absent:
>>
>> [bbr-serv08] ~ > cat .rootrc
>> Root.XTNetFileAllowWanConnect: 1
>> Root.XTNetFileAllowWanRedirect:1
>> [bbr-serv08] ~ >
>>
>> The XrootD version:
>>
>> [bbrmgr@babarxrd ~]$ rpm -qa | grep xroo
>> xrootd-20071101-0808p1.slac
>> [bbrmgr@babarxrd ~]$
>>
>> The system.rootrc is in the attachment.
>>
>> The problem is still there, and the fail rate is 95% when we reach 
>> the limit of 70 jobs asking for cdb, cfg and bkg files on the same 
>> xrootd server.
>>
>> Please try to find the bug; we are quite blocked by this problem.
>> Please ask if you need more info.
>>
>> We did the following actions:
>>
>> reinstall xrootd
>> reboot the machines
>> modify the KanAccess.cfg
>> modify the /etc/sysctl.conf
>>
>>
>> Thanks,  Armando
>>
>> Wilko Kroeger wrote:
>>>
>>> Hello Armando
>>>
>>> I cc'ed Andy.
>>>
>>> It is not clear what is going wrong. It looks like it might be 
>>> related to the client.
>>> Below are more comments.
>>>
>>>
>>> On Thu, 22 May 2008, Armando Fella wrote:
>>>
>>>> Hi,
>>>>
>>>> I'd add some information:
>>>>
>>>> 1) I tried to change the TCP rmem/wmem settings in 
>>>> /etc/sysctl.conf, adding these lines:
>>>>
>>>> [root@babarxrd ~]# sysctl -p
>>>> net.ipv4.ip_forward = 0
>>>> net.ipv4.conf.default.rp_filter = 1
>>>> net.ipv4.conf.default.accept_source_route = 0
>>>> kernel.sysrq = 0
>>>> kernel.core_uses_pid = 1
>>>> net.core.rmem_max = 16777216
>>>> net.core.wmem_max = 16777216
>>>> net.ipv4.tcp_rmem = 4096 87380 16777216
>>>> net.ipv4.tcp_wmem = 4096 65536 16777216
>>>> net.core.netdev_max_backlog = 250000
>>>> [root@babarxrd ~]#
>>>
>>> I don't have any experience with tuning the network parameters for 
>>> Linux, so I can't comment on the changes. At SLAC we use xrootd very 
>>> little on Linux, but on the machine where we are using it (ER 
>>> processing, with ~500 clients) we didn't have to do any tuning.
>>>
>>>> but after restarting xrootd the wrong behaviour is still present:
>>>>
>>>> 080522 16:31:01 30669 XrootdXeq: babarsgm.4308:17@theown15 login
>>>> 080522 16:31:01 30669 XrootdXeq: babarsgm.4308:15@theown15 disc 
>>>> 0:00:20 (ended by babarsgm.4308:17@theown15)
>>>> 080522 16:31:03 30669 XrootdXeq: babarsgm.9392:15@node75 login
>>>> 080522 16:31:03 30669 XrootdXeq: babarsgm.9392:16@node75 disc 
>>>> 0:00:20 (ended by babarsgm.9392:15@node75)
>>>> 080522 16:31:04 30669 XrootdXeq: babarsgm.24721:16@node119 login
>>>> 080522 16:31:04 30669 XrootdXeq: babarsgm.24721:18@node119 disc 
>>>> 0:00:20 (ended by babarsgm.24721:16@node119)
>>>> 080522 16:31:17 30669 XrootdXeq: babarsgm.5109:18@gridwn99 login
>>>> 080522 16:31:17 30669 XrootdXeq: babarsgm.5109:19@gridwn99 disc 
>>>> 0:00:20 (ended by babarsgm.5109:18@gridwn99)
>>>> 080522 16:31:21 30669 XrootdXeq: babarsgm.4308:19@theown15 login
>>>> 080522 16:31:21 30669 XrootdXeq: babarsgm.4308:17@theown15 disc 
>>>> 0:00:20 (ended by babarsgm.4308:19@theown15)
>>>> 080522 16:31:23 30669 XrootdXeq: babarsgm.9392:17@node75 login
>>>> 080522 16:31:23 30669 XrootdXeq: babarsgm.9392:15@node75 disc 
>>>> 0:00:20 (ended by babarsgm.9392:17@node75)
>>>> 080522 16:31:24 30669 XrootdXeq: babarsgm.24721:15@node119 login
>>>> 080522 16:31:24 30669 XrootdXeq: babarsgm.24721:16@node119 disc 
>>>> 0:00:20 (ended by babarsgm.24721:15@node119)
>>>
>>> It is very suspicious that all the disconnects are after 20 sec. Is 
>>> this happening for all SP jobs or only for some of them?
>>>
>>> You could try to copy a file from xrootd with xrdcp and check if that 
>>> works fine or if you see a similar behavior.
>>>
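>>> For example (a sketch; substitute any file you know exists under 
>>> /store):
>>>
>>>   xrdcp root://$XROOTD_HOST:1094//store/cdb/<some_file> /tmp/testcopy
>>>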
>>> Do you know if anything changed on the client side: a different 
>>> root version, or a change of timeouts in ~/.rootrc or 
>>> $ROOTSYS/root/etc/rootd/system.rootrc?
>>>
>>> I guess you are using the babar root version 5.14-00e. Which xrootd 
>>> version are you using?
>>>
>>> Sorry that I don't have a more definite answer.
>>>
>>> Cheers,
>>>    Wilko
>>>
>>>>
>>>> Cheers,  Armando
>>>>
>>>>
>>>>
>>>> Wilko Kroeger wrote:
>>>>>
>>>>> Hello Armando
>>>>>
>>>>> Sorry for the late reply. I didn't manage to talk to Andy about it 
>>>>> but I will try on Wednesday.
>>>>> I think all your config files and start scripts look fine. One 
>>>>> thing you could do is to remove the
>>>>> all.role server
>>>>> option. By default xrootd starts up as a server, and if this 
>>>>> directive is omitted xrootd will not try to connect to an olbd (
>>>>>  080516 13:17:43 24822 odc_Open: Unable to connect socket to
>>>>>  /tmp/.olb/olbd.admin; connection refused)
>>>>>
>>>>> To make sure that your xrootd got started properly you can just do 
>>>>> "ps -ef | grep xrootd" and you should only see the -l <logfile> and 
>>>>> -c <xrootd.cf> options being used.
>>>>>
>>>>> What I don't understand is from your xrdlog file:
>>>>>
>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.15816:3890@gridwn75 login
>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.15816:4155@gridwn75 disc
>>>>>  0:00:40 (ended by babarsgm.15816:3890@gridwn75)
>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15816:4155@gridwn75 login
>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15816:3890@gridwn75 disc
>>>>>   0:00:03 (ended by babarsgm.15816:4155@gridwn75)
>>>>>
>>>>> The client babarsgm.15816 was connected, but some timeout happened 
>>>>> that caused the client to reconnect at 14:36:00, closing its 
>>>>> previous session (babarsgm.15816:4155). The strange thing now is 
>>>>> that after three seconds, at 14:36:03, the client reconnects 
>>>>> again, which closes the previous session (from 14:36:00). Three 
>>>>> seconds is very short, and I don't think the client or server has 
>>>>> any timeout this short.
>>>>> Therefore I also think that
>>>>> rootenv XNet.RequestTimeout 180
>>>>> would not help.
>>>>>
>>>>> I will talk with Andy maybe he has an idea.
>>>>>
>>>>> Cheers,
>>>>>    Wilko
>>>>>
>>>>>
>>>>> On Tue, 20 May 2008, Armando Fella wrote:
>>>>>
>>>>>> Reminder
>>>>>>
>>>>>> Armando Fella wrote:
>>>>>>> Here you can find the files I referred to in the email:
>>>>>>>
>>>>>>> http://www.cnaf.infn.it/~afella/StartXRD.cf
>>>>>>> http://www.cnaf.infn.it/~afella/StartXRD
>>>>>>>
>>>>>>> Cheers,  Armando
>>>>>>>
>>>>>>> Armando Fella wrote:
>>>>>>>>
>>>>>>>> Hi Wilko,
>>>>>>>>
>>>>>>>> the machine has not been reinstalled, nor has the kernel been 
>>>>>>>> upgraded; the following is the info about the OS and 
>>>>>>>> architecture:
>>>>>>>>
>>>>>>>> [root@babarxrd ~]# uname -a
>>>>>>>> Linux babarxrd.pi.infn.it 2.6.9-55.EL #1 Thu May 3 23:04:51 CDT 
>>>>>>>> 2007 i686 athlon i386 GNU/Linux
>>>>>>>> [root@babarxrd ~]# cat /etc/redhat-release
>>>>>>>> Scientific Linux SL release 4.5 (Beryllium)
>>>>>>>> [root@babarxrd ~]#
>>>>>>>> [root@babarxrd ~]# rpm -qa --queryformat '[("%{NAME}","%{VERSION}-%{RELEASE}","%{ARCH}")\n]' | grep xroo
>>>>>>>> ("xrootd","20071101-0808p1.slac","i386")
>>>>>>>> [root@babarxrd ~]#
>>>>>>>>
>>>>>>>> I suspect that the problem could be in the correctness of 
>>>>>>>> StartXRD.cf and StartXRD, which I got from the xrootd tgz 
>>>>>>>> package (can you check them in the attachment?). I installed 
>>>>>>>> the rpm and got the two files from the tgz.
>>>>>>>>
>>>>>>>> Is it possible to increase the client timeout, so that a 
>>>>>>>> possible network bottleneck can be worked around?
>>>>>>>> The KanAccess.cfg instruction to add is:
>>>>>>>>
>>>>>>>>  rootenv XNet.RequestTimeout 180
>>>>>>>>
>>>>>>>> Is 180 the right value?
>>>>>>>>
>>>>>>>> Right now I'm not able to increase the site job load, so the 
>>>>>>>> xrdlog and pstack are probably not so meaningful; in any case, 
>>>>>>>> they follow:
>>>>>>>>
>>>>>>>> [root@babarxrd ~]# pidof xrootd
>>>>>>>> 24822
>>>>>>>> [root@babarxrd ~]# pstack 24822
>>>>>>>> Thread 16 (Thread -1208767584 (LWP 24823)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b8feac in pthread_cond_timedwait@@GLIBC_2.3.2 ()
>>>>>>>> #2  0x08095948 in XrdSysCondVar::Wait ()
>>>>>>>> #3  0x0807b04a in XrdBuffManager::Reshape ()
>>>>>>>> #4  0x0807a8c0 in XrdReshaper ()
>>>>>>>> #5  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #6  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #7  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 15 (Thread -1209558112 (LWP 24824)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b8feac in pthread_cond_timedwait@@GLIBC_2.3.2 ()
>>>>>>>> #2  0x08095948 in XrdSysCondVar::Wait ()
>>>>>>>> #3  0x0808477f in XrdScheduler::TimeSched ()
>>>>>>>> #4  0x0808303e in XrdStartTSched ()
>>>>>>>> #5  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #6  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #7  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 14 (Thread -1210348640 (LWP 24825)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b91b4f in [log in to unmask] () from 
>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>> #2  0x08079b60 in XrdSysSemaphore::Wait ()
>>>>>>>> #3  0x08083d1a in XrdScheduler::Run ()
>>>>>>>> #4  0x08083070 in XrdStartWorking ()
>>>>>>>> #5  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #6  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #7  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 13 (Thread -1211139168 (LWP 24826)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b91b4f in [log in to unmask] () from 
>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>> #2  0x08079b60 in XrdSysSemaphore::Wait ()
>>>>>>>> #3  0x08083d1a in XrdScheduler::Run ()
>>>>>>>> #4  0x08083070 in XrdStartWorking ()
>>>>>>>> #5  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #6  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #7  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 12 (Thread -1212376160 (LWP 24827)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00895e04 in poll () from /lib/tls/libc.so.6
>>>>>>>> #2  0x0808130e in XrdPollPoll::Start ()
>>>>>>>> #3  0x0807fb6c in XrdStartPolling ()
>>>>>>>> #4  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #5  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #6  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 11 (Thread -1213346912 (LWP 24828)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00895e04 in poll () from /lib/tls/libc.so.6
>>>>>>>> #2  0x0808130e in XrdPollPoll::Start ()
>>>>>>>> #3  0x0807fb6c in XrdStartPolling ()
>>>>>>>> #4  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #5  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #6  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 10 (Thread -1214317664 (LWP 24829)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00895e04 in poll () from /lib/tls/libc.so.6
>>>>>>>> #2  0x0808130e in XrdPollPoll::Start ()
>>>>>>>> #3  0x0807fb6c in XrdStartPolling ()
>>>>>>>> #4  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #5  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #6  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 9 (Thread -1215108192 (LWP 24830)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b92db6 in __nanosleep_nocancel () from 
>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>> #2  0x080963a8 in XrdSysTimer::Wait ()
>>>>>>>> #3  0x00210c7d in XrdOdcFinderTRG::Hookup () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #4  0x00210a7c in XrdOdcFinderTRG::Start () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #5  0x00210846 in XrdOdcStartOlb () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #6  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #7  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #8  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 8 (Thread -1215898720 (LWP 24831)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b925cb in __read_nocancel () from /lib/tls/libpthread.so.0
>>>>>>>> #2  0x08093259 in XrdOucStream::GetLine ()
>>>>>>>> #3  0x001f1d85 in XrdOfsEvr::recvEvents () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #4  0x001f16d8 in XrdOfsEvRecv () from /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #5  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #6  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #7  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 7 (Thread -1216689248 (LWP 24832)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b91b4f in [log in to unmask] () from 
>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>> #2  0x08079b60 in XrdSysSemaphore::Wait ()
>>>>>>>> #3  0x001f1ac3 in XrdOfsEvr::flushEvents () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #4  0x001f170a in XrdOfsEvFlush () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #5  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #6  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #7  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 6 (Thread -1217479776 (LWP 24833)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x007ffb30 in do_sigwaitinfo () from /lib/tls/libc.so.6
>>>>>>>> #2  0x007ffc0f in sigwaitinfo () from /lib/tls/libc.so.6
>>>>>>>> #3  0x0020c8f2 in XrdOssAioWait () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #4  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #5  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #6  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 5 (Thread -1218270304 (LWP 24834)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x007ffb30 in do_sigwaitinfo () from /lib/tls/libc.so.6
>>>>>>>> #2  0x007ffc0f in sigwaitinfo () from /lib/tls/libc.so.6
>>>>>>>> #3  0x0020c8f2 in XrdOssAioWait () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #4  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #5  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #6  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 4 (Thread -1219060832 (LWP 24835)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b92db6 in __nanosleep_nocancel () from 
>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>> #2  0x0020dcbe in XrdOssSys::CacheScan () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #3  0x001ff4ec in XrdOssCacheScan () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #4  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #5  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #6  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 3 (Thread -1219851360 (LWP 24836)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b92db6 in __nanosleep_nocancel () from 
>>>>>>>> /lib/tls/libpthread.so.0
>>>>>>>> #2  0x001eba6f in XrdOfsIdleScan () from 
>>>>>>>> /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> #3  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #4  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #5  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 2 (Thread -1220641888 (LWP 24837)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b927d8 in accept () from /lib/tls/libpthread.so.0
>>>>>>>> #2  0x0808d002 in XrdNetSocket::Accept ()
>>>>>>>> #3  0x0805c833 in XrdXrootdAdmin::Start ()
>>>>>>>> #4  0x0805c4bb in XrdXrootdInitAdmin ()
>>>>>>>> #5  0x08095856 in XrdSysThread_Xeq ()
>>>>>>>> #6  0x00b8d3cc in start_thread () from /lib/tls/libpthread.so.0
>>>>>>>> #7  0x0089fc3e in clone () from /lib/tls/libc.so.6
>>>>>>>> Thread 1 (Thread -1208764736 (LWP 24822)):
>>>>>>>> #0  0x007bd7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
>>>>>>>> #1  0x00b927d8 in accept () from /lib/tls/libpthread.so.0
>>>>>>>> #2  0x0808a1fe in XrdNet::do_Accept_TCP ()
>>>>>>>> #3  0x080899de in XrdNet::Accept ()
>>>>>>>> #4  0x080893c8 in XrdInet::Accept ()
>>>>>>>> #5  0x0807f359 in mainAccept ()
>>>>>>>> #6  0x0807f6d1 in main ()
>>>>>>>> [root@babarxrd ~]#
>>>>>>>>
>>>>>>>> The xrdlog contains the daemon startup messages; no jobs are 
>>>>>>>> running now:
>>>>>>>>
>>>>>>>> 080516 13:16:33 001 Scalla is starting. . .
>>>>>>>> Copr.  2007 Stanford University, xrd version 20071101-0808p1_dbg
>>>>>>>> Config using configuration file /opt/xrootd/etc/xrootd.cf
>>>>>>>> ++++++ xrootd [log in to unmask] initialization started.
>>>>>>>> Config maximum number of connections restricted to 65535
>>>>>>>> Copr.  2007 Stanford University, xrootd version 2.9.0 build 
>>>>>>>> 20071101-0808p1
>>>>>>>> ++++++ xrootd protocol initialization started.
>>>>>>>> =====> xrootd.fslib /opt/xrootd/lib/libXrdOfs.so
>>>>>>>> =====> xrootd.export /store
>>>>>>>> Config warning: 'xrootd.seclib' not specified; strong 
>>>>>>>> authentication disabled!
>>>>>>>> Copr.  2007 Stanford University, Ofs Version 20071101-0808p1_dbg
>>>>>>>> ++++++ File system initialization started.
>>>>>>>> =====> all.role server
>>>>>>>> ++++++ Configuring server role. . .
>>>>>>>> Config effective /opt/xrootd/etc/xrootd.cf ofs configuration:
>>>>>>>>        ofs.role server
>>>>>>>>        ofs.fdscan     9 120 1200
>>>>>>>>        ofs.maxdelay   60
>>>>>>>>        ofs.trace      0
>>>>>>>> ------ File system server initialization completed.
>>>>>>>> Copr.  2007, Stanford University, oss Version 20071101-0808p1_dbg
>>>>>>>> ++++++ Storage system initialization started.
>>>>>>>> =====> oss.localroot /data
>>>>>>>> =====> oss.path /store r/o
>>>>>>>> 080516 13:16:33 24822 odc_Open: Unable to connect socket to 
>>>>>>>> /tmp/.olb/olbd.admin; connection refused
>>>>>>>> Config effective /opt/xrootd/etc/xrootd.cf oss configuration:
>>>>>>>>        oss.alloc        0 0 0
>>>>>>>>        oss.cachescan    600
>>>>>>>>        oss.compdetect   *
>>>>>>>>        oss.fdlimit      32767 65535
>>>>>>>>        oss.maxdbsize    0
>>>>>>>>        oss.localroot /data
>>>>>>>>        oss.trace        0
>>>>>>>>        oss.xfr          1 9437184 30 10800
>>>>>>>>        oss.memfile off  max 1062334464
>>>>>>>>        oss.defaults  r/w  nocheck nodread nomig norcreate nostage
>>>>>>>>        oss.path /store r/o  nocheck nodread nomig norcreate nostage
>>>>>>>> ------ Storage system initialization completed.
>>>>>>>> Config warning: 'xrootd.prepare logdir' not specified; prepare 
>>>>>>>> tracking disabled.
>>>>>>>> Config exporting /store
>>>>>>>> ------ xrootd protocol initialization completed.
>>>>>>>> ------ xrootd [log in to unmask]:1094 initialization 
>>>>>>>> completed.
>>>>>>>> 080516 13:17:43 24822 odc_Open: Unable to connect socket to 
>>>>>>>> /tmp/.olb/olbd.admin; connection refused
>>>>>>>> 080516 13:18:53 24822 odc_Open: Unable to connect socket to 
>>>>>>>> /tmp/.olb/olbd.admin; connection refused
>>>>>>>> ....
>>>>>>>> ....
>>>>>>>> ....
>>>>>>>>
>>>>>>>>  Thank you for all the help
>>>>>>>>
>>>>>>>> Cheers,  Armando
>>>>>>>>
>>>>>>>> Wilko Kroeger wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hello Armando
>>>>>>>>>
>>>>>>>>> The config certainly looks fine, and the load that you 
>>>>>>>>> describe should not cause problems either. It is a little bit 
>>>>>>>>> strange that you suddenly see this problem. Do you know if 
>>>>>>>>> anything happened to the machine itself (kernel update, ...)?
>>>>>>>>>
>>>>>>>>> The disconnect messages you see are due to timeouts. The client 
>>>>>>>>> does not receive the response from the server within a certain 
>>>>>>>>> timeout and therefore it will reconnect to the server and close 
>>>>>>>>> (disconnect) its previous session.
>>>>>>>>>
>>>>>>>>> If you still have the problem you could take a pstack
>>>>>>>>>   pstack <xrootd_pid>  > outfile
>>>>>>>>> and send the output to me and Andy (or put it somewhere at 
>>>>>>>>> SLAC), and maybe also the xrdlog file.
>>>>>>>>> I assume the server is a Linux machine.
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>>     Wilko
>>>>>>>>>
>>>>>>>>> On Thu, 15 May 2008, Armando Fella wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> we are experiencing unexpected problems on the xrootd system 
>>>>>>>>>> dedicated to cdb, cfg and bkg access in SP production 
>>>>>>>>>> (24.2.1l). Three days ago the xrootd server (no olbd) on the 
>>>>>>>>>> single machine serving around 100 SP jobs switched off, and 
>>>>>>>>>> since its restart the fail rate has been around 95%. The 
>>>>>>>>>> server worked properly for 6 months with a peak of 200 jobs 
>>>>>>>>>> asking for files.
>>>>>>>>>>
>>>>>>>>>> I checked the parameters in xrootd.cf, but they are the 
>>>>>>>>>> standard ones:
>>>>>>>>>>
>>>>>>>>>> [root@babarxrd ~]# cat /opt/xrootd/etc/xrootd.cf
>>>>>>>>>> xrootd.fslib /opt/xrootd/lib/libXrdOfs.so
>>>>>>>>>> xrootd.export /store
>>>>>>>>>>
>>>>>>>>>> oss.localroot /data
>>>>>>>>>> oss.path /store r/o
>>>>>>>>>>
>>>>>>>>>> all.role server
>>>>>>>>>>
>>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>>>
>>>>>>>>>> The xrdlog continuously shows connections logging in and 
>>>>>>>>>> disconnecting:
>>>>>>>>>>
>>>>>>>>>> [root@babarxrd ~]# tail xrdlog
>>>>>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.3924:4170@gridwn24 
>>>>>>>>>> login
>>>>>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.3924:3890@gridwn24 
>>>>>>>>>> disc 0:01:49 (ended by babarsgm.3924:4170@gridwn24)
>>>>>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.15816:3890@gridwn75 
>>>>>>>>>> login
>>>>>>>>>> 080513 14:36:00 13753 XrootdXeq: babarsgm.15816:4155@gridwn75 
>>>>>>>>>> disc 0:00:40 (ended by babarsgm.15816:3890@gridwn75)
>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15816:4155@gridwn75 
>>>>>>>>>> login
>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15816:3890@gridwn75 
>>>>>>>>>> disc 0:00:03 (ended by babarsgm.15816:4155@gridwn75)
>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15208:3890@gridwn65 
>>>>>>>>>> login
>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.15208:4141@gridwn65 
>>>>>>>>>> disc 0:00:36 (ended by babarsgm.15208:3890@gridwn65)
>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.3924:4141@gridwn24 
>>>>>>>>>> login
>>>>>>>>>> 080513 14:36:03 13753 XrootdXeq: babarsgm.3924:4170@gridwn24 
>>>>>>>>>> disc 0:00:03 (ended by babarsgm.3924:4141@gridwn24)
>>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>>> The xrootd version is:
>>>>>>>>>>
>>>>>>>>>> [root@babarxrd ~]# rpm -qa | grep xroot
>>>>>>>>>> xrootd-20071101-0808p1.slac
>>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Another symptom is the quick rise in the number of 
>>>>>>>>>> connections (70 jobs):
>>>>>>>>>>
>>>>>>>>>> [root@babarxrd ~]# netstat -natup | grep xroot | wc
>>>>>>>>>>    3715   26005  404935
>>>>>>>>>> [root@babarxrd ~]# lsof -i | grep xroot | grep gridwn131.pi.infn.it | wc
>>>>>>>>>>     104     936   12480
>>>>>>>>>> [root@babarxrd ~]# lsof -i | grep xroot | grep gridwn84.pi.infn.it | wc
>>>>>>>>>>     205    1845   24395
>>>>>>>>>> [root@babarxrd ~]# lsof -i | grep xroot | grep gridwn76.pi.infn.it | wc
>>>>>>>>>>     115    1035   13685
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> dstat output (not much throughput):
>>>>>>>>>>
>>>>>>>>>> [root@babarxrd ~]# ./dstat
>>>>>>>>>> ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
>>>>>>>>>> usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
>>>>>>>>>>  40   1  58   2   0   0|  25k   27k|   0     0 |   0  3.7B |1699   265
>>>>>>>>>>   0   4  96   0   0   0|   0    64k|  31k  520k|   0     0 |1937   391
>>>>>>>>>>   0   5  95   0   0   0|   0    16k|  38k  619k|   0     0 |2109   480
>>>>>>>>>>   0   4  96   0   0   0|  16k    0 |  34k  638k|   0     0 |2045   349
>>>>>>>>>>   0   3  96   1   0   0|  24k  120k|  36k  592k|   0     0 |2068   419
>>>>>>>>>>   1   3  96   0   0   0|   0   248k|  36k  547k|   0     0 |2068   482
>>>>>>>>>>   0   3  96   1   0   0|  20k   24k|  33k  580k|   0     0 |2001   344
>>>>>>>>>>   0   4  96   0   0   0|4096B 8192B|  39k  660k|   0     0 |2126   437
>>>>>>>>>> [root@babarxrd ~]#
>>>>>>>>>> On the client side, the KanAccess.cfg file is:
>>>>>>>>>>
>>>>>>>>>> rootenv XNet.RedirDomainAllowRE *.infn.it
>>>>>>>>>> rootenv XNet.ConnectDomainAllowRE *.infn.it
>>>>>>>>>>
>>>>>>>>>> read /store/cfg/* xrootd $XROOTD_HOST:1094/
>>>>>>>>>> write /store/cfg/* error
>>>>>>>>>>
>>>>>>>>>> read /store/cdb/* xrootd $XROOTD_HOST:1094/
>>>>>>>>>> write /store/cdb/* error
>>>>>>>>>>
>>>>>>>>>> read /store/* xrootd $XROOTD_HOST:1094/
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Any hints would be appreciated.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks,  Armando
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> -- 
>>>>>>>>>> ===================================================
>>>>>>>>>> Armando Fella (BaBar support at INFN-CNAF Tier-1)
>>>>>>>>>> ---------------------------------------------------
>>>>>>>>>> Viale Berti Pichat 6/2, 40127, Bologna
>>>>>>>>>> office 3, via Ranzani 13/2
>>>>>>>>>>
>>>>>>>>>> Email: armando.fella at cnaf.infn.it
>>>>>>>>>>      armando.fella at pi.infn.it
>>>>>>>>>>      armando.fella at gmail.com
>>>>>>>>>>
>>>>>>>>>> Phone in Bologna:  +39 051 6092 902
>>>>>>>>>> Phone in Pisa:     +39 050 2214 231
>>>>>>>>>> ===================================================
>>>>>>>>>>
>>>>>>>>>> --------------------------------------------------------------------- 
>>>>>>>>>> Unless unavoidable, no Word, Excel or PowerPoint attachments, 
>>>>>>>>>> please.
>>>>>>>>>> See http://www.gnu.org/philosophy/no-word-attachments.html
>>>>>>>>>>
>>>>>>>>>> --------------------------------------------------------------------- 
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>>>
>>
> 

