XROOTD-L Archives

XROOTD-L@LISTSERV.SLAC.STANFORD.EDU


XROOTD-L December 2009

Subject: Re: xrootd with more than 65 machines
From: Andrew Hanushevsky <[log in to unmask]>
Date: Fri, 11 Dec 2009 14:41:18 -0800 (PST)
Content-Type: MULTIPART/MIXED
Parts/Attachments: TEXT/PLAIN (281 lines)

Hi Wen,

Could you start everything up and provide me a pointer to the manager log file, supervisor 
log file, and one data server log file, all covering the same time frame (from start-up to 
some point where you think things are or are not working)? That way I can see what is 
happening. At the moment I only see two "bad" things in the config file:

1) Only atlas-bkp1.cs.wisc.edu is designated as a manager, yet the all.manager directives 
claim there are three managers (bkp2 and bkp3 as well). While this should still work, the 
log file will be dense with error messages. Please make the configuration consistent so 
that real errors are easier to see (a sketch follows point 2 below).

2) Please use cms.space, not olb.space (for historical reasons the latter is still accepted 
and overrides the former, but that will soon end), and please use only one of them; the 
config file currently uses both directives.
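
For what it's worth, here is a minimal sketch of what points 1 and 2 could look like in the 
config file. Only the host names come from this thread; the port number, the if/fi block, 
and the space values are illustrative assumptions, not values taken from the actual 
xrdcluster.cfg:

   # Name all three managers consistently on every node (the port is an assumed example)
   all.manager atlas-bkp1.cs.wisc.edu:3121
   all.manager atlas-bkp2.cs.wisc.edu:3121
   all.manager atlas-bkp3.cs.wisc.edu:3121

   # And actually give the manager role to all three hosts
   if atlas-bkp1.cs.wisc.edu atlas-bkp2.cs.wisc.edu atlas-bkp3.cs.wisc.edu
      all.role manager
   fi

   # Keep only the cms form of the space directive and drop any olb.space line
   # (the 2% / 1g values are placeholders, not a recommendation)
   cms.space min 2% 1g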

The xrootd system has an internal mechanism for connecting servers to supervisors so as to 
maximize reliability. You cannot change that algorithm, and there is no need to. You should 
*never* tell any node to connect directly to a supervisor; if you do, you will likely end 
up with unreachable nodes.
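
To make that concrete (again only a sketch, with the same assumed port as above; which 
machine gets which role is illustrative, not taken from the actual config), every node's 
config names only the managers and selects its role locally, and nothing ever names a 
supervisor as a connection target:

   # On the supervisor host(s)
   all.role supervisor

   # On each of the 65+ data servers
   all.role server

   # Both kinds of nodes carry the same all.manager lines shown earlier;
   # the cmsd decides on its own which data servers attach to which supervisor.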

As for dropping data servers, it would appear to me, given the flurry of 
such activity, that something either crashed or was restarted. That's why 
it would be good to see the complete log of each one of the entities.

Andy

On Fri, 11 Dec 2009, wen guan wrote:

> Hi Andrew,
>
>     I read the document and wrote a config file
> (http://wisconsin.cern.ch/~wguan/xrdcluster.cfg).
>     With my config, I can see the manager dispatching messages to the
> supervisor, but I cannot see any data server trying to connect to the
> supervisor. At the same time, in the manager's log, I can see that some
> data servers are dropped.
>     How does xrootd decide which data servers will connect to the
> supervisor? Should I specify some data servers to connect to it?
>
>
> (*) supervisor log
> 091211 15:07:00 30028 Dispatch manager.0:20@atlas-bkp2 for state dlen=42
> 091211 15:07:00 30028 manager.0:20@atlas-bkp2 do_State:
> /atlas/xrootd/users/wguan/test/test131141
> 091211 15:07:00 30028 manager.0:20@atlas-bkp2 do_StateFWD: Path find
> failed for state /atlas/xrootd/users/wguan/test/test131141
>
> (*)manager log
> 091211 04:13:24 15661 Admit c185.chtc.wisc.edu TSpace=5587GB NumFS=1
> FSpace=5693644MB MinFR=57218MB Util=0
> 091211 04:13:24 15661 Admit c185.chtc.wisc.edu adding path: w /atlas
> 091211 04:13:24 15661 server.10585:[log in to unmask]:1094
> do_Space: 5696231MB free; 0% util
> 091211 04:13:24 15661 Protocol:
> server.10585:[log in to unmask]:1094 logged in.
> 091211 04:13:24 001 XrdInet: Accepted connection from [log in to unmask]
> 091211 04:13:24 15661 XrdSched: running ?:[log in to unmask] inq=0
> 091211 04:13:24 15661 XrdProtocol: matched protocol cmsd
> 091211 04:13:24 15661 ?:[log in to unmask] XrdPoll: FD 79 attached
> to poller 2; num=22
> 091211 04:13:24 15661 Add server.21739:[log in to unmask] bumps
> server.15905:[log in to unmask]:1094 #63
> 091211 04:13:24 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:24 15661 Drop_Node:
> server.15905:[log in to unmask]:1094 dropped.
> 091211 04:13:24 15661 Add Shoved
> server.21739:[log in to unmask]:1094 to cluster; id=63.78; num=64;
> min=51
> 091211 04:13:24 15661 Update Counts Parm1=1 Parm2=0
> 091211 04:13:24 15661 Admit c187.chtc.wisc.edu TSpace=5587GB NumFS=1
> FSpace=5721854MB MinFR=57218MB Util=0
> 091211 04:13:24 15661 Admit c187.chtc.wisc.edu adding path: w /atlas
> 091211 04:13:24 15661 server.21739:[log in to unmask]:1094
> do_Space: 5721854MB free; 0% util
> 091211 04:13:24 15661 Protocol:
> server.21739:[log in to unmask]:1094 logged in.
> 091211 04:13:24 15661 XrdLink: Unable to recieve from
> c187.chtc.wisc.edu; connection reset by peer
> 091211 04:13:24 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:24 15661 XrdSched: scheduling drop node in 60 seconds
> 091211 04:13:24 15661 Remove_Node
> server.21739:[log in to unmask]:1094 node 63.78
> 091211 04:13:24 15661 Protocol: server.21739:[log in to unmask] logged out.
> 091211 04:13:24 15661 server.21739:[log in to unmask] XrdPoll: FD
> 79 detached from poller 2; num=21
> 091211 04:13:27 15661 Dispatch server.24718:[log in to unmask]:1094
> for status dlen=0
> 091211 04:13:27 15661 server.24718:[log in to unmask]:1094 do_Status: suspend
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Node: c177.chtc.wisc.edu service suspended
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c177.chtc.wisc.edu FD=16
> 091211 04:13:27 15661 Update Counts Parm1=0 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.24718:[log in to unmask]:1094 node 0.3
> 091211 04:13:27 15661 Protocol: server.21656:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.21656:[log in to unmask] XrdPoll: FD
> 16 detached from poller 2; num=20
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c179.chtc.wisc.edu FD=21
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.17065:[log in to unmask]:1094 node 1.4
> 091211 04:13:27 15661 Protocol: server.7978:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.7978:[log in to unmask] XrdPoll: FD 21
> detached from poller 1; num=21
> 091211 04:13:27 15661 State: Status changed to suspended
> 091211 04:13:27 15661 Send status to redirector.15656:14@atlas-bkp2
> 091211 04:13:27 15661 Dispatch server.12937:[log in to unmask]:1094
> for status dlen=0
> 091211 04:13:27 15661 server.12937:[log in to unmask]:1094 do_Status: suspend
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Node: c182.chtc.wisc.edu service suspended
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c182.chtc.wisc.edu FD=19
> 091211 04:13:27 15661 Update Counts Parm1=0 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.12937:[log in to unmask]:1094 node 7.10
> 091211 04:13:27 15661 Protocol: server.26620:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.26620:[log in to unmask] XrdPoll: FD
> 19 detached from poller 2; num=19
> 091211 04:13:27 15661 Dispatch server.10842:[log in to unmask]:1094
> for status dlen=0
> 091211 04:13:27 15661 server.10842:[log in to unmask]:1094 do_Status: suspend
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Node: c178.chtc.wisc.edu service suspended
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c178.chtc.wisc.edu FD=15
> 091211 04:13:27 15661 Update Counts Parm1=0 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.10842:[log in to unmask]:1094 node 9.12
> 091211 04:13:27 15661 Protocol: server.11901:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.11901:[log in to unmask] XrdPoll: FD
> 15 detached from poller 1; num=20
> 091211 04:13:27 15661 Dispatch server.5535:[log in to unmask]:1094
> for status dlen=0
> 091211 04:13:27 15661 server.5535:[log in to unmask]:1094 do_Status: suspend
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Node: c181.chtc.wisc.edu service suspended
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c181.chtc.wisc.edu FD=17
> 091211 04:13:27 15661 Update Counts Parm1=0 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.5535:[log in to unmask]:1094 node 5.8
> 091211 04:13:27 15661 Protocol: server.13984:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.13984:[log in to unmask] XrdPoll: FD
> 17 detached from poller 0; num=21
> 091211 04:13:27 15661 Dispatch server.23711:[log in to unmask]:1094
> for status dlen=0
> 091211 04:13:27 15661 server.23711:[log in to unmask]:1094 do_Status: suspend
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Node: c183.chtc.wisc.edu service suspended
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c183.chtc.wisc.edu FD=22
> 091211 04:13:27 15661 Update Counts Parm1=0 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.23711:[log in to unmask]:1094 node 8.11
> 091211 04:13:27 15661 Protocol: server.27735:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.27735:[log in to unmask] XrdPoll: FD
> 22 detached from poller 2; num=18
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c184.chtc.wisc.edu FD=20
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.4131:[log in to unmask]:1094 node 3.6
> 091211 04:13:27 15661 Protocol: server.26787:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.26787:[log in to unmask] XrdPoll: FD
> 20 detached from poller 0; num=20
> 091211 04:13:27 15661 Dispatch server.10585:[log in to unmask]:1094
> for status dlen=0
> 091211 04:13:27 15661 server.10585:[log in to unmask]:1094 do_Status: suspend
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Node: c185.chtc.wisc.edu service suspended
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c185.chtc.wisc.edu FD=23
> 091211 04:13:27 15661 Update Counts Parm1=0 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.10585:[log in to unmask]:1094 node 6.9
> 091211 04:13:27 15661 Protocol: server.8524:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.8524:[log in to unmask] XrdPoll: FD 23
> detached from poller 0; num=19
> 091211 04:13:27 15661 Dispatch server.20264:[log in to unmask]:1094
> for status dlen=0
> 091211 04:13:27 15661 server.20264:[log in to unmask]:1094 do_Status: suspend
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Node: c180.chtc.wisc.edu service suspended
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c180.chtc.wisc.edu FD=18
> 091211 04:13:27 15661 Update Counts Parm1=0 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.20264:[log in to unmask]:1094 node 4.7
> 091211 04:13:27 15661 Protocol: server.14636:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.14636:[log in to unmask] XrdPoll: FD
> 18 detached from poller 1; num=19
> 091211 04:13:27 15661 Dispatch server.1656:[log in to unmask]:1094
> for status dlen=0
> 091211 04:13:27 15661 server.1656:[log in to unmask]:1094 do_Status: suspend
> 091211 04:13:27 15661 Update Counts Parm1=-1 Parm2=0
> 091211 04:13:27 15661 Node: c186.chtc.wisc.edu service suspended
> 091211 04:13:27 15661 XrdLink: No RecvAll() data from c186.chtc.wisc.edu FD=24
> 091211 04:13:27 15661 Update Counts Parm1=0 Parm2=0
> 091211 04:13:27 15661 Remove_Node
> server.1656:[log in to unmask]:1094 node 2.5
> 091211 04:13:27 15661 Protocol: server.7849:[log in to unmask] logged out.
> 091211 04:13:27 15661 server.7849:[log in to unmask] XrdPoll: FD 24
> detached from poller 1; num=18
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: running drop node inq=0
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:14 15661 XrdSched: scheduling drop node in 13 seconds
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.66 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.68 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.69 cancelled.
> 091211 04:14:24 15661 Drop_Node 63.67 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.70 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.71 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.72 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.73 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.74 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.75 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=1
> 091211 04:14:24 15661 Drop_Node 63.76 cancelled.
> 091211 04:14:24 15661 XrdSched: Now have 68 workers
> 091211 04:14:24 15661 XrdSched: running drop node inq=0
> 091211 04:14:24 15661 Drop_Node 63.77 cancelled.
> 091211 04:14:24 15661 XrdSched: running drop node inq=0
>
> Wen
>
>
> On Fri, Dec 11, 2009 at 9:50 PM, Andrew Hanushevsky
> <[log in to unmask]> wrote:
>> Hi Wen,
>>
>> To go past 64 data servers you will need to set up one or more supervisors.
>> This does not logically change the current configuration you have. You only
>> need to configure one or more *new* servers (or at least xrootd processes)
>> whose role is supervisor. We'd like them to run on separate machines for
>> reliability purposes, but they could run on the manager node as long as you
>> give each one a unique instance name (i.e., the -n option).
>>
>> The front part of the cmsd reference explains how to do this.
>>
>> http://xrootd.slac.stanford.edu/doc/prod/cms_config.htm
>>
>> Andy
>>
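A sketch of the start-up the quoted advice describes; the instance name and config path are 
made-up examples, and only the -n option and the supervisor role come from the message. On 
a host that already runs the manager, a second cmsd/xrootd pair can be started under its 
own instance name, with the config selecting the supervisor role for that instance:

   # Start an extra cmsd/xrootd pair as instance "super1" (hypothetical name)
   cmsd   -n super1 -c /path/to/xrdcluster.cfg &
   xrootd -n super1 -c /path/to/xrdcluster.cfg &

   # In the config, a block of this assumed form would give that instance
   # the supervisor role:
   #    if named super1
   #       all.role supervisor
   #    fi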
>> On Fri, 11 Dec 2009, wen guan wrote:
>>
>>> Hi Andrew,
>>>
>>>   Is there any way to configure xrootd with more than 65
>>> machines? I used the configuration below, but it doesn't work. Should I
>>> configure some machines' manager to be a supervisor?
>>>
>>> http://wisconsin.cern.ch/~wguan/xrdcluster.cfg
>>>
>>>
>>> Wen
>>>
>>
>
