This is much, much appreciated. Thanks, Douglas!

I will add that I found it simpler to have a script that starts the 
manager and some number of supervisors on a single machine, so I didn't 
have to deal with more than one type of node; that doesn't really 
affect how useful the cluster will be, though.
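
For what it's worth, a rough sketch of that kind of single-machine wrapper 
is below.  The two start commands are placeholders for whatever actually 
launches the manager and supervisor processes; they are not real qserv or 
xrootd binaries.

    #!/usr/bin/env python
    # Rough sketch: start one manager and N supervisors on this host.
    # The two commands below are placeholders, not the actual qserv/xrootd
    # start commands.
    import subprocess
    import sys

    num_supervisors = int(sys.argv[1]) if len(sys.argv) > 1 else 4

    procs = [subprocess.Popen(["./start-manager.sh"])]   # placeholder
    for i in range(num_supervisors):
        procs.append(subprocess.Popen(["./start-supervisor.sh", str(i)]))

    # Keep the wrapper in the foreground until everything exits.
    for p in procs:
        p.wait()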

-Daniel
On 04/11/2013 04:14 PM, Douglas Smith wrote:
> Ok, some notes on the testing on the yili machines, for your use; please
> use them.  If you need more details, just ask.  There are ~120 nodes up
> and running, loaded with ~10k chunks of data.  This is the pt12 object
> data only.  You can access it by pointing the mysql client at port 4040
> on yili0001 and trying queries there.
>
> The code is installed on each node at '/u1/douglas/prod', and you
> can find log files under the 'xrootd-run' dir there.  The controller
> machine is yili0001, and yili0002 is set up as the xrootd supervisor.
> The workers are listed in the file '/u1/douglas/list.txt'.  To start
> the servers on the controller there is the script start_all.py in
> /u1/douglas, and to stop them there is stop_all.py.  To do this on
> all the workers at once there is an ad-hoc script for now, but I have
> a new qserv-admin in Python that will replace it; it just hasn't been
> debugged yet.
>
> Let me know what else you want to know about the test bed
> here.
>
> Douglas
>
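
For reference, a minimal sketch of a trial query against the test bed from 
Python follows, assuming the mysql-connector-python package.  Only the host 
(yili0001) and port (4040) come from the message above; the user, database, 
and table names ("qsmaster", "LSST", "Object") are illustrative guesses.

    # Minimal sketch of a sanity-check query; user/database/table names
    # are guesses, only the host and port come from the message above.
    import mysql.connector

    cnx = mysql.connector.connect(host="yili0001", port=4040,
                                  user="qsmaster", database="LSST")
    cur = cnx.cursor()
    cur.execute("SELECT COUNT(*) FROM Object")
    print(cur.fetchone()[0])
    cur.close()
    cnx.close()

The plain mysql command-line client pointed at the same host and port works 
just as well.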
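
Similarly, here is a sketch of the kind of ad-hoc "run it on every worker" 
helper mentioned above, driven by the worker list in /u1/douglas/list.txt. 
It assumes the same start_all.py/stop_all.py scripts exist under /u1/douglas 
on each worker, which is a guess rather than something stated in the message.

    # Sketch of an ad-hoc helper: run start_all.py (or stop_all.py) on every
    # worker listed in /u1/douglas/list.txt over ssh.  Whether those scripts
    # exist at that path on the workers is an assumption.
    import subprocess
    import sys

    script = sys.argv[1] if len(sys.argv) > 1 else "start_all.py"

    with open("/u1/douglas/list.txt") as f:
        workers = [line.strip() for line in f if line.strip()]

    for host in workers:
        subprocess.call(["ssh", host, "python /u1/douglas/" + script])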

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the QSERV-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=QSERV-L&A=1