Dear Fabrice,

I think it is very important to make sure we all have the same view of the goals of this work. I will express the perspective of CC-IN2P3.

At CC-IN2P3 we provided 3 machines installed with a typical software environment, very much like the one the future DELL machines will be installed with. The objective was for the Qserv team to validate that the software can be installed and works on top of that reference environment, and to identify any missing dependencies. I understand from your previous e-mails that this goal has been reached, but please feel free to correct me if I'm wrong.

Now, the next step is to package Qserv so that we (CC-IN2P3) can deploy it on a larger cluster using the tools we currently use for that purpose, which are based on Puppet. For this step, we strongly prefer Qserv to be packaged in the form of RPMs. We intend the software to be installed on the local disk of each machine in the cluster. In particular, I think it is not wise at all to rely on the existence of a shared file system among the nodes in the Qserv cluster for deployment purposes. I understand that Qserv is designed as a shared-nothing database, so I'm assuming a shared file system is not needed at runtime either, but again, I may be wrong.

From your message I understand that the Qserv team does not intend to provide RPMs. Yvan offered to help build the RPMs, based on the information and experience gathered so far while deploying Qserv on the 3 test machines. Yvan's offer still holds.

In summary, from our perspective the next steps would be:

1. With your contribution, package Qserv in the form of RPMs.
2. We at CC-IN2P3 will use those RPMs to validate that Qserv can be installed with our existing tools.
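
For reference, a skeletal RPM spec file along these lines could serve as a starting point for step 1. Every name, version, and path below is a hypothetical placeholder, not something from the Qserv project:

```spec
# qserv.spec -- hypothetical skeleton; all names, versions and paths are placeholders
Name:           qserv
Version:        0.1.0
Release:        1%{?dist}
Summary:        Qserv distributed database (placeholder summary)
License:        GPLv3
Source0:        %{name}-%{version}.tar.gz

%description
Placeholder description for a Qserv build installed on local disk.

%prep
%setup -q

%build
# build commands would go here

%install
mkdir -p %{buildroot}/opt/qserv
# copy the built tree into %{buildroot}/opt/qserv

%files
/opt/qserv
```

Installing to a fixed local prefix such as /opt/qserv (rather than anything under a shared mount) would match the local-disk deployment model described above.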

Once we have reached that stage, any update to Qserv would be delivered as a new release in the form of RPMs. At CC-IN2P3 we would take care of deploying those updates to the machines in the cluster.

We do intend to explore other techniques, such as Docker containers, but not before making sure we have reached our first milestone, which is to have Qserv installed on the cluster using our current tools.

Best regards,


On 2014/11/12, at 23:07 , Fabrice Jammes <[log in to unmask]> wrote:

> Hello,
> 
> Although it is planned for the long term, the Qserv team will not work on RPM packaging in the short term.
> 
> With Andy Salnikov, from SLAC, we propose two strategies for installing Qserv on the future DELL cluster.
> 
> Note that the binaries used by the worker and the master are exactly the same; their total size with dependencies is around 1 GB, although not everything is used.
> 
> 1. On the master node, we can install the binaries in /sps/lsst/Qserv/stack, and then have the post-install script configure the worker nodes to use these binaries directly.
> But could you please confirm that the /sps filesystem will be able to sustain the load of the 25-50 nodes using these binaries while Qserv is running?
> 
> 2. We can build Qserv on the master node of the cluster, rsync the binaries to /sps/lsst/Qserv/stack, and then have each worker node run a post-install script that rsyncs the binaries from /sps to its local filesystem and configures the node.
> 
> Which one of these solutions would you recommend?
> 
> Furthermore, do you think we could get access to the LSST distribution server (https://sw.lsstcorp.org/eupspkg/) from the master node of the cluster? This would ease our work a lot.
> 
> Cheers,
> 
> Fabrice
> 


Fabio Hernandez

CNRS – IN2P3 Computing Centre · Lyon (France)     ·     e-mail: [log in to unmask]     ·     tel: +33 4 78 93 08 80


