Jacek Becla wrote:
> Andy, IN2P3 is getting this cluster for free from Dell;
> the decisions were made many, many months ago, by
> Dell people. We didn't have many choices/options.
> We are very grateful for the 1 TB disks; there
> are plenty of them!

You got the point :) This is more of an arrangement than a technical
choice.

Cheers,

> Jacek
> 
> 
> On 01/16/2015 05:33 PM, Andrew Hanushevsky wrote:
> >Hmmm, why are you going with 1TB disks? Certainly 2TB disks would be
> >just as good and probably more cost-effective. Frankly, I'd price out
> >each size (1TB, 2TB, 3TB and even 4TB -- they should all be available
> >now in the 2.5" form factor).
> >
> >Andy
> >
> >On Fri, 16 Jan 2015, Yvan Calas wrote:
> >
> >>Hi,
> >>
> >>>On 15 Jan 2015, at 00:16, Fabrice Jammes <[log in to unmask]>
> >>>wrote:
> >>>
> >>>Here's what the Qserv team would like to have on the cluster, in
> >>>addition to what we defined previously:
> >>>
> >>>- Scientific Linux 7 on all nodes, in order to get C++11 support
> >>
> >>Do you need SL7 only because of C++11, or is there another reason?
> >>Actually, would it be possible to get full C++11 support on SL6 nodes?
> >>
> >>As I already told you, it might take time to install SL7 on servers at
> >>CC-IN2P3 (probably 2 months or more).
> >>
> >>>- 10 TB of shared storage available to all nodes, able to sustain a
> >>>large amount of I/O during data loading
> >>
> >>The features of the new Dell machines are as follows:
> >>
> >>- DELL PowerEdge R620
> >> + 2 x Intel Xeon E5-2603v2 processors, 1.80 GHz, 4 cores, 10 MB
> >>cache, 6.4 GT/s, 80 W
> >> + RAM: 16 GB DDR3 1600 MHz (2 x 8 GB)
> >> + 10 x 1 TB Nearline SAS 6 Gbps 7,200 RPM 2.5" disks - hotplug
> >> + 1 x H710p RAID card with 1 GB NVRAM
> >> + 1 x quad-port 1 GbE Broadcom 5720 Base-T card
> >> + 1 x iDRAC7 Enterprise card
> >> + redundant power supplies
> >>
> >>>and some questions:
> >>>
> >>>- what will the disk architecture be (what kind of RAID, something
> >>>else, or nothing?)
> >>
> >>Since there is 10 TB of raw disk on each server, we plan to configure
> >>the disks in RAID-6 (two parity disks, 7.4 TB available), or in RAID-5
> >>(one parity disk, 8.4 TB available) if you need more space on each
> >>node. If you are thinking of a better RAID configuration for Qserv,
> >>please let us know ;)
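> >>
> >>For what it's worth, the arithmetic behind those figures can be checked
> >>with a few lines of Python (a rough sanity check only, assuming vendor
> >>terabytes of 10^12 bytes and ignoring filesystem overhead):
> >>
> >>  # Usable capacity of 10 x 1 TB disks under RAID-5 and RAID-6.
> >>  # Vendors count 1 TB = 10**12 bytes, while the OS reports TiB
> >>  # (2**40 bytes), hence the "missing" space.
> >>  DISKS = 10
> >>  TB = 10**12   # one marketing terabyte, in bytes
> >>  TIB = 2**40   # one tebibyte, in bytes
> >>
> >>  for name, parity in (("RAID-5", 1), ("RAID-6", 2)):
> >>      data_disks = DISKS - parity
> >>      print("%s: %.1f TiB usable (%d data disks)"
> >>            % (name, data_disks * TB / TIB, data_disks))
> >>
> >>This prints about 8.2 TiB for RAID-5 and 7.3 TiB for RAID-6, close to
> >>the figures quoted above.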
> >>
> >>Note that we plan to install the first 25 machines in the computing
> >>center at the beginning of week 5 (26-27/01/2015).
> >>
> >>>- we don't know a lot about Puppet and would like to know what kind
> >>>of features it offers (system monitoring, service restart, ...)?
> >>
> >>The Qserv admins at CC-IN2P3 (mainly myself) will write a Puppet
> >>module in order to:
> >>
> >>- deploy the Qserv software automatically,
> >>- tune the OS and Qserv parameters.
> >>
> >>Moreover, since the software will run as the qserv user (as was the
> >>case last year on the 300+ nodes), I expect you will be able to restart
> >>the service, change the Qserv configuration files if needed, etc.,
> >>using sudo.
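> >>
> >>For illustration, a restart helper could look like the sketch below
> >>(the init-script path and the exact sudo rule are placeholders to be
> >>confirmed once the machines are installed, not the actual setup):
> >>
> >>  #!/usr/bin/env python3
> >>  # Hypothetical wrapper to restart Qserv as the qserv user via sudo.
> >>  import subprocess
> >>  import sys
> >>
> >>  # Placeholder command: the init-script path and service layout are
> >>  # assumptions, not the real CC-IN2P3 configuration.
> >>  RESTART_CMD = ["sudo", "-u", "qserv",
> >>                 "/opt/qserv/etc/init.d/qserv", "restart"]
> >>
> >>  def restart_qserv():
> >>      """Run the restart command and return its exit code."""
> >>      return subprocess.call(RESTART_CMD)
> >>
> >>  if __name__ == "__main__":
> >>      sys.exit(restart_qserv())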
> >>
> >>The monitoring will be based mainly on Nagios (with probes still to be
> >>defined and written) and on collectd/smurf. Note, however, that the
> >>plots generated by smurf will only be accessible from inside CC-IN2P3.
> >>If some extra monitoring is needed, we will deploy it.
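> >>
> >>To give an idea of what such a probe could look like, here is a minimal
> >>Nagios-style check in Python (the host and port are placeholders;
> >>Nagios only cares about the exit code: 0 = OK, 2 = CRITICAL):
> >>
> >>  #!/usr/bin/env python3
> >>  # Minimal Nagios-style probe: is the (hypothetical) Qserv frontend
> >>  # accepting TCP connections?
> >>  import socket
> >>  import sys
> >>
> >>  HOST = "qserv-master.example.org"  # placeholder host
> >>  PORT = 4040                        # placeholder port
> >>
> >>  def main():
> >>      try:
> >>          with socket.create_connection((HOST, PORT), timeout=5):
> >>              print("OK - %s:%d is accepting connections" % (HOST, PORT))
> >>              return 0
> >>      except OSError as exc:
> >>          print("CRITICAL - cannot connect to %s:%d: %s"
> >>                % (HOST, PORT, exc))
> >>          return 2
> >>
> >>  if __name__ == "__main__":
> >>      sys.exit(main())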
> >>
> >>
> >>>Would it be possible to talk to a Puppet/monitoring expert when this
> >>>kind of question comes up?
> >>
> >>I am not a Puppet expert, but I can try to answer all of your
> >>questions ;) If I don't know the answer, I will ask Mattieu, and if
> >>really needed you can contact him directly.
> >>
> >>Cheers,
> >>
> >>Yvan
> >>
> >>
> >>---
> >>Yvan Calas
> >>CC-IN2P3 -- Storage Group
> >>21 Avenue Pierre de Coubertin
> >>CS70202
> >>F-69627 Villeurbanne Cedex
> >>Tel: +33 4 72 69 41 73
> >>
> >>
> >
> 

-- 
Mattieu Puel
Sysadmins team manager
IN2P3 Computing Centre
http://cc.in2p3.fr
+33 (0)4 78 93 08 80

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the QSERV-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=QSERV-L&A=1