...do you need to procure 1Gb switch ports then? How many?

 	Antonio

Yang, Wei wrote today:

> Here is a summary (and my opinion) after talking to several of you on Friday about the PROOF configuration:
>
> 1. Spindles. Both David and Bart, and the Wisconsin PROOF people, think that more spindles are important. The R510 can host 12 data disks while the R710 can host only 6 (system and data together), so the R510 clearly has an advantage. There were suggestions to use 1TB drives instead of 2TB drives to save money for CPUs. After carefully examining the config, I think we should stay with 2TB drives: switching to 1TB would cut the disk space in half (~50TB) and only give us 8 more cores.
>
> 2. 10Gb. The price quote for the Dell PowerConnect 8024 10Gb switch is pretty expensive, ~$9400. Someone who previously worked at the AGL Tier 2 (whom I happened to meet on Friday) told me that AGLT2 had a bad experience with the Dell switches. So I suggest we go with a 1Gb network. This should not have a negative impact on data loading and offloading speed as long as users run several data transfer streams in parallel. The need for inter-cluster network traffic should be minimized by PROOF itself.
>
> 3. The head node may still need to be 10Gb so that users can use it to manually transfer data in and out. I will look for an R710 and hook it to the Tier 2's 8024F switch.
>
> 4. DDM site. We discussed having a DDM site (SRM endpoint) for the cluster, to allow direct DDM transfers in and out in addition to using dq2-get to write directly to the cluster. I think a DDM site is a good thing to have, but I need to talk to the US Tier 3 technical people about the catalog setup (so there is no guarantee), and we should remember that with a DDM site it is always easy to copy data in but hard to delete it.
>
> So I will work with Teri Church on the price of an R710 with 10Gb and several R510s with 1Gb unless I hear objections ...
>
> regards,
> Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
>
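
A rough back-of-the-envelope for the drive-size trade-off in point 1 above. The worker count below is an assumption chosen only to illustrate the ~50TB figure Wei quotes; the real numbers come from the Dell quote.

    # Drive-size trade-off sketch (point 1). Node count is illustrative only.
    NODES = 4                # assumed number of R510 worker nodes
    DRIVES_PER_R510 = 12     # data drives per R510, per the message above

    for drive_tb in (2, 1):
        total_tb = NODES * DRIVES_PER_R510 * drive_tb
        print(f"{drive_tb}TB drives: {total_tb} TB raw data space")
    # 2TB drives: 96 TB raw data space
    # 1TB drives: 48 TB raw data space  (roughly the ~50TB cut mentioned above)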
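
Similarly, a minimal sketch of the aggregate-bandwidth argument in point 2, assuming transfers fan out across the workers rather than funneling through one link; the per-link efficiency and worker count are assumptions for illustration, not measured numbers.

    # Aggregate ingest over 1Gb worker ports (point 2).
    NODES = 4            # assumed R510 worker count, as in the sketch above
    GBPS_PER_PORT = 1.0  # 1Gb/s per worker port
    EFFICIENCY = 0.8     # assumed usable fraction of each link

    aggregate_gbps = NODES * GBPS_PER_PORT * EFFICIENCY
    print(f"~{aggregate_gbps:.1f} Gb/s aggregate with parallel streams")
    # vs. ~1 Gb/s if all transfers go through a single 1Gb link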

__
.   __      ____
.  /_/\    /_/_/      Antonio Ceseracciu             Network
. /_/\_\  /_/        [log in to unmask]     Specialist
./_/_/\_\ \_\ _       SLAC +1 (650) 926 2895       SLAC/SCCS
._/    \_\ \_\_\