XROOTD-L Archives


XROOTD-L@LISTSERV.SLAC.STANFORD.EDU



XROOTD-L, November 2005

Subject: RE: FW: SC2005 Bandwidth Challenge Result (fwd)
From: "Cottrell, Les" <[log in to unmask]>
Date: Sun, 20 Nov 2005 23:46:29 -0800
Content-Type: text/plain
Parts/Attachments: text/plain (126 lines)

Hi Rene,
 
The mini-PetaCache at SLAC consisted of 10 Sun v20z's, each with dual 1.8GHz AMD Opteron CPUs, 2GB of main memory, a Chelsio or Neterion 10GE NIC and a 36 or 73GB SCSI system disk; plus 4 Sun v20z's, each with dual 2.6GHz AMD Opteron CPUs, 4GB of main memory, a Neterion or Chelsio 10GE NIC, a 73GB SCSI system disk and a dual 2 Gbits/s fibre channel connection to a Sun 3510 12-disk fibre channel tray.

The mini-PetaCache at SC05 had 10 Sun v20z's, each with dual 2.6GHz AMD Opteron CPUs, 4GB of main memory, a Neterion 10GE NIC and a 73GB SCSI system disk; plus 4 Sun v20z's, each with dual 1.8GHz AMD Opteron CPUs, 2GB of main memory, a Chelsio 10GE NIC and a 36GB SCSI system disk.  Each of the 2.6GHz machines also had dual 2 Gbits/s fibre channel HBAs connected to StorCloud, for a total of 20TBytes.

We ran xrootd with 125 clients per host, and had 3 pairs of machines for each of the two 10Gbits/s waves from SLAC to the SLAC/FNAL booth at SC05. The waves were provided by ESnet: one was a shared routed wave, the other a dedicated layer 2 wave. Using standard (Linux 2.6.12 New Reno) TCP, we achieved about 9.8Gbits/s in a single direction (3 hosts to 3 hosts), and over 16Gbits/s peak (for 5 mins) with both directions running simultaneously.
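(A minimal sketch of fanning out many concurrent xrootd client reads from one host, in the spirit of the 125-clients-per-host setup described above. This is not the harness actually used; the xrdcp command, server name and file path are placeholders.)

# Illustrative sketch only: launch many concurrent xrootd client reads from
# a single host, roughly mimicking "125 clients per host". The xrdcp command,
# server name and file path are assumptions, not details from this message.
import subprocess

N_CLIENTS = 125
SOURCE = "root://xrootd-server.example.org//store/testfile.root"  # hypothetical

# Each client streams the file to /dev/null so local disk speed does not
# limit the measurement.
procs = [subprocess.Popen(["xrdcp", "-f", SOURCE, "/dev/null"])
         for _ in range(N_CLIENTS)]

# Wait for completion and report how many transfers succeeded.
failures = sum(p.wait() != 0 for p in procs)
print("%d/%d client transfers completed" % (N_CLIENTS - failures, N_CLIENTS))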

For more details see: http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2005/hiperf.html
-----Original Message-----
From: Andrew Hanushevsky [mailto:[log in to unmask]] 
Sent: Sunday, November 20, 2005 9:38 PM
To: Cottrell, Les
Subject: Re: FW: SC2005 Bandwidth Challenge Result (fwd)

Hi Les,

Could you give Rene the details (please copy me); thanks.

Andy


---------- Forwarded message ----------
Date: Sun, 20 Nov 2005 08:14:40 +0100 (MET)
From: Rene Brun <[log in to unmask]>
To: Andrew Hanushevsky <[log in to unmask]>
Cc: Peter Elmer <[log in to unmask]>, Rene Brun <[log in to unmask]>,
     Fabrizio Furano <[log in to unmask]>,
     Gerardo Ganis <[log in to unmask]>,
     Fons Rademakers <[log in to unmask]>,
     Jean-Yves Nief <[log in to unmask]>, [log in to unmask]
Subject: Re: FW: SC2005 Bandwidth Challenge Result  (fwd)

Hi Andy,

Congratulations!

Could you give more details on the setup on both sides, in particular the mini-PetaCache at SLAC?

Rene

On Sat, 19 Nov 2005, Andrew Hanushevsky wrote:

> See the last paragraph where the SC05 bandwidth challenge was won 
> using xrootd. Seems like the server is the fastest thing out there today.
>
> Andy
>
> ---------- Forwarded message ----------
> Date: Thu, 17 Nov 2005 15:24:25 -0800
> From: "Cottrell, Les" <[log in to unmask]>
> To: [log in to unmask]
> Cc: "Rao, Nageswara S." <[log in to unmask]>, W. R. Wing <[log in to unmask]>,
>     scs-l <[log in to unmask]>, William E. Johnston <[log in to unmask]>,
>     Kevin Oberman <[log in to unmask]>,
>     "Calder, Neil" <[log in to unmask]>
> Subject: FW: SC2005 Bandwidth Challenge Result
>
> Attached is email from Harvey Newman concerning the Bandwidth Challenge.
>
> We (SLAC) need to especially thank UltraScienceNet and ESnet for the timely provision of the two ESnet waves (one routed, the other layer 2 via UltraScienceNet). Nagi Rao, Steven and Bill Wing from UltraScienceNet and Kevin Oberman from ESnet provided invaluable support whenever requested.
>
> With each of these 10Gbits/s (per direction) waves we were able to successfully read and send ~15Gbits/s over long periods (~8.5Gbits/s in one direction and ~6.5Gbits/s simultaneously in the other); see sea01-05.jpg for the ESnet routed layer 3 wave and sea06-10.jpg for the USN layer 2 wave. We also simultaneously wrote about 3 Tbytes/hour to StorCloud. The aggregate from the SLAC/FNAL booth was about 45-55Gbits/s (see alllinks.jpg). The aggregate from SLAC and FNAL peaked around 150Gbits/s (see bw.jpg; note the readouts were at 20 second intervals but the display is averaged over a longer interval), and during most of the challenge time we easily exceeded last year's record of about 101 Gbits/s (which was sustained for about 100 seconds).  BTW, I just heard we did win this year's bandwidth challenge; awards are this afternoon.
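(A back-of-envelope check of the per-wave figures quoted above, assuming decimal units, 1 TByte = 8e12 bits; this is an illustrative aside, not part of the original message.)

# Sum of the simultaneous directional rates on one 10 Gbits/s wave pair:
one_direction = 8.5    # Gbits/s
other_direction = 6.5  # Gbits/s in the opposite direction at the same time
print(one_direction + other_direction)   # ~15 Gbits/s total per wave

# "about 3 Tbytes/hour to StorCloud" expressed as an average bit rate:
storcloud_rate = 3 * 8e12 / 3600 / 1e9   # TBytes/hour -> Gbits/s
print(round(storcloud_rate, 1))          # ~6.7 Gbits/s of writes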
>
> With a single direction we were able to get up to 9.8Gbits/s.
>
> The main application we used was Andy Hanushevsky's xrootd between a mini-PetaCache cluster built in the SLAC booth and a mini-PetaCache cluster built at SLAC.
>
> -----Original Message-----
> From: Harvey Newman [mailto:[log in to unmask]]
> Sent: Thursday, November 17, 2005 10:21 AM
> To: Harvey Newman
> Cc: ultralight; Conrad Steenberg; Iosif LeGrand; Julian Bunn; Rick 
> Wilkinson; Suresh Singh; Xun Su; Saima Iqbal; Michael Thomas; 
> Frank.Van.Lingen; Yang Xia; Dan Nae; Bradley, W. Scott; Philippe 
> Galvez; ICFA SCIC; Michael Stanton; Doug Walsten; Philippe Levy; US 
> CMS Collaboration Board; US CMS Advisory Board; US CMS Level1/Level2; 
> Harvey Newman
> Subject: Re: SC2005 Bandwidth Challenge Result
>
>
> Dear Colleagues,
>
> Congratulations for a great job yesterday and throughout the week
>   by the HEP team.
>
> The SC2005 BWC from HEP was designed to preview the scale and complexity of data operations among many sites interconnected with many 10 Gbps links. We had 22 10 Gbps optical "waves" connected to the Caltech/CACR and FNAL/SLAC booths.
>
> We reached a measured peak of 150.7 Gbps, and sustained more than 100 
> Gbps for several hours using multiple applications based on TCP and in 
> many cases FAST: bbcp, xrootd, gridftp and dcache. We transported 470 
> Terabytes of physics data in under
> 24 hours.
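(The quoted volume implies roughly the following sustained average, assuming decimal units; this is an illustrative aside, not part of the original message.)

# Average rate implied by "470 Terabytes of physics data in under 24 hours"
# (decimal units assumed: 1 TByte = 8e12 bits).
tbytes, hours = 470, 24
avg_gbps = tbytes * 8e12 / (hours * 3600) / 1e9
print(round(avg_gbps, 1))   # ~43.5 Gbits/s averaged over the full day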
>
> The SCinet SC2005 network team assigned taps to monitor 17 of the waves at our booths and recorded a peak of 131 Gbps during a 15-minute measurement period last evening.
>
> We are awaiting official word but discussions with the judges last night indicated that we outpaced the competition by a wide margin.
>
> The exercise was not at all trivial. We needed to work through repeated system and/or network interface crashes under stress.
> A great number of kernel, configuration and routing issues had to be worked out in the days before the BWC itself. It is a tribute to the team that these were all worked through successfully.
>
> The result was a great learning experience, and it had lasting value in several areas (a partial list):
>
>   TCP optimization and Linux kernel building (including FAST);
>   Performance optimization and tuning of applications:
>     bbcp and xrootd from SLAC gave surprisingly good results, for example;
>   Use of production and test clusters at FNAL reaching
>     more than 20 Gbps of network throughput;
>   Stability limits of server and network interfaces (and heating)
>     under heavy loads.
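(As an illustration of the kind of TCP optimization mentioned in the first item above, here is a sketch of the socket-buffer tuning typically needed for high bandwidth-delay-product 10 Gbit/s paths of that era. The specific sysctl values are assumptions, not the settings actually used in the challenge, and writing to /proc/sys requires root.)

# Illustrative only: enlarge Linux TCP socket buffers for long fat pipes.
# Values are assumptions, not the challenge configuration.
import pathlib

tuning = {
    "net/core/rmem_max": "134217728",            # max receive buffer, 128 MB
    "net/core/wmem_max": "134217728",            # max send buffer, 128 MB
    "net/ipv4/tcp_rmem": "4096 87380 134217728", # min/default/max receive
    "net/ipv4/tcp_wmem": "4096 65536 134217728", # min/default/max send
}

for key, value in tuning.items():
    pathlib.Path("/proc/sys", key).write_text(value + "\n")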
>
> We also were very pleased with the participation of our international
>   partners from Brazil, Japan and Korea who worked hard in the
>   days to weeks before the competition to be able to participate
>   effectively.
>
> More as the day progresses. A BWC award session will take place at 15:30-17:00 Pacific time.
>
> Best regards
> Harvey
>
>


