	I was asking the same question an hour ago ... The
article is not very specific and I am "dying" to learn more
about it ;-)  ...

	What a milestone, though (very nice indeed).

Rene Brun wrote:
> Hi Andy,
> 
> Congratulations!
> 
> Could you give more details on the setup on both sides,
> in particular the mini-Petacache at SLAC?
> 
> Rene
> 
> On Sat, 19 Nov 2005, Andrew Hanushevsky wrote:
> 
>> See the last paragraph where the SC05 bandwidth challenge was won using
>> xrootd. Seems like the server is the fastest thing out there today.
>>
>> Andy
>>
>> ---------- Forwarded message ----------
>> Date: Thu, 17 Nov 2005 15:24:25 -0800
>> From: "Cottrell, Les" <[log in to unmask]>
>> To: [log in to unmask]
>> Cc: "Rao, Nageswara S." <[log in to unmask]>, W. R. Wing <[log in to unmask]>,
>>     scs-l <[log in to unmask]>, William E. Johnston <[log in to unmask]>,
>>     Kevin Oberman <[log in to unmask]>,
>>     "Calder, Neil" <[log in to unmask]>
>> Subject: FW: SC2005 Bandwidth Challenge Result
>>
>> Attached is email from Harvey Newman concerning the Bandwidth Challenge.
>>
>> We (SLAC) need to especially thank UltraScienceNet and ESnet for the 
>> timely provision of the two ESnet waves (one routed, the other layer 2 
>> via UltraScienceNet). Nagi Rao, Steven and Bill Wing from 
>> UltraScienceNet and Kevin Oberman from ESnet provided invaluable support
>> whenever requested.
>>
>> With each of these 10 Gbits/s waves (per direction) we were able to
>> read and send ~15 Gbits/s successfully over long periods (~8.5 Gbits/s
>> in one direction and ~6.5 Gbits/s simultaneously in the other); see
>> sea01-05.jpg for the ESnet routed layer 3 wave and sea06-10.jpg for
>> the USN layer 2 wave. We also simultaneously wrote about 3 Tbytes/hour
>> to StorCloud. The aggregate from the SLAC/FNAL booth was about
>> 45-55 Gbits/s (see alllinks.jpg). The aggregate from SLAC and FNAL
>> peaked around 150 Gbits/s (see bw.jpg; note the readouts were at
>> 20-second intervals but the display is averaged over a longer
>> interval), and during most of the challenge time we easily exceeded
>> last year's record of about 101 Gbits/s (which was sustained for
>> about 100 seconds). BTW, I just heard we did win this year's
>> bandwidth challenge; awards are this afternoon.
>>
>> In a single direction we were able to get up to 9.8 Gbits/s.
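>>
>> As a quick back-of-the-envelope check (a sketch in Python; only the
>> 3 Tbytes/hour figure comes from the message above):
>>
>>   bytes_per_hour = 3e12                # ~3 Tbytes/hour to StorCloud
>>   gbits_per_sec = bytes_per_hour * 8 / 3600 / 1e9
>>   print(round(gbits_per_sec, 1))       # -> 6.7 Gbits/s of writes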
>>
>> The main application we used was Andy Hanushevsky's xrootd between a 
>> mini-PetaCache cluster built in the SLAC booth and a mini-PetaCache 
>> cluster built at SLAC.
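>>
>> For anyone unfamiliar with xrootd: it serves files over the root://
>> protocol and clients read byte ranges on demand. A minimal sketch
>> using the (much later) XRootD Python bindings; the host and path are
>> hypothetical, not the actual SC05 machines:
>>
>>   from XRootD import client
>>   from XRootD.client.flags import OpenFlags
>>
>>   with client.File() as f:
>>       # Open a remote file read-only over the xroot protocol.
>>       status, _ = f.open(
>>           "root://xrootd.example.org//store/run123/events.root",
>>           OpenFlags.READ)
>>       assert status.ok, status.message
>>       # Fetch the first megabyte; the server returns just that range.
>>       status, data = f.read(offset=0, size=1024 * 1024)
>>       print(len(data), "bytes read")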
>>
>> -----Original Message-----
>> From: Harvey Newman [mailto:[log in to unmask]]
>> Sent: Thursday, November 17, 2005 10:21 AM
>> To: Harvey Newman
>> Cc: ultralight; Conrad Steenberg; Iosif LeGrand; Julian Bunn; Rick 
>> Wilkinson; Suresh Singh; Xun Su; Saima Iqbal; Michael Thomas; 
>> Frank.Van.Lingen; Yang Xia; Dan Nae; Bradley, W. Scott; Philippe 
>> Galvez; ICFA SCIC; Michael Stanton; Doug Walsten; Philippe Levy; US 
>> CMS Collaboration Board; US CMS Advisory Board; US CMS Level1/Level2; 
>> Harvey Newman
>> Subject: Re: SC2005 Bandwidth Challenge Result
>>
>>
>> Dear Colleagues,
>>
>> Congratulations to the HEP team for a great job yesterday and
>> throughout the week.
>>
>> The SC2005 BWC from HEP was designed to preview the scale and 
>> complexity of data operations among many sites interconnected with 
>> many 10 Gbps links. We had 22 10 Gbps optical "waves" connected to the 
>> Caltech/CACR and FNAL/SLAC booths.
>>
>> We reached a measured peak of 150.7 Gbps and sustained more than
>> 100 Gbps for several hours using multiple applications based on TCP,
>> in many cases with FAST: bbcp, xrootd, GridFTP and dCache. We
>> transported 470 Terabytes of physics data in under 24 hours.
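>>
>> That is a high sustained average as well (a quick sketch; only the
>> 470 Terabytes and 24 hours come from the message above):
>>
>>   terabytes, hours = 470, 24
>>   avg_gbps = terabytes * 1e12 * 8 / (hours * 3600) / 1e9
>>   print(round(avg_gbps, 1))   # -> ~43.5 Gbps averaged over the day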
>>
>> The SCinet SC2005 network team assigned taps to monitor 17 of the
>> waves at our booths and recorded a peak of 131 Gbps during a 15 minute 
>> measurement period last evening.
>>
>> We are awaiting official word but discussions with the judges last 
>> night indicated that we outpaced the competition by a wide margin.
>>
>> The exercise was not at all trivial. We needed to work through 
>> repeated system and/or network interface crashes under stress.
>> A great number of kernel, configuration and routing issues had to be 
>> worked out in the days before the BWC itself. It is a tribute to the 
>> team that these were all worked through successfully.
>>
>> The result was a great learning experience, and it had lasting value 
>> in several areas (a partial list):
>>
>> - TCP optimization and Linux kernel building (including FAST); a
>>   sketch of typical tuning follows this list;
>> - Performance optimization and tuning of applications: bbcp and
>>   xrootd from SLAC gave surprisingly good results, for example;
>> - Use of production and test clusters at FNAL reaching more than
>>   20 Gbps of network throughput;
>> - Stability limits of server and network interfaces (and heating)
>>   under heavy loads.
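>>
>> For illustration, the kind of Linux TCP tuning this involves (a
>> sketch with assumed values, not the team's actual settings; needs
>> root to apply):
>>
>>   import subprocess
>>
>>   # A 10 Gbit/s path with, say, 70 ms RTT needs a TCP window of
>>   # roughly bandwidth * RTT ~= 87 MB, far above old kernel defaults.
>>   tunables = {
>>       "net.core.rmem_max": "134217728",
>>       "net.core.wmem_max": "134217728",
>>       "net.ipv4.tcp_rmem": "4096 87380 134217728",
>>       "net.ipv4.tcp_wmem": "4096 65536 134217728",
>>   }
>>   for key, value in tunables.items():
>>       subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)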
>>
>> We were also very pleased with the participation of our international
>> partners from Brazil, Japan and Korea, who worked hard in the days
>> and weeks before the competition to be able to participate
>> effectively.
>>
>> More as the day progresses. A BWC award session will take place at
>> 15:30-17:00 Pacific time.
>>
>> Best regards
>> Harvey
>>
>>

-- 
              ,,,,,
             ( o o )
          --m---U---m--
              Jerome