Hi Wei,

Thanks! Some questions. What is involved in the "mean prepare inputs time" step? Is it copying the input file from (local?) storage to the worker node? Is there some preparation of the input data beyond moving it around? 

Can we find CPU percent normalized to the execution step only, i.e., excluding file-copy overhead? 

I don't understand the definition on page 2: utilization = cpuconsumption / cpufactor / (stoptime - starttime). What is cpufactor? 
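For concreteness, here is how I read the formula on the slide -- a minimal sketch only, assuming (not confirmed by the slides) that cpuconsumption is CPU-seconds, cpufactor is a per-node CPU speed normalization, and starttime/stoptime are wall-clock epoch seconds:

```python
def utilization(cpuconsumption, cpufactor, starttime, stoptime):
    """Normalized CPU utilization over the job's wall-clock time.

    Assumption: cpuconsumption is in CPU-seconds, cpufactor scales for
    node speed, and the result is a 0..1 fraction of the wall time.
    """
    wall = stoptime - starttime
    if wall <= 0:
        raise ValueError("stoptime must be after starttime")
    return cpuconsumption / cpufactor / wall

# Illustrative numbers only: 3000 CPU-s on a node with cpufactor 2.0
# over a 2000 s wall-clock interval.
print(utilization(3000.0, 2.0, 0.0, 2000.0))  # 0.75
```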

Cheers.

							Charlie
--
Charles C. Young
M.S. 43, Stanford Linear Accelerator Center       
P.O. Box 20450                                         
Stanford, CA 94309                                      
[log in to unmask]                                
voice  (650) 926 2669                         
fax    (650) 926 2923                       
CERN GSM +41 76 487 2069 

> -----Original Message-----
> From: [log in to unmask] 
> [mailto:[log in to unmask]] On 
> Behalf Of Wei Yang
> Sent: Thursday, April 23, 2009 10:16 PM
> To: atlas-sccs-planning-l
> Subject: FW: [Usatlas-prodsys-l] first HammerCloud jobs for US cloud
> 
> FYI, some results from the analysis stress tests over the grid.
> 
> SLAC is significantly faster in "mean prepare inputs time" 
> because (I think) we are doing direct ROOT file reading. It 
> doesn't mean our network I/O is faster, and this type of 
> stress test didn't put high enough stress on our storage.
> 
> Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
> 
> 
> 
> 
> ------ Forwarded Message
> From: Nurcan Ozturk <[log in to unmask]>
> Date: Thu, 23 Apr 2009 14:25:36 -0500 (CDT)
> To: <[log in to unmask]>
> Subject: [Usatlas-prodsys-l] first HammerCloud jobs for US cloud
> 
> Hello Tier1 and Tier2s,
> 
> The results of the first HammerCloud test with metrics were 
> reported this morning in the ADC Operations meeting, please 
> see the slides at:
> 
> http://indico.cern.ch/getFile.py/access?subContId=1&contribId=3&resId=0&materialId=slides&confId=57312
> 
> This report was on a test monitored at:
> 
> http://gangarobot.cern.ch/st/test_253/
> 
> Please check your site's status from the slides and from the 
> monitoring link. ANALY_SWT2_CPB was missed in this test; it 
> will be added.
> 
> As you will see from the monitoring page, there is a list of 
> input datasets being used in the tests:
> 
> Input DS Patterns:
>      mc08.*Wmunu*.recon.AOD.e*_s*_r5*tid*
>      mc08.*Zprime_mumu*.recon.AOD.e*_s*_r5*tid*
>      mc08.*Zmumu*.recon.AOD.e*_s*_r5*tid*
>      mc08.*T1_McAtNlo*.recon.AOD.e*_s*_r5*tid*
>      mc08.*H*zz4l*.recon.AOD.e*_s*_r5*tid*
>      mc08.*.recon.AOD.e*_s*_r5*tid*
> 
> Please make sure that you have a good replica of the 
> matching datasets at your site; otherwise jobs will keep 
> failing on the problematic files.
> 
> We will need to discuss in the next FacilityWGAP meeting on 
> Tuesday how often and at what scale we would like 
> HammerCloud to run in the US cloud.
> 
> Please also note on the slides that two mailing lists are set up now:
> 
> [log in to unmask]
> [log in to unmask]
> 
> Please subscribe as appropriate.
> 
> Regards,
> Nurcan.
> _______________________________________________
> Usatlas-prodsys-l mailing list
> [log in to unmask]
> https://lists.bnl.gov/mailman/listinfo/usatlas-prodsys-l
> 
> ------ End of Forwarded Message
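For reference, the "Input DS Patterns" in Nurcan's message are shell-style globs. A minimal sketch (my own illustration, not part of HammerCloud, with a made-up dataset name) of checking a dataset name against them with Python's fnmatch:

```python
from fnmatch import fnmatch

# Patterns copied from the forwarded message.
PATTERNS = [
    "mc08.*Wmunu*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*Zprime_mumu*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*Zmumu*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*T1_McAtNlo*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*H*zz4l*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*.recon.AOD.e*_s*_r5*tid*",
]

def matches_any(dataset_name):
    """Return True if the dataset name matches at least one input pattern."""
    return any(fnmatch(dataset_name, p) for p in PATTERNS)

# Hypothetical dataset name, for illustration only:
print(matches_any("mc08.106050.Zmumu.recon.AOD.e347_s462_r541_tid027772"))  # True
```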