FYI, some analysis stress tests over the grid. SLAC is significantly faster in "mean prepare inputs time" because (I think) we are doing direct ROOT file reading. That does not mean our network I/O is faster, and this type of stress test did not put high enough stress on our storage.

Wei Yang | [log in to unmask] | 650-926-3338 (O)

------ Forwarded Message
From: Nurcan Ozturk <[log in to unmask]>
Date: Thu, 23 Apr 2009 14:25:36 -0500 (CDT)
To: <[log in to unmask]>
Subject: [Usatlas-prodsys-l] first HammerCloud jobs for US cloud

Hello Tier1 and Tier2's,

The results of the first HammerCloud test with metrics were reported this morning in the ADC Operations meeting; please see the slides at:

http://indico.cern.ch/getFile.py/access?subContId=1&contribId=3&resId=0&materialId=slides&confId=57312

This report was on a test monitored at:

http://gangarobot.cern.ch/st/test_253/

Please check your site's status in the slides and on the monitoring page. ANALY_SWT2_CPB was missed in this test; it will be added.

As you will see from the monitoring page, there is a list of input datasets being used in the tests:

Input DS Patterns:
mc08.*Wmunu*.recon.AOD.e*_s*_r5*tid*
mc08.*Zprime_mumu*.recon.AOD.e*_s*_r5*tid*
mc08.*Zmumu*.recon.AOD.e*_s*_r5*tid*
mc08.*T1_McAtNlo*.recon.AOD.e*_s*_r5*tid*
mc08.*H*zz4l*.recon.AOD.e*_s*_r5*tid*
mc08.*.recon.AOD.e*_s*_r5*tid*

Please make sure that you have a good replica of the matching datasets at your site; otherwise, jobs will keep failing on the problematic files.

We will need to discuss in the next FacilityWGAP meeting on Tuesday how often and at what scale we would like HammerCloud to run in the US cloud.

Please also note from the slides that two mailing lists are now set up:

[log in to unmask]
[log in to unmask]

Please subscribe as appropriate.

Regards, Nurcan.
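As a quick way to check which local dataset names the test would pick up, the patterns above are shell-style globs, so they can be matched with Python's standard `fnmatch` module. This is only a sketch; the `matches_hammercloud` helper and the example dataset name are hypothetical, not part of the announcement or of any HammerCloud tooling.

```python
# Sketch: match dataset names against the HammerCloud input patterns
# listed in the announcement, using shell-style glob matching.
from fnmatch import fnmatch

# Patterns copied verbatim from the announcement.
PATTERNS = [
    "mc08.*Wmunu*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*Zprime_mumu*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*Zmumu*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*T1_McAtNlo*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*H*zz4l*.recon.AOD.e*_s*_r5*tid*",
    "mc08.*.recon.AOD.e*_s*_r5*tid*",
]

def matches_hammercloud(dataset_name: str) -> bool:
    """Return True if the dataset name matches any of the input patterns."""
    return any(fnmatch(dataset_name, p) for p in PATTERNS)

# Illustrative dataset name (made up for this example, not a real replica).
print(matches_hammercloud("mc08.106050.Zmumu.recon.AOD.e347_s462_r541_tid027342"))
```

Running this over the dataset names replicated at a site would give a rough list of the datasets the test jobs could draw on; it does not check whether the replicas themselves are healthy.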
_______________________________________________
Usatlas-prodsys-l mailing list
[log in to unmask]
https://lists.bnl.gov/mailman/listinfo/usatlas-prodsys-l

------ End of Forwarded Message