ATLAS-SCCS-PLANNING-L Archives

ATLAS-SCCS-PLANNING-L@LISTSERV.SLAC.STANFORD.EDU

Subject: RE: eval01 comparison
From: "Gordon Watts" <[log in to unmask]>
Date: Thu, 23 Aug 2007 06:25:02 -0700
Content-Type: text/plain
Parts/Attachments: text/plain (99 lines)

Hi,
  No -- don't do extra work. If you have an occasion to repeat the test
for other reasons, do take a look if you have time. I got into a
discussion with someone recently who claimed that, because everything was
in .so's, there would be very little memory increase. My claim was the
same as yours: that most of the space used by an executable is
per-process data tables. This was a discussion concerning how many cores
we can have before memory gets too expensive...

	Cheers,
		Gordon.
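
(If anyone does want to check the shared-vs-private split directly, one
straightforward way on Linux is to read /proc/<pid>/smaps for each running
job and sum its private, shared and proportional (Pss) page counts. The
sketch below does just that; the smaps field names are standard kernel
ones, but the script itself is only an illustration and was not part of
the eval01 test.)

#!/usr/bin/env python
# Minimal sketch (not from the eval01 test): sum the private, shared and
# proportional (Pss) memory of one or more processes by parsing
# /proc/<pid>/smaps on Linux.  All figures are in kB, as reported by smaps.
import sys

def smaps_summary(pid):
    private = shared = pss = 0
    with open("/proc/%d/smaps" % pid) as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in ("Private_Clean", "Private_Dirty"):
                private += int(rest.split()[0])
            elif key in ("Shared_Clean", "Shared_Dirty"):
                shared += int(rest.split()[0])
            elif key == "Pss":
                pss += int(rest.split()[0])
    return private, shared, pss

if __name__ == "__main__":
    for pid in (int(a) for a in sys.argv[1:]):
        p, s, pss = smaps_summary(pid)
        print("pid %d: private %d kB, shared %d kB, pss %d kB" % (pid, p, s, pss))

Running it against each of the simulation job PIDs and summing Pss gives a
fair per-process accounting; summing the shared column naively would count
each shared mapping once per process.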

> -----Original Message-----
> From: Stephen J. Gowdy [mailto:[log in to unmask]]
> Sent: Thursday, August 23, 2007 3:22 PM
> To: Gordon Watts
> Cc: ATLAS SCCS Planning
> Subject: RE: eval01 comparison
> 
> Hi Gordon,
> 	No, sorry I didn't. As all the code is in shared libraries, at
> least that part should be common. I expect though that most of the
> memory used by jobs is data space which cannot be shared, so it is
> probably a
> small effect. Would you like me to repeat it to get these numbers?
> 
>  						regards,
> 
>  						Stephen.
> 
> On Thu, 23 Aug 2007, Gordon Watts wrote:
> 
> > Hi Stephen,
> >  Did you look at total memory usage on the machine as you added the
> > cores? Was it linear, or linear with a negative intercept (indicating
> > a great deal of shared code between the processes)? Since you can't
> > predict which jobs will be running on any one machine, this probably
> > isn't a relevant number for this discussion; however, I'm curious to
> > know how well ATLAS jobs do in this circumstance.
> >
> > 	Cheers,
> > 		Gordon.
> >
> >> -----Original Message-----
> >> From: [log in to unmask] [mailto:owner-
> >> [log in to unmask]] On Behalf Of Stephen J. Gowdy
> >> Sent: Tuesday, August 21, 2007 4:42 PM
> >> To: ATLAS SCCS Planning
> >> Subject: eval01 comparison
> >>
> >> Hi All,
> >>  	So I've run 1, 2, 4 and 8 jobs simultaneously on eval01. I also
> >> ran one job on yakut04 to compare CPU speeds (so only look at CPU time
> >> for that; wall time is probably not good to compare in this case). I've
> >> attached the spreadsheet with the numbers if anyone wants to look at
> >> them.
> >>  	The basic conclusion is that the new Intel CPUs' GHz are worth
> >> about 15% more than the Opterons' (this is a big change from the P4s,
> >> where IIRC BaBar say a 30% drop). Overall we lose about 5% when running
> >> eight jobs on the same machine, with some of that coming from lower CPU
> >> efficiency and more system time used (some more user time too,
> >> particularly in going from 4 to 8 jobs).
> >>  	The job run was a simulation job, which is the most CPU-intensive
> >> part of the ATLAS job suite (only 3 events, which took about 30
> >> minutes).
> >>
> >>  						regards,
> >>
> >>  						Stephen.
> >>
> >> --
> >>   /------------------------------------+-------------------------\
> >> |Stephen J. Gowdy, SLAC               | CERN     Office: 32-2-A22|
> >> |http://www.slac.stanford.edu/~gowdy/ | CH-1211 Geneva 23        |
> >> |                                     | Switzerland              |
> >> |EMail: [log in to unmask]       | Tel: +41 22 767 5840     |
> >>   \------------------------------------+-------------------------/
> >
> 
> --
>   /------------------------------------+-------------------------\
> |Stephen J. Gowdy, SLAC               | CERN     Office: 32-2-A22|
> |http://www.slac.stanford.edu/~gowdy/ | CH-1211 Geneva 23        |
> |                                     | Switzerland              |
> |EMail: [log in to unmask]       | Tel: +41 22 767 5840     |
>   \------------------------------------+-------------------------/
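
(On the linearity question raised above: assuming the machine's total
resident memory were recorded after starting 1, 2, 4 and 8 jobs, a fit of
M(n) = shared + n * private separates the part common to all jobs from the
per-job footprint, and comparing with the naive extrapolation n * M(1)
shows how much sharing saves. The numbers in the sketch below are made-up
placeholders, not measurements from eval01.)

# Hedged sketch: fit M(n) = shared + n * private_per_job to total-memory
# readings taken with n = 1, 2, 4, 8 jobs running.  The values below are
# placeholders for illustration only, not numbers from the eval01 test.
import numpy as np

n_jobs = np.array([1, 2, 4, 8])
total_mb = np.array([900, 1650, 3150, 6150])   # hypothetical total RSS in MB

private_per_job, shared = np.polyfit(n_jobs, total_mb, 1)
print("per-job (unshared) memory: %.0f MB" % private_per_job)
print("memory common to all jobs: %.0f MB" % shared)

# What sharing saves at n = 8, relative to scaling up the single-job usage.
naive_8 = 8 * total_mb[0]
print("saved by sharing at 8 jobs: %.0f MB" % (naive_8 - total_mb[-1]))

The slope is the per-job data space Stephen describes; the size of the
common term is what decides how many cores one can afford per GB.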


