ATLAS-SCCS-PLANNING-L Archives, March 2009
ATLAS-SCCS-PLANNING-L@LISTSERV.SLAC.STANFORD.EDU

Subject:

RE: [SLAC #163429] Request to use few memfs machines for ATLAS testing

From:

"Young, Charles C." <[log in to unmask]>

Date:

Thu, 12 Mar 2009 12:25:16 -0700

Content-Type:

text/plain

Parts/Attachments:

text/plain (104 lines)




> -----Original Message-----
> From: Wei Yang via RT [mailto:[log in to unmask]] 
> Sent: Thursday, March 12, 2009 11:30 AM
> To: Young, Charles C.
> Cc: atlas-sccs-planning-l; Moss, Leonard J.
> Subject: Re: [SLAC #163429] Request to use few memfs machines 
> for ATLAS testing
> 
> 
> Hi Neal,
> 
> Can you add user jchapman to LSF user group atlpetagrp? Also, 
> when can we enable LSF queue atlpetaq?
> 
> Hi Charlie,
> 
> For usage with direct login to SLAC machines, atlint01 is 
> available (8 cores and 8 GB). Memfs machines can also be used 
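
[An aside on the quoted request above: adding jchapman to atlpetagrp and enabling the queue are routine LSF admin changes. A sketch only, assuming the group is defined in lsb.users; the existing membership shown is illustrative:

Begin UserGroup
GROUP_NAME    GROUP_MEMBER
atlpetagrp    (yang jchapman)    # append jchapman; the other member is assumed
End UserGroup

followed by "badmin reconfig" to pick up the change, and "badmin qopen atlpetaq" once the queue should start accepting jobs.]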

atlint01 does not say linux64:

[young@yakut08 ~]$ lshosts | grep atlint01
atlint01      LINUX INTEL_30  12.0     -      -      -     No (linux linux32 rhel40)
[young@yakut08 ~]$ lshosts | grep memfs01
memfs01       LINUX AMD_1800   5.5     2 16000M  8189M    Yes (bs linux linux64 rhel40 memfs)
[young@yakut08 ~]$ 

Does it matter? 
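
[It can matter: a job submitted with a resource requirement naming linux64 will never be dispatched to atlint01, since lshosts shows it carries only (linux linux32 rhel40). A sketch, assuming stock bsub syntax; the resource string and memory figure are illustrative, not the site's actual configuration:

# eligible hosts are only those tagged both linux64 and memfs, e.g. memfs01
bsub -q atlpetaq -R "select[linux64 && memfs] rusage[mem=4000]" my_atlas_job

# with no -R string, a 32-bit host like atlint01 is eligible too
bsub -q atlpetaq my_atlas_job

Here rusage[mem=4000] reserves ~4 GB (LSF's default mem unit is MB), matching the job size mentioned below.]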

> via batch. I think Randy and Neal want to know how long John 
> will use them.

"Them" being the memfs machines? It's hard to be precise, but I would guess a few months, used on and off rather than continuously. I am referring to the test/debug part, not the production part. 

> 
> For "production" use, I don't know if you mean the normal 
> atlas production channel via Panda. If so, someone else in 
> ATLAS production will have to decide how to handle this large 
> memory requirement. A pilot based system doesn't really tells 
> a site the job requirements because pilots don't know what 
> jobs they will run. The only thing SLAC can do is to setup 
> different Panda "sites" and imposed jobs requirement at site 
> level. However in this case setting up another Panda site 
> will not help because normal production will not use it.
> 
> If the "production" will be done by John himself, I think we 
> can setup another site ANALY_SITE_TEST (despite the prefix 
> ANALY_) and have the site use the dedicated atlaspetaq.

This is very useful information. We should not look at it as "what production wants" but as "how do we get these jobs done". For example, if not using pilots is best, that is what we should do. If setting up another "site" is best, that is what we should do. Who do we need to make an informed decision, with all affected parties in the discussion? My first guess:

	SCCS: Wei + anyone? 
	John 
	Production: Borat K.  
	Panda: ?

> 
> Regards,
> Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
> 
> > From: "Young, Charles C." <[log in to unmask]>
> > Date: Thu, 12 Mar 2009 08:26:27 -0700
> > To: unix-admin <[log in to unmask]>
> > Cc: atlas-sccs-planning-l <[log in to unmask]>, 
> > "Moss, Leonard J." <[log in to unmask]>
> > Subject: RE: [SLAC #163429] Request to use few memfs machines for 
> > ATLAS testing
> > 
> > Hi Wei,
> > 
> > I know John has run the jobs (that require ~4 GB) at CERN already. 
> > There are no fundamental problems. However, it would be useful to 
> > make a quick test here, either interactively or in normal batch. 
> > There is enough memory on the typical interactive node, but it may 
> > impact other users on that node. Hence the suggestion of doing it 
> > in a fenced-off area. Once production starts, there may be 
> > unforeseen problems, and it would once again be useful to be able 
> > to go in and debug.
> > 
> > When it comes to production, we will want to go through the normal 
> > channels as much as possible. The only thing special that I am 
> > aware of is the large memory requirement. If that means defining 
> > another "site", I guess we have to do it. Does this mean that each 
> > "site" is expected to be homogeneous, with no variations in the 
> > properties of its CPUs? Memory, swap space, etc. I would naively 
> > expect a bit more flexibility. Cheers.
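
[For reference, the "fenced off area" above maps naturally onto an LSF queue pinned to the memfs hosts with a memory cap, which also speaks to the homogeneity worry: the queue, not the whole site, carries the constraint. A sketch of an lsb.queues entry, assuming default units (MEMLIMIT in KB); the host list and limit are illustrative, not the actual SLAC configuration:

Begin Queue
QUEUE_NAME   = atlpetaq
HOSTS        = memfs01              # plus whichever other memfs hosts get allocated
USERS        = atlpetagrp
MEMLIMIT     = 8000000              # ~8 GB per process on the 16 GB memfs hosts
DESCRIPTION  = fenced-off queue for large-memory ATLAS test jobs
End Queue

Whether this or a separate Panda site is the better fence is the decision the stakeholder list above is meant to settle.]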


