ATLAS-SCCS-PLANNING-L@LISTSERV.SLAC.STANFORD.EDU


Subject: Minutes of ATLAS/SCCS Planning Meeting 21st March 2007
From: "Stephen J. Gowdy" <[log in to unmask]>
Date: Wed, 21 Mar 2007 20:55:41 +0100 (CET)
Content-Type: TEXT/PLAIN

ATLAS SCCS Planning 21Mar2007
-----------------------------

  9am, SCCS Conf Rm A, to call in +1 510 665 5437, press 1, 3935#

Present: Wei, Chuck, JohnB, Stephen, Randy, Gordon, Charlie, Gary,
         Richard, Len

Agenda:

1. DQ2 Status/Web Proxy

    Running out of space. Recent jobs seem to be more data intensive
    than old jobs, as they are reconstruction jobs that need the
    output from the simulation. Hope to get the new space available
    soon; currently the Fetcher is being turned off for 12 hours,
    then on again, and only three days' worth of data are being
    kept. Only 232GB of space is free currently. The AOD distribution
    has used about 600GB but grows by only 1 or 2GB per day, so it is
    not something that could be turned off to make a big difference.
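
    A rough back-of-the-envelope check of these figures; the daily
    reconstruction-input volume below is a guessed placeholder, not a
    number from the meeting:

        # Disk headroom estimate from the figures quoted above.
        free_gb = 232         # space currently free
        aod_daily_gb = 2      # AOD grows by at most ~2GB/day
        recon_daily_gb = 75   # placeholder: daily reconstruction input
        days_kept = 3         # only three days of data are retained

        # Steady-state footprint of the retained reconstruction window.
        print("recon window: ~%dGB" % (recon_daily_gb * days_kept))

        # Even if everything else froze, AOD growth alone would take
        # free_gb / aod_daily_gb = 116 days to fill the free space, so
        # turning the AOD distribution off makes little difference.
        print("days for AOD alone to fill free space: %d"
              % (free_gb // aod_daily_gb))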

    Some email discussion about DQ2 0.3. Recently installed 0.2.12,
    but it is not really in production. Paul Nilsson has got things
    working, but don't believe it will be put into production. Want
    to deploy 0.3 on a quick timescale. We want to install DQ2 via
    RPMs instead of pacman. That will make it slightly easier to
    patch the web server, if not MySQL (DQ2 requires version 5 and
    RHEL4 comes with MySQL 4). Discussions are going on about whether
    there will be a central US installation or whether site
    administrators will do it. If it is done by folk outside SLAC,
    those folks will need accounts at SLAC. The timescale is still at
    least a month before 0.3 will be used.
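
    Since DQ2 0.3 needs MySQL 5 while RHEL4 ships MySQL 4, an
    RPM-based install could be guarded by a version check along these
    lines (a hypothetical sketch, not part of the DQ2 tooling):

        #!/usr/bin/env python
        # Hypothetical pre-install guard: refuse to proceed unless the
        # local MySQL server is at least version 5.
        import re
        import subprocess
        import sys

        REQUIRED_MAJOR = 5  # DQ2 0.3 needs MySQL 5; RHEL4 ships MySQL 4

        def mysql_major_version():
            """Parse the major version from `mysql --version` output."""
            out = subprocess.check_output(["mysql", "--version"],
                                          text=True)
            match = re.search(r"Distrib (\d+)\.", out)
            if not match:
                raise RuntimeError("cannot parse version from: " + out)
            return int(match.group(1))

        major = mysql_major_version()
        if major < REQUIRED_MAJOR:
            sys.exit("MySQL %d.x found; DQ2 0.3 needs MySQL %d+"
                     % (major, REQUIRED_MAJOR))
        print("MySQL %d.x found; OK to install DQ2 0.3" % major)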

2. Tier-2 Hardware

    Still in purchasing; the order has not gone out. Things seem to
    be very slow over there just now.

    Is the memfs server moving out of building 50, and does that free
    power for other uses? Power usage needs to be prioritised. The
    ATLAS (and GLAST) disk should be able to be powered up using the
    power freed up. Can't see where to put the batch nodes till the
    racks are built in May. Looks like it will move in a couple of
    weeks.

    Should also discuss whether tories and nomas should be removed to
    provide power for the newer machines. For the lab it makes sense.

3. xrootd/srm

    No news. From the Tier-2 meeting it is quite clear that people
    will want it. SW Tier-2 is looking at xrootd, but they want an
    SRM interface. Hope to have something, or at least to decide
    whether it is doable, by the next Tier-2 meeting, which should be
    sometime in June.

4. AOB

    - Web Site

    At the Tier-2 meeting they mentioned they wanted a common
    look-and-feel for the Tier-2 web sites. Wanted to raise this at
    today's Facilities meeting, but it was cancelled; will need to
    try again next week. A quick look around points towards a page
    that looks like a Wiki but is actually static.

    The goal of a common technology is probably unrealistic; a common
    look-and-feel might be achievable. The pages that InfoMedia
    solutions have done for us are probably better than what we
    have. Believe these are static UNIX web pages (they could be
    static Windows ones, but that isn't a big difference). The new
    pages look much more professional and more like a Wiki page,
    which makes them look more like the other Tier-2s.

    There should be one more round of feedback to InfoMedia before
    moving all the content. Will go back to them and discuss adding
    content.

    - Tier-3 Discussion at UCSD Tier-2 meeting

    Discussion of network and Tier-3 user support. There were not
    many ATLAS folk but quite a few from CMS. There was a faculty
    member from Santa Cruz; it looks like he doesn't know much about
    what we do at the Tier-2. He wanted to know the best way to use
    resources, such as having their own mini-farm at Santa Cruz. Also
    had a similar request from Andy Langford. While making the Tier-2
    proposal we also discussed hosting institution machines at the
    Tier-2. That probably won't work given the power issues at SLAC,
    but would really like to do it.

    At UCSC they don't have any substantial bandwidth to their
    campus. Normally sites would be interested in hosting one to two
    racks of systems. Many sites still have a learning curve on
    running more than a few systems.

    They did come out with a figure of 20TB per user, where they are
    looking at $300/TB. Maybe the 20TB is based on money; we could
    see what could be provided here for the same money, certainly
    less.
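
    A quick sanity check of those numbers; the SLAC cost per TB below
    is a guessed placeholder, not a quoted figure:

        # Implied per-user budget from the Tier-2 meeting numbers.
        quota_tb = 20            # quoted per-user allocation
        rate_usd_per_tb = 300    # quoted cost
        budget = quota_tb * rate_usd_per_tb
        print("per-user budget: $%d" % budget)   # $6000

        # Placeholder: if SLAC's fully loaded cost per TB were higher,
        # the same money buys less, matching "certainly less" above.
        slac_rate_usd_per_tb = 450.0
        print("same budget at SLAC: %.1f TB"
              % (budget / slac_rate_usd_per_tb))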

    This meeting was organised by Internet2, to try to get the
    network people and the physicists at Universities together. It
    did wake some of them up and get them talking. There is no
    concrete followup to the meeting at the moment beyond circulating
    some information.

    - Miron Livny (CONDOR)

    Will be visiting tomorrow at 10am. He will be at SLAC for the
    morning. He hasn't been around SLAC much but has been involved in
    the Grid for HEP for many, many years. It will be valuable for
    him to see how life is in the trenches of the OSG.

    Will make sure there are a good number of appropriate people
    present.

    - CMS Support

    For priority on hardware resources, ATLAS should get the
    appropriate response. There is the idea of the Grid, where you
    can easily use the same resources for different purposes; to
    allow SLAC to be used efficiently, it could support CMS use. CMS
    currently needs batch visibility of the Internet, which isn't
    possible at SLAC (although have heard the opposite). Need to make
    sure we don't get distracted by CMS while making sure SLAC is
    fully efficient for ATLAS. CMS did run batch jobs at SLAC before
    ATLAS; at the time they didn't find it interesting, as they did
    need a large amount of resources. Would actually like to put CMS
    lower down the list than other lab requests (like those from
    KIPAC and GLAST). It is important that SLAC is seen as an
    important Grid site for HEP. This could mean occasionally
    attending to a CMS issue before a KIPAC one (as an example). It
    is the human cycles which are the important (and scarce)
    resource.

    There is a perception of SLAC not being fully ready. Need to try
    to understand this. There are problems with disk space currently,
    but many sites have this issue. There is the issue of xrootd not
    being fully available for ATLAS use via an SRM interface, but
    that is a bonus and not needed for production use.

    - Next Meeting

    Stephen will be at the ATLAS Software Week in Munich:

    http://indico.cern.ch/conferenceDisplay.py?confId=5060

    Will clarify on Tuesday next week whether there is a meeting and
    who will be chairing it.

Action Items:
-------------

070321 Gordon	Discuss perception of SLAC Tier-2 with external folk.

070321 Wei	Send example new web page link to list

061108 Richard	Discuss with SLAC Security longterm approach to ATLAS VO
        061115 No information.
        061213 Nothing happened yet.
        070103 No information.
        070110 Richard & BobC in Denver, Stephen will email them.
        070124 Don't know the status.
        070131 Don't believe this happened.
        070207 Have not done this. Randy has talked to Heather; she
               didn't have any time to comment today but she is aware
               of it. Will treat each VO as an enclave; if anonymous
               accounts are used, need to be able to show who ran a
               job and when (see the sketch after this list). The
               main issue is that VOs are not legal entities and
               anyone can declare themselves a VO. Would need to
               actually test that the required information can be
               found.
        070221 No info.
        070228 No info.
        070314 No info.
        070321 No info.
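
The enclave requirement under 070207 amounts to being able to map a
job record back to a user identity and time. A minimal hypothetical
sketch; the log path and line format are invented for illustration,
and real gatekeeper logs differ:

    # Hypothetical audit helper: given a job id, show who ran it and
    # when. Log location and format are invented for illustration.
    import re

    LOG = "/var/log/gatekeeper/jobs.log"  # invented path
    # invented format: "2007-03-21T09:15:02 job=12345 dn=/DC=org/..."
    LINE = re.compile(r"(?P<ts>\S+) job=(?P<job>\d+) dn=(?P<dn>.+)")

    def who_ran(job_id, log=LOG):
        """Return (timestamp, DN) for job_id, or None if not logged."""
        with open(log) as fh:
            for line in fh:
                m = LINE.match(line)
                if m and m.group("job") == str(job_id):
                    return m.group("ts"), m.group("dn")
        return None

    # e.g. who_ran(12345) -> ("2007-03-21T09:15:02", "/DC=org/...")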

