ATLAS SCCS Planning 21Mar2007
-----------------------------

  9am, SCCS Conf Rm A, to call in +1 510 665 5437, press 1, 3935#

Present: Wei, Chuck, JohnB, Stephen, Randy, Gordon, Charlie, Gary,
 	 Richard, Len

Agenda:

1. DQ2 Status/Web Proxy

    Running out of space. Recent jobs seem to be more data intensive
    than the old ones, as they are reconstruction jobs that need the
    output from the simulation. Hope to get the new space available
    soon; currently turning the Fetcher off for 12 hours at a time,
    then on again. Currently only keeping three days' worth of data,
    with only 232 GB of space free. The AOD distribution has used about
    600 GB in total but grows by only 1 or 2 GB per day, so it is not
    something that could be turned off to make a big difference (see
    the rough arithmetic below).
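
    As a rough illustration of the space arithmetic above, a minimal
    sketch: the 232 GB free and the 1-2 GB/day AOD growth are the
    figures quoted in the discussion, while the one-month horizon is an
    assumption.

        # Back-of-the-envelope check of the disk-space discussion above.
        # Figures from the meeting: 232 GB free, AOD growth 1-2 GB/day;
        # the 30-day horizon is an assumption.
        free_gb = 232          # space currently free
        aod_growth_gb_day = 2  # upper end of the quoted 1-2 GB/day
        horizon_days = 30      # assumed wait until the new space arrives

        # Stopping the AOD distribution only avoids its future growth,
        # not the ~600 GB already on disk, so the saving is small
        # compared with the 232 GB of headroom being eaten into.
        saved_gb = aod_growth_gb_day * horizon_days
        print(f"Stopping AOD saves at most {saved_gb} GB over "
              f"{horizon_days} days ({saved_gb / free_gb:.0%} of the "
              f"{free_gb} GB currently free)")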

    Some email discussion about DQ2 0.3. Recently installed 0.2.12, but
    it is not really in production; Paul Nilsson has got things working
    but we don't believe it will be put into production. Want to deploy
    0.3 on a quick timescale. We want to install DQ2 via RPMs instead
    of pacman, which will make it slightly easier to patch the web
    server, if not MySQL (DQ2 requires version 5 and RHEL4 comes with
    MySQL 4; see the sketch below). Discussions are going on about
    whether to have a central US installation team or have site
    administrators do it. If it is done by folk outside SLAC, those
    folks will need accounts at SLAC. The timescale is still at least a
    month before 0.3 will be used.
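
    A hypothetical pre-install check for the version constraint above
    (DQ2 0.3 wanting MySQL 5 while RHEL4 ships MySQL 4). The
    "mysql --version" parsing here is an assumption for illustration
    only, not part of DQ2 or its RPMs.

        import re
        import subprocess

        REQUIRED_MAJOR = 5  # MySQL version DQ2 0.3 is said to need

        def installed_mysql_major():
            """Return the major version of the mysql client, or None."""
            try:
                out = subprocess.run(["mysql", "--version"],
                                     capture_output=True, text=True,
                                     check=True).stdout
            except (OSError, subprocess.CalledProcessError):
                return None
            match = re.search(r"Distrib (\d+)\.", out)  # e.g. "Distrib 4.1.22"
            return int(match.group(1)) if match else None

        major = installed_mysql_major()
        if major is None or major < REQUIRED_MAJOR:
            print(f"System MySQL is {major}; DQ2 0.3 wants "
                  f"{REQUIRED_MAJOR}+, so its RPMs would have to bring "
                  "their own MySQL rather than use the RHEL4 one.")
        else:
            print("System MySQL is new enough for DQ2 0.3.")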

2. Tier-2 Hardware

    The order is in purchasing but has not gone out yet. Things seem to
    be very slow over there just now.

    Is the memfs server moving out of Building 50, and does that free
    up power for other uses? Power usage needs to be prioritised. The
    ATLAS (and GLAST) disk should be able to be powered up with the
    capacity freed up. Can't see where to put the batch nodes till the
    racks are built in May. It looks like the memfs server will move in
    a couple of weeks.

    Should also discuss whether tories and nomas should be removed to
    provide power for the newer machines. For the lab it makes sense.

3. xrootd/srm

    No news. From the Tier-2 meeting it is quite clear that people will
    want it. The SW Tier-2 is looking at xrootd, but they want an SRM
    interface. Hope to have something, or at least to have decided
    whether it is doable, by the next Tier-2 meeting, which should be
    sometime in June.

4. AOB

    - Web Site

    At the Tier-2 meeting they mentioned they wanted a common
    look-and-feel for the Tier-2 web sites. Wanted to raise this at
    today's Facilities meeting but it was cancelled; will need to try
    again next week. A quick look round points towards this being a
    Wiki-style page, one which isn't actually a Wiki but is static.

    The goal of a common technology is probably unrealistic; a common
    look-and-feel might be achievable. The page that InfoMedia
    solutions have done for us is probably better than what we have.
    Believe these are static UNIX web pages (they could be static
    Windows ones, but there isn't a big difference). The new pages look
    much more professional and more like a Wiki page, so they look more
    like the other Tier-2s' sites.

    Should be one more round of feedback to InfoMedia before moving all
    the content. Will go back to them and discuss adding content.

    - Tier-3 Discussion at UCSD Tier-2 meeting

    Discussion of network and Tier-3 user support. There were not many
    ATLAS folk but quite a few from CMS. There was a faculty person
    from Santa Cruz; it looks like he doesn't know much about what we
    do at the Tier-2. He wanted to know about the best way to use
    resources, like having their own mini-farm at Santa Cruz. Also had
    a similar request from Andy Langford. While making the Tier-2
    proposal we also discussed hosting institutions' machines at the
    Tier-2. That probably won't work given the power issues at SLAC,
    but we would really like to do it.

    At UCSC they don't have any substantial bandwidth to their campus.
    Normally sites would be interested in having one to two racks of
    systems. Many sites still have a learning curve on running more
    than a few systems.

    They did come out with a number of 20TB per user, where they are
    looking at $300/TB. Maybe the 20TB is based on money; we could see
    what could be provided for the same money, certainly less (a quick
    worked number follows below).
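
    For reference, a quick worked number from the figures above; the
    $/TB that SLAC could actually offer was not quoted, so it is left
    as a purely hypothetical placeholder.

        # Figure from the UCSD meeting: 20 TB per user at $300/TB.
        tb_per_user = 20
        usd_per_tb_quoted = 300
        budget_per_user = tb_per_user * usd_per_tb_quoted  # $6000/user

        # What we could provide for the same money depends on our own
        # $/TB, which was not quoted; this value is a placeholder only.
        slac_usd_per_tb = 500
        print(f"Budget per user: ${budget_per_user}; at "
              f"${slac_usd_per_tb}/TB that buys "
              f"{budget_per_user / slac_usd_per_tb:.1f} TB")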

    This meeting was organised by Internet2, to try to get the network
    people and the physicists at universities talking to each other. It
    did wake some of them up and get them talking. There is no concrete
    follow-up to the meeting at the moment beyond circulating some
    information.

    - Miron Livny (CONDOR)

    Will be visiting tomorrow at 10am and will be at SLAC for the
    morning. He hasn't been around SLAC much but has been involved in
    the Grid for HEP for many, many years. It will be valuable for him
    to see how life is in the trenches of the OSG.

    Will make sure there are a good number of appropriate people
    present.

    - CMS Support

    For hardware resources ATLAS should get the appropriate priority.
    There is the idea of the Grid, where you can easily use the same
    resources for different purposes, and supporting CMS use could help
    SLAC be used efficiently. CMS currently need the Internet to be
    visible from the batch nodes, which isn't possible at SLAC
    (although we have heard the opposite claimed). Need to make sure we
    don't get distracted by CMS while making sure SLAC is fully
    efficient for ATLAS. CMS did run batch jobs at SLAC before ATLAS;
    at the time they didn't find it interesting, as they needed a large
    amount of resources. Would actually like to put CMS lower down the
    list than other lab requests (like those from KIPAC and GLAST). It
    is important that SLAC is seen as an important Grid site for HEP;
    this could mean occasionally attending to a CMS issue before a
    KIPAC one (as an example). It is the human cycles which are the
    important (and scarce) resource.

    There is a perception of SLAC not being fully ready; we need to try
    to understand this. There are problems with disk space currently,
    but many sites have this issue. There is also the issue of xrootd
    not being fully available for ATLAS use via an SRM interface, but
    that is a bonus and not needed for production use.

    - Next Meeting

    Stephen will be at the ATLAS Software Week in Munich;

    http://indico.cern.ch/conferenceDisplay.py?confId=5060

    Will clarify on Tuesday next week if there is a meeting and who
    will be chairing it.

Action Items:
-------------

070321 Gordon	Discuss perception of SLAC Tier-2 with external folk.

070321 Wei	Send example new web page link to list

061108 Richard	Discuss with SLAC Security longterm approach to ATLAS VO
        061115 No information.
        061213 Nothing happened yet.
        070103 No information.
        070110 Richard & BobC in Denver, Stephen will email them.
        070124 Don't know the status.
        070131 Don't believe this happened.
        070207 Have not done this. Randy has talked to Heather, didn't
 	      have any time to comment today but she is aware about
 	      it. Will treat each VO as an enclave, if you are using
 	      anonymous accounts need to be able to show how ran a job
 	      and when. The main issue is that VOs are not legal
 	      entities and anyone can declare themselves a VO. Would
 	      need to actually test that the required information can
 	      actually be found.
        070221 No info.
        070228 No info.
        070314 No info.
        070321 No info.