ATLAS-SCCS-PLANNING-L Archives

ATLAS-SCCS-PLANNING-L@LISTSERV.SLAC.STANFORD.EDU

Subject:

Minutes of ATLAS/SCCS Planning Meeting 2nd May 2007

From:

"Stephen J. Gowdy" <[log in to unmask]>

Date:

Wed, 2 May 2007 18:54:58 +0200 (CEST)

Content-Type:

TEXT/PLAIN

Parts/Attachments:

TEXT/PLAIN (131 lines)

ATLAS SCCS Planning 02May2007
-----------------------------

  9am, SCCS Conf Rm A, to call in +1 510 665 5437, press 1, 3935#

Present: Stephen, Wei, Chuck, Richard, Booker, JohnB

Agenda:

1. DQ2 Status/Web Proxy

    Early this week or during the weekend CERN changed their root
    Certificate Authority, so we had to change our installation to
    follow their update or couldn't transfer files. Is working
    now. Would have been useful to have been informed. Has also been a
    problem at two other sites on OSG. Copied the new CA from the new
    version of Globus; Globus no longer releases patches with
    updated certificates for the old version of Globus that we are
    using. There is an official list of CAs that we should keep up to
    date, need to make sure we are doing that.

    - Users needing access to web services from batch

    Don't know that the environment will support this generically in
    the future. Perhaps the current ATLAS specific solution is the best
    way forward. Ideally would have a specific set of on-site services
    which contact specific off-site services.

    For the moment can check the IP address of the node being used to
    set the http_proxy if it is needed.
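    That per-node check could look roughly like the sketch below. The
    on-site subnet and proxy URL are illustrative placeholders, not the
    actual SLAC values:

```python
import ipaddress
import os
import socket

# Illustrative values only -- the real on-site subnet and proxy host
# are not given in the minutes.
DIRECT_NETS = [ipaddress.ip_network("134.79.0.0/16")]  # hypothetical on-site range
PROXY_URL = "http://web-proxy.example.org:3128"        # hypothetical proxy

def proxy_for(ip):
    """Return the http_proxy value a batch node should use, or None
    if the node's address falls in a range with direct web access."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in DIRECT_NETS):
        return None
    return PROXY_URL

if __name__ == "__main__":
    # Set http_proxy only when this node actually needs it.
    proxy = proxy_for(socket.gethostbyname(socket.gethostname()))
    if proxy:
        os.environ["http_proxy"] = proxy
```

    A job script would source this before contacting any off-site web
    service, so on-site nodes keep direct access and only the restricted
    batch nodes go through the proxy.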

    Other people (non-ATLAS) will probably have the same issue in the
    future at SLAC. Should address their (whoever they are)
    requirements as they come up.

2. Tier-2 Hardware

    One storage machine has the OS. Should get two of the machines up
    and running, allow Len to do some testing with the third
    machine. There is some question about how many parity disks we
    want, one or two. There is some discussion at HEPiX about how long
    it takes to reconstruct an array after losing a disk; we would be
    at risk during the reconstruction. Len will try to measure how long
    it takes and try to get information from other labs. With double
    parity it is more reliable but there is a write-time and space
    penalty. An element of the discussion would be the type of data on
    it, whether it is really just a cache of data stored elsewhere or
    the primary storage. Even if we believe it is just a cache, a
    worry would be how long it would take to reimport all the data
    again from BNL (probably around a week). Could set up different
    areas for production with double parity.
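    The "around a week" reimport figure is easy to sanity-check.
    Assuming, say, 60 TB of cached data and a sustained 100 MB/s from
    BNL (both numbers are guesses, not taken from the minutes):

```python
def reimport_days(data_tb, rate_mb_s):
    """Days needed to re-transfer data_tb terabytes at a sustained
    rate of rate_mb_s megabytes per second (1 TB = 1e6 MB)."""
    seconds = data_tb * 1e6 / rate_mb_s
    return seconds / 86400.0

# 60 TB at 100 MB/s: about 6.9 days, i.e. roughly a week.
print(round(reimport_days(60, 100), 1))
```

    Under those assumptions a full reimport is on the order of a week,
    which is why losing the cache is still painful even though the data
    exists elsewhere.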

    Local access should preferentially use xrootd. Could also have a
    different area for DQ2 for users to avoid them interfering with
    production. We can set up the production area and DQ2 space, then
    we can decide what to do for users.

    The water-cooled rack is arriving, which is on track. Have a good
    feeling about the providers, so expect it will continue to
    be on schedule. There has also been good support from Jim even
    though he doesn't like the design.

3. xrootd/srm

    Have the machine requested. Once Booker is off the Hot Seat he'll
    install SL3 on it.

    During last week's Facilities meeting Boston said they preferred
    gsiftp. The person that operates the BNL DQ2 said SRM gave them a
    set of problems. They thought they could get rid of FTS if they
    didn't need to use SRM... not clear what the right picture is. SRM
    has put and get methods which use the SRM copy method, but SRM copy
    uses another protocol (generally gsiftp).

    There was a discussion today at the Grid Deployment Board on the
    support of CASTOR-XROOTD. CERN clearly couldn't support this
    themselves. SLAC will support XROOTD but this is not a core
    part. There should be some discussion about how this gets
    supported, as we are currently bottlenecked on Andy's time. ALICE
    should probably work with SLAC to somehow arrange support or
    collaboration. Could currently support the interface of xrootd to
    everything.

    Should have a discussion somewhere about using the PetaCache for
    the ATLAS Tag data.

4. AOB

    None.

Action Items:
-------------

070502 Stephen	Email Gordon about his action item

070502 Stephen	Arrange meeting about ATLAS TAG data on PetaCache

070502 Wei	Check CA certificate update mechanism

070321 Gordon   Discuss perception of SLAC Tier-2 with external folk.
          070404 no info
          070411 no info

061108 Richard  Discuss with SLAC Security longterm approach to ATLAS VO
          061115 No information.
          061213 Nothing happened yet.
          070103 No information.
          070110 Richard & BobC in Denver, Stephen will email them.
          070124 Don't know the status.
          070131 Don't believe this happened.
          070207 Have not done this. Randy has talked to Heather; she
               didn't have any time to comment today but she is aware
               of it. Will treat each VO as an enclave; if you are using
               anonymous accounts, need to be able to show who ran a job
               and when. The main issue is that VOs are not legal
               entities and anyone can declare themselves a VO. Would
               need to actually test that the required information can
               actually be found.
          070221 No info.
          070228 No info.
          070314 No info.
          070321 No info.
          070404 No info.
          070411 No info.
          070418 No info.
          070425 No info.
          070502 Drop till concrete action comes up.


