It looks like the xrootd mount on atlint01 is accessible, just very, very slow. Maybe some kind of massive zombie job of mine is just gumming up the works badly.

-Bart

Yang, Wei wrote:
Hi Bart, David,

any news on this?

regards,
Wei Yang  |  [log in to unmask]  |  650-926-3338(O)


On Apr 21, 2010, at 12:03 PM, Bart Butler wrote:

I'll try to run a few jobs tonight and see what happens.

-Bart

Yang, Wei wrote:
[add Andy Hass ...]

Hi David, Booker,

I mounted the xrootd space of the PROOF cluster at /xrootd/proof on atlint01. It looks like we have ~1.8 TB total on the cluster, so something around 1 TB should work.

The cluster should be able to access T2 storage if you provide the URLs of the ROOT files to process, but the whole idea of using PROOF is to avoid network traffic as much as possible. Since we are still validating the functionality, it would be good to try both, for example by putting half of the data on the PROOF cluster and leaving the other half on T2 storage (no NFS, please).
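
As a rough illustration of trying both access modes, the ROOT macro below opens one file from the cluster's local xrootd space and one directly from T2 storage over the network. The file names and the "T2-REDIRECTOR" host are hypothetical placeholders, not actual paths on our systems.

#include "TFile.h"
#include <cstdio>

// compare_access.C -- sketch only; the file names and the T2-REDIRECTOR
// host below are placeholders.
void compare_access()
{
   // File staged onto the PROOF cluster's own xrootd space (local to the workers).
   TFile *fLocal  = TFile::Open("root://boer0123//atlas/proof/fizisist/sample.root");

   // The same kind of file read remotely from T2 storage over xrootd.
   TFile *fRemote = TFile::Open("root://T2-REDIRECTOR//xrootd/atlas/atlasuserdisk/sample.root");

   if (fLocal)  printf("cluster copy opened: %s\n", fLocal->GetName());
   if (fRemote) printf("T2 copy opened: %s\n", fRemote->GetName());
}

Comparing job times with the two kinds of URLs should show directly how much the extra network traffic costs.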

The PROOF master node is boer0123. If you copy files to the cluster, the xroot URL is root://boer0123//atlas/proof (I suggest you create a fizisist sub-directory).
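
A minimal ROOT sketch of such a copy, assuming a hypothetical local file mydata.root and a fizisist sub-directory that has already been created:

#include "TFile.h"

// copy_to_proof.C -- sketch; the local file name and the fizisist
// sub-directory are assumptions.
void copy_to_proof()
{
   // TFile::Cp copies the local file into the cluster's xrootd space
   // via the master node boer0123.
   TFile::Cp("mydata.root", "root://boer0123//atlas/proof/fizisist/mydata.root");
}

(xrdcp from the command line would do the same job; this just keeps the example inside ROOT.)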

Booker, it looks like PROOF also leaves some files on the cluster. How would you suggest we manage the space: by user, by group, or something else?

regards,
Wei Yang  |  [log in to unmask]  |  650-926-3338(O)


On Apr 21, 2010, at 8:40 AM, David W. Miller wrote:

Hi Booker and Wei,

I have a few questions: from what machine do we launch the jobs? Any machine at SLAC, as long as we specify the URI correctly? Also, if the data are on atlasuserdisk or usr in /xrootd/atlas/, is that sufficient?

Thanks,
David

On Apr 21, 2010, at 17:36 PM, Ariel Schwartzman wrote:

From: Booker Bense <[log in to unmask]>
Date: April 21, 2010 16:09:51 PM GMT+02:00
To: "Schwartzman, Ariel G." <[log in to unmask]>
Cc: "Yang, Wei" <[log in to unmask]>
Subject: Re: Proof cluster ready for testing


On Wed, 21 Apr 2010, Ariel Schwartzman wrote:

Hi Booker,

I cannot access this machine remotely:

ssh -Y boer0123.slac.stanford.edu
ssh: connect to host boer0123.slac.stanford.edu port 22: Operation timed out

It's on the SLAC internal network; you'll need to log in to a SLAC machine and run ROOT programs from there. You shouldn't need login access to the master node.
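
As a hedged sketch of what such a ROOT program might look like (the tree name, input file, and selector below are placeholders for the real analysis):

#include "TProof.h"
#include "TChain.h"

// run_on_proof.C -- sketch only; "CollectionTree", the input file, and
// MySelector.C are hypothetical.
void run_on_proof()
{
   // Open a PROOF session against the master; no interactive login is needed.
   TProof::Open("boer0123.slac.stanford.edu");

   // Chain over files stored in the cluster's xrootd space.
   TChain chain("CollectionTree");
   chain.Add("root://boer0123//atlas/proof/fizisist/sample.root");

   // Route Process() through the PROOF session instead of running locally.
   chain.SetProof();
   chain.Process("MySelector.C+");
}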

_ Booker C. Bense

==========================================
David W. Miller
------------------------------------------
SLAC
Stanford University
Department of Physics

SLAC Info: Building 84, B-156. Tel: +1.650.926.3730
CERN Info: Building 01, 1-041. Tel: +41.76.487.2484

EMAIL:    [log in to unmask]
HOMEPAGE: http://cern.ch/David.W.Miller

==========================================