ATLAS-SCCS-PLANNING-L Archives

ATLAS-SCCS-PLANNING-L@LISTSERV.SLAC.STANFORD.EDU


Subject: Re: [Fwd: Re: Proof cluster ready for testing]
From: "Yang, Wei" <[log in to unmask]>
Date: Sun, 13 Jun 2010 23:49:37 -0700
Content-Type: text/plain
Parts/Attachments: text/plain (528 lines)

Hi Bart,

Thanks for the interesting analysis. Comments embedded below.

regards,
Wei Yang  |  [log in to unmask]  |  650-926-3338(O)

On Jun 13, 2010, at 1:06 AM, Bart Butler wrote:

> Correct me if this makes no sense, but if the remote files are replicated enough on xrootd such that they are being read from different disks (that is, you have 36 workers, each reading a different file from 36 identical disks) then processing rate would still scale linearly with # of workers, but could still be limited by the individual disk speed, which could help explain why smaller file size seems to help so much given that the network bandwidth isn't being saturated. Of course, if the disk multiplicity in the T2 is much lower such that this is a totally unreasonable scenario, then yeah, the T2 storage can't be the bottleneck.

Interesting and valid argument. I didn't think of this scenario. In that case, each disk imposes a limit of 8 MB/s.
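(For scale: the per-worker rates reported in the tests quoted below, roughly 43 evts/sec at ~90 kB/evt up to 96 evts/sec at ~83 kB/evt, work out to about 4-8 MB/s of input per worker, or roughly 140-290 MB/s aggregate across 36 workers; these are back-of-the-envelope figures derived from the numbers in this thread, not new measurements.)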

Where are those files on T2 storage? Are they at

root://atl-xrdr//atlas/xrootd/atlasuserdisk/user10.ZacharyMarshall.data10_7TeV.00152409.physics_MinBias.recon.ESD.f238.V1.2010.04.12_AANT_sub0657077

If so, the files are almost evenly distributed across 16 data servers. Each whole file resides on one data server and is spread across an array of 32 (usable) disks on that server. So if the disk arrays are the limit, we should see the rate flatten out once there are more than 16 workers.

> Also, the same argument can apply to the local worker disks. If the files are replicated to the local disks and then read into memory, the worker hard drives could be the limiting factor. I sort of doubt the CPU speed could be the bottleneck, considering I turned off a lot of calculations in one of my tests and got a negligible speed increase.

I doubt the CPU is the bottleneck too. Once I put it under Ganglia monitoring, we will have a better idea. There are other possibilities similar to your 36-disk scenario but with different causes. One is that the data transfer rate from T2 to the workers is limited by inefficiencies in the network and its protocol. The other is that if the T2 storage is busy dealing with a large number of Panda jobs, the rate may also scale linearly, but below what the disks, CPUs and network could handle when idle.

> And if 4 cores are sharing the same disk, removing these branches for all 4 cores could reduce seek time considerably for a single disk and maybe account for the non-linear speed-up we see when we turn off branches/reduce the size of the ntuple.

Maybe, it is hard to tell.

> They don't. That error is from the client ROOT session running on atlint01. We use a macro to load all files matching a pattern (typically *.root*) in a directory into a TDSet. The macro has always worked fine using xrootd URLs to the T2 storage, but it doesn't seem to work with the cluster xrootd storage for some reason.  Either my xrootd URL for the cluster is wrong or something is configured differently between the two storage locations. Can you verify that the URL I am using is correct?

OK, I know why. The T2 storage has an obsolete feature; the proof cluster uses a new, simplified setup that doesn't have it. Without this feature, however, gSystem->OpenDirectory() will not return anything if the directory is an xrootd path (root://...). To work around this, you will need to modify your macro and do something like this (pseudo-code):

TString posixDirName = "/xrootd/proof/bcbutler/user10.ZacharyMarshall..._AANT/";
TString xrootDirName = "root://boer0123//atlas/proof/bcbutler/user10.ZacharyMarshall..._AANT/";

// list the directory through the POSIX mount ...
void *dirp = gSystem->OpenDirectory(posixDirName);
...
    // ... but build each file path with the xroot URL
    TString path = TString(xrootDirName + fn);
You will have to run the client on atlint01 because /xrootd/proof/... is only available on atlint01.
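A minimal sketch of such a modified macro is below. The helper name and the default directory arguments are illustrative placeholders, not an existing macro; it just follows the structure of the original macro quoted further down.

// Build a TDSet by listing the directory through the POSIX mount on atlint01,
// but adding each file with its xroot URL so the workers read it via xrootd.
Int_t GetTDSetFromXrootdDir(TDSet *dset,
                            const char *match        = "*.root*",
                            TString     posixDirName = "/xrootd/proof/bcbutler/",
                            TString     xrootDirName = "root://boer0123//atlas/proof/bcbutler/",
                            Int_t       nFilesToAttach = -1)
{
   void *dirp = gSystem->OpenDirectory(posixDirName);   // POSIX path: this works
   if (!dirp) {
      cout << "Cannot open directory " << posixDirName << endl;
      return 0;
   }
   TRegexp matchExp(match, kTRUE);
   const char *ent;
   Int_t i = 0;
   while ((ent = gSystem->GetDirEntry(dirp))) {
      TString fn(ent);
      if (fn.Contains(matchExp)) {
         dset->Add(xrootDirName + fn);                  // xroot URL: workers read via xrootd
         if (++i == nFilesToAttach) break;
      }
   }
   gSystem->FreeDirectory(dirp);
   cout << "N Files attached = " << i << endl;
   return i;
}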

>
> The dirName passed to the macro is:
> root://boer0123//atlas/proof/bcbutler/user10.ZacharyMarshall.data10_7TeV.00152409.physics_MinBias.recon.ESD.f238.V1.2010.04.12_AANT/
>
> Int_t GetTDSetFromDirectory(TDSet *dset,
>                             const char* match = "*.root*",
>                             TString dirName = "",
>                             Int_t NFilesToAttach = -1)
> {
>   void *dirp = gSystem->OpenDirectory(dirName);
>   char *ent;
>   int i = 0;
>   while ((ent = const_cast<char*>(gSystem->GetDirEntry(dirp)))) {
>       TString fn   = TString(ent);
>       TString path = TString(dirName + fn);
>       TRegexp matchExp(match,kTRUE);
>       if (fn.Contains(matchExp)) {
>         dset->Add(path);
>         i++;
>       }
>       if (i == NFilesToAttach) break;
>   }
>   cout << "N Files attached = " << i << endl;
>   gSystem->FreeDirectory(dirp);
>   return i;
> }
>> regards,
>> Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
>>
>> On Jun 11, 2010, at 8:26 AM, Bart Butler wrote:
>>
>>
>>
>>> Pre-compilation works with the 5.26-proof version. My AFS filesystem was read-only and the save didn't go through. One problem solved.
>>>
>>> More observations (keep in mind all rates have ~10% uncertainty from run to run, maybe related to T2 storage load):
>>>
>>> 1. The GUI Is Screwy: the GUI progress bar makes no sense, even when the job seems to be running. The elapsed-processing-time clock increments by one second only every 10 seconds or so, which makes the events/sec rates about 10X bigger than they actually are. In addition, the number of events processed is off: the GUI said the job had processed ~50K events while each of the workers had processed between 30K and 40K events. I think the X11 forwarding from SLAC might not have the bandwidth to keep this updated in real time (though the PROOF-Lite GUI on atlint01 works fine forwarded from SLAC... maybe it's this speedometer gizmo?), so I'm going to try batch mode to see if that helps.
>>>
>>> Update: Batch mode fixes this: real-time rates update and make sense, and PROOF obeys stop commands now. Stay away from the GUI (at least remotely).
>>>
>>> 2. Speed scales linearly with the number of workers for these tests (running off T2 xrootd). The ntuple used is around 90 kB/evt and the analysis writes a lot of histograms. With all 36 workers: 1550 evts/sec (~43 evts/sec/worker). Pretty slow.
>>>
>>> 3. Turning off most histograms gives about a 3-6% speed improvement (1625 evts/sec, 45 evts/sec/worker)--->negligible.
>>>
>>> 4. Turning all MC truth calculations off but still running on the same dataset: 1660 evts/sec, 46 evts/sec/worker.
>>>
>>> 5. Turning MC branches off but still running on the same dataset: 2075 evts/sec, 58 evts/sec/worker.
>>>
>>> 6. Use data instead of MC (no truth information and thus smaller ntuple ~83kb/evt). 3471 evt/sec, 96 evts/sec/worker.
>>>
>>> 7. Same as #6 but run from the proof cluster storage. Says it can't find the files. Is root://boer0123//atlas/proof/bcbutler/user10.ZacharyMarshall.data10_7TeV.00152409.physics_MinBias.recon.ESD.f238.V1.2010.04.12_AANT/*.root* not correct?
>>>
>>> Exact error:
>>> Srv err: Unable to open directory /atlas/proof/bcbutler/user10.ZacharyMarshall.data10_7TeV.00152409.physics_MinBias.recon.ESD.f238.V1.2010.04.12_AANT/
>>>
>>> I'm not sure how helpful this is for deciding on machinery, other than confirming things we already knew... it's clear with this big ntuple (and this particular analysis) that the cluster's processing speed is input-limited rather than CPU-limited, so maybe we should focus on more CPUs rather than faster CPUs? Small changes in ntuple size result in massive speed increases, though the picture is muddied a bit by the additional complication of turning branches on and off (which is clearly not a negligible effect, but smaller than slimming).
>>>
>>> I'm not sure exactly what turning a branch off does when reading from a network drive... is the entire tree transferred and then only the enabled branches loaded into memory? That would indicate that local disk speed would be worthwhile to invest in, given that turning truth branches off gave a 25% increase from test 4 to 5.
>>>
>>> The biggest gain in speed was obtained from using a smaller ntuple (no truth) but the increase in speed seems very large compared to the actual size difference of the ntuples per event...I'm not sure how to interpret this.
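(On the branch question above: branches are normally switched off with TTree::SetBranchStatus, and TTree::GetEntry then skips the baskets of disabled branches, so their data should not be read from the storage at all. A minimal sketch, with purely illustrative branch-name patterns:)

// Sketch: disable unneeded branches before the event loop (branch names illustrative).
// With a branch disabled, TTree::GetEntry() does not read its baskets, so the
// corresponding bytes are never requested from local disk or xrootd.
void DisableTruthBranches(TTree *tree)
{
   tree->SetBranchStatus("*", 1);        // start with everything enabled
   tree->SetBranchStatus("mc_*", 0);     // hypothetical truth-branch patterns
   tree->SetBranchStatus("truth_*", 0);
}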
>>>
>>> -Bart
>>>
>>> Yang, Wei wrote:
>>>
>>>
>>>> I just sudo'ed to bcbutler, ran root, and killed all your sessions:
>>>>
>>>> root [1] TProof::Reset("boer0123",1);
>>>>
>>>> | Message from server:
>>>> | CleanupSessions: hard-reset: signalling active sessions for termination
>>>>
>>>> | Message from server:
>>>> | CleanupSessions: hard-reset: cleaning up client: requested by: bcbutler.7796:35@atl-prod05
>>>> | CleanupSessions: hard-reset: forwarding the reset request to next tier(s)
>>>> | Send: cleanup request to [log in to unmask]:1093 for user: bcbutler
>>>> | Send: cleanup request to [log in to unmask]:1093 for user: bcbutler
>>>> | Send: cleanup request to [log in to unmask]:1093 for user: bcbutler
>>>> | Send: cleanup request to [log in to unmask]:1093 for user: bcbutler
>>>> | Send: cleanup request to [log in to unmask]:1093 for user: bcbutler
>>>> | Send: cleanup request to [log in to unmask]:1093 for user: bcbutler
>>>> | Send: cleanup request to [log in to unmask]:1093 for user: bcbutler
>>>> | Send: cleanup request to [log in to unmask]:1093 for user: bcbutler
>>>> | Send: cleanup request to [log in to unmask]:1093 for user: bcbutler
>>>>
>>>> I actually ran it twice. The first time it seemed to kill all sessions on the workers but left the one on the master behind. The second time it killed the one on the master as well.
>>>>
>>>> regards,
>>>> Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
>>>>
>>>> On Jun 10, 2010, at 2:51 AM, Bart Butler wrote:
>>>>
>>>>> Another thing that has consistently been a problem with the cluster is the seeming inability to kill a job. I hit Cancel/Stop on the GUI and waited about 10 minutes for the process to complete. This did nothing, so I hit Ctrl-C, which got me this:
>>>>>
>>>>> Enter A/a to switch asynchronous, S/s to stop, Q/q to quit, any other key to continue: s
>>>>> Info in <TSignalHandler::Notify>: Processing interrupt signal ... s
>>>>>
>>>>> Enter A/a to switch asynchronous, S/s to stop, Q/q to quit, any other key to continue: s
>>>>> Info in <TSignalHandler::Notify>: Processing interrupt signal ... s
>>>>>
>>>>> Info in <TMonitor::Select>: *** interrupt occured ***
>>>>>
>>>>> Enter A/a to switch asynchronous, S/s to stop, Q/q to quit, any other key to continue: s
>>>>> Info in <TSignalHandler::Notify>: Processing interrupt signal ... s
>>>>> Info in <TMonitor::Select>: *** interrupt occured ***
>>>>>
>>>>> Selecting q for quit doesn't seem to do anything either. Eventually I had to log into atlint01 again and kill the root process manually. On a subsequent attempt, I was able to reconnect successfully, but my previous session was still running (or crashed) and it just reconnected to that (stalled) session. I'm trying to kill that session again now so I can run a longer test.
>>>>>
>>>>> Short version: there seems to be an issue with killing jobs cleanly that leaves the cluster in an unusable state (at least until some internal timeout completes?), but the cluster does run (from the T2 storage at the moment; a longer T2 vs. cluster storage performance test is in the works, assuming this session ever dies) using all workers, provided the shared library is pre-compiled. This is something I saw in local sessions and back in May, so it is not really surprising that it's an issue with the cluster too, considering my code to make the package hasn't changed much. Figuring out on-the-fly compilation on the workers would be nice down the road; getting the ability to kill jobs cleanly is necessary now, though.
>>>>>
>>>>> -Bart
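(For context on the pre-compiled package mentioned above: a PROOF .par file is a tarball containing a PROOF-INF directory, typically with a BUILD.sh and a SETUP.C that each worker runs after unpacking the package. A minimal SETUP.C that only loads a pre-built shared library might look like the sketch below; the library name is illustrative.)

// PROOF-INF/SETUP.C -- run on each worker after the package is unpacked
// (and after PROOF-INF/BUILD.sh, if present). Library name is illustrative.
Int_t SETUP()
{
   // Load the pre-built shared library shipped inside the .par file.
   if (gSystem->Load("libMyAnalysis.so") < 0)
      return -1;   // signal failure to PROOF
   return 0;       // success
}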
>>>>>
>>>>> Yang, Wei wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Hi Bart,
>>>>>>
>>>>>> I made another attempt. Here is what I used to start the client side (assuming bash) on a RHEL5-64 machine:
>>>>>>
>>>>>> . /afs/slac/g/atlas/packages/gcc432/setup.sh
>>>>>> export ROOTSYS=/afs/slac.stanford.edu/g/atlas/packages/root/root5.26.00b-slc5_amd64-gcc43
>>>>>> export PATH=${PATH}:$ROOTSYS/bin
>>>>>> export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ROOTSYS/lib
>>>>>> $ROOTSYS/bin/root
>>>>>>
>>>>>> It seems I was able to load a .par file. Can you give it a try? Also, remember that on atlint01, if you copy a file to /xrootd/proof/bcbutler, you should use TDSet::Add("root://boer0123//atlas/proof/bcbutler/..."). However, I found that reading from T2 storage seems to be faster than reading from the disks in the proof cluster (without the localizer).
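(Putting these pieces together, a minimal client session against the cluster might look like the sketch below. The package name, selector, tree name, and file name are placeholders for illustration, not actual settings from this thread.)

// Sketch of a PROOF client session on atlint01 (all names illustrative).
{
   TProof *p = TProof::Open("boer0123");                 // connect to the PROOF master

   p->UploadPackage("MyAnalysis.par");                   // hypothetical .par package
   p->EnablePackage("MyAnalysis");

   TDSet *dset = new TDSet("TTree", "CollectionTree");   // tree name is an assumption
   dset->Add("root://boer0123//atlas/proof/bcbutler/somefile.root");

   dset->Process("MySelector.C+");                       // hypothetical selector
}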
>>>>>>
>>>>>> regards,
>>>>>> Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
>>>>>>
>>>>>>
>>>>>> On Apr 29, 2010, at 3:25 PM, Bart Butler wrote:
>>>>>>
>>>>>>> First things first: I think I killed your cluster. The xrootd mount is no longer readable from atlint01 and I can't submit PROOF jobs to it anymore. This happened after I killed my client root session manually following a massively screwed-up job.
>>>>>>>
>>>>>>> Secondly, I am having a hell of a time compiling my shared library correctly. Which version of ROOT is the cluster running? If I'm not running the exact same ROOT version and gcc version as every worker node, I can't make binaries (which is what Booker did with his test package, it seems; I do it too when I run PROOF-Lite). And if I can't make binaries, I have to submit source packages. This should be fine, but it has never worked well for me. My first theory was that because the packages are kept in a common place on xrootd in my user space, the compilation errors I was getting from some workers were because all 32 (I was never able to connect to 4 of the 36 workers) tried to compile the package at the same time in the same place. Running on a single worker worked fine (but of course was slow). I don't think this compilation issue was the whole story though, because if the single-worker run worked, the next time all workers should have been able to load the compiled version without problems, assuming they are all running the same version of ROOT, and they crashed and burned just as badly that time. That's when the cluster itself crashed.
>>>>>>>
>>>>>>> Another thing was that making TDSets from the Tier 2 xrootd storage worked fine, but when I tried using the same files I had copied to the cluster xrootd storage it couldn't find them for some reason.
>>>>>>>
>>>>>>> My log files should be in /xrootd/proof/bcbutler if you guys get the cluster working again.
>>>>>>>
>>>>>>> -Bart
>>>>>>>
>>>>>>>
>>>>>>> Yang, Wei wrote:
>>>>>>>
>>>>>>>> Hi Bart, David,
>>>>>>>>
>>>>>>>> any news on this?
>>>>>>>>
>>>>>>>> regards,
>>>>>>>> Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
>>>>>>>>
>>>>>>>> On Apr 21, 2010, at 12:03 PM, Bart Butler wrote:
>>>>>>>>
>>>>>>>>> I'll try to run a few jobs tonight and see what happens.
>>>>>>>>>
>>>>>>>>> -Bart
>>>>>>>>>
>>>>>>>>> Yang, Wei wrote:
>>>>>>>>>
>>>>>>>>>> [add Andy Hass ...]
>>>>>>>>>>
>>>>>>>>>> Hi David, Booker,
>>>>>>>>>>
>>>>>>>>>> I mounted the xrootd space of the proof cluster at /xrootd/proof on atlint01.  It looks like we have ~1.8TB total on the cluster. So something ~ 1TB should work.
>>>>>>>>>>
>>>>>>>>>> The cluster should be able to access T2 storage if you provide the URLs of the root files to process. But the whole idea of using PROOF is to avoid network traffic as much as possible. As we are still validating the functionality, it would be good to try both, or you could put half of the data on the proof cluster and leave the other half on T2 storage (no NFS please).
>>>>>>>>>>
>>>>>>>>>> The proof master node is boer0123. If you copy files to the cluster, the xroot URL is root://boer0123//atlas/proof (I suggest you create a physicist sub-directory).
>>>>>>>>>>
>>>>>>>>>> Booker, it looks like PROOF also leaves some files on the cluster. How would you suggest we manage the space: by user, by group, or something else?
>>>>>>>>>>
>>>>>>>>>> regards,
>>>>>>>>>> Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Apr 21, 2010, at 8:40 AM, David W. Miller wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Booker and Wei,
>>>>>>>>>>>
>>>>>>>>>>> I have a few questions: from what machine do we launch the jobs? Any machine at SLAC, as long as we specify the URI correctly? Also, if the data are on atlasuserdisk or usr in /xrootd/atlas/, is that sufficient?
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> David
>>>>>>>>>>>
>>>>>>>>>>> On Apr 21, 2010, at 17:36, Ariel Schwartzman wrote:
>>>>>>>>>>>
>>>>>>>>>>>> From: Booker Bense <[log in to unmask]>
>>>>>>>>>>>> Date: April 21, 2010 16:09:51 GMT+02:00
>>>>>>>>>>>> To: "Schwartzman, Ariel G." <[log in to unmask]>
>>>>>>>>>>>> Cc: "Yang, Wei" <[log in to unmask]>
>>>>>>>>>>>> Subject: Re: Proof cluster ready for testing
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, 21 Apr 2010, Ariel Schwartzman wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Booker,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I cannot access this machine remotely:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> ssh -Y boer0123.slac.stanford.edu
>>>>>>>>>>>>>
>>>>>>>>>>>>> ssh: connect to host boer0123.slac.stanford.edu port 22: Operation timed out
>>>>>>>>>>>>
>>>>>>>>>>>> It's on the SLAC internal network; you'll need to log in to a SLAC machine and run ROOT programs from there. You shouldn't need login access to the master node.
>>>>>>>>>>>>
>>>>>>>>>>>> _ Booker C. Bense
>>>>>>>>>>>
>>>>>>>>>>> ==========================================
>>>>>>>>>>> David W. Miller
>>>>>>>>>>> ------------------------------------------
>>>>>>>>>>> SLAC
>>>>>>>>>>> Stanford University
>>>>>>>>>>> Department of Physics
>>>>>>>>>>>
>>>>>>>>>>> SLAC Info: Building 84, B-156. Tel: +1.650.926.3730
>>>>>>>>>>> CERN Info: Building 01, 1-041. Tel: +41.76.487.2484
>>>>>>>>>>>
>>>>>>>>>>> EMAIL: [log in to unmask]
>>>>>>>>>>> HOMEPAGE: http://cern.ch/David.W.Miller
>>>>>>>>>>> ==========================================