HPS-SOFTWARE Archives

HPS-SOFTWARE@LISTSERV.SLAC.STANFORD.EDU


Subject: Re: software schedule
From: Maurik Holtrop <[log in to unmask]>
Reply-To: Software for the Heavy Photon Search Experiment <[log in to unmask]>
Date: Sat, 5 Apr 2014 15:35:37 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (130 lines)

Hello Jeremy,

The SRS data catalog looks like a really nice project, but I am not so convinced it really suits our needs. I am going to be a bit critical of it here.

The proposed scheme looks to me to have a lot of potential points of failure. It would be much better to have any data catalog run local to where the data resides. I don't like the idea that everything depends on successful ssh commands to SLAC; that is a serious potential failure point. Shame on Tony for using Oracle as a backend and making the SRS not portable!

There has never been ssh access from the counting house to the outside, nor is there such access from the batch farm. That means you could communicate with the SLAC SRS only via a double-hop ssh. I see this as a problem, since it is exactly at these locations that you would want to query the catalog or add data to it.

One of the key features of the SRS appears to be that it can look at files locally and extract the useful information from them. If the files are not local, this won't work. Running cron jobs to make up for the deficiency does not sound like a good idea, because it is likely to be unreliable: files may disappear before the job runs, etc. I don't see these as simple issues.

Additionally, not running locally means that you cannot click on "download" and expect to get the file.

Other key features appear to be missing from the SRS. I see no way to filter the presented data on properties, such as "give me all the HPS data with 1.1 GeV beam energy that are fully reconstructed", and then add "status good" or something like that. I don't need a catalog simply to find the files; I need a catalog to find the files worth looking at, and one that does not require me to click around endlessly to get them. I need to be able to query the system from a script for files with certain properties and get back a list of all those files and where they reside, so that the script can run my analysis over everything returned.
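
To make that concrete, here is roughly the kind of scripted access I mean. This is a sketch only; "catalog-query" and "run-analysis" are made-up command names standing in for whatever interface the catalog actually exposes:

    # Sketch only: "catalog-query" and "run-analysis" are hypothetical
    # commands, not part of the SRS or any existing tool.
    import subprocess

    def query_catalog(**props):
        """Return (host, path) pairs for files matching the given properties."""
        args = ["%s=%s" % (k, v) for k, v in props.items()]
        out = subprocess.check_output(["catalog-query"] + args, text=True)
        return [tuple(line.split(None, 1)) for line in out.splitlines()]

    for host, path in query_catalog(beam_energy="1.1GeV",
                                    stage="recon", status="good"):
        subprocess.run(["run-analysis", path], check=True)

The whole loop is then scriptable: no clicking, and the answer tells me where each file lives.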

The other observation I make looking at the SRS is that no matter which experiment I choose from the top bar, I get the same "key locations: nothing to display" and the same tree of files on the left. It seems to be still rather sparsely populated, even though some of the data is dated from 2010.

It sounds to me that with additional work the missing features could be added, but if the whole thing must run at SLAC, I for one would be happier with a simple MySQL database running locally.

Best,
	Maurik





On Apr 4, 2014, at 9:02 PM, McCormick, Jeremy I. <[log in to unmask]> wrote:

> Hi,
> 
> Thanks for the information.  I am going to CC this to the software list.
> 
> I just had a long talk with two of the experts here on the SRS data catalog.  It is being used for three experiments: EXO, the Fermi telescope, and the (upcoming) LSST.  I think we should try to make it work rather than roll our own, and they are more than willing to work with us on this and support its use.  It will also be getting some major feature additions, such as a REST web interface, planned before we start running, so this should benefit us.
> 
> Accessing the SRS data catalog does require an external connection.  If outbound SSH is allowed from the counting house (which I assume it is), then we can use Homer's strategy of connecting to SLAC via a password-less account and executing a script at SLAC to make an entry for a file along with its metadata.  This could be done at any point in the data processing or reconstruction chain, provided the SSH key for accessing SLAC is available to the script running the update.  It would be best to make a new shared account at SLAC for this purpose rather than use someone's personal account.
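> 
> Roughly what I have in mind, as a sketch; the shared account name ("hpscat"), the host alias, the key path, the example file path, and the register_file.py script are all hypothetical stand-ins:
> 
>     # Sketch only: register one file in the catalog over password-less SSH.
>     # Account, host alias, key path, and remote script are assumed names.
>     import subprocess
> 
>     def register(path, metadata):
>         kv = ["%s=%s" % (k, v) for k, v in metadata.items()]
>         subprocess.run(
>             ["ssh", "-i", "/path/to/shared/key", "hpscat@slac-gateway",
>              "register_file.py", path] + kv,
>             check=True, timeout=60)  # fail loudly if SLAC is unreachable
> 
>     register("/mss/hallb/hps/raw/hps_001234.evio.0",  # assumed layout
>              {"run": "1234", "nevents": "500000"})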
> 
> The other strategy, which I think would work for the reconstruction (but not the live data taking?), is having a periodic cron job that updates the data catalog based on a directory where the files are stored at JLab.  It would keep a timestamp file that it touches: files newer than the timestamp get processed and entered into the data catalog, and if the job fails, the timestamp isn't updated.  Tony indicated that updating the exact same "logical file name" in the catalog simply overwrites it with the new information, so there is no need to worry about duplication.
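> 
> In sketch form, with assumed paths and register() standing in for the SSH update above:
> 
>     # Sketch of the timestamp-file strategy: process files newer than the
>     # stamp, and only touch the stamp if every registration succeeded.
>     import os
> 
>     STAMP = "/work/hps/.catalog_stamp"     # assumed location
>     DATA_DIR = "/work/hps/recon"           # assumed location
> 
>     def register(path, metadata):
>         print("register", path, metadata)  # stand-in for the SSH call above
> 
>     last = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0
>     for name in sorted(os.listdir(DATA_DIR)):
>         path = os.path.join(DATA_DIR, name)
>         if os.path.getmtime(path) > last:
>             register(path, {"stage": "recon"})  # an error here aborts the job
>     with open(STAMP, "a"):
>         pass
>     os.utime(STAMP, None)                  # stamp updated only on success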
> 
> I'm not sure that run quality is a built-in concept of the catalog, but it can be handled through metadata tags.  You should talk to Matt about this.  He has all kinds of ideas about it.
> 
> So breaking down your workflow (which is very useful information), I would do the following if we were to use the SRS catalog:
> 
> 1.1 When the files go from RAID to tape, trigger a script using password-less SSH login to make an entry in the data catalog for each file.  (Not sure what would trigger this though.  How do we insert a hook when this happens?)
> 
> 2.1 For the recon, execute a cron script that looks for new files in the output directory where the recon files go and updates the data catalog accordingly.  This could run every 10-15 minutes.  (If they are only staged to disk for a short time and then written to tape, this might not work.)
> 
> 2.2 My first suggestion is saving the log files from the batch job and pulling the required metadata, such as the number of events processed, out of there (see the log-parsing sketch after this list).  Then update the catalog similar to 2.1, e.g. asynchronously via a cron job.
> 
> 3.1 Do this like 2.2.
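> 
> The log-parsing step in 2.2 could look something like this; the "events processed" line format is an assumption about what the recon job actually prints:
> 
>     # Sketch: pull the event count out of a batch log file.
>     import re
> 
>     def events_from_log(logfile):
>         pat = re.compile(r"events processed:\s*(\d+)", re.IGNORECASE)
>         with open(logfile) as f:
>             for line in f:
>                 m = pat.search(line)
>                 if m:
>                     return int(m.group(1))
>         return None  # pattern not found; flag the file for inspection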
> 
> For replication to SLAC, I'd trigger the update via a cron job that uses rsync to see which files need to be copied to SLAC and makes entries in the catalog for the new files as they are copied over.
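> 
> As a sketch (host and paths assumed, registration as in the earlier sketches):
> 
>     # Sketch: push new files to SLAC with rsync, then catalog what was sent.
>     import subprocess
> 
>     out = subprocess.check_output(
>         ["rsync", "-a", "--itemize-changes",
>          "/work/hps/recon/", "hpscat@slac-gateway:/data/hps/recon/"],
>         text=True)
>     # on the sending side, itemized lines starting with ">f" are transfers
>     for line in out.splitlines():
>         if line.startswith(">f"):
>             # assumes no spaces in file names
>             print("would register:", line.split()[-1])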
> 
> I do see some problems here having to do with...
> 
> A) Usage of tape storage, in which case rsync or another asynchronous process might not be able to catch the data on disk before it is written to tape.  In that case, you would need to fire off a script synchronously when the files arrive.  I don't actually know much about tape storage.  If a simple 'ls' command returns reasonable results, then having a cron job handle this might work fine, provided the directory locations are known.
> 
> B) External access via the batch system, if you want to update the data catalog from within the actual batch job itself.  I assume the batch system does NOT have internet access to outside sites, which makes it problematic to update the file catalog from within your batch script.  I think you may not want to do this anyway.
> 
> I'm not sure about running the actual app at JLab.  Unfortunately, it uses an Oracle database here at SLAC.  I'm not sure there's one of those just lying around for us to use at JLab, and setting one up there is probably more trouble than it is worth.  Since both EXO and Fermi have gotten this working remotely, I think it should definitely be possible for us, too.  We should give it a try.
> 
> Thoughts?
> 
> --Jeremy
> 
> -----Original Message-----
> From: Maurik Holtrop [mailto:[log in to unmask]] 
> Sent: Friday, April 04, 2014 5:13 PM
> To: McCormick, Jeremy I.
> Subject: Re: software schedule
> 
> Hi Jeremy,
> 
> I think that the system that Homer showed is fine for the job. I just think that we have to run this at JLab, perhaps with some integration glue with the Silo tape storage system.
> 
> Let me run through what I think the data catalog needs to do.
> 
> 
> 1.	A run is finished in the counting house. It will produce something like 60 to 100 files, with some naming convention that includes the run number and a sequence number. These files are automatically copied from the counting house RAID to the Silo cache storage and spun to tape.
> 
> 	1.	At this point, I would like these files to be added to the data catalog, including date and time, and some of the run information from the DAQ, like the number of events, the DAQ settings and the user comment.
> 
> 2.	The data is next processed by reconstruction scripts. These scripts will process the 100 files for the run.
> 
> 	1.	 For each file processed the number of events in the original raw data file is added to the catalog.
> 	2.	When a processing job is done, it will write to the catalog the number of events it processed (should be the same as 1, unless there was a file read error), and the number of events it wrote out.
> 
> 3.	Some post-processing jobs are run on each of the files from step 2. These jobs do some simple quality checking: the number of events with identified e- and e+, the number of events that pass certain cuts, the amount of noise, the quality of the tracks.
> 
> 	1.	Quality of the data is determined from step 3 in some quantitative manner and added to the catalog.
> 
> 
> We would then want to be able to query the data catalog for correlated information from the data quality check, such as "how many e+ e- pairs were there for all the run files", and then plot this quantity versus the time of the run. 
> 
> If the SLAC data catalog cannot do things like store the data quality information per run file, it would not be all that difficult (but time consuming) to do this in a MySQL database. (That is what I did way back when I was still smart.)
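> 
> To show how little is needed, here is a sketch of that fallback. I use Python's built-in sqlite3 just to keep the example self-contained; MySQL would only change the driver and connection line, and the schema and column names are purely illustrative:
> 
>     # Sketch: minimal per-file catalog with a quality field, plus the kind
>     # of correlated query I mean.  Schema and columns are illustrative.
>     import sqlite3
> 
>     db = sqlite3.connect("hps_catalog.db")
>     db.execute("""CREATE TABLE IF NOT EXISTS files (
>                       run INTEGER, seq INTEGER, path TEXT, host TEXT,
>                       nevents INTEGER, epem_pairs INTEGER,
>                       quality TEXT, run_time TEXT)""")
> 
>     # e.g. "how many e+ e- pairs per run, versus the time of the run"
>     rows = db.execute("""SELECT run_time, SUM(epem_pairs)
>                          FROM files WHERE quality = 'good'
>                          GROUP BY run ORDER BY run_time""").fetchall()
>     for t, pairs in rows:
>         print(t, pairs)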
> 
> Best,
> Maurik
> 
> 
> 
> On Apr 4, 2014, at 7:43 PM, McCormick, Jeremy I. <[log in to unmask]> wrote:
> 
> 
> 	I'm going to talk to Tony about the SRS data catalog access from JLAB.  I'll let you know what I find out...
> 	
> 	-----Original Message-----
> 	From: Maurik Holtrop [mailto:[log in to unmask]] 
> 	Sent: Friday, April 04, 2014 4:24 PM
> 	To: McCormick, Jeremy I.
> 	Subject: Re: software schedule
> 	
> 	I think what is missing for the SLAC database is that it would be impossible, or nearly so, to update it with the run data as it is produced, and the same issue would exist for any scripts that run the batch jobs. We'd be much better off having a data catalog that runs at JLab.
> 	A very, very long time ago, when I did a lot of data processing at JLab, I set up a simple MySQL database that I filled from the scripts. Nothing fancy, but it did keep track of the jobs for me. I am also surprised that there isn't any effort to make a data catalog system. I asked around a little and no one seemed to know. I'll ask a bit more.
> 	Could we perhaps just install the SLAC system at JLab? Then at least it could be used there.
> 	
> 	- Maurik 
> 	
> 	
> 
