Hi, Takashi.

I CC'd your question to the software list for future reference...

> Where do I save this file? 

For now, we would need to put the uncompressed db file on NFS and point to it using the java_args passed to the JobManager within the job scripts.

The relative path of 'hps_conditions.db' can simply be replaced with a full file path.
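
For example, the java_args line in a job script might become something like this (the NFS path here is only a placeholder for wherever the file actually lives):

java_args=["-Dorg.hps.conditions.url=jdbc:sqlite:/nfs/hps/conditions/hps_conditions.db"]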

In the future, we might automate this so that you set 'offline_conditions_db = True' in the JobManager and it automatically uses a copy managed by hps-mc (this would still need to be coded up, though).

> When the DB is updated, how do I get the latest file? Do I need to ask you to generate the file? 

Someone would need to regenerate the file from a SQL dump and upload it to GitHub, and then you would download and decompress it.

I will make sure the script to do this is checked into the hps-conditions-backup GitHub repository.
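
Roughly, regenerating it would look something like this (the file names follow the ones in the backup repo; the checked-in script may differ):

tar -zxvf hps_conditions_for_sqlite.sql.tar.gz
sqlite3 hps_conditions.db < hps_conditions_for_sqlite.sql
tar -czvf hps_conditions.db.tar.gz hps_conditions.db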

> I have a more serious problem. As you may have heard, there was a bug in the field map generation. I need to re-run some MC as soon as the field map is corrected. How do I update HPS-MC? Once HPSJAVA is updated, do I just need to build HPS-MC from scratch?
> Another related question. We need to study next year's configuration. Once next year's detector is created in HPSJAVA, how do I update HPS-MC? Do I just build HPS-MC from scratch?

To manage the version of hps-java used by hps-mc, you can either rebuild hps-mc, which gets you the current master of hps-java, or set the environment variable HPSJAVA_JAR to the full path of the bin jar that you want to use.  The bin jars can be built locally by you or downloaded from the Nexus repository.
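
For example (the jar path is just a placeholder):

export HPSJAVA_JAR=/full/path/to/hps-java-bin.jar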

You can also set an hps-java tag when building hps-mc like this:

cmake -DHPSJAVA_TAG=hps-java-4.0 ..

Though I have not used the last option much.

The environment variable is how we have been setting the jar file for recent jobs, e.g. for the tuple branch.

To manage the actual versions of hps-java itself, for instance if you need to run jobs from branches, you can clone hps-java, use a command like 'git checkout name_of_branch' to get the branch you want to build, and then compile the project using 'mvn -DskipTests'.  Then copy or symlink the bin jar that is built.  Copying the full bin jar into the job directory on NFS is probably the safest option; a rough sketch follows.
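
Putting that together, the workflow would be roughly this (the location of the bin jar under distribution/target is my assumption; check what your build actually produces):

git clone https://github.com/JeffersonLab/hps-java.git
cd hps-java
git checkout name_of_branch
mvn -DskipTests
# copy the built jar to the NFS job directory (destination is a placeholder)
cp distribution/target/*-bin.jar /path/to/job_dir/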

--Jeremy


From: Maruyama, Takashi
Sent: Thursday, August 23, 2018 10:33:13 AM
To: McCormick, Jeremy I.
Subject: Re: running hps-java locally
 

Where do I save this file? When the DB is updated, how do I get the latest file? Do I need to ask you to generate the file? 


I have a more serious problem. As you may have heard, there was a bug in the field map generation. I need to re-run some MC as soon as the field map is corrected. How do I update HPS-MC? Once HPSJAVA is updated, do I just need to build HPS-MC from scratch?


Another related question. We need to study next year's configuration. Once next year's detector is created in HPSJAVA, how do I update HPS-MC? Do I just build HPS-MC from scratch?


From: McCormick, Jeremy I.
Sent: Tuesday, August 21, 2018 10:58:35 AM
To: Maruyama, Takashi
Cc: hps-software
Subject: Re: running hps-java locally
 

Correction:

The file to download and use locally is actually here:

https://github.com/JeffersonLab/hps-conditions-backup/blob/master/hps_conditions.db.tar.gz
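
If you want to grab it from the command line, something like this should work (converting the blob link above to GitHub's raw-file form):

wget https://github.com/JeffersonLab/hps-conditions-backup/raw/master/hps_conditions.db.tar.gz
tar -zxvf hps_conditions.db.tar.gz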


From: McCormick, Jeremy I.
Sent: Tuesday, August 21, 2018 11:08:44 AM
To: Maruyama, Takashi
Cc: hps-software
Subject: running hps-java locally
 

Hi,


I have a basic working version of hps-java on a branch (iss320) which can load conditions from a SQLite db that I exported from MySQL.  This means that you should be able to run the recon entirely offline without needing to install MySQL locally and import the conditions db.


The issue branch is here:


https://github.com/JeffersonLab/hps-java/tree/iss320

What are the next steps?  Should we attempt to run it in hps-mc?


I have only run the readout, and I did not check the results carefully (just saw that it wrote out some events); perhaps we need to run and validate the recon as well before doing anything else?  It is possible, for instance, that some conditions values might not come out identical, e.g. due to rounding/precision errors, so that should be checked carefully.

It is a fairly simple matter to run the bin jar and point to the local db file...

java -Dorg.hps.conditions.url=jdbc:sqlite:hps_conditions.db -jar hps-java-bin.jar [...]

The db file with the exported conditions data can be obtained from here:

https://github.com/JeffersonLab/hps-conditions-backup/blob/master/hps_conditions_for_sqlite.sql.tar.gz


It should be downloaded into the working directory and decompressed using 'tar -zxvf'.


Running this in hps-mc should be pretty straightforward; I think we would just need something like this passed to the JobManager in each script:


java_args=["-Dorg.hps.conditions.url=jdbc:sqlite:hps_conditions.db"]

We would need to either copy the db file to each batch node and read it from a relative path, symlink the file on each node, or read it from a fixed, hard-coded NFS location.  (All of these options require some relatively minor changes to the Python scripts.)
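
For instance, the symlink option could be as simple as this in each job's working directory (the NFS path is only a placeholder):

ln -s /nfs/hps/conditions/hps_conditions.db hps_conditions.db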


Let me know how you want to proceed and what I can do to help...


BTW, I will be leaving on a 3-month trip next Tuesday evening, and I do not plan on doing any HPS work during that time, so please get back to me ASAP so that we can hopefully get this running successfully before I leave.

--Jeremy


