
OK, for the large-scale test, the best way to get a large test data set
is to run the duplicator over the pt12 data.  I've already copied the
pt12 data to in2p3, so it's available there.

The duplicator should be run on each node, so that each node generates
its own data.  This all runs in parallel, so hopefully it won't take
too long.  The basic example command for the pt12 data is:

qserv/master/examples/makeChunk.py -S 100 -s 10 --dupe --node-count=120 \
    --node=1 --chunk-prefix=Object --theta-name=ra_PS --phi-name=decl_PS \
    --schema Object.sql Object.txt

This was for the yili test here at SLAC, which had 120 nodes; I used
100 stripes and 10 substripes (to produce ~10k chunks).  This was on
the Object table.  You may have to tweak the path to the makeChunk.py
script, the path to the schema (Object.sql), and the path to the CSV
file (Object.txt).  Each node also needs its own node number for the
--node option.
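Since only --node changes from machine to machine, a small wrapper can
generate the per-node command.  This is just a sketch (it prints the
commands rather than running them); it assumes nodes are numbered
1..120 and that the paths match the example above, so adjust both for
your installation:

```shell
# Print the duplicator command for each of the 120 nodes.
# Paths below are the example paths and will likely need tweaking.
for node in $(seq 1 120); do
  echo qserv/master/examples/makeChunk.py -S 100 -s 10 --dupe \
    --node-count=120 --node="$node" --chunk-prefix=Object \
    --theta-name=ra_PS --phi-name=decl_PS \
    --schema Object.sql Object.txt
done
```

Dropping the leading `echo` (and running each line on the right
machine) turns the dry run into the real thing.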

But first we should copy the pt12 data to each node.  Then run the
above command on each node, with the appropriate paths.  In a few
hours each node should have its data, and the data then gets loaded
on each node separately.
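The copy-then-run workflow could be driven from one machine with scp
and ssh.  A minimal dry-run sketch, where the host names
(worker01.., shown for 3 nodes) and the /data/pt12 path are
placeholders, not the real cluster layout:

```shell
# Dry run: print the copy and duplicate steps for each worker node.
# worker%02d host names and /data/pt12 are hypothetical placeholders.
for i in 1 2 3; do
  host=$(printf 'worker%02d' "$i")
  echo "scp Object.sql Object.txt $host:/data/pt12/"
  echo "ssh $host 'cd /data/pt12 && qserv/master/examples/makeChunk.py" \
       "... --node=$i ...'"
done
```

Removing the echoes (and filling in the real options) would launch the
duplicator on all nodes in parallel, after which each node loads its
own data.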

Douglas
