Serge,
> As long as this information gets produced in fairly coherent fashion
> (i.e. I can do the join on small sets of related files in memory)
> that's OK,
For now, W13 would take 352MB for an objectId/RA/dec array in
memory; somewhat more if you want it in, e.g., a hash table for easy
lookup. Any Object-to-Foo matching table will be generated at the same
granularity and aligned with the Foo pieces.
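To make the 24-bytes-per-object figure concrete, here is a minimal sketch (not the actual W13 layout or object count, both of which are placeholders) of an objectId/RA/dec table packed as a NumPy structured array, plus a dict index for the keyed-lookup case mentioned above:

```python
import numpy as np

# Sketch only: 8-byte objectId + two 8-byte doubles = 24 bytes/row,
# matching the per-object figure used in this thread.
dtype = np.dtype([("objectId", "<i8"), ("ra", "<f8"), ("dec", "<f8")])

n_objects = 1_000_000  # illustrative count, NOT the real W13 size
table = np.zeros(n_objects, dtype=dtype)
table["objectId"] = np.arange(n_objects)

print(table.nbytes)  # 24 bytes * n_objects -> 24000000

# The "hash table for easy lookup" case: a dict index over the packed
# array; it costs extra memory on top of the array itself.
index = {int(oid): i for i, oid in enumerate(table["objectId"])}
row = table[index[12345]]
print(int(row["objectId"]))  # -> 12345
```

Scaling the same 24-byte row to the later numbers in this message: 40e9 objects * 24 bytes is 960 GB, as stated below.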
> For now we can just include the columns we need by dumping from the
> W13 production database.
Makes sense.
> Is that a safe assumption though? If I have to deal with joins that
> involve 10s of terabytes of ASCII files, this is going to be
> non-trivial to do well.
In *2031*, you might have to do 40G Objects * 24 bytes = 960 GB.
I don't think that'll be a huge problem.
--
Kian-Tat Lim, LSST Data Management, [log in to unmask]
########################################################################
Use REPLY-ALL to reply to list
To unsubscribe from the QSERV-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=QSERV-L&A=1