On Jun 10, 2014, at 7:57 PM, Kian-Tat Lim <[log in to unmask]> wrote:
> Serge,
>
>> If there were, we could just use it for objectId queries as well. This
>> is tempting to me, except that I think the partitioner needs something
>> like a central “objectId" -> chunk mapping unless we force people to
>> supply an associated object position for all partitioned tables.
>
> It doesn't necessarily need to be central -- we could spray the
> list of objectIds to the workers and have each check its own chunk(s).
If I’m not misunderstanding what you mean, this has computational complexity Θ(N) and requires p*Θ(N) bytes to be sent over the network (where N is the number of input records and p is the number of workers). Using an index structure and parallelizing across p workers, we should get computation costs of Θ(N/p * (log N/p + log K)), where K is the number of object IDs in the index, and 2*Θ(N) bytes sent over the network.
I think this could work well if the partitioning input is relatively small (which should be the case when continuously loading small batches), though it does mean we’d have to move components of the partitioner into MySQL UDFs to avoid the costs of a query per objectId. But I’m not so sure it wouldn’t become a bottleneck if someone hands us a really big pile of relatively local data (not coming across some thin WAN pipe) all at once.
Also, wouldn’t the workers serving queries be less insulated from the performance effects of loading data in this setup?
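To make the cost estimate above concrete, here is a minimal sketch of what one worker's share of the index-based approach might look like. The function names and the flat sorted-array index layout are illustrative assumptions, not Qserv's actual structures: sorting the local batch of N/p input IDs costs O(N/p log N/p), and each binary search in a sorted index of K entries costs O(log K), which together give the Θ(N/p * (log N/p + log K)) figure.

```python
import bisect

def map_ids_to_chunks(batch_ids, index_ids, index_chunks):
    """Map one worker's share of input objectIds to chunk numbers.

    batch_ids:    this worker's N/p input objectIds (unsorted)
    index_ids:    sorted objectIds of the central index (K entries)
    index_chunks: chunk number for each entry of index_ids

    Sorting the batch costs O(N/p log N/p); each bisect lookup
    costs O(log K). IDs absent from the index are skipped.
    """
    result = {}
    for oid in sorted(batch_ids):
        i = bisect.bisect_left(index_ids, oid)
        if i < len(index_ids) and index_ids[i] == oid:
            result[oid] = index_chunks[i]
    return result

# Toy index: four objects spread over chunks 1 and 2.
print(map_ids_to_chunks([30, 10, 99], [10, 20, 30, 40], [1, 1, 2, 2]))
# {10: 1, 30: 2}
```

Doing this inside a MySQL UDF rather than per-objectId queries amortizes the lookup over the whole batch, which is the point made above.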
>> And if we need it for the partitioner, we might as well take advantage
>> on the czar. Once we are there, it does not seem like a big stretch to
>> allow using the same sorts of indexes on columns other than the PK of
>> the director table.
>
> True.
>
> I think you're right that the per-chunk overhead is always going
> to be too big. Too bad, though -- MySQL has indexes down in the
> workers, but we can't use them very effectively. On top of that, I
> don't think there's much of a way to compress the in-memory
> representation of the objectId (or secondary) index. Maybe a Bloom
> filter?
For compression, I still think we should do the run-length encoding I proposed in a previous e-mail, as there is little or no storage penalty in the general case, and what sounds like around a factor of 1000 savings for LSST. But the Bloom filter idea is really interesting… if we forget about the partitioner for a second, there’s no reason to require a perfect mapping from objectIds to chunks - it should be perfectly OK to dispatch to more than one chunk (as long as it’s a small number). I’ll have to read up on Bloom filter space/accuracy tradeoffs - it’s definitely something we should think about when we are revisiting the objectId indexing.
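For reference, here is a rough sketch of both ideas. The run layout below is a made-up toy (the real encoding is the one from the earlier e-mail): if objectIds land in contiguous runs per chunk, the index collapses to (run start, chunk) pairs, and lookup is one binary search. The Bloom filter part is just the standard sizing formula, which gives a feel for the space/accuracy tradeoff: about 9.6 bits per key for a 1% false-positive rate, independent of key size.

```python
import bisect
import math

# Run-length-encoded objectId -> chunk index (toy layout, assumed here):
# run i covers objectIds in [RUN_STARTS[i], RUN_STARTS[i+1]).
RUN_STARTS = [0, 1000, 2000, 3000]
RUN_CHUNKS = [7, 3, 7, 9]

def chunk_for(object_id):
    """Return the chunk owning object_id via binary search over run starts."""
    return RUN_CHUNKS[bisect.bisect_right(RUN_STARTS, object_id) - 1]

def bloom_bits_per_key(false_positive_rate):
    """Standard Bloom filter sizing: m/n = -ln(p) / (ln 2)^2 bits per key.

    A false positive here just means dispatching a query to one extra
    chunk, which is tolerable as long as the rate stays small.
    """
    return -math.log(false_positive_rate) / math.log(2) ** 2

print(chunk_for(1500))              # 3
print(bloom_bits_per_key(0.01))    # roughly 9.6 bits per key
```

With a perfect-mapping index, the RLE storage cost is proportional to the number of runs rather than the number of objects, which is where the factor-of-1000 estimate for LSST would come from if objects average ~1000 per run.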
########################################################################
Use REPLY-ALL to reply to list
To unsubscribe from the QSERV-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=QSERV-L&A=1