LISTSERV mailing list manager LISTSERV 16.5

QSERV-L Archives


QSERV-L@LISTSERV.SLAC.STANFORD.EDU



Subject: Re: Secondary indexes
From: Serge Monkewitz <[log in to unmask]>
Reply-To: General discussion for qserv (LSST prototype baseline catalog)
Date: Tue, 10 Jun 2014 19:30:24 -0700
Content-Type: text/plain
Parts/Attachments: text/plain (23 lines)

On Jun 10, 2014, at 6:33 PM, Kian-Tat Lim <[log in to unmask]> wrote:

> Serge and Daniel had a conversation on HipChat today about secondary
> indexes.  It sounds like this is meant to handle indexes in addition to
> objectId that map from something to chunk number.  I'm concerned because
> adding indexing on the czar adds another layer, another copy of data,
> more (shared) czar state, and another place for things to get out of
> sync and go wrong.
> 
> Is there some way for us to lower the overhead of issuing queries to
> chunks so that we can just use "normal" local per-chunk indexes instead
> of a central index?

If there were, we could just use it for objectId queries as well. That is tempting to me, except that I think the partitioner needs something like a central "objectId" -> chunk mapping unless we force people to supply an associated object position for all partitioned tables. And if we need it for the partitioner, we might as well take advantage of it on the czar. Once we are there, it does not seem like a big stretch to allow using the same sorts of indexes on columns other than the PK of the director table. Still, I'd personally be quite happy to reduce scope.
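To make the trade-off above concrete, here is a minimal sketch (not Qserv code; the names SecondaryIndex and route are illustrative assumptions) of what a central key -> chunk mapping buys the czar: a point query on an indexed key can be routed to a single chunk, while an unindexed key falls back to broadcasting to every chunk.

```python
class SecondaryIndex:
    """Central mapping from a key column value (e.g. objectId) to the
    chunk that holds the corresponding row. Hypothetical sketch only."""

    def __init__(self):
        self._index = {}  # key value -> chunk id

    def add(self, object_id, chunk_id):
        # Would be populated by the partitioner as it assigns rows to chunks.
        self._index[object_id] = chunk_id

    def route(self, object_id, all_chunks):
        # With an index entry: dispatch to exactly one chunk.
        # Without one: fall back to broadcasting to all chunks.
        chunk = self._index.get(object_id)
        return [chunk] if chunk is not None else list(all_chunks)


idx = SecondaryIndex()
idx.add(42, chunk_id=7)
print(idx.route(42, all_chunks=range(100)))       # indexed key: [7]
print(len(idx.route(99, all_chunks=range(100))))  # unknown key: 100 chunks
```

The downside Kian-Tat raises is visible here too: the `_index` dict is a second copy of the key -> chunk assignment, held on the czar, which must be kept consistent with what the partitioner actually did.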

I don't really know how much we can reduce the overhead of issuing queries - that's more of a Daniel/AndyH question. I think the new async xrootd client work and the result-marshaling rewrite should provide nice gains. We could maybe also look at something like sending multiple short queries per chunk to improve throughput without hurting latency too much. But that has its own complexity cost, and I am worried that broadcasting large numbers of short queries might interfere with shared scans.
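The "multiple short queries per chunk" idea could be sketched as simple batching: group the pending short queries by target chunk so the fixed per-dispatch overhead is paid once per chunk rather than once per query. This is an illustrative sketch, not Qserv code; `batch_by_chunk` is a hypothetical helper.

```python
from collections import defaultdict

def batch_by_chunk(queries):
    """Group pending (chunk_id, sql) pairs into one batch per chunk,
    so each chunk receives a single dispatch carrying several queries."""
    batches = defaultdict(list)
    for chunk_id, sql in queries:
        batches[chunk_id].append(sql)
    return dict(batches)


pending = [(7, "SELECT ..."), (7, "SELECT ..."), (12, "SELECT ...")]
# Three queries collapse into two dispatches: {7: 2 queries, 12: 1 query}
print({chunk: len(batch) for chunk, batch in batch_by_chunk(pending).items()})
```

The complexity cost mentioned above shows up in everything this sketch omits: deciding how long to wait to fill a batch (latency), splitting per-query results back out, and keeping batches from competing with shared scans for worker resources.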
