Mario,

> >	Here's something for you and the team to think about in November: how
> >would you modify qserv to download and cache chunks on-demand?
> >
> >	Imagine the following scenario: scientist M at university H has access
> >to 200+ nodes w. petabytes of storage.

	The exact same number of nodes and amount of storage as we
have?  Or something smaller?

	Are you anticipating that many users will *not* do full-table
scans?  A single scan touches every chunk, so once a user runs one,
they have everything (for that table) and on-demand fetching has
bought nothing over full replication.

	Chunks are expected to be multiple terabytes in size, which
means that each download takes hours.
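
	A rough back-of-envelope (the 5 TB chunk size and the link
speeds are assumptions, not measured numbers):

    # Back-of-envelope chunk transfer time; all numbers assumed.
    chunk_bits = 5 * 10**12 * 8        # 5 TB chunk
    effective_bps = 10 * 10**9 * 0.8   # 10 Gb/s link at ~80% utilization
    hours = chunk_bits / effective_bps / 3600
    print(f"{hours:.1f} h per chunk")  # ~1.4 h; ~14 h on a shared 1 Gb/s

	And that is for one chunk on a dedicated link; a query that
misses on many chunks multiplies the wait accordingly.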

> >It may also solve our issue with the number of replicas needed to
> >guard against failure, since we could configure our Archive center
> >database to fetch any chunks that it doesn't have (e.g., because the
> >nodes have failed) from the Chilean or French site.

	It's not clear that fetching from France or Chile is any faster
than restoring from a local backup.  In any case, however we copy the
missing chunks back, we're still down while the copy runs.
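
	For concreteness, here is the kind of mechanism I take the
proposal to mean: a cache-on-miss lookup that falls back to remote
sites.  All of the names, paths, and URLs below are made up for
illustration; none of this is actual qserv code:

    import shutil, urllib.request
    from pathlib import Path

    # Hypothetical sketch of on-demand chunk caching; the cache
    # directory, site URLs, and naming scheme are all assumptions.
    CACHE_DIR = Path("/qserv/chunk-cache")
    REMOTE_SITES = [
        "https://archive.example.org/chunks",  # local Archive center
        "https://chile.example.org/chunks",    # Chilean site
        "https://france.example.org/chunks",   # French site
    ]

    def get_chunk(chunk_id: int) -> Path:
        """Return a local path for the chunk, fetching on a cache miss."""
        local = CACHE_DIR / f"chunk_{chunk_id}.db"
        if local.exists():
            return local                       # cache hit: no network cost
        for site in REMOTE_SITES:              # miss: try each site in turn
            try:
                with urllib.request.urlopen(f"{site}/{chunk_id}") as resp, \
                     open(local, "wb") as out:
                    shutil.copyfileobj(resp, out)  # hours for a multi-TB chunk
                return local
            except OSError:
                local.unlink(missing_ok=True)  # discard any partial download
                continue                       # site unreachable; try the next
        raise RuntimeError(f"chunk {chunk_id} unavailable at every site")

	Whatever the transport, the node waiting on that copy is still
down for the duration.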

-- 
Kian-Tat Lim, LSST Data Management, [log in to unmask]
