As we just discussed, my numbers are for data
chunks; the index is up to 2x larger, so the index
numbers could be up to 2x those below. Data and
index come in separate files, though, so they can
be transferred in parallel; I think it would be
unfair to simply assume 3x my numbers.
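
A quick sketch of that arithmetic (a minimal example with a
hypothetical 1 Gbps per stream; it assumes each file gets an
independent transfer path, so the two streams don't share one
link):

    # Wall-clock time when data and index files move in parallel
    # over independent paths: the slower stream dominates, so the
    # cost is max(data, index), not data + index.

    def transfer_hours(size_gb, bandwidth_gbps):
        """Hours to move size_gb at bandwidth_gbps per stream."""
        return size_gb * 8 / bandwidth_gbps / 3600

    data_gb = 255           # largest data chunk from the numbers below
    index_gb = 2 * data_gb  # index taken at its 2x upper bound
    bw_gbps = 1             # hypothetical per-stream bandwidth

    serial = transfer_hours(data_gb + index_gb, bw_gbps)  # the "3x" view
    parallel = max(transfer_hours(data_gb, bw_gbps),
                   transfer_hours(index_gb, bw_gbps))     # separate files
    print(f"serial: {serial:.1f} h, parallel: {parallel:.1f} h")

So even with the index at its 2x upper bound, the parallel
wall-clock cost is 2x the data-only transfer, not 3x.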

Jacek



On 9/24/2013 3:07 PM, Jacek Becla wrote:
>
>> 	Chunks are expected to be multiple terabytes in size, which
>> means that downloads are hours long.
>
>
> K-T,
>
> Based on the baseline, which assumes a flat 20K chunks per table,
> the largest chunk will be 255 GB. The numbers are (in GB,
> DR1 --> DR11)
>    - Object:    2 -->   4
>    - ObjExtra: 25 -->  69
>    - Source:    9 --> 255
>    - ForcedSrc: 2 -->  98
>
> This is in LDM-141, dbL2, L141 (and nearby)
>
> And, that is before compression.
>
> We talked about keeping the chunk size constant rather than the
> number of chunks constant, which will probably make us go with
> DR1-size chunks, thus keeping chunk size closer to 25 GB than
> 1/4 TB.
>
> Jacek
>
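
For concreteness, a small sketch of the trade-off in the quoted
message, using the Source-table numbers above (the implied DR11
total is back-computed from 255 GB x 20K chunks; the 25 GB target
is the DR1-size figure from the message, not taken from LDM-141
directly):

    # Constant chunk count vs. constant chunk size for the Source
    # table, starting from the flat 20K-chunks-per-table baseline.

    CHUNKS_BASELINE = 20_000
    source_dr11_total_gb = 255 * CHUNKS_BASELINE  # ~5.1 PB implied

    # Option A: keep 20K chunks -> chunk size grows with the data.
    chunk_gb_constant_count = source_dr11_total_gb / CHUNKS_BASELINE

    # Option B: keep ~25 GB (DR1-size) chunks -> chunk count grows.
    target_chunk_gb = 25
    chunks_constant_size = source_dr11_total_gb / target_chunk_gb

    print(f"constant count: {chunk_gb_constant_count:.0f} GB/chunk")
    print(f"constant size:  {chunks_constant_size:,.0f} chunks")

Either way the total bytes are the same; constant-size chunks trade
a roughly 10x increase in chunk count for per-chunk transfers that
stay near DR1 size.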
