QSERV-L Archives, December 2015
QSERV-L@LISTSERV.SLAC.STANFORD.EDU

Subject: shared scans
From: John Gates <[log in to unmask]>
Reply-To: General discussion for qserv (LSST prototype baseline catalog)
Date: Wed, 2 Dec 2015 09:17:50 -0800
Content-Type: text/plain
Parts/Attachments: text/plain (79 lines)

There's some existing code for shared scan scheduling. I'm writing up what
I know about the worker scheduling and task handling. In the last couple of
months I rewrote a very large fraction of this code so that query
cancellation would work. I tried to make it flexible and understandable, as
I expect scheduling will be one of those places where there will be a lot
of experimentation, and it could get complicated.

The worker gets TaskMsgs from the czar and uses them to create wbase::Task
objects (the czar does all the analysis). The important things about a Task
are that it can be given to a wcontrol::Scheduler (to queue it), the worker
knows how to run the query it contains, and it can be canceled. Whatever
else happens, for the cancellation code to work, any scan scheduler needs
to work with Tasks.
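
As a rough illustration of that relationship (simplified stand-ins, not the
actual declarations from wbase or wcontrol):

// Hypothetical simplification of the Task/Scheduler relationship described
// above; the real classes carry much more state.
#include <memory>

class Task {
public:
    using Ptr = std::shared_ptr<Task>;
    void runQuery();                   // the worker knows how to run the query it contains
    void cancel();                     // cancellation must be able to reach a queued Task
    int chunkId() const;               // used by schedulers to group/order Tasks
    bool needsMultipleChunks() const;  // stand-in for the multi-chunk flag from the czar
};

class Scheduler {
public:
    virtual ~Scheduler() = default;
    virtual void queCmd(Task::Ptr const& task) = 0;  // queue a Task for execution
};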

The scheduling is now done by wsched::BlendScheduler, which passes Tasks
flagged as part of a query needing multiple chunks to the
wsched::ScanScheduler; all other Tasks go to the GroupScheduler.
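
Using the Task/Scheduler sketch above, the routing rule amounts to
something like this (hedged sketch; the real logic in wsched::BlendScheduler
differs in detail):

// Sketch of BlendScheduler's routing decision.
class BlendSchedulerSketch : public Scheduler {
public:
    BlendSchedulerSketch(std::shared_ptr<Scheduler> group,
                         std::shared_ptr<Scheduler> scan)
        : _group(std::move(group)), _scan(std::move(scan)) {}
    void queCmd(Task::Ptr const& task) override {
        if (task->needsMultipleChunks()) {
            _scan->queCmd(task);   // shared-scan path
        } else {
            _group->queCmd(task);  // grouped/interactive path
        }
    }
private:
    std::shared_ptr<Scheduler> _group;
    std::shared_ptr<Scheduler> _scan;
};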

The GroupScheduler is much like a FIFO, except that it tries to group
queries by chunk id in an attempt to reduce disk I/O.
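
One way to picture the grouping (a sketch building on the Task sketch
above, not the real implementation) is a FIFO of per-chunk queues, so a new
Task whose chunk is already queued joins that chunk's group instead of the
tail:

#include <deque>
#include <map>

// Sketch of FIFO-with-grouping; the actual GroupScheduler differs.
class GroupSchedulerSketch : public Scheduler {
public:
    void queCmd(Task::Ptr const& task) override {
        int id = task->chunkId();
        if (_groups.find(id) == _groups.end()) {
            _order.push_back(id);      // unseen chunk goes to the back, FIFO-style
        }
        _groups[id].push_back(task);   // known chunk: join the existing group
    }
private:
    std::deque<int> _order;                        // FIFO of chunk ids
    std::map<int, std::deque<Task::Ptr>> _groups;  // Tasks grouped by chunk id
};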

The ScanScheduler groups all Tasks by chunk id, works sequentially through
all the chunk ids on the worker, and finally wraps back around to the
lowest chunk id. It makes no attempt to lock chunks in memory; it is only
trying to limit disk I/O to reading one chunk into memory at a time.
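
The sequential sweep with wraparound can be sketched with an ordered map
keyed by chunk id (hypothetical code, reusing the Task sketch above):

#include <deque>
#include <map>

// Sketch: pick the next chunk id at or after the current position, wrapping
// to the lowest id when the end is reached; std::map keeps ids sorted.
class ScanSchedulerSketch {
public:
    // Return the chunk id to scan next, or -1 if nothing is queued.
    int nextChunk() {
        if (_byChunk.empty()) return -1;
        auto iter = _byChunk.lower_bound(_current);           // next id >= current
        if (iter == _byChunk.end()) iter = _byChunk.begin();  // wrap around
        _current = iter->first + 1;                           // advance for next call
        return iter->first;
    }
private:
    int _current = 0;
    std::map<int, std::deque<Task::Ptr>> _byChunk;  // Tasks grouped by chunk id
};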

To change the behavior of a scheduler, ::queCmd() and ::_ready() are the
primary functions to modify.
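
For a feel of what _ready() decides, a sketch might gate the next Task on
the scheduler's thread budget (hypothetical signature; the real one lives
with the scheduler classes):

// Sketch: _ready() answers "may the pool start my next queued Task now?"
bool schedulerReadySketch(int inFlight, int maxThreads, bool queueEmpty) {
    if (queueEmpty) return false;  // nothing to run
    return inFlight < maxThreads;  // respect this scheduler's thread budget
}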

The Schedulers all get their threads from the same util::ThreadPool found
in EventThread.h. I have some concern about a lot of context switching and
complicated _ready() functions taking up too much CPU time, and am
considering adding a util::PseudoThreadPool that creates/destroys threads
up to a maximum as needed and shares the same interface as ThreadPool. I
believe this would be simple, with the biggest change being that the
schedulers would need to know about the ThreadPool.
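
The create/destroy-on-demand idea could look roughly like this (a sketch
only; util::PseudoThreadPool does not exist yet, and the real ThreadPool
interface has more to it):

#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// Sketch: spawn threads on demand up to a maximum; each worker thread
// exits once the job queue drains.
class PseudoPoolSketch {
public:
    explicit PseudoPoolSketch(unsigned maxThreads) : _max(maxThreads) {}
    void submit(std::function<void()> job) {
        std::lock_guard<std::mutex> lock(_mtx);
        _jobs.push_back(std::move(job));
        if (_threads < _max) {
            ++_threads;
            std::thread([this] { _run(); }).detach();  // created as needed
        }
    }
private:
    void _run() {
        std::unique_lock<std::mutex> lock(_mtx);
        while (!_jobs.empty()) {
            auto job = std::move(_jobs.front());
            _jobs.pop_front();
            lock.unlock();
            job();        // run the job outside the lock
            lock.lock();
        }
        --_threads;       // destroyed when there is no more work
    }
    std::mutex _mtx;
    std::deque<std::function<void()>> _jobs;
    unsigned _threads = 0;
    unsigned const _max;
};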

The code for scheduling is based on the code found in util/Command.h and
util/EventThread.h. Tasks are based on util::Command, and Commands are easy
to pass around and run.
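
In spirit, a Command is just a small runnable object that can be handed to
any queue and invoked later (rough sketch; the real util::Command adds
tracking and status):

#include <functional>
#include <memory>

// Sketch of the Command idea: wrap a callable for queuing and later execution.
class CommandSketch {
public:
    using Ptr = std::shared_ptr<CommandSketch>;
    explicit CommandSketch(std::function<void()> func) : _func(std::move(func)) {}
    virtual ~CommandSketch() = default;
    virtual void action() { if (_func) _func(); }  // run the wrapped work
private:
    std::function<void()> _func;
};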


It is easy to switch between Schedulers and to change the number of threads
each can use, as well as the total available from the pool, which is set in
the code below.

SsiService::SsiService(XrdSsiLogger* log) {

    ...

    // TODO: set poolSize and all maxThreads values from config file.
    uint poolSize = std::max(static_cast<uint>(24), std::thread::hardware_concurrency());
    // TODO: set GroupScheduler group size from configuration file.
    // TODO: Consider limiting the number of chunks being accessed at a time
    //       by GroupScheduler and ScanScheduler.
    //_foreman = wcontrol::Foreman::newForeman(std::make_shared<wsched::FifoScheduler>(), poolSize);
    //_foreman = wcontrol::Foreman::newForeman(std::make_shared<wsched::GroupScheduler>(12), poolSize);
    // poolSize should be greater than either GroupScheduler::maxThreads
    // or ScanScheduler::maxThreads.
    _foreman = wcontrol::Foreman::newForeman(
            std::make_shared<wsched::BlendScheduler>(
                    std::make_shared<wsched::GroupScheduler>(20, 10),
                    std::make_shared<wsched::ScanScheduler>(20)),
            poolSize);
}
