The xrootd fix appears to have done the trick :) I'm running Jacek's load test on the cluster.

On 12/03/15 03:30, Fabrice Jammes wrote:
Hello,

John, your branch is now running on the cluster with the latest xrootd version, and .... it works much better :-)

fjammes@ccosvms0070:~/src/qserv-cluster/shmux (master=)$ time mysql --host ccqserv125 --port 4040 --user qsmaster LSST -e "SELECT ra, decl FROM Object WHERE deepSourceId = 2322920177142607;";
+------------------+-------------------+
| ra               | decl              |
+------------------+-------------------+
| 29.3088063472755 | -86.3088404611897 |
+------------------+-------------------+

real    0m0.835s
user    0m0.008s
sys     0m0.023s
fjammes@ccosvms0070:~/src/qserv-cluster/shmux (master=)$
fjammes@ccosvms0070:~/src/qserv-cluster/shmux (master=)$
fjammes@ccosvms0070:~/src/qserv-cluster/shmux (master=)$
fjammes@ccosvms0070:~/src/qserv-cluster/shmux (master=)$ time mysql --host ccqserv125 --port 4040 --user qsmaster LSST -e "SELECT count(*) FROM Object;";
+----------------+
| SUM(QS1_COUNT) |
+----------------+
|     1889695615 |
+----------------+

real    0m29.622s
user    0m0.010s
sys     0m0.025s

Feel free to test it extensively.

Use qserv-cluster/shmux/run.sh to reinstall Qserv from scratch.
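For reference, a minimal invocation from the usual checkout would look something like the lines below (running the script with no arguments is an assumption; check run.sh itself for any options):

  cd ~/src/qserv-cluster/shmux
  ./run.sh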

Cheers

On 12/02/2015 07:02 PM, John Gates wrote:
The branch for me is tickets/DM-2699.

I made the change to admin/templates/configuration/etc/lsp.cf and pushed it to github.

Thank you :)

On 12/02/15 09:30, Fabrice Jammes wrote:
Hi John,

You and Andy did a great job chasing the bug :-)
I'll try to reinstall all of it tonight, during our meeting. Could you please:
- remind me of your branch name,
- add "ssi.trace all debug" to the lsp.cf server section in this branch if still needed (see ccqserv126's lsp.cf for an example, and the sketch below),
- push the latest version of your branch to github; that's what will run on the cluster.
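For the ssi.trace item, a rough sketch of what the addition looks like (take the exact placement and the surrounding directives from the ccqserv126 copy, not from this sketch):

  # server (xrootd) section of admin/templates/configuration/etc/lsp.cf
  ssi.trace all debug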

Thanks,

Fabrice

On 12/02/2015 05:25 PM, John Gates wrote:
Hi Fabrice,

Can you rebuild the containers with the latest xrootd code so we can test his change? Or tell me what I need to do to build the containers and distribute them.

Thank you,
John


-------- Forwarded Message --------
Subject: Re: int2p3 cluster problem
Date: Wed, 2 Dec 2015 01:45:34 -0800
From: Andrew Hanushevsky <[log in to unmask]>
To: Gates, John H <[log in to unmask]>
CC: Fabrice Jammes <[log in to unmask]>, Becla, Jacek <[log in to unmask]>, Fritz Mueller <[log in to unmask]>


Hi John,

I have pushed the fix to the main xrdssi branch. That means that it has to
be merged into the LSST clone of the branch and xrootd rebuilt. Hopefully,
the build is not part of the container at this point. Also, I rolled in
all of the accumulated patches to the base xrootd/cmsd. That tag should
correspond to 4.3.0-rc4 (we currently have RC3 but this includes new
patches since then).
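In practice that is probably something along these lines in the LSST clone (the remote and branch names below are placeholders, not the real ones):

  git fetch upstream                # upstream = the main xrootd repo carrying the xrdssi fix
  git merge upstream/<branch>       # bring the fix into the LSST clone
  # then rebuild xrootd/cmsd and redo the containers if the build is baked in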

Andy

On Tue, 1 Dec 2015, Gates, John H wrote:

> There's a little bit more information from the czar-consol.log file. For
> chunk 4483, the last thing in the log is "Sending a fcntl command for
> ...". It looks like it should be followed by a "Sending read command ...
> ", which never happens.
>
> -John
>
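> (For reference, the excerpts below were pulled with something along the
> lines of
>    grep -- 4483 czar-consol.log
>    grep -- 4435 czar-consol.log
> where the actual log path depends on where the czar writes its console
> output.)
>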
> ---- czar-consol.log grep -- 4483  (failed)
> 151130 18:26:38 281 SsiSched: TaskXeqEvent: [[2015-11-30 18:26:38.644839
> +0000][Debug  ][File              ]
> [0xf0076270@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/1034] Sending a
> read command for handle 0x4 to 10.158.37.135:1094
> SessRelTask: [0x7fdef03bce00] [2015-11-30 18:26:40.448304 +0000][Debug
> ][File              ]
> [0xf039db40@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/3668] Sending a
> read command for handle 0x38 to 10.158.37.126:1094
> SessProcReq: [0x7fdef03effd0] Task=0x7fde3c006230 processing
> id=0[2015-11-30 18:26:40.544837 +0000][Debug  ][File              ]
> [0xf03ef770@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/3846] Sending a
> fcntl command for handle 0x3d to 10.158.37.133:1094
> TaskKill: [0x7fdec0007300] Status = isReady mhPend=0 id=0[2015-11-30
> 18:26:40.704483 +0000][Debug  ][File              ]
> [0xf0435710@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4006] Sending an
> open command
> SessOpen: [0x7fdef051ff20] Opening
> xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4483
> [2015-11-30 18:26:41.189518 +0000][Debug  ][File              ]
> [0xf0520060@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4483] Sending an
> open command
> [151130 18:26:41 268 SsiSched: running TaskReal0x7fde24007df0] Status =
> isReady mhPend=0 id=[2015-11-30 18:26:41.198308 +0000][Debug
> ][File              ]
> [0xf0520060@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4483] Open has
> returned with status [SUCCESS]
> [2015-11-30 18:26:41.198334 +0000][Debug  ][File              ]
> [0xf0520060@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4483]
> successfully opened at 10.158.37.126:1094, handle: 0x51, session id: 1
> TaskXeqEvent: [0x7fde30007210] [2015-11-30 18:26:41.204531 +0000][Debug
> ][File              ]
> [0xf0520060@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4483] Sending a
> write command for handle 0x51 to 10.158.37.126:1094
> [2015-11-30 18:26:41.208073 +0000][Debug  ][File              ]
> [0xf0520060@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4483] Sending a
> fcntl command for handle 0x51 to 10.158.37.126:1094
> : running TaskRealTaskXeqEvent:  inq=
>
>
> ----- czar-consol.log grep 4435  (successful)
> TaskSetBuff: [0x7fde90006d30[2015-11-30 18:26:41.044359 +0000][Debug
> ][File              ]
> [0xf04c44f0@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4307] Sending a
> truncate command for handle 0x4b to 10.158.37.129:1094
> SessOpen: [0x7fdef04fe520] Opening
> xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4435
> [2015-11-30 18:26:41.131018 +0000][Debug  ][File              ]
> [0xf04fe660@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4435] Sending an
> open command
> TaskXeqEvent: [2015-11-30 18:26:41.154194 +0000][Debug
> ][File              ]
> [0xf04fe660@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4435] Open has
> returned with status [SUCCESS]
> [0x7fde080081c0]  sess=ok id=0 Status = [2015-11-30 18:26:41.154214
> +0000][Debug  ][File              ]
> [0xf04fe660@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4435]
> successfully opened at 10.158.37.126:1094, handle: 0x4f, session id: 1
> TaskXeqEvent: [0x7fded400d5a0[2015-11-30 18:26:41.156974 +0000][Debug
> ][File              ]
> [0xf04fe660@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4435] Sending a
> write command for handle 0x4f to 10.158.37.126:1094
> TaskXeqEvent: [2015-11-30 18:26:41.205640 +0000][Debug
> ][File              ]
> [0xf04fe660@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4435] Sending a
> fcntl command for handle 0x4f to 10.158.37.126:1094
> [2015-11-30 18:26:41.212676 +0000][Debug  ][File              ]
> [0xf04fe660@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4435] Sending a
> read command for handle 0x4f to 10.158.37.126:1094
> 151130 18:26:41 311 SsiSchedSessProcReq: [2015-11-30 18:26:41.255720
> +0000][Debug  ][File              ]
> [0xf04fe660@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4435] Sending a
> read command for handle 0x4f to 10.158.37.126:1094
> TaskXeqEvent: [0x7fde0c006af0]  sess=ok id=0[2015-11-30 18:26:41.301306
> +0000][Debug  ][File              ]
> [0xf04fe660@xroot://ccqserv125.in2p3.fr:1094//chk/LSST/4435] Sending a
> truncate command for handle 0x4f to 10.158.37.126:1094
>
>
>
> On 12/01/15 02:15, Andrew Hanushevsky wrote:
>> Hi John,
>>
>> Well, assuming the IP address is for 126, the first question is why it
>> doesn't resolve via DNS. Not that it should matter, as the cmsd doesn't
>> really care whether a node has a DNS name. What I see in the log is that
>> that IP address did log in. So, other than not having a DNS name,
>> nothing seems unusual. Are the missing chunks always related to 126?
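>> (A quick sanity check from the czar node would be a reverse lookup, e.g.
>>    host 10.158.37.126
>> which should come back with a ccqserv name if DNS is set up for that
>> address.)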
>>
>> Andy
>>
>> On Mon, 30 Nov 2015, John Gates wrote:
>>
>>> Hi Andy,
>>>
>>>
>>> I'm hoping you can shed some light on the cause of this. The problem
>>> is that ProcessResponse is not being called for a few, seemingly
>>> random, chunks. I've never gotten the query to complete on the
>>> cluster, and the problem only occurs on the cluster.
>>>
>>> I've tried the query select count(*) from Object; on the cluster, and
>>> it failed with 9 queries in flight. I've only looked at jobId=1945 in
>>> depth, which corresponds to chunk 4483. I've included relevant parts
>>> of the log files below.
>>>
>>>
>>> The cmsd.log entry for ccqserv126 was a little odd in that most
>>> workers seem to have 2 sets of entries in the log file, but ccqserv126
>>> is missing the set where it is mentioned by name and only has the set
>>> where it is mentioned by IP address. The qserv-czar.log file looked
>>> fine except that ProcessResponse is never called for chunk 4483.
>>>
>>> The worker log looks fine for chunk 4483 but is missing entries I see
>>> for other chunks (flagged below with "*****"). I expect to see
>>> something like the following for 4483:
>>>    ssi_fctl: 0:/chk/LSST/XXXX query resp status
>>>    ssi_fctl: 0:/chk/LSST/XXXX resp ready
>>>    ssi_Finalize: 0:/chk/LSST/XXXX [bound odRsp] Calling Finished(0)
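>>> (A quick way to check is something along the lines of
>>>    grep 'LSST/4483' <worker xrootd log> | grep -E 'ssi_fctl|ssi_Finalize'
>>> on ccqserv126, where the log path is whatever the worker install uses.)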
>>>
>>> Thanks,
>>> John
>>>
>>> Failed executive job ids.
>>> < 1945
>>> < 2280
>>> < 4788
>>> < 5409
>>> < 5522
>>> < 5572
>>> < 6765
>>> < 7063
>>> < 7422
>>>
>>>
>>> ------- 1945 -> chunk 4483 --- found on ccqserv126
>>> czar log:
>>> 2015-11-30T18:26:41.188Z [0x7fdeef7fe700] DEBUG root
>>> (core/modules/qproc/TaskMsgFactory2.cc:153) - SELECT count(*) AS
>>> QS1_COUNT FROM LSST.Object_4483 AS QST_1_
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] INFO  root
>>> (core/modules/ccontrol/MergingHandler.cc:226) - setError: code: 0,
>>> message:
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] DEBUG qdisp.Executive
>>> (core/modules/qdisp/Executive.cc:106) - Executive::add(job(id=1945
>>> payload.len=123 ru=/chk/LSST/4483))
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] DEBUG root
>>> (core/modules/qdisp/JobQuery.h:104) - JobQuery JQ_jobId=1945
>>> desc=job(id=1945 payload.len=123 ru=/chk/LSST/4483)
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] INFO  qdisp.Executive
>>> (core/modules/qdisp/Executive.cc:129) - Executive: Add job with
>>> path=/chk/LSST/4483
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] DEBUG root
>>> (core/modules/qdisp/MessageStore.cc:53) - Add msg: 4483 1200
>>> Executive: Add job with path=/chk/LSST/4483
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] DEBUG root
>>> (core/modules/qdisp/JobQuery.cc:55) - runJob {job(id=1945
>>> payload.len=123 ru=/chk/LSST/4483) : 2015-11-30T18:26:41+0000,
>>> Unknown, 0, }
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] INFO  root
>>> (core/modules/ccontrol/MergingHandler.cc:226) - setError: code: 0,
>>> message:
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] DEBUG root
>>> (core/modules/qdisp/QueryResource.cc:51) - QueryResource JQ_jobId=1945
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] INFO qproc.QuerySession
>>> (core/modules/qproc/QuerySession.cc:383) - Non-subchunked
>>> 2015-11-30T18:26:41.189Z [0x7fdeef7fe700] DEBUG root
>>> (core/modules/qproc/TaskMsgFactory2.cc:151) - no nextFragment
>>> 2015-11-30T18:26:41.198Z [0x7fde727fc700] DEBUG root
>>> (core/modules/qdisp/QueryRequest.cc:57) - jobId=1945 New QueryRequest
>>> with payload(123)
>>> 2015-11-30T18:26:41.198Z [0x7fde727fc700] DEBUG root
>>> (core/modules/qdisp/QueryRequest.cc:74) - jobId=1945 Requesting,
>>> payload size: [123]
>>> 2015-11-30T18:26:41.204Z [0x7fde727fc700] DEBUG root
>>> (core/modules/qdisp/QueryResource.cc:55) - ~QueryResource()
>>> JQ_jobId=1945
>>> 2015-11-30T18:26:41.208Z [0x7fde6bfff700] DEBUG root
>>> (core/modules/qdisp/QueryRequest.cc:81) - jobId=1945 RelRequestBuffer
>>>
>>> ----- czar cmsd.log: - missing second server login message for
>>> ccqserv126
>>> 151130 18:21:57 225 Protocol: Primary
>>> server.199:[log in to unmask]:1094 logged in.
>>> =====> Routing for 10.158.37.126: local pub4 prv4
>>> =====> Route all4: 10.158.37.126 Dest=[::10.158.37.126]:1094
>>>   *** there should be something like  "Protocol: Primary
>>> server.199@ccqserv126" but there's nothing
>>>
>>> 151130 18:21:35 221 Protocol: Primary
>>> server.198:[log in to unmask]:1094 logged in.
>>> =====> Routing for 10.158.37.127: local pub4 prv4
>>> =====> Route all4: 10.158.37.127 Dest=[::10.158.37.127]:1094
>>> 151130 18:22:11 205 Protocol: Primary server.199:28@ccqserv127:1094
>>> logged in.
>>> =====> Routing for 10.158.37.127: local pub4 prv4
>>> =====> Route all4: 10.158.37.127 Dest=[::10.158.37.127]:1094
>>>
>>>
>>>
>>> ------ ccqserv126 cmsd.log looks a lot like the other cmsd.log files
>>> [2015-11-30T18:26:41.198Z] [0x7f054c250700] INFO  root
>>> (core/modules/xrdsvc/SsiService.cc:105) - Got provision call where
>>> rName is: /chk/LSST/4483
>>> 151130 18:26:41 237 qserv.232:[log in to unmask] ssi_open: /chk/LSST/4483
>>> 151130 18:26:41 237 qserv.232:[log in to unmask] ssi_write:
>>> 0:/chk/LSST/4483 rsz=123 wsz=123
>>> 151130 18:26:41 237 qserv.232:[log in to unmask] ssi_Activate:
>>> 0:/chk/LSST/4483 [new wtReq] oucbuff rqsz=123
>>> 151130 18:26:41 257 qserv.232:[log in to unmask] ssi_DoIt:
>>> 0:/chk/LSST/4483 [begun xqReq] Calling session Process
>>> [2015-11-30T18:26:41.205Z] [0x7f05447f8700] INFO  root
>>> (core/modules/xrdsvc/SsiSession.cc:61) - ProcessRequest,
>>> service=/chk/LSST/4483
>>> 151130 18:26:41 257 qserv.232:[log in to unmask] ssi_GetRequest:
>>> 0:/chk/LSST/4483 [begun xqReq] sz=123
>>> [2015-11-30T18:26:41.205Z] [0x7f05447f8700] INFO  root
>>> (core/modules/xrdsvc/SsiSession.cc:68) - GetRequest took 6.6e-05 seconds
>>> [2015-11-30T18:26:41.205Z] [0x7f05447f8700] INFO  root
>>> (core/modules/xrdsvc/SsiSession.cc:99) - Decoding TaskMsg of size 123
>>> [2015-11-30T18:26:41.205Z] [0x7f05447f8700] DEBUG root
>>> (core/modules/wbase/Task.cc:111) - Task(...) tSeq=80  :count=81 0, 1,
>>> 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
>>> 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
>>> 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54,
>>> 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71,
>>> 72, 73, 74, 75, 76, 77, 78, 79, 80
>>> 151130 18:26:41 257 qserv.232:[log in to unmask] ssi_BindDone:
>>> 0:/chk/LSST/4483 [begun xqReq] Bind called; session set
>>> 151130 18:26:41 257 qserv.232:[log in to unmask] ssi_RelReqBuff:
>>> 0:/chk/LSST/4483 [bound xqReq] called
>>> [2015-11-30T18:26:41.205Z] [0x7f05447f8700] DEBUG root
>>> (core/modules/wsched/BlendScheduler.cc:75) - BlendScheduler::queCmd
>>> tSeq=80
>>> [2015-11-30T18:26:41.205Z] [0x7f05447f8700] DEBUG BlendSched
>>> (core/modules/wsched/BlendScheduler.cc:93) - Blend chose group
>>> [2015-11-30T18:26:41.205Z] [0x7f05447f8700] DEBUG BlendSched
>>> (core/modules/wsched/BlendScheduler.cc:100) - Blend queCmd tSeq=80
>>> [2015-11-30T18:26:41.205Z] [0x7f05447f8700] INFO  root
>>> (core/modules/xrdsvc/SsiSession.cc:137) - BindRequest took 0.000167
>>> seconds
>>> [2015-11-30T18:26:41.205Z] [0x7f0544ff9700] DEBUG BlendSched
>>> (core/modules/wsched/BlendScheduler.cc:156) -
>>> BlendScheduler::_ready() groups(r=1, q=1, flight=0) scan(r=0, q=0,
>>> flight=0)
>>> [2015-11-30T18:26:41.206Z] [0x7f05447f8700] INFO  root
>>> (core/modules/xrdsvc/SsiSession.cc:138) - Enqueued TaskMsg for
>>> Resource(/chk/LSST/4483) in 0.000167 seconds
>>> ...
>>> [2015-11-30T18:26:41.206Z] [0x7f0544ff9700] DEBUG BlendSched
>>> (core/modules/wsched/BlendScheduler.cc:112) -
>>> BlendScheduler::commandStart tSeq=80
>>> [2015-11-30T18:26:41.206Z] [0x7f0544ff9700] DEBUG Foreman
>>> (core/modules/wdb/QueryRunner.cc:137) - Exec in flight for Db =
>>> q_fd5b7faeb8710396bed8ae38be7ad9ef
>>> [2015-11-30T18:26:41.206Z] [0x7f0544ff9700] WARN  Foreman
>>> (core/modules/wdb/QueryRunner.cc:115) - QueryRunner overriding dbName
>>> with LSST
>>> [2015-11-30T18:26:41.208Z] [0x7f0544ff9700] DEBUG root
>>> (core/modules/wdb/QueryRunner.cc:242) - _transmit last=1 tSeq=80
>>> [2015-11-30T18:26:41.208Z] [0x7f0544ff9700] DEBUG root
>>> (core/modules/wdb/QueryRunner.cc:263) - _transmitHeader
>>> [2015-11-30T18:26:41.208Z] [0x7f0544ff9700] INFO  root
>>> (core/modules/proto/ProtoHeaderWrap.cc:52) - msgBuf size=256 ->
>>> [[0]=40, [1]=13, [2]=2, [3]=0, [4]=0, ..., [251]=48, [252]=48,
>>> [253]=48, [254]=48, [255]=48]
>>> 151130 18:26:41 237 qserv.232:[log in to unmask] ssi_fctl:
>>> 0:/chk/LSST/4483 query resp status
>>> 151130 18:26:41 237 qserv.232:[log in to unmask] ssi_fctl:
>>> 0:/chk/LSST/4483 resp not ready
>>> [2015-11-30T18:26:41.208Z] [0x7f0544ff9700] INFO  root
>>> (core/modules/xrdsvc/SsiSession_ReplyChannel.cc:85) - sendStream,
>>> checking stream 0 len=256 last=0
>>> 151130 18:26:41 237 qserv.232:[log in to unmask] ssi_Done:
>>> 0:/chk/LSST/4483 [bound xqReq] wtrsp sent; resp here
>>> 151130 18:26:41 184 qserv.232:[log in to unmask] ssi_ProcessResponse:
>>> 0:/chk/LSST/4483 [bound xqReq] Response presented wtr=0
>>> 151130 18:26:41 184 qserv.232:[log in to unmask] ssi_ProcessResponse:
>>> 0:/chk/LSST/4483 [bound doRsp] Resp strm
>>>    ***** the above is the last mention of 4483 in the log file.
>>> Should there be lines like the one below for chunk 4435 ???
>>>    ***** 151130 18:26:41 237 qserv.232:[log in to unmask] ssi_fctl:
>>> 0:/chk/LSST/4435 query resp status
>>>    ***** 151130 18:26:41 237 qserv.232:[log in to unmask] ssi_fctl:
>>> 0:/chk/LSST/4435 resp ready
>>>    ***** 151130 18:26:41 158 qserv.232:[log in to unmask] ssi_Finalize:
>>> 0:/chk/LSST/4435 [bound odRsp] Calling Finished(0)
>>> [2015-11-30T18:26:41.208Z] [0x7f0544ff9700] INFO  root
>>> (core/modules/xrdsvc/ChannelStream.cc:91) - last=0 [[0]=40, [1]=13,
>>> [2]=2, [3]=0, [4]=0, [5]=0, [6]=21, [7]=47, [8]=0, [9]=0, ...,
>>> [246]=48, [247]=48, [248]=48, [249]=48, [250]=48, [251]=48, [252]=48,
>>> [253]=48, [254]=48, [255]=48]
>>> [2015-11-30T18:26:41.208Z] [0x7f0544ff9700] INFO  root
>>> (core/modules/xrdsvc/ChannelStream.cc:94) -  trying to append message
>>> (flowing)
>>> [2015-11-30T18:26:41.208Z] [0x7f0544ff9700] DEBUG root
>>> (core/modules/wdb/QueryRunner.cc:253) - _transmit last=1 tSeq=80
>>> resultString=[[0]=8, [1]=0, [2]=16, [3]=1, [4]=26, ..., [42]=48,
>>> [43]=50, [44]=55, [45]=16, [46]=0]
>>> [2015-11-30T18:26:41.208Z] [0x7f0544ff9700] INFO  root
>>> (core/modules/xrdsvc/SsiSession_ReplyChannel.cc:85) - sendStream,
>>> checking stream 0x7f0538007a20 len=47 last=1
>>> [2015-11-30T18:26:41.209Z] [0x7f0544ff9700] INFO  root
>>> (core/modules/xrdsvc/ChannelStream.cc:91) - last=1 [[0]=8, [1]=0,
>>> [2]=16, [3]=1, [4]=26, [5]=29, [6]=10, [7]=27, [8]=10, [9]=9, ...,
>>> [37]=10, [38]=6, [39]=50, [40]=54, [41]=56, [42]=48, [43]=50,
>>> [44]=55, [45]=16, [46]=0]
>>> [2015-11-30T18:26:41.209Z] [0x7f0544ff9700] INFO  root
>>> (core/modules/xrdsvc/ChannelStream.cc:94) -  trying to append message
>>> (flowing)
>>> [2015-11-30T18:26:41.209Z] [0x7f0544ff9700] DEBUG BlendSched
>>> (core/modules/wsched/BlendScheduler.cc:132) -
>>> BlendScheduler::commandFinish tSeq=80
>>>
>>>
>>>
>>>
>
>





