Andy, this is just weird. I don't understand what the RAL server is doing. Take a look. This is the data-server/dCache-pool side log with the TPC client logging (dCache). The last few iterations of the read look like this, after I fixed the client to request exactly the remaining number of bytes:

```
05 Aug 2021 16:02:10 (dcatest04-5) [] Read response received for dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1: status 0, got 4194304 more bytes.
05 Aug 2021 16:02:10 (dcatest04-5) [] Read of dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1: wrote 4194304, so far 994050048, expected 1000000000.
05 Aug 2021 16:02:10 (dcatest04-5) [] sendReadRequest to ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1, fhandle 0, offset 994050048, chunksize 4194304.
05 Aug 2021 16:02:10 (dcatest04-5) [] Decoder ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735: adding read response.
05 Aug 2021 16:02:10 (dcatest04-5) [] Read response received for dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1: status 0, got 4194304 more bytes.
05 Aug 2021 16:02:10 (dcatest04-5) [] Request to stat stat[0x0,,] is for third-party transfer.
05 Aug 2021 16:02:11 (dcatest04-5) [] Read of dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1: wrote 4194304, so far 998244352, expected 1000000000.
05 Aug 2021 16:02:11 (dcatest04-5) [] sendReadRequest to ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1, fhandle 0, offset 998244352, chunksize 1755648.
05 Aug 2021 16:02:11 (dcatest04-5) [] Decoder ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735: adding read response.
05 Aug 2021 16:02:11 (dcatest04-5) [] Read response received for dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1: status 4000, got 1755648 more bytes.
05 Aug 2021 16:02:11 (dcatest04-5) [] Read of dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1: wrote 1755648, so far 1000000000, expected 1000000000.
05 Aug 2021 16:02:11 (dcatest04-5) [] Waiting for more data for dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1
05 Aug 2021 16:02:11 (dcatest04-5) [] Decoder ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735: adding read response.
05 Aug 2021 16:02:11 (dcatest04-5) [] Read response received for dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1: status 0, got 2438656 more bytes.
05 Aug 2021 16:02:11 (dcatest04-5) [] Read of dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1: wrote 2438656, so far 1002438656, expected 1000000000.
05 Aug 2021 16:02:11 (dcatest04-5) [] Read for dteam:test1/domatest/jwalder/HTTP_1GB on ceph-gw8.gridpp.rl.ac.uk:1094, channel c312c735, stream 1, completed without checksum verification.
05 Aug 2021 16:02:11 (dcatest04-5) [] fireDelayedSync (result 0), (error null), (serverError null); syncRequest org.dcache.xrootd.protocol.messages.SyncRequest@48874739, isFirstSync false
```
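For reference, the fix mentioned above just clamps each read request to the bytes still outstanding. A minimal standalone illustration of that arithmetic (it reproduces the offsets/chunksizes in the log excerpt; it is not the actual dCache TPC client code):

```java
// Illustrative only -- not the actual dCache TPC client code.  Each request
// asks for min(4 MiB, bytes still outstanding), so the final request at
// offset 998244352 asks for exactly 1755648 bytes, as in the log above.
public class ChunkPlan {
    public static void main(String[] args) {
        final long expected = 1_000_000_000L;   // expected file size
        final int  maxChunk = 4 * 1024 * 1024;  // 4194304, the normal chunk size
        long soFar = 994_050_048L;              // pick up where the log excerpt starts

        while (soFar < expected) {
            int chunk = (int) Math.min(maxChunk, expected - soFar);
            System.out.printf("sendReadRequest offset %d, chunksize %d%n", soFar, chunk);
            soFar += chunk;                     // assume the server returns exactly what was asked for
        }
    }
}
```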
It seems that the server is still sending back kXR_oksofar (status 4000) on the final call, and then delivering more bytes after that, even though that call clearly brings us to the expected 1000000000.

Is it the TPC client's responsibility to _**ignore**_ kXR_oksofar if it has **_already_** received the number of bytes expected?
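If the answer is yes, then on the dCache side the handling would presumably look something like the sketch below: once the expected byte count has been reached, treat the transfer as complete and drop any further payload, even if the last response carried kXR_oksofar. (Names such as onReadResponse, write and completeTransfer are placeholders here, not the real xrootd4j handler API.)

```java
// Sketch of the defensive behaviour being asked about -- placeholder names,
// not the real handler code.  kXR_ok (0) and kXR_oksofar (4000) are the
// protocol status codes visible in the log above.
public class DefensiveReadHandler {
    static final int kXR_ok      = 0;     // final response for a read request
    static final int kXR_oksofar = 4000;  // partial response, more to follow

    private final long expected;          // total bytes the TPC client expects
    private long soFar;                   // bytes received and written so far
    private boolean done;

    DefensiveReadHandler(long expected) { this.expected = expected; }

    void onReadResponse(int status, byte[] payload) {
        if (done || soFar >= expected) {
            return;                       // ignore trailing data after a spurious oksofar
        }
        write(payload);
        soFar += payload.length;
        if (status == kXR_ok || soFar >= expected) {
            done = true;
            completeTransfer();
        }
    }

    private void write(byte[] payload) { /* write to the pool's mover channel */ }
    private void completeTransfer()    { /* trigger the usual sync/close sequence */ }
}
```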

