LISTSERV mailing list manager LISTSERV 16.5

XROOTD-L Archives

XROOTD-L@LISTSERV.SLAC.STANFORD.EDU

XROOTD-L May 2021

Subject:

Re: http ingest using xcache fails

From:

Bertrand RIGAUD <[log in to unmask]>

Reply-To:

Support use of xrootd by HEP experiments <[log in to unmask]>

Date:

Tue, 11 May 2021 11:13:50 +0200

Content-Type:

multipart/signed

Parts/Attachments:

text/plain (441 lines), smime.p7s (441 lines)

Hello,

OK, so first, to answer your questions:

xrootd version: 5.1.1

XrdClHttp comes from https://xrootd.slac.stanford.edu/binaries/stable/slc/7/x86_64/xrdcl-http-5.1.1-1.el7.x86_64.rpm

Here is the XrdClHttp plugin config:

url = http://*
lib = /usr/lib64/libXrdClHttp-5.so
enable = true
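For reference, the same plugin settings written out as an installable file. This is only a sketch: the `client.plugins.d` directory name is an assumption based on XrdCl's usual client plugin search path, and the local path used here is a stand-in for the real system location.

```shell
# Write the XrdClHttp plugin config shown above into a plugin config
# directory. CONF_DIR defaults to a local path for illustration; a real
# deployment would typically use something like /etc/xrootd/client.plugins.d
# (assumption -- check your XrdCl plugin search path).
CONF_DIR="${CONF_DIR:-./client.plugins.d}"
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/xrdcl-http.conf" <<'EOF'
url = http://*
lib = /usr/lib64/libXrdClHttp-5.so
enable = true
EOF
echo "wrote $CONF_DIR/xrdcl-http.conf"
```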

OK, now a little more about the context:

We plan to add a cache for public data (GWOSC) and give access to it through cvmfs: https://computing.docs.ligo.org/guide/cvmfs/

There is already an app called stashcache that uses xrootd to do this, but it requires xrootd to be installed and configured on both the source server and the cache server (https://cvmfs.readthedocs.io/en/stable/_images/xcache1.svg).

In order to be less "invasive", we're trying to deploy this architecture (https://cvmfs.readthedocs.io/en/stable/_images/xcache2.svg)

All this is explained in this doc: https://cvmfs.readthedocs.io/en/stable/cpt-xcache.html

Back to the GWOSC data, we learned that the cvmfs GWOSC repo keeps the catalog and the data separated, i.e. the data is stored on another server, defined by CVMFS_EXTERNAL_URL in the cvmfs config (https://cvmfs.readthedocs.io/en/stable/cpt-large-scale.html?highlight=CVMFS_EXTERNAL_URL#creating-large-secure-repositories).
Thus, files can be accessed directly by name through an HTTP server.

For example, here is a typical file (about 500 MB):

$ curl -v -o my_file.gwf http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
  0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to fiona.uvalight.net port 8000 (#0)
* Trying 145.146.100.30...
* Connected to fiona.uvalight.net (145.146.100.30) port 8000 (#0)
> GET /gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf HTTP/1.1
> User-Agent: curl/7.29.0
> Host: fiona.uvalight.net:8000
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: Keep-Alive
< Content-Length: 506173152
<
  0 482M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{ [data not shown]
100 482M 100 482M 0 0 100M 0 0:00:04 0:00:04 --:--:-- 109M
* Connection #0 to host fiona.uvalight.net left intact

Now here is what I get when downloading through our xcache server:

$ curl -v -o my_file.gwf http://our_xcache_server:1094//http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
  % Total % Received % Xferd Average Speed Time Time Time Current
                                 Dload Upload Total Spent Left Speed
  0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to our_xcache_server port 1094 (#0)
* Trying xxx.xxx.xxx.xxx...
* Connected to our_xcache_server (xxx.xxx.xxx.xxx) port 1094 (#0)
> GET //http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf HTTP/1.1
> User-Agent: curl/7.29.0
> Host: our_xcache_server:1094
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: Keep-Alive
< Content-Length: 506173152
<
{ [data not shown]
 99 482M 99 480M 0 0 50.0M 0 0:00:09 0:00:09 --:--:-- 16.0M* transfer closed with 2856672 bytes remaining to read
 99 482M 99 480M 0 0 50.0M 0 0:00:09 0:00:09 --:--:-- 0
* Closing connection 0
curl: (18) transfer closed with 2856672 bytes remaining to read

As you can see here, there are 2856672 bytes missing
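The missing byte count lines up exactly with one trailing partial cache block: with pfc.blocksize set to 4M (as in our config) the 506173152-byte file splits into 120 full blocks plus one partial block, matching the "num blocks = 121" in the xcache logs. A quick check of the arithmetic:

```shell
# Verify that the missing bytes equal the final partial pfc block.
# File size comes from the Content-Length above; block size from
# pfc.blocksize 4M in the xcache config.
FILE_SIZE=506173152
BLOCK=$((4 * 1024 * 1024))                      # 4194304 bytes
FULL_BLOCKS=$((FILE_SIZE / BLOCK))              # 120 complete blocks
LAST_BLOCK=$((FILE_SIZE - FULL_BLOCKS * BLOCK)) # size of the partial block
echo "blocks: $((FULL_BLOCKS + 1)), last (partial) block: $LAST_BLOCK bytes"
# prints: blocks: 121, last (partial) block: 2856672 bytes
```

So the 2856672 bytes curl reports as "remaining to read" are exactly the last, partial block (idx 120 in the logs).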



If we take a look at the cvmfs client view, we can see that the catalog also indicates the right size of the file:

$ ll /cvmfs/gwosc.osgstorage.org/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
-rw-r--r-- 1 cvmfs cvmfs 506173152 Jun 14 2018 /cvmfs/gwosc.osgstorage.org/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf



On the xcache server (fresh and cleared), here are the logs up to the first chunk being transferred:

210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp: received dlen: 16
210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp: received dump: 71 69 84 32 47 47 104 116 116 112 58 47 47 102 105 00
210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp: Protocol matched. https: 0
210511 10:41:35 131934 sysXrdHttp: Reset
210511 10:41:35 131934 sysXrdHttp: XrdHttpReq request ended.
210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp: Process. lp:0x7f7f08005378 reqstate: 0
210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp: Setting host: [::ffff:xxx.xxx.xxx.xxx]
210511 10:41:35 131934 sysXrdHttp: getDataOneShot BuffAvailable: 1048576 maxread: 1048576
210511 10:41:35 131934 sysXrdHttp: read 204 of 1048576 bytes
210511 10:41:35 131934 sysXrdHttp: rc:127 got hdr line: GET //http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf HTTP/1.1

210511 10:41:35 131934 sysXrdHttp: Parsing first line: GET //http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf HTTP/1.1

210511 10:41:35 131934 sysXrdHttp: rc:25 got hdr line: User-Agent: curl/7.29.0

210511 10:41:35 131934 sysXrdHttp: rc:37 got hdr line: Host: our_xcache_server:1094

210511 10:41:35 131934 sysXrdHttp: rc:13 got hdr line: Accept: */*

210511 10:41:35 131934 sysXrdHttp: rc:2 got hdr line:

210511 10:41:35 131934 sysXrdHttp: rc:2 detected header end.
210511 10:41:35 131934 XrootdBridge: unknown.1:22@clientmachine login as nobody
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Process. lp:0x7f7f08005378 reqstate: 0
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Process is exiting rc:0
210511 10:41:35 131934 unknown.1:22@clientmachine Pss_Stat: url=http:[log in to unmask]:8000//gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
210511 10:41:35 131934 XrdPfc_Cache: debug LocalFilePath '/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf', why=ForAccess
210511 10:41:35 131934 XrdPfc_Cache: info LocalFilePath '/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf', why=ForAccess -> ENOENT
210511 10:41:35 131934 Posix_P2L: stat /gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?src=http:[log in to unmask]:8000& pfn2lfn /http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131934 sysXrdHttp: XrdHttpReq::Data! final=0
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: PostProcessHTTPReq req: 2 reqstate: 0
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Stat for GET /http:/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf stat=-4294902523 506173152 37 0 0 1620722495 1300 xrootd xrootd
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Process. lp:0 reqstate: 0
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Process is exiting rc:0
210511 10:41:35 131934 unknown.1:22@clientmachine Pss_Open: url=http:[log in to unmask]:8000//gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
210511 10:41:35 131934 XrdPfc_Cache: debug LocalFilePath '/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf', why=ForAccess
210511 10:41:35 131934 XrdPfc_Cache: info LocalFilePath '/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf', why=ForAccess -> ENOENT
210511 10:41:35 131934 Posix_P2L: file /gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?src=http:[log in to unmask]:8000& pfn2lfn /http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131934 XrdPfc_Cache: info Attach() http:[log in to unmask]:8000/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
210511 10:41:35 131934 XrdPfc_Cache: debug GetFile http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf, io 0x7f7f080464b0
210511 10:41:35 131934 XrdPfc_IO: debug initCachedStat got stat from client res = 0, size = 506173152 http:[log in to unmask]:8000/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
210511 10:41:35 131934 XrdPfc_File: dump Open() open file for disk cache http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131934 XrdPfc_File: debug Open() Creating new file info, data size = 506173152 num blocks = 121 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131934 XrdPfc_Cache: debug inc_ref_cnt http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf, cnt at exit = 1
210511 10:41:35 131934 XrdPfc_File: debug AddIO() io = 0x7f7f080464b0 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131934 XrdPfc_Cache: debug Attach() http:[log in to unmask]:8000/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf? location: <deferred open>
210511 10:41:35 131961 XrdPfc_File: dump Prefetch enter to check download status http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131934 sysXrdHttp: XrdHttpReq::Data! final=1
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: PostProcessHTTPReq req: 2 reqstate: 1
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: fhandle:0:0:0:0
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Sending resp: 200 header len:70
210511 10:41:35 131934 sysXrdHttp: Sending 70 bytes
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Process. lp:0 reqstate: 1
210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Process is exiting rc:0
210511 10:41:35 131934 XrdPfc_IO: dump Read() 0x7f7f080464b0 off: 0 size: 1048576 http:[log in to unmask]:8000/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
210511 10:41:35 131961 XrdPfc_File: dump PrepareBlockRequest() idx=0, block=0x7f7ed80008c0, prefetch=True, offset=0, size=4194304, buffer=0x7f7ed8001000
210511 10:41:35 131961 XrdPfc_File: dump Prefetch take block 0 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131934 XrdPfc_File: dump Read() idx 0 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131934 XrdPfc_File: dump inc_ref_count 0x7f7ed80008c0 refcnt 1 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131934 XrdPfc_File: dump Read() 0x7f7f0804b000inc_ref_count for existing block 0x7f7ed80008c0 idx = 0 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
210511 10:41:35 131961 XrdPfc_File: dump Prefetch enter to check download status http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf

As far as I understand from "data size = 506173152", xcache got the right information.

I think something at the end of the transaction between xcache (XrdClHttp) and the "cvmfs" HTTP server is stopping it. I'm not sure it is related to the file size: whatever the size, the problem always occurs on the last chunk.

regards,

Bertrand Rigaud

Centre de Calcul de l'IN2P3 - CNRS
21 avenue Pierre de Coubertin
69627 Villeurbanne CEDEX
Tel: 04.78.93.08.80

----- Original Message -----
From: "Matevz Tadel" <[log in to unmask]>
To: "Bertrand RIGAUD" <[log in to unmask]>, "Yang, Wei" <[log in to unmask]>
Cc: "xrootd-l" <[log in to unmask]>
Sent: Monday, May 10, 2021 18:26:37
Subject: Re: http ingest using xcache fails
Objet: Re: http ingest using xcache fails

Hi,

The error occurs or is detected by XrdClHttp:
[2021-05-06 08:53:09.092614 +0200][Error ][XrdClHttp ] Could not read URL: http:[log in to unmask]:8000//path/to/file?, error: [ERROR] Internal error: no such device or address: Result Invalid Read in request after 3 attempts

Now we have to figure out why it happens in XrdClHttp for the original source server. I'd guess there is some issue with file size.

What file size does xrootd report at open / attach time? Is this the same as the file size on the server?

What version of xrootd is this?

I've never used XrdClHttp so I don't know how it's packaged / versioned ... does it have a separate version?

Cheers,
Matevz

On 5/10/21 3:07 AM, Bertrand RIGAUD wrote:
> Hello,
>
> Well, I don't think XcacheH is related to this issue. It's the same behaviour whether or not XcacheH is activated.
>
> In the xcache data folder, I have the file, but its size is just under the full size (the last chunk is missing);
> it depends on the pfc.blocksize I choose.
>
> Say I have a 10MB file and pfc.blocksize set to 4MB: my downloaded file (in the xcache data folder and, by extension, on the client machine) will be 8MB. In the xcache logs, the first two 4MB chunks are OK, the last chunk fails, and the log says the rest (2MB) cannot be downloaded because it cannot be read.
>
> [2021-05-10 10:30:18.127994 +0200][Error ][XrdClHttp ] Could not read URL: http://u23@source_server:8000//path/to/the/file?, error: [ERROR] Internal error: no such device or address: Result Invalid Read in request after 3 attempts
>
>
>
> I performed another test:
>
> I downloaded the file from the source server using curl: I got the fully downloaded file (the same number of bytes as curl's "Content-Length" header).
> I copied this file to a simple VM at our site and exposed it through a basic HTTP server (python -m SimpleHTTPServer 80).
> I then downloaded the entire file through xcache! No error message in the xcache logs; the file is fully downloaded in the xcache data folder and on the client machine.
>
> So,
>
> there is no problem with the file itself
> there is no problem when downloading from the source server using curl or wget
> there is no problem between xcache and my basic http server
> but there is a problem between xcache and the source server
>
> What in the communication between xcache and the source server could prevent the last chunk of a file from being read?
>
> regards,
>
> Bertrand Rigaud
>
> Centre de Calcul de l'IN2P3 - CNRS
> 21 avenue Pierre de Coubertin
> 69627 Villeurbanne CEDEX
> Tel: 04.78.93.08.80
>
> ----- Original Message -----
> From: "Yang, Wei" <[log in to unmask]>
> To: "Bertrand RIGAUD" <[log in to unmask]>, "xrootd-l" <[log in to unmask]>
> Sent: Saturday, May 8, 2021 00:46:50
> Subject: Re: http ingest using xcache fails
>
> Hmm, I think XcacheH is still more or less experimental. That said, it should still work. Can you go inside the cache directory, find a file named http/path/to/my/file, and check whether that file is fully cached (size, checksum)?
>
> regards,
> --
> Wei Yang | [log in to unmask] | 650-926-3338(O)
>
> -----Original Message-----
> From: <[log in to unmask]> on behalf of Bertrand RIGAUD <[log in to unmask]>
> Date: Thursday, May 6, 2021 at 4:47 AM
> To: <[log in to unmask]>
> Subject: http ingest using xcache fails
>
> Hi,
>
> Trying to deploy this architecture (https://cvmfs.readthedocs.io/en/stable/_images/xcache2.svg), I'm facing a problem when downloading a file over HTTP.
>
> Everything works well until the last chunk is downloaded; that chunk fails.
>
> As an example, here is a simple curl performed from the client machine (and this is the same behaviour with the cvmfs client):
>
> ### Through xcache server ###
>
> $ curl -v -o file1 http://my_xcache_server:1094//http://path/to/my/file
> % Total % Received % Xferd Average Speed Time Time Time Current
> Dload Upload Total Spent Left Speed
> 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to my_xcache_server port 1094 (#0)
> * Trying xxx.xxx.xxx.xxx...
> * Connected to my_xcache_server (xxx.xxx.xxx.xxx) port 1094 (#0)
> > GET //http://path/to/my/file HTTP/1.1
> > User-Agent: curl/7.29.0
> > Host: my_xcache_server:1094
> > Accept: */*
> >
> < HTTP/1.1 200 OK
> < Connection: Keep-Alive
> < Content-Length: 506173152
> <
> { [data not shown]
> 99 482M 99 480M 0 0 57.9M 0 0:00:08 0:00:08 --:--:-- 29.9M* transfer closed with 2856672 bytes remaining to read
> 99 482M 99 480M 0 0 57.7M 0 0:00:08 0:00:08 --:--:-- 11.5M
> * Closing connection 0
> curl: (18) transfer closed with 2856672 bytes remaining to read
>
>
> ### Direct download ###
>
> curl -v -o file2 http://path/to/my/file
> % Total % Received % Xferd Average Speed Time Time Time Current
> Dload Upload Total Spent Left Speed
> 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to source_server port 8000 (#0)
> * Trying 145.146.100.30...
> * Connected to source_server (xxx.xxx.xxx.xxx) port 8000 (#0)
> > GET path/to/my/file HTTP/1.1
> > User-Agent: curl/7.29.0
> > Host: source_server:8000
> > Accept: */*
> >
> < HTTP/1.1 200 OK
> < Connection: Keep-Alive
> < Content-Length: 506173152
> <
> { [data not shown]
> 100 482M 100 482M 0 0 101M 0 0:00:04 0:00:04 --:--:-- 101M
> * Connection #0 to host source_server left intact
>
> ### Xcache server logs (last chunk) ###
>
> 210506 08:53:05 155627 sysXrdHttp: XrdHttpReq::Data! final=1
> 210506 08:53:05 155627 unknown.2:23@clientmachine sysXrdHttp: PostProcessHTTPReq req: 2 reqstate: 481
> 210506 08:53:05 155627 unknown.2:23@clientmachine sysXrdHttp: Got data vectors to send:1
> 210506 08:53:05 155627 sysXrdHttp: Sending 1048576 bytes
> 210506 08:53:05 155627 unknown.2:23@clientmachine sysXrdHttp: Process. lp:0 reqstate: 481
> 210506 08:53:05 155627 unknown.2:23@clientmachine sysXrdHttp: Process is exiting rc:0
> 210506 08:53:05 155627 XrdPfc_IO: dump Read() 0x7fc1d00019f0 off: 503316480 size: 1048576 http:[log in to unmask]:8000/http/source.server:8000/path/to/file?
> 210506 08:53:05 155627 XrdPfc_File: dump Read() idx 120 http/source.server:8000/path/to/file
> 210506 08:53:05 155627 XrdPfc_File: dump inc_ref_count 0x7fc1a0403530 refcnt 1 http/source.server:8000/path/to/file
> 210506 08:53:05 155627 XrdPfc_File: dump Read() 0x7fc1f0028000inc_ref_count for existing block 0x7fc1a0403530 idx = 120 http/source.server:8000/path/to/file
> XcacheH: stagein list snapshot: available workers: 10, list length: 0
> [2021-05-06 08:53:09.092614 +0200][Error ][XrdClHttp ] Could not read URL: http:[log in to unmask]:8000//path/to/file?, error: [ERROR] Internal error: no such device or address: Result Invalid Read in request after 3 attempts
> 210506 08:53:09 155645 XrdPfc_File: dump Prefetch enter to check download status http/source.server:8000/path/to/file
> 210506 08:53:09 155645 XrdPfc_File: debug Prefetch file is complete, stopping prefetch. http/source.server:8000/path/to/file
> 210506 08:53:09 155626 XrdPfc_File: dump ProcessBlockResponse block=0x7fc1a0403530, idx=120, off=503316480, res=-6 http/source.server:8000/path/to/file
> 210506 08:53:09 155626 XrdPfc_File: debug ProcessBlockResponse after failed prefetch on io 0x7fc1d00019f0 disabling prefetching on this io. http/source.server:8000/path/to/file
> 210506 08:53:09 155626 XrdPfc_File: error ProcessBlockResponse block 0x7fc1a0403530, idx=120, off=503316480 error=-6 http/source.server:8000/path/to/file
> 210506 08:53:09 155627 XrdPfc_File: dump Read() requested block finished 0x7fc1a0403530, is_failed()=True http/source.server:8000/path/to/file
> 210506 08:53:09 155627 XrdPfc_File: error Read() io 0x7fc1d00019f0, block 120 finished with error 6 no such device or address http/source.server:8000/path/to/file
> 210506 08:53:09 155627 XrdPfc_File: dump Read() dec_ref_count 0x7fc1a0403530 idx = 120 http/source.server:8000/path/to/file
> 210506 08:53:09 155627 XrdPfc_File: dump free_block block 0x7fc1a0403530 idx = 120 http/source.server:8000/path/to/file
> 210506 08:53:09 155627 XrdPfc_IO: warning Read() error in File::Read(), exit status=-6, error=no such device or address http:[log in to unmask]:8000/http/source.server:8000/path/to/file?
> 210506 08:53:09 155627 ofs_read: unknown.2:23@clientmachine Unable to read /http:/source.server:8000/path/to/file; no such device or address
> 210506 08:53:09 155627 sysXrdHttp: XrdHttpReq::Error
> 210506 08:53:09 155627 unknown.2:23@clientmachine sysXrdHttp: PostProcessHTTPReq req: 2 reqstate: 482
> 210506 08:53:09 155627 unknown.2:23@clientmachine sysXrdHttp: PostProcessHTTPReq mapping Xrd error [3005] to status code [500]
> 210506 08:53:09 155627 unknown.2:23@clientmachine sysXrdHttp: Stopping request because more data is expected but no data has been read.
> 210506 08:53:09 155627 sysXrdHttp: XrdHttpReq request ended.
> 210506 08:53:09 155627 sysXrdHttp: Cleanup
> 210506 08:53:09 155627 sysXrdHttp: Reset
> 210506 08:53:09 155627 sysXrdHttp: XrdHttpReq request ended.
> 210506 08:53:09 155627 XrootdXeq: unknown.2:23@clientmachine disc 0:00:09 (send failure)
> 210506 08:53:09 155627 XrdPfc_File: debug ioActive start for io 0x7fc1d00019f0 http/path/to/file
> 210506 08:53:09 155627 XrdPfc_File: info ioActive for io 0x7fc1d00019f0, active_prefetches 0, allow_prefetching False, ioactive_false_reported False, ios_in_detach 0
> 210506 08:53:09 155627 XrdPfc_File: info io_map.size() 1, block_map.size() 0, file http/path/to/file
> 210506 08:53:09 155627 XrdPfc_File: info ioActive for io 0x7fc1d00019f0 returning False, file http/path/to/file
> 210506 08:53:09 155627 XrdPfc_IO: info DetachFinalize() 0x7fc1d00019f0
> 210506 08:53:09 155627 XrdPfc_Cache: debug ReleaseFile http/path/to/file, io 0x7fc1d00019f0
> 210506 08:53:09 155627 XrdPfc_File: debug RemoveIO() io = 0x7fc1d00019f0 http/path/to/file
> 210506 08:53:09 155627 XrdPfc_Cache: debug dec_ref_cnt http/path/to/file, cnt at entry = 1
> 210506 08:53:09 155627 XrdPfc_File: debug FinalizeSyncBeforeExit requesting sync to write detach stats http/path/to/file
> 210506 08:53:09 155627 XrdPfc_Cache: debug dec_ref_cnt http/path/to/file, scheduling final sync
> 210506 08:53:09 155627 XrdPfc_IO: debug ~IOEntireFile() 0x7fc1d00019f0 http:[log in to unmask]:8000/http/path/to/file?
> 210506 08:53:09 155627 Posix_PrepIODisable: Disabling defered open http:[log in to unmask]:8000//path/to/file?
> 210506 08:53:09 160872 XrdPfc_File: dump Sync() http/path/to/file
> 210506 08:53:09 160872 XrdPfc_File: dump Sync 0 blocks written during sync http/path/to/file
> 210506 08:53:09 160872 XrdPfc_Cache: debug dec_ref_cnt http/path/to/file, cnt at entry = 1
> 210506 08:53:09 160872 XrdPfc_File: debug FinalizeSyncBeforeExit sync not required http/path/to/file
> 210506 08:53:09 160872 XrdPfc_Cache: debug dec_ref_cnt http/path/to/file, cnt after sync_check and dec_ref_cnt = 0
> 210506 08:53:09 160872 XrdPfc_File: debug ~File() close info http/path/to/file
> 210506 08:53:09 160872 XrdPfc_File: debug ~File() close output http/path/to/file
> 210506 08:53:09 160872 XrdPfc_File: debug ~File() ended, prefetch score = 3.95238 http/path/to/file
> 210506 08:53:11 155631 Posix_DDestroy: DLY destory of 1 objects; 0 already lost.
> 210506 08:53:11 155631 Posix_DDestroy: DLY destory end; 0 objects deferred and 0 lost.
>
> ### Xcache config ###
>
> all.role proxy server
>
> all.export /http:/
> all.export /https:/
>
> ofs.osslib libXrdPss.so
>
> xrootd.seclib /usr/lib64/libXrdSec.so
>
> pss.origin =http,https
>
> # US data servers
> pss.permit source.server
> pss.permit source.server
> pss.permit source.server
> pss.permit source.server
> pss.permit source.server
> pss.permit source.server
> pss.permit source.server
> # EU data servers
> pss.permit source.server
> pss.permit source.server
> pss.permit source.server
>
> pss.cachelib libXrdPfc.so
> pss.config streams 8
>
> # XcacheH
> pss.namelib -lfncachesrc+ /usr/lib64/XrdName2NameXcacheH.so cacheLife=1d cacheBlockSize=4m
> pss.ccmlib /usr/lib64/XrdName2NameXcacheH.so
>
> oss.localroot /xcache/ns
>
> # Metadata directories (cinfo files)
> oss.space meta /xcache/meta
>
> # Data directories
> oss.space data /xcache/data
>
> # Xcache spaces assignment
> pfc.spaces data meta
>
> if exec xrootd
> xrd.protocol http:1094 libXrdHttp.so
> fi
>
> pfc.diskusage 0.90 0.95
>
> pfc.ram 6g
> pfc.blocksize 4M
> pfc.prefetch 32
>
> pfc.trace dump
> http.trace all
> pss.trace all
> pss.debug
>
> ### xrootd version: 5.1.1 ###
>
> As a result, I get an almost-complete file on the client side; likewise, the data cached on the xcache server is an almost-complete file. Just this last chunk is missing.
>
> Are there directives missing in the config file?
>
> Thank you,
>
> Bertrand Rigaud
>
> Centre de Calcul de l'IN2P3 - CNRS
> 21 avenue Pierre de Coubertin
> 69627 Villeurbanne CEDEX
> Tel: 04.78.93.08.80
>
> ########################################################################
> Use REPLY-ALL to reply to list
>
> To unsubscribe from the XROOTD-L list, click the following link:
> https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1
