It is a typo. Thanks for catching this.

regards,
--
Wei Yang  |  [log in to unmask]  |  650-926-3338(O)

-----Original Message-----
From: Bertrand RIGAUD <[log in to unmask]>
Date: Tuesday, May 11, 2021 at 2:34 AM
To: Wei Yang <[log in to unmask]>
Cc: Matevz Tadel <[log in to unmask]>, xrootd-l <[log in to unmask]>
Subject: Re: http ingest using xcache fails

    I don't know if this really matters, but is there a typo here?
    
    line 25 - #define HTTP_FILE_PLUG_IN_AVOIDRANGE_ENV "XRDCLHTTP_AVOIDRANAGE"
    
    XRDCLHTTP_AVOIDRANAGE => XRDCLHTTP_AVOIDRANGE
    
    Bertrand Rigaud
    
    Centre de Calcul de l'IN2P3 - CNRS
    21 avenue Pierre de Coubertin
    69627 Villeurbanne CEDEX
    Tél : 04.78.93.08.80
    
    ----- Original Message -----
    From: "Yang, Wei" <[log in to unmask]>
    To: "Bertrand RIGAUD" <[log in to unmask]>, "Matevz Tadel" <[log in to unmask]>
    Cc: "xrootd-l" <[log in to unmask]>
    Sent: Tuesday, May 11, 2021 11:25:32
    Subject: Re: http ingest using xcache fails
    
    Not documented yet, except in the code: https://github.com/xrootd/xrdcl-http/blob/master/src/XrdClHttp/HttpFilePlugIn.hh
    
    I am still trying to understand why HTTP range is not supported, and how to find out in advance.
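
    One way to find out in advance (a rough sketch of mine, not something from XrdClHttp itself; host/port/path below are placeholders) is to send a one-byte Range request and see whether the server answers 206 Partial Content or falls back to 200 OK with the whole body:

```python
# Sketch: probe whether an HTTP server honors byte-range requests.
# (Illustrative only; host/port/path are placeholders, not from this thread.)
import http.client

def supports_ranges(host, port, path):
    """Return True if the server answers a 1-byte Range request with 206."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request("GET", path, headers={"Range": "bytes=0-0"})
        resp = conn.getresponse()
        resp.read()
        # 206 = range honored; 200 = range ignored, whole file returned.
        return resp.status == 206
    finally:
        conn.close()
```

    A server that answers 200 here ignores byte ranges, which is the case where the avoid-range fallback would be needed.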
    
    regards,
    --
    Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
    
    -----Original Message-----
    From: Bertrand RIGAUD <[log in to unmask]>
    Date: Tuesday, May 11, 2021 at 2:22 AM
    To: Matevz Tadel <[log in to unmask]>
    Cc: Wei Yang <[log in to unmask]>, xrootd-l <[log in to unmask]>
    Subject: Re: http ingest using xcache fails
    
        Hello,
        
        OK, yes! Adding ?xrddclhttp_avoidrange to the curl request seems to work! :)
        
        Now I need to find a way to use it with cvmfs!
        
        Is this parameter documented somewhere?
        
        regards,
        
        Bertrand Rigaud
        
        Centre de Calcul de l'IN2P3 - CNRS
        21 avenue Pierre de Coubertin
        69627 Villeurbanne CEDEX
        Tél : 04.78.93.08.80
        
        ----- Original Message -----
        From: "Bertrand RIGAUD" <[log in to unmask]>
        To: "Matevz Tadel" <[log in to unmask]>
        Cc: "Yang, Wei" <[log in to unmask]>, "xrootd-l" <[log in to unmask]>
        Sent: Tuesday, May 11, 2021 11:13:50
        Subject: Re: http ingest using xcache fails
        
        Hello,
        
        OK, so first, to answer your questions:
        
        xrootd version : 5.1.1
        
        XrdClHttp comes from https://xrootd.slac.stanford.edu/binaries/stable/slc/7/x86_64/xrdcl-http-5.1.1-1.el7.x86_64.rpm
        
        Here is the XrdClHttp plugin config:
        
        url = http://*
        lib = /usr/lib64/libXrdClHttp-5.so
        enable = true
        
        OK, now a little more about the context:
        
        We plan to add a cache for public data (GWOSC) and give access to it through cvmfs: https://computing.docs.ligo.org/guide/cvmfs/
        
        There is already an app called stashcache that uses xrootd to do this, but it implies that xrootd must be installed and configured on both the source server and the cache server (https://cvmfs.readthedocs.io/en/stable/_images/xcache1.svg).
        
        In order to be less "invasive", we're trying to deploy this architecture (https://cvmfs.readthedocs.io/en/stable/_images/xcache2.svg)
        
        All this is explained in this doc: https://cvmfs.readthedocs.io/en/stable/cpt-xcache.html
        
        Back to the GWOSC data: we learned that the cvmfs GWOSC repo keeps the catalog and the data separated, i.e. the data is stored on another server defined by CVMFS_EXTERNAL_URL in the cvmfs config (https://cvmfs.readthedocs.io/en/stable/cpt-large-scale.html?highlight=CVMFS_EXTERNAL_URL#creating-large-secure-repositories).
        Thus, files can be accessed directly by their name through an HTTP server.
        
        For example, here is a typical file (about 500 MB):
        
        $ curl -v -o my_file.gwf http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
          0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* About to connect() to fiona.uvalight.net port 8000 (#0)
        *   Trying 145.146.100.30...
        * Connected to fiona.uvalight.net (145.146.100.30) port 8000 (#0)
        > GET /gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf HTTP/1.1
        > User-Agent: curl/7.29.0
        > Host: fiona.uvalight.net:8000
        > Accept: */*
        > 
        < HTTP/1.1 200 OK
        < Connection: Keep-Alive
        < Content-Length: 506173152
        < 
          0  482M    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0{ [data not shown]
        100  482M  100  482M    0     0   100M      0  0:00:04  0:00:04 --:--:--  109M
        * Connection #0 to host fiona.uvalight.net left intact
        
        Now here is what I get when downloading through our xcache server:
        
        $ curl -v -o my_file.gwf http://our_xcache_server:1094//http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
          0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* About to connect() to our_xcache_server port 1094 (#0)
        *   Trying xxx.xxx.xxx.xxx...
        * Connected to our_xcache_server (xxx.xxx.xxx.xxx) port 1094 (#0)
        > GET //http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf HTTP/1.1
        > User-Agent: curl/7.29.0
        > Host: our_xcache_server:1094
        > Accept: */*
        > 
        < HTTP/1.1 200 OK
        < Connection: Keep-Alive
        < Content-Length: 506173152
        < 
        { [data not shown]
         99  482M   99  480M    0     0  50.0M      0  0:00:09  0:00:09 --:--:-- 16.0M* transfer closed with 2856672 bytes remaining to read
         99  482M   99  480M    0     0  50.0M      0  0:00:09  0:00:09 --:--:--     0
        * Closing connection 0
        curl: (18) transfer closed with 2856672 bytes remaining to read
        
        As you can see, 2856672 bytes are missing.
        
        
        
        If we take a look at the cvmfs client view, we can see that the catalog also indicates the right size of the file:
        
        $ ll /cvmfs/gwosc.osgstorage.org/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        -rw-r--r-- 1 cvmfs cvmfs 506173152 Jun 14  2018 /cvmfs/gwosc.osgstorage.org/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        
        
        
        On the xcache server (fresh and cleared), here are the logs up to the point where the first chunk is transferred:
        
        210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp: received dlen: 16
        210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp: received dump: 71 69 84 32 47 47 104 116 116 112 58 47 47 102 105 00 
        210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp: Protocol matched. https: 0
        210511 10:41:35 131934 sysXrdHttp:  Reset
        210511 10:41:35 131934 sysXrdHttp:  XrdHttpReq request ended.
        210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp:  Process. lp:0x7f7f08005378 reqstate: 0
        210511 10:41:35 131934 anon.0:22@clientmachine sysXrdHttp:  Setting host: [::ffff:xxx.xxx.xxx.xxx]
        210511 10:41:35 131934 sysXrdHttp: getDataOneShot BuffAvailable: 1048576 maxread: 1048576
        210511 10:41:35 131934 sysXrdHttp: read 204 of 1048576 bytes
        210511 10:41:35 131934 sysXrdHttp:  rc:127 got hdr line: GET //http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf HTTP/1.1
        
        210511 10:41:35 131934 sysXrdHttp:  Parsing first line: GET //http://fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf HTTP/1.1
        
        210511 10:41:35 131934 sysXrdHttp:  rc:25 got hdr line: User-Agent: curl/7.29.0
        
        210511 10:41:35 131934 sysXrdHttp:  rc:37 got hdr line: Host: our_xcache_server:1094
        
        210511 10:41:35 131934 sysXrdHttp:  rc:13 got hdr line: Accept: */*
        
        210511 10:41:35 131934 sysXrdHttp:  rc:2 got hdr line: 
        
        210511 10:41:35 131934 sysXrdHttp:  rc:2 detected header end.
        210511 10:41:35 131934 XrootdBridge: unknown.1:22@clientmachine login as nobody
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp:  Process. lp:0x7f7f08005378 reqstate: 0
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Process is exiting rc:0
        210511 10:41:35 131934 unknown.1:22@clientmachine Pss_Stat: url=http:[log in to unmask]:8000//gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
        210511 10:41:35 131934 XrdPfc_Cache: debug LocalFilePath '/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf', why=ForAccess
        210511 10:41:35 131934 XrdPfc_Cache: info LocalFilePath '/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf', why=ForAccess -> ENOENT
        210511 10:41:35 131934 Posix_P2L: stat /gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?src=http:[log in to unmask]:8000& pfn2lfn /http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131934 sysXrdHttp:  XrdHttpReq::Data! final=0
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: PostProcessHTTPReq req: 2 reqstate: 0
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Stat for GET /http:/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf stat=-4294902523 506173152 37 0 0 1620722495 1300 xrootd xrootd
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp:  Process. lp:0 reqstate: 0
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Process is exiting rc:0
        210511 10:41:35 131934 unknown.1:22@clientmachine Pss_Open: url=http:[log in to unmask]:8000//gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
        210511 10:41:35 131934 XrdPfc_Cache: debug LocalFilePath '/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf', why=ForAccess
        210511 10:41:35 131934 XrdPfc_Cache: info LocalFilePath '/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf', why=ForAccess -> ENOENT
        210511 10:41:35 131934 Posix_P2L: file /gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?src=http:[log in to unmask]:8000& pfn2lfn /http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131934 XrdPfc_Cache: info Attach() http:[log in to unmask]:8000/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
        210511 10:41:35 131934 XrdPfc_Cache: debug GetFile http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf, io 0x7f7f080464b0
        210511 10:41:35 131934 XrdPfc_IO: debug initCachedStat got stat from client res = 0, size = 506173152 http:[log in to unmask]:8000/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
        210511 10:41:35 131934 XrdPfc_File: dump Open() open file for disk cache http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131934 XrdPfc_File: debug Open() Creating new file info, data size = 506173152 num blocks = 121 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131934 XrdPfc_Cache: debug inc_ref_cnt http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf, cnt at exit = 1
        210511 10:41:35 131934 XrdPfc_File: debug AddIO() io = 0x7f7f080464b0 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131934 XrdPfc_Cache: debug Attach() http:[log in to unmask]:8000/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf? location: <deferred open>
        210511 10:41:35 131961 XrdPfc_File: dump Prefetch enter to check download status http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131934 sysXrdHttp:  XrdHttpReq::Data! final=1
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: PostProcessHTTPReq req: 2 reqstate: 1
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: fhandle:0:0:0:0
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Sending resp: 200 header len:70
        210511 10:41:35 131934 sysXrdHttp: Sending 70 bytes
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp:  Process. lp:0 reqstate: 1
        210511 10:41:35 131934 unknown.1:22@clientmachine sysXrdHttp: Process is exiting rc:0
        210511 10:41:35 131934 XrdPfc_IO: dump Read() 0x7f7f080464b0 off: 0 size: 1048576 http:[log in to unmask]:8000/http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf?
        210511 10:41:35 131961 XrdPfc_File: dump PrepareBlockRequest() idx=0, block=0x7f7ed80008c0, prefetch=True, offset=0, size=4194304, buffer=0x7f7ed8001000
        210511 10:41:35 131961 XrdPfc_File: dump Prefetch take block 0 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131934 XrdPfc_File: dump Read() idx 0 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131934 XrdPfc_File: dump inc_ref_count 0x7f7ed80008c0 refcnt  1 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131934 XrdPfc_File: dump Read() 0x7f7f0804b000inc_ref_count for existing block 0x7f7ed80008c0 idx = 0 http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        210511 10:41:35 131961 XrdPfc_File: dump Prefetch enter to check download status http/fiona.uvalight.net:8000/gwdata/O1/strain.16k/frame.v1/L1/1134559232/L-L1_LOSC_16_V1-1134620672-4096.gwf
        
        As far as I understand from "data size = 506173152", xcache got the right information.
        
        I think something at the end of the transaction between xcache (XrdClHttp) and the "cvmfs" HTTP server must be stopping it. I'm not sure it is related to the file size: whatever the size, the problem always occurs on the last chunk.
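
        For what it's worth, the number of missing bytes matches the last-block arithmetic exactly. A quick back-of-the-envelope check (my own sketch, using only the numbers already shown in the logs above):

```python
# Check: with pfc.blocksize 4M, the 506173152-byte file splits into 121
# blocks (matching the "num blocks = 121" log line), and the final partial
# block is exactly the 2856672 bytes curl reports as missing.
import math

FILE_SIZE = 506173152          # Content-Length reported by the server
BLOCK_SIZE = 4 * 1024 * 1024   # pfc.blocksize 4M

num_blocks = math.ceil(FILE_SIZE / BLOCK_SIZE)
last_block_size = FILE_SIZE - (num_blocks - 1) * BLOCK_SIZE

print(num_blocks, last_block_size)  # 121 2856672
```

        So it is only the final, partial 2856672-byte block that fails; the 120 full 4 MiB blocks are read fine.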
        
        regards,
        
        Bertrand Rigaud
        
        Centre de Calcul de l'IN2P3 - CNRS
        21 avenue Pierre de Coubertin
        69627 Villeurbanne CEDEX
        Tél : 04.78.93.08.80
        
        ----- Original Message -----
        From: "Matevz Tadel" <[log in to unmask]>
        To: "Bertrand RIGAUD" <[log in to unmask]>, "Yang, Wei" <[log in to unmask]>
        Cc: "xrootd-l" <[log in to unmask]>
        Sent: Monday, May 10, 2021 18:26:37
        Subject: Re: http ingest using xcache fails
        
        Hi,
        
        The error occurs or is detected by XrdClHttp:
        [2021-05-06 08:53:09.092614 +0200][Error  ][XrdClHttp         ] Could not read URL: http:[log in to unmask]:8000//path/to/file?, error: [ERROR] Internal error: no such device or address: Result Invalid Read in request after 3 attempts
        
        Now we have to figure out why it happens in XrdClHttp for the original source server. I'd guess there is some issue with the file size.
        
        What file size does xrootd report at open/attach time? Is it the same as the file size on the server?
        
        What version of xrootd is this?
        
        I've never used XrdClHttp, so I don't know how it's packaged/versioned... does it have a separate version?
        
        Cheers,
        Matevz
        
        On 5/10/21 3:07 AM, Bertrand RIGAUD wrote:
        > Hello,
        > 
        > Well, I don't think XcacheH is related to this issue. It's the same behaviour whether or not XcacheH is activated.
        > 
        > In the xcache data folder, I have the file, but its size is just under the full size (the last chunk is missing);
        > how much is missing depends on the pfc.blocksize I choose.
        > 
        > Let's say I have a 10MB file and pfc.blocksize set to 4MB: the downloaded file (in the xcache data folder, and by extension on the client machine) will be 8MB. In the xcache logs, the two 4MB chunks are OK, the last chunk fails, and the logs report that the remaining 2MB can't be downloaded because it cannot be read.
        > 
        > [2021-05-10 10:30:18.127994 +0200][Error  ][XrdClHttp         ] Could not read URL: http://u23@source_server:8000//path/to/the/file?, error: [ERROR] Internal error: no such device or address: Result Invalid Read in request after 3 attempts
        > 
        > 
        > 
        > I performed another test:
        > 
        > I downloaded the file from the source server using curl. The file is complete (the same number of bytes as the Content-Length header reported by curl).
        > I copied this file to a simple VM at our site and exposed it through a basic HTTP server (python -m SimpleHTTPServer 80).
        > I then downloaded the entire file through xcache: no error message in the xcache logs, and the file is fully downloaded both in the xcache data folder and on the client machine.
        > 
        > So,
        > 
        > there is no problem with the file itself
        > there is no problem when downloading from the source server using curl or wget
        > there is no problem between xcache and my basic http server
        > but there is a problem between xcache and the source server
        > 
        > What in the communication between xcache and the source server could prevent the last chunk of a file from being read?
        > 
        > regards,
        > 
        > Bertrand Rigaud
        > 
        > Centre de Calcul de l'IN2P3 - CNRS
        > 21 avenue Pierre de Coubertin
        > 69627 Villeurbanne CEDEX
        > Tél : 04.78.93.08.80
        > 
        > ----- Original Message -----
        > From: "Yang, Wei" <[log in to unmask]>
        > To: "Bertrand RIGAUD" <[log in to unmask]>, "xrootd-l" <[log in to unmask]>
        > Sent: Saturday, May 8, 2021 00:46:50
        > Subject: Re: http ingest using xcache fails
        > 
        > Hmm, I think XcacheH is still more or less experimental. That said, it should still work. Can you go inside the cache directory, find a file named http/path/to/my/file, and check whether that file is fully cached (size, checksum)?
        > 
        > regards,
        > --
        > Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
        > 
        > -----Original Message-----
        > From: <[log in to unmask]> on behalf of Bertrand RIGAUD <[log in to unmask]>
        > Date: Thursday, May 6, 2021 at 4:47 AM
        > To: <[log in to unmask]>
        > Subject: http ingest using xcache fails
        > 
        >     Hi,
        >     
        >     Trying to deploy this architecture (https://cvmfs.readthedocs.io/en/stable/_images/xcache2.svg), I'm facing a problem when downloading a file over HTTP.
        >     
        >     Everything works well until the last chunk is downloaded.
        >     
        >     As an example, here is a simple curl performed from the client machine (the behaviour is the same with the cvmfs client):
        >     
        >     ### Through xcache server ###
        >     
        >     $ curl -v -o file1  http://my_xcache_server:1094//http://path/to/my/file
        >       % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
        >                                      Dload  Upload   Total   Spent    Left  Speed
        >       0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* About to connect() to my_xcache_server port 1094 (#0)
        >     *   Trying xxx.xxx.xxx.xxx...
        >     * Connected to my_xcache_server (xxx.xxx.xxx.xxx) port 1094 (#0)
        >     > GET //http://path/to/my/file HTTP/1.1
        >     > User-Agent: curl/7.29.0
        >     > Host: my_xcache_server:1094
        >     > Accept: */*
        >     > 
        >     < HTTP/1.1 200 OK
        >     < Connection: Keep-Alive
        >     < Content-Length: 506173152
        >     < 
        >     { [data not shown]
        >      99  482M   99  480M    0     0  57.9M      0  0:00:08  0:00:08 --:--:-- 29.9M* transfer closed with 2856672 bytes remaining to read
        >      99  482M   99  480M    0     0  57.7M      0  0:00:08  0:00:08 --:--:-- 11.5M
        >     * Closing connection 0
        >     curl: (18) transfer closed with 2856672 bytes remaining to read
        >     
        >     
        >     ### Direct download ###
        >     
        >     curl -v -o file2 http://path/to/my/file
        >       % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
        >                                      Dload  Upload   Total   Spent    Left  Speed
        >       0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* About to connect() to source_server port 8000 (#0)
        >     *   Trying 145.146.100.30...
        >     * Connected to source_server (xxx.xxx.xxx.xxx) port 8000 (#0)
        >     > GET path/to/my/file HTTP/1.1
        >     > User-Agent: curl/7.29.0
        >     > Host: source_server:8000
        >     > Accept: */*
        >     > 
        >     < HTTP/1.1 200 OK
        >     < Connection: Keep-Alive
        >     < Content-Length: 506173152
        >     < 
        >     { [data not shown]
        >     100  482M  100  482M    0     0   101M      0  0:00:04  0:00:04 --:--:--  101M
        >     * Connection #0 to host source_server left intact
        >     
        >     ### Xcache server logs (last chunk) ###
        >     
        >     210506 08:53:05 155627 sysXrdHttp:  XrdHttpReq::Data! final=1
        >     210506 08:53:05 155627 unknown.2:23@clientmachine sysXrdHttp: PostProcessHTTPReq req: 2 reqstate: 481
        >     210506 08:53:05 155627 unknown.2:23@clientmachine sysXrdHttp: Got data vectors to send:1
        >     210506 08:53:05 155627 sysXrdHttp: Sending 1048576 bytes
        >     210506 08:53:05 155627 unknown.2:23@clientmachine sysXrdHttp:  Process. lp:0 reqstate: 481
        >     210506 08:53:05 155627 unknown.2:23@clientmachine sysXrdHttp: Process is exiting rc:0
        >     210506 08:53:05 155627 XrdPfc_IO: dump Read() 0x7fc1d00019f0 off: 503316480 size: 1048576 http:[log in to unmask]:8000/http/source.server:8000/path/to/file?
        >     210506 08:53:05 155627 XrdPfc_File: dump Read() idx 120 http/source.server:8000/path/to/file
        >     210506 08:53:05 155627 XrdPfc_File: dump inc_ref_count 0x7fc1a0403530 refcnt  1 http/source.server:8000/path/to/file
        >     210506 08:53:05 155627 XrdPfc_File: dump Read() 0x7fc1f0028000inc_ref_count for existing block 0x7fc1a0403530 idx = 120 http/source.server:8000/path/to/file
        >     XcacheH: stagein list snapshot: available workers: 10, list length: 0
        >     [2021-05-06 08:53:09.092614 +0200][Error  ][XrdClHttp         ] Could not read URL: http:[log in to unmask]:8000//path/to/file?, error: [ERROR] Internal error: no such device or address: Result Invalid Read in request after 3 attempts
        >     210506 08:53:09 155645 XrdPfc_File: dump Prefetch enter to check download status http/source.server:8000/path/to/file
        >     210506 08:53:09 155645 XrdPfc_File: debug Prefetch file is complete, stopping prefetch. http/source.server:8000/path/to/file
        >     210506 08:53:09 155626 XrdPfc_File: dump ProcessBlockResponse block=0x7fc1a0403530, idx=120, off=503316480, res=-6 http/source.server:8000/path/to/file
        >     210506 08:53:09 155626 XrdPfc_File: debug ProcessBlockResponse after failed prefetch on io 0x7fc1d00019f0 disabling prefetching on this io. http/source.server:8000/path/to/file
        >     210506 08:53:09 155626 XrdPfc_File: error ProcessBlockResponse block 0x7fc1a0403530, idx=120, off=503316480 error=-6 http/source.server:8000/path/to/file
        >     210506 08:53:09 155627 XrdPfc_File: dump Read() requested block finished 0x7fc1a0403530, is_failed()=True http/source.server:8000/path/to/file
        >     210506 08:53:09 155627 XrdPfc_File: error Read() io 0x7fc1d00019f0, block 120 finished with error 6 no such device or address http/source.server:8000/path/to/file
        >     210506 08:53:09 155627 XrdPfc_File: dump Read() dec_ref_count 0x7fc1a0403530 idx = 120 http/source.server:8000/path/to/file
        >     210506 08:53:09 155627 XrdPfc_File: dump free_block block 0x7fc1a0403530  idx =  120 http/source.server:8000/path/to/file
        >     210506 08:53:09 155627 XrdPfc_IO: warning Read() error in File::Read(), exit status=-6, error=no such device or address http:[log in to unmask]:8000/http/source.server:8000/path/to/file?
        >     210506 08:53:09 155627 ofs_read: unknown.2:23@clientmachine Unable to read /http:/source.server:8000/path/to/file; no such device or address
        >     210506 08:53:09 155627 sysXrdHttp:  XrdHttpReq::Error
        >     210506 08:53:09 155627 unknown.2:23@clientmachine sysXrdHttp: PostProcessHTTPReq req: 2 reqstate: 482
        >     210506 08:53:09 155627 unknown.2:23@clientmachine sysXrdHttp: PostProcessHTTPReq mapping Xrd error [3005] to status code [500]
        >     210506 08:53:09 155627 unknown.2:23@clientmachine sysXrdHttp: Stopping request because more data is expected but no data has been read.
        >     210506 08:53:09 155627 sysXrdHttp:  XrdHttpReq request ended.
        >     210506 08:53:09 155627 sysXrdHttp:  Cleanup
        >     210506 08:53:09 155627 sysXrdHttp:  Reset
        >     210506 08:53:09 155627 sysXrdHttp:  XrdHttpReq request ended.
        >     210506 08:53:09 155627 XrootdXeq: unknown.2:23@clientmachine disc 0:00:09 (send failure)
        >     210506 08:53:09 155627 XrdPfc_File: debug ioActive start for io 0x7fc1d00019f0 http/path/to/file
        >     210506 08:53:09 155627 XrdPfc_File: info ioActive for io 0x7fc1d00019f0, active_prefetches 0, allow_prefetching False, ioactive_false_reported False, ios_in_detach 0
        >     210506 08:53:09 155627 XrdPfc_File: info 	io_map.size() 1, block_map.size() 0, file http/path/to/file
        >     210506 08:53:09 155627 XrdPfc_File: info ioActive for io 0x7fc1d00019f0 returning False, file http/path/to/file
        >     210506 08:53:09 155627 XrdPfc_IO: info DetachFinalize() 0x7fc1d00019f0
        >     210506 08:53:09 155627 XrdPfc_Cache: debug ReleaseFile http/path/to/file, io 0x7fc1d00019f0
        >     210506 08:53:09 155627 XrdPfc_File: debug RemoveIO() io = 0x7fc1d00019f0 http/path/to/file
        >     210506 08:53:09 155627 XrdPfc_Cache: debug dec_ref_cnt http/path/to/file, cnt at entry = 1
        >     210506 08:53:09 155627 XrdPfc_File: debug FinalizeSyncBeforeExit requesting sync to write detach stats http/path/to/file
        >     210506 08:53:09 155627 XrdPfc_Cache: debug dec_ref_cnt http/path/to/file, scheduling final sync
        >     210506 08:53:09 155627 XrdPfc_IO: debug ~IOEntireFile() 0x7fc1d00019f0 http:[log in to unmask]:8000/http/path/to/file?
        >     210506 08:53:09 155627 Posix_PrepIODisable: Disabling defered open http:[log in to unmask]:8000//path/to/file?
        >     210506 08:53:09 160872 XrdPfc_File: dump Sync() http/path/to/file
        >     210506 08:53:09 160872 XrdPfc_File: dump Sync 0 blocks written during sync http/path/to/file
        >     210506 08:53:09 160872 XrdPfc_Cache: debug dec_ref_cnt http/path/to/file, cnt at entry = 1
        >     210506 08:53:09 160872 XrdPfc_File: debug FinalizeSyncBeforeExit sync not required http/path/to/file
        >     210506 08:53:09 160872 XrdPfc_Cache: debug dec_ref_cnt http/path/to/file, cnt after sync_check and dec_ref_cnt = 0
        >     210506 08:53:09 160872 XrdPfc_File: debug ~File() close info  http/path/to/file
        >     210506 08:53:09 160872 XrdPfc_File: debug ~File() close output   http/path/to/file
        >     210506 08:53:09 160872 XrdPfc_File: debug ~File() ended, prefetch score = 3.95238 http/path/to/file
        >     210506 08:53:11 155631 Posix_DDestroy: DLY destory of 1 objects; 0 already lost.
        >     210506 08:53:11 155631 Posix_DDestroy: DLY destory end; 0 objects deferred and 0 lost.
        >     
        >     ### Xcache config ###
        >     
        >     all.role     proxy server
        >     
        >     all.export /http:/
        >     all.export /https:/
        >     
        >     ofs.osslib      libXrdPss.so
        >     
        >     xrootd.seclib /usr/lib64/libXrdSec.so
        >     
        >     pss.origin =http,https
        >     
        >     # US data servers
        >     pss.permit      source.server
        >     pss.permit      source.server
        >     pss.permit      source.server
        >     pss.permit      source.server
        >     pss.permit      source.server
        >     pss.permit      source.server
        >     pss.permit      source.server
        >     # EU data servers
        >     pss.permit      source.server
        >     pss.permit      source.server
        >     pss.permit      source.server
        >     
        >     pss.cachelib    libXrdPfc.so
        >     pss.config streams 8
        >     
        >     # XcacheH
        >     pss.namelib -lfncachesrc+ /usr/lib64/XrdName2NameXcacheH.so cacheLife=1d cacheBlockSize=4m
        >     pss.ccmlib /usr/lib64/XrdName2NameXcacheH.so
        >     
        >     oss.localroot   /xcache/ns
        >     
        >     # Metadata directories (cinfo files)
        >     oss.space meta /xcache/meta
        >     
        >     # Data directories
        >     oss.space data /xcache/data
        >     
        >     # Xcache spaces assignement
        >     pfc.spaces data meta
        >     
        >     if exec xrootd
        >       xrd.protocol http:1094 libXrdHttp.so
        >     fi
        >     
        >     pfc.diskusage 0.90 0.95
        >     
        >     pfc.ram 6g
        >     pfc.blocksize 4M
        >     pfc.prefetch 32
        >     
        >     pfc.trace dump
        >     http.trace   all
        >     pss.trace  all
        >     pss.debug
        >     
        >     ### xrootd version: 5.1.1 ###
        >     
        >     As a result, the file downloaded on the client side is almost complete, and the data cached on the xcache server is likewise almost complete; only the last chunk is missing.
        >     
        >     Are there directives missing in the config file?
        >     
        >     Thank you,
        >     
        >     Bertrand Rigaud
        >     
        >     Centre de Calcul de l'IN2P3 - CNRS
        >     21 avenue Pierre de Coubertin
        >     69627 Villeurbanne CEDEX
        >     Tél : 04.78.93.08.80
        >     
        >     ########################################################################
        >     Use REPLY-ALL to reply to list
        >     
        >     To unsubscribe from the XROOTD-L list, click the following link:
        >     https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1
        > 

