Sorry, I hadn't noticed this before ... but I have noticed the issue report and
fixed it :)

\m

On 9/6/19 12:10 PM, Andrew Hanushevsky wrote:
> I don't know why there is a 16M limit, and it clearly is not ideal for
> Ceph- or HDFS-type storage systems. Matevz, do you know? Can the limit be removed?
> 
> Andy
> 
> On Fri, 6 Sep 2019, Sam Skipsey wrote:
> 
>> Hm. Adding that line to the config prevents the xrootd cache service from
>> staying up (and removing it makes it work again).
>>
>> 190906 19:03:37 19779 XrdFileCache_a2x: get block size 64M may not be
>> greater than 16777216
>> 190906 19:03:37 19779 XrdFileCache_Manager: error Cache::Config() error in
>> parsing
>>
>> Setting it to the maximum,
>>
>> pfc.blocksize 16M
>>
>> does significantly improve the rate though (I get about 80 to 90% of the
>> direct rate - around 300MB/s).
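>>
>> For reference, with that change the cache section of /etc/xrootd/xrootd-cache.cfg
>> on cephc02 looks roughly like this (same directives as in my original cache
>> config quoted below, just with the blocksize added at its current 16M cap):
>>
>> pfc.ram       16g
>> pfc.blocksize 16M
>> pfc.trace     info
>> pfc.diskusage 0.90 0.95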
>>
>> Thanks.
>>
>> Is it possible to have the arbitrary limit on pfc.blocksize removed in
>> later releases?
>>
>> Sam
>>
>>
>>
>> On Fri, 6 Sep 2019 at 18:03, Yang, Wei <[log in to unmask]> wrote:
>>
>>> Hi Sam,
>>>
>>>
>>>
>>> One thing that may be useful for Xcache to talk to a CEPH storage is to set
>>> the Xcache "pfc.blocksize" to the same as the CEPH native bucket size.
>>> So if the CEPH bucket size is 64M, then you can add the following to
>>> cephc02's config
>>>
>>>
>>>
>>> pfc.blocksize 64M
>>>
>>>
>>>
>>> The default is 1M
>>>
>>>
>>>
>>> regards,
>>>
>>> -- 
>>>
>>> Wei Yang  |  [log in to unmask]  |  650-926-3338 (O)
>>>
>>>
>>>
>>>
>>>
>>> From: <[log in to unmask]> on behalf of Sam Skipsey <[log in to unmask]>
>>> Date: Friday, September 6, 2019 at 6:20 AM
>>> To: xrootd-l <[log in to unmask]>
>>> Subject: Help debugging slow Xrootd Proxy Cache <-> xrootd server transfers
>>>
>>>
>>>
>>> Hello everyone:
>>>
>>>
>>>
>>> I'm currently seeing some odd behaviour, and would appreciate some insight
>>> from people more deeply aware of xrootd than I am.
>>>
>>>
>>>
>>> Currently we are testing an xrootd setup using internal Xrootd disk
>>> caching proxies to improve performance against a Ceph object store.
>>>
>>>
>>>
>>> The configuration is:
>>>
>>>
>>>
>>> Xrootd server [with ceph plugin] on cephs02.beowulf.cluster
>>>
>>>
>>>
>>> Xrootd cache on cephc02.beowulf.cluster
>>>
>>> Cache is backed by a software raid-0 array of 6 SSDs. The SSD array
>>> achieves > 1.5GB/s transfer rates when tested, and is not an I/O bottleneck.
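>>> (Raw sequential throughput of the array can be sanity-checked with a large
>>> direct write to the cache mount, e.g. something like
>>> dd if=/dev/zero of=/cache/ddtest bs=1M count=4096 oflag=direct
>>> - /cache/ddtest here is just an arbitrary scratch file on the raid-0 array.)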
>>>
>>> Cephc02 is configured as a direct proxy for cephs02.
>>>
>>>
>>>
>>> The firewall is configured so all nodes are in the trusted zone relative to
>>> each other, so no ports are blocked.
>>>
>>>
>>>
>>> The problem is that connections proxied through cephc02's xrootd server are
>>> extremely slow - 10x to 15x slower than direct connections to the xrootd
>>> server on cephs02.
>>>
>>>
>>>
>>> From cephc02, directly copying from cephs02:
>>>
>>>
>>>
>>> [root@cephc02 ~]# time xrdcp root://cephs02:1095/ecpool:testfile2GB testfile2GB-in-1
>>>
>>> [1.863GB/1.863GB][100%][==================================================][381.5MB/s] 
>>>
>>>
>>>
>>>
>>> versus connecting via the proxy on cephc02 (cold cache):
>>>
>>> [root@cephc02 ~]# time xrdcp -v -v -v root://10.1.50.12:1094/ecpool:testfile2GB test-from-cache2
>>>
>>> [1.863GB/1.863GB][100%][==================================================][20.51MB/s] 
>>>
>>>
>>>
>>>
>>> (once the cache is warm, fetching from the cache itself is very fast, at > 1GB/s)
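>>>
>>> e.g. simply re-running the same proxied copy (with an arbitrary new output
>>> name) once the file is cached:
>>>
>>> time xrdcp root://10.1.50.12:1094/ecpool:testfile2GB test-from-cache3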
>>>
>>>
>>>
>>>
>>>
>>> Whilst I'd expect some caching overhead, this seems unusably slow.
>>>
>>> What am I doing wrong here?
>>>
>>>
>>>
>>> Any help appreciated,
>>>
>>>
>>>
>>> Sam Skipsey
>>>
>>> University of Glasgow
>>>
>>>
>>>
>>>
>>>
>>> The Cache and Server authenticate with a shared secret, and their relevant
>>> configs are:
>>>
>>>
>>>
>>> cephc02:
>>>
>>>
>>>
>>> [root@cephc02 ~]# cat /etc/xrootd/xrootd-cache.cfg
>>> ofs.osslib    libXrdPss.so
>>> pss.cachelib  libXrdFileCache.so
>>> pfc.ram      16g
>>> pfc.trace     info
>>> pfc.diskusage 0.90 0.95
>>> oss.localroot /cache/
>>>
>>> all.export /xroot:/
>>> all.export /root:/
>>> all.export *
>>> pss.origin 10.1.50.2:1095
>>> xrd.allow host *.beowulf.cluster
>>>
>>> #inbound security protocol, for authenticating to the xrootd-ceph
>>> #sec.protocol
>>> xrootd.seclib /usr/lib64/libXrdSec.so
>>> sec.protocol sss -s /etc/gateway/xrootd/sss.keytab.grp -c /etc/gateway/xrootd/sss.keytab.grp
>>> sec.protbind cephs02.beowulf.cluster:1095 only sss
>>>
>>> #outside security protocol, for authenticating users wanting to use the proxy
>>> #sec.protocol
>>> sec.protbind localhost only none
>>>
>>> xrd.report 127.0.0.1:9527 every 5s all
>>>
>>>
>>>
>>> -
>>>
>>>
>>>
>>> cephs02:
>>>
>>>
>>>
>>> # The export directive indicates which paths are to be exported. While the
>>> # default is '/tmp', we indicate it anyway to show you this directive.
>>> #
>>> all.export *?
>>> all.export /
>>>
>>> # The adminpath and pidpath variables indicate where the pid and various
>>> # IPC files should be placed
>>> #
>>> all.adminpath /var/spool/xrootd
>>> all.pidpath /var/run/xrootd
>>> xrootd.async segsize 67108864
>>> xrd.buffers maxbsz 67108864
>>>
>>> # Configure sss security - this is a shared secret between the xrootd-ceph
>>> # and the xrootd-proxies, so the proxies are trusted to talk to the xrootd-ceph
>>> #
>>> xrootd.seclib /opt/xrootd/lib64/libXrdSec.so
>>> sec.protocol sss -s /etc/gateway/xrootd/sss.keytab.grp -c /etc/gateway/xrootd/sss.keytab.grp
>>> sec.protbind * only sss
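>>>
>>> # (For completeness: the sss keytab referenced in both configs is the kind of
>>> #  file generated with the xrdsssadmin utility - roughly "xrdsssadmin add
>>> #  /etc/gateway/xrootd/sss.keytab.grp", see its man page for the exact options -
>>> #  and the same keytab must be present and readable on both hosts.)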
>>>
>>> xrootd.seclib /opt/xrootd/lib64/libXrdSec.so
>>> #sec.protocol host
>>> #sec.protbind localhost none
>>>
>>> # Configure rados connection
>>> #   /this needs to be configured for the right stripe width
>>> ofs.osslib +cksio /opt/xrootd/lib64/libXrdCeph.so admin@ecpool,1,8388608,83886080
>>> ofs.xattrlib /opt/xrootd/lib64/libXrdCephXattr.so
>>> xrootd.chksum adler32
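>>>
>>> # (Assuming the parameter string above follows the usual xrootd-ceph form
>>> #  user@pool,nbStripes,stripeUnit,objectSize, the stripe unit here would be
>>> #  8388608 B = 8 MiB and the object size 83886080 B = 80 MiB.)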
>>>
>>> # Configure the port
>>> #
>>> xrd.port 1095
>>>
>>>
>>>
>>>
>>
> 

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the XROOTD-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1