Happy to be corrected if I'm wrong, but I also suspect this will not work, for the reason you guessed. We use both the XRootD Ceph plugin and XCache at RAL, although not in the way you are looking at.

XrdCeph uses a rados pool directly, which presents a very simple object storage interface and, as you noted, supports only a limited set of operations (i.e. none of the filesystem ones). XCache (at least in the mode we use it) relies on having a filesystem underneath it, as it creates a directory structure for the files it caches. That is unlikely to work if the underlying storage doesn't support it. Fundamentally it could probably be made to work with XrdCeph as a storage backend, but since it currently relies on operations the XrdCeph plugin does not support, that would require development work.

You should, however, be able to use a CephFS mount as the storage for the XCache, which I think would be interesting to try, especially as you could run multiple XCache servers on top of the same shared filesystem for shared-cache goodness. It might turn out not to be a good idea, though, and I'd be interested to hear whether anyone is trying this.

Cheers,
Tom

From: [log in to unmask] <[log in to unmask]> On Behalf Of Diego Ciangottini
Sent: 30 October 2019 10:09
To: [log in to unmask]
Subject: XCache on ceph with plugin: Operation not supported

Hi everyone,

I started to take a look at the xrootd ceph plugin with version 4.10.1, since I'd like to understand whether we can use ceph rbd as the backend for an XCache instance. So, first of all: is this supported?

As a blind shot I tried a basic local setup with the cache configuration below (*). An xrdcp completes without error, but the cache does not create the file in the pool, failing with the error shown in (**). Shooting in the dark again, I would say this is related to (***).

This is everything I have so far; can you help me understand whether this is possible? And if not, do you have any suggested solution?
Thanks,
Diego

(*)
xrootd.trace all
ofs.trace dump
xrd.trace debug
cms.trace debug
sec.trace debug
pfc.trace dump
oss.trace dump
all.export /
all.role server
xrd.port 32294
ofs.osslib libXrdPss.so
pss.cachelib libXrdFileCache.so
pfc.osslib /lib64/libXrdCeph.so diego@diegopool
pss.origin localhost:1094
pfc.diskusage 0.8 0.9
pfc.ram 1g
pfc.blocksize 512k
pfc.prefetch 0

(**)
191030 09:49:26 236384 root.236560:34@localhost XrootdProtocol: 0100 open rat /test.txt?
ceph_stat: /test.txt.cinfo
191030 09:49:26 236384 XrdFileCache_Manager: info Cache::Attach() root://u34@localhost:1094//test.txt?pss.tid=root.236560:34@localhost&oss.lcl=1
191030 09:49:26 236384 XrdFileCache_Manager: debug Cache::GetFile /test.txt, io 0x7f22b0003960
ceph_stat: /test.txt.cinfo
191030 09:49:26 236384 XrdFileCache_IO: debug IOEntireFile::initCachedStat get stat from client res = 0, size = 10 root://u34@localhost:1094//test.txt?pss.tid=root.236560:34@localhost&oss.lcl=1
191030 09:49:26 236384 XrdFileCache_File: dump File::Open() open file for disk cache /test.txt
ceph_stat: /test.txt
ceph_stat: /test.txt.cinfo
191030 09:49:26 236384 XrdFileCache_File: error File::Open() Create failed , err_code=95, err_str=Operation not supported /test.txt
191030 09:49:26 236384 XrdFileCache_File: debug File::~File() ended, prefetch score = 1 /test.txt
191030 09:49:26 236384 XrdFileCache_IO: debug IOEntireFile::~IOEntireFile() 0x7f22b0003960 root://u34@localhost:1094//test.txt?pss.tid=root.236560:34@localhost&oss.lcl=1
191030 09:49:26 236384 XrdFileCache_Manager: error Cache::Attach() Failed opening local file, falling back to remote access root://u34@localhost:1094//test.txt?pss.tid=root.236560:34@localhost&oss.lcl=1

(***) https://github.com/xrootd/xrootd-ceph/blob/master/src/XrdCeph/XrdCephOss.cc#L166

________________________________
Use REPLY-ALL to reply to list
To unsubscribe from the XROOTD-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1
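[Editor's note] Tom's suggestion of backing an XCache with a CephFS mount could look roughly like the fragment below. This is an untested sketch, not a confirmed working setup: the mount point /cephfs/xcache and the origin origin.example.org:1094 are placeholders, and the directive names follow Diego's 4.10.x configuration above plus the standard oss.localroot directive for pointing the cache at a local path.

```
# Hypothetical XCache config using a CephFS mount as the cache store.
# Assumes CephFS is already mounted at /cephfs/xcache (illustrative path).
all.export /
all.role server
xrd.port 32294
ofs.osslib libXrdPss.so
pss.cachelib libXrdFileCache.so
pss.origin origin.example.org:1094
oss.localroot /cephfs/xcache     # cache files land on the shared CephFS mount
pfc.diskusage 0.8 0.9
pfc.ram 1g
pfc.blocksize 512k
```

Since CephFS is a real POSIX filesystem, the directory-creation calls that fail against the rados-backed XrdCeph plugin should succeed here; running several XCache servers over the same mount is the untested part.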
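[Editor's note] For anyone reading the trace in (**): the err_code=95 reported by the failing Create call is a raw Linux errno value. A couple of lines of Python (a generic illustration, not part of XRootD) confirm what it maps to:

```python
import errno
import os

# err_code=95 in the XrdFileCache trace is a raw Linux errno:
# EOPNOTSUPP, i.e. exactly the "Operation not supported" that the
# XrdCeph plugin returns for filesystem-style calls it does not implement.
print(errno.EOPNOTSUPP)                # 95 (on Linux)
print(os.strerror(errno.EOPNOTSUPP))   # Operation not supported
```

This matches the err_str in the log and the ENOTSUP returns in the XrdCephOss source linked at (***).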