
Hello Brian,

I did not state that so far, and this is a billion-dollar question. Each
option has its own pros and cons. What I want now is to test XRootD with
both possible options (the xrootd-ceph plugin and CephFS) and cover both
in my final report. Later, depending on the full outcome (EC tests, the
memory issue, plugin issues), there will be a final decision. I am not
making it right now.

I have XRootD deployed both ways, but there are issues with both, and you
already know some of them.

On Mon, 14 Sep 2020 at 22:07, Bockelman, Brian <[log in to unmask]>
wrote:

>
>
> On Sep 10, 2020, at 4:01 PM, Justas Balcas <[log in to unmask]> wrote:
>
> Hi All,
>
> Thanks for all replies.
>
> Brian, I will use the XRootD repo for the evaluation, which is fine - but
> if we decide to move forward with Ceph, I would be happier to have it
> under OSG.
>
>
> Hi Justas,
>
> Just to be clear - when you say "move forward with Ceph", you mean "move
> forward with librados" and not "move forward with CephFS"?  Based on your
> work so far, it's not clear that librados was in the running.
>
> Brian
>
> So we will see about this in the near future. I have tried several ways
> to integrate with Ceph, and I have already reported one issue (unrelated
> to Ceph, about checksums) to OSG Support.
>
> As for the features - these are great ones!
>
> From an operational point of view: can you or the RAL team share your
> XRootD Ceph configuration? I tried using the default pool and enforcing a
> specific pool, but could not get any write to succeed. If you can share
> it with me, it will be easier to move on. Thanks!
>
> On Wed, 9 Sep 2020 at 07:28, Brian Lin <[log in to unmask]> wrote:
>
>> Hi Alistair,
>>
>> Thanks for the offer to share your RPMs! Do you modify upstream's
>> packaging at all or do you treat it as a passthrough? I'm adding Carl who
>> has been working on building the package on the OSG side.
>>
>> To answer some of the other points in the thread:
>>
>> 1. Unfortunately, changing the OSG dist tag naming scheme would be a
>> pretty major project and we would lose benefits such as being able to
>> easily identify OSG packages through the package name. If the XRootD team
>> is open to it, I'd suggest that we remove the "-%{release}" from the
>> requirement string [1].
>>
>> 2. We're currently working on enabling xrootd-ceph in our 4.12.x build of
>> XRootD [2] but this is starting to prove to be quite a bit more work than I
>> had anticipated for a package that would be used for evaluating the plugin
>> vs direct POSIX access. Justas, can you use the packages from the
>> XRootD repo for the evaluation?
>>
>> Thanks,
>> Brian
>>
>> [1]
>> https://github.com/xrootd/xrootd/blob/master/packaging/rhel/xrootd.spec.in#L380
>>
>> [2] https://opensciencegrid.atlassian.net/browse/SOFTWARE-4226
>>
>> On 9/9/20 4:22 AM, Alastair Dewhurst wrote:
>>
>> Hi
>>
>> I am not sure which of the RAL people are on the xrootd mailing list so I
>> have added them all in case.  For their benefit AHM stands for “All Hands
>> Meeting” and is the equivalent of our GridPP meeting.  Justas’ slides can
>> be found here:
>> https://indico.fnal.gov/event/22127/contributions/194938/attachments/133987/165495/osg-ahm-Balcas-ceph.pptx
>>
>>
>> In terms of development, there are two key areas RAL have effort to work
>> on at the moment:
>> 1) Adding Vector Reads to XrdCeph.
>> This is to solve the long-standing issue that was discussed in the
>> following ticket: https://github.com/xrootd/xrootd/issues/1259  Ian
>> Johnson will do the development work, although we need to schedule it,
>> and we want to schedule a discussion with a few experts (e.g. Brian B and
>> Andreas JP) to make sure the logical implementation of vector reads is
>> correct.
>>
>> 2) Getting TPC transfers to work
>> James Walder is doing the testing work here.  Many of the issues found
>> have been handled by the core XRootD team; however, there are some that
>> fall to us, such as how XrdCeph handles the mkdir call and making the
>> command that calculates checksums on large files work correctly.
>>
>> The XrdCeph plugin can already read/write to different pools (we have a
>> different pool for ATLAS, CMS, LHCb, ALICE, and DUNE, as well as a
>> general-purpose one).  These all use 8+3 EC; however, we could have
>> created a 3x replication pool if we wanted.  Having features that make
>> it easier to write to different QoS pools would be very interesting, but
>> I would add that to the "Data Lake" development work rather than treat
>> it as a more immediate operational issue.
>>
>> With regards to releases, we build the necessary RPMs at RAL.  I would be
>> happy to share the RPMs we produce and we were planning to do that anyway
>> for Glasgow’s benefit.  Happy to have a discussion about (hopefully) simple
>> things we could do to make life easier for OSG.
>>
>> Alastair
>>
>> P.S. I have money for "Data Lake" development work but haven't recruited
>> anyone to actually do it yet.  Data Lake here is the catch-all term for
>> storage development that we will need for the future, aka Run 4.
>>
>>
>>
>> On 8 Sep 2020, at 22:35, Yang, Wei <[log in to unmask]> wrote:
>>
>> Hi Justas,
>>
>> Thanks for summarizing the issues. For features in xrootd-ceph, I hope
>> Andy or Alastair know whether there is development resource for that.
>>
>> From your other points, it seems to me that you want to continue using
>> the OSG repo, and:
>>
>>
>>    1. I think Brian Lin will have a say on whether OSG can change the
>>    naming convention of OSG-built XRootD RPMs. I suspect that he does not
>>    have such flexibility.
>>    2. Can OSG integrate xrootd-ceph into their XRootD RPM suite? I don't
>>    know whether OSG has the resources to do so.
>>    3. How much do the xrootd-ceph RPMs depend on a specific version of
>>    Ceph? If the repo maintainer has to maintain separate RPMs for each
>>    Ceph release, that is a disaster for both the repo maintainer and
>>    users.
>>
>>
>> I think you may want to use something like yum install --disablerepo="*"
>> --enablerepo="osg" ... to pick and choose which repo supplies the xrootd
>> release. You may (but are not guaranteed to) be able to navigate through
>> those constraints. In some of my cases, I do things in the following
>> order:
>>
>>
>>    1. Install all the other RPMs that xrootd depends on (from EPEL), for
>>    example voms.
>>    2. Install xrootd from OSG (or the xrootd repo), specifying the
>>    version I need.
>>    3. Continue with the other RPM installations.
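A more persistent way to get the effect of the pick-and-choose above is an exclude line in the repo you do not want xrootd to come from, so routine yum updates keep respecting the choice. The fragment below is purely illustrative; the repo file and section names depend on your setup:

```ini
# /etc/yum.repos.d/osg.repo (hypothetical fragment)
[osg]
# ...existing baseurl/gpgkey/priority settings...
# stop this repo from ever supplying xrootd packages:
exclude=xrootd*
```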
>>
>>
>> regards,
>> --
>> Wei Yang  |  [log in to unmask]  |  650-926-3338(O)
>>
>> On 9/4/20, 3:18 PM, "[log in to unmask] on behalf of Justas
>> Balcas" <[log in to unmask] on behalf of [log in to unmask]>
>> wrote:
>>
>> Hi Folks,
>>
>> Following up on today's chat at the OSG AHM about the XRootD Ceph
>> plugin, here is a draft dump of my thoughts and what I have encountered.
>>
>> 1. We depend fully on the OSG XRootD distribution to install and
>> maintain XRootD servers.
>> 2. OSG does not build the XRootd-Ceph plugin.
>> 3. XRootD-FS would not fit us, as we need to compute and store adler32
>> checksums. Maybe we could get around this with a separate checksum
>> calculation; I have never tried it, but I think it is doable.
>> 4. I can't simply include the xrootd-ceph repo [1] and install the
>> xrootd-ceph plugin, as I get many conflicts [2]. (I could enforce a
>> specific version with `yum install xrootd-ceph-4.12.3`, but it would be
>> the same issue.)
>> 5. The xrootd-ceph plugin from the xrootd repo depends on a specific
>> version of Ceph, e.g. 4.12.3 on librados 14.2 (meaning Nautilus), while
>> the cluster I test on is Mimic. I also have Nautilus, but the issue
>> remains that I would have to move away from the OSG-built xrootd to the
>> xrootd-team build. If I do so, the only piece of software still dependent
>> on the OSG team is condor (so I could change that too and be free from
>> OSG, even though this is not what I am tackling, and OSG does the hard
>> part of validating that all software is ready for HEP). The options I
>> see:
>>       a. Ask OSG to change how they name RPMs, so there is no conflict
>> and I can keep using the OSG repos?
>>       b. Move away from the OSG XRootD build to the XRootD one? For the
>> future, this still needs point c.
>>       c. For any production-ready XRootD release (and a few back
>> releases: 4.11, 4.12, 5.0, and soon 5.1), it would be nice if the plugin
>> were compatible with the two last active Ceph releases (Nautilus,
>> Octopus).
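On point 3 above: a separate adler32 calculation is easy to script. Below is a minimal sketch using Python's standard zlib module, reading in chunks so large files do not need to fit in memory (how the result gets stored where XRootD can serve it is left open):

```python
import zlib

def adler32_hex(path, chunk_size=1 << 20):
    """Stream a file and return its adler32 checksum as 8 hex digits."""
    value = 1  # adler32 starting value
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            # Feed each chunk into the running checksum.
            value = zlib.adler32(block, value)
    return f"{value & 0xFFFFFFFF:08x}"
```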
>>
>> Now, the XRootD-Ceph plugin features needed:
>> 1. Currently it defines nbStripes, stripeUnit, and objectSize, which can
>> be overridden at startup. Ceph allows these parameters to be controlled
>> per directory, so any time the XRootD-Ceph plugin writes data, it should
>> look up the directory's parameters (nbStripes, stripeUnit, objectSize)
>> and use them. If none are defined, use the defaults.
>> 2. This one is a *futuristic* idea: allow xrootd-ceph to control which
>> EC and which nbStripes, stripeUnit, and objectSize to use. In this case,
>> the site defines which EC schemes a pool supports, and the VO can decide
>> during a file put - use EC 10,1 and a 16 MB stripe unit, and so on. That
>> is just an idea, and I have not fully tested this functionality yet, so
>> I am not sure to what level it is possible.
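The lookup in feature 1 could work roughly as below. This is a hypothetical sketch: the per_dir_layouts map stands in for whatever Ceph-side per-directory metadata the plugin would actually query, and the default values are made up for illustration:

```python
import posixpath

# Illustrative startup defaults, not the plugin's real ones.
DEFAULTS = {"nbStripes": 1, "stripeUnit": 4 << 20, "objectSize": 4 << 20}

def layout_for(path, per_dir_layouts):
    """Walk up from the file's directory and return the nearest configured
    layout (nbStripes, stripeUnit, objectSize), falling back to defaults."""
    d = posixpath.dirname(path)
    while d:
        if d in per_dir_layouts:
            # Directory overrides win; anything unspecified keeps its default.
            return {**DEFAULTS, **per_dir_layouts[d]}
        if d == "/":
            break
        d = posixpath.dirname(d)
    return dict(DEFAULTS)
```

A write to /atlas/raw/file1 would then pick up a layout configured on /atlas unless /atlas/raw sets its own.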
>>
>>
>>
>> [1]
>> [root@transfer-5 ~]# cat /etc/yum.repos.d/xrootd.repo
>> [xrootd-ceph]
>> name=XRootD Ceph repository
>> baseurl=http://xrootd.cern.ch/xrootd-ceph-repo/cc-7-x86_64/
>> gpgcheck=1
>> enabled=1
>> protect=0
>> gpgkey=http://xrootd.cern.ch/sw/releases/RPM-GPG-KEY.txt
>>
>> [2]
>> [root@transfer-5 ~]# yum install xrootd-ceph
>> Loaded plugins: fastestmirror, priorities
>> Loading mirror speeds from cached hostfile
>>  * base: centos3.zswap.net
>>  * epel: d2lzkl7pfhq30w.cloudfront.net
>>  * extras: mirrors.xtom.com
>>  * osg: mirror.grid.uchicago.edu
>>  * updates: mirrors.xtom.com
>> 56 packages excluded due to repository priority protections
>> Resolving Dependencies
>> --> Running transaction check
>> ---> Package xrootd-ceph.x86_64 1:5.0.1-1.el7 will be installed
>> --> Processing Dependency: xrootd-server-libs(x86-64) = 1:5.0.1-1.el7 for
>> package: 1:xrootd-ceph-5.0.1-1.el7.x86_64
>> --> Processing Dependency: xrootd-libs(x86-64) = 1:5.0.1-1.el7 for
>> package: 1:xrootd-ceph-5.0.1-1.el7.x86_64
>> --> Processing Dependency: xrootd-client-libs(x86-64) = 1:5.0.1-1.el7 for
>> package: 1:xrootd-ceph-5.0.1-1.el7.x86_64
>> --> Processing Dependency: librados.so.2(LIBRADOS_14.2.0)(64bit) for
>> package: 1:xrootd-ceph-5.0.1-1.el7.x86_64
>> --> Processing Dependency: libXrdUtils.so.3()(64bit) for package:
>> 1:xrootd-ceph-5.0.1-1.el7.x86_64
>> --> Finished Dependency Resolution
>> Error: Package: 1:xrootd-ceph-5.0.1-1.el7.x86_64 (xrootd-ceph)
>>            Requires: xrootd-libs(x86-64) = 1:5.0.1-1.el7
>>            Installed: 1:xrootd-libs-4.12.3-1.osg35.el7.x86_64 (@osg)
>>                xrootd-libs(x86-64) = 1:4.12.3-1.osg35.el7
>>            Available: 1:xrootd-libs-4.10.0-1.osg35.el7.x86_64 (osg)
>>                xrootd-libs(x86-64) = 1:4.10.0-1.osg35.el7
>>            Available: 1:xrootd-libs-4.10.1-1.osg35.el7.x86_64 (osg)
>>                xrootd-libs(x86-64) = 1:4.10.1-1.osg35.el7
>>            Available: 1:xrootd-libs-4.11.0-1.osg35.el7.x86_64 (osg)
>>                xrootd-libs(x86-64) = 1:4.11.0-1.osg35.el7
>>            Available: 1:xrootd-libs-4.11.1-1.osg35.el7.x86_64 (osg)
>>                xrootd-libs(x86-64) = 1:4.11.1-1.osg35.el7
>>            Available: 1:xrootd-libs-4.11.2-1.osg35.el7.x86_64 (osg)
>>                xrootd-libs(x86-64) = 1:4.11.2-1.osg35.el7
>>            Available: 1:xrootd-libs-4.11.3-1.2.osg35.el7.x86_64 (osg)
>>                xrootd-libs(x86-64) = 1:4.11.3-1.2.osg35.el7
>> Error: Package: 1:xrootd-ceph-5.0.1-1.el7.x86_64 (xrootd-ceph)
>>            Requires: xrootd-client-libs(x86-64) = 1:5.0.1-1.el7
>>            Installed: 1:xrootd-client-libs-4.12.3-1.osg35.el7.x86_64
>> (@osg)
>>                xrootd-client-libs(x86-64) = 1:4.12.3-1.osg35.el7
>>            Available: 1:xrootd-client-libs-4.10.0-1.osg35.el7.x86_64 (osg)
>>                xrootd-client-libs(x86-64) = 1:4.10.0-1.osg35.el7
>>            Available: 1:xrootd-client-libs-4.10.1-1.osg35.el7.x86_64 (osg)
>>                xrootd-client-libs(x86-64) = 1:4.10.1-1.osg35.el7
>>            Available: 1:xrootd-client-libs-4.11.0-1.osg35.el7.x86_64 (osg)
>>                xrootd-client-libs(x86-64) = 1:4.11.0-1.osg35.el7
>>            Available: 1:xrootd-client-libs-4.11.1-1.osg35.el7.x86_64 (osg)
>>                xrootd-client-libs(x86-64) = 1:4.11.1-1.osg35.el7
>>            Available: 1:xrootd-client-libs-4.11.2-1.osg35.el7.x86_64 (osg)
>>                xrootd-client-libs(x86-64) = 1:4.11.2-1.osg35.el7
>>            Available: 1:xrootd-client-libs-4.11.3-1.2.osg35.el7.x86_64
>> (osg)
>>                xrootd-client-libs(x86-64) = 1:4.11.3-1.2.osg35.el7
>> Error: Package: 1:xrootd-ceph-5.0.1-1.el7.x86_64 (xrootd-ceph)
>>            Requires: libXrdUtils.so.3()(64bit)
>> Error: Package: 1:xrootd-ceph-5.0.1-1.el7.x86_64 (xrootd-ceph)
>>            Requires: librados.so.2(LIBRADOS_14.2.0)(64bit)
>> Error: Package: 1:xrootd-ceph-5.0.1-1.el7.x86_64 (xrootd-ceph)
>>            Requires: xrootd-server-libs(x86-64) = 1:5.0.1-1.el7
>>            Installed: 1:xrootd-server-libs-4.12.3-1.osg35.el7.x86_64
>> (@osg)
>>                xrootd-server-libs(x86-64) = 1:4.12.3-1.osg35.el7
>>            Available: 1:xrootd-server-libs-4.10.0-1.osg35.el7.x86_64 (osg)
>>                xrootd-server-libs(x86-64) = 1:4.10.0-1.osg35.el7
>>            Available: 1:xrootd-server-libs-4.10.1-1.osg35.el7.x86_64 (osg)
>>                xrootd-server-libs(x86-64) = 1:4.10.1-1.osg35.el7
>>            Available: 1:xrootd-server-libs-4.11.0-1.osg35.el7.x86_64 (osg)
>>                xrootd-server-libs(x86-64) = 1:4.11.0-1.osg35.el7
>>            Available: 1:xrootd-server-libs-4.11.1-1.osg35.el7.x86_64 (osg)
>>                xrootd-server-libs(x86-64) = 1:4.11.1-1.osg35.el7
>>            Available: 1:xrootd-server-libs-4.11.2-1.osg35.el7.x86_64 (osg)
>>                xrootd-server-libs(x86-64) = 1:4.11.2-1.osg35.el7
>>            Available: 1:xrootd-server-libs-4.11.3-1.2.osg35.el7.x86_64
>> (osg)
>>                xrootd-server-libs(x86-64) = 1:4.11.3-1.2.osg35.el7
>>  You could try using --skip-broken to work around the problem
>>  You could try running: rpm -Va --nofiles --nodigest
>>
>

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the XROOTD-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1