Ah, if that feature always works for any redirection response, I will just
use that.

Thanks Andy,
Andreas.


On Tue, Feb 11, 2020 at 1:04 PM Andrew Hanushevsky <
[log in to unmask]> wrote:

> This seems to be an EOS issue. The object in question does have an internal
> 2K limit. That was, however, extended to have no limit if one supplies an
> external buffer. It would seem that EOS is not using that feature.
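>
> For reference, this is roughly what supplying an external buffer looks like
> on the server side (a simplified sketch; treat the exact
> XrdOucBuffPool/XrdOucBuffer calls as illustrative rather than a drop-in EOS
> patch):
>
>   #include "XrdOuc/XrdOucBuffer.hh"   // XrdOucBuffPool, XrdOucBuffer
>   #include "XrdOuc/XrdOucErrInfo.hh"  // XrdOucErrInfo (wraps an XrdOucEI)
>   #include <cstring>
>   #include <string>
>
>   // Hypothetical helper: return a long redirect/capability string via
>   // XrdOucErrInfo without being capped by the fixed 2K XrdOucEI buffer.
>   void SetLongResponse(XrdOucErrInfo &einfo, int ecode,
>                        const std::string &resp, XrdOucBuffPool &pool)
>   {
>      XrdOucBuffer *buff = pool.Alloc(resp.size() + 1);   // pooled buffer
>      std::memcpy(buff->Buffer(), resp.c_str(), resp.size() + 1);
>      buff->SetLen(resp.size() + 1);
>      einfo.setErrInfo(ecode, buff);  // einfo takes ownership of the buffer
>   }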
>
> On Tue, 11 Feb 2020, Sang Un Ahn wrote:
>
> > Dear Experts,
> >
> > We are posting this to both the EOS and XRootD support lists because it
> seems to us that this issue relates to both.
> >
> > We found that setting the number of stripes above 12, e.g. RAIN(12+4)
> configured on our EOS instance with "sys.forced.nstripes=16", gives the
> following error:
> >
> > 200207 02:51:22 time=1581043882.201812 func=Emsg level=ERROR
> logid=b93a52f8-4954-11ea-b68a-b8599fa51330 unit=[log in to unmask]:1094
> tid=00007ff409bfe700 source=XrdMgmOfsFile:2790
> tident=root.140:[log in to unmask] sec=sss uid=2 gid=2 name=daemon
> geo="" Unable to open file - capability exceeds 2kb limit
> /eos/gsdc/testrain/file1g-jbod-mgmt-09; Cannot allocate memory
> >
> > And this is the successful case with "sys.forced.nstripes=12":
> >
> > 200207 07:56:14 time=1581062174.302108 func=ProcessCapOpaque level=INFO
> logid=5021a2e6-497f-11ea-9a1d-b8599fa51330 unit=[log in to unmask]:1096
> tid=00007fce48129700 source=XrdFstOfsFile:2745
> tident=4.6:[log in to unmask] sec=(null) uid=99 gid=99 name=(null) geo=""
> capability=&mgm.access=create&mgm.ruid=2&mgm.rgid=2&mgm.uid=2&mgm.gid=2
> &mgm.path=/eos/gsdc/testrain/file1g-jbod-mgmt-09
> &mgm.manager=jbod-mgmt-01.eoscluster.sdfarm.kr:1094&mgm.fid=00000036&mgm.cid=11
> &mgm.sec=sss|daemon|jbod-mgmt-09.eoscluster.sdfarm.kr||daemon|||eoscp
> &mgm.lid=1080298322&mgm.bookingsize=1073741824&mgm.targetsize=1073741824&mgm.fsid=1075
> &mgm.url0=root://jbod-mgmt-09.eoscluster.sdfarm.kr:1096//&mgm.fsid0=1495
> &mgm.url1=root://jbod-mgmt-01.eoscluster.sdfarm.kr:1095//&mgm.fsid1=67
> &mgm.url2=root://jbod-mgmt-08.eoscluster.sdfarm.kr:1095//&mgm.fsid2=1243
> &mgm.url3=root://jbod-mgmt-05.eoscluster.sdfarm.kr:1096//&mgm.fsid3=823
> &mgm.url4=root://jbod-mgmt-06.eoscluster.sdfarm.kr:1096//&mgm.fsid4=991
> &mgm.url5=root://jbod-mgmt-03.eoscluster.sdfarm.kr:1096//&mgm.fsid5=487
> &mgm.url6=root://jbod-mgmt-07.eoscluster.sdfarm.kr:1095//&mgm.fsid6=1075
> &mgm.url7=root://jbod-mgmt-05.eoscluster.sdfarm.kr:1095//&mgm.fsid7=739
> &mgm.url8=root://jbod-mgmt-02.eoscluster.sdfarm.kr:1095//&mgm.fsid8=235
> &mgm.url9=root://jbod-mgmt-08.eoscluster.sdfarm.kr:1096//&mgm.fsid9=1327
> &mgm.url10=root://jbod-mgmt-04.eoscluster.sdfarm.kr:1096//&mgm.fsid10=655
> &mgm.url11=root://jbod-mgmt-02.eoscluster.sdfarm.kr:1096//&mgm.fsid11=319
> &cap.valid=1581065774
> >
> > The error comes from here:
> https://github.com/cern-eos/eos/blob/master/mgm/XrdMgmOfsFile.cc#L2585
> > And XrdOucEI::Max_Error_Len has a static size of 2 KB, as described here:
> https://xrootd.slac.stanford.edu/doc/doxygen/current/html/structXrdOucEI.html
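> >
> > As far as we understand, the failing check is roughly of this form (a
> simplified sketch on our side, not the literal EOS code):
> >
> >   #include "XrdOuc/XrdOucErrInfo.hh"  // defines XrdOucEI::Max_Error_Len
> >
> >   #include <string>
> >
> >   // Simplified version of the guard: if the capability opaque string
> >   // does not fit into the fixed XrdOucEI message buffer (2 KB), the
> >   // open fails with "capability exceeds 2kb limit" instead of
> >   // redirecting the client.
> >   bool CapabilityFits(const std::string &capOpaque)
> >   {
> >      return capOpaque.size() < XrdOucEI::Max_Error_Len;
> >   }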
> >
> > FYI, we currently have 18 FSTs across 9 servers (2 FSTs on each server)
> and intend to run them with a 12+4 (16 stripes) RAIN configuration of EOS
> for data archiving purposes. In the near future, we will add 2 more FSTs in
> order to have a complete 14+4 (18 stripes) setup out of 20 FSTs.
> >
> > We were also thinking about shortening the URLs (hostname and domain
> name), however we doubt that this addresses the origin of the issue. In
> some cases the length of the filename, including its path, can be quite
> long in production, e.g. ALICE raw data typically has names like
> "/eos/<site::eos::Instance>/03/60094/00eec022-9340-11e5-a2a6-1f249ae2dbab".
> >
> > So we are not quite sure whether increasing or removing the hard limit
> of XrdOucEI::Max_Error_Len would make sense, but it would be very helpful
> if you could suggest any solution to this issue.
> >
> > Thank you in advance.
> >
> > Best regards,
> > Sang-Un
> >
> >
> >
>

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the XROOTD-L list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-L&A=1