Hi Andy,

Thanks, all clear. Will do this tomorrow.

Alja

On 3/25/14 7:16 PM, Andrew Hanushevsky wrote:
> Hi Alja,
>
> OK, so if you look at the latest git head you will see a namespace,
> XrdPosixGlobals, with a single symbol in it, schedP. That is the
> pointer to the thread-pool scheduler. It is zero if none exists. So,
> here is what needs to be done:
>
> 1) XrdPosixFile needs to inherit the XrdJob class.
> 2) XrdPosixFile needs to define the virtual method DoIt() from XrdJob 
> to simply call its DelayedDestroyed method and return.
> 3) When you need to delay the destroy: if schedP exists, schedule the
> destroy via schedP->Schedule((XrdJob *)fP); otherwise run a separate
> thread as you do now.
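>
> In code the three steps come out roughly like this (a sketch; anything
> not named above is an assumption):
>
>    #include "Xrd/XrdJob.hh"
>    #include "Xrd/XrdScheduler.hh"
>
>    namespace XrdPosixGlobals
>    {
>       extern XrdScheduler *schedP;   // thread-pool scheduler, zero if none
>    }
>
>    class XrdPosixFile : public XrdJob
>    {
>    public:
>       // Run by a pool thread once scheduled; just do the delayed destroy.
>       virtual void DoIt() {DelayedDestroyed(); return;}
>
>       void DelayedDestroyed();       // the existing delayed-destroy logic
>    };
>
>    // Where the destroy must be delayed (hypothetical helper):
>    void DeferDestroy(XrdPosixFile *fP)
>    {
>       if (XrdPosixGlobals::schedP)
>          XrdPosixGlobals::schedP->Schedule((XrdJob *)fP);
>          else {/* run a separate thread, as is done now */}
>    }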
>
> OK?
>
> Andy
>
> -----Original Message----- From: Alja Mrak-Tadel
> Sent: Tuesday, March 25, 2014 3:40 PM
> To: Andrew Hanushevsky ; Matevz Tadel
> Cc: xrootd-dev
> Subject: Re: release status
>
> Hi Andy,
>
> This is the minimal change that does what we need:
> https://github.com/alja/xrootd/compare/posix-close
>
> Alja
>
> On 03/24/14 16:23, Andrew Hanushevsky wrote:
>> Hi Matevz,
>>
>> The reason I chose ioActive() is simply that I thought we would keep
>> the code that destroys the posix object the same in close(), and either
>> execute it immediately if we could or schedule a thread to do a deferred
>> close. In any case, the code (as it does now) at the very least removes
>> the FD from the fd table. Then again, looking at the code changes I do
>> get a little confused. I suppose I would say keep the logic as close to
>> the original as possible, with minimal changes to solve this.
>>
>> Andy
>>
>> On Mon, 24 Mar 2014, Matevz Tadel wrote:
>>
>>> Hi Andy,
>>>
>>> I vote for
>>>
>>> bool initiateClose();
>>>
>>> as this makes it clear that Close() will be called in a jiffy and that
>>> irreversible state changes might occur in the object. ioActive()
>>> sounds like something that can be called several times without
>>> changing the object.
>>>
>>> But ... you're the master so please just pick what you think is
>>> consistent with the rest.
>>>
>>> Matevz
>>>
>>> On 03/24/14 12:54, Andrew Hanushevsky wrote:
>>>> Hi Alja,
>>>>
>>>> Well, if that is the simplest way to do it then let's go ahead with
>>>> that. I assume the I/O pending state is quite dynamic and can't really
>>>> be easily handled by some simple flag. So, let's go with something like:
>>>>
>>>> bool ioActive();
>>>>
>>>> which returns true to trigger a delayed close and false to allow the
>>>> caller to do an inline close. If there is more than just active I/O
>>>> that can place the object in a non-closeable state, then we should use
>>>> a more generic name (e.g. isCloseable()). What do you think?
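>>>>
>>>> On the caller side this would be used roughly as follows (a sketch;
>>>> only ioActive() itself comes from this discussion, the helpers are
>>>> hypothetical):
>>>>
>>>>    class XrdOucCacheIO
>>>>    {public:
>>>>        virtual bool ioActive() = 0;   // true -> I/O still in flight
>>>>        virtual     ~XrdOucCacheIO() {}
>>>>    };
>>>>
>>>>    extern void ScheduleDelayedClose(XrdOucCacheIO *ioP); // hypothetical
>>>>
>>>>    void CloseIO(XrdOucCacheIO *ioP)   // hypothetical close path
>>>>    {
>>>>       if (ioP->ioActive()) ScheduleDelayedClose(ioP); // defer to a thread
>>>>          else delete ioP;                             // inline close
>>>>    }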
>>>>
>>>> Andy
>>>>
>>>> On Sun, 23 Mar 2014, Alja Mrak-Tadel wrote:
>>>>
>>>>> Hi Andy,
>>>>>> I was wondering if there was a way to structure this so that a thread
>>>>>> is run to destroy the file object only if a file cache is being used.
>>>>>> I'd rather not add the thread overhead to things like fuse and a
>>>>>> non-caching proxy server. As for reusing a worker thread, while a good
>>>>>> idea, we really don't have access to them. Something to think about.
>>>>>> I'll see how difficult it would be. In any case, even if I could
>>>>>> provide access to the thread pool, there won't be one if the posix
>>>>>> interface is being used outside of xrootd, so the code would have to
>>>>>> have two paths in any case.
>>>>>>
>>>>> How about adding a new virtual XrdOucCacheIO function that checks if
>>>>> destruction of XrdPosixFile needs to be delayed (e.g. InitiateClose(),
>>>>> RequiresDelayedClose(), TryClose() ...)? The return value of this
>>>>> function determines if a new thread is needed to run the real Close().
>>>>> In this function I will prevent issuing any new read requests, so it has
>>>>> to be guaranteed that the real Close() is called immediately after that.
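>>>>>
>>>>> Roughly what I have in mind on the cache side (a sketch; the class and
>>>>> member names are made up):
>>>>>
>>>>>    #include "XrdSys/XrdSysPthread.hh"
>>>>>
>>>>>    class CacheIO   // hypothetical XrdOucCacheIO subclass in XrdFileCache
>>>>>    {public:
>>>>>        // Refuse any further reads and report whether reads are still in
>>>>>        // flight; the caller must then invoke the real Close() right away.
>>>>>        bool InitiateClose()
>>>>>        {  XrdSysMutexHelper mh(m_mutex);
>>>>>           m_allowReads = false;
>>>>>           return m_readsInFlight > 0;   // true -> delay the destruction
>>>>>        }
>>>>>        CacheIO() : m_allowReads(true), m_readsInFlight(0) {}
>>>>>    private:
>>>>>        XrdSysMutex m_mutex;
>>>>>        bool        m_allowReads;
>>>>>        int         m_readsInFlight;
>>>>>    };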
>>>>>
>>>>> Alja
>>>>>
>>>>>> On Fri, 21 Mar 2014, Alja Mrak-Tadel wrote:
>>>>>>
>>>>>>> Hi Andy,
>>>>>>>
>>>>>>> I had to move destruction of XrdPosixFile into a separate task to
>>>>>>> avoid a long lock in XrdOfsHandle::Alloc() and XrdOfsHandle::Retire().
>>>>>>> This is the change which fixed the problem:
>>>>>>>  https://github.com/alja/xrootd/compare/pd
>>>>>>> You might want to reuse existing worker threads instead. One of the
>>>>>>> XrdClFile::Close() calls is also removed.
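>>>>>>>
>>>>>>> The shape of the workaround is roughly this (a sketch, not the exact
>>>>>>> code in the branch above):
>>>>>>>
>>>>>>>    #include <pthread.h>
>>>>>>>
>>>>>>>    class XrdPosixFile;                    // defined elsewhere
>>>>>>>    void DestroyFile(XrdPosixFile *fP);    // hypothetical: deletes fP
>>>>>>>
>>>>>>>    extern "C" void *RunDestroy(void *arg) // thread entry point
>>>>>>>    {  DestroyFile((XrdPosixFile *)arg);   // destructor runs here, outside
>>>>>>>       return 0;                           // the XrdOfsHandle lock
>>>>>>>    }
>>>>>>>
>>>>>>>    // Called instead of deleting the file object under the handle lock:
>>>>>>>    void DeferredDestroy(XrdPosixFile *fP)
>>>>>>>    {  pthread_t tid;
>>>>>>>       if (pthread_create(&tid, 0, RunDestroy, (void *)fP) == 0)
>>>>>>>          pthread_detach(tid);
>>>>>>>    }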
>>>>>>>
>>>>>>> We also made improvements in the XrdFileCache module. We need a day
>>>>>>> or two to finish up the changes and test them.
>>>>>>>
>>>>>>> Alja
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 03/20/14 13:33, Andrew Hanushevsky wrote:
>>>>>>>> Hi Matevz,
>>>>>>>>
>>>>>>>> Hmmmm, ok you need to give me specific details on what you want
>>>>>>>> done.
>>>>>>>>
>>>>>>>> Andy
>>>>>>>>
>>>>>>>> On Thu, 20 Mar 2014, Matevz Tadel wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> Alja and I would like to push in some changes for the XrdFileCache.
>>>>>>>>> We have our own workaround for destruction of XrdPosixFile /
>>>>>>>>> XrdOucCacheIO in a special thread ... Andy will probably want to
>>>>>>>>> implement this for us so he stays on top of things.
>>>>>>>>>
>>>>>>>>> We should be good to go early next week.
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>> Matevz
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 03/20/14 10:04, Lukasz Janyst wrote:
>>>>>>>>>> Hello everyone,
>>>>>>>>>>
>>>>>>>>>>     I have done all the coding that was on my list. I have two
>>>>>>>>>> items left:
>>>>>>>>>>
>>>>>>>>>> 1) packaging - in progress; I am consulting with Mattias to integrate
>>>>>>>>>> with EPEL. I have some draft specfiles, but will probably need two or
>>>>>>>>>> three more iterations.
>>>>>>>>>>
>>>>>>>>>> 2) the server is broken when compiled in C++11 mode. It didn't seem
>>>>>>>>>> to respond when I tested it three months ago; perhaps the issue is
>>>>>>>>>> fixed by now, but it needs testing. Andy, can you please check
>>>>>>>>>> whether this is still the case? You can use my testing cluster.
>>>>>>>>>>
>>>>>>>>>> Cheers,
>>>>>>>>>>     Lukasz

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the XROOTD-DEV list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-DEV&A=1