Hi Andy,

I vote for

bool initiateClose();

as this makes it clear that Close() will be called in a jiffy and that 
irreversible state changes might occur in the object. ioActive() sounds like 
something that can be called several times without changing the object.

But ... you're the master so please just pick what you think is consistent with 
the rest.

Matevz

On 03/24/14 12:54, Andrew Hanushevsky wrote:
> Hi Alja,
>
> Well, if that is the simplest way to do it then let's go ahead with that. I
> assume the I/O pending state is quite dynamic and can't easily be handled by
> some simple flag. So, let's go with something like:
>
> bool ioActive();
>
> which returns true to trigger a delayed close and false to allow the caller to
> do an inline close. If there is more than just active I/O that can place the
> object in a non-closeable state, then we should use a more generic name
> (e.g. isCloseable()). What do you think?
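>
> (A minimal sketch of that contract, purely for illustration; everything except
> ioActive()/Close() is a made-up name, not the actual xrootd code:)
>
>    #include <thread>
>
>    struct CacheIO {                   // stand-in for XrdOucCacheIO
>       virtual bool ioActive() = 0;    // true => I/O still pending on the object
>       virtual int  Close()    = 0;    // the real close
>       virtual ~CacheIO() {}
>    };
>
>    void DelayedClose(CacheIO *io)     // made-up helper: close on its own thread
>    {
>       std::thread([io]{io->Close(); delete io;}).detach();
>    }
>
>    void Retire(CacheIO *io)           // caller side, when the handle goes away
>    {
>       if (io->ioActive()) DelayedClose(io);   // must defer the real close
>          else {io->Close(); delete io;}       // nothing pending: close inline
>    }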
>
> Andy
>
> On Sun, 23 Mar 2014, Alja Mrak-Tadel wrote:
>
>> Hi Andy,
>>> I was wondering if there was a way to structure this so that a thread is run
>>> to destroy the file object only if a file cache is being used. I'd rather not
>>> add the thread overhead to things like fuse and a non-caching proxy server.
>>> As for reusing a worker thread, while a good idea, we really don't have
>>> access to them. Something to think about; I'll see how difficult it would be.
>>> In any case, even if I could provide access to the thread pool, there won't
>>> be one if the posix interface is being used outside of xrootd, so the code
>>> would have to have two paths either way.
>>>
>> How about adding a new virtual XrdOucCacheIO function that checks whether
>> destruction of XrdPosixFile needs to be delayed (e.g. InitiateClose(),
>> RequiresDelayedClose(), TryClose() ...)? The return value of this function
>> determines whether a new thread is needed to run the real Close(). In this
>> function I will prevent issuing any new read requests, so it must be
>> guaranteed that the real Close() is called immediately after that.
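>>
>> (A rough, self-contained sketch of that cache-side behaviour; the class and
>> member names are assumptions, only the overall shape -- block new reads,
>> report whether I/O is still pending -- comes from this thread:)
>>
>>    #include <mutex>
>>
>>    class CachedFile {                 // stand-in for the cache's XrdOucCacheIO
>>    public:
>>       CachedFile() : m_blockNewReads(false), m_ioActive(false) {}
>>
>>       bool InitiateClose()            // the proposed virtual hook
>>       {
>>          std::lock_guard<std::mutex> lk(m_mutex);
>>          m_blockNewReads = true;      // no new read requests after this point
>>          return m_ioActive;           // true => run the real Close() in a thread
>>       }
>>    private:
>>       std::mutex m_mutex;
>>       bool m_blockNewReads;
>>       bool m_ioActive;                // true while reads/prefetch are in flight
>>    };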
>>
>> Alja
>>
>>> On Fri, 21 Mar 2014, Alja Mrak-Tadel wrote:
>>>
>>>> Hi Andy,
>>>>
>>>> I had to move destruction of XrdPosixFile into a separate task to avoid a
>>>> long lock in XrdOfsHandle::Alloc() and XrdOfsHandle::Retire(). This is the
>>>> change that fixed the problem:
>>>>  https://github.com/alja/xrootd/compare/pd
>>>> You might want to reuse existing worker threads instead. One of the
>>>> XrdClFile::Close() calls is also removed.
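>>>>
>>>> (Illustrative only; the actual change is in the compare link above and the
>>>> names here are made up. The point is that the potentially blocking
>>>> destructor no longer runs on the code path that holds the handle lock:)
>>>>
>>>>    #include <mutex>
>>>>    #include <vector>
>>>>
>>>>    struct PosixFile {               // stand-in for XrdPosixFile
>>>>       ~PosixFile() { /* the real Close(); may block on the network */ }
>>>>    };
>>>>
>>>>    static std::mutex              gQMutex;
>>>>    static std::vector<PosixFile*> gDoomed;
>>>>
>>>>    void RetireHandle(PosixFile *fp) // called while the handle lock is held
>>>>    {
>>>>       std::lock_guard<std::mutex> lk(gQMutex);
>>>>       gDoomed.push_back(fp);        // cheap; no network I/O under the lock
>>>>    }
>>>>
>>>>    void ReaperTask()                // the separate task, run on its own thread
>>>>    {
>>>>       std::vector<PosixFile*> batch;
>>>>       {  std::lock_guard<std::mutex> lk(gQMutex);
>>>>          batch.swap(gDoomed);
>>>>       }
>>>>       for (size_t i = 0; i < batch.size(); ++i) delete batch[i];
>>>>    }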
>>>>
>>>> We also made improvements in the XrdFileCache module. We need a day or two
>>>> to finish up the changes and test them.
>>>>
>>>> Alja
>>>>
>>>>
>>>>
>>>> On 03/20/14 13:33, Andrew Hanushevsky wrote:
>>>>> Hi Matevz,
>>>>>
>>>>> Hmmmm, ok you need to give me specific details on what you want done.
>>>>>
>>>>> Andy
>>>>>
>>>>> On Thu, 20 Mar 2014, Matevz Tadel wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Alja and I would like to push some changes for the XrdFileCache. We have
>>>>>> our own workaround for destruction of XrdPosixFile / XrdOucCacheIO in a
>>>>>> special thread ... Andy will probably want to implement this for us so he
>>>>>> stays on top of things.
>>>>>>
>>>>>> We should be good to go early next week.
>>>>>>
>>>>>> Cheers,
>>>>>> Matevz
>>>>>>
>>>>>>
>>>>>> On 03/20/14 10:04, Lukasz Janyst wrote:
>>>>>>> Hello everyone,
>>>>>>>
>>>>>>>     I have done all the coding that was on my list. I have two items
>>>>>>> left:
>>>>>>>
>>>>>>> 1) packaging - in progress - consulting with Mattias to integrate with
>>>>>>> EPEL. I have some draft specfiles, but will probably need two or three
>>>>>>> more iterations.
>>>>>>>
>>>>>>> 2) the server is broken when compiled in C++11 mode. It didn't seem to
>>>>>>> respond when I tested it three months ago; perhaps the issue is fixed by
>>>>>>> now, but it needs testing. Andy, can you please check whether this is
>>>>>>> still the case? You can use my testing cluster.
>>>>>>>
>>>>>>> Cheers,
>>>>>>>     Lukasz
>>>>>>>

########################################################################
Use REPLY-ALL to reply to list

To unsubscribe from the XROOTD-DEV list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=XROOTD-DEV&A=1