Oh yes, it could be the case that the oss.alloc directive, if specified,
reserves too much space. The default is zero, so if it is not specified, then
that would not be the cause of the problem. However, the client might open the
file with the CGI element "oss.alloc", and that value may be too large.
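
For concreteness, the two sides look roughly like this (the size, host, and
path below are made-up placeholders, and the exact argument form of the
directive is version-dependent, so check the OSS reference for your release):

   # Server side: oss.alloc controls how much space is reserved when a
   # cache file is created; the default of 0 means no extra reservation.
   oss.alloc 0

   # Client side: the open can carry the hint as opaque CGI on the URL,
   # which the server may use to reserve space up front:
   xrdcp localfile "root://cache.example.org:1094//store/user/file.root?oss.alloc=10g"

If the hinted size is much larger than the space actually free on the cache
partition, the open can fail even though the file itself would have fit.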

Andy

On Wed, 20 Jan 2021, Matevz Tadel wrote:

> On 1/20/21 12:47 PM, Andrew Hanushevsky wrote:
>>
>>
>> On Wed, 20 Jan 2021, Matevz Tadel wrote:
>>
>>> Andy, what would be the simplest way to reproduce this? Running a standalone
>>> mini-server with localroot on an SSD partition and then writing into it with
>>> xrdcp? Are there some tracing options that would help?
>> That's likely the best way but it won't recreate the original environment and
>> that may be the problem. Imagine that the oldest file accounted for 30% of the
>> cache space. Well, purge will still purge that file and fall far below the low
>> water mark. Then it will just sit there until the space gets used up. Sounds
>> like a rational explanation?
>
> No, see my first reply. Opening a file and purge are completely separate. So the
> open error is a real error from the FS.
>
> Sam says all partitions are at 70%, so this makes no sense at all ... open
> shouldn't even be bothered by the lack of space. Oh, unless there is something
> strange going on with directory entries and the maximum number is reached
> there, somehow.
>
> Sam, what FS are you using? Do you do some parameter tuning?
>
> For what VO is this? Do they have a flat namespace like ATLAS, i.e., everything
> gets cached into the same directory?
>
> In your test, did you try writing into the cache directory itself or somewhere
> else / top-level?
>
> Matevz
>
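
As a concrete starting point for the standalone reproduction Matevz asks about
above, a minimal caching-proxy setup might look like this (directive names
follow the XRootD 5 proxy file cache docs; the host, paths, and watermark
values are placeholder assumptions):

   # xcache-test.cfg -- minimal standalone cache sketch
   all.export /store
   ofs.osslib    libXrdPss.so
   pss.cachelib  libXrdPfc.so          # libXrdFileCache.so on XRootD 4
   pss.origin    origin.example.org:1094
   oss.localroot /ssd/xcache           # cache files land on the SSD partition
   pfc.diskusage 0.90 0.95             # purge low / high watermarks
   pfc.trace     debug                 # verbose cache-side tracing

   # Run it and pull a file through to populate the cache:
   xrootd -c xcache-test.cfg -l /tmp/xcache.log &
   xrdcp root://localhost:1094//store/user/file.root /dev/null

Reading through the proxy writes the data under oss.localroot, so any
open/write errors on the cache partition should show up in the log with
pfc.trace debug enabled.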
