Hi xrootd list,

I'm running xrootd 3.0.3-pre8 from http://newman.ultralight.org/repos/xrootd/x86_64/ at MWT2.

I've recently begun setting up xrootd on our Tier3 cluster, and I've noticed an interesting problem.  Each file transferred into the xrootd data servers leaves behind an open file descriptor held by the xrootd process.  This means that even with a ulimit of 32768, we eventually run out of available file handles for the process.  It appears that every file handle ever opened for a copy stays open even after the xrdcp transferring the data has finished and exited successfully.  Running lsof on the process confirms that xrootd is in fact holding that many files open.  This happens on both of the data servers I'm running, although it doesn't seem to be an issue on my redirector (which isn't running a Server-Side Inventory, so it might still happen there if I were running in that mode).
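In case anyone wants to reproduce the check, here's roughly what I'm doing to count the descriptors.  This is just a sketch using /proc; the helper name is my own, and on a data server you'd substitute the actual xrootd PID (e.g. from pgrep) for the demo PID used here:

```shell
# Count the open file descriptors held by a given PID via /proc.
# Demonstrated on the current shell's own PID ($$); on a data server,
# substitute the xrootd PID, e.g. "$(pgrep -o xrootd)".
count_fds() {
    ls "/proc/$1/fd" | wc -l
}

count_fds "$$"
```

Watching that number climb by one per transferred file (and comparing it against `ulimit -n`) is how the leak shows up here.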


My solution so far is simply to restart the xrootd process, which resets the clock on the problem.  However, that's clearly not an ideal solution for production.
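For reference, the stopgap amounts to a scheduled restart via cron, something along these lines (the init-script path is an assumption from the RPM install; adjust for your setup):

```shell
# Hypothetical crontab entry: restart xrootd nightly at 04:00,
# before the accumulated FDs reach the ulimit.
0 4 * * * /sbin/service xrootd restart
```

Obviously this drops any active clients at restart time, which is part of why I'd like a real fix.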

I'm curious whether anyone else has experienced this, and whether there's a good way to avoid it.

Cheers,

-Aaron van Meerten
MidWest Tier2