VUB-RECOIL Archives

VUB-RECOIL@LISTSERV.SLAC.STANFORD.EDU
VUB-RECOIL October 2002

Subject: Re: problems on AWG18
From: "Yury G. Kolomensky" <[log in to unmask]>
Reply-To: [log in to unmask]
Date: Thu, 31 Oct 2002 10:07:27 -0800
Content-Type: text/plain
Parts/Attachments: text/plain (56 lines)

	Hi Daniele,

do you have an example of a log file for these jobs? I do not know
exactly what servers these disks have been installed on, but we
noticed in E158, where most of the data were sitting on one
(relatively slow) server, that jobs were limited by I/O throughput to
about 2 MB/sec. This limit comes from the random access pattern that
split ROOT trees produce. If your job is sufficiently fast, you can
saturate the I/O limit quite quickly -- with 2-3 jobs. If you submit
too many jobs (tens or even hundreds), the server will thrash to the
point that the clients receive NFS timeouts. ROOT usually does not
like that -- you may see error messages in the log file about files
not being found (when the files are actually on disk), or about
problems uncompressing branches. These are usually more severe on
Linux clients, where the NFS client implementation is not very
robust.

There are several ways to cope with this problem:

1) Submit fewer jobs at one time. I would not submit more than 10
   I/O-limited jobs in parallel. 
2) Place your data on different servers. Ideally that means different
   sulky servers. But even if you stay on the same sulky server and
   only split your data across different partitions, you still get
   some benefit from parallelizing the disk access.
3) Re-write your jobs to first copy the data onto a local disk on the
   batch worker (for instance, /tmp), then run on the local copy, then
   delete the local copy (see the first sketch after this list). The
   benefit is that the cp command accesses the file in direct-access
   mode (with 10-20 MB/sec throughput, depending on the network
   interface).
4) Make your ntuples non-split (very highly recommended); see the
   second sketch below. This usually increases the throughput by a
   factor of 10-20. If your typical job reads most of the branches of
   the tree, making the tree split makes no sense. Non-split trees
   provide direct access to disk, which is much more efficient.
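
For item 3, a minimal sketch of the copy-run-delete pattern as a ROOT
macro (the NFS path, scratch path and tree name below are placeholders,
not the actual AWG18 layout):

    // Hypothetical sketch: stage the ntuple to local /tmp, analyze the
    // local copy, then clean up.  All file and tree names are placeholders.
    #include "TFile.h"
    #include "TTree.h"
    #include "TSystem.h"
    #include "TError.h"

    void runLocal()
    {
       const char *remote = "/nfs/awg18/ntuples/myntuple.root"; // NFS copy (placeholder)
       const char *local  = "/tmp/myntuple.root";               // local scratch copy

       // A plain copy reads the file sequentially, which is much easier
       // on the server than the random access pattern of a split tree.
       if (gSystem->CopyFile(remote, local, kTRUE) != 0) {
          Error("runLocal", "failed to copy %s", remote);
          return;
       }

       TFile *f = TFile::Open(local);
       TTree *t = (TTree *) f->Get("ntp1");   // tree name is a placeholder

       // ... run the usual analysis loop over t here ...

       f->Close();
       gSystem->Unlink(local);                // remove the local copy
    }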
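
For item 4, a sketch of booking a branch with split level 0 when the
ntuple is written; the TLorentzVector here just stands in for whatever
class you actually store, and the last argument of TTree::Branch is the
split level:

    // Hypothetical sketch: write a non-split branch (split level = 0) so the
    // whole object is streamed as one buffer instead of one buffer per member.
    #include "TFile.h"
    #include "TTree.h"
    #include "TLorentzVector.h"

    void writeNonSplit()
    {
       TFile f("myntuple.root", "RECREATE");
       TTree t("ntp1", "non-split ntuple");         // names are placeholders

       TLorentzVector *p4 = new TLorentzVector();   // stand-in for your event class
       t.Branch("p4", "TLorentzVector", &p4, 64000, 0);   // split level = 0

       for (int i = 0; i < 1000; ++i) {
          p4->SetPxPyPzE(0.1 * i, 0., 0., 0.1 * i); // dummy content
          t.Fill();
       }

       t.Write();
       f.Close();
    }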

							Yury


On Thu, Oct 31, 2002 at 09:26:08AM -0800, Daniele del Re wrote:
> 
> Hi all,
> 
>  in the last two days I tried to run on data and MC on the new disk AWG18.
> No way. I got problems in about 80% of the jobs. Some of them crashed, and
> most of them did not read a large number of root files (which are actually there).
> 
>  This problem seems to be worse than ever. Do we have to contact
> computing people about this?
> 
>  Daniele
> 
> 


