VUB-RECOIL Archives
VUB-RECOIL@LISTSERV.SLAC.STANFORD.EDU


VUB-RECOIL, October 2002
Subject: Re: problems on AWG18
From: Alessio Sarti <[log in to unmask]>
Date: Thu, 31 Oct 2002 10:58:11 -0800 (PST)
Content-Type: TEXT/PLAIN
Parts/Attachments: TEXT/PLAIN (140 lines)

It seems that Yury's suggestion is going in that direction: having a few
skimmed root files...
We (Urs and I, at least) are using them without any problem.
(See for example:
/nfs/farm/babar/AWG12/ISL/sx-071802/skim_DA)
I'm going to start the production now.
Then I'm leaving...

I hope that Urs can handle all the remaining issues (thanks, Urs).
Alessio

______________________________________________________
Alessio Sarti     Universita' & I.N.F.N. Ferrara
 tel  +39-0532-781928  Ferrara
roma  +39-06-49914338
SLAC +001-650-926-2972

"... e a un Dio 'fatti il culo' non credere mai..."
(F. De Andre')

"He was turning over in his mind an intresting new concept in
Thau-dimensional physics which unified time, space, magnetism, gravity
and, for some reason, broccoli".  (T. Pratchett: "Pyramids")

On Thu, 31 Oct 2002, Yury G. Kolomensky wrote:

> 	Hi Daniele,
>
> did you really run 300 root jobs over the same partition ? This is
> nuts, IMHO, unless your jobs are completely CPU-limited. Looking at
> your logfile, your job ran for 12 mins, using 92 sec CPU time. In
> other words, CPU utilization was ~13%. This is not good. I see that
> you ran on a bronco -- so NFS client problems were probably
> milder. You probably had to really push the server hard to start
> seeing these errors.
>
> It is clear that the way you run your jobs is not optimal. You run
> many jobs, all in long queue, and each uses a couple of minutes of
> CPU time, while being I/O-limited. You would be much better served
> (more efficient, too) by running a few jobs in parallel, each using a
> few hours of CPU time. You can get that by either chaining the files,
> or writing smarter macros.
>
> I do not know what the difference between sulky09 (AWG8 disk) and
> sulky25 (AWG18) is (Fabrizio could find out). They use somewhat
> different diskpacks, I guess -- one partition is 500 GB, and another
> is 600 GB, so it is not inconceivable that the AWG18 disk is slower. Such
> disks are usually better optimized for direct access though, so again,
> you will see much better throughput if you convert your trees to
> non-split mode with one (or a few) branches, and run fewer parallel
> jobs.
>
> 								Yury
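
A minimal sketch of the file chaining Yury suggests above, written as a ROOT
macro; the tree name "ntp1" and the wildcard pattern are assumptions for
illustration, not taken from the thread:

    // chain_files.C -- run one longer job over many small files
    // instead of one short, I/O-bound job per file.
    void chain_files() {
       TChain chain("ntp1");   // hypothetical tree name
       // Illustrative wildcard; point it at the actual input files.
       chain.Add("/nfs/farm/babar/AWG18/ISL/sx-080702/data/2000/"
                 "output/outputdir/AlleEvents_2000_on-*.root");
       Long64_t nentries = chain.GetEntries();
       for (Long64_t i = 0; i < nentries; ++i) {
          chain.GetEntry(i);
          // ... apply cuts and fill histograms here ...
       }
    }

Merging many small files into a few larger ones (for instance with ROOT's
hadd utility) reduces the job count in a similar way.
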
>
> On Thu, Oct 31, 2002 at 10:23:36AM -0800, Daniele del Re wrote:
> >
> > Hi Yury,
> >
> >  one example is
> >
> >  ~daniele/scra/newchains_1030/data-2
> >
> >  and the typical message is
> >
> >  Error in <TFile::TFile>: file /nfs/farm/babar/AWG18/ISL/sx-080702/data/2000/output/outputdir/AlleEvents_2000_on-1095.root does not exist
> >
> >  on AWG8 this pathology happened just a few times when there were >~300
> > jobs reading the same disk, if I remember correctly.
> >
> >  Do you know what the difference between AWG8 and AWG18 is?
> >
> >  My proposal is to split things across different disks, if possible.
> >
> >  Thanks a lot,
> >
> >  Daniele
> >
> > On Thu, 31 Oct 2002, Yury G. Kolomensky wrote:
> >
> > > 	Hi Daniele,
> > >
> > > do you have an example of a log file for these jobs ? I do not know
> > > exactly what servers these disks have been installed on, but we
> > > noticed in E158, where most of the data were sitting on one
> > > (relatively slow) server, that jobs were limited by I/O throughput to about
> > > 2 MB/sec. This limit comes from the random access pattern that split
> > > ROOT trees provide. If your job is sufficiently fast, you can saturate
> > > the I/O limit quite quickly -- with 2-3 jobs. If you submit too many jobs
> > > (tens or even hundreds), the server will thrash to the point that the
> > > clients will receive NFS timeouts. ROOT usually does not like that --
> > > you may see error messages in the log file about files not found (when
> > > the files are actually on disk), or about problems uncompressing
> > > branches. These are usually more severe on Linux clients, where the
> > > NFS client implementation is not very robust.
> > >
> > > There are several ways to cope with this problem:
> > >
> > > 1) Submit fewer jobs at one time. I would not submit more than 10
> > >    I/O-limited jobs in parallel.
> > > 2) Place your data on different servers. That means different sulky
> > >    servers are best. Even if you are on the same sulky server but split
> > >    your data onto different partitions, you still get the benefit of
> > >    parallelizing disk access.
> > > 3) Re-write your jobs to first copy your data onto a local disk on the
> > >    batch worker (for instance, /tmp), then run on the local copy, then
> > >    delete the local copy. The benefit of that is that the cp command
> > >    will access the file in direct-access mode (with 10-20 MB/sec
> > >    throughput, depending on the network interface throughput).
> > > 4) Make your ntuples non-split (very highly recommended). This usually
> > >    increases the throughput by a factor of 10-20. If your typical job
> > >    reads most of the branches of the tree, splitting the tree makes no
> > >    sense. Non-split trees provide direct access to disk, which is much
> > >    more efficient.
> > >
> > > 							Yury
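
A minimal sketch of the local-copy approach in item 3 of Yury's list above,
as a ROOT macro; the input file name is the one from the error message
earlier in the thread, and the tree name "ntp1" is a placeholder:

    // local_copy.C -- copy one input file to the batch node's local disk,
    // run over the local copy, then delete it.
    void local_copy() {
       const char* remote =
          "/nfs/farm/babar/AWG18/ISL/sx-080702/data/2000/"
          "output/outputdir/AlleEvents_2000_on-1095.root";
       const char* local = "/tmp/AlleEvents_2000_on-1095.root";
       // A plain sequential copy reads the file in direct-access mode.
       if (gSystem->CopyFile(remote, local, kTRUE) == 0) {
          TFile f(local);
          TTree* tree = (TTree*)f.Get("ntp1");   // hypothetical tree name
          // ... analysis loop over tree ...
          f.Close();
       }
       gSystem->Unlink(local);   // always remove the local copy
    }
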
> > >
> > >
> > > On Thu, Oct 31, 2002 at 09:26:08AM -0800, Daniele del Re wrote:
> > > >
> > > > Hi all,
> > > >
> > > >  in the last two days I tried to run on data and MC on the new disk AWG18.
> > > > No way. I got problems in 80% of the jobs. Some crashed, and most of
> > > > them did not read a large number of root files (which are actually there).
> > > >
> > > >  This problem seems to be worse than ever. Do we have to contact
> > > > computing people about this?
> > > >
> > > >  Daniele
> > > >
> > > >
> > >
> > >
> >
> >
>
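
A minimal sketch of writing a non-split tree, as recommended in item 4 of
Yury's list and in his later reply; the MyEvent class, the tree name, and
the output file name are assumptions standing in for the analysis's real,
dictionary-known event class and names:

    // write_nonsplit.C -- book the event branch with splitlevel = 0 so the
    // whole object goes into a single branch instead of one branch per
    // data member, turning many small random reads into sequential ones.
    class MyEvent {
    public:
       Double_t mES;
       Double_t deltaE;
    };

    void write_nonsplit() {
       TFile out("skim_nonsplit.root", "RECREATE");   // hypothetical output file
       TTree tree("ntp1", "non-split skim");          // hypothetical tree name
       MyEvent* event = new MyEvent();
       tree.Branch("event", &event, 32000, 0);        // last argument: splitlevel = 0
       // ... fill loop: set the event contents, then call tree.Fill() ...
       tree.Write();
       out.Close();
    }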


