Hi Brian,

There is a packet sequence number you are to use to order packets that
come in out of order. It's rather tricky because it's only a single byte,
but that is good enough.

Andy

On Mon, 10 Oct 2011, Brian Bockelman wrote:
>
> On Oct 10, 2011, at 1:10 PM, Andrew Hanushevsky wrote:
>
>> OK, so here is what I am planning on doing for readv:
>>
>> 1) If you monitor IO, then each readv request will have a special readv
>> entry (as opposed to a standard read entry) that will include:
>>    a) the number of readv segments,
>>    b) the number of bytes read,
>>    c) the dictid of the file being read, and
>>    d) the readv distinguisher used to group a single readv request that
>>       produces multiple entries because it reads from multiple files in
>>       the vector together (i.e., you know it's from the same readv).
>> Note that no offset is supplied. If that is needed then?.
>>
>
> This is fine up to here.
>
>> 2) If you monitor IOV then, additionally, the actual vector is unrolled
>> and you get a read entry for each readv entry. The format is the same as
>> a regular read. However, you can tell it's a readv because a readv entry
>> will precede it (and tell you how many read entries will follow). Read
>> entries contain offsets.
>>
>
> I forget - is there an ordering of the monitoring packets? I would expect
> a non-trivial chance that UDP packets will get reordered.
>
> Brian
>
>> Will this satisfy all the requirements?
>>
>> Andy
>>
>> On Sun, 9 Oct 2011, Matevz Tadel wrote:
>>
>>> Hi Andy,
>>>
>>> On 10/08/11 11:21, Andrew Hanushevsky wrote:
>>>> On Sat, 8 Oct 2011, Brian Bockelman wrote:
>>>>> On Oct 7, 2011, at 3:37 PM, Andrew Hanushevsky wrote:
>>>>>> What does that mean? You want a single entry? That's not always
>>>>>> possible since readv allows you to read from multiple files using a
>>>>>> single vector.
>>>>> Interesting! Is there an example use of this interface?
>>>> No example uses yet, but it was put in in anticipation of some very
>>>> clever person capitalizing on this feature.
>>>>> Well there's a middle-ground use case here: being able to monitor
>>>>> activity for each open connection.
>>>>> In our experience, without the very detailed I/O monitoring, we:
>>>>> 1) Don't get any monitoring for a client that crashes (disconnects
>>>>> without a close).
>>>> That information can be put in the summary record, if need be. I say
>>>> "if need be" because it's a relatively rare event (yes, it does happen
>>>> in spurts).
>>>
>>> Sorry ... what summary record? When a client program crashes, all I get
>>> in the monitoring stream is a session disconnect trace. Then I loop over
>>> all files associated with this session and "close" them manually.
>>>
>>> There could be a separate "close on disconnect" trace type that is sent
>>> in this case and includes all the information usually associated with
>>> close.
>>>
>>>>> 2) Don't get monitoring while a client is running. Example: it's been
>>>>> 5 hours since a job started; is this because it is getting 1 byte per
>>>>> second, or because the job takes 5 hours and 1 minute?
>>>> True, there is no other way of capturing this information. Another case
>>>> where some more client input would make things more efficient.
>>>
>>> What do you mean? That the client would also send monitoring
>>> information, either directly to the monitoring host or via the server?
>>>
>>>>> So, we find it extremely useful without doing the data access patterns
>>>>> use case. Either way we get the information - unrolling the vector to
>>>>> include all the data, or getting a summary - we'll be happy.
>>>> OK, I will take this into consideration when coming up with a fix.
>>>
>>> I'd still vote for a single trace entry for a whole vector read. And
>>> then have a new option for a full vector read unroll, as it really
>>> pushes monitoring overhead to a new level. Now even I have enough ;)
>>>
>>> Cheers,
>>> Matevz
>>>
>
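[Editor's note: Andy's point at the top of the thread - that a single-byte
sequence number is enough to reorder UDP monitoring packets - relies on
wraparound (serial-number) comparison, which stays correct as long as fewer
than 128 sequence numbers are ever in flight at once. A minimal sketch of
that comparison in Python; the helper names and the (seq, payload) tuple
shape are illustrative assumptions, not the actual monitoring packet
format.]

```python
import functools

def seq_less(a, b):
    """True if 8-bit sequence number a precedes b, modulo 256.
    Valid while the packets in flight span fewer than 128 numbers."""
    d = (b - a) & 0xFF
    return 0 < d < 128

def reorder(packets):
    """Sort (seq, payload) pairs into send order by wrapping sequence number."""
    def cmp(p, q):
        if p[0] == q[0]:
            return 0
        return -1 if seq_less(p[0], q[0]) else 1
    return sorted(packets, key=functools.cmp_to_key(cmp))

# Packets 254, 255, 0, 1 arrive shuffled; wraparound comparison still
# recovers the send order across the 255 -> 0 boundary.
pkts = [(0, "c"), (254, "a"), (1, "d"), (255, "b")]
print([s for s, _ in reorder(pkts)])  # -> [254, 255, 0, 1]
```

A plain numeric sort would misplace packet 0 after the counter wraps; the
modulo-256 difference test is what makes one byte "good enough" here.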