Hi,

While crawling our Eng Run 2015 data, I noticed that there are a few corrupted files (or perhaps just single corrupted EVIO events?) that cannot be read back.

e.g.

2016-02-05 06:15:11 [INFO] org.hps.record.evio.EvioFileUtilities open :: opened /cache/mss/hallb/hps/data/hps_005381.evio.65 in 0.077 seconds in sequential
java.lang.NegativeArraySizeException
        at org.jlab.coda.jevio.EventParser.parseStructure(EventParser.java:126)
        at org.jlab.coda.jevio.EventParser.parseEvent(EventParser.java:62)
        at org.jlab.coda.jevio.EvioReader.parseEvent(EvioReader.java:1449)
        at org.jlab.coda.jevio.EvioReader.parseNextEvent(EvioReader.java:1430)
        at org.hps.record.evio.EvioFileSource.next(EvioFileSource.java:138)
        at org.freehep.record.loop.DefaultRecordLoop.fetchRecord(DefaultRecordLoop.java:809)
        at org.freehep.record.loop.DefaultRecordLoop.loop(DefaultRecordLoop.java:648)
        at org.freehep.record.loop.DefaultRecordLoop.execute(DefaultRecordLoop.java:566)
        at org.hps.record.AbstractRecordLoop.loop(AbstractRecordLoop.java:29)
        at org.hps.run.database.RunDatabaseBuilder.processEvioFiles(RunDatabaseBuilder.java:386)

This issue is preventing me from fully crawling the data for the affected runs, though right now it is only happening with a few files from runs 5381 and 5541.
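
For reference, here is a minimal standalone check I can run against one of the bad files to see how far parsing gets before it dies. This is plain jevio, outside of our EvioFileSource/record-loop code, and the class name is just for illustration:

    import org.jlab.coda.jevio.EvioEvent;
    import org.jlab.coda.jevio.EvioReader;

    public class FindBadEvioEvent {
        public static void main(String[] args) throws Exception {
            // One of the affected files; any of the bad ones would do.
            EvioReader reader = new EvioReader("/cache/mss/hallb/hps/data/hps_005381.evio.65");
            int parsed = 0;
            try {
                EvioEvent event;
                while ((event = reader.parseNextEvent()) != null) {
                    parsed++;
                }
                System.out.println("Parsed all " + parsed + " events cleanly.");
            } catch (Exception e) {
                // NegativeArraySizeException is unchecked, so catch broadly here.
                System.err.println("Parse failed after " + parsed + " good events: " + e);
            } finally {
                reader.close();
            }
        }
    }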

Does anyone know what the cause of this might be?

Should I just insert catch blocks into the relevant code to trap these errors and then try to continue to the next event? Or does this issue make the rest of the EVIO file unreadable?
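
To make that concrete, something along these lines is what I have in mind, sketched against the plain jevio EvioReader rather than the actual EvioFileSource code. The nextEvent()/parseEvent() split is my reading of the jevio API (read the raw event, then parse its structure), so correct me if that is wrong:

    import org.jlab.coda.jevio.EvioEvent;
    import org.jlab.coda.jevio.EvioReader;

    public class SkipBadEvents {
        // Sketch only: the real change would live in EvioFileSource.next().
        public static void readSkippingBadEvents(String path) throws Exception {
            EvioReader reader = new EvioReader(path);
            try {
                EvioEvent event;
                int index = 0;
                while ((event = reader.nextEvent()) != null) {      // read the next raw event
                    index++;
                    try {
                        reader.parseEvent(event);                   // the call that is throwing
                    } catch (Exception e) {
                        System.err.println("Skipping unparseable event " + index + ": " + e);
                        continue;                                   // hope later events are still readable
                    }
                    // ... hand the parsed event to the normal record loop here ...
                }
            } finally {
                reader.close();
            }
        }
    }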

Thanks.

--Jeremy
