Is it supposed to be finding conditions for this particular run (1349)? Here's the output I get at the beginning.
No input files provided by XML or command line. Dry run will be enabled.
Got ConditionsEvent with run: 0
Reading calibrations calibSVT/base for run: 0
Use this calibration from run -1: calibSVT/default.base
Reading calibrations calibSVT/tp for run: 0
Use this calibration from run -1: calibSVT/default.tp
Loading the SVT bad channels for run 0
File daqmap/svt0.badchannels was not found! Continuing with only QA bad channels
Loading SVT gains ...
Loading fieldmap for run 0
reading ECal DAQ map
Opening file /nfs/slac/g/hps/mgraham/DarkPhoton/testrun_data/hps_001349.evio.0
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Map failed
at org.lcsim.hps.evio.TestRunEvioToLcio.main(TestRunEvioToLcio.java:188)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
at org.jlab.coda.jevio.EvioReader.mapFile(EvioReader.java:323)
at org.jlab.coda.jevio.EvioReader.<init>(EvioReader.java:178)
at org.lcsim.hps.evio.TestRunEvioToLcio.main(TestRunEvioToLcio.java:186)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
... 3 more
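For reference, this "Map failed" OutOfMemoryError comes out of FileChannel.map(); the usual workaround (what the Apache bug linked below discusses, and what later JDKs do internally) is to force a GC, so that stale MappedByteBuffers get unmapped, and then retry the map once. If jevio's EvioReader.mapFile were to be patched, a retry helper might look roughly like this (hypothetical sketch, not code from jevio or hps-java):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical helper: retry a file mapping once after forcing a GC, so
// previously unreferenced MappedByteBuffers can be reclaimed first.
public class MapRetry {
    public static MappedByteBuffer mapWithRetry(FileChannel channel, long pos, long size)
            throws IOException {
        try {
            return channel.map(FileChannel.MapMode.READ_ONLY, pos, size);
        } catch (IOException e) {
            // "Map failed" wraps an OutOfMemoryError from the native map0();
            // give the GC a chance to release stale mapped buffers, then retry.
            System.gc();
            try {
                Thread.sleep(100);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            return channel.map(FileChannel.MapMode.READ_ONLY, pos, size);
        }
    }
}
```

Note this only helps if old mappings are actually unreferenced; it won't fix the case where the file is simply too large to map in the remaining address space.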
On Nov 9, 2012, at 6:59 PM, Sho Uemura <[log in to unmask]> wrote:
> No. That command line is correct (-jar if running the usual lcsim main(),
> -cp if you're running a different class, e.g. TestRunEvioToLcio).
>
> I haven't been able to reproduce this error. I'd try a different machine?
>
> The error seems to be coming up when the EVIO file is loaded. Does the
> same error show up if you try to run on a smaller EVIO file?
>
>
> Possibly relevant:
>
> https://issues.apache.org/bugzilla/show_bug.cgi?id=49326
>
> But if this is actually the problem and it can't be resolved by blowing up
> -Xmx, I think the fix needs to be in JEVIO. The above link mentions some
> workarounds which I can try on Monday, assuming I'm able to reproduce this
> bug.
>
> What version of the JVM are you running? (java -version)
>
> On Fri, 9 Nov 2012, Homer wrote:
>
>> Hi Matt,
>>
>> Your message shows that your command was:
>>> java -Xmx2048m -cp hps-java/target/hps-java-1.2-SNAPSHOT-bin.jar
>> org.lcsim.hps.evio.TestRunEvioToLcio -x steering/TestRunOfflineRecon.lcsim
>> /nfs/slac/g/hps/mgraham/DarkPhoton/testrun_data/hps_001349.evio.0
>> -DoutputFile=recon.slcio -d HPS-TestRun-v3
>>
>> Shouldn't "-cp" be "-jar"?
>>
>> Cheers,
>> Homer
>>
>>
>>
>> On Fri, 9 Nov 2012, Graham, Mathew Thomas wrote:
>>
>>>
>>> This actually happens right away; I tried upping -Xmx to no avail.
>>>
>>> On Nov 9, 2012, at 5:11 PM, Homer <[log in to unmask]> wrote:
>>>
>>>> Hi Matt,
>>>>
>>>> Is this reproducible on different machines?
>>>> Have you tried -Xmx3192m? I'm having to push
>>>> it this far for some JAS3 SiD work. When I was
>>>> running the conversion at JLAB, the jobs would
>>>> run for several tens of thousands of events before
>>>> dying on the mentioned error. Using -Xmx2048m
>>>> resolved that problem but there is clearly something
>>>> that slowly consumes memory. This is somewhat normal
>>>> and it may not be a leak. Recent code changes may
>>>> have slightly increased the memory consumption rate.
>>>> How many events do you succeed in processing before
>>>> the crash?
>>>>
>>>> Cheers,
>>>> Homer
>>>>
>>>>
>>>> On Fri, 9 Nov 2012, McCormick, Jeremy I. wrote:
>>>>
>>>>> Hi, Matt.
>>>>>
>>>>> Do you know how to look at memory usage in Java?
>>>>>
>>>>> You might want to debug with something like this:
>>>>>
>>>>> ----
>>>>>
>>>>> Runtime runtime = Runtime.getRuntime();
>>>>>
>>>>> // java.text.NumberFormat (needs an import) just adds thousands separators
>>>>> NumberFormat format = NumberFormat.getInstance();
>>>>>
>>>>> long maxMemory = runtime.maxMemory();         // the -Xmx ceiling
>>>>> long allocatedMemory = runtime.totalMemory(); // heap reserved so far
>>>>> long freeMemory = runtime.freeMemory();       // unused part of the reserved heap
>>>>>
>>>>> StringBuilder sb = new StringBuilder();
>>>>> sb.append("free memory: " + format.format(freeMemory / 1024) + " kB\n");
>>>>> sb.append("allocated memory: " + format.format(allocatedMemory / 1024) + " kB\n");
>>>>> sb.append("max memory: " + format.format(maxMemory / 1024) + " kB\n");
>>>>> sb.append("total free memory: " + format.format((freeMemory + (maxMemory - allocatedMemory)) / 1024) + " kB\n");
>>>>> System.out.print(sb);
>>>>>
>>>>> ----
>>>>>
>>>>> I took the above idea from here:
>>>>>
>>>>> http://stackoverflow.com/questions/74674/how-to-do-i-check-cpu-and-memory-usage-in-java
>>>>>
>>>>> You could make that a static method and call it at various points in
>>>>> TestRunEvioToLcio or within one of your drivers in the .lcsim file you're
>>>>> using.
>>>>>
>>>>> If memory increases steadily with every event, that indicates a
>>>>> systematic memory leak. If it looks fine and then explodes on one event,
>>>>> that points to a problem with the data of that event. For instance, a
>>>>> bogus (very large) value could end up as an array size when the size of
>>>>> something is read from a corrupted data block, which is a common cause of
>>>>> these kinds of problems.
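>>>>>
>>>>> As a sketch of the "static method" idea above (the class and method
>>>>> names here are my own, not anything in hps-java), something like this
>>>>> could be called once per event from a driver:

```java
import java.text.NumberFormat;

// Hypothetical utility wrapping the snippet above as a reusable static
// method; call printMemory("event " + n) at various points to watch usage.
public class MemoryDebug {
    public static String memoryReport() {
        Runtime runtime = Runtime.getRuntime();
        NumberFormat format = NumberFormat.getInstance();
        long maxMemory = runtime.maxMemory();         // -Xmx ceiling
        long allocatedMemory = runtime.totalMemory(); // heap reserved so far
        long freeMemory = runtime.freeMemory();       // unused reserved heap
        StringBuilder sb = new StringBuilder();
        sb.append("free memory: ").append(format.format(freeMemory / 1024)).append(" kB, ");
        sb.append("allocated memory: ").append(format.format(allocatedMemory / 1024)).append(" kB, ");
        sb.append("max memory: ").append(format.format(maxMemory / 1024)).append(" kB, ");
        sb.append("total free memory: ")
          .append(format.format((freeMemory + (maxMemory - allocatedMemory)) / 1024))
          .append(" kB");
        return sb.toString();
    }

    public static void printMemory(String label) {
        System.out.println(label + ": " + memoryReport());
    }
}
```

>>>>> A steady climb in allocated memory with shrinking total free memory
>>>>> across events suggests a leak; a sudden jump on one event points at
>>>>> that event's data.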
>>>>>
>>>>> --Jeremy
>>>>>
>>>>> -----Original Message-----
>>>>> From: Graham, Mathew Thomas
>>>>> Sent: Friday, November 09, 2012 12:38 PM
>>>>> To: Uemura, Sho; Omar Moreno; Per Hansson
>>>>> Cc: McCormick, Jeremy I.
>>>>> Subject: OutOfMemory error in jevio?
>>>>>
>>>>> I'm trying to run over data (from a pretty pristine hps-java build) like
>>>>> this:
>>>>> java -Xmx2048m -cp hps-java/target/hps-java-1.2-SNAPSHOT-bin.jar
>>>>> org.lcsim.hps.evio.TestRunEvioToLcio -x
>>>>> steering/TestRunOfflineRecon.lcsim
>>>>> /nfs/slac/g/hps/mgraham/DarkPhoton/testrun_data/hps_001349.evio.0
>>>>> -DoutputFile=recon.slcio -d HPS-TestRun-v3
>>>>>
>>>>>
>>>>> and I get this...
>>>>>
>>>>> Exception in thread "main" java.lang.RuntimeException:
>>>>> java.io.IOException: Map failed
>>>>> at
>>>>> org.lcsim.hps.evio.TestRunEvioToLcio.main(TestRunEvioToLcio.java:188)
>>>>> Caused by: java.io.IOException: Map failed
>>>>> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
>>>>> at org.jlab.coda.jevio.EvioReader.mapFile(EvioReader.java:323)
>>>>> at org.jlab.coda.jevio.EvioReader.<init>(EvioReader.java:178)
>>>>> at
>>>>> org.lcsim.hps.evio.TestRunEvioToLcio.main(TestRunEvioToLcio.java:186)
>>>>> Caused by: java.lang.OutOfMemoryError: Map failed
>>>>> at sun.nio.ch.FileChannelImpl.map0(Native Method)
>>>>> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
>>>>> ... 3 more
>>>>>
>>>>>
>>>>> ...
>>>>>
>>>>> ########################################################################
>>>>> Use REPLY-ALL to reply to list
>>>>>
>>>>> To unsubscribe from the HPS-SOFTWARE list, click the following link:
>>>>> https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1
>>>>>
>>>
>>>
>>