Right now they are going to: /volatile/hallb/hps/data/lcio/ (no recon yet; I'll do that next), and I'm putting in 3 sort-of-representative runs:

3258: single cluster > 500 MeV
3288: two cluster > 200 MeV, non-blocking (DAQ rate ~7 kHz)
3340: two cluster > 200 MeV, blocking (DAQ rate ~60 kHz)

…3340 should also have a better-behaved EVIO structure (but not sure…still running). Nathan & Ben should say more about what was wrong & what's fixed.

Everything used to run this is in /u/group/hps/production/data/auger

On Dec 16, 2014, at 11:22 AM, Graf, Norman A. <[log in to unmask]> wrote:

Thanks Matt. Once you're done could you please post an announcement with the location of the output files? A pointer to the scripts and steering files should also be provided so as to establish a provenance for them.

Norman

________________________________
From: Graham, Mathew Thomas
Sent: Tuesday, December 16, 2014 8:19 AM
To: Graf, Norman A.
Cc: hps-software
Subject: Re: Memory issue running EvioToLcio on jlab batch

The scripts are there, but I'm doing only a couple of runs to start with.

On Dec 16, 2014, at 11:11 AM, Graf, Norman A. <[log in to unmask]> wrote:

Hello Matt,

Good to know. I was starting to scratch my head on that one. Are you setting up a pipeline, or are you doing this manually at the moment?

Norman

________________________________
From: [log in to unmask] on behalf of Graham, Mathew Thomas <[log in to unmask]>
Sent: Tuesday, December 16, 2014 7:43 AM
To: hps-software
Subject: Re: Memory issue running EvioToLcio on jlab batch

I got this to work by removing the "-Xmx2048m" option and letting java figure things out for itself.
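[For context, a sketch of the NIO call that fails below: EvioReader memory-maps its input via FileChannel.map(), which needs contiguous virtual address space for the whole file. On a 32-bit JVM that space is capped at roughly 2-4 GB and is shared with the Java heap, so a large -Xmx plus a big EVIO file can exhaust it, which is presumably why dropping -Xmx2048m helps. The class and file names here are hypothetical; this is not the jevio source.]

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedReadSketch {
    public static void main(String[] args) throws IOException {
        // Stand-in for an EVIO input file.
        Path tmp = Files.createTempFile("evio-demo", ".dat");
        Files.write(tmp, "hello evio".getBytes(StandardCharsets.US_ASCII));
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            // This is the kind of call that throws OutOfMemoryError: "Map failed"
            // when the mapping does not fit into the JVM's address space.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            System.out.println(new String(bytes, StandardCharsets.US_ASCII));
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```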
On Dec 16, 2014, at 9:02 AM, Graham, Mathew Thomas <[log in to unmask]> wrote:

I'm trying to convert some evio files to lcio using the jlab farms, but I can't seem to get the memory allocation to work… In the xml submitted to Auger I have <Memory space="2500" unit="MB"/> and run with java -Xmx2048m (I've actually tried a bunch of settings, but no luck). I get this error:

Mon Dec 15 22:48:22 EST 2014 :: DatabaseConditionsManager :: SEVERE :: Error loading SVT conditions onto detector.
Mon Dec 15 22:48:22 EST 2014 :: DatabaseConditionsManager :: CONFIG :: DatabaseConditionsManager is initialized
Mon Dec 15 22:48:22 EST 2014 :: EvioToLcio :: CONFIG :: Conditions system will be frozen to use specified run number and detector!
Mon Dec 15 22:48:22 EST 2014 :: DatabaseConditionsManager :: CONFIG :: The conditions manager has been frozen and will ignore subsequent updates until unfrozen.
Mon Dec 15 22:48:22 EST 2014 :: EvioToLcio :: CONFIG :: The job will include the following EVIO files ...
in.evio
Mon Dec 15 22:48:22 EST 2014 :: EvioToLcio :: INFO :: Opening EVIO file in.evio ...
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Map failed
        at org.hps.evio.EvioToLcio.run(EvioToLcio.java:304)
        at org.hps.evio.EvioToLcio.main(EvioToLcio.java:99)
Caused by: java.io.IOException: Map failed
        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:849)
        at org.jlab.coda.jevio.MappedMemoryHandler.<init>(MappedMemoryHandler.java:112)
        at org.jlab.coda.jevio.EvioReader.<init>(EvioReader.java:447)
        at org.jlab.coda.jevio.EvioReader.<init>(EvioReader.java:342)
        at org.jlab.coda.jevio.EvioReader.<init>(EvioReader.java:324)
        at org.hps.evio.EvioToLcio.run(EvioToLcio.java:302)
        ... 1 more
Caused by: java.lang.OutOfMemoryError: Map failed
        at sun.nio.ch.FileChannelImpl.map0(Native Method)
        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:846)
        ... 6 more

…this works fine running on my Mac and interactively on the ifarms, but I think the problem is that we're running 32-bit Java on the batch machines (as scicomp says we are supposed to) and it has problems with large memory maps. Anyone have any ideas to get around this?

Thanks,
Matt

________________________________
Use REPLY-ALL to reply to list

To unsubscribe from the HPS-SOFTWARE list, click the following link:
https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1