Hi,
after switching to my Ultrabook (6 GB) while travelling, I recently hit
some kind of borderline condition with splitter. On the first run it
throws "OutOfMemoryError: Java heap space"; on closely following runs
without any modifications it does not. Repeating the task after some
delay fails again. I guess there might be some self-optimization
involved.
/fail:
...
40.000.000 ways parsed... id=888262666
Number of stored tile combinations in multiTileDictionary: 4.525
41.000.000 ways parsed... id=929920953
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at uk.me.parabola.splitter.tools.SparseLong2IntMap$ChunkMem.<init>(SparseLong2IntMap.java:189)
at uk.me.parabola.splitter.tools.SparseLong2IntMap.saveCurrentChunk(SparseLong2IntMap.java:627)
at uk.me.parabola.splitter.tools.SparseLong2IntMap.replaceCurrentChunk(SparseLong2IntMap.java:886)
at uk.me.parabola.splitter.tools.SparseLong2IntMap.put(SparseLong2IntMap.java:691)
at uk.me.parabola.splitter.SplitProcessor.processWay(SplitProcessor.java:149)
at uk.me.parabola.splitter.AbstractMapProcessor.consume(AbstractMapProcessor.java:84)
at uk.me.parabola.splitter.OSMFileHandler.execute(OSMFileHandler.java:157)
at uk.me.parabola.splitter.Main.writeTiles(Main.java:542)
at uk.me.parabola.splitter.Main.start(Main.java:132)
at uk.me.parabola.splitter.Main.main(Main.java:81)
Elapsed time: 8m 0s   Memory: Current 1466MB (1339MB used, 127MB free) Max 1466MB/
/success:
...
48.000.000 ways parsed... id=1262369277
Writing relations Tue Mar 19 10:50:36 CET 2024
100.000 relations parsed... id=1783690
200.000 relations parsed... id=4148045
300.000 relations parsed... id=7895430
400.000 relations parsed... id=11681672
500.000 relations parsed... id=15581604
coord Map: 312.851.126 stored long/int pairs require ca. 3 bytes per pair. 14.225.657 chunks are used, the avg. number of values in one 64-chunk is 21.
coord Map details: ~852 MB, including 88 array(s) with 8 MB
way Map: 48.015.926 stored long/int pairs require ca. 3 bytes per pair. 3.974.651 chunks are used, the avg. number of values in one 64-chunk is 12.
way Map details: ~123 MB, including 10 array(s) with 8 MB
JVM Memory Info: Current 1466MB (1357MB used, 109MB free) Max 1466MB
Full Node tests: 62.230.523
Quick Node tests: 282.354.912
Thread worker-2 has finished
...
/
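For context on the 1466 MB cap in both runs: if I understand JVM
ergonomics correctly, without an explicit -Xmx the default max heap is
roughly a quarter of physical RAM, which on a 6 GB machine would be in
that ballpark (the arithmetic below is mine, not from the logs):

```shell
# Default max heap is approx. physical RAM / 4 (JVM ergonomics); on 6 GB:
echo $(( 6 * 1024 / 4 ))   # 1536 (MB), close to the observed 1466 MB cap
# To check what a given JVM actually picks:
# java -XX:+PrintFlagsFinal -version | grep -i MaxHeapSize
```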
My main machine has 24 GB of main memory and runs the same task
trouble-free with the following memory allocation:
/JVM Memory Info: Current 3342MB (2378MB used, 964MB free) Max 6000MB/
So far splitter 653 has been invoked without an explicit memory
allocation (java -jar .../splitter-latest/splitter.jar ...), using:
/java --version
openjdk 11.0.22 2024-01-16
OpenJDK Runtime Environment (build 11.0.22+7-post-Debian-1deb10u1)
OpenJDK 64-Bit Server VM (build 11.0.22+7-post-Debian-1deb10u1,
mixed mode, sharing)/
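What I had in mind, as an untested sketch, is choosing the heap per
machine via an environment variable so the scripts themselves stay
identical. SPLITTER_MEM and the 2g fallback are my invention, not
anything splitter provides:

```shell
# Untested sketch: pick -Xmx per machine; SPLITTER_MEM could be set to
# e.g. 6g in the big box's environment, with 2g as my guessed fallback
# for the 6 GB Ultrabook.
JAVA_MEM="${SPLITTER_MEM:-2g}"
echo "heap setting: -Xmx$JAVA_MEM"
# java -Xmx"$JAVA_MEM" -jar .../splitter-latest/splitter.jar ...
```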
Following up on the splitter tuning hints (areas.list gets generated in
each case), I reduced --max-areas= from 4096 to 2048 to 1024, but to no
avail (and without any significant effect on the runtimes) once I had
noticed the pattern above: every first run fails, and every shortly
following rerun succeeds.
Unfortunately it's not possible to increase the physical memory of the
small machine, but system tools report only about 2...3 GB in use
anyway.
Is it possible to tweak Java to overcome the problem without hurting the
maps, preferably per machine, so that identical scripts can run on both?
Some pointers would be appreciated, also on how to monitor the Java
memory situation.
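So far the only monitoring I know of is splitter's own "JVM Memory Info"
lines; one direction I came across (assuming a full JDK is installed,
and the exact usage is my guess) is jstat:

```shell
# Assumed sketch: sample heap/GC utilization of the running splitter JVM
# once per second. jstat ships with the JDK; <pid> would be splitter's
# process id (e.g. found via "jcmd" or "ps"). Commented out here since
# it needs a running JVM:
# jstat -gcutil <pid> 1000
echo "monitor with: jstat -gcutil <pid> 1000"
```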
Thanks, Felix