Hi,

I've been trying to solve this problem for some time now, but I can't find a solution, so I hope this is the right place to ask.

I'm trying to split SRTM 10 m contour data for Nepal. It's not that much data; the .pbf file is only 375 MB.

However, when I try to run it:

java -jar -Xms4G -Xmx4G ..\mkgmap\Splitter\splitter.jar --output-dir=./SRTM/ --output=pbf --max-threads=6 --status-freq=60 SRTM_NP.pbf

it first prints warnings:

Warning: Fileblock has body size too large and may be considered corrupt

then errors:

Exception in thread "worker-3" java.lang.Error: This file has too many entities in a block. Parsers will reject it.
        at crosby.binary.file.FileBlock.newInstance(Unknown Source)
        at crosby.binary.BinarySerializer.processBatch(Unknown Source)
        at uk.me.parabola.splitter.writer.BinaryMapWriter$PBFSerializer$Processor.checkLimit(BinaryMapWriter.java:385)
        at uk.me.parabola.splitter.writer.BinaryMapWriter$PBFSerializer$Processor.process(BinaryMapWriter.java:406)
        at uk.me.parabola.splitter.writer.BinaryMapWriter.write(BinaryMapWriter.java:515)
        at uk.me.parabola.splitter.writer.AbstractOSMWriter.write(AbstractOSMWriter.java:83)
        at uk.me.parabola.splitter.SplitProcessor$OSMWriterWorker.run(SplitProcessor.java:430)
        at java.lang.Thread.run(Unknown Source)

and the resulting files are incomplete.

I tried lowering the --max-nodes parameter, but that only prolongs the process; eventually I was looking at days of processing time.
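For reference, the retries looked roughly like this (the --max-nodes value below is just an illustration, not the exact value I used; splitter's default is 1600000, and lower values mean more, smaller tiles and a longer run):

```shell
# Same invocation as above, with an example lowered --max-nodes value
java -Xms4G -Xmx4G -jar ..\mkgmap\Splitter\splitter.jar --output-dir=./SRTM/ --output=pbf --max-threads=6 --status-freq=60 --max-nodes=800000 SRTM_NP.pbf
```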

Do you have any idea what to tune to avoid this? I've split files of similar size before, but Nepal seems to be a tough nut to crack. I'm using Oracle Java 1.8.0_191 64-bit (but I had the same problems with 32-bit Java too).

Thanks in advance,

Vojta

https://garmin.v0174.net