
On Mon, May 18, 2009 at 08:03:47AM -0400, Greg Troxel wrote:
> It seems there is a table of a lot of Garmin codepoints, and I wonder whether having a test file that has a POI with each possible codepoint, and similarly for roads, would help determine the behavior/meanings for the rest of them. Perhaps things are 100% figured out, but it feels like not.
I ran an experiment on my Edge 705 a couple of weeks ago, generating some non-ASCII labels in cp1252, and many code points were displayed as a substitution character. I think it would be useful to test each Garmin code page on as many devices as possible to find out which code points are really supported.
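As a starting point for such a test, one first needs to know which byte values the code page defines at all. Here is a quick probe (my sketch, not part of the original experiment) that enumerates the 0x80-0xFF range and reports the bytes that cp1252 leaves undefined:

```python
# Enumerate which bytes in 0x80-0xFF are defined in cp1252.
# The undefined bytes raise UnicodeDecodeError under strict decoding.
defined = []
undefined = []
for b in range(0x80, 0x100):
    try:
        ch = bytes([b]).decode("cp1252")
        defined.append((b, ch))
    except UnicodeDecodeError:
        undefined.append(b)

print("undefined:", [hex(b) for b in undefined])
# prints: undefined: ['0x81', '0x8d', '0x8f', '0x90', '0x9d']
```

Labels built from the `defined` list would then cover every code point the code page can express, so any substitution characters seen on the device are the device's doing, not an encoding gap.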
> The basic idea is to take a lat/lon to start (to make the map 'near' someone working on this) and then to have a grid of points spaced 50m, roughly square, that has each possible codepoint in some range known to exist. Then a second grid with a N-S road and then, every 50m, E-W roads of every type on the left and, on the right, some normal road with every access combination. They could be labeled with names that have the codepoint, and eventually the meaning, drawing from the database of meaning/codepoint mappings.
>
> Once this was written, other people could more easily experiment to add knowledge.
It seems to me that the easiest way to achieve this would be to write a script that generates an *.osm file and to convert that file with mkgmap. That would also enable some regression testing later, if we archive the generated output. For this to succeed, there should be a way to bypass the character set translations in mkgmap and to specify the code page. (For example, mkgmap --latin1 would translate UTF-8 "ß" into "ss", even though the correct translation is 0xdf in cp1252.)

	Marko
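A minimal sketch of what such a generator script could look like (the start coordinates, grid width, POI tag, and file name are my arbitrary choices, and the 50 m spacing is converted to degrees only approximately):

```python
import math
import xml.etree.ElementTree as ET

def make_grid_osm(lat0=60.0, lon0=24.0, cols=16):
    """Emit an *.osm string with one POI per defined cp1252 code point
    in 0x20-0xFF, laid out on a grid spaced roughly 50 m apart."""
    dlat = 50.0 / 111320.0                      # ~50 m of latitude in degrees
    dlon = dlat / math.cos(math.radians(lat0))  # widen spacing with latitude
    root = ET.Element("osm", version="0.6", generator="codepoint-grid")
    node_id = -1                                # negative ids mark new objects
    for cp in range(0x20, 0x100):
        try:
            label = bytes([cp]).decode("cp1252")
        except UnicodeDecodeError:
            continue                            # five bytes are undefined in cp1252
        row, col = divmod(cp - 0x20, cols)
        node = ET.SubElement(root, "node", id=str(node_id), version="1",
                             lat=f"{lat0 + row * dlat:.7f}",
                             lon=f"{lon0 + col * dlon:.7f}")
        # the name carries both the code point number and the character itself
        ET.SubElement(node, "tag", k="name", v=f"0x{cp:02X} {label}")
        ET.SubElement(node, "tag", k="amenity", v="cafe")
        node_id -= 1
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    with open("codepoint-grid.osm", "w", encoding="utf-8") as f:
        f.write(make_grid_osm())
```

The generated file could then be fed to mkgmap with the code page option under test, and the archived output diffed against later runs for regression testing. Extending the same loop to emit ways instead of nodes would give the road grid described above.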