When putting surrogate pairs into the character set, it parses them as multiple characters. This is a bit troublesome for putting certain symbols in my font. Is there a chance this could be fixed? Thanks!
A bit of a clarification on this issue: JavaScript internally uses UTF-16 for strings, so characters up to U+FFFF work as expected. Above that, though, they're encoded as pairs of surrogate code units in the range U+D800 to U+DFFF, which are not valid codepoints on their own. So characters from U+10000 onward will end up in the TTF as U+Dxxx, and the rest of the characters on that line will be misaligned.
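For illustration, here's a minimal sketch in plain JavaScript (not the tool's actual code) showing how indexing a string by UTF-16 code units splits a character above U+FFFF into its surrogate halves, while iterating by code points keeps it whole:

```js
const input = "A👑B"; // 👑 is U+1F451, outside the BMP

// Indexing by UTF-16 code units splits the emoji into two surrogates:
// input.length === 4, and index 1 holds the high surrogate 0xD83D.
for (let i = 0; i < input.length; i++) {
  console.log(input.charCodeAt(i).toString(16)); // 41, d83d, dc51, 42
}

// Iterating the string (or using Array.from / codePointAt) yields whole
// code points, so the character set stays aligned:
for (const ch of input) {
  console.log(ch.codePointAt(0).toString(16)); // 41, 1f451, 42
}
```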
Hmm, on second thought, it still seems to be exporting the font incorrectly, even though it appears correctly in the editor itself. I have a glyph that's meant to be set to 👑, but looking at the exported font in FontForge, it doesn't appear at that codepoint (it shows up at U+F451 instead), and the font is still set to BMP only.
I have rechecked and I do set the codepoint to U+1F451 as expected.
There's a possibility that the issue is on fonthx's end, but I'm not familiar enough with either the codebase or TrueType format internals to even find where the codepoints are written into the output.
The font appearing as BMP-only is something on fonthx's side, but my attempts to figure out what's responsible for that have been similarly unsuccessful; fonts exported from FontStruct show the same behavior.