>People may discover that your font(s) accept U+0080. They may even discover it
by accident. If Unicode data using U+0080 then proliferates, you will be stuck
supporting such bad data for a long time.
There's some truth to what you're saying. My expectation is that people wouldn't
be likely to discover that the font accepts U+0080. Since it wasn't documented
as a "feature" of the font, I wouldn't worry too much about dropping support for
it at a later date; I'm inclined to say that users should know better than to
make their data dependent upon a particular font.
I'll admit that the best situation is that people always and only use Unicode
and use it in a conformant fashion. I only meant to suggest that if someone
feels compelled to do something in their font to deal with the change in the
definition of cp1252, there is a rather less offensive way to go about it than
what had been suggested. The main cases in which we have recommended the
solution I've described have been for people who have made fonts with custom
character sets; i.e. they are already hacks of cp1252 and of Unicode. In such
cases, the change I described is insignificant. I want to move those users away
from these hacks and to Unicode-conformant data ASAP, but in some cases the
characters they need are not yet in the standard, and not all the apps that they
use support Unicode-encoded data.
So, in the meantime, the best I can do for them is to try to educate them on
the fact that they are using hacks, explain why their fonts break in Win98
(their custom glyph at hacked U+0080 no longer appears when they type 0x80),
explain to them why the right solution for them is Unicode, and then provide an
interim solution.
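The cp1252 change at issue can be seen concretely by decoding byte 0x80 under the two interpretations (a small Python sketch, not part of the original thread; the codec names are Python's):

```python
# Byte 0x80 under Windows code page 1252 vs. ISO-8859-1 (Latin-1).
raw = b"\x80"

# After the Euro was added to cp1252, 0x80 maps to U+20AC,
# not to the C1 control U+0080 that hacked fonts relied on.
print(raw.decode("cp1252"))   # '\u20ac' (EURO SIGN)

# Latin-1 passes the byte through as the C1 control U+0080.
print(raw.decode("latin-1"))  # '\u0080'
```

So a font that placed a custom glyph at U+0080 stops showing it once the system starts mapping the typed 0x80 byte to U+20AC instead.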
Thanks for clarifying for me where you're coming from. Deep down, I really do
agree that you're right. It's just that on occasion I allow my non-purist alter
ego a slight margin in which to work if it serves a practical purpose.
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:46 EDT