At 07:35 PM 16/01/97 -0800, John Plaice wrote:
>Be serious, Rick, there will ALWAYS be characters that are not encoded
>in Unicode, for various reasons. The fact that Unicode does not provide
>for other means for encoding other character sets ensures that people
>who are forced to deal with such characters will have to come up with
>ad hoc means for setting up character set changes.
I think this is good. It encourages people to take their character blocks
through the standardization process and gets the semantics nailed down
much better than most of the "table of glyphs" standards I have lying
around. True, you may not get the best encoding from your perspective
(even if your perspective is the best one), owing to the valid and invalid
concerns of the various groups involved; but there will be one encoding
scheme, in a readily available form, that everyone can implement.*
There is even a chance that the encoding will be superior to some
of those that are concocted by people who either haven't done a few
dozen languages before or thought that one aspect of processing
outweighs all others.
If you cannot get it encoded, it is due to politics, to insufficient
information, or to your data not being considered character data
by the standards bodies. Given that Braille is now accepted by the
UTC (but not part of Unicode yet!) and is being proposed for the
ISO 10646 track, it looks like that stance has softened somewhat.
* I have spent more time and money than I care to admit chasing down
standards and conversion tables in order to talk to different hosts.
I look forward to the day when it won't be unreasonable to ask the
customer for a Unicode interface, rather than having to research
how their host chose to deal with some of the tricky language-specific
issues and having translators at every I/O port.
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:33 EDT