It seems that in Unicode there are satisfactory ways to write dates,
currencies, and so on, the Japanese way (with Imperial era years) as well as ours.
But I think that there is little support to enable people to write
numbers their own way (just think of the Japanese/Chinese way, among others).
It's true, the glyphs for the Japanese kanji digits, as well as those for
100, 1000 and 10000, are there, but the input of numeric data
in non-Western contexts raises several side problems:
- a routine must (or should) distinguish between Japanese digits and Western
digits, and disallow the user from writing a number that mixes the two sets. But all
number-related symbols are merely marked as "is digit", with no finer distinction
such as "is digit, more precisely, a Japanese digit"... so a routine that
processes numeric input has to inspect the symbols it reads to figure out whether
the number is being written in Japanese, Western, or some other notation, and then
apply that notation's rules (e.g. 1 million is 100 x 10000 in Japanese
['hyaku-man']). So, to assemble what the user types and obtain the
internal binary representation of the number, there are a lot of decisions
to be taken, and there is little support built into the Unicode set for
taking them.
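To make the 'hyaku-man' point concrete, here is a minimal sketch (in Python, purely as an illustration, with hypothetical function and table names) of what such a routine has to know on its own: kanji numerals are not positional but use multiplier units within each myriad (10, 100, 1000) plus myriad groupings (10000, 10^8), and none of that structure comes from Unicode character properties.

```python
# Hypothetical parser for kanji numerals: the tables below encode
# knowledge the Unicode data itself does not supply.
DIGITS = {"一": 1, "二": 2, "三": 3, "四": 4, "五": 5,
          "六": 6, "七": 7, "八": 8, "九": 9}
SMALL_UNITS = {"十": 10, "百": 100, "千": 1000}   # multipliers within one myriad group
BIG_UNITS = {"万": 10**4, "億": 10**8}            # myriad groupings

def parse_kanji_number(s):
    total = 0   # value of completed myriad groups
    group = 0   # value of the current myriad group
    digit = 0   # pending digit waiting for a unit
    for ch in s:
        if ch in DIGITS:
            digit = DIGITS[ch]
        elif ch in SMALL_UNITS:
            # a bare unit means 1, e.g. "百" alone is 100
            group += (digit or 1) * SMALL_UNITS[ch]
            digit = 0
        elif ch in BIG_UNITS:
            group += digit
            total += (group or 1) * BIG_UNITS[ch]
            group = digit = 0
        else:
            raise ValueError(f"not a kanji numeral: {ch!r}")
    return total + group + digit

# "百万" (hyaku-man) is 100 x 10000 = 1 million
print(parse_kanji_number("百万"))    # -> 1000000
print(parse_kanji_number("三千二百一"))  # -> 3201
```

Note how the grouping rule (100 x 10000 rather than a "million" symbol) lives entirely in the parser's tables, which is exactly the kind of knowledge the post argues could partly live in Unicode's information tags.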
It seems that the routine dealing with number input has to know by
itself a lot of stuff that could (at least in part) be conveniently
placed in the information tags of the Unicode character data.
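As a small illustration of the "is digit, but which kind?" gap: Python's standard unicodedata module (used here only as an example of what a Unicode-backed library exposes) gives each digit a numeric value, but no direct "script of this digit" tag, so a routine that wants to reject mixed-script numbers ends up improvising, e.g. from character names. The digit_script helper below is my own hypothetical workaround, not a Unicode facility.

```python
import unicodedata

def digit_script(ch):
    """Crude script tag derived from the character name; an improvised
    workaround, since no per-digit script tag is exposed here."""
    name = unicodedata.name(ch)
    if name.startswith("DIGIT"):
        return "LATIN"                 # ASCII 0-9 are named plain "DIGIT ..."
    return name.split(" DIGIT")[0]     # e.g. "ARABIC-INDIC DIGIT SEVEN"

def parse_unmixed(s):
    """Reject numbers that mix digit sets, then convert."""
    scripts = {digit_script(ch) for ch in s}
    if len(scripts) > 1:
        raise ValueError(f"mixed digit scripts: {scripts}")
    return int(s)   # int() accepts any single run of decimal digits

print(parse_unmixed("٧٧"))   # Arabic-Indic digits -> 77
print(parse_unmixed("77"))   # ASCII digits -> 77
```

Calling parse_unmixed("7٧") raises, which is the mixing check the post asks for; the point is that the script classification had to be reinvented by the application rather than read from a tag.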
I'd like to know your comments on the topic. Perhaps I'm wrong.
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:41 EDT