John Cowan wrote:
>> The ISO standard also defines a 16-bit encoding form called UCS-2, in which
>> a 16-bit code value in the code space 0x0..0xFFFF directly corresponds to an
>> identical scalar value, but this form is, of course, inherently limited to
>> representing only the first 65,536 scalar values.
>UCS-2 is bogus and shouldn't be explained before UTF-16, which has been the
>real deal since Unicode 2.0.
Why is it bogus?
I see UTF-16 as the really bogus one. UTF-16 exists (I guess) because the
Unicode Consortium suddenly realised that 16 bits were not enough, and
instead of moving to the full UCS code space it tried to cram as much as
possible into 16-bit units with a variable-length scheme, much as UTF-8
does with 8-bit units.
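For concreteness, the variable-length mechanism being argued about is UTF-16's surrogate pairs: code points in the BMP (U+0000..U+FFFF) take one 16-bit unit, exactly as in UCS-2, while code points above U+FFFF are split across two units in the reserved surrogate ranges. A minimal sketch (Python, not from the original mail; the function name is my own):

```python
# Sketch of UTF-16 surrogate-pair encoding, per the Unicode standard.
# Python's codecs do this natively via str.encode("utf-16"); this is
# purely illustrative.

def to_utf16_units(cp: int) -> list[int]:
    """Return the 16-bit code unit(s) encoding one Unicode scalar value."""
    if cp < 0x10000:
        return [cp]                     # BMP: one unit, same as UCS-2
    cp -= 0x10000                       # 20 bits remain
    high = 0xD800 | (cp >> 10)          # high (lead) surrogate: top 10 bits
    low = 0xDC00 | (cp & 0x3FF)         # low (trail) surrogate: bottom 10 bits
    return [high, low]

# U+0041 'A' fits in one unit; U+1F600 needs a surrogate pair.
print([hex(u) for u in to_utf16_units(0x0041)])   # ['0x41']
print([hex(u) for u in to_utf16_units(0x1F600)])  # ['0xd83d', '0xde00']
```

This is why UTF-16, unlike UCS-2, can reach all 1,114,112 scalar values while staying backward-compatible with 16-bit software for the BMP.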
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:59 EDT