Although there is some truth here... the fact is that it is not really true
today that everyone equates the two. These days, when people think of
Unicode, the default association seems to be UTF-8 -- mainly due to
Unicode's use on the web, I think.
In the meantime, Microsoft is still pretty firmly rooted in the idea that
Unicode = UCS-2 (or UTF-16LE on Windows 2000). UTF-8 is named UTF-8 and
considered to be a multibyte encoding.
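(A small illustration of the distinction, not part of the original message:
the same text encoded as UTF-16LE -- the little-endian two-byte form Windows
historically just called "Unicode" -- versus UTF-8, the variable-length
multibyte form. The string and byte values below are my own example.)

```python
# Encode the same string in both forms and compare the raw bytes.
s = "A\u00f1"  # "Añ": U+0041 and U+00F1, both in the BMP

utf16le = s.encode("utf-16-le")  # fixed 2 bytes per BMP code point, low byte first
utf8 = s.encode("utf-8")         # 1 byte for ASCII, 2+ bytes otherwise

print(utf16le.hex())  # 4100f100  (0x41,0x00 then 0xF1,0x00)
print(utf8.hex())     # 41c3b1    (0x41 then the two-byte sequence 0xC3,0xB1)
```

So a "Unicode text file" in the classic Windows sense doubles the size of
plain ASCII, while a UTF-8 file leaves ASCII bytes untouched.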
----- Original Message -----
From: "Doug Ewell" <email@example.com>
To: "Unicode List" <firstname.lastname@example.org>
Sent: Thursday, July 20, 2000 10:41 PM
Subject: Re: Unicode in VFAT file system
> Addison Phillips <email@example.com> wrote:
> > Avoiding for the moment the word-parsing that Markus suggests, Unicode
> > on Microsoft platforms has always been LE (at least on Intel) and they
> > have called the encoding they use "UCS-2" (when they bothered with
> > such things: in the past they always called it "Unicode" as if it were
> > the *only* encoding). As Unicode has evolved, Microsoft products have
> > become more exact in this regard.
> I remember that in the early to mid '90s, before the invention (or at
> least widespread use) of UTF-8, UTF-32, and surrogates, *everybody* --
> not just Microsoft -- used the term "Unicode" to refer to what we would
> now call UCS-2. Even the Unicode Consortium did this! And even now,
> the few of my co-workers who know about Unicode (I'm trying to spread
> the word, folks, honest) think a "Unicode text file" is UCS-2 by
> definition. I don't know what they would think of a UTF-8 file --
> nobody but me is knowingly using them yet. In any case, this usage is
> by no means confined to Microsoft.
> -Doug Ewell
> Fullerton, California
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:21:06 EDT