Alan Flavell (EXSEAS list) sent me this... I have only two comments, at the
end, on his references on coded character sets. The rest is certainly the
original explanation; the one I gave was more recent and may itself have
been a symptom of the origin of the confusion (albeit true as well).
>Date: Mon, 25 Aug 1997 19:45:45 +0100 (BST)
>From: "Alan J. Flavell" <firstname.lastname@example.org>
>To: Alain LaBonté SCT <email@example.com>
>cc: Members of the list <firstname.lastname@example.org>
>Subject: Re: character - sign - symbol
>This isn't going to get redistributed to the list, because my email
>address has changed. If our friendly "minder" Iain catches it, could I
>ask him please to register my new address, as this will now remain
>stable. Alain, you might meantime like to copy this to the list
>with any comment that you wish to add.
>On Mon, 25 Aug 1997, Alain LaBonté SCT wrote:
>> I believe that the "pound sign" name for this symbol is just due to the
>> fact that, at some time in a given character set, some code points have
>> been the same for both the "pound sterling" sign and this "number sign"
>> in Britain and in the USA (so the name might be relatively recent,
>> dating only from the computer era).
>This has been much discussed on alt.usage.english in the past, and there
>is an alternative theory, which I find quite appealing.
>The hash sign resembles a hand-written sign that is used for pound-weight
>(NOT pound sterling). Although I have since noticed it in other places,
>particularly on market stalls, I distinctly recall this from Munich.
>There, of course, it's pronounced "Pfund" and means 500g, but in
>reality it's "lb", written with a flourish.
>Discussion on a.u.e made it pretty clear that the term "pound sign"
>had not originated in computer usage, independent of whether the
>respondent agreed with the above theory (some did) or not.
>> There is a slight confusion between "sign" and "symbol" in the coded
>> character set world (which also has its problems with terms like "glyph",
>> "visual representation" and even "text element"!), and that is the case
>> in French as well as in English.
>I'd say it's hopeless. At each twist and turn of character
>technology, the participants have looked for words that try to make
>clear whether they are referring to the appearance of the sign, or
>to its meaning, and whether they are describing a complete entity, or
>something to which accents and modifiers can be applied. They've
>grabbed whatever words came to hand - character, glyph, symbol, etc. -
>but instead of achieving clarity they seem to have achieved confusion.
>We are told that OE-ligature was excluded from iso-8859-1 because it
>was a mere ligature, whereas AE was accepted because it's treated as
>an independent letter. And so they called it AE-ligature. Clear?
>> For the list of French names of characters see:
>For the English names see
>and to ensure total confusion, may I call your attention to
The æ letter is officially a letter in ISO standards. Why? Because it is
used as such far more often in Danish (in Denmark this single letter, as
legitimate as the letter a, is sorted after z, btw) than as a ligature in
French or in English. That said, this does not preclude using it as a
spelling ligature (which Grevisse, in French, also calls a "DIGRAMME SOUDÉ"
[joined digraph], since it is *not* a mere ligature either: it is prohibited
in cases where "oe" represents two phonemes) in specific words in French or
in English, nor using synonyms when we talk about this character in the
context of those languages. An official name exists to ensure that, within
the ISO world and in a given language (i.e. in English for English versions
of ISO standards, or in French for French versions), a character corresponds
to the right entity across different standards. This makes accurate
translation tables possible, the ultimate reference for a character being
its UCS code point, its "catalog number" as ISO/IEC 14755 calls it (the
latter describes several basic input methods for entering all UCS characters
on any national keyboard or in interaction with a screen, and gives
guidelines for the user interface to different input methods).
The reference given on the apostrophe might explain (who knows?) why the
apostrophe is badly translated in Word when it comes from a WordPerfect
document (this problem appeared only in recent years).
This is the first time I have heard this story (see: Apostrophe.htm), and I
thank Alan Flavell for the reference. The story seems academic to me,
although it proves that academic speeches might well have worldwide
consequences.
ISO follows a usage totally contrary to what Unicode does: ISO has always
stated consistently that it does not dictate the semantics of coded
characters; those semantics are absolutely free, i.e. you can use whichever
coded character you want for whichever usage you want (ISO only codes
characters and gives them a name which it claims to be neutral, a way to
rewrite history, imho [btw "they" includes me, as I am also on the bandwagon
whose only way of working is consensus -- i.e. in ISO a YES means 75% of
voting members saying so, which is the absolute antithesis of democracy, as
it means that less than 26% may decide for the rest of the world, albeit
only in a negative manner -- so the entire world is guilty, and I am guilty
as well, since I did not withdraw from it]).
Taken to the limit, you could decide to visually represent the coded
character 65 as a "Z" rather than as an "A". Exaggerated? Barely. That is
the extreme consequence, never deliberately implemented (if you were
wondering!), of ISO principles on coded characters. This could possibly be
very bad for users and compound the current mess.
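A toy sketch of my own (not the author's) of that extreme consequence: since the coded character set does not dictate semantics or appearance, nothing in the code itself stops a renderer from mapping code point 65 to the wrong glyph.

```python
# Two glyph tables for the same coded characters: one conventional,
# one deliberately perverse. The codes themselves cannot tell them apart.
conventional = {65: "A", 66: "B"}
perverse = {65: "Z", 66: "B"}   # "wrong", but not forbidden by the coding

def render(codes, glyph_table):
    """Turn a sequence of code points into visible glyphs via a table."""
    return "".join(glyph_table[c] for c in codes)

print(render([65, 66], conventional))   # AB
print(render([65, 66], perverse))       # ZB
```

The point is only that interoperability rests on everyone honoring the same convention, not on anything the code table enforces.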
On the other hand, principles on semantics might also lead to problems for
end-users, such as the one Word generates for the apostrophe (which was not
a problem before, imho); that problem is real, and maybe the result of the
correction of a legend.
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:36 EDT