From: Dean Snyder (firstname.lastname@example.org)
Date: Sat May 22 2004 - 01:04:52 CDT
>> Or (making the missed point explicit):
>I attempted to bring this thread back on track yesterday, but
>since it seems to have veered off into the ditch again, we
>may as well spin our wheels some more, I guess. :-(
My response to your assessment was that it completely ignored the
PROBLEMS that encoding a diascript could cause; I don't believe that that
is off-track wheel-spinning.
>> If the UTC did consider the potential for large numbers of users as a
>> decisive criterion for encoding a script,
>The UTC and WG2 *do* consider the potential for a significant
>number of users as *one* criterion for encoding a script. It may
>not be a *decisive* criterion, and people's opinions will vary
>concerning how large a number of users has to be to be considered
>significant. Certainly more than one.
The numbers-of-users argument was presented as decisive and I pointed out
that it should not be decisive.
>> Japanese would be separately encoded.
>This is an utter non demonstrandum. "Japanese" is a writing
>system, not a script. The Unicode Standard does not encode
>writing systems -- it encodes scripts. And the scripts used
>by the Japanese writing system *are* separately encoded --
>separately from other scripts.
Kanji, the only unified part of the Japanese writing system, and,
naturally, of course, that part to which I was referring, is not
separately encoded.
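The unification point is easy to verify against the Unicode character database itself; a minimal Python sketch using only the standard library (the sample phrase is my own illustrative choice):

```python
import unicodedata

# A short Japanese phrase mixing Kanji, Katakana, Hiragana, and Latin.
text = "日本のカタカナとtest"
for ch in text:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
# The Kanji come back as CJK UNIFIED IDEOGRAPH-XXXX -- unified with
# Chinese hanzi and Korean hanja, not a separately encoded "Japanese".
```

Hiragana and Katakana each get their own character names and blocks, while every Kanji resolves to a shared CJK Unified Ideograph.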
>> I can assure you, that there would be many users for a
>> separately encoded Japanese,
>On what basis do you assert that? Especially given that there
>are, in this case, literally tens of millions of users of
>Japanese language data represented using the characters
>encoded in the Unicode Standard as currently defined.
I am really dumbfounded that the logic is not being followed here.
BEFORE CJK was unified in Unicode, IF Japanese (meaning Kanji, of course,
to be pedantic) had been separately encoded (and also Chinese, and
Korean), I say there would have been many users of a separately encoded
Japanese. Do you deny that? If numbers of users (the logic being
suggested for Phoenician) is justification enough to encode, then why did
you NOT separately encode Japanese (Kanji of course) for all those many
more potential users?
>> just as there would be for a separately encoded Fraktur,
>A faulty analogy, as well as another assertion with no
>apparent evidence to back it up.
I have given in previous emails what I believe to be sufficient and
specific evidence for my claim that "Phoenician" is to Jewish Hebrew what
Fraktur is to Roman German. Do I need to repeat it? ;-) Without giving
any countervailing evidence, you just assert baldly that it is a faulty
analogy. That's not good enough.
>> since Japanese and Fraktur are not separately encoded just because there
>> would be lots of people who would use such an encoding,
>Unless you are using "just because" in some sense I am unfamiliar
>with in the English language, this claim makes no sense whatsoever
I see, even from other responses, that my wording here was, to say the
least, unclear.
What I was trying to say, of course, was that, since Japanese and Fraktur
were not separately encoded EVEN THOUGH there would have been lots of
people who would use such encodings, a fortiori the far smaller number of
potential Phoenician users should not be taken as decisive for its encoding.
>The Japanese writing system and the Fraktur style of the
>Latin script are not separately encoded because neither is
>adjudged to be a distinct script, not because of some
>speculative census of potential users.
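That much, at least, is checkable: Fraktur German is encoded with ordinary Latin code points, and the only "Fraktur" characters in Unicode are mathematical styling symbols. A small Python illustration (standard library only; the sample word is my own):

```python
import unicodedata

# Fraktur-set German uses ordinary Latin letters; the blackletter
# appearance is a matter of font, not of code point.
word = "Abend"  # same code points whether set in Antiqua or Fraktur
assert [ord(c) for c in word] == [0x41, 0x62, 0x65, 0x6E, 0x64]

# The only "Fraktur" code points are math notation symbols,
# not a separately encoded script:
print(unicodedata.name(chr(0x1D504)))  # MATHEMATICAL FRAKTUR CAPITAL A
```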
And the separate-script business is precisely the point that you have
failed to prove for Phoenician (while I have provided multiple pieces of
evidence that it is not a separate script), and so you keep falling back
on your argument that some people want it anyway. To which I counter: is
their desire significant enough to justify introducing complications for
Semitic scholars, the main users of the "scripts" in question?
>> why would you, on that same faulty basis,
>Making a nonsensical claim, then (falsely) attributing it to
>others as the basis of claims they make would seem to be a
>double red herring to me.
The claim was not nonsensical and it is not a false attribution that
others were using the numbers-of-users argument as being decisive.
>> support a separate encoding for Phoenician?
>Michael, I, and a number of others have already stated
>sufficient reasons for why we would support a separate encoding
>for the Phoenician (~Old Canaanite) script.
But I can recall no evidence you have given that Phoenician IS a separate
script.
>By the way, as your attempted analogy above appears to demonstrate
>a failure to understand the distinction between a writing system
>and a script for the purposes of encoding in the Unicode Standard,
>perhaps you would consider recusing yourself from further
>argumentation regarding the proposed encoding of Phoenician.
>No? I thought not, but I had to give it a try.
#1 What does knowledge about writing system/script distinctions have to
do with Phoenician? You're not claiming any writing system status for
Phoenician are you?
#2 I have actually written commercial internationalization software for
the Japanese writing system and its four scripts, Kanji, Katakana,
Hiragana, and Romaji. Knowing that, are you now willing to reconsider
your suggestion that I might not be qualified to discuss a Phoenician
encoding?
#3 Does a dearth of research experience in Phoenician qualify one for
argumentation regarding its proposed encoding? ;-)
Dean A. Snyder
Assistant Research Scholar
Manager, Digital Hammurabi Project
Computer Science Department
Whiting School of Engineering
218C New Engineering Building
3400 North Charles Street
Johns Hopkins University
Baltimore, Maryland, USA 21218
office: 410 516-6850
cell: 717 817-4897
This archive was generated by hypermail 2.1.5 : Sat May 22 2004 - 01:04:23 CDT