RE: Re: Devanagari [was Re: Any web-published rebuttals to

From: Geoff Back (GEOFF@AUTOCUE.MHS.CompuServe.COM)
Date: Tue Jan 14 1997 - 13:40:44 EST


Thanks for the information, but our Devanagari support is already complete
(and accepted by our Indian customers).

Cheers,

   -- Geoff.

----------
From: MAIL@CSERVE {INTERNET:unicode@Unicode.ORG}
Sent: 13 January 1997 02:47
To: MAIL@CSERVE {INTERNET:UNICODE@UNICODE.ORG}
Subject: Re: Re: Devanagari [was Re: Any web-published rebuttals to

Hello Sandeep,

I am commenting on the dialogue between yourself and Glenn Adams on the
deficiencies of Unicode etc. I am neither a language expert nor a
Unicode/ISCII expert. My position is that of someone who is implementing a
Unicode-based system for Indian languages to run under both Windows and Unix
environments.

Comparing Unicode and ISCII, it is fairly obvious that Unicode has been
derived from ISCII. ISCII, in turn, was designed to use the existing 8-bit
processing capabilities of software to accommodate all the Indian scripts.

Unicode (like ISCII) is not concerned with how a character is displayed:
the intention is to encode all the basic characters of the language and
leave it to rendering or display software to determine how to render the
glyphs (which are distinct from the characters). The idea is that if the
coding system can always represent the original characters, independent of
how they are displayed, the original text will survive multiple
transliterations etc.
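
To make the point concrete, here is a small Python sketch of my own (not part
of the standard, just an illustration): the Devanagari conjunct "ksha" has no
code point of its own; it is stored as KA + VIRAMA + SSA, and it is left to
the display software whether to draw the conjunct glyph or a fallback with a
visible halant.

    import unicodedata

    text = "\u0915\u094D\u0937"   # KA + VIRAMA + SSA, i.e. the conjunct "ksha"
    for ch in text:
        print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")

    # Prints:
    #   U+0915 DEVANAGARI LETTER KA
    #   U+094D DEVANAGARI SIGN VIRAMA
    #   U+0937 DEVANAGARI LETTER SSA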

Unlike Latin, the number of glyphs associated with the Indian languages is
too large to accommodate within the fixed code space of Unicode. I believe
this is the reason why Unicode has not bothered to give a code point to
conjunct characters. This is just an opinion, since I do not have access to
all the thinking that went into the creation of the Unicode standard.

From the viewpoint of an implementor of software using Unicode, I prefer to
work with the relatively few code points of Unicode rather than with a large
number of code points specific to a large number of glyphs. I hold this view
only because I am interested in ALL the Indian scripts as well as the
transliteration between them. If, for example, Unicode had defined code
points for many, if not all, of the possible glyphs for Devanagari, my job
as an implementor would have been hard, if not impossible, on those portions
of the system where I am implementing a transliteration module to
transliterate from, say, the Devanagari script to Malayalam. Under the
present scheme of Unicode, all I need to do is shift the base of the first
byte and leave it to the display module to render the result in whichever
way it wants. If Unicode had indeed defined a separate code point for a
glyph, then unless it also provided a code point corresponding to the same
glyph in every other Indian script, my job would have been impossible, since
I would have no way of knowing what the original characters of the glyph
are.
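
As a rough sketch of what I mean by shifting the base (again my own Python
illustration, with the caveat that the parallel layout of the Indic blocks is
not perfect, so a real module has to special-case the code points that have
no counterpart in the target script):

    DEVANAGARI_BASE = 0x0900
    MALAYALAM_BASE = 0x0D00

    def devanagari_to_malayalam(text):
        out = []
        for ch in text:
            cp = ord(ch)
            if 0x0900 <= cp <= 0x097F:        # inside the Devanagari block
                out.append(chr(cp - DEVANAGARI_BASE + MALAYALAM_BASE))
            else:                             # leave everything else untouched
                out.append(ch)
        return "".join(out)

    # "namaste" spelt in Devanagari comes out spelt in Malayalam letters
    print(devanagari_to_malayalam("\u0928\u092E\u0938\u094D\u0924\u0947"))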

Because of the various problems that you have pointed out in Unicode (and
ISCII), plus the fact that Unicode says nothing about how to display the
characters, I believe that Mohan Tambe, the originator of the ISCII scheme
and the designer of the GIST card, is working on a methodology for
displaying characters based on an 'attachment point' concept, as well as a
coding scheme called ACII (Alphabetic Code for Information Interchange),
which is an 8-bit code capable of representing all the world's languages. I
do not fully understand the scheme since, as I said, I am not an expert in
this matter. You may wish to contact Mohan about ACII. He can be reached at
Mohan Tambe <tambe@ncore.soft.net>.

Regards...Das
