From: JFC Morfin (firstname.lastname@example.org)
Date: Fri Apr 25 2008 - 04:47:53 CDT
At 02:41 25/04/2008, Asmus Freytag wrote:
>If the character doesn't violate a principle in the standard,
>there's no reason why it couldn't be
>encoded; however, if its presence in the standard is not correlated
>with it showing up in actual
>documents (for example, because of the way systems and fonts have
>implemented the standard)
>then there's perhaps no need to encode the character based on its
>presence in a code chart.
>On the other hand, perhaps the standard did base the design on a
>real character. If sufficient
>information can be assembled to define that character, it would open
>up an avenue to encode
>it, which would be independent of the character.
This is the problem I have already reported: the difference we encounter
between the concepts of norm and standard. In the French language we
initially emphasised the norm, which describes the world, while in
English the emphasis went to the standard, which rules the world. With
globalization, norms and standards are no longer locally interoperable,
the standard influencing both the way the world is and its normative
description; instead, the norm is global while the standard is local.
This has two main consequences:
- the alternative between standard internationalisation
(interoperability through using the same rules) and normative
multilateralisation (interoperability based upon the same understanding);
- metalanguage development introducing analysis and often (as you
mention) leading to constraints, i.e. complication, to address
the resulting complexity.
If adding a code is subject to metalanguage limitations (for
example, because of the way systems and fonts have implemented the
standard), this means that Unicode is a Standard and not a Norm. The
ambiguity is that it is mostly understood as being both.
This archive was generated by hypermail 2.1.5 : Fri Apr 25 2008 - 04:51:04 CDT