From: Asmus Freytag (asmusf@ix.netcom.com)
Date: Wed Mar 09 2005 - 22:48:32 CST
At 07:36 PM 3/9/2005, Dean Snyder wrote:
>I've made it very clear that THE basis for my thinking on encoding damage
>indicators is to enable "guaranteed" integrity for damaged, interchanged
>plain text.
A similar argument could be made for the absolute integrity of mathematical
expressions. More users rely on textual representations of mathematical
expressions in their work than on transcriptions of damaged plain text, and
in many cases the consequences of an inadvertently altered mathematical
expression are potentially severe.
Nevertheless, nobody asserts that mathematical expressions *must* be in
plain text. All users of mathematics agree that some convention going
beyond plain text is needed. The two top contenders, TeX and MathML, take
very different approaches to the markup: TeX focuses only on defining the
visual appearance, while MathML focuses on the underlying mathematical
structure.
>My reasoning goes to the core of
>what separates text from meta-text. THAT, I believe, is the proper basis
>for this discussion, not the merits or demerits of any
>particular markup system.
And, given my comments above, it is not the task of plain text to indicate
damage and similar types of information.
Some people feel that the choice between plain-text and markup should not
be an all-or-nothing proposition. For example, Murray Sargent has been
working patiently on a minimal markup scheme for mathematics. His approach
is very clever, in that most of the simpler mathematical expressions look
the way one would have typed them on a single line in a typewritten
manuscript (for example: (a + b) / (c - d) instead of a built-up fraction
without parens). The raw form is therefore very readable, and you could
argue it is nearly plain text. Nevertheless, it is a form of markup, since
it needs special conventions, such as dropping the outermost parens when
building up expressions, as well as conventions for representing
superscripts and subscripts, etc.
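As a toy sketch (not Sargent's actual implementation, just an illustration
of the "drop the outermost parens" convention for a single fraction), a few
lines of Python can split such a linear form into a built-up
numerator/denominator pair:

```python
def build_fraction(linear):
    """Toy build-up of one linear-format fraction like '(a + b) / (c - d)'.

    Splits on '/' and drops the outermost parentheses of each side,
    mimicking the convention described above.
    """
    num, den = (part.strip() for part in linear.split("/"))

    def strip_parens(s):
        # Drop one outermost pair of parens, if present.
        if s.startswith("(") and s.endswith(")"):
            return s[1:-1].strip()
        return s

    return strip_parens(num), strip_parens(den)

print(build_fraction("(a + b) / (c - d)"))  # ('a + b', 'c - d')
print(build_fraction("x / 2"))              # ('x', '2')
```

A real implementation would of course need a full parser (nested parens,
precedence, superscripts), but the point stands: the source is ordinary
readable text, and the build-up is a convention layered on top of it.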
In this, it is similar to ideographic description sequences: if you have
software support to display a built-up sequence, the text can act like
formatting instructions; if you don't, you get a human-readable, symbolic
description language.
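For instance, a standard ideographic description sequence (shown here only
to illustrate that dual nature) is itself nothing but ordinary characters:

```python
# U+2FF0 is the ideographic description character for a left-to-right
# composition. Followed by two instances of U+6728 (the ideograph for
# "tree"), the sequence describes the two-trees-side-by-side character.
ids = "\u2FF0\u6728\u6728"

print(ids)       # displays as the three characters of the sequence
print(len(ids))  # 3 - three plain characters, no hidden structure
```

Software that understands the convention could render a single composed
glyph; software that doesn't simply shows three perfectly ordinary
characters.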
In none of these cases do you have the expectation that *all* software (or
even potentially all software) would be required to treat any characters as
other than perfectly ordinary graphical characters - although *some*
software can choose to follow specific conventions. That is very similar to
XML, where the source code is plain text, and the result is something else.
But in none of these cases is the information itself encoded in plain text
- it's encoded in a convention that uses plain text as a source form.
A./
This archive was generated by hypermail 2.1.5 : Wed Mar 09 2005 - 22:49:13 CST