From: John H. Jenkins (firstname.lastname@example.org)
Date: Thu Mar 26 2009 - 14:19:02 CST
First of all, I don't see why having a virtual machine programmed in
dedicated character code points is any better than having a virtual
machine programmed via ASCII. It might be more compact, but from an
implementation standpoint, that's about its only advantage.
To show what you're up against, just look over the history of Java and
remember that it started out with a major player in IT pushing it.
And Java distinguishes the text used to write programs from the
portable byte code used to implement them. You, on the other hand,
are going against fifty years' experience in CS of making a
distinction between the text used to write programs and the machine
language used to execute them.
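That split is easy to see in practice. A small sketch (using Python rather than Java, purely because it fits in a few self-contained lines — the same point holds for javac/.class files): the text a programmer edits and the bytecode a virtual machine executes are distinct artifacts with distinct representations.

```python
# Source text vs. executable bytecode: the same program, two artifacts.
source = "x = 2 + 3"  # the program as plain text -- what a human edits

# compile() turns the text into a code object holding VM bytecode.
code = compile(source, "<example>", "exec")

print(type(source).__name__)   # str  -- plain text
print(type(code).__name__)     # code -- a compiled object, not text
print(list(code.co_code)[:4])  # raw bytecode bytes, opaque to a text editor
```

The text is what gets displayed, edited, and searched; the bytecode is what the VM runs. Collapsing the two into one stream of "characters" is exactly the distinction the proposal discards.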
You'll have to convince people to write editors, compilers/
assemblers, debuggers, and so on. You'll have to get a set of experts
together to hammer out the details of the architecture and syntax.
You'll have to determine how it will be displayed on systems that
support it. What is supposed to happen if the user has a regular font
installed covering these code points? How will the user distinguish
cases where they want to *see* the code and cases where they want to
*execute* it? What happens if they don't want to do either? What
triggers execution, anyway? What happens when execution stops? What
happens if execution *fails* to stop? What happens if the code
attempts something illegal like dividing by zero? What will happen if
this is embedded, say, in a URI or email or a file name? What happens
if it's embedded in a word processing document? artwork? spreadsheet?
record in a database? instant message? Web page? comment on
a blog? somebody's character name in an MMORPG? What are systems
that don't support it supposed to do with it? How will you prevent
people from using it to write viruses, Trojan horses, and other
malware? Where does input come from? Can it interact with a GUI?
How do you display output? What is supposed to happen if you print
it? Where is data stored? How will this interact with the various
operating systems in existence? How does it interact with the
applications on those systems? There are a hundred other questions
that will need specific answers.
*Plus* you'll have to convince people to use it, and enough of them
that you reach critical mass.
Once that is done, you'll have to show that there is a practical
reason why this has to be implemented in Unicode and convince the UTC
and WG2 that encoding it as plain text is the best solution to the
practical problem(s) involved.
By and large, any proposal for something which is not at least
arguably "plain text" will not be favorably looked upon as a candidate
for inclusion in Unicode. (N.B., I said "arguably". Personally, I
don't think this is even arguably plain text and is, at best, a
solution in search of a problem, but I don't want to resurrect any of
the past wars we've seen on certain candidates proposed for
encoding.) Unicode has, after all, more than once woken up with a bad
hangover, looked at the character block lying next to it, and said to
itself, "Did I *really* encode *that*‽‽"
This archive was generated by hypermail 2.1.5 : Thu Mar 26 2009 - 14:21:32 CST