> Frank da Cruz wrote on 2000-03-24 18:30 UTC:
> > That's fine for browsers (except Lynx!).
> Lynx can easily recode incoming CP1252 and Unicode-NCR HTML files into
> UTF-8 and then output them on xterm, which does now support the full
> MES-3 repertoire and more with UTF-8.
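[The recoding step described above can be sketched roughly as follows, e.g. in Python; this is a minimal illustration of the idea, not Lynx's actual implementation, and `recode_to_utf8` is a made-up name:]

```python
# Hypothetical sketch: interpret incoming bytes as CP1252, expand
# HTML numeric character references (&#NNNN; / &#xHHHH;), and
# re-encode the result as UTF-8.
import html


def recode_to_utf8(raw: bytes) -> bytes:
    text = raw.decode("cp1252")   # CP1252 bytes -> Unicode text
    text = html.unescape(text)    # expand NCRs and named entities
    return text.encode("utf-8")   # emit UTF-8 for the terminal


sample = b"\x93quoted\x94 and an NCR: &#8212;"
print(recode_to_utf8(sample).decode("utf-8"))
```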
But why should Lynx and every other application have to know every
character set in the world? If CP1252, then why not any other CP? Why
not EVERY other one? Do you know how many code pages are in the IBM
Registry? As of about 1992, there were well over 700. I'm sure the
number has increased since then. And that's only IBM.
Hey, somebody has to stick up for standards. When we don't follow them,
we make more work for ourselves, more grief for our users, and more
difficulties for the information archaeologists of the future. And it's
all completely unnecessary. The only gain from ignoring standards is to
the companies who do it. Bending the rules for their benefit only puts
you to work for them as unpaid labor, thereby compounding the original
transgression and -- worse -- legitimizing it.
When our philosophy becomes "every software package must support every
encoding ever made up by anybody", then it becomes very difficult for small
companies to produce software -- only the big ones can afford to hire the
warehouses full of programmers needed to keep up with every crazy thing that
pops up in the world. Why should anybody adopt Unicode when they can just
keep piling on the code pages?
In such an atmosphere, good ideas no longer have any value because they
can't be put into practice without also simultaneously supporting all the
bad ideas, which only the rich and powerful can afford to do. This is not
a good direction.
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:21:00 EDT