Intended Audience: Managers, Software Engineers, Systems Analysts, Font Designers, Technical Writers, Testers, Web Administrators, Web Designers, End Users
Prior to Unicode, those of us who needed to compute in multiple languages had few options. We could use localized operating systems that handled one other language alongside English, using national character sets plus ASCII or extended ASCII. At the time, the commercial world was content with localized operating systems. However, anyone involved in language learning, or anyone who wanted to support multiple languages and character sets in a standard way, was out of luck. Long before the Internet took off and we all realized that not everyone speaks English, the U.S. government recognized that it needed not just internationalized applications but applications whose users could dynamically switch the working language.

This presentation will define multilingual processing requirements, trace the history of the government's use of Unicode as an imperative for multilingual processing, and present the lessons learned when building a solution and architecture that must support multiple languages. These lessons cover implementing multilingual processing with Unicode in every part of the architecture, and why Unicode is needed. Areas of investigation include front-end processing with input method editors (IMEs), middle-tier issues, networking issues, and database processing with Unicode. The audience will come away with a solid understanding of multilingual processing in the government, of how Unicode has been vital to supporting government requirements, and of how implementing a Unicode solution from the start saves money and valuable time.