The history of information technology has always had a bias toward Western languages, and particularly toward English, making it less accessible to those living in other parts of the globe. One of the earliest, most commendable, and still ongoing efforts to counter this west-centricity was the formation of the Unicode Consortium, the goal of which is to ensure that the character sets of all modern (and even many no longer spoken) languages can be understood by computers everywhere. (You can read an appreciation of the Unicode Consortium and its work here.)
For those with disabilities, of course, there can be a second layer of challenge to accessing the Web and all that it can offer, requiring special tools to make equal opportunities available to all. As ever with technology, new layers continue to be built on top of old ones to accomplish new and more sophisticated tasks, so the same accessibility effort must often be replicated at each successive layer of technology or abstraction. Historically, that has meant that just as those with accessibility issues begin to achieve a meaningful degree of access to one plateau of technology, the next generation of products reaches the market. Unless existing tools are upgraded or new ones created, such users will at best be relegated to less state-of-the-art platforms, and at worst risk being stranded as the tools and platforms they rely on are no longer supported.
So it is with linguistic accessibility, as the Web not only becomes more truly world-wide, but the devices able to access it proliferate as well. And some of those devices are (at present) ill-equipped to provide equal access to all. For those with disabilities (such as less than perfect vision), the small screens of mobile devices present special challenges: not only can less text be displayed, but the size of that text may also be reduced and the screens may be difficult to navigate. At the same time, experts estimate that mobile devices may become the primary means of accessing the Web in Third World countries. So where would this leave the visually disabled?
Happily, a new initiative has been launched by the W3C to level the linguistic playing field. And in an interesting example of how “what goes around can come around” in good ways as well as bad, a new project launched by the University of Manchester in England shows how techniques created to assist the disabled in their use of full-size screens may help everyone access the Web on small-screened mobile devices.
Let’s look at the W3C project first.
It is not surprising that the W3C would be one of the organizations to address this issue. The W3C has a well-deserved reputation as one of the most socially aware consortia in the world, and a long history of working to ensure that the Web is accessible to all, regardless of physical disability or native language. Last week, it announced new efforts on a standard that addresses both linguistic and disability issues: speech synthesis.
The new initiative acknowledges the expectation that in the near future the Web will not only include far more content in many more languages than today, but that a rapidly expanding percentage of Web users world-wide will be accessing that content via audio-capable mobile devices. Accordingly, the W3C will expand the number of languages that its Speech Synthesis Markup Language (SSML) can support, so that more users in more countries will be able to access text in audio, speech-synthesized form. I’ve pasted in more from the W3C press release at the end of this blog entry.
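To give a flavor of what the standard looks like, here is a minimal SSML 1.0 fragment. SSML already provides some hooks for language and pronunciation (`xml:lang`, `voice`, `phoneme`, `emphasis`, `prosody`); the revision effort discussed in the press release below aims to cover features these hooks handle poorly today, such as lexical tone. The sample text and phonetic values here are placeholders for illustration only:

```xml
<?xml version="1.0"?>
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-US">
  This sentence is rendered by an English voice.
  <!-- Switch voices for a passage in another language -->
  <voice xml:lang="zh-CN">
    <!-- Tone can currently only be pinned down via explicit phonetics -->
    <phoneme alphabet="ipa" ph="ma&#741;">妈</phoneme>
  </voice>
  <emphasis level="strong">This phrase is stressed.</emphasis>
  <prosody rate="slow" pitch="+10%">And this one is slower and higher.</prosody>
</speak>
```

As the fragment suggests, a synthesizer that supports only a handful of voices, or a language whose meaning-bearing features fall outside these elements, quickly hits the limits of SSML 1.0 — which is precisely the gap the workshop participants want the revision to close.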
The University of Manchester, on the other hand, is repurposing techniques developed to assist the disabled in order to make Web pages more useful and easier to navigate on the tiny screens of mobile devices. Last week, it announced a three-year project called RIAM, for “Reciprocal Interoperability between Accessible and Mobile Webs.” As described in a press release:
The RIAM project will draw on the experiences of blind and visually impaired users and the technologies they use to surf the internet, such as screenreaders, in a bid to simplify the content of conventional websites so that they can be accessed via the mobile web….[Project leader Dr. Simon] Harper said: “Mobile web users are handicapped not by physiology but technology. Not only is the screen on the majority of phones very small, limiting the user’s vision, but the information displayed is difficult to navigate and read.
“Add to this the fact that the content displayed is determined by a service provider and not the user and you have a web which is not very accessible or user friendly. Our aim is to change this by enabling web accessibility and mobile technologies to interoperate….Screenreaders used by blind or visually impaired web users are very good at stripping web pages down into text only formats but what we want to achieve are content rich formats which are just as accessible.”
The University of Manchester team plans to build a “validation engine” to determine whether a given website is both accessible and mobile-web compatible, and if not, will apply a transcoding program to “de-clutter” and reorder its pages into a mobile-friendly format.
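The press release gives no implementation details, but the “de-cluttering” idea — inherited from screenreaders that strip pages down to their essential text — can be sketched in a few lines. The following is a toy illustration only, not RIAM’s actual transcoder: it drops the content of tags a small screen (or a screenreader) has no use for, and keeps the readable text. The choice of which tags count as clutter is my assumption:

```python
from html.parser import HTMLParser

# Tags whose content is treated as clutter here: scripts, styles,
# and navigation/sidebar markup. (An assumed list, for illustration.)
SKIP_TAGS = {"script", "style", "nav", "aside"}

class Declutterer(HTMLParser):
    """Toy transcoder: keep readable text, drop clutter."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # >0 while inside a clutter tag
        self.lines = []

    def handle_starttag(self, tag, attrs):
        if tag in SKIP_TAGS:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP_TAGS and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0:
            text = data.strip()
            if text:
                self.lines.append(text)

def declutter(html: str) -> str:
    parser = Declutterer()
    parser.feed(html)
    return "\n".join(parser.lines)

page = """<html><head><style>body{color:red}</style></head>
<body><nav><a href="/">Home</a> | <a href="/about">About</a></nav>
<h1>Headline</h1><p>Body text.</p>
<script>track();</script></body></html>"""

print(declutter(page))  # -> Headline / Body text. on two lines
```

A real system would of course go further — reordering content, preserving link structure, and (as Dr. Harper notes) keeping the result content-rich rather than text-only — but the principle of machine-simplifying a page for a constrained display is the same.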
While virtue should be reward enough, it’s interesting to see this example of how doing good can help everyone do well.
# # # # #
The following is taken from the press release issued by the W3C on August 3:
http://www.w3.org/ — 3 August 2006 — Today, the W3C announced the results of the second Workshop on Speech Synthesis Markup Language, where speech experts from around the world presented ideas for expanding the range of languages supported by SSML 1.0.
The results include a new initiative to revise SSML 1.0 in ways that support a wider range of the world’s languages, including the widely spoken languages of Mandarin, Hindi, Arabic, Russian, Hebrew, and other languages spoken in India and Asia.
These results reinforce important discoveries reached at the first SSML Workshop in Beijing late last year, which provided critical information on many Asian languages.
The announcement of the second workshop results serves as a call for participation to researchers around the world to join the effort to improve the specification.
Voice Applications and Under-represented Languages Are Growing on the Web
It is estimated that within three years, the World Wide Web will contain significantly more content from currently under-represented languages, such as Chinese and Indian language families.
In many of the regions where these languages are spoken, people can access the Web more easily through a mobile handset than through a desktop computer. There are more than 10 times as many cellphones in the world today as there are Internet-connected PCs.
An improved SSML will increase the ability of people world-wide to listen to synthesized speech through mobile phones, desktop computers, or other devices, greatly extending the reach of computation and information delivery into nearly every corner of the globe.
Expanding the Range of Languages Supported in Standards is Critical
The participants in the W3C Workshop reached conclusions that support the expansion of the SSML standard.
For example, the Workshop participants expressed the need to add to the standard the ability to represent features of spoken language, including tone, syllabic stress or accent, and duration in a machine-readable fashion. In some languages, these attributes are an important factor in determining meaning.
The goal of the next phase is to identify a few basic mechanisms that can greatly extend the power of SSML to better cover more of the world’s languages.
W3C Invites Current and New Members to Join Efforts
W3C is moving forward on enhancing and expanding the capabilities of SSML, based on the results of the Workshop. Organizations, particularly those with native understanding of the languages of Japan, China, Korea, Russia and India are encouraged to join W3C and participate in the W3C Voice Browser Activity.
For further blog entries on Standards and Society, click here