Personalization, interaction, and navigation in rich multimedia documents for print-disabled users

Access to text-based electronic information, whether in software applications, on the World Wide Web (WWW), or in eBooks, is a problem that has largely been solved for visually impaired individuals by using screen-reading programs combined with synthetic speech or refreshable braille displays. Even if this text information is embedded in a graphical user interface, it can be more or less readily extracted. However, access to truly graphic and multimedia information, including images, animations, and video material, is still highly problematic for readers who are visually impaired and for readers who are deaf or hard of hearing. The solution to this accessibility problem may well lie in the adoption of multimedia and formatting techniques already used in electronic environments such as the WWW to provide synchronization and coordination of multimedia materials.

In the MultiReader Project described here, funded by the European Union (EU), we have specifically explored the use of multimedia techniques to make such material accessible to readers with a range of print-related disabilities, including visual and hearing impairments as well as dyslexia. In this paper we discuss the use of content management approaches to provide personalized multimedia documents that suit the particular needs of individual print-disabled readers, in effect creating adaptable hypermedia documents. Our central hypothesis is that the needs of all readers cannot be addressed with one single multimedia document that is transformed in different ways for each user group, the so-called "one document for all" approach. Our approach instead involves providing actual alternative media to produce a variety of different views of the document in order to address particular user subsets (stereotypes (1)) and short-term individual preferences. Techniques for adaptation and personalization have not yet been applied to this area, although many ways to personalize navigation and interaction with documents for mainstream readers (without special reading needs) have been explored. (2)

Earlier work in the AVANTI (Added Value Access to New Technologies and Services on the Internet) project addressed the needs of physically disabled and blind people by adapting information at the lexical, syntactic, and semantic levels of interaction. (3) Access to kiosk and desktop applications was successfully provided by verbalizing the textual content through speech synthesis and replacing keyboard-based interaction techniques with single-switch operations. However, temporal relationships involving time-dependent lexical entries, such as audio or movies, were not considered in that application.

Addressing the needs of a stereotype requires both information about system properties suitable for a cluster of users and information about the behaviors and actions of those users. For example, Electronic Program Guides (EPGs) encode information about multiple temporal arrangements of different television genres, channels, and so forth for digital television. The television viewer interacts with an EPG and navigates through these time-dependent media. The interactive behavior of EPGs and the way that they represent the broadcast media can be modified, and alternatives based on several approaches to user modeling have been reported. (4) However, it is important to note that the broadcast media themselves are not modified in this process.
In contrast, access to multimedia documents by print-disabled individuals requires adaptation of the media themselves or the provision of alternative media, with the specific nature of the changes depending on a group's particular reading needs. Alternative media may differ in their visual appearance, spatial layout, or temporal arrangement. Considerable work has been undertaken in the context of the World Wide Web Consortium's Web Accessibility Initiative (WAI) (5) on how to integrate descriptions of images into Web documents, how to provide navigation in textual documents, forms, and tables, and how to adapt the visual appearance of pages to the needs of print-disabled readers.

This work has led to the concept of media enrichment, the provision of content in additional media alongside the original. Enrichment can involve, for example, text descriptions of images, subtitling of videos, or sign language translation of texts. It improves digital content and increases information accessibility while leaving the original material unchanged. The additional media used for enrichment are in some sense redundant because they specifically do not replace the original material; their importance lies in the support they provide to the user in understanding the original material.

The WAI advice on accessibility for multimedia material, such as animation and video, is very general. For example, WAI recommends using alternative discrete media, but does not specify how such media might be used. (6) In fact, for the visual components of multimedia, visually impaired individuals need audio description of purely visual elements, whereas individuals who are deaf or hard of hearing need subtitling or sign language translation. For time-dependent media such as video, enrichment is currently usually embedded in the original medium itself. (7) This strict synchronization does not allow the user to independently control the medium used for enrichment or its presentation. Markup languages such as SMIL (Synchronized Multimedia Integration Language) are required if the document is to provide a more flexible degree of adaptation.
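To illustrate how such markup can keep enrichment separate from, yet synchronized with, the original medium, the following sketch shows a minimal SMIL 2.0 fragment (not taken from the project; the file names and region layout are hypothetical). A video plays in parallel with an optional audio description and an optional caption stream, each guarded by a SMIL system test attribute so that a player includes it only when the corresponding user preference is switched on:

  <smil xmlns="http://www.w3.org/2001/SMIL20/Language">
    <head>
      <layout>
        <root-layout width="640" height="480"/>
        <region id="videoArea" top="0" height="360"/>
        <region id="captionArea" top="360" height="120"/>
      </layout>
    </head>
    <body>
      <par>
        <!-- original medium, left unchanged -->
        <video src="chapter1.mpg" region="videoArea"/>
        <!-- enrichment for visually impaired readers: audio description -->
        <audio src="chapter1-description.mp3" systemAudioDesc="on"/>
        <!-- enrichment for deaf or hard-of-hearing readers: captions -->
        <textstream src="chapter1-captions.rt" region="captionArea" systemCaptions="on"/>
      </par>
    </body>
  </smil>

Because each enrichment is a separate media object rather than being embedded in the video itself, a player can offer it, suppress it, or present it in a different region without modifying the original material.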