Main characteristics of the operator to be taken into account when designing an interactive interface


2.8 Multimodal User Interfaces
Multimodality concerns the identification of the most effective combination of the available interaction modalities. A simple vocabulary for this purpose was provided by the CARE properties (Coutaz et al., 1995):

  • complementarity: the considered part of the interface is supported partly by one modality and partly by another;

  • assignment: the considered part of the interface is supported by one assigned modality;

  • redundancy: the considered part of the interface is supported by both modalities;

  • equivalence: the considered part of the interface is supported by either one modality or the other.

Manca et al. (2013) describe in more detail how to exploit these properties in the design and development of multimodal user interfaces: they can be applied to composition operators, to interaction elements, and to output-only elements. Interaction elements can be decomposed further into three parts: prompt, input, and feedback, each of which can be associated with a different CARE property. In this approach, equivalence can be applied only to input, since only there can the user choose through which modality to enter a value; redundancy can be applied to prompt and feedback but not to input, because once a value has been entered through one modality it makes no sense to enter it again through another.
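To make this decomposition concrete, the sketch below (in JavaScript, the language used for the implementations discussed later in this section) models an interaction element whose prompt, input, and feedback parts each carry a CARE property, and checks the constraints just stated. The object schema and helper names are illustrative assumptions, not part of the Manca et al. (2013) implementation.

```javascript
// Hypothetical schema: a CARE property attached to each of the three
// parts of an interaction element (prompt, input, feedback).
const CARE = {
  COMPLEMENTARITY: "complementarity",
  ASSIGNMENT: "assignment",
  REDUNDANCY: "redundancy",
  EQUIVALENCE: "equivalence",
};

// Constraints from the text: equivalence only on input;
// redundancy on prompt and feedback, but never on input.
const ALLOWED = {
  prompt:   [CARE.ASSIGNMENT, CARE.REDUNDANCY, CARE.COMPLEMENTARITY],
  input:    [CARE.ASSIGNMENT, CARE.EQUIVALENCE, CARE.COMPLEMENTARITY],
  feedback: [CARE.ASSIGNMENT, CARE.REDUNDANCY, CARE.COMPLEMENTARITY],
};

function validateElement(element) {
  for (const part of ["prompt", "input", "feedback"]) {
    const property = element[part];
    if (!ALLOWED[part].includes(property)) {
      throw new Error(`${property} is not allowed on the ${part} part`);
    }
  }
  return true;
}

// Example: a text field whose prompt and feedback are rendered both
// graphically and vocally, while the value can be entered through
// either modality at the user's choice.
validateElement({
  prompt: CARE.REDUNDANCY,
  input: CARE.EQUIVALENCE,
  feedback: CARE.REDUNDANCY,
});
```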
Figure 2.15 shows a general architecture for supporting adaptive multimodal user interfaces. A context manager detects events related to the user, the technology, the environment, and social aspects. The adaptation engine receives the descriptions of the user interface and the possible adaptation rules. Such descriptions can be obtained through authoring environments at design time or generated automatically through reverse engineering tools at run time. When an event associated with an adaptation rule occurs, the corresponding action part should be executed. For this purpose three options are possible, as sketched in code after this list:

  • complete change of interaction modality: the adapter for the new modality is invoked to perform the complete adaptation, and the new user interface is then generated;

  • some change in the structure of the current user interface: its logical description is modified and the new implementation is generated from it;

  • small changes in the current user interface, e.g. changes to some attributes of some elements: these can be performed directly in the implementation through adaptation scripts.
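A minimal event-condition-action dispatcher covering these three options might look like the following sketch; the rule format and the adapter, generator, and script hooks are hypothetical stand-ins for the components of Figure 2.15, not the actual engine described there.

```javascript
// Stubs standing in for the real adapter, generator, and script runner.
const adaptToModality = (desc, m) => ({ ...desc, modality: m });
const transformDescription = (desc, t) =>
  ({ ...desc, transforms: [...(desc.transforms ?? []), t] });
const generateUI = (description) => ({ description, implementation: {} });
const runAdaptationScript = (impl, s) => { impl.lastScript = s; };

// Hypothetical adaptation rules: each reacts to a context event and
// names one of the three kinds of action described above.
const rules = [
  { event: "noise-level-high",    action: { kind: "change-modality", target: "graphical" } },
  { event: "screen-size-reduced", action: { kind: "restructure",     transform: "collapse-menus" } },
  { event: "ambient-light-low",   action: { kind: "tweak",           script: "increase-contrast" } },
];

function applyAdaptation(event, ui) {
  for (const rule of rules.filter((r) => r.event === event)) {
    switch (rule.action.kind) {
      case "change-modality":
        // Option 1: invoke the adapter for the new modality,
        // then generate a completely new user interface.
        ui = generateUI(adaptToModality(ui.description, rule.action.target));
        break;
      case "restructure":
        // Option 2: modify the logical description and regenerate
        // the implementation from it.
        ui = generateUI(transformDescription(ui.description, rule.action.transform));
        break;
      case "tweak":
        // Option 3: change attributes directly in the implementation
        // through an adaptation script; no regeneration needed.
        runAdaptationScript(ui.implementation, rule.action.script);
        break;
    }
  }
  return ui;
}

// Example: a noisy environment triggers a full switch to the
// graphical modality.
let ui = generateUI({ modality: "vocal" });
ui = applyAdaptation("noise-level-high", ui);
```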


Figure 2.15: A general architecture for multimodal adaptation
It is now possible to obtain multimodal applications on the Web as well. A first possibility was provided by X+V, a language integrating HTML and VoiceXML; however, that language is no longer supported by current browsers.
In Manca et al. (2013) a novel solution is proposed, obtained by extending HTML and CSS to access Google's support for the vocal part, through CSS annotations interpreted by specific JavaScripts. Desktop applications can exploit Chrome extensions whose JavaScripts access the Google ASR and TTS APIs according to such CSS annotations. This implementation is not yet possible with the mobile version of Chrome; thus, mobile applications need to create instances of Web View components that load the Web pages and access the ASR and TTS APIs through Java.

The first empirical tests of this solution for context-dependent multimodal adaptation gave encouraging results. User feedback pointed out that users like to have control over the distribution of modalities, so that personal preferences can be supported. It also turned out that the choice of modalities should take into account the tasks to support, beyond the current context of use: for example, long query results are inherently preferable to present graphically, since the vocal modality is not persistent and by the time the last results are rendered vocally the user may have forgotten the initial ones. Another finding is that mixing modalities at the granularity of parts of a single UI element is not always considered appropriate; for example, for a text field that has to be selected graphically, it is not perceived as meaningful to then ask the user to enter its value vocally.
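A minimal sketch of this style of solution, assuming desktop Chrome and its Web Speech API (webkitSpeechRecognition for ASR, speechSynthesis for TTS), is given below. The data-vocal-* attribute names are hypothetical stand-ins for the CSS annotations of Manca et al. (2013), whose actual vocabulary is not reproduced here.

```javascript
// Runs in desktop Chrome. The data-vocal-prompt / data-vocal-input
// attribute names are illustrative, not the annotations actually
// defined in Manca et al. (2013).

// TTS: speak the prompt of every annotated element (a redundant
// prompt, rendered both graphically and vocally).
function speakPrompts() {
  for (const el of document.querySelectorAll("[data-vocal-prompt]")) {
    speechSynthesis.speak(new SpeechSynthesisUtterance(el.dataset.vocalPrompt));
  }
}

// ASR: fill a field vocally (an equivalent input: the user may
// either type or dictate the value).
function listenInto(field) {
  const recognition = new webkitSpeechRecognition();
  recognition.lang = "en-US";
  recognition.onresult = (event) => {
    field.value = event.results[0][0].transcript;   // vocal input
    speechSynthesis.speak(                          // vocal feedback
      new SpeechSynthesisUtterance(`You entered ${field.value}`));
  };
  recognition.start();
}

// Example wiring: speak prompts on load, and dictate into any field
// marked for vocal input when it receives focus.
document.addEventListener("DOMContentLoaded", () => {
  speakPrompts();
  for (const field of document.querySelectorAll("[data-vocal-input]")) {
    field.addEventListener("focus", () => listenInto(field));
  }
});
```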
