Multimodal user interfaces (Computer systems)

  • Topic
System number: 987007547418605171
Information for Authority record
Name (Hebrew)
מנשקי משתמשים רב-אפנויות (מערכות מחשב)
Name (Latin)
Multimodal user interfaces (Computer systems)
Other forms of name
MMUIs (Multimodal user interfaces)
See also from tracing (topical name)
User interfaces (Computer systems)
Other Identifiers
Wikidata: Q738567
Library of Congress: sh2009000017
Sources of Information
  • Work cat.: Harelick, M. Multimodal dialogue for a fax application, 1995:
  • Handbook of research on user interface design and evaluation for mobile technology, 2008:
  • Raisamo, R. Multimodal human-computer interaction, 1999:
  • Australasian Computer Human Interaction Conference (2007 : Adelaide (S. Aust.)). Proceedings of the 2007 Conference on Computer-Human Interaction Special Interest Group (CHISIG) of Australia on Computer-Human Interaction, 2007:
  • Virtual reality, 1995:
Wikipedia description:

Multimodal interaction provides the user with multiple modes of interacting with a system: a multimodal interface offers several distinct tools for the input and output of data. Multimodal human-computer interaction aims at free, natural communication between users and automated systems, with flexible input (speech, handwriting, gestures) and output (speech synthesis, graphics). Two major groups of multimodal interfaces focus on alternate input methods and on combined input and output.

Multiple input modalities enhance usability and particularly benefit users with impairments; mobile devices, for example, often employ XHTML+Voice to combine spoken and visual input. Multimodal biometric systems use several biometrics to overcome the limitations of any single one, and multimodal sentiment analysis classifies sentiment from text, audio, and visual data together. GPT-4, a multimodal language model, integrates several modalities for improved language understanding. Multimodal output systems present information through visual and auditory cues and may also use touch and olfaction.

Multimodal fusion integrates the inputs from the different modalities, employing recognition-based, decision-based, or hybrid multi-level fusion. Ambiguities in multimodal input are addressed through prevention, a-posteriori resolution, and approximation resolution methods.
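To make the decision-based (late) fusion mentioned above concrete, here is a minimal Python sketch. It is illustrative only: the modality names, the weights, and the ambiguity margin are assumptions, not taken from this record or the cited works. Each recognizer contributes ranked (label, confidence) hypotheses; the fuser combines them by weighted voting and flags near-ties as ambiguous, a simple stand-in for a-posteriori resolution.

    # Minimal sketch of decision-level (late) multimodal fusion.
    # Hypothetical example: modality names, weights, and the ambiguity
    # margin are illustrative assumptions, not from the catalogued works.
    from collections import defaultdict

    def fuse_decisions(modality_outputs, weights, ambiguity_margin=0.1):
        """Combine per-modality (label, confidence) hypotheses.

        modality_outputs: dict mapping modality name -> list of
            (label, confidence) pairs from that recognizer.
        weights: dict mapping modality name -> trust weight in [0, 1].
        Returns (best_label, score, ambiguous); `ambiguous` is True when
        the top two fused scores are closer than ambiguity_margin,
        signalling that further resolution (e.g. asking the user) is needed.
        """
        scores = defaultdict(float)
        for modality, hypotheses in modality_outputs.items():
            w = weights.get(modality, 1.0)
            for label, confidence in hypotheses:
                scores[label] += w * confidence

        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        best_label, best_score = ranked[0]
        ambiguous = len(ranked) > 1 and best_score - ranked[1][1] < ambiguity_margin
        return best_label, best_score, ambiguous

    # Example: a speech recognizer and a gesture recognizer each rank
    # candidate commands; fusion resolves the cross-modal disagreement.
    outputs = {
        "speech":  [("open_fax", 0.6), ("open_file", 0.4)],
        "gesture": [("open_fax", 0.7), ("cancel", 0.3)],
    }
    weights = {"speech": 0.5, "gesture": 0.5}
    print(fuse_decisions(outputs, weights))
    # -> ('open_fax', 0.65, False)

Recognition-based (early) fusion would instead merge the feature streams before classification; the decision-level form sketched here is the easiest to retrofit onto independent recognizers.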
