Designing Speech and Language Interactions for Mobiles and Wearables


Traditional interfaces are steadily giving way to mobiles and wearables. Yet when it comes to the modalities that enable our interactions, we have not yet embraced the most natural one: speech. Little HCI attention has been devoted to designing and developing spoken language or multimodal interaction techniques, especially for mobiles and wearables. Beyond the enormous recent engineering progress in processing such modalities, there is now sufficient evidence that many real-life applications do not require 100% accuracy in processing multimodal input to be useful, particularly when modalities complement each other. This multidisciplinary, one-day workshop will bring together interaction designers, usability researchers, and general HCI practitioners to analyze opportunities and directions for designing more natural interactions, especially with mobile and wearable devices, and to examine how recent advances in speech, language, acoustic, and multimodal processing can be leveraged.