Leigh Clark is a Postdoctoral Research Fellow in the School of Information & Communication Studies at University College Dublin. His research examines the communicative aspects of user interactions with speech interfaces, how context impacts perceptions of computer speech, and how linguistic theories can be implemented and redefined in speech-based HCI.
Benjamin R Cowan is an Assistant Professor at University College Dublin’s School of Information & Communication Studies. His research lies at the juncture of psychology, HCI and computer science, investigating how theoretical perspectives in human communication can be applied to understand phenomena in speech-based human-machine communication.
Justin Edwards is a graduate student in the School of Information & Communication Studies at University College Dublin. His research focuses on multitasking and interruptions as they relate to conversational interactions and subsequent design challenges for agent-initiated interactions.
Cosmin Munteanu is an Assistant Professor at the Institute for Communication, Culture, Information, and Technology at University of Toronto Mississauga. His research includes speech and natural language interaction for mobile devices, mixed reality systems, learning technologies for marginalized users, assistive technologies for older adults, and ethics in HCI research.
Christine Murad is a graduate student at the Technologies for Aging Gracefully lab in the Department of Computer Science at the University of Toronto. Her research looks at the usability and design of conversational voice interfaces, and explores design heuristics to aid intuitive and user-friendly conversational voice interaction.
Matthew Aylett is Chief Science Officer and a founder of CereProc Ltd, which offers unique emotional and characterful speech synthesis solutions. He has been awarded a Royal Society Industrial Fellowship to explore the role of speech synthesis in the perception of character in artificial agents.
Roger K Moore is Professor of Spoken Language Processing at the University of Sheffield. Much of his research has been based on insights from human speech perception and production. He was recently awarded the 2016 LREC Antonio Zampolli Prize for "Outstanding Contributions to the Advancement of Language Resources & Language Technology Evaluation within Human Language Technologies".
Jens Edlund is an Assistant Professor at KTH Royal Institute of Technology in Stockholm. His research combines linguistics and phonetics with engineering and speech technology. His publications cover many aspects of speech technology, with a particular focus on the analysis of human behaviour in conversations and the generation of humanlike behaviours in computers that speak.
Eva Szekely is a Postdoctoral Research Fellow at KTH Royal Institute of Technology in Stockholm. Her research focuses on modelling expressive and conversational phenomena in speech synthesis, and their application in situational contexts.
Patrick Healey is Professor of Human Interaction and leader of the Cognitive Science Research Group in the School of Electronic Engineering and Computer Science, Queen Mary University of London. His research analyses miscommunication: the processes by which people detect and recover from misunderstandings, and the implications for designing technologies to support human interaction.
Naomi Harte is an Associate Professor in Digital Media Systems in the School of Engineering at Trinity College Dublin. She is a Co-Founder of the ADAPT Centre and holds a Google Faculty Award. Her specialist area is Human Speech Communication. Her work involves the design and application of algorithms to enhance or augment speech communication between humans and technology.
Ilaria Torre is a Marie Skłodowska-Curie postdoctoral research fellow in the Department of Electronic and Electrical Engineering at Trinity College Dublin. Her research analyses how speech and other cues influence trusting behaviour in human-machine interaction, drawing inspiration from games and behavioural economics.
Emer Gilmartin is a Fulbright TechImpact Scholar in the School of Computer Science and Statistics at Trinity College Dublin. Her current work focuses on spoken dialogue technology, including the integration of spoken language technology into an automatic language tutor for refugees and migrants.
Philip Doyle is a speech UX researcher at Voysis Ltd with experience constructing development sets for training AI and examining user language behaviour. His work focuses on understanding partner models in human-computer dialogue contexts.