Adaptation and Learning for Assistive Domestic Vocal Interfaces
Project information
Abstract: 


Voice control of the appliances we use in our daily lives is often perceived as a luxury. For home automation, a remote control is frequently the better fit, because most of us find it easier to push a button than to speak a command. For persons with a physical impairment, however, pushing a button is not always easy, and voice control becomes a viable alternative. What is a luxury for most people can mean a significant improvement in the quality of life of a person with a disability.


Nevertheless, vocal interfaces are currently not widely used in assistive devices, for several reasons. In ALADIN, we propose an approach based on learning and adaptation. The interface should learn what the user means by each command, which words he or she uses, and what his or her vocal characteristics are. Users should formulate commands as they like, using the words they prefer and addressing only the functionality they are interested in. Learning takes place through use of the device, i.e. by mining the vocal commands and the changes they cause in the device's state.
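
The learning-by-use idea can be illustrated with a toy sketch. This is not the project's actual method (the CLIN 2013 paper below models this with hierarchical HMMs); it is a minimal illustration of the principle that each spoken command is paired with the device-state change observed after it, and predictions are made from the accumulated associations. All names here (`CommandLearner`, the action labels) are hypothetical.

```python
from collections import Counter, defaultdict

class CommandLearner:
    """Toy learner: associates the words of spoken commands with the
    device-state change each command was observed to cause."""

    def __init__(self):
        # word -> Counter of actions that followed utterances containing it
        self.word_action_counts = defaultdict(Counter)

    def observe(self, utterance, action):
        # Mine one (command, device change) pair: every word in the
        # utterance gets credit for the observed action.
        for word in utterance.lower().split():
            self.word_action_counts[word][action] += 1

    def predict(self, utterance):
        # Vote: sum the per-action counts over the words of a new
        # utterance; return the best-supported action, or None.
        votes = Counter()
        for word in utterance.lower().split():
            votes.update(self.word_action_counts[word])
        return votes.most_common(1)[0][0] if votes else None

# The user phrases commands freely; the learner adapts from use alone.
learner = CommandLearner()
learner.observe("please turn on the light", "light_on")
learner.observe("switch the light off", "light_off")
learner.observe("turn on the lamp please", "light_on")
print(learner.predict("turn the light on"))  # -> light_on
```

Because the mapping is learned per user, vocabulary and phrasing never have to match a predefined grammar — the same design goal the project states above.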

Project Leader(s): 
Walter Daelemans
Guy De Pauw
External Collaborator(s): 
  • Centre for Processing of Speech and Images (PSI), ESAT, KULeuven (coordinator)
  • Centre for User Experience Research, Faculteit Sociale Wetenschappen, KULeuven
  • MOBILAB, Departement Industriële en Biowetenschappen, Katholieke Hogeschool Kempen
Period: 
01/01/2011 - 31/12/2014
Sponsor(s): 

IWT, Agentschap voor Innovatie door Wetenschap en Technologie

Publications + Talks

van de Loo, J., Gemmeke, J. F., De Pauw, G., Van hamme, H., & Daelemans, W. (2013). Semantic frame induction in an assistive vocal interface using hierarchical HMMs. Presented at the 23rd Meeting of Computational Linguistics in the Netherlands (CLIN 2013), Enschede, The Netherlands.