Communication is one of the most important parts of human life. As quality of life improves in modern society, more attention is being paid to people with disabilities. The World Federation of the Deaf (WFD) reports that there are more than 70 million deaf people in the world. Deaf people need to be able to communicate effectively, access information, and influence the world around them by any appropriate method. Deaf Human-Computer Interaction (DHCI) has the potential to become an indispensable field, and audio-visual based HCI is probably one of its most widespread areas. For orally educated deaf people, lip reading remains the main modality for perceiving speech. Research on lip reading has a long history and has recently attracted wide interest worldwide. Prior work has well established that visual information from the speaker's face enhances speech perception in noisy environments. Moreover, even with clear auditory speech, visual information remains essential for both normal-hearing and hearing-impaired people. In the literature, there is a large body of work on lip reading.
Currently, deep neural networks (DNNs) are very popular and have achieved great success in lip reading. However, even with high lip-reading performance, speech cannot be thoroughly perceived without knowledge of the semantic context. The state-of-the-art 'Watch, Listen, Attend and Spell' (WLAS) network still obtains a 23.8% word error rate on the 'Lip Reading Sentences' (LRS) dataset. Lip reading can hardly reach perfection, given the inherent ambiguity of visual patterns.
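For reference, the word error rate quoted above is the standard edit-distance metric: the minimum number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length. A minimal sketch in Python (the function name and example sentences are illustrative, not taken from the WLAS work):

    def word_error_rate(reference: str, hypothesis: str) -> float:
        """WER = word-level Levenshtein distance / number of reference words."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i  # i deletions
        for j in range(len(hyp) + 1):
            dp[0][j] = j  # j insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + cost)
        return dp[len(ref)][len(hyp)] / len(ref)

    # One substituted word in a four-word reference gives 25% WER.
    print(word_error_rate("the cat sat down", "the bat sat down"))  # 0.25
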
Apart from lip reading, deaf people can use several other communication approaches.
Nowadays, deaf or hard-of-hearing people use Sign Language, Cued Speech, or auditory speech (since some of them are equipped with cochlear implants), and some of them, especially children, are flexible and fluent in all of these.
This special track concerns audio-visual speech processing, for example lip reading, sign language, silent speech, and Cued Speech processing. The topic is very meaningful, since it can bring great benefit both to normal-hearing people and to people who have difficulties in communication, and it deserves more attention. The audio-visual problem may involve audio speech and image/video sequence processing, gesture analysis (lip, body, emotion, and hand movement), multimodal fusion, blind source separation, and even tensor decomposition; a minimal fusion sketch is given below. The methodologies cover signal processing, computer vision, human-computer interaction, machine learning, and deep learning. In particular, machine learning and deep learning methods have recently proved very successful in these fields and achieve satisfactory performance.
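To make the multimodal fusion mentioned above concrete, here is a minimal decision-level (late) fusion sketch in Python; the weight value, vocabulary size, and probability vectors are hypothetical placeholders, not a prescribed method:

    import numpy as np

    def late_fusion(audio_probs: np.ndarray,
                    visual_probs: np.ndarray,
                    audio_weight: float = 0.7) -> int:
        """Decision-level fusion: weighted average of per-class
        posteriors from an audio model and a visual (lip) model."""
        fused = audio_weight * audio_probs + (1.0 - audio_weight) * visual_probs
        return int(np.argmax(fused))  # index of the predicted class/word

    # Hypothetical posteriors over a 3-word vocabulary
    audio_probs  = np.array([0.5, 0.3, 0.2])   # e.g., from an acoustic model
    visual_probs = np.array([0.2, 0.6, 0.2])   # e.g., from a lip-reading model
    print(late_fusion(audio_probs, visual_probs))  # 0 with audio_weight=0.7
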


Prospective authors are invited to submit original papers on topics including, but not limited to:

Audio-visual speech recognition
Audio-visual feature extraction
Human and machine models of multimodal integration
Speech to vision and vision to speech synthesis
Lip reading, Sign Language, Cued Speech, and silent speech processing
Audio-visual speech perception, including MRI and EEG data
Role of gestures accompanying speech
Deep learning in audio-visual speech processing
Language analysis and processing
Human-computer interaction


Submission:
1. Inform the chairs, giving the title of your contribution
2. Submission URL
Please select Track Preference as DCAVSP


Special track
DCAVSP: Deaf Community-oriented Audio-Visual Speech Processing

SIGNAL 2018, The Third International Conference on Advances in Signal, Image and Video Processing
May 20, 2018 to May 24, 2018 - Nice, France
http://www.iaria.org/conferences2018/SIGNAL18.html


The DCAVSP chairs:
Li Liu
Gang Feng
Denis Beautemps


Important Deadlines
- Inform the chairs: as soon as you decide to contribute
- Submission: February 7, 2018
- Notification: March 7, 2018
- Registration: March 21, 2018
- Camera ready: April 2, 2018
Note: These deadlines are somewhat flexible, provided arrangements are made ahead of time with the chairs.


Contribution Types
- Regular papers [in the proceedings, digital library]
- Short papers (work in progress) [in the proceedings, digital library]
- Posters: two pages [in the proceedings, digital library]
- Posters: slide only [slide-deck posted on www.iaria.org]
- Presentations: slide only [slide-deck posted on www.iaria.org]
- Demos: two pages [posted on www.iaria.org]


Paper Format
- See: http://www.iaria.org/format.html
- Before submission, please check and comply with the editorial rules: http://www.iaria.org/editorialrules.html


Publications
- Extended versions of selected papers will be published in IARIA Journals: http://www.iariajournals.org
- Print proceedings will be available via Curran Associates, Inc.: http://www.proceedings.com/9769.html
- Articles will be archived in the free access ThinkMind Digital Library: http://www.thinkmind.org