Creation and Analysis of Emotional Speech Database for Multiple Emotions Recognition
Ryota Sato, Ryohei Sasaki, Norisato Suga, Toshihiro Furukawa (Tokyo University of Science; Hosei University), Japan

Towards Speech Entrainment: Considering ASR Information in Speaking Rate Variation of TTS Waveform Generation
Mayuko Okamoto, Sakriani Sakti, Satoshi Nakamura (Nara Institute of Science and Technology (NAIST); NAIST / RIKEN AIP), Japan

Improving Valence Prediction in Dimensional Speech Emotion Recognition Using Linguistic Information
Bagus Tris Atmaja, Masato Akagi (Institut Teknologi Sepuluh Nopember Surabaya (ITS); Japan Advanced Institute of Science and Technology (JAIST)), Indonesia / Japan
Oriental-COCOSDA 2020 will move to a fully virtual conference for the safety of all participants during the COVID-19 pandemic.
COCOSDA, an acronym for the International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques, was established in 1991 to promote international cooperation in developing speech corpora and coordinating assessment methods for speech input/output systems. In 1994 it was proposed that a sub-organization be established for the Oriental community, which shares linguistic features unique to the region. After a preparatory meeting held by interested members in Hong Kong in 1997, annual meetings have been held since 1998. The community has enjoyed growing participation and enthusiastic interest in organizing future meetings, ensuring a promising prospect of sustained activity.

The purpose of Oriental COCOSDA is to exchange ideas, share information, and discuss regional matters on the creation, utilization, and dissemination of spoken language corpora of oriental languages, as well as on assessment methods for speech recognition/synthesis systems, and to promote speech research on oriental languages.

The 23rd Conference of Oriental COCOSDA will be hosted by the University of Computer Studies, Yangon (UCSY). With Myanmar hosting Oriental COCOSDA for the first time, we turn a new leaf, continuing to boost research and development in speech technology and to foster enthusiasm for it across East and Southeast Asia.
Technically Co-sponsored by IEEE Myanmar Subsection.
IEEE Conference Details: Webpage
We invite papers describing substantial, original and unpublished research covering aspects of speech databases, assessments and speech I/O, including, but not limited to:
Important Dates:
Full paper submission: 30th July 2020 (extended from 10th July 2020, then 20th July 2020)
Notification of Acceptance: 31st Aug 2020 to 3rd Sep 2020 (originally 10th Aug 2020, then 31st Aug 2020)
Final Manuscript Submission: 30th Sep 2020 (extended from 31st Aug 2020)
Low-resource machine translation is not only an important application but also poses very interesting machine learning challenges, such as learning with limited supervision, learning from auxiliary tasks, and effectively leveraging large amounts of unlabeled data. I will briefly overview a research framework centered on three pillars: the construction of better benchmarks, the design of better learning algorithms, and the analysis of these datasets and models. Research in this area can be described as a never-ending cycle over these three related efforts. I will then focus the remainder of the talk on recent work on unsupervised and low-resource machine translation using iterative self-training and back-translation.
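As a rough, self-contained illustration of the back-translation cycle mentioned above: two translation models are bootstrapped from a small seed corpus, monolingual data in each language is translated into synthetic parallel pairs, and the models are retrained on the mix. Every interface here (`train_model`, the `Model` type) is a hypothetical placeholder, not the speaker's actual system.

```python
from typing import Callable, List, Tuple

Pair = Tuple[str, str]        # (source sentence, target sentence)
Model = Callable[[str], str]  # a sentence-level translator

def train_model(pairs: List[Pair]) -> Model:
    """Placeholder trainer: memorises seen pairs, identity otherwise.
    A real system would train a phrase-based or neural MT model here."""
    table = dict(pairs)
    return lambda src: table.get(src, src)

def iterative_back_translation(
    parallel: List[Pair],   # small seed corpus of (src, tgt) pairs
    mono_src: List[str],    # monolingual source-language sentences
    mono_tgt: List[str],    # monolingual target-language sentences
    rounds: int = 3,
) -> Tuple[Model, Model]:
    # Bootstrap models in both directions from the seed parallel data.
    fwd = train_model(parallel)                        # src -> tgt
    bwd = train_model([(t, s) for s, t in parallel])   # tgt -> src

    for _ in range(rounds):
        # Back-translate monolingual target text into synthetic source
        # sentences; the genuine target side serves as the reference.
        synth_fwd = [(bwd(t), t) for t in mono_tgt]
        synth_bwd = [(fwd(s), s) for s in mono_src]
        # Retrain each direction on real plus synthetic pairs
        # (the self-training step of the cycle).
        fwd = train_model(parallel + synth_fwd)
        bwd = train_model([(t, s) for s, t in parallel] + synth_bwd)

    return fwd, bwd
```

In practice the retraining step is where translation quality compounds across rounds: each direction's improved output gives the other direction cleaner synthetic training data.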
To date, there has been very little research on machine translation (MT) between Myanmar and other languages, and Myanmar MT is still in its early stages owing to a lack of resources. Furthermore, the pre-processing techniques for Myanmar, such as word segmentation, part-of-speech tagging, and named entity recognition, are themselves still under development. Existing research on Myanmar translation has covered phrase-based techniques and neural machine translation with word-based, character-based, and syllable-based MT models; a rough sketch of syllable segmentation is given below.
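Since syllable-based models are mentioned, the snippet below sketches a minimal rule-based Myanmar syllable segmenter in the spirit of common heuristics such as the sylbreak approach: break before any consonant that neither closes a stacked pair nor acts as a syllable-final. The character classes are deliberately simplified (digits, independent vowels, and punctuation are ignored), so treat this as an assumption-laden sketch rather than any published tool.

```python
import re

# Simplified Myanmar syllable segmentation (a sketch; the character
# classes cover only base consonants, not digits or independent vowels).
CONSONANT = r"\u1000-\u1021"  # က .. အ
VIRAMA = "\u1039"             # stacked-consonant marker (္)
ASAT = "\u103A"               # asat (်), suppresses the inherent vowel

# A syllable starts at a consonant that is neither the lower member of a
# stacked pair (preceded by the virama) nor a syllable-final consonant
# (followed by the asat or the virama).
_onset = re.compile(rf"(?<!{VIRAMA})([{CONSONANT}])(?![{ASAT}{VIRAMA}])")

def syllable_segment(text: str) -> list:
    """Insert a separator before each syllable onset, then split."""
    marked = _onset.sub(r"|\1", text)
    return [s for s in marked.split("|") if s]

print(syllable_segment("မြန်မာစာ"))  # -> ['မြန်', 'မာ', 'စာ']
```

Segmenters like this matter because Myanmar text is written without spaces between words, so syllables are the most reliable units available before word segmentation tools mature.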
For more information: Contact us