ABAIR is a project of the Phonetics and Speech Laboratory at the School of Linguistic, Speech and Communication Sciences, Trinity College Dublin. We've been developing synthetic voices for Irish since 2008. We've covered all three major dialects – Ulster, Connaught and Munster Irish. We're also in the process of developing a speech recognition system for Irish.
Synthetic voices and speech recognition have been in high demand in recent years. Devices like Alexa are hugely popular, and speech synthesis has many applications. There is no doubt that speech technology will continue to flourish, especially as artificial intelligence improves. ABAIR has come a long way since its beginnings: today our technology is used in education and accessibility, and we plan to extend it further.
There are many use cases for text-to-speech (TTS) synthesis. It is used in public places, e.g. for announcements at train stations or airports. TTS synthesis is extremely important for the visually impaired, because it enables them to use computers and smartphones. The amount of teaching material that incorporates TTS synthesis is constantly increasing too.
People also use TTS synthesis to have newspaper articles read aloud while they are driving or jogging. Indeed, TTS synthesis has become essential for any living language.
Speech recognition enables us to interact with computers simply by talking to them, e.g. when you tell your GPS where to go or ask your smartphone for the weather forecast. There is an increasing demand for a speech recognition system for Irish, and ABAIR is currently developing one.
Compared to major languages such as English and Spanish, far fewer resources are available for minority languages like Irish. There are far fewer speakers, so it is harder to build up a database of recorded speech, which is indispensable for developing synthetic voices or speech recognition systems. Furthermore, the work requires experts who understand the sound system and structure of the language, as well as engineers skilled in speech processing. That is why there is usually only a handful of people developing speech technology for any minority language.
Machine learning has improved considerably over the last few years, and we are always working to use the latest technology to improve our software and make these tools available to the public.