If I send this small piece of SSML to the speech processor, I get two voices:
<speak version='1.0' xml:lang='es-ES'>
  <voice xml:lang='es-ES' xml:gender='Male' name='Microsoft Server Speech Text to Speech Voice (es-ES, Pablo, Apollo)'>
    <p>
      <s>Hola </s>
      <s xml:lang='en'>Hello</s>
      <s>¿Cómo estás?</s>
    </p>
  </voice>
</speak>
A man in Spanish and a woman in English. Is this a limitation of the Project Oxford Text to Speech engine? In other words, I would expect the same voice to speak several languages, but it looks like this is not the case.
To quote the SSML spec,
Specifying xml:lang does not imply a change in voice, though this may indeed occur. When a given voice is unable to speak content in the indicated language, a new voice may be selected by the processor.
While the current fallback behavior leaves something to be desired, the recommendation is to create multiple voice nodes and pick a voice explicitly when switching languages.
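For example, a sketch along those lines (the en-US voice name below is illustrative; substitute any voice from the service's voice list):
<speak version='1.0' xml:lang='es-ES'>
  <voice xml:lang='es-ES' xml:gender='Male' name='Microsoft Server Speech Text to Speech Voice (es-ES, Pablo, Apollo)'>
    <s>Hola</s>
  </voice>
  <voice xml:lang='en-US' xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)'>
    <s>Hello</s>
  </voice>
  <voice xml:lang='es-ES' xml:gender='Male' name='Microsoft Server Speech Text to Speech Voice (es-ES, Pablo, Apollo)'>
    <s>¿Cómo estás?</s>
  </voice>
</speak>
This way the processor never has to pick a fallback voice on its own, and you control exactly which voice handles each language.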
In my application I am already using Google TTS, but I am amazed by Microsoft TTS because it provides a lot more useful attributes than Google. Since I am more familiar with Google, I would like to keep my implementation but would still like to be able to use MS attributes like:
<mstts:express-as style="cheerful">
That'd be just amazing!
</mstts:express-as>
Is that possible?
There are no style attributes in Google Text-to-Speech, but you can change the Standard voice to a WaveNet voice[1].
The WaveNet voice synthesizes speech with more human-like emphasis and inflection on syllables, phonemes, and words. You can see all the supported voices in Google Text-to-Speech[2].
[1] https://cloud.google.com/text-to-speech/docs/wavenet#wavenet_voices
[2] https://cloud.google.com/text-to-speech/docs/voices
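There is no direct counterpart to <mstts:express-as>, but as a sketch, switching to a WaveNet voice can be done either via the voice name in the API request or, assuming the processor honors the <voice> element from the SSML subset, inline (en-US-Wavenet-D is one of the names from the voice list in [2]):
<speak>
  <voice name="en-US-Wavenet-D">
    That'd be just amazing!
  </voice>
</speak>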
I have been trying for a while to get phonetic or phoneme pronunciation working with Google's Text-to-Speech but have not managed to get it performing consistently.
I have managed to get limited results from using https://tophonetics.com/
It translated "The cow went mad." to "ðə kaʊ wɛnt mæd.", but the 'the' ('ðə') was not audible, and I ran into similar problems when I tried "ðɪs ɪz səm fəˈnɛtɪk tɛkst ˈɪnˌpʊt".
Are there any SSML codes to define phonetic blocks of text, so that this format, "D,Is Iz sVm f#n'EtIk t'Ekst 'InpUt", can be used instead of "ðɪs ɪz səm fəˈnɛtɪk tɛkst ˈɪnˌpʊt"?
Google Text-to-Speech has supported the <phoneme> tag since at least spring 2021.
However, there are a lot of potential gotchas to overcome:
The demo page filters out <phoneme> tags on the client side before they even reach the API. (It does the same with the <voice> tag, as pointed out here.)
As with Microsoft Azure Text-to-speech (see the other answer for details), each language only supports a limited set of phonemes ("letters") that can be used.
If you use an unsupported one, the phoneme tag is completely ignored without any warning. So the official example <phoneme alphabet="ipa" ph="ˌmænɪˈtoʊbə">manitoba</phoneme> does not work with any English variant but en-US, since all the others lack the "o"/"oʊ" phoneme (a working sketch follows after this list).
It's unclear if you need to use the v1beta1 API (which I can confirm is working) or if version v1 is also ok.
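Putting that together, a working sketch of the official example: send it through the API (v1beta1 is confirmed working) rather than the demo page, and use an en-US voice:
<speak>
  <phoneme alphabet="ipa" ph="ˌmænɪˈtoʊbə">manitoba</phoneme>
</speak>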
There is the SSML tag <phoneme> that serves your purpose.
Unfortunately, it's currently not supported in Google Cloud Text-to-Speech. The available subset of SSML tags for Google Cloud is listed in the documentation, and the <phoneme> tag is not in that list. An experiment using Google Cloud's Text-to-Speech demo confirms that the phonemes are ignored: the content of the tag is read as ordinary text, as @Trevor has already remarked in the comments.
The <phoneme> tag is, however, being supported by Microsoft Azure Text-to-Speech and Amazon Polly. In both cases, the available phonemes are limited to those available in the language being used (see here for Azure and here for Polly). The Azure documentation isn't 100% clear about the exclusion of out-of-language phonemes, but practical experiments with the Azure Text-to-Speech demo confirm that they're not working properly. In some cases, they at least seem to be replaced by the nearest available equivalent in the language used.
Being restricted to the phonemes of one language severely limits the usefulness of the <phoneme> tag. E.g., you can't use the feature to embed correctly pronounced content in a second language, as the second language will usually have some phonemes that are not available in the first. Concrete pairs in which each language has phonemes the other lacks include English/German, Spanish/German, and English/Spanish.
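To illustrate with Azure (the voice name and sentence are mine, as a sketch): the German sounds /ʏ/ and /ç/ in "München" have no en-US counterpart, so per the behavior described above they get dropped or mapped to the nearest available phoneme.
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='en-US-JennyNeural'>
    I just got back from <phoneme alphabet='ipa' ph='ˈmʏnçən'>München</phoneme>.
  </voice>
</speak>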
I am developing a PoC using Watson Text to Speech and Watson Conversation.
Sometimes the chatbot needs to ask a question, so I'd like Text to Speech to synthesize the voice with an interrogative intonation.
Can this be done?
Watson Text to Speech supports SSML and has expressive SSML tags.
The one you want to use is Uncertainty, defined as one that "conveys an uncertain, interrogative message".
Example:
<express-as type="Uncertainty">
Could she still be in the office? She told me that she might leave early.
</express-as>
More details on its usage are here:
https://console.bluemix.net/docs/services/text-to-speech/SSML-expressive.html#the-express-as-element
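In the chatbot scenario, you would wrap each question coming out of Watson Conversation before passing it to Text to Speech; a minimal sketch (the question text is illustrative):
<speak version='1.0'>
  <express-as type="Uncertainty">
    Would you like me to repeat that?
  </express-as>
</speak>
Keep in mind that expressive SSML is supported only for certain voices; check the linked page for the current list.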
Yes, you can certainly use text-to-speech (TTS) for output and speech-to-text (STT) for input. You would need to use a middleware or app layer to drive the conversation and route the input/output to the other services (see "how to use" in the docs).
I have used the following TJBot recipe as a simple and good starter for some projects: https://github.com/damiancummins/tell_the_time
Unfortunately, concatenative TTS may have problems creating correct intonation in questions. If you think it happens consistently or too often, please open a bug.
If you have a specific question that gets incorrect intonation, try to rephrase it a little if possible. A useful trick for this voice can be to use a double question mark, '??', as in the sketch below.
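A minimal sketch of that trick, reusing the example sentence from the other answer:
<speak version='1.0'>
  <p><s>Could she still be in the office??</s></p>
</speak>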
I have created a voice-controlled Android application. I am giving the option to change the locale to Japanese, with English being the default.
The Japanese TTS works perfectly fine. But when voice recognition comes into the picture, the Japanese words are recognized as English words and hence matched against English words for a possible match. This is where the mismatch occurs, and hence my problem.
Is there any way Google supports Japanese voice recognition?
Adding this in case someone else finds this post helpful:
After months of R&D, I found another, more accurate offline voice recognition engine, i.e. Julius. It's hard to configure, yet the results are far better than Google's.
Providing the link below:
https://github.com/tech-sketch/JuliusForAndroid
Is there any open-source library that I could use to feed in the letters and sounds and produce a text-to-speech system?
What must I do to start from scratch? Python would be my language of choice, so where must I be headed to develop my own text-to-speech in my language?
Here's a list of a few Open Source TTS engines:
MBrola
FreeTTS
Festival Speech Synthesis
FLite
Festvox
GnuSpeech
Epos Speech
Maybe one of them covers what you're looking for.