I'm working with the Azure Web App Bot (SDK v3) and am trying to output text and speech at the same time:
messageActivity.Text = information;
messageActivity.Speak = information;
messageActivity.Locale = "de-DE";
await context.PostAsync(messageActivity);
Neither Cortana, Direct Line, nor the Bot Framework Emulator speaks anything, even though the bot does receive the text it is supposed to speak out loud.
In addition, even
await context.SayAsync(information, information);
isn't working. It seems like there is a localization issue or something similar. I've run out of ideas.
Direct Line is configured as follows:
const speechOptions = {
    speechRecognizer: new CognitiveServices.SpeechRecognizer({
        subscriptionKey: 'SUB_KEY_XXX',
        locale: 'de-DE'
    }),
    speechSynthesizer: new CognitiveServices.SpeechSynthesizer({
        gender: CognitiveServices.SynthesisGender.Male,
        subscriptionKey: 'SUB_KEY_AGAIN',
        voiceName: 'Microsoft Server Speech Text to Speech Voice (de-DE, Michael)'
    })
};
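For context, this is how a `speechOptions` object like the one above is typically passed into the v3 Web Chat control on the embedding page. This is a sketch, not my exact page: the element id, user/bot ids, and the Direct Line secret are placeholders.

```javascript
// Sketch of a typical botchat.js (Web Chat v3) embedding page that passes
// the speechOptions object into the control. Ids and secret are placeholders.
BotChat.App({
    botConnection: new BotChat.DirectLine({ secret: 'DIRECT_LINE_SECRET' }),
    speechOptions: speechOptions,
    user: { id: 'user' },
    bot: { id: 'bot' },
    locale: 'de-DE'
}, document.getElementById('bot'));
```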
Side note: speech-to-text works flawlessly.
EDIT: Direct Line does work now. I was using an iframe for demonstration purposes, and voice output only works if the input was also provided via voice. However, you can change that behaviour as well.
You should try a well-formed SSML wrapper if you want Cortana to speak the information text.
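To illustrate what such a wrapper can look like, here is a minimal sketch (in JavaScript, matching the Direct Line side of the question) that wraps plain text in an SSML 1.0 envelope; the resulting string would go into the activity's `Speak` property. The helper name is hypothetical.

```javascript
// Hypothetical helper: wrap plain text in a minimal SSML envelope so that
// channels like Cortana receive well-formed SSML in the Speak property.
function toSsml(text, locale) {
    // Escape XML special characters so the text cannot break the markup.
    const escaped = text
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;');
    return `<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" ` +
           `xml:lang="${locale}">${escaped}</speak>`;
}

console.log(toSsml('Hallo Welt', 'de-DE'));
```

On the C# side this boils down to assigning the wrapped string instead of the raw text, e.g. `messageActivity.Speak = toSsmlEquivalent(information)`.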
Also note that Cortana officially supports only en-US for locale and market in third-party skills. You can do a lot of cool things with other voices and locales, but there are a few quirks and you can run into issues.