I used to get output in the "riff-24khz-16bit-mono-pcm" format from the Azure Text-to-Speech API service. Due to some technical changes, the audio we now receive is in audio-16khz-128kbitrate-mono-mp3.
Before this change, we used to do the following to play the audio from the Base64-encoded audio text:
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Base64;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.UnsupportedAudioFileException;

String stepTitle = soundData; // Base64-encoded audio text returned by Azure
byte[] bytes = stepTitle.getBytes();
Base64.Decoder decoder = Base64.getDecoder();
byte[] decoded = decoder.decode(bytes);
InputStream input = new ByteArrayInputStream(decoded);
AudioInputStream audioInput = null;
try {
    ///////// This line is throwing the exception ////////////////////////
    audioInput = AudioSystem.getAudioInputStream(input);
} catch (UnsupportedAudioFileException | IOException e) {
    e.printStackTrace();
}
// Format matching riff-24khz-16bit-mono-pcm: 24 kHz, 16-bit, mono, little-endian
AudioFormat audioFormats = new AudioFormat(
        AudioFormat.Encoding.PCM_SIGNED,
        24000,  // sample rate
        16,     // sample size in bits
        1,      // channels (mono)
        1 * 2,  // frame size in bytes
        24000,  // frame rate
        false); // little-endian
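For context, playing the decoded PCM stream through javax.sound.sampled typically looks roughly like the sketch below. This is a simplified illustration, not the exact original code; it assumes audioInput was obtained without error and uses the AudioFormat built above.

import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

// Sketch only: write the decoded PCM frames to a SourceDataLine.
try (SourceDataLine line = (SourceDataLine) AudioSystem.getLine(
        new DataLine.Info(SourceDataLine.class, audioFormats))) {
    line.open(audioFormats);
    line.start();
    byte[] buffer = new byte[4096];
    int read;
    while ((read = audioInput.read(buffer, 0, buffer.length)) != -1) {
        line.write(buffer, 0, read);
    }
    line.drain(); // wait until all queued audio has been played
} catch (LineUnavailableException | IOException e) {
    e.printStackTrace();
}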
As marked in the code above, AudioSystem.getAudioInputStream(input) now throws an UnsupportedAudioFileException for the MP3 data.
I have tried adding mp3plugin.jar, but I could not get it to work correctly. Please help!
Referring to the official TTS REST API documentation, I exchange my subscription key for an access token, use that token to convert text to speech, receive a byte array of MP3 audio data, and then play it with JLayer.
Here is my sample:
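(A sketch of that flow; the region, subscription key, and voice name below are placeholders, so replace them with your own Speech resource's values and a voice that is available in your region.)

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

import javazoom.jl.player.Player;

public class AzureTtsJLayerDemo {

    // Placeholders: use your own Speech resource's region and subscription key.
    private static final String REGION = "westus";
    private static final String SUBSCRIPTION_KEY = "<your-subscription-key>";

    public static void main(String[] args) throws Exception {
        String token = issueToken();
        byte[] mp3Bytes = textToSpeech(token, "Hello, this is a test.");
        // JLayer decodes and plays the MP3 stream directly, no temporary file needed.
        new Player(new ByteArrayInputStream(mp3Bytes)).play();
    }

    // Step 1: exchange the subscription key for a short-lived access token.
    private static String issueToken() throws Exception {
        URL url = new URL("https://" + REGION + ".api.cognitive.microsoft.com/sts/v1.0/issueToken");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
        conn.setDoOutput(true);
        conn.getOutputStream().close(); // empty request body
        return new String(readAll(conn.getInputStream()), StandardCharsets.UTF_8);
    }

    // Step 2: send SSML to the TTS endpoint and read back the MP3 bytes.
    private static byte[] textToSpeech(String token, String text) throws Exception {
        // The voice name is only an example; pick one listed in the docs for your region.
        String ssml = "<speak version='1.0' xml:lang='en-US'>"
                + "<voice xml:lang='en-US' name='en-US-JennyNeural'>" + text + "</voice></speak>";
        URL url = new URL("https://" + REGION + ".tts.speech.microsoft.com/cognitiveservices/v1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + token);
        conn.setRequestProperty("Content-Type", "application/ssml+xml");
        conn.setRequestProperty("X-Microsoft-OutputFormat", "audio-16khz-128kbitrate-mono-mp3");
        conn.setRequestProperty("User-Agent", "tts-jlayer-sample");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(ssml.getBytes(StandardCharsets.UTF_8));
        }
        return readAll(conn.getInputStream());
    }

    // Helper: read an InputStream fully into a byte array.
    private static byte[] readAll(InputStream in) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        return buffer.toByteArray();
    }
}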
For JMF with mp3plugin.jar, I save the audio data to a temporary file and then play it:
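Roughly like the sketch below; the class name is mine, mp3Bytes is the array returned by the textToSpeech call above, and both jmf.jar and mp3plugin.jar must be on the classpath.

import java.io.File;
import java.io.FileOutputStream;

import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Player;

public class JmfMp3Playback {

    // Write the MP3 bytes to a temp file and let JMF (with mp3plugin.jar) play it.
    public static void play(byte[] mp3Bytes) throws Exception {
        File temp = File.createTempFile("tts-", ".mp3");
        temp.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(temp)) {
            out.write(mp3Bytes);
        }
        Player player = Manager.createRealizedPlayer(new MediaLocator(temp.toURI().toURL()));
        player.start();
    }
}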