How can I make my Alexa skill play a simple mp3 file?


I am attempting to write an Alexa skill that simply plays an MP3 file. I modified the default "hello world" lambda with this code:

const Alexa = require('ask-sdk-core');

const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
    },
    handle(handlerInput) {
        return {
            "response": {
                "directives": [
                    {
                        "type": "AudioPlayer.Play",
                        "playBehavior": "REPLACE_ALL",
                        "audioItem": {
                            "stream": {
                                "token": "12345",
                                "url": "https://jpc.io/r/brown_noise.mp3",
                                "offsetInMilliseconds": 0
                            }
                        }
                    }
                ],
                "shouldEndSession": true
            }
        };
    }
};
exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
    )
    .lambda();

But when I deploy the code and invoke the skill, it doesn't make any sound or report any error. When I replace the handle code with

const speakOutput = 'Hello world';
return handlerInput.responseBuilder
    .speak(speakOutput)
    .reprompt(speakOutput)
    .getResponse();

then she says hello world when invoked.

My question: why will she speak words to me, but not seem to play my mp3? I've seen other questions on stack overflow where the cause was that they weren't using https, but I am using https.

There is 1 answer below.
At the time of this writing, there is a new way to directly issue AudioPlayer directives.

const PlayStreamIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest'
      || handlerInput.requestEnvelope.request.type === 'IntentRequest';
  },
  handle(handlerInput) {
    const stream = STREAMS[0];

    handlerInput.responseBuilder
      .speak(`starting ${stream.metadata.title}`)
      .addAudioPlayerPlayDirective('REPLACE_ALL', stream.url, stream.token, 0, null, stream.metadata);

    return handlerInput.responseBuilder
      .getResponse();
  },
};

The key function is .addAudioPlayerPlayDirective(). You can read more about these directive functions in the Alexa Skills Kit SDK documentation, although some of the code there seems outdated; the Building Response page lists the newer audio player functions.
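A skill that starts playback is also expected to handle the built-in AMAZON.PauseIntent and AMAZON.StopIntent so the user can stop the stream. Here is a minimal companion handler sketch (the handler name PauseStreamIntentHandler is mine; the .addAudioPlayerStopDirective() builder method comes from the same ResponseBuilder API):

```javascript
// Companion handler: stop playback when the user says "Alexa, pause"
// or "Alexa, stop". AudioPlayer skills must handle these built-in intents.
const PauseStreamIntentHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      && (request.intent.name === 'AMAZON.PauseIntent'
        || request.intent.name === 'AMAZON.StopIntent');
  },
  handle(handlerInput) {
    // addAudioPlayerStopDirective() issues an AudioPlayer.Stop directive.
    return handlerInput.responseBuilder
      .addAudioPlayerStopDirective()
      .getResponse();
  },
};
```

You would register this alongside PlayStreamIntentHandler in .addRequestHandlers().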

An example of what the STREAMS array might look like:

const STREAMS = [
  {
    token: '1',
    url: 'https://streaming.radionomy.com/-ibizaglobalradio-?lang=en-US&appName=iTunes.m3u',
    metadata: {
      title: 'Stream One',
      subtitle: 'A subtitle for stream one',
      art: {
        sources: [
          {
            contentDescription: 'example image',
            url: 'https://s3.amazonaws.com/cdn.dabblelab.com/img/audiostream-starter-512x512.png',
            widthPixels: 512,
            heightPixels: 512,
          },
        ],
      },
      backgroundImage: {
        sources: [
          {
            contentDescription: 'example image',
            url: 'https://s3.amazonaws.com/cdn.dabblelab.com/img/wayfarer-on-beach-1200x800.png',
            widthPixels: 1200,
            heightPixels: 800,
          },
        ],
      },
    },
  },
];
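For comparison with the hand-written response in the question, here is a sketch of the raw directive that .addAudioPlayerPlayDirective('REPLACE_ALL', stream.url, stream.token, 0, null, stream.metadata) builds for you. The buildPlayDirective helper is invented here purely for illustration; the directive shape follows the AudioPlayer.Play JSON shown in the question:

```javascript
// Illustration only: the approximate AudioPlayer.Play directive that the
// SDK builder call assembles from a STREAMS entry.
function buildPlayDirective(stream) {
  return {
    type: 'AudioPlayer.Play',
    playBehavior: 'REPLACE_ALL',
    audioItem: {
      stream: {
        token: stream.token,
        url: stream.url,
        offsetInMilliseconds: 0,
      },
      metadata: stream.metadata,
    },
  };
}
```

Seeing the two side by side makes the point of the SDK helper clear: instead of hand-assembling this JSON (and risking nesting it in the wrong place, as in the question), you pass the pieces to the builder and let it place the directive correctly in the response.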

Code snippets taken from this example.