Rendering an Alexa Presentation Language document while Alexa is speaking


I am trying to render an Alexa Presentation Language (APL) document while Alexa is speaking. I tried a Pager with several pages and the AutoPage command. The problem I am trying to solve is that the document is rendered when Alexa starts speaking, but the command only starts once the speech has finished, and I would like to see the three pages moving while she speaks. I am using the RenderDocument and ExecuteCommands directives and the speak method of responseBuilder.
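For reference, the standard APL AutoPage command mentioned above (used to page through a Pager component) has this shape; the componentId matches the id in the document below and the duration is illustrative:

{
    "type": "AutoPage",
    "componentId": "pagerComponentId",
    "duration": 2000
}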

The document template (pagerDoc):

{
    "type": "APL",
    "version": "1.0",
    "theme": "dark",
    "import": [],
    "resources": [],
    "styles": {},
    "layouts": {},
    "mainTemplate": {
        "parameters": [
            "datasource"
        ],
        "item": [
            {
                "type": "Container",
                "items": [
                    {
                        "type": "Sequence",
                        "id": "pagerComponentId",
                        "scrollDirection": "vertical",
                        "numbered": true,
                        "width": "100vw",
                        "height": "100vh",
                        "alignItems": "center",
                        "justifyContent": "center",
                        "direction": "column",
                        "items": [
                            {
                                "type": "Image",
                                "source": "${datasource.app.properties.images.robot1}",
                                "position": "relative",
                                "width": "100vw",
                                "height": "100vh"
                            },
                            {
                                "type": "Image",
                                "source": "${datasource.app.properties.images.robot2}",
                                "position": "relative",
                                "width": "100vw",
                                "height": "100vh"
                            }
                        ]
                    }
                ]
            }
        ]
    }
}

And the Directives:

var response = handlerInput.responseBuilder;
return response
    .addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        token: 'pagerToken',
        document: pagerDoc,
        datasources: {
            "app": {
                "properties": {
                    "images": {
                        "robot1": "https://xxx/robot1.png",
                        "robot2": "https://xxx/robot2.png"
                    }
                }
            }
        }
    })
    .addDirective({
        type: 'Alexa.Presentation.APL.ExecuteCommands',
        token: 'pagerToken',
        commands: [
            {
                "type": "Parallel",
                "commands": [
                    {
                        "type": "Scroll",
                        "componentId": "pagerComponentId",
                        "distance": 1
                    }
                ]
            }
        ]
    })
    .speak(speechOutput)
    .reprompt(repromptOutput)
    .getResponse();

Could somebody tell me what I should do? Is this possible with Alexa? Thanks a lot in advance and best regards, Fernando


1 Answer


It's not possible yet. If you wait until the release of APL 1.1 (coming soon), you can use onMount: APL 1.1 adds an onMount handler to the APL document, which should allow commands to execute as soon as the document is loaded (e.g. before Alexa speaks).
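For illustration, here is a minimal sketch of how the question's document could look once APL 1.1 ships. The onMount handler and the "1.1" version string are assumptions based on the announcement and are untested:

// Sketch only: APL 1.1 is not released yet, so onMount is untested here.
const pagerDoc = {
    "type": "APL",
    "version": "1.1",
    "onMount": [
        {
            // The same Scroll command the question sends via ExecuteCommands;
            // run from onMount it should start as soon as the document is
            // displayed, in parallel with the speech.
            "type": "Scroll",
            "componentId": "pagerComponentId",
            "distance": 1
        }
    ],
    "mainTemplate": { /* unchanged from the question's pagerDoc */ }
};

With onMount in the document itself, the separate ExecuteCommands directive should no longer be needed; only RenderDocument plus speak would be required.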