Using the new Alexa Presentation Language (APL), you can deliver richer, more engaging voice-first interactions to your customers across tens of millions of Alexa-enabled devices with screens. Before APL, you could use our Display Directive interface to create a skill that supports screen display. While display templates allow you to support visual experiences, APL is more flexible, giving you the ability to enhance your skill experience for different device types, control your user experience by defining where visual elements are placed on screens, and choose from a variety of components available with APL.
If you have already built multimodal Alexa skills with visuals using our Display Directive interface, you can use APL to recreate similar displays from those templates. In today’s blog, we share a quick overview of how you can migrate your display templates over to APL.
APL documents are JSON that the device compiles into multimodal components, which are then inflated and rendered on screen. APL is composed of components: reusable, self-contained artifacts used to display elements on the screen such as text, images, sequences, and frames. Please note that APL is currently in public beta, and we are continually adding components to the visual reference that you can use in your APL documents.
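For example, a Text component is just a small JSON object describing what to draw and how (the property values here are illustrative):

{
  "type": "Text",
  "text": "Hello, APL!",
  "fontSize": "40dp",
  "textAlign": "center"
}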
Alongside components, APL incorporates styles, resources, and layouts. You can apply styles to components to add defined visual properties that can be extended or inherited. Resources are named global values in your APL document, denoted by the “@” symbol. Finally, layouts are composite components you create and can reuse throughout the main template of your APL document.
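Put together, these pieces of a document might look like the following sketch (the names myTextStyle and TitleAndBody are illustrative, not part of alexa-layouts). The style references the @textSizePrimary resource, and the TitleAndBody layout can then be used in the main template like any built-in component:

{
  "resources": [
    {
      "dimensions": {
        "textSizePrimary": "40dp"
      }
    }
  ],
  "styles": {
    "myTextStyle": {
      "values": [
        {
          "color": "white",
          "fontSize": "@textSizePrimary"
        }
      ]
    }
  },
  "layouts": {
    "TitleAndBody": {
      "parameters": ["title", "body"],
      "items": {
        "type": "Container",
        "items": [
          { "type": "Text", "text": "${title}", "style": "myTextStyle" },
          { "type": "Text", "text": "${body}" }
        ]
      }
    }
  }
}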
When you begin to integrate visuals into your Alexa skills, it is crucial that your skill adapts to different devices. Previously, with the display directives interface, sending information to a body or list template guaranteed that it scaled to a round or landscape rectangular display according to the GUI specifications of that template.
With APL, we offer you the JSON code to achieve a similar experience to the body and list display directive templates in your skill. However, with APL, we encourage you to customize your visual experience even further and tailor it to a set of viewport characteristic specifications.
When you enter the APL authoring tool, you can select samples representing the following display templates: BodyTemplate1, BodyTemplate2, BodyTemplate3, BodyTemplate6, BodyTemplate7, ListTemplate1, and ListTemplate2. You can hover over each document to read about its intended purpose and which template it most closely relates to.
The main difference between the previous display interface and APL is how these visuals are served to the customer. When you build your response in your skill code, you attach a directive, and the directive type determines how the compiler translates the corresponding input. Previously, you would use a Display.RenderTemplate directive, which told the compiler to inject the information you provided into a static, inflexible template.
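For reference, here is roughly what that old-style response looked like in the ASK SDK v2 for Node.js (a sketch; the template values are illustrative):

// Display interface: inject values into a fixed BodyTemplate1
return handlerInput.responseBuilder
  .speak(speechText)
  .addRenderTemplateDirective({
    type: 'BodyTemplate1',
    token: 'bt1Token',
    title: 'Did You Know?',
    textContent: {
      primaryText: { type: 'PlainText', text: 'Mice prefer grains and fruits to cheese.' },
    },
  })
  .getResponse();

The equivalent APL response instead sends a document and a datasource: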
// Inside your intent handler's handle() function:
return handlerInput.responseBuilder
  .speak(speechText)
  .addDirective({
    type: 'Alexa.Presentation.APL.RenderDocument',
    document: require('./mainScreen.json'),     // the APL document
    datasources: require('./datasources.json'), // the payload bound in mainTemplate
  })
  .getResponse();
With APL, you use an Alexa.Presentation.APL.RenderDocument directive. In this example, you are telling the compiler to interpret the document containing your APL JSON as components to be inflated on the display, and to send the datasource in parallel as a payload whose information is data-bound within the main template. This datasource should contain any information from your skill that you want to incorporate in your display: information from your request, such as slot values or skill states, profile or account linking data, static variable datasets, or web service and API responses. In short, what goes in your datasource is completely up to you as the developer, and it can be anything you want to display on the device. It is important, however, that you do not display any private or sensitive information about the customer without their consent. This data is then inflated with the document on the device, and you can use data from the datasource directly or conditionally in your document.
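For example, a component can bind a datasource value directly through a ${...} expression, or use it in a when clause so the component only inflates when the data is present (a hypothetical snippet; the datasource field names are illustrative):

{
  "type": "Text",
  "when": "${payload.myData.subtitle != ''}",
  "text": "${payload.myData.subtitle}"
}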
Throughout the rest of this blog post, we will examine the APL document that resembles BodyTemplate1. First, let’s break down the BodyTemplate1 APL document.
At the top of the APL mainTemplate lives a Container. This is the highest-level component of the template, and it is the parent of all of the components within the rest of the template. Associated with this container is a when clause: a statement that allows you to conditionally inflate components. In this case, the when clause checks the viewport shape to see whether the device is round or not, and changes the layout of the template accordingly.
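In JSON terms, the pattern looks like the following trimmed sketch. Within mainTemplate, the first child whose when clause evaluates to true is the one inflated, so you can provide one Container per viewport shape:

"items": [
  {
    "type": "Container",
    "when": "${viewport.shape == 'round'}",
    "items": [ ... ]
  },
  {
    "type": "Container",
    "when": "${viewport.shape != 'round'}",
    "items": [ ... ]
  }
]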
The first child of the container is an Image component. This Image component is positioned absolutely so that it appears as a background image, and the succeeding child components appear on top of it.
The next component is an AlexaHeader. This is a layout we have created for you to use by importing alexa-layouts in your APL document. Essentially, AlexaHeader combines Text and Image components to resemble the headers of the display directive templates. The layout includes intuitively named parameters for placing your title, subtitle, skill icon, and so on.
The final component is a Text component. This is a block to show the primary text on the display.
{
  "type": "APL",
  "version": "1.0",
  "theme": "dark",
  "import": [
    {
      "name": "alexa-layouts",
      "version": "1.0.0"
    }
  ],
  "resources": [
    ...
  ],
  "styles": {
    ...
  },
  "layouts": {},
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "items": [
      {
        "type": "Container",
        ...
        "items": [
          {
            "type": "Image",
            ...
          },
          {
            "type": "AlexaHeader",
            ...
          },
          {
            "type": "Text",
            ...
          }
        ]
      }
    ]
  }
}
When you select Long Text Sample in the APL authoring tool, you will notice two tabs: one with the name of the template, and the Data JSON tab.
The data that lives in Data JSON is the datasource. To make this data accessible in your APL mainTemplate, you need to include a parameter that exposes the information. You will notice the parameters field includes this variable, called payload.
Within the Data JSON, there is a JSON object entitled bodyTemplate1Data. Attributes within bodyTemplate1Data allow you to edit the data of various attributes in the APL document.
{
  "bodyTemplate1Data": {
    "type": "object",
    "objectId": "bt1Sample",
    "backgroundImage": {
      "contentDescription": null,
      "smallSourceUrl": null,
      "largeSourceUrl": null,
      "sources": [
        {
          "url": "https://d2o906d8ln7ui1.cloudfront.net/images/BT1_Background.png",
          "size": "small",
          "widthPixels": 0,
          "heightPixels": 0
        },
        {
          "url": "https://d2o906d8ln7ui1.cloudfront.net/images/BT1_Background.png",
          "size": "large",
          "widthPixels": 0,
          "heightPixels": 0
        }
      ]
    },
    "title": "Did You Know?",
    "textContent": {
      "primaryText": {
        "type": "PlainText",
        "text": "But in reality, mice prefer grains, fruits, and manmade foods that are high in sugar, and tend to turn up their noses at very smelly foods, like cheese. In fact, a 2006 study found that mice actively avoid cheese and dairy in general."
      }
    },
    "logoUrl": "https://d2o906d8ln7ui1.cloudfront.net/images/cheeseskillicon.png"
  }
}
To edit the image source of the Image component, you will update the url attribute living under sources. There are two URLs for the same image: the first is intended for smaller hubs, the second for larger displays. It is important to include varying sizes of images to ensure the image renders appropriately on everything from a small round hub like the Echo Spot to an extra-large landscape TV with a Fire TV Cube. These attributes are accessed in the mainTemplate from the payload via direct databinding.
{
  "type": "Image",
  "source": "${payload.bodyTemplate1Data.backgroundImage.sources[0].url}",
  "position": "absolute",
  "width": "100vw",
  "height": "100vh",
  "scale": "best-fill"
},
The appropriate image is selected via the when clause that lives on the parent container: "when": "${viewport.shape == 'round'}"
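A sibling container covers rectangular viewports with the complementary condition, and its Image component points at the large source instead (a sketch; index 1 is the "large" entry in sources):

{
  "type": "Image",
  "source": "${payload.bodyTemplate1Data.backgroundImage.sources[1].url}",
  "position": "absolute",
  "width": "100vw",
  "height": "100vh",
  "scale": "best-fill"
},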
To edit the title and icon of the AlexaHeader layout, you will update the title and logoUrl attributes living under bodyTemplate1Data. These attributes are accessed in the mainTemplate from the payload via direct databinding.
{
  "type": "AlexaHeader",
  "headerTitle": "${payload.bodyTemplate1Data.title}",
  "headerAttributionImage": "${payload.bodyTemplate1Data.logoUrl}"
},
Finally, to edit the content of the large Text component, you will edit the text attribute living under textContent.primaryText. This attribute is accessed in the mainTemplate from the payload via direct databinding.
{
  "type": "Text",
  "text": "${payload.bodyTemplate1Data.textContent.primaryText.text}",
  "fontSize": "@textSizeBody",
  "spacing": "@spacingSmall",
  "style": "textStyleBody"
}
This approach of updating the Data JSON is similar for each of the Body and List templates in the Authoring Tool.
Consider using these examples as a starting point to create your own unique APL documents. In addition to being a more powerful tool that allows greater flexibility in creating interactive voice experiences, APL was developed to make it easy to design and build visually rich Alexa skills for tens of millions of Alexa-enabled devices with screens.
Start building with APL and then enter your creation for the Alexa Skills Challenge: Multimodal and compete for $150k in total prizes. You can also earn a new Amazon device by just publishing an eligible APL skill.