Configure Your Skill with the APL Interface
To use Alexa Presentation Language (APL) in your skill, you must add support for the Alexa.Presentation.APL interface.
Configure your skill to support the Alexa.Presentation.APL interface
Your skill must support the Alexa.Presentation.APL interface to use the APL directives and requests (RenderDocument, ExecuteCommands, and UserEvent). After you enable the interface, you can determine whether a request sent to your skill came from a device that supports APL.
You can use either the developer console or the ASK CLI to update your skill.
The Multimodal Response Builder automatically enables the Alexa.Presentation.APL interface when you choose to integrate a new response with your skill. For details about the response builder, see Use the Multimodal Response Builder.
Configure Alexa.Presentation.APL in the developer console
- Open the developer console, and click Edit for the skill you want to configure.
- Navigate to the Build > Interfaces page.
- Enable the Alexa Presentation Language option.
- Click Save Interfaces and then Build Model to rebuild your interaction model.
- In your skill code, determine whether the user's device supports APL before you return the APL directives.
Configure Alexa.Presentation.APL with the ASK CLI or SMAPI
With the ASK CLI or SMAPI, update the manifest.apis.custom.interfaces array in your skill manifest to include an ALEXA_PRESENTATION_APL object. This object also includes the supportedViewports array, as shown here:
{
  "type": "ALEXA_PRESENTATION_APL",
  "supportedViewports": [
    {
      "mode": "HUB",
      "shape": "ROUND",
      "minWidth": 100,
      "maxWidth": 599,
      "minHeight": 100,
      "maxHeight": 599
    },
    {
      "mode": "HUB",
      "shape": "RECTANGLE",
      "minHeight": 600,
      "maxHeight": 959,
      "minWidth": 960,
      "maxWidth": 1279
    },
    {
      "mode": "HUB",
      "shape": "RECTANGLE",
      "minHeight": 600,
      "maxHeight": 1279,
      "minWidth": 1280,
      "maxWidth": 1920
    },
    {
      "mode": "TV",
      "shape": "RECTANGLE",
      "minHeight": 540,
      "maxHeight": 540,
      "minWidth": 960,
      "maxWidth": 960
    }
  ]
}
For more about supported viewports, see Select the Viewport Profiles Your Skill Supports.
- Run this command to download your skill manifest:
ask api get-skill -s amzn1.ask.skill.<skillId>
See Get skill subcommand.
- Edit the skill manifest (downloaded as skill.json) and add the ALEXA_PRESENTATION_APL object shown previously to the manifest.apis.custom.interfaces array (for a scripted approach, see the sketch after these steps).
- Run this command to deploy the revised skill manifest:
ask api update-skill -s amzn1.ask.skill.<skillId> -f skill.json
- To ensure that the update deployed correctly, repeat step 1 to download the skill manifest. Open the skill.json file in your editor and confirm that the interfaces array contains the ALEXA_PRESENTATION_APL object.
- In your skill code, determine whether the user's device supports APL before you return the APL directives.
For more about using the ASK CLI, see Quick Start Alexa Skills Kit Command Line Interface.
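If you script this workflow, you can patch the downloaded manifest instead of editing it by hand. The following Python sketch is one possible approach, not part of the official tooling; it assumes the manifest was saved as skill.json in the current directory and shows only one viewport profile for brevity.
import json

# Sketch: add the ALEXA_PRESENTATION_APL interface to a downloaded skill
# manifest (skill.json) before deploying it with the ASK CLI.
APL_INTERFACE = {
    "type": "ALEXA_PRESENTATION_APL",
    "supportedViewports": [
        # Add the viewport profiles your skill supports (see the array above).
        {"mode": "HUB", "shape": "ROUND",
         "minWidth": 100, "maxWidth": 599, "minHeight": 100, "maxHeight": 599},
    ],
}

with open("skill.json") as f:
    skill = json.load(f)

# Walk to manifest.apis.custom.interfaces, creating any missing levels.
interfaces = (
    skill.setdefault("manifest", {})
    .setdefault("apis", {})
    .setdefault("custom", {})
    .setdefault("interfaces", [])
)

# Append the APL interface only if the manifest doesn't declare it already.
if not any(i.get("type") == "ALEXA_PRESENTATION_APL" for i in interfaces):
    interfaces.append(APL_INTERFACE)

with open("skill.json", "w") as f:
    json.dump(skill, f, indent=2)
After the script runs, deploy the revised skill.json with the ask api update-skill command shown previously.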
Verify that the user's device supports APL
Before your skill code sends any APL directives, make sure the user's device supports the relevant APL interface. Check the context.System.device.supportedInterfaces object included in every request. Note that there are two interfaces to check for:
- Alexa.Presentation.APL: this indicates that the device has a screen, such as an Echo Show or a Fire TV. You can return the Alexa.Presentation.APL directives. Your document can use all APL features supported for the indicated version (the most recent version is 2024.3), including images and other media.
- Alexa.Presentation.APLT: this indicates that the device has a character display, such as an Echo Dot with clock. You can return the Alexa.Presentation.APLT directives. Your document can use a limited set of APL features to place brief text content on the device.
These examples show a LaunchRequest: one from a device that supports version 2024.3 of the Alexa.Presentation.APL interface, and one from a device that supports version 1.0 of the Alexa.Presentation.APLT interface.
{
  "version": "1.0",
  "session": {},
  "context": {
    "System": {
      "device": {
        "deviceId": "amzn1.ask.device.1",
        "supportedInterfaces": {
          "AudioPlayer": {},
          "Alexa.Presentation.APL": {
            "runtime": {
              "maxVersion": "2024.3"
            }
          }
        }
      },
      "apiEndpoint": "https://api.amazonalexa.com",
      "apiAccessToken": ""
    },
    "Viewport": {
      "experiences": [
        {
          "canRotate": true,
          "canResize": true
        },
        {
          "canRotate": false,
          "canResize": false
        }
      ],
      "shape": "RECTANGLE",
      "pixelWidth": 1024,
      "pixelHeight": 600,
      "dpi": 160,
      "currentPixelWidth": 640,
      "currentPixelHeight": 500,
      "touch": [
        "SINGLE"
      ],
      "keyboard": [
        "DIRECTION"
      ],
      "video": {
        "codecs": [
          "H_264_42",
          "H_264_41"
        ]
      }
    }
  },
  "request": {
    "type": "LaunchRequest",
    "requestId": "amzn1.echo-api.request.1",
    "timestamp": "2019-06-27T15:52:19Z",
    "locale": "en-US",
    "shouldLinkResultBeReturned": false
  }
}
{
  "version": "1.0",
  "session": {},
  "context": {
    "System": {
      "device": {
        "deviceId": "amzn1.ask.device.1",
        "supportedInterfaces": {
          "AudioPlayer": {},
          "Alexa.Presentation.APLT": {
            "runtime": {
              "maxVersion": "1.0"
            }
          }
        }
      },
      "apiEndpoint": "https://api.amazonalexa.com",
      "apiAccessToken": ""
    },
    "Viewports": [
      {
        "id": "main",
        "type": "APLT",
        "supportedProfiles": [
          "FOUR_CHARACTER_CLOCK"
        ],
        "lineLength": 4,
        "lineCount": 1,
        "format": "SEVEN_SEGMENT",
        "interSegments": [
          {
            "x": 2,
            "y": 0,
            "characters": "':."
          }
        ]
      }
    ]
  },
  "request": {
    "type": "LaunchRequest",
    "requestId": "amzn1.echo-api.request.1",
    "timestamp": "2019-06-27T15:52:19Z",
    "locale": "en-US",
    "shouldLinkResultBeReturned": false
  }
}
For brevity, not all the request properties are shown. For the complete schema, see the request syntax reference.
For a code sample that shows how to check this when using the ASK SDK, see Use APL with the ASK SDK v2.
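If you work with the raw request JSON rather than the ASK SDK helpers, a minimal check might look like the following sketch. The function name and return values are illustrative assumptions, not part of the Alexa request schema.
# Sketch (illustrative, not an official helper): inspect the raw request
# envelope to decide which APL directives, if any, the device can accept.
def get_apl_support(request_envelope: dict) -> str:
    """Return "APL", "APLT", or "NONE" based on supportedInterfaces."""
    interfaces = (
        request_envelope.get("context", {})
        .get("System", {})
        .get("device", {})
        .get("supportedInterfaces", {})
    )
    if "Alexa.Presentation.APL" in interfaces:
        return "APL"   # Screen device: full APL features are available.
    if "Alexa.Presentation.APLT" in interfaces:
        return "APLT"  # Character display: limited, text-only APL features.
    return "NONE"      # No APL support: return a voice-only response.
Only return APL directives when a check like this reports support for the corresponding interface; otherwise fall back to a voice-only response.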
Use the APL directives and requests in your code
In your skill code, use the Alexa.Presentation.APL and Alexa.Presentation.APLT directives and requests.
Use the RenderDocument directive to send content to display on the device:
- Alexa.Presentation.APL.RenderDocument: display content on a device with a screen.
- Alexa.Presentation.APLT.RenderDocument: display content on a device with a character display.
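As an illustration, the following sketch builds a raw skill response that includes an Alexa.Presentation.APL.RenderDocument directive with a small inline document. The helper name, token value, and document content are placeholders, not required values.
# Sketch: a skill response carrying Alexa.Presentation.APL.RenderDocument
# (helper name, token, and document content are placeholders).
def build_apl_response(speech_text: str) -> dict:
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "directives": [
                {
                    "type": "Alexa.Presentation.APL.RenderDocument",
                    "token": "helloToken",  # Reuse this token with ExecuteCommands.
                    "document": {
                        "type": "APL",
                        "version": "2024.3",
                        "mainTemplate": {
                            "items": [
                                {"type": "Text", "text": "Hello from APL"}
                            ]
                        }
                    }
                }
            ],
            "shouldEndSession": False
        }
    }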
Use the ExecuteCommands directive to send commands related to your document to the device:
- Alexa.Presentation.APL.ExecuteCommands: send commands to a document displayed on a device with a screen.
- Alexa.Presentation.APLT.ExecuteCommands: send commands to a document displayed on a device with a character display.
Handle UserEvent requests to respond to user events from the device, such as button presses. Note that UserEvent is available for devices with screens, but not character displays.
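A UserEvent request arrives in the same envelope format as other requests, with type Alexa.Presentation.APL.UserEvent and an arguments array populated by the SendEvent command in your document. The dispatcher below is a sketch; the handler name and the assumption that the first argument is a button identifier are for illustration only.
# Sketch: route an Alexa.Presentation.APL.UserEvent request
# (handler name and argument layout are illustrative).
def handle_user_event(request_envelope: dict) -> dict:
    request = request_envelope.get("request", {})
    if request.get("type") != "Alexa.Presentation.APL.UserEvent":
        raise ValueError("Not a UserEvent request")

    # "arguments" carries whatever your document's SendEvent command passed
    # along, for example an identifier for the button the user pressed.
    arguments = request.get("arguments", [])
    button_id = arguments[0] if arguments else None
    speech = f"You pressed {button_id}." if button_id else "Thanks for tapping."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False
        }
    }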
For an overview of how APL fits into the overall skill flow, see APL and skill flow.
For a code sample that illustrates using the APL directives and requests with the ASK SDK, see Use APL with the ASK SDK v2.
For more about using APL to display content on character displays, see Understand Alexa Presentation Language and Character Displays.
Do not combine the APL directives with the Display interface.
Configure Cross-Origin Resource Sharing (CORS) for resources
If your skill references external resources such as images hosted on an HTTPS endpoint, ensure that the endpoint meets these requirements:
- The endpoint provides an SSL certificate signed by an Amazon-approved certificate authority. Many content hosting services provide this. For example, you could host your files with a service such as Amazon Simple Storage Service (Amazon S3), an Amazon Web Services offering.
- The endpoint must allow cross-origin resource sharing (CORS) for the images.
To enable CORS, the resource server must set the Access-Control-Allow-Origin header in its responses. To restrict the resources to Alexa, limit the allowed origin to *.amazon.com.
If your resources are in an Amazon S3 bucket, you can configure your bucket with the following CORS configuration (shown in JSON):
[
  {
    "AllowedHeaders": [],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
For more about S3 and CORS, see Enabling Cross-Origin Resource Sharing.
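If you manage the bucket programmatically, you can apply the same rules with the AWS SDK. The following sketch uses boto3's put_bucket_cors call; the bucket name is a placeholder, and you might narrow AllowedOrigins as described above.
import boto3

# Sketch: apply the CORS configuration shown above to an S3 bucket.
# "my-skill-assets" is a placeholder bucket name.
s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="my-skill-assets",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedHeaders": [],
                "AllowedMethods": ["GET"],
                "AllowedOrigins": ["*"],  # Or restrict to Alexa origins.
                "ExposeHeaders": [],
            }
        ]
    },
)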
Last updated: Nov 28, 2023