Tutorial: Add Your First Visual Response to a Custom Skill
This tutorial shows you how to use Alexa Presentation Language (APL) to add a visual response to a custom skill. The response displays a "Hello World" message by using a responsive template for the layout and a data source for the content.
Prerequisites
You must have an Amazon developer account and a working custom skill. The tutorial provides code examples that build on the Node.js "Hello World" sample skill available on GitHub: skill-sample-nodejs-hello-world. Follow the directions provided in the GitHub repository to set up the skill before you begin this tutorial.
Make sure that you can log into the developer console and use the test simulator to invoke your skill. Verify that you can invoke one of your intents and get a response without errors. If you use the Hello World sample, make sure you can invoke the HelloWorldIntent.
Steps to add a visual response to your custom skill
APL works within the request and response interface for a custom skill. When your skill receives a request, such as a LaunchRequest or IntentRequest, you can provide an APL document and data source as part of your response. Alexa displays your response on the screen.
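For orientation, the following trimmed sketch shows the general shape of a skill response that carries APL content alongside the usual output speech. The token and document name match the ones used later in this tutorial; the empty datasources object is a placeholder you fill in during Step 4.

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": { "type": "PlainText", "text": "Hello World!" },
    "directives": [
      {
        "type": "Alexa.Presentation.APL.RenderDocument",
        "token": "HelloWorldDocumentToken",
        "document": { "type": "Link", "src": "doc://alexa/apl/documents/HelloWorldDocument" },
        "datasources": {}
      }
    ]
  }
}
```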
In this tutorial, you create a new APL document and data source, and then update an intent handler in the skill code to include the APL content in the response. The tutorial walks through the following steps:
- Configure the skill to support the APL interface.
- Use the APL authoring tool to create and save an APL document to display.
- Create a data source to contain the content to display within the document, and then write data-binding expressions to bind properties of the document to the data source.
- Update the skill code to send the document and data source to Alexa as part of a response.
- Test the skill in the developer console to see the new visual response.
Step 1: Enable the APL interface in the developer console
To send a response with APL content, your skill must support the Alexa.Presentation.APL interface. Enable this option in the developer console.
To enable the APL interface in the developer console
- Open the developer console, and click Edit for the skill you want to configure.
- Navigate to the Build > Interfaces page.
- Enable the Alexa Presentation Language option. This option makes it possible for your skill to send APL content to Alexa for display.
- In the list of viewport profiles, select all the profiles.
- Click Save Interfaces, and then click Build Model to rebuild your interaction model.
Step 2: Create a new APL document in the authoring tool
An APL document defines a template for your visual response. In the following sections, you create a new APL document in the authoring tool and update the document to display a line of text.
To create a new APL document
- In the developer console for your skill, make sure you're still on the Build page.
- In the sidebar, click Multimodal Responses. This action opens the APL authoring tool in a new window or browser tab.
- Select Visual.
- Click Create Visual Response. This action opens a page with several templates you can use.
- Click Blank Document to create a new APL document with no content.
- Click the Save icon. When prompted, enter the name "HelloWorldDocument".

As you complete the tutorial steps, click Save frequently to save your work. The authoring tool doesn't save automatically.
An APL package defines a set of layouts, resources, and styles that you can import into your APL documents. The alexa-layouts package provided by Amazon contains a set of responsive components and templates. These layouts automatically work on viewports with different modes, sizes, and shapes. Before you create a visual response from scratch, review the available pre-built templates and components first.
To use a package in your document, you must first import it.
To import the alexa-layouts package
- In the authoring tool, click the Code View tab to display the JSON for the APL document.
- In the import array, add the following object:

```json
{
    "name": "alexa-layouts",
    "version": "1.7.0"
}
```

Note: For packages provided by Amazon, you specify the name and version. For your own packages or packages shared by other developers, you specify the name, version, and source, where source is a URL for the package.
After making the change, your JSON should look like the following example:
```json
{
    "type": "APL",
    "version": "2024.2",
    "license": "Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: LicenseRef-.amazon.com.-AmznSL-1.0\nLicensed under the Amazon Software License http://aws.amazon.com/asl/",
    "settings": {},
    "theme": "dark",
    "import": [
        {
            "name": "alexa-layouts",
            "version": "1.7.0"
        }
    ],
    "resources": [],
    "styles": {},
    "onMount": [],
    "graphics": {},
    "commands": {},
    "layouts": {},
    "mainTemplate": {
        "parameters": [
            "payload"
        ],
        "items": []
    }
}
```
In an APL document, the mainTemplate property specifies the layout to display when the document initially displays on the screen. The mainTemplate.items property contains an array of items to display. In the following steps, you add the AlexaHeadline responsive template to the items array.
To add the JSON for AlexaHeadline
- Make sure that the authoring tool is still in Code View.
- In the mainTemplate.items array, add the following block of code:

```json
{
    "type": "AlexaHeadline",
    "primaryText": "Display this text"
}
```

The preview pane updates to show "Display this text" centered in the viewport.
After making the change, your JSON should look like the following example:
```json
{
    "type": "APL",
    "version": "2024.2",
    "license": "Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: LicenseRef-.amazon.com.-AmznSL-1.0\nLicensed under the Amazon Software License http://aws.amazon.com/asl/",
    "settings": {},
    "theme": "dark",
    "import": [
        {
            "name": "alexa-layouts",
            "version": "1.7.0"
        }
    ],
    "resources": [],
    "styles": {},
    "onMount": [],
    "graphics": {},
    "commands": {},
    "layouts": {},
    "mainTemplate": {
        "parameters": [
            "payload"
        ],
        "items": [
            {
                "type": "AlexaHeadline",
                "primaryText": "Display this text"
            }
        ]
    }
}
```
Step 3: Connect the document to a data source
Although the document you've created so far would work in a skill, it's better to separate the presentation from the content to display. This approach is important when the data to display within the template changes based on some condition in the skill. For example, imagine a skill that asks the user for their name, and then later displays that name on the screen. Each time the device renders the document, the data to display might be different.
The correct way to pass data from your skill to the document is to place the content in a data source and then use a data-binding expression to connect that content to your APL document. From your code, you send an identical document in each response, but with an updated data source.
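The user-name scenario described above can be sketched as a small helper function (the function name here is hypothetical, not part of the sample skill) that rebuilds only the data source for each response, while the document itself never changes:

```javascript
// Hypothetical helper: rebuild the data source with per-request content.
// The APL document stays identical across responses; only this object changes.
function buildHelloDataSource(userName) {
    return {
        helloWorldDataSource: {
            primaryText: `Hello ${userName}!`,
            secondaryText: "Welcome to Alexa Presentation Language!",
            color: "@colorTeal800"
        }
    };
}

// Each response sends the same document link with a fresh data source.
console.log(buildHelloDataSource("Riley").helloWorldDataSource.primaryText);
// → "Hello Riley!"
```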
To add a data source to the authoring tool
- In the authoring tool, click the Data button to open the data sources pane. The data sources pane lets you simulate passing a data source to your document. When you add the document to your code later, you send the data source as part of the directive.
- Replace the contents of this pane with the following code:

```json
{
    "helloWorldDataSource": {
        "primaryText": "Hello World!",
        "secondaryText": "Welcome to Alexa Presentation Language!",
        "color": "@colorTeal800"
    }
}
```

The preview pane doesn't change because you haven't yet bound any template properties to the data source properties.
When building your own APL documents, you design the structure of the data source.
To add the data source to your document and write data-binding expressions
- In the authoring tool, click the APL button to return to the document pane. Make sure you're still in Code View.
- In the parameters array, remove the existing payload item and replace it with "helloWorldDataSource":

```json
{
    "mainTemplate": {
        "parameters": [
            "helloWorldDataSource"
        ],
        "items": [
            {
                "type": "AlexaHeadline",
                "primaryText": "Display this text"
            }
        ]
    }
}
```

- For the primaryText property, change "Display this text" to the expression ${helloWorldDataSource.primaryText}. The text shown in the preview pane changes to "Hello World!"
- Add the secondaryText and backgroundColor properties, binding each one to the corresponding data in the data source:

```json
{
    "type": "AlexaHeadline",
    "primaryText": "${helloWorldDataSource.primaryText}",
    "secondaryText": "${helloWorldDataSource.secondaryText}",
    "backgroundColor": "${helloWorldDataSource.color}"
}
```
The preview pane updates to show the two blocks of text and a teal background.
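Putting the pieces of this step together, the document at this point looks like the following (assembled from the examples above; the empty properties such as resources and styles are trimmed for brevity):

```json
{
    "type": "APL",
    "version": "2024.2",
    "import": [
        {
            "name": "alexa-layouts",
            "version": "1.7.0"
        }
    ],
    "mainTemplate": {
        "parameters": [
            "helloWorldDataSource"
        ],
        "items": [
            {
                "type": "AlexaHeadline",
                "primaryText": "${helloWorldDataSource.primaryText}",
                "secondaryText": "${helloWorldDataSource.secondaryText}",
                "backgroundColor": "${helloWorldDataSource.color}"
            }
        ]
    }
}
```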
Step 4: Add code to your skill to render the response
Now that you have a working APL document, you can add code to your skill to include the document in a response. You send Alexa the Alexa.Presentation.APL.RenderDocument directive in your response. In the directive properties, you pass a link to the document and the full data source JSON.
To add the RenderDocument directive to the skill response
- Open your skill code in an IDE. For an Alexa-hosted skill, navigate to the Code tab in the developer console.
- Find the intent handler to update. For the "Hello World" sample skill, find the HelloWorldIntentHandler.
- To add the RenderDocument directive, add the if...else block to the handle function between the first line of the code and the return statement:

```javascript
handle(handlerInput) {
    const speakOutput = handlerInput.t('HELLO_MSG');

    // ADD THE NEW IF BLOCK HERE

    return handlerInput.responseBuilder
        .speak(speakOutput)
        //.reprompt('add a reprompt if you want to keep the session open for the user to respond')
        .getResponse();
}
```

- Save your updates to the code.
If…else block to return RenderDocument

```javascript
if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
    console.log("The user's device supports APL");
    const documentName = "HelloWorldDocument"; // Name of the document saved in the authoring tool
    const token = documentName + "Token";
    // Add the RenderDocument directive to the response
    handlerInput.responseBuilder.addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        token: token,
        document: {
            src: 'doc://alexa/apl/documents/' + documentName,
            type: 'Link'
        },
        datasources: {
            "helloWorldDataSource": {
                "primaryText": "Hello World!",
                "secondaryText": "Welcome to Alexa Presentation Language!",
                "color": "@colorTeal800"
            }
        }
    });
} else {
    // Just log the fact that the device doesn't support APL.
    // In a real skill, you might provide different speech to the user.
    console.log("The user's device doesn't support APL. Retest on a device with a screen.");
}
```
The following code example shows the full HelloWorldIntentHandler with the APL code.
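The sketch below assembles the full handler from the handle function skeleton and the if...else block above. The Alexa stub and the fake handlerInput at the top and bottom are hypothetical stand-ins added so the sketch runs outside the Alexa runtime; in the real skill, Alexa comes from require('ask-sdk-core') and the SDK supplies handlerInput.

```javascript
// In the real skill: const Alexa = require('ask-sdk-core');
// Minimal hypothetical stubs so this sketch runs locally.
const Alexa = {
    getRequestType: (envelope) => envelope.request.type,
    getIntentName: (envelope) => envelope.request.intent.name,
    getSupportedInterfaces: (envelope) => envelope.context.System.device.supportedInterfaces
};

const HelloWorldIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'HelloWorldIntent';
    },
    handle(handlerInput) {
        const speakOutput = handlerInput.t('HELLO_MSG');

        if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            const documentName = "HelloWorldDocument"; // Name saved in the authoring tool
            const token = documentName + "Token";
            handlerInput.responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                token: token,
                document: {
                    src: 'doc://alexa/apl/documents/' + documentName,
                    type: 'Link'
                },
                datasources: {
                    helloWorldDataSource: {
                        primaryText: "Hello World!",
                        secondaryText: "Welcome to Alexa Presentation Language!",
                        color: "@colorTeal800"
                    }
                }
            });
        } else {
            console.log("The user's device doesn't support APL. Retest on a device with a screen.");
        }

        return handlerInput.responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};

// Exercise the handler with a fake APL-capable request envelope.
const sentDirectives = [];
const fakeHandlerInput = {
    requestEnvelope: {
        request: { type: 'IntentRequest', intent: { name: 'HelloWorldIntent' } },
        context: { System: { device: { supportedInterfaces: { 'Alexa.Presentation.APL': {} } } } }
    },
    t: () => 'Hello World!',
    responseBuilder: {
        addDirective(directive) { sentDirectives.push(directive); return this; },
        speak(text) { this._speech = text; return this; },
        getResponse() { return { outputSpeech: this._speech, directives: sentDirectives }; }
    }
};
const response = HelloWorldIntentHandler.handle(fakeHandlerInput);
// response.directives[0].type is 'Alexa.Presentation.APL.RenderDocument'
```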
Before you can test your updates, you must deploy your skill code.
To save and deploy an Alexa-hosted skill
- Make sure you're on the Code tab.
- Click Save, and then click Deploy.
The RenderDocument code shown sets the document property to a link to your document saved in the authoring tool. To make APL documents saved in the authoring tool available to the skill back-end code, you must build the interaction model for the skill.
To rebuild the interaction model
- Return to the developer console and navigate to the Build tab.
- Navigate to one of the interaction model sections, such as Invocations > Skill Invocation Name, and then click Build Model.
Step 5: Test your updated skill
You can test your skill with a device or in the skill simulator in the developer console.
To test your skill in the developer console
- In the developer console, navigate to the Test tab.
- For the Skill testing is enabled in option, select Development.
- To display the simulator for a device with a screen, select Device Display.
- Under Alexa Simulator, invoke your skill, and then enter your test utterances. For the Hello World skill, enter your invocation name, and then enter the utterance "hello".
- Scroll past the Skill I/O section to see the device simulator. You should see the "Hello World" document you built earlier.
Troubleshooting the Hello World skill
If the simulator doesn't display the APL content, check the following issues.
Issue: Error stating that the URI is not valid
Symptoms
When you invoke the intent, Alexa responds with the error message "The U.R.I. configured for the specified skill is not valid."
This error occurs when the URI you provided in the document.src property for the RenderDocument directive doesn't match a document stored with your skill.
Try this
You must rebuild your interaction model to make documents saved in the authoring tool available to your skill. This error can occur when you have saved the document, but haven't yet rebuilt the model. Rebuild the model and test the skill again.
Try this
The URI you pass to the document.src property must be in the format doc://alexa/apl/documents/{documentName}, where {documentName} is the name for the document in the authoring tool. In this tutorial, you should save your document as HelloWorldDocument. Therefore, the correct URI is doc://alexa/apl/documents/HelloWorldDocument.
- Recheck your code against the examples shown in Add the RenderDocument directive to the skill response. Make sure you spelled all names correctly.
- Return to the authoring tool and confirm that you saved the document with the correct name.
Issue: The APL content on screen is missing information or the screen is blank
Symptoms
When testing the skill, Alexa doesn't report any errors. However, the content displayed on the screen is missing information. For example, the "Hello World" line displays, but the "Welcome to Alexa Presentation Language!" line is missing. Alternatively, the entire screen is blank.
Try this
This issue occurs if the data-binding expressions in the document don't match the structure of the data source. If the entire screen is blank, your RenderDocument directive might not be configured to pass the full data source along with your document.
To make sure the data-binding expressions match the data source
- Make sure that your call to addDirective() sets the datasources property to the JSON for the helloWorldDataSource:

```javascript
handlerInput.responseBuilder.addDirective({
    type: 'Alexa.Presentation.APL.RenderDocument',
    token: token,
    document: {
        src: 'doc://alexa/apl/documents/' + documentName,
        type: 'Link'
    },
    datasources: {
        "helloWorldDataSource": {
            "primaryText": "Hello World!",
            "secondaryText": "Welcome to Alexa Presentation Language!",
            "color": "@colorTeal800"
        }
    }
});
```
- Make sure that all properties in the data source match the properties used in the document. The sample Hello World document uses three expressions: ${helloWorldDataSource.primaryText}, ${helloWorldDataSource.secondaryText}, and ${helloWorldDataSource.color}.
- In your document, make sure that the mainTemplate.parameters array contains the string helloWorldDataSource.
Next steps
- Go deeper with a course on APL – Alexa Learning Lab
- Learn more about the AlexaHeadline responsive template – AlexaHeadline
- Explore other responsive templates – Responsive Templates
- Learn more about how APL works and the different elements you build – What Makes Up an APL Visual Response?
- Learn more about using the APL authoring tool – Build Documents in the Developer Console
Last updated: Nov 28, 2023