Voice Interface and User Experience Testing for a Custom Skill


During the certification process the certification team runs voice user interface and user experience tests to verify the following items for your custom skill:

  • The skill aligns with several key features of Alexa that help create a great experience for customers.
  • The intent schema, the set of sample utterances, and the list of values for any custom slot types are correct, complete, and adhere to voice design best practices. For more details, see Create the Interaction Model for Your Skill.

These tests address the following goals:

  • Increase the different ways users can phrase requests to your skill.
  • Evaluate the ease of speech recognition during interaction with your skill. Was Alexa able to recognize the right words?
  • Improve language understanding. When Alexa recognizes the right words, did Alexa understand what to do?
  • Make sure that users can speak to Alexa naturally and spontaneously.
  • Verify that Alexa understands most requests within the context of a skill's functionality.
  • Verify that Alexa responds to users' requests in an appropriate way, by either fulfilling the request or explaining why the request isn't possible.

These tests verify that your skill adheres to the Alexa Design Guide. Review these guidelines during this testing.

To return to the testing checklist, see Skill Certification Testing.

Session management

Every response sent from your skill to the Alexa service includes a flag indicating whether the conversation with the user (the session) should end or continue. If the flag is set to continue, Alexa then listens and waits for the user's response. For Amazon devices such as Amazon Echo that have a blue light ring, the device lights up to give the user a visual cue that Alexa is listening for the user's response. On Echo Show or Fire TV Cube, the bottom of the screen flashes blue. On Echo Spot, a blue light ring flashes around the circular screen.

This test verifies that the text-to-speech provided by your skill and the session flag work together for a good user experience. Responses that ask questions leave the session open for a reply, while responses that fulfill the user's request close the session.
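The session flag described above corresponds to the `shouldEndSession` field in the skill's JSON response. The sketch below shows a minimal response envelope; the field names follow the Alexa custom skill response format, but the helper function itself is illustrative, not part of any SDK:

```python
def build_response(speech_text: str, should_end_session: bool) -> dict:
    """Build a minimal Alexa custom skill response envelope.

    shouldEndSession is the flag Alexa reads to decide whether to
    keep listening (False) or end the interaction (True).
    """
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": should_end_session,
        },
    }

# A response that asks a question leaves the session open for a reply.
question = build_response("Which city do you want the weather for?", False)

# A response that fulfills the request closes the session.
answer = build_response("It's sunny in Seattle today.", True)
```

During this test you are verifying, from the outside, that each spoken response pairs with the right value of this flag: questions with `False`, completions with `True`.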

Test 1

Invoke the skill without specifying an intent, using the launch phrase for your locale, for example:

  • Open <Invocation Name>. (English)
  • افتحي <Invocation Name>. (Arabic)
  • Ouvre <Invocation Name>. (French)
  • Öffne <Invocation Name>. (German)
  • <Invocation Name> खोलो. (Hindi)
  • Apri <Invocation Name>. (Italian)
  • <Invocation Name>を開いて (Japanese)
  • Abrir <Invocation Name>. (Portuguese)
  • Abre <Invocation Name>. (Spanish)

Respond to the prompt provided by the skill and verify that you get a correct response.

Expected results: After every response that asks the user a question, the session remains open and the device waits for your response. After every response that completes the user's request, the interaction ends.

Test 2

Test a variety of intents – both those that ask questions and those that complete the user's request.

Expected results: After every response that asks the user a question, the session remains open and the device waits for your response. After every response that completes the user's request, the interaction ends.

Intent and slot combinations

A skill may have several intents and slots. This test verifies that each intent returns the expected response with different combinations of slots.

Test 1

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill in your locale, for example:

  • Ask <Invocation Name> to <do something> (English)
  • اسألي <Invocation Name> عن <something> (Arabic)
  • Vraag <Invocation Name> om <do something> (Dutch)
  • Demande à <Invocation Name> de <faire quelque chose> (French)
  • Öffne <Invocation Name> und <do something> (German)
  • <Invocation Name> से पूछो के <do something> (Hindi)
  • Domanda a <Invocation Name> di <fare qualcosa> (Italian)
  • <Invocation Name> を開いて <do something> (Japanese)
  • Abra o <Invocation Name> para <do something> (Portuguese)
  • Pídele a <Invocation Name> que <do something> (Spanish)

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Evaluate the response for each intent.

Expected results: The response is appropriate for the context of the request. For example, if the request includes a slot value, the response is relevant to that information. If a request to that same intent does not include the slot, the response uses a default or asks the user for clarification.

You may want to use a table of intent and slot values to track this test and ensure that you test every intent and slot combination. For example:

  Intent      | Slot combination  | Sample utterance to test
  ------------|-------------------|---------------------------------------------------------
  IntentName  | SlotOne           | This is an utterance to test this intent and slot one
  IntentName  | SlotTwo           | This is an utterance to test this intent and slot two
  IntentName  | SlotOne, SlotTwo  | This is an utterance to test this intent with both slot one and slot two
  Each additional valid intent and slot combination | - | -
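A tracking table like the one above can also be kept in code. The following is a hypothetical sketch (the intent and slot names are placeholders from the example table) that reports which combinations still need testing:

```python
# Hypothetical test matrix: (intent, slots exercised, sample utterance).
TEST_MATRIX = [
    ("IntentName", ("SlotOne",), "utterance exercising slot one"),
    ("IntentName", ("SlotTwo",), "utterance exercising slot two"),
    ("IntentName", ("SlotOne", "SlotTwo"), "utterance exercising both slots"),
]

def remaining_combinations(matrix, tested):
    """Return the (intent, slots) combinations not yet covered."""
    return [(intent, slots) for intent, slots, _ in matrix
            if (intent, slots) not in tested]

# Example: after testing the two single-slot cases, one combination remains.
done = {("IntentName", ("SlotOne",)), ("IntentName", ("SlotTwo",))}
```

Keeping the matrix in one place makes it easy to confirm, before submitting for certification, that no valid intent and slot combination was skipped.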


Intent response design

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test 1

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill in your locale, for example:

  • اسألي <Invocation Name> عن <something> (Arabic)
  • Vraag <Invocation Name> om <do something> (Dutch)

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent.

Expected results: The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • For intents that are not typically used in a one-shot manner, provides a relevant response or informs users how to begin using the skill.
  • Is spoken in the same language used by the Alexa account. For instance, when testing with an account configured for German, the text-to-speech responses are in German; when testing with an account configured for English (US), the responses are in English.

For a better user experience, the response should also be:

  • Easy to understand
  • Written for the ear, not the eye

Test 2

If your skill's responses contain a wake word, invoke each response that contains a wake word on an Alexa device.

Expected results: The wake word in the response does not wake up the device. One way to ensure this is to make sure that there are no pauses after the wake word.
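One way to screen for this wake-word problem before testing on a device is a simple text check over your skill's response strings. The sketch below is only a heuristic and an assumption on my part: the wake-word list reflects the common choices (Alexa, Amazon, Echo, Computer), and it treats punctuation after the wake word as a proxy for a spoken pause. Actual behavior depends on the device's audio processing, so on-device testing is still required.

```python
# Assumed wake-word list; treat it as illustrative, not exhaustive.
WAKE_WORDS = ("alexa", "amazon", "echo", "computer")

def may_wake_device(response_text: str) -> bool:
    """Heuristic: flag responses where a wake word is immediately
    followed by punctuation that text-to-speech renders as a pause,
    which can cause a nearby device to wake up."""
    lowered = response_text.lower()
    for word in WAKE_WORDS:
        start = lowered.find(word)
        while start != -1:
            nxt = lowered[start + len(word): start + len(word) + 1]
            if nxt in {",", ".", ";", "!", "?"}:
                return True
            start = lowered.find(word, start + 1)
    return False
```

For example, "Just say Alexa, open the quiz" is flagged because of the pause after "Alexa", while "Ask Alexa to open the quiz" is not.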

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test Expected Results

1.

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill, for example:

  • Vraag <Invocation Name> om <do something>

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent

The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • Responses from intents that are not typically used in a one-shot manner provide a relevant response or inform users how to begin using the skill.
  • The response is spoken in the same language used by the Alexa account. For instance, when testing with an account configured with Dutch, the text-to-speech responses are in Dutch. When testing with an account configured with English (US), the text-to-speech responses are in English.

For a better user experience, the response should also meet these recommendations:

  • Easy to understand
  • Written for the ear, not the eye

2.

On an Alexa device, invoke each of your skill's responses that contains a wake word.

A wake word in a response might wake up the device. Make sure that your skill's responses do not wake up the device. One way to do this is to make sure that there are no pauses after the wake word.

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test Expected Results

1.

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill, for example:

  • Ask <Invocation Name> to <do something>

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent

The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • Responses from intents that are not typically used in a one-shot manner provide a relevant response or inform users how to begin using the skill.
  • The response is spoken in the same language used by the Alexa account. For instance, when testing with an account configured with German, the text-to-speech responses are in German. When testing with an account configured with English (US), the text-to-speech responses are in English.

For a better user experience, the response should also meet these recommendations:

  • Easy to understand
  • Written for the ear, not the eye

2.

On an Alexa device, invoke each of your skill's responses that contains a wake word.

A wake word in a response might wake up the device. Make sure that your skill's responses do not wake up the device. One way to do this is to make sure that there are no pauses after the wake word.

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test Expected Results

1.

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill, for example:

  • Demande à <Invocation Name> de <faire quelque chose>

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent

The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • Responses from intents that are not typically used in a one-shot manner provide a relevant response or inform users how to begin using the skill.
  • The response is spoken in the same language used by the Alexa account. For instance, when testing with an account configured with French, the text-to-speech responses are in French. When testing with an account configured with English (US), the text-to-speech responses are in English.

For a better user experience, the response should also meet these recommendations:

  • Easy to understand
  • Written for the ear, not the eye

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test Expected Results

1.

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill, for example:

  • Öffne <Invocation Name> und <do something>

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent

The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • Responses from intents that are not typically used in a one-shot manner provide a relevant response or inform users how to begin using the skill.
  • The response is spoken in the same language used by the Alexa account. For instance, when testing with an account configured with German, the text-to-speech responses are in German. When testing with an account configured with English (US), the text-to-speech responses are in English.

For a better user experience, the response should also meet these recommendations:

  • Easy to understand
  • Written for the ear, not the eye

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test Expected Results

1.

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill, for example:

  • <Invocation Name> से पूछो अगर <do something>

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent

The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • Responses from intents that are not typically used in a one-shot manner provide a relevant response or inform users how to begin using the skill.
  • The response is spoken in the same language used by the Alexa account. For instance, when testing with an account configured with Hindi, the text-to-speech responses are in Hinglish (Mixed Hindi and English script). When testing with an account configured with English (US), the text-to-speech responses are in English.

For a better user experience, the response should also meet these recommendations:

  • Easy to understand
  • Written for the ear, not the eye

2.

On an Alexa device, invoke each of your skill's responses that contains a wake word.

A wake word in a response might wake up the device. Make sure that your skill's responses do not wake up the device. One way to do this is to make sure that there are no pauses after the wake word.

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.
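The requirement that responses match the language of the Alexa account can be met by keying response strings on the locale carried in every incoming request (the `request.locale` field, e.g. `"de-DE"`, `"hi-IN"`, `"en-US"`). A minimal sketch, assuming a hypothetical message catalog:

```python
# Hypothetical message catalog keyed by Alexa locale codes.
MESSAGES = {
    "en-US": "Here is your result.",
    "de-DE": "Hier ist dein Ergebnis.",
    "hi-IN": "यह रहा आपका परिणाम।",
}

def localized_speech(locale, default="en-US"):
    """Pick the response string for the request's locale, falling back to
    a default language rather than failing the request."""
    return MESSAGES.get(locale, MESSAGES[default])

print(localized_speech("de-DE"))  # Hier ist dein Ergebnis.
print(localized_speech("fr-FR"))  # unsupported locale: falls back to English
```

In a shipped skill, fallback behavior is a design choice: silently falling back to English can itself fail this test in a non-English locale, so supported locales should each have a complete catalog.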

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test Expected Results

1.

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill, for example:

  • Domanda a <Invocation Name> di <fare qualcosa>

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent

The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • Responses from intents that are not typically used in a one-shot manner provide a relevant response or inform users how to begin using the skill.
  • The response is spoken in the same language used by the Alexa account. For instance, when testing with an account configured with Italian, the text-to-speech responses are in Italian. When testing with an account configured with English (US), the text-to-speech responses are in English.

For a better user experience, the response should also meet these recommendations:

  • Easy to understand
  • Written for the ear, not the eye

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test Expected Results

1.

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill, for example:

  • <Invocation Name> を開いて <do something>

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent

The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • Responses from intents that are not typically used in a one-shot manner provide a relevant response or inform users how to begin using the skill.
  • The response is spoken in the same language used by the Alexa account. For instance, when testing with an account configured with Japanese, the text-to-speech responses are in Japanese. When testing with an account configured with English (US), the text-to-speech responses are in English.

For a better user experience, the response should also meet these recommendations:

  • Easy to understand
  • Written for the ear, not the eye

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test Expected Results

1.

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill, for example:

  • Iniciar o <Invocation Name> para <do something>
  • Inicie a <Invocation Name> para <do something>

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent

The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • Responses from intents that are not typically used in a one-shot manner provide a relevant response or inform users how to begin using the skill.
  • The response is spoken in the same language used by the Alexa account. For instance, when testing with an account configured with Portuguese, the text-to-speech responses are in Portuguese. When testing with an account configured with English (US), the text-to-speech responses are in English.

For a better user experience, the response should also meet these recommendations:

  • Easy to understand
  • Written for the ear, not the eye

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. The Alexa Design Guide provides guidance for creating natural sounding responses in your skill.

You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

Test Expected Results

1.

Test the skill's intent responses using different combinations of slot values.

You can use one of the one-shot phrases for starting the skill, for example:

  • Pídele a <Invocation Name> que <do something>

Be sure to invoke every intent, not just those that are typically used in a one-shot manner.

Try a variety of sample utterances for each intent.

If the skill vocalizes any examples for users to try, use those examples exactly as instructed by the skill.

Evaluate the response for each intent

The response meets each of the following requirements:

  • Answers the user's request concisely.
  • Provides information in consumable chunks.
  • Does not include technical or legal jargon.
  • Responses from intents that are not typically used in a one-shot manner provide a relevant response or inform users how to begin using the skill.
  • The response is spoken in the same language used by the Alexa account. For instance, when testing with an account configured with Spanish, the text-to-speech responses are in Spanish. When testing with an account configured with English (US), the text-to-speech responses are in English.

For a better user experience, the response should also meet these recommendations:

  • Easy to understand
  • Written for the ear, not the eye

You can use the same set of intent and slot combinations used for the Intent Response (Intent and Slot Combinations) test.
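A response that fulfills a one-shot request should speak the answer and close the session, which is controlled by the `shouldEndSession` flag in the skill's response JSON. A minimal sketch of such a response (the example answer text is illustrative):

```python
import json

def one_shot_response(answer_text):
    """Build a minimal Alexa response that answers a one-shot request
    and closes the session."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": answer_text},
            # True closes the session: the device stops listening after
            # the response is spoken.
            "shouldEndSession": True,
        },
    }

print(json.dumps(one_shot_response("Today's high is 21 degrees."), indent=2))
```

Responses that instead ask the user a question should set `shouldEndSession` to `False` so the device keeps listening, as covered under Session management above.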

Supportive prompting

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • افتحي <Invocation Name>.

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • اسألي <Invocation Name> عن <something> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations on designing prompts.
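Handling a no-intent invocation means responding to the `LaunchRequest` with a brief prompt, a reprompt for when the user stays silent, and an open session. A minimal sketch (the prompt wording and skill functions are hypothetical examples):

```python
def launch_prompt_response(skill_name):
    """Respond to a LaunchRequest (skill invoked with no intent) with a
    brief prompt that names the skill, and keep the session open for the
    user's reply."""
    prompt = (f"Welcome to {skill_name}. "
              "You can ask for a fact or start a quiz. Which would you like?")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": prompt},
            # The reprompt plays if the user does not answer the prompt.
            "reprompt": {
                "outputSpeech": {
                    "type": "PlainText",
                    "text": "Would you like a fact or a quiz?",
                }
            },
            # False keeps the session open so Alexa listens for a reply.
            "shouldEndSession": False,
        },
    }
```

Note the prompt names the skill, offers the most common options, and avoids "to do xyz, say xyz" phrasing, matching the expected results above.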

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • Open <Invocation Name>.

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • Vraag <Invocation Name> om <do something> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations on designing prompts.

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • Open <Invocation Name>.

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • Ask <Invocation Name> to <do something> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations on designing prompts.
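One way a skill can keep prompting until all needed slot values are collected is the `Dialog.ElicitSlot` directive, which asks the user for a specific slot and keeps the session open. A minimal sketch, assuming a hypothetical `CityName` slot:

```python
def elicit_missing_slot(slot_name, prompt_text):
    """Prompt for one missing slot value and keep the session open.
    The Dialog.ElicitSlot directive tells Alexa which slot the user's
    next utterance should fill."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": prompt_text},
            "directives": [
                {"type": "Dialog.ElicitSlot", "slotToElicit": slot_name}
            ],
            # Keep the session open so the user can supply the value.
            "shouldEndSession": False,
        },
    }

# Hypothetical example: the user asked for weather without naming a city.
print(elicit_missing_slot("CityName", "For which city?"))
```

Skills can also mark slots as required in the interaction model and let Alexa's automatic dialog delegation handle this prompting; either approach satisfies the partial-intent test as long as the prompts stay brief and contextual.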

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • Ouvre <Invocation Name>.

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • Demande à <Invocation Name> de <faire quelque chose> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations on designing prompts.

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • Öffne <Invocation Name>.

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • Öffne <Invocation Name> und <do something> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations about designing prompts.

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • <Invocation Name> खोलो.

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • <Invocation Name> से पूछो के <do something> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations on designing prompts.
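When a required slot is missing, one way to prompt for it is the Dialog.ElicitSlot directive. A hedged sketch, assuming the documented directive shape; the slot name and prompt text are invented:

```python
# Hedged sketch: responding to a partial intent by eliciting one missing slot.
# "Dialog.ElicitSlot" and "slotToElicit" are from the documented Dialog
# directives; the slot name ("toCity") and prompt are illustrative only.

def elicit_missing_slot(prompt_text, slot_name):
    """Re-prompt for a single missing slot and keep the session open."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": prompt_text},
            "shouldEndSession": False,
            "directives": [
                {"type": "Dialog.ElicitSlot", "slotToElicit": slot_name}
            ],
        },
    }

resp = elicit_missing_slot("Where would you like to go?", "toCity")
print(resp["response"]["directives"][0]["slotToElicit"])  # toCity
```

The skill keeps eliciting one slot at a time this way until all required information is collected, matching the expected results above.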

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • Apri <Invocation Name>.

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • Domanda a <Invocation Name> di <fare qualcosa> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations on designing prompts.

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • <Invocation Name>を開いて

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • <Invocation Name> を開いて <do something> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations on designing prompts.

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • Abrir <Invocation Name>.

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • Abrir o <Invocation Name> para <do something> (leave out slot data in the command)
  • Abre <Invocation Name> para <do something> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations on designing prompts.

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request).

In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios.

Test Expected Results

1.

Invoke the skill with no intent. You can do this by using a phrase that sends a LaunchRequest rather than an IntentRequest. For example:

  • Abre <Invocation Name>.

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

  • The skill prompts you for information about what you want to do.
  • The prompt includes the skill's name so you know you are in the right place.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.
  • If no information is needed from users after launch, the skill completes a core function and closes the session.

2.

Invoke the skill with a partial intent. You can do this by using a phrase that invokes the intent without including all the required slot data. For example:

  • Pídele a <Invocation Name> que <do something> (leave out slot data in the command)

Verify that you get a prompt, then respond to the prompt and verify that you get a correct response.

If the skill does not define any slots, you can skip this test, as it is not possible to send a partial intent.

  • The skill prompts you for the missing information.
  • The prompt gives you specific options about what to do, but is brief. If the skill has many functions, the prompt gives the most common.
  • The prompt does not give verbose instructions telling the user what to say (such as "to do xyz, say xyz"). The prompt is concise.
  • When you respond to the prompt, the skill continues prompting until all needed information is collected, then provides a contextualized, non-error response.

See the Alexa Design Guide for recommendations on designing prompts.

Invocation name

Users say the invocation name for a skill to begin an interaction. Inspect the skill's invocation name and verify that it meets the invocation name requirements described in Choosing the Invocation Name for a Custom Skill.

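A few of the published invocation name requirements can be linted before submission. A rough sketch covering only a subset of the rules; the word lists here are illustrative, not the complete published lists, and the linked guide is authoritative:

```python
# Hedged sketch: partial pre-submission check of invocation name rules
# (lowercase, generally two or more words, no wake words, no launch words).
# The word lists are abbreviated examples, not the full published lists.

LAUNCH_WORDS = {"ask", "tell", "open", "launch", "start", "load", "begin", "enable"}
WAKE_WORDS = {"alexa", "amazon", "echo", "computer"}

def invocation_name_problems(name):
    """Return a list of rule violations found in an invocation name."""
    problems = []
    words = set(name.lower().split())
    if name != name.lower():
        problems.append("must be lowercase")
    if len(name.split()) < 2:
        problems.append("should normally be two or more words")
    if words & WAKE_WORDS:
        problems.append("must not contain a wake word")
    if words & LAUNCH_WORDS:
        problems.append("must not contain a launch phrase word")
    return problems

print(invocation_name_problems("space geek"))       # []
print(invocation_name_problems("Ask Space Geek"))
```

A check like this catches only mechanical violations; naturalness and uniqueness still need the by-voice tests described in this document.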

One-shot phrasing for sample utterances

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The اسألي and اطلبي من phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent.

Noun phrases: phrases that can follow "اطلبي من <invocation name> …" or "اسألي <invocation name> عن …"

  • "اطلبي من <invocation name> توقعات برج الثور"
  • "اسألي <invocation name> عن لوني المفضل"

Questions, in both interrogative and inverted forms: phrases that can follow "أبغى أسأل <invocation name> …"

  • "اسألي <invocation name> وين سيارتي"
  • "اسألي <invocation name> وين هي سيارتي"

Commands: phrases that can follow "اطلبي من <invocation name> أن…"

  • "اطلبي من <invocation name> أن تجيبلي سيارة"

(In the examples above, the phrase following the invocation name is the sample utterance).

  • Noun, question, and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the ask phrase, the sample utterances are intuitive and natural.

2.

Launch the skill using the following common "اسألي" pattern:

  • اسألي <Invocation Name> عن <something>
  • This common "اسألي" pattern works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

3.

Launch the skill with the generic "اسألي" pattern (recommended test if this is a natural phrase for your skill):

  • اسألي <Invocation Name> <question>

Test with questions starting with different question words (منو, ايش, كيف, and so on).

The specific question words that sound natural with your skill may vary. For example, these types of questions do not flow well with "رائد الفضاء." A user is unlikely to say something like "اسألي رائد الفضاء عن حقائق الفضاء?"

  • The generic "اسألي" pattern works if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

4.

Launch the skill using the following common "اطلبي من" pattern:

  • اطلبي من <Invocation Name> أن <do something>
  • The common "اطلبي من" pattern works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

5.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill.

Note that not all of the phrases apply to all skills. For example, the "اسألي…إذا…" phrasing would probably not make sense for a skill asking about weather or tide information, so the skill would still pass this test even without this phrase.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The vraag and vertel phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent.

Noun phrases: phrases that can follow
"vraag <invocation name> om …" or
"vertel <invocation name> over…"

  • "vraag <invocation name> om mijn favoriete kleur"
  • "vertel <invocation name> over mijn afspraak vandaag om 3 uur 's middags"

Questions, in both interrogative and inverted forms: phrases that can follow "vraag <invocation name> …"

  • "Vraag <invocation name> waar is mijn auto"
  • "Vraag <invocation name> waar mijn auto is"

Commands: phrases that can follow "vertel <invocation name> om…" or "Vraag <invocation name> om…"

  • "Vraag <invocation name> om mijn auto te vinden"
  • "vertel <invocation name> mijn favoriete boek te vinden"

(In the examples above, the phrase following the invocation name is the sample utterance).

  • Noun, question, and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the 'vraag' and 'vertel' phrases, the sample utterances are intuitive and natural.

2.

Launch the skill using each of the following common "vraag" patterns (ideally do multiple variations for each pattern):

  • Vraag <Invocation Name> naar <something>
  • Vraag <Invocation Name> over <something>
  • Vraag <Invocation Name> om <do something>
  • Each of these common "vraag" patterns works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

3.

Launch the skill with the generic "vraag" pattern (recommended test if this is a natural phrase for your skill):

  • Vraag <Invocation Name> <question>

Test with questions starting with different question words (wie, wat, hoe, and so on).

The specific question words that sound natural with your skill may vary. For example, these types of questions do not flow well with "Ruimte Feitjes." A user is unlikely to say something like "Vraag Ruimte Feitjes wat is een ruimte feitje?"

  • The generic "vraag" pattern works if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

4.

Launch the skill using the following common "vertel" pattern:

  • Vertel <Invocation Name> om <do something>
  • The common "vertel" pattern works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

5.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill.

Note that not all of the phrases apply to all skills. For example, the "Vertel…dat…" phrasing would probably not make sense for a skill asking about weather or tide information, so the skill would still pass this test even without this phrase.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The ask and tell phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent.

Noun phrases: phrases that can follow
"ask <invocation name> for …" or
"tell <invocation name> about…"

  • "ask <invocation name> for my favorite color"
  • "tell <invocation name> about my appointment today at 3 pm"

Questions, in both interrogative and inverted forms: phrases that can follow "ask <invocation name> …"

  • "ask <invocation name> where is my car"
  • "ask <invocation name> where my car is"

Commands: phrases that can follow "tell <invocation name> to…" or "ask <invocation name> to…"

  • "ask <invocation name> to get me a car"
  • "tell <invocation name> to find my favorite book"

(In the examples above, the phrase following the invocation name is the sample utterance).

  • Noun, question, and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the ask and tell phrases, the sample utterances are intuitive and natural.

2.

Launch the skill using each of the following common "ask" patterns (ideally do multiple variations for each pattern):

  • Ask <Invocation Name> for <something>
  • Ask <Invocation Name> about <something>
  • Ask <Invocation Name> to <do something>
  • Each of these common "ask" patterns works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

3.

Launch the skill with the generic "ask" pattern (recommended test if this is a natural phrase for your skill):

  • Ask <Invocation Name> <question>

Test with questions starting with different question words (who, what, how, and so on).

The specific question words that sound natural with your skill may vary. For example, these types of questions do not flow well with "Space Geek." A user is unlikely to say something like "Ask Space Geek what is a space fact?"

  • The generic "ask" pattern works if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

4.

Launch the skill using the following common "tell" pattern:

  • Tell <Invocation Name> to <do something>
  • The common "tell" pattern works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

5.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill.

Note that not all of the phrases apply to all skills. For example, the "Ask…whether…" phrasing would probably not make sense for a skill asking about weather or tide information, so the skill would still pass this test even without this phrase.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.
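The noun, question, and command forms tested above live in the skill's interaction model as sample utterances. A hedged sketch of how they might look; the intent name ("FindCarIntent"), the utterances, and the invocation name ("my garage") are illustrative, not from a real skill:

```python
# Hedged sketch: sample utterances for one intent, covering the three forms.
# The intent name, samples, and invocation name are invented for illustration.

intent = {
    "name": "FindCarIntent",
    "samples": [
        "the location of my car",  # noun phrase: "ask <invocation name> for ..."
        "where is my car",         # question, interrogative form
        "where my car is",         # question, inverted form
        "find my car",             # command: "tell <invocation name> to ..."
    ],
}

# Each sample should read naturally after a launch phrase:
print("ask my garage " + intent["samples"][1])  # ask my garage where is my car
```

Reading each sample aloud after "ask" and "tell" phrases, as in tests 2 through 4, is how you catch combinations that look fine on paper but sound unnatural spoken.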

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The demande à phrase is the most natural phrase for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with this phrase and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent.

Noun phrases: phrases that can follow
"Demande à <invocation name> de …" or
"Lance <invocation name> pour…"

  • "demande à <invocation name> de commander un taxi"
  • "lance <invocation name> pour demander un rendez-vous"

Commands: phrases that can follow "demande à <invocation name> de…"

  • "demande à <invocation name> de commander un taxi"
  • "lance <invocation name> pour demander un rendez-vous"

(In the examples above, the phrase following the invocation name is the sample utterance).

  • Noun, question, and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the "demande à" and "lance" phrases, the sample utterances are intuitive and natural.

2.

Launch the skill using each of the following common "demande" patterns (ideally do multiple variations for each pattern):

  • Demande à <Invocation Name> de <something>
  • Demande à <Invocation Name> pour <something>
  • Each of these common "demande" patterns works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

3.

Launch the skill with the generic "demande" pattern (recommended test if this is a natural phrase for your skill):

  • Demande à <Invocation Name> <question>

Test with questions starting with different question words (qui, quoi, comment, etc…).

The specific question words that sound natural with your skill may vary. For example, these types of questions do not flow well with "mes taxis." A user is unlikely to say something like "Demande à mes taxis qu'est ce que commander un taxi?"

  • The generic "demande" pattern works if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

5.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill.

Note that not all of the phrases apply to all skills. For example, the "Demande à…si…" phrasing would probably not make sense for a skill asking about weather or tide information, so the skill would still pass this test even without this phrase.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The frage and sage phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent. Noun phrases: phrases that can follow
"frage <invocation name> nach …" or
"sage <invocation name>, dass…"

  • "frage <invocation name> nach meinen Lieblingsfarben"
  • "sage <invocation name>, dass ich heute einen Termin um 15:30 Uhr habe"

Questions, in both interrogative and inverted forms: phrases that can follow "frage <invocation name> …"

  • "frage <invocation name>, wo mein Auto ist"

Commands: phrases that can follow "sage <invocation name>…" or "frage <invocation name>…"

  • "sage <invocation name>, hol mein Auto"
  • "frage <invocation name>, nach meinem Lieblingsbuch"

(In the examples above, the phrase following the invocation name is the sample utterance).

  • Noun, question, and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the "frage" and "sage" phrases, the sample utterances are intuitive and natural.

2.

Launch the skill using each of the following common "frage" patterns (ideally do multiple variations for each pattern):

  • Frage <Invocation Name> nach <something>
  • Frage <Invocation Name> ob <something>
  • Frage <Invocation Name>, <something>
  • Each of these common "frage" patterns works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

3.

Launch the skill with the generic "frage" pattern (recommended test if this is a natural phrase for your skill):

  • Frage <Invocation Name> <question>

Test with questions starting with different question words (wer, was, wie, and so on).

The specific question words that sound natural with your skill may vary. For example, these types of questions do not flow well with "Space Geek." A user is unlikely to say something like "Frage Space Geek was ein Weltraumfakt ist?"

  • The generic "frage" pattern works if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

4.

Launch the skill using the following common "sage" pattern:

  • Sage <Invocation Name>, <do something>
  • The common "sage" pattern works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

5.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill.

Note that not all of the phrases apply to all skills. For example, the "Frage…ob…" phrasing would probably not make sense for a skill asking about weather or tide information, so the skill would still pass this test even without this phrase.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The से पूछो, चालू करो, शुरू करो, प्रारंभ करो, and का इस्तेमाल करो phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent.

Noun phrases: phrases that can follow
"<invocation name> खोलो और …" or
"<invocation name> का इस्तेमाल करें और …" or
"<invocation name> चालू कर और …"

  • "<invocation name> खोल और हवाई जहाज़ की टिकट book करो"
  • "<invocation name> इस्तेमाल करें और जाँच में सहायता"
  • "<invocation name> चालू कर और मेरा पसंदीदा रंग बता"

Questions, in both interrogative and inverted forms: phrases that can follow "<invocation name> से पूछो …"

  • "<invocation name> से पूछो के मेरी गाड़ी कहा है"
  • "<invocation name> से पूछो के कहाँ है मेरी गाड़ी"

Commands: phrases that can follow "<invocation name> का इस्तेमाल करें और…"

  • "<invocation name> का इस्तेमाल करें और मुझे नई गाड़ी ला दें"

(In the examples above, the italic phrase is the sample utterance).

  • Noun, question, and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the ask phrases, the sample utterances are intuitive and natural.

2.

Launch the skill using each of the following common "खोलो", "चालू करो", "शुरू करो", and "का इस्तेमाल करें" patterns (ideally do multiple variations for each pattern):

  • <Invocation Name> खोलो और <something>
  • <Invocation Name> चालू करो और <something>
  • <Invocation Name> शुरू करो और <do something>
  • The skill successfully launches and completes the request.
  • The phrases are easy and natural to say.

3.

Launch the skill with the generic "खोलो" pattern (recommended test if this is a natural phrase for your skill):

  • <Invocation Name> खोलो <question>

Test with questions starting with different question words (कौन, क्या, कैसे, etc).

The specific question words that sound natural with your skill may vary.

  • The generic "खोलो" pattern works if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

4.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill. Note that not all of the phrases apply to all skills.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The chiedi a and domanda a phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent.

Noun phrases: phrases that can follow "chiedi a <invocation name> …" or "domanda a <invocation name> …"

  • "chiedi a <invocation name> il mio colore preferito"
  • "domanda a <invocation name> il mio romanzo preferito"

Questions: phrases that can follow "Chiedi a <invocation name>…"

  • "Chiedi a <invocation name> dov'è la mia macchina"
  • "domanda a <invocation name> dov'è la mia macchina"

Commands: phrases that can follow "chiedi a <invocation name> (se può | di)…" or "domanda a <invocation name> (se può | di)…"

  • "chiedi a <invocation name> se può trovarmi una macchina"
  • "domanda a <invocation name> se può trovare il mio romanzo preferito"
  • "chiedi a <invocation name> di trovarmi una macchina"
  • "domanda a <invocation name> di trovare il mio romanzo preferito"

(In the examples above, the italic phrase is the sample utterance).

  • Noun, question, and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the chiedi a and domanda a phrases, the sample utterances are intuitive and natural.

2.

Launch the skill using each of the following common "ask" patterns (ideally do multiple variations for each pattern):

  • Chiedi a <Invocation Name> se può <fare qualcosa>
  • Domanda a <Invocation Name> se può <fare qualcosa>
  • Each of these common "ask" patterns works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

3.

Launch the skill with the generic "chiedi a" pattern (recommended test if this is a natural phrase for your skill):

  • Chiedi a <Invocation Name> <question>

Test with questions starting with different question words (chi, cosa, come, and so on).

The specific question words that sound natural with your skill may vary. For example, these types of questions do not flow well with "I miei taxi." A user is unlikely to say something like "Domanda ai miei taxi cos'è chiamare un taxi?"

  • The generic "chiedi a" pattern works if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

4.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill.

Note that not all of the phrases apply to all skills. For example, the "Domanda a…se può…" phrasing would probably not make sense for a skill asking about weather or tide information, so the skill would still pass this test even without this phrase.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The 教えて and 調べて phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent.

Noun phrases: phrases that can follow
"<invocation name><something>(を)教えて" or
"<invocation name>を使って<something>(を)教えて"

  • "<invocation name>でトマトのレシピを教えて"
  • "<invocation name>で今日の午後三時の予定について教えて"

Questions, in both interrogative and inverted forms: phrases that can follow "<invocation name><something>教えて"

  • "<invocation name>で私の自動車がどこにあるか教えて"
  • "<invocation name>でどこに私の自動車があるか教えて"

Commands: phrases that can follow "<invocation name><something>" or "<invocation name>を使って<something>"

  • "<invocation name>リビングのエアコン消して"
  • "<invocation name>を使ってリビングのエアコン消して"
  • Noun, question, and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the 教えて and 調べて phrases, the sample utterances are intuitive and natural.

2.

Launch the skill using each of the following common "(を)調べて" patterns (ideally do multiple variations for each pattern):

  • <Invocation Name><something>(を)調べて
  • <Invocation Name>を使って<something>(を)調べて
  • <Invocation Name>を開いて<something>(を)調べて
  • Each of these common "(を)調べて" patterns works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

3.

Launch the skill with the generic "調べて" pattern (recommended test if this is a natural phrase for your skill):

  • <Invocation Name><question>調べて

Test with questions starting with different question words ("誰", "何", "どのように", and so on).

The specific question words that sound natural with your skill may vary. For example, these types of questions do not flow well with "宇宙の豆知識." A user is unlikely to say something like "宇宙の豆知識を開いて、宇宙に関する事実は何"

  • The generic "調べて" pattern works if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

4.

Launch the skill using the following common "(を)教えて" pattern:

  • <Invocation Name><something>(を)教えて
  • <Invocation Name>を使って<something>(を)教えて
  • <Invocation Name>を開いて<something>(を)教えて
  • The common "(を)教えて" pattern works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

5.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill.

Note that not all of the phrases apply to all skills. For example, the "<呼び出し名>で<…かどうか>調べて" phrasing would probably not make sense for a skill asking about weather or tide information, so the skill would still pass this test even without this phrase.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The peça ao, peça à, diga ao, and diga à phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent.

Phrases with nouns or verbs: phrases that can follow
"peça ao <invocation name> por…" or
"peça à <invocation name> pela…" or
"peça para o <invocation name>…" or
"diga ao <invocation name> para…" or
"diga à <invocation name> que…"

  • "peça ao <invocation name> por ajuda"
  • "peça à <invocation name> pela previsão do tempo"
  • "peça para o <invocation name> tocar uma música"
  • "diga ao <invocation name> para pesquisar uma receita de macarrão com molho de tomate"
  • "diga à <invocation name> que toque músicas clássicas"

A question in the interrogative form: a phrase that can follow "pergunte para o <invocation name>…" or "pergunte para a <invocation name>…"

  • "pergunte para o <invocation name> onde está o meu carro"

(In the examples above, the italic phrase is the sample utterance).

  • Noun, question and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the ask and tell phrases, the sample utterances are intuitive and natural.

2.

Launch the skill using each of the following common "peça ao", "diga à", "diz para o", "fala com a", and "pede pro" patterns (ideally do multiple variations for each pattern):

  • Peça ao <Invocation Name> para <do something>
  • Diga à <Invocation Name> para <do something>
  • Diz para o <Invocation Name> que <do something>
  • Fala com a <Invocation Name> sobre <something>
  • Pede pro <Invocation Name> para <do something>
  • Each of these common "pedir ao" and "pedir à" patterns works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

3.

Launch the skill with the generic "pergunta para o" and "pergunta para a" pattern (recommended test if this is a natural phrase for your skill):

  • pergunta para o <Invocation Name> <question>
  • pergunta para a <Invocation Name> <question>

Test with questions starting with different question words (quem, qual, que, o que, como, and so on).

The specific question words that sound natural with your skill may vary. For example, these types of questions do not flow well with "Meu Táxi." A user is unlikely to say something like "Pergunte para o Meu Táxi o que é chamar um táxi?"

  • The generic "pergunta para o" and "pergunta para a" patterns work if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

4.

Launch the skill using the following common "falar com o" and "falar com a" pattern:

  • Fale com o <Invocation Name> para <do something>
  • Fale com a <Invocation Name> para <do something>
  • The common "falar com o" and "falar com a" pattern works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

5.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill.

Note that not all of the phrases apply to all skills. For example, the "Peça ao…se" and "Peça à…se" phrasing would probably not make sense for a skill asking about weather or tide information, so the skill would still pass this test even without this phrase.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase.

The pídele a and pregúntale a phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say.

In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

Test Expected Results

1.

Inspect the skill's sample utterances to ensure that they contain the right phrasing to match the different phrases for invoking a skill with a specific intent.

Noun phrases: phrases that can follow
"Pedirle a <invocation name> por …" or
"Dígale a <invocation name> sobre…" or
"Pregúntale a <invocation name> para…"

  • "Pedirle a <invocation name> por mi color favorito"
  • "Dígale a <invocation name> sobre mi cita hoy a las tres de la tarde"
  • "Pregúntale a <invocation name> para la fecha de mi nacimiento"

Commands: phrases that can follow "Pedirle a <invocation name> que…" or "dígale a <invocation name> que…" or "pregúntale a <invocation name> que…"

  • "Pedirle a <invocation name> que consiga un taxi"
  • "Dígale a <invocation name> que encuentre mi libro favorito"
  • "Pregúntale a <invocation name> que busque la ubicación de mi mascota"

(In the examples above, the italic phrase is the sample utterance).

  • Noun, question, and command utterances are all included.
  • At least five varieties of these three types of phrases are present (five noun forms, five question forms, and five command forms)
  • When combined with the ask and tell phrases, the sample utterances are intuitive and natural.

2.

Launch the skill using each of the following common "pídele a" patterns (ideally do multiple variations for each pattern):

  • Pídele a <Invocation Name> por <something>
  • Pregúntale a <Invocation Name> sobre <something>
  • Pídele a <Invocation Name> que <do something>
  • Each of these common "pídele a" patterns works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

3.

Launch the skill with the generic "Pregúntale a" pattern (recommended test if this is a natural phrase for your skill):

  • Pregúntale a <Invocation Name> <question>

Test with questions starting with different question words (quién, qué, cómo, and so on).

The specific question words that sound natural with your skill may vary. For example, these types of questions do not flow well with "Horóscopo Diario." A user is unlikely to say something like "Pregúntale a Horóscopo Diario dónde está escorpión?"

  • The generic "Pregúntale a" pattern works if appropriate for your skill.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

4.

Launch the skill using the following common "Dile a" pattern:

  • Dile a <Invocation Name> que <do something>
  • The common "Dile a" pattern works.
  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

5.

Review the "Invoking a Skill with a Specific Request (Intent)" section in Understand How Users Invoke Custom Skills and test as many of the phrases as apply to your skill.

Note that not all of the phrases apply to all skills. For example, the "Pregúntale a…si…" phrasing would probably not make sense for a skill asking for a horoscope, so the skill would still pass this test even without this phrase.

  • The skill successfully launches and completes the request.
  • The phrase is easy and natural to say.

Variety of sample utterances

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "جيبيلي <some value>", then the utterances include synonyms such as "أعطيني <some value>", "قوليلي <some value>", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Note that in Arabic, the connecting words ل or و will also be part of the request portion (sample utterance) and should be accounted for when designing your sample utterances.
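The uniqueness and slot-usage rules above can be checked mechanically before submission. The sketch below assumes the interaction-model JSON layout used in the Alexa developer console (intents carrying `name` and `samples`, with slots written as `{slotName}` placeholders); treat it as a starting point for your own review, not certification tooling.

```python
import re
from collections import Counter

def lint_utterances(language_model: dict) -> list[str]:
    """Report sample utterances duplicated across intents, and slots
    referenced more than once within a single utterance."""
    problems = []
    first_seen = {}  # normalized utterance -> intent that used it first
    for intent in language_model.get("intents", []):
        for sample in intent.get("samples", []):
            norm = sample.strip().lower()
            if norm in first_seen and first_seen[norm] != intent["name"]:
                problems.append(
                    f'"{sample}" appears in both {first_seen[norm]} '
                    f'and {intent["name"]}')
            first_seen.setdefault(norm, intent["name"])
            # Count each {slot} placeholder within this utterance.
            for slot, count in Counter(re.findall(r"\{(\w+)\}", sample)).items():
                if count > 1:
                    problems.append(
                        f'slot {{{slot}}} used {count} times in "{sample}"')
    return problems
```

An empty result does not prove the utterances are natural to say; it only rules out the two mechanical failures above.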

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "haal me <some value>", then the utterances include synonyms such as "geef me <some value>", "vertel me <some value>", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "get me <some value>", then the utterances include synonyms such as "give me <some value>", "tell me <some value>", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "trouve-moi <some value>", then the utterances include synonyms such as "donne-moi <some value>", "dis-moi <some value>", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "hol mir <some value>", then the utterances include synonyms such as "gib mir <some value>", "sag mir <some value>", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "<some value> ला दो", then the utterances include synonyms such as "<some value> मुझे दो", "<some value> मुझे बोलो", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "trovami <some value>", then the utterances include synonyms such as "dammi <some value>", "dimmi <some value>", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "<some value>を探して", then the utterances include synonyms such as "<some value>を調べて", "<some value>を教えて", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "encontre <some value>", then the utterances include synonyms such as "me dê <some value>", "diga ao <some value>" or "diga à <some value>", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent.

In this test, inspect the sample utterances for all intents, not just the "one shot" intents described in One-Shot Phrasing for Sample Utterances.

Test Expected Results

1.

Inspect the skill's intent schema and sample utterances:

  1. For each intent, identify several ways a user could phrase a request for that intent.
  2. Verify whether the sample utterances mapped to that intent cover those phrasings.
  3. Examine any slots that appear in the sample utterances.

The five most common synonyms for phrase patterns are present. For example, if the skill contains "consígueme <some value>", then the utterances include synonyms such as "dame <some value>", "dime <some value>", and so on.

Each sample utterance must be unique. There cannot be any duplicate sample utterances mapped to different intents.

Each slot is used only once within a sample utterance.

Intents and slot types

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "march fifth" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Note that this test assumes you have migrated to the updated slot types. If you are still using the older version (for instance, DATE instead of AMAZON.DATE), then you need to also perform the Sample Utterances (Slot Type Values) test.

Test Expected Results

1.

Inspect the skill's intent schema to identify all slot types.

Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed in the Slot Types table below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

Slot Type Use for slots that collect...

AMAZON.NUMBER

Positive integers and decimal numbers ("واحد" and "اثنين فاصلة خمسة").

AMAZON.DATE

Relative and absolute dates ("واحد أكتوبر" and "التاسع من فبراير").

AMAZON.TIME

The time of day ("التاسعة والنصف صباحا").

AMAZON.DURATION

A period of time ("خمس دقائق").

Custom Slot Types

A value from a list (horoscope signs, all NFL football teams, supported cities, recipe ingredients, and so on).

See Custom Slot Types (Values) for additional testing for your custom slot types.
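When a slot does use a built-in type, the value arrives already normalized. For AMAZON.DATE, spoken dates resolve to ISO-8601 strings (as in the "march fifth" → "2017-03-05" example above), though less specific speech can produce week, decade, or season forms. The handler sketch below is an assumption-laden illustration of consuming such a value in plain Python, not official ASK SDK code:

```python
from datetime import date
from typing import Optional

def parse_amazon_date(slot_value: str) -> Optional[date]:
    """Best-effort conversion of an AMAZON.DATE slot value to a date.
    Calendar dates such as "2017-03-05" parse cleanly; vaguer
    resolutions such as a decade ("201X") or season ("2017-SP")
    are returned as None so the skill can re-prompt."""
    try:
        return date.fromisoformat(slot_value)
    except ValueError:
        return None
```

A skill might re-prompt the user ("Which exact day did you mean?") whenever this returns None, rather than guessing a date on the user's behalf.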

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "march fifth" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Note that this test assumes you have migrated to the updated slot types. If you are still using the older version (for instance, DATE instead of AMAZON.DATE), then you need to also perform the Sample Utterances (Slot Type Values) test.

Test Expected Results

1.

Inspect the skill's intent schema to identify all slot types.

Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed in the Slot Types table below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

Slot Type Use for slots that collect...

AMAZON.NUMBER

Positive and negative integers, decimal numbers ("één", "min vijf" and "twee punt vijf")

AMAZON.DATE

Relative and absolute dates, holidays ("dit weekend", "zesentwintig augustus tweeduizend vijftien" and "koningsdag")

AMAZON.TIME

The time of day and relative times ("drie uur 's middags" and "vijf seconden geleden")

AMAZON.DURATION

A period of time ("vijf minuten", "drie en een half weken" or "twee hele maanden")

AMAZON.FOUR_DIGIT_NUMBER

Numeric sequences composed of four digits ("tien vijfenveertig")

AMAZON.Ordinal

Numbers defining the position of something in a series ("eerste" or "tweede")

AMAZON.PhoneNumber

Numeric sequences representing telephone numbers.

AMAZON.AlphaNumeric

Alphanumeric sequences such as postal codes and flight numbers ("zeven vijf nul één a. k." and "b. a. w. negen acht twee g.")

Custom Slot Types

A value from a list (horoscope signs, all eredivisie football teams, supported cities, recipe ingredients, and so on).

See Custom Slot Types (Values) for additional testing for your custom slot types.

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "march fifth" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Note that this test assumes you have migrated to the updated slot types. If you are still using the older version (for instance, DATE instead of AMAZON.DATE), then you need to also perform the Sample Utterances (Slot Type Values) test.

Test 1: Inspect the skill's intent schema to identify all slot types.

Expected results: Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed under Slot Types below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

  • AMAZON.NUMBER: Positive and negative integers, decimal numbers ("one", "minus five" and "two point five")
  • AMAZON.DATE: Relative and absolute dates, holidays ("this weekend", "august twenty sixth twenty fifteen" and "canada day")
  • AMAZON.TIME: The time of day and relative times ("three thirty p. m." and "five seconds ago")
  • AMAZON.DURATION: A period of time ("five minutes", "three and a half weeks" or "two entire months")
  • AMAZON.FOUR_DIGIT_NUMBER: Numeric sequences composed of four digits ("six oh four five")
  • AMAZON.Ordinal: Numbers defining the position of something in a series ("first" or "second")
  • AMAZON.PhoneNumber: Numeric sequences representing telephone numbers
  • AMAZON.AlphaNumeric: Alphanumeric sequences such as postal codes and flight numbers ("n. one nine t. l." and "q. f. five six eight")
  • Custom slot types: A value from a list (horoscope signs, all NFL football teams, supported cities, recipe ingredients, and so on)

See Custom Slot Types (Values) for additional testing for your custom slot types.

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "march fifth" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.
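As a concrete illustration of that conversion, the skill's backend never sees "march fifth"; it receives the normalised value and only needs to parse it. The request fragment below is an assumed, simplified shape, not a verbatim Alexa payload:

```python
from datetime import date

# Assumed fragment of the slot data inside an IntentRequest (hypothetical
# payload, not copied from a real request). The user said "march fifth";
# AMAZON.DATE delivers it already normalised to ISO form.
request_slots = {"when": {"name": "when", "value": "2017-03-05"}}

# Fully resolved dates arrive in ISO form. Note that AMAZON.DATE can also
# return less specific values (for example a week or a season), which this
# simple parse would not handle.
spoken_date = date.fromisoformat(request_slots["when"]["value"])
print(spoken_date.strftime("%d %B %Y"))  # 05 March 2017
```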

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Test 1: Inspect the skill's intent schema to identify all slot types.

Expected results: Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed under Slot Types below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

  • AMAZON.NUMBER: Positive and negative integers, decimal numbers ("un", "moins cinq" and "trois virgule cinq")
  • AMAZON.DATE: Relative and absolute dates, holidays ("ce week-end", "ce soir", "le quatre février mille neuf cent soixante-douze" and "pâques")
  • AMAZON.TIME: The time of day and relative times ("quatre heures et demi" and "il y a cinq minutes")
  • AMAZON.DURATION: A period of time ("cinq minutes")
  • AMAZON.FOUR_DIGIT_NUMBER: Numeric sequences composed of four digits ("six zéro quatre cinq")
  • AMAZON.Ordinal: Numbers defining the position of something in a series ("premier" or "deuxième")
  • AMAZON.PhoneNumber: Numeric sequences representing telephone numbers
  • AMAZON.AlphaNumeric: Alphanumeric sequences such as postal codes and flight numbers ("g. deux l. zéro c. sept" and "a. f. zéro trois quatre neuf")
  • Custom slot types: A value from a list (horoscope signs, all NFL football teams, supported cities, recipe ingredients, and so on)

See Custom Slot Types (Values) for additional testing for your custom slot types.

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "fünfter März" into the date format "2019-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Note that this test assumes you have migrated to the updated slot types. If you are still using the older version (for instance, DATE instead of AMAZON.DATE), then you need to also perform the Sample Utterances (Slot Type Values) test.

Test 1: Inspect the skill's intent schema to identify all slot types.

Expected results: Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed under Slot Types below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

  • AMAZON.NUMBER: Positive and negative integers, decimal numbers ("eins", "minus fünf" and "drei komma fünf")
  • AMAZON.DATE: Relative and absolute dates ("dieses Wochenende" and "achter März neunzehn hundert vier und neunzig")
  • AMAZON.TIME: The time of day ("dreizehn Uhr")
  • AMAZON.DURATION: A period of time ("fünf Minuten")
  • AMAZON.FOUR_DIGIT_NUMBER: Numeric sequences composed of four digits ("sechs null vier fünf")
  • AMAZON.Ordinal: Numbers defining the position of something in a series ("erste" or "zweite")
  • AMAZON.PhoneNumber: Numeric sequences representing telephone numbers
  • AMAZON.AlphaNumeric: Alphanumeric sequences such as postal codes and flight numbers ("zehn eins eins sieben" and "l. h. vier drei vier zwei")
  • Custom slot types: A value from a list (horoscope signs, supported cities, recipe ingredients, and so on)

See Custom Slot Types (Values) for additional testing for your custom slot types.

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "पांचवीं मार्च" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Note that this test assumes you have migrated to the updated slot types. If you are still using the older version (for instance, DATE instead of AMAZON.DATE), then you need to also perform the Sample Utterances (Slot Type Values) test.

Test 1: Inspect the skill's intent schema to identify all slot types.

Expected results: Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed under Slot Types below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

  • AMAZON.NUMBER: Positive and negative integers, decimal numbers ("एक", "minus पांच" and "दो दशमलव पांच")
  • AMAZON.DATE: Relative and absolute dates ("इस सप्ताहांत" and "एक जनवरी दो हज़ार तेरह")
  • AMAZON.TIME: The time of day ("दोपहर के साढ़े तीन")
  • AMAZON.DURATION: A period of time ("पाँच मिनट")
  • AMAZON.FOUR_DIGIT_NUMBER: Numeric sequences composed of four digits ("छह शून्य चार पांच")
  • AMAZON.Ordinal: Numbers defining the position of something in a series ("पहला" or "दूसरा")
  • AMAZON.PhoneNumber: Numeric sequences representing telephone numbers
  • AMAZON.AlphaNumeric: Alphanumeric sequences such as postal codes and flight numbers ("एक एक शून्य शून्य पांच छह" and "a. i. एक सौ तिहत्तर")
  • Custom slot types: A value from a list (horoscope signs, all NFL football teams, supported cities, recipe ingredients, and so on)

See Custom Slot Types (Values) for additional testing for your custom slot types.

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "il cinque marzo" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Test 1: Inspect the skill's intent schema to identify all slot types.

Expected results: Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed under Slot Types below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

  • AMAZON.NUMBER: Positive and negative integers, decimal numbers ("uno", "meno cinque" and "tre virgola cinque")
  • AMAZON.DATE: Relative and absolute dates, holidays ("questo fine settimana", "il ventisei agosto due mila e quindici" and "pasqua")
  • AMAZON.TIME: The time of day ("alle tre e mezza del pomeriggio")
  • AMAZON.DURATION: A period of time ("cinque minuti" or "cinque minuti e mezzo")
  • AMAZON.FOUR_DIGIT_NUMBER: Numeric sequences composed of four digits ("sei zero quattro cinque")
  • AMAZON.Ordinal: Numbers defining the position of something in a series ("primo" or "secondo")
  • AMAZON.PhoneNumber: Numeric sequences representing telephone numbers
  • AMAZON.AlphaNumeric: Alphanumeric sequences such as postal codes and flight numbers ("ottanta dieci zero" and "a. v. nove sette quattro nove")
  • Custom slot types: A value from a list (horoscope signs, all NFL football teams, supported cities, recipe ingredients, and so on)

See Custom Slot Types (Values) for additional testing for your custom slot types.

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "3月5日" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Note that this test assumes you have migrated to the updated slot types. If you are still using the older version (for instance, DATE instead of AMAZON.DATE), then you need to also perform the Sample Utterances (Slot Type Values) test.

Test 1: Inspect the skill's intent schema to identify all slot types.

Expected results: Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed under Slot Types below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

  • AMAZON.NUMBER: Positive and negative integers, decimal numbers ("一", "マイナス五" and "二点五")
  • AMAZON.DATE: Relative and absolute dates ("今週末", "二千十五年八月二十六日" and "海の日")
  • AMAZON.TIME: The time of day ("午後三時三十分" and "五秒前")
  • AMAZON.DURATION: A period of time ("五分", "一時間" or "二か月")
  • AMAZON.FOUR_DIGIT_NUMBER: Numeric sequences composed of four digits ("六〇四五")
  • AMAZON.Ordinal: Numbers defining the position of something in a series ("一つ目" or "二番目")
  • AMAZON.PhoneNumber: Numeric sequences representing telephone numbers
  • AMAZON.AlphaNumeric: Alphanumeric sequences such as postal codes and flight numbers ("一四一〇〇二一" and "j l 九〇五")
  • Custom slot types: A value from a list (horoscope signs, all NFL football teams, supported cities, recipe ingredients, and so on)

See Custom Slot Types (Values) for additional testing for your custom slot types.

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "cinco de março" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Note that this test assumes you have migrated to the updated slot types. If you are still using the older version (for instance, DATE instead of AMAZON.DATE), then you need to also perform the Sample Utterances (Slot Type Values) test.

Test 1: Inspect the skill's intent schema to identify all slot types.

Expected results: Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed under Slot Types below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

  • AMAZON.NUMBER: Positive integers and decimal numbers ("um" and "três vírgula cinco")
  • AMAZON.DATE: Relative and absolute dates ("este fim de semana" and "vinte e seis de agosto de dois mil e quinze")
  • AMAZON.TIME: The time of day ("três e meia da tarde")
  • AMAZON.DURATION: A period of time ("cinco minutos")
  • AMAZON.FOUR_DIGIT_NUMBER: Numeric sequences composed of four digits ("seis zero quatro cinco")
  • AMAZON.Ordinal: Numbers defining the position of something in a series ("primeira" or "segundo")
  • AMAZON.PhoneNumber: Numeric sequences representing telephone numbers
  • AMAZON.AlphaNumeric: Alphanumeric sequences such as postal codes and flight numbers ("noventa quinhentos e quarenta traço duzentos e dez" and "b. r. três oito um")
  • Custom slot types: A value from a list (horoscope signs, all NFL football teams, supported cities, recipe ingredients, and so on)

See Custom Slot Types (Values) for additional testing for your custom slot types.

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "march fifth" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types.

For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect.

Test 1: Inspect the skill's intent schema to identify all slot types.

Expected results: Verify that the types match the type of data to be collected.

  • The slots for each intent match the recommended slot types listed under Slot Types below.
  • Slots that collect a value from a list use a custom slot type.

Slot Types:

  • AMAZON.NUMBER: Positive and negative integers, decimal numbers ("uno", "menos cinco" and "tres coma cinco")
  • AMAZON.DATE: Relative and absolute dates ("este fin de semana" and "el treinta de agosto del dos mil quince")
  • AMAZON.TIME: The time of day ("tres y media de la tarde")
  • AMAZON.DURATION: A period of time ("cinco minutos")
  • AMAZON.FOUR_DIGIT_NUMBER: Numeric sequences composed of four digits ("seis cero cuatro cinco")
  • AMAZON.Ordinal: Numbers defining the position of something in a series ("primera" or "segundo")
  • AMAZON.PhoneNumber: Numeric sequences representing telephone numbers
  • AMAZON.AlphaNumeric: Alphanumeric sequences such as postal codes and flight numbers ("treinta y dos seis cero cero" and "l. h. cuatro dos tres")
  • Custom slot types: A value from a list (horoscope signs, all NFL football teams, supported cities, recipe ingredients, and so on)

See Custom Slot Types (Values) for additional testing for your custom slot types.

Custom slot type values

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the German tab must be in German.
  • Where grammatically appropriate, include particles such as و (and), ل (to), لل (to the) and عال (on the) in the skill's custom slot values. Particles cannot precede a slot in a carrier phrase, because they must be attached to the following word. Depending on the content of the slot, you may need to include more or fewer variations.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.
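The word-count guideline above can be checked mechanically. The ingredient-style value list below is invented for illustration:

```python
from collections import Counter

# Sketch: report how word counts are distributed across a custom slot type's
# value list, per the guideline above. These values are made-up examples for
# a hypothetical recipe-ingredient slot, not from any real skill.
values = ["salt", "olive oil", "brown sugar", "extra virgin olive oil",
          "flour", "baking powder", "pepper", "red wine vinegar", "eggs",
          "sea salt"]

counts = Counter(len(v.split()) for v in values)
total = len(values)
for n_words in sorted(counts):
    share = counts[n_words] / total
    print(f"{n_words}-word values: {counts[n_words]} ({share:.0%})")
# 1-word values: 4 (40%)
# 2-word values: 4 (40%)
# 3-word values: 1 (10%)
# 4-word values: 1 (10%)
```

If, say, four-word values were expected in only 10% of real inputs, a report like this makes it easy to see whether the list over- or under-represents them.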

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the Dutch tab must be in Dutch.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the German tab must be in German.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the German tab must be in German.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the German tab must be in German.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the Hindi tab must be in Hindi.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the Italian tab must be in Italian.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the Japanese tab must be in Japanese.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the German tab must be in German.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

Test 1: Inspect the skill's intent schema to identify all slots that use custom slot types. For each custom slot type, review the set of values you provided for the type.

Expected results:

  • If possible, the list of values includes all values you expect to be used. For example, a horoscope skill with a LIST_OF_SIGNS custom type would include all twelve Zodiac signs as values for the type.
  • If the list cannot cover every possible value, it covers as many representative values as possible.
  • If the list cannot cover every possible value, the values reflect the expected word counts. For instance, if values of one to four words are possible, use values of one to four words in your value list. But also be sure to distribute them proportionally. If a four-word value occurs in an estimated 10% of inputs, then include four-word values only in 10% of the values in your list.
  • All custom values are written in the selected language. For instance, all custom slot type values on the German tab must be in German.

For guidelines for defining custom slot type values, see Recommendations for Custom Slot Type Values.

Writing conventions for sample utterances

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test 1: Review the text of all sample utterances.

Expected results: All sample utterances adhere to the following writing conventions:

  • For English words used in utterances, capital letters and punctuation are not used. Periods are allowed, but only in initialisms and spelling (for example, "t. v."). Hyphens are allowed but should be very infrequent. Apostrophes are allowed in possessives. Arabic itself has no capital letters, acronyms, initialisms, or apostrophes.
  • For English words used in utterances, individual letters are followed by a period and a space before the next letter or word: "TV" is written as "t. v." and "OK" is written as "o. k.". Compounds are written similarly: "AccessHD" is written as "access h. d."
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "الابراج اليومية" cannot contain any sample utterances that are just "الابراج اليومية" or sample utterances containing launch phrases such as "افتحي الابراج اليومية." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • All sample utterances are written in the selected language. For instance, the sample utterances on the German tab must be in German.

For more information about syntax rules for sample utterances, see the Rules for Sample Utterances.
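The conventions above lend themselves to a simple lint pass. The rules encoded below cover only the cases named in this section; they are an approximation for illustration, not the full official rule set:

```python
import re

def convention_errors(utterance):
    """Return a list of writing-convention problems found in an utterance."""
    errors = []
    # Capital letters are not used in sample utterances.
    if re.search(r"[A-Z]", utterance):
        errors.append("contains capital letters")
    # Only letters, spaces, apostrophes, hyphens, and periods are expected;
    # anything else (commas, question marks, digits, ...) is flagged.
    if re.search(r"[^A-Za-z' .\-]", utterance):
        errors.append("contains disallowed punctuation or characters")
    # A period may only follow a single letter (initialism/spelling style,
    # as in "t. v.").
    for match in re.finditer(r"(\w+)\.", utterance):
        if len(match.group(1)) > 1:
            errors.append(f'period after "{match.group(1)}" is not initialism style')
    return errors

print(convention_errors("turn on the t. v."))  # []
print(convention_errors("Turn on the TV."))
# ['contains capital letters', 'period after "TV" is not initialism style']
```

A pass like this catches the mechanical violations; judgment calls, such as whether a hyphen is "very infrequent", still need manual review.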

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test 1: Review the text of all sample utterances.

Expected results: All sample utterances adhere to the following writing conventions:

  • Capital letters and punctuation are not used. Periods are allowed, but only in initialisms and spelling (for example, "t. v."). Hyphens are allowed but should be very infrequent. Apostrophes are allowed in possessives.
  • Individual letters are followed by a period and a space before the next letter or word: "TV" is written as "t. v." and "OK" is written as "o. k.". Compounds are written similarly: "AccessHD" is written as "access h. d."
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "Dagelijkse Horoscoop" cannot contain any sample utterances that are just "dagelijkse horoscoop" or sample utterances containing launch phrases such as "vertel me dagelijkse horoscoop." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • All sample utterances are written in the selected language. For instance, the sample utterances on the Dutch tab must be in Dutch.

For more information about syntax rules for sample utterances, see the Rules for Sample Utterances.

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test 1: Review the text of all sample utterances.

Expected results: All sample utterances adhere to the following writing conventions:

  • Capital letters and punctuation are not used. Periods are allowed, but only in initialisms and spelling (for example, "t. v."). Hyphens are allowed but should be very infrequent. Apostrophes are allowed in possessives.
  • Individual letters are followed by a period and a space before the next letter or word: "TV" is written as "t. v." and "OK" is written as "o. k.". Compounds are written similarly: "AccessHD" is written as "access h. d."
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "Daily Horoscopes" cannot contain any sample utterances that are just "daily horoscopes" or sample utterances containing launch phrases such as "tell daily horoscopes." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • All sample utterances are written in the selected language. For instance, the sample utterances on the German tab must be in German.

For more information about syntax rules for sample utterances, see the Rules for Sample Utterances.

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test Expected Results

1.

Review the text of all sample utterances.

All sample utterances adhere to the following
writing conventions:

  • Capital letters and punctuation are not used. (Periods are allowed, but only in initialisms and spelled-out letters, such as "t. v."; hyphens are allowed but should be very infrequent; apostrophes are allowed in possessives.)
  • Individual letters are followed by a period and a space before the next letter or word: "TV" is written as "t. v.", and "OK" is written as "o. k."
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "signe astrologique" cannot contain any sample utterances that are just "signe astrologique" or sample utterances containing launch phrases such as "demande à signe astrologique." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • All sample utterances are written in the selected language. For instance, the sample utterances on the German tab must be in German.

For more information about syntax rules for sample utterances, see Rules for Sample Utterances.

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test Expected Results

1.

Review the text of all sample utterances.

All sample utterances adhere to the following
writing conventions:

  • Capital letters and punctuation are not used. (Periods are allowed, but only in initialisms and spelled-out letters, such as "t. v."; hyphens are allowed but should be very infrequent; apostrophes are allowed in possessives.)
  • Individual letters are followed by a period and a space before the next letter or word: "TV" is written as "t. v.", and "OK" is written as "o. k." Compounds are written similarly: "AccessHD" is written as "access h. d."
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "Tageshoroskop" cannot contain any sample utterances that are just "tageshoroskop" or sample utterances containing launch phrases such as "sage tageshoroskop." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • All sample utterances are written in the selected language. For instance, the sample utterances on the German tab must be in German.

For more information about syntax rules for sample utterances, see the Rules for Sample Utterances.

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test Expected Results

1.

Review the text of all sample utterances.

All sample utterances adhere to the following
writing conventions:

  • Capital letters and punctuation are not used for English words in utterances. (Periods are allowed, but only in initialisms and spelled-out letters, such as "t. v."; hyphens are allowed but should be very infrequent; apostrophes are allowed in possessives.) Hindi itself has no capital letters, punctuation, acronyms, initialisms, or apostrophes.
  • For English words in utterances, individual letters are followed by a period and a space before the next letter or word: "TV" is written as "t. v.", and "OK" is written as "o. k." Compounds are written similarly: "AccessHD" is written as "access h. d." Periods are not used in Hindi.
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "दैनिक राशिफल" cannot contain any sample utterances that are just "दैनिक राशिफल" or sample utterances containing launch phrases such as "दैनिक राशिफल बताओ." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • Because English words are common in Hindi speech, English words may be used when writing utterances.

For more information about syntax rules for sample utterances, see the Rules for Sample Utterances.

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test Expected Results

1.

Review the text of all sample utterances.

All sample utterances adhere to the following
writing conventions:

  • Capital letters and punctuation are not used. (Periods are allowed, but only in initialisms and spelled-out letters, such as "a. d. n."; hyphens are allowed but should be very infrequent; apostrophes are allowed in possessives.)
  • Individual letters are followed by a period and a space before the next letter or word: "ADN" is written as "a. d. n.", and "OK" is written as "o. k." Compounds are written similarly: "AccessHD" is written as "access h. d."
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "Oroscopo del Giorno" cannot contain any sample utterances that are just "oroscopo del giorno" or sample utterances containing launch phrases such as "lancia oroscopo del giorno." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • All sample utterances are written in the selected language. For instance, the sample utterances on the German tab must be in German.

For more information about syntax rules for sample utterances, see the Rules for Sample Utterances.

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test Expected Results

1.

Review the text of all sample utterances.

All sample utterances adhere to the following
writing conventions:

  • Punctuation is not used. Hyphens are allowed but should be very infrequent; apostrophes are allowed.
  • Individual letters are followed by a space before the next letter or word: "TV" is written as "t. v.", and "OK" is written as "o. k." Compounds are written similarly: "AccessHD" is written as "access h. d."
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "星座占い" cannot contain any sample utterances that are just "星座占い" or sample utterances containing launch phrases such as "星座占いで…を教えて." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • All sample utterances are written in the selected language. For instance, the sample utterances on the Japanese tab must be in Japanese.

For more information about syntax rules for sample utterances, see the Rules for Sample Utterances.

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test Expected Results

1.

Review the text of all sample utterances.

All sample utterances adhere to the following
writing conventions:

  • Capital letters and punctuation are not used. (Periods are allowed, but only in initialisms and spelled-out letters, such as "t. v."; hyphens are allowed; apostrophes are allowed in possessives.)
  • Individual letters are followed by a period and a space before the next letter or word: "TV" is written as "t. v.", and "OK" is written as "o. k." Compounds are written similarly: "AccessoHD" is written as "accesso h. d."
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "Horóscopo Diário" cannot contain any sample utterances that are just "horóscopo diário" or sample utterances containing launch phrases such as "diga ao horóscopo diário." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • All sample utterances are written in the selected language. For instance, the sample utterances on the German tab must be in German.

For more information about syntax rules for sample utterances, see the Rules for Sample Utterances.

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

Test Expected Results

1.

Review the text of all sample utterances.

All sample utterances adhere to the following
writing conventions:

  • Capital letters and punctuation are not used. (Periods are allowed, but only in initialisms and spelled-out letters, such as "t. v."; hyphens are allowed but should be very infrequent.)
  • Individual letters are followed by a period and a space before the next letter or word: "TV" is written as "t. v.", and "OK" is written as "o. k." Compounds are written similarly: "AccessHD" is written as "access h. d."
  • The invocation name must not appear in isolation or within supported launch phrasing. For example, a skill with the invocation name "Horóscopo Diario" cannot contain any sample utterances that are just "horóscopo diario" or sample utterances containing launch phrases such as "di el horóscopo diario." For a complete list of launch phrases, see Understand How Users Invoke Custom Skills.
  • All sample utterances are written in the selected language. For instance, the sample utterances on the German tab must be in German.

For more information about syntax rules for sample utterances, see the Rules for Sample Utterances.
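The writing conventions above are mechanical enough to check automatically before submitting a skill. The following is a minimal sketch of such a check, assuming English-language utterances; the `check_utterance` helper and its rules are illustrative only and are not part of any Amazon tooling.

```python
import re

# Hypothetical helper: checks an English sample utterance against the
# writing conventions described above. Illustrative only.
def check_utterance(utterance, invocation_name):
    problems = []
    # Capital letters are not used.
    if any(ch.isupper() for ch in utterance):
        problems.append("contains capital letters")
    # Only periods (initialisms), hyphens, and apostrophes are allowed;
    # curly braces are permitted for slot references.
    if re.search(r"[^\w\s.\-'{}]", utterance):
        problems.append("contains disallowed punctuation")
    # The invocation name must not appear in isolation.
    if utterance.strip() == invocation_name.lower():
        problems.append("utterance is just the invocation name")
    return problems

print(check_utterance("tell me my horoscope", "daily horoscopes"))  # []
print(check_utterance("What's My Horoscope?", "daily horoscopes"))
print(check_utterance("daily horoscopes", "daily horoscopes"))
```

A real check would also need the locale-specific rules (for example, the Hindi and Japanese conventions above), so treat this as a starting point rather than a complete validator.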

Error handling

Unlike a visual interface, where the user can only interact with the objects presented on the screen, there is no way to limit what users can say in a speech interaction. Your skill needs to handle a variety of errors in an intelligent and user-friendly way. This test verifies your skill's ability to handle common errors.

For more information on validating user input, please see Handling Possible Input Errors.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • افتحي <Invocation Name>.

When prompted to respond, say nothing.

  • The skill responds with a prompt that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt you hear is the re-prompt included in the previous response.

2.

Invoke the skill using the following phrase:

  • افتحي <Invocation Name>.

When prompted to respond, say something that matches one of your skill's intents, but with invalid slot data.

For instance, if the intent expects an AMAZON.DATE slot, say something that cannot be converted to a date.

Repeat this test for each slot.

  • The skill responds with a prompt or help text that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt is not the re-prompt included in the previous response. This prompt must come from error handling within the code that handles the intent.
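The re-prompt mechanism in the first test works because the response that asked the original question carried both a re-prompt and an open session. As a sketch, the JSON body a custom skill returns looks like the following (field names follow the Alexa custom-skill response format; the horoscope wording is illustrative):

```python
# A skill response that asks a question, supplies the re-prompt Alexa
# plays if the user stays silent, and keeps the session open.
response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {
            "type": "PlainText",
            "text": "Which sign do you want the horoscope for?"
        },
        # Played only if the user does not answer within the timeout.
        "reprompt": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "You can say a zodiac sign, such as Leo. Which sign?"
            }
        },
        # False keeps the session open and waits for the user's answer.
        "shouldEndSession": False
    }
}
print(response["response"]["shouldEndSession"])
```

If `shouldEndSession` were true, Alexa would close the session instead of listening, and the re-prompt would never play.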

Unlike a visual interface, where the user can only interact with the objects presented on the screen, there is no way to limit what users can say in a speech interaction. Your skill needs to handle a variety of errors in an intelligent and user-friendly way. This test verifies your skill's ability to handle common errors.

For more information on validating user input, please see Handling Possible Input Errors.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • Open <Invocation Name>.

When prompted to respond, say nothing.

  • The skill responds with a prompt that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt you hear is the re-prompt included in the previous response.

2.

Invoke the skill using the following phrase:

  • Open <Invocation Name>.

When prompted to respond, say something that matches one of your skill's intents, but with invalid slot data.

For instance, if the intent expects an AMAZON.DATE slot, say something that cannot be converted to a date.

Repeat this test for each slot.

  • The skill responds with a prompt or help text that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt is not the re-prompt included in the previous response. This prompt must come from error handling within the code that handles the intent.
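The invalid-slot case in the second test means the intent handler itself must validate what Alexa passed along. A minimal sketch for an AMAZON.DATE slot follows; the `handle_date_intent` function and its prompt text are hypothetical, and a complete handler would also accept the other formats AMAZON.DATE can deliver (weeks, months, seasons, and so on):

```python
from datetime import date

# Hypothetical intent handler fragment: validate the value Alexa passed
# for an AMAZON.DATE slot before using it. An unmatched utterance can
# leave the slot empty or unparseable, so the handler supplies its own
# clarifying prompt rather than relying on the previous re-prompt.
def handle_date_intent(slot_value):
    try:
        when = date.fromisoformat(slot_value)  # e.g. "2015-11-24"
    except (TypeError, ValueError):
        # Clarify what to say and keep the session open for the answer.
        return {"text": "I didn't catch a date. For which date do you "
                        "want the forecast?",
                "shouldEndSession": False}
    # Valid slot: fulfill the request and close the session.
    return {"text": "Here is the forecast for " + when.isoformat() + ".",
            "shouldEndSession": True}
```

Repeating this pattern for every slot is what the "Repeat this test for each slot" step exercises.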

Unlike a visual interface, where the user can only interact with the objects presented on the screen, there is no way to limit what users can say in a speech interaction. Your skill needs to handle a variety of errors in an intelligent and user-friendly way. This test verifies your skill's ability to handle common errors.

For more information on validating user input, please see Handling Possible Input Errors.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • Ouvre <Invocation Name>.

When prompted to respond, say nothing.

  • The skill responds with a prompt that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt you hear is the re-prompt included in the previous response.

2.

Invoke the skill using the following phrase:

  • Ouvre <Invocation Name>.

When prompted to respond, say something that matches one of your skill's intents, but with invalid slot data.

For instance, if the intent expects an AMAZON.DATE slot, say something that cannot be converted to a date.

Repeat this test for each slot.

  • The skill responds with a prompt or help text that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt is not the re-prompt included in the previous response. This prompt must come from error handling within the code that handles the intent.

Unlike a visual interface, where the user can only interact with the objects presented on the screen, there is no way to limit what users can say in a speech interaction. Your skill needs to handle a variety of errors in an intelligent and user-friendly way. This test verifies your skill's ability to handle common errors.

For more information on validating user input, please see Handling Possible Input Errors.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • Öffne <Invocation Name>.

When prompted to respond, say nothing.

  • The skill responds with a prompt that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt you hear is the re-prompt included in the previous response.

2.

Invoke the skill using the following phrase:

  • Öffne <Invocation Name>.

When prompted to respond, say something that matches one of your skill's intents, but with invalid slot data.

For instance, if the intent expects an AMAZON.DATE slot, say something that cannot be converted to a date.

Repeat this test for each slot.

  • The skill responds with a prompt or help text that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt is not the re-prompt included in the previous response. This prompt must come from error handling within the code that handles the intent.

Unlike a visual interface, where the user can only interact with the objects presented on the screen, there is no way to limit what users can say in a speech interaction. Your skill needs to handle a variety of errors in an intelligent and user-friendly way. This test verifies your skill's ability to handle common errors.

For more information on validating user input, please see Handling Possible Input Errors.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • <Invocation Name> खोलो.

When prompted to respond, say nothing.

  • The skill responds with a prompt that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt you hear is the re-prompt included in the previous response.

2.

Invoke the skill using the following phrase:

  • <Invocation Name> खोलो.

When prompted to respond, say something that matches one of your skill's intents, but with invalid slot data.

For instance, if the intent expects an AMAZON.DATE slot, say something that cannot be converted to a date.

Repeat this test for each slot.

  • The skill responds with a prompt or help text that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt is not the re-prompt included in the previous response. This prompt must come from error handling within the code that handles the intent.

Unlike a visual interface, where the user can only interact with the objects presented on the screen, there is no way to limit what users can say in a speech interaction. Your skill needs to handle a variety of errors in an intelligent and user-friendly way. This test verifies your skill's ability to handle common errors.

For more information on validating user input, please see Handling Possible Input Errors.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • Apri <Invocation Name>.

When prompted to respond, say nothing.

  • The skill responds with a prompt that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt you hear is the re-prompt included in the previous response.

2.

Invoke the skill using the following phrase:

  • Apri <Invocation Name>.

When prompted to respond, say something that matches one of your skill's intents, but with invalid slot data.

For instance, if the intent expects an AMAZON.DATE slot, say something that cannot be converted to a date.

Repeat this test for each slot.

  • The skill responds with a prompt or help text that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt is not the re-prompt included in the previous response. This prompt must come from error handling within the code that handles the intent.

Unlike a visual interface, where the user can only interact with the objects presented on the screen, there is no way to limit what users can say in a speech interaction. Your skill needs to handle a variety of errors in an intelligent and user-friendly way. This test verifies your skill's ability to handle common errors.

For more information on validating user input, please see Handling Possible Input Errors.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • <Invocation Name>を開いて

When prompted to respond, say nothing.

  • The skill responds with a prompt that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt you hear is the re-prompt included in the previous response.

2.

Invoke the skill using the following phrase:

  • <Invocation Name>を開いて

When prompted to respond, say something that matches one of your skill's intents, but with invalid slot data.

For instance, if the intent expects an AMAZON.DATE slot, say something that cannot be converted to a date.

Repeat this test for each slot.

  • The skill responds with a prompt or help text that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt is not the re-prompt included in the previous response. This prompt must come from error handling within the code that handles the intent.

Unlike a visual interface, where the user can only interact with the objects presented on the screen, there is no way to limit what users can say in a speech interaction. Your skill needs to handle a variety of errors in an intelligent and user-friendly way. This test verifies your skill's ability to handle common errors.

For more information on validating user input, please see Handling Possible Input Errors.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • Abre <Invocation Name>.

When prompted to respond, say nothing.

  • The skill responds with a prompt that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt you hear is the re-prompt included in the previous response.

2.

Invoke the skill using the following phrase:

  • Abre <Invocation Name>.

When prompted to respond, say something that matches one of your skill's intents, but with invalid slot data.

For instance, if the intent expects an AMAZON.DATE slot, say something that cannot be converted to a date.

Repeat this test for each slot.

  • The skill responds with a prompt or help text that clarifies the information you need to provide.
  • The prompt clearly indicates what you need to say.
  • The prompt ends with a question and keeps the session open for your response.

Note that in this scenario, the prompt is not the re-prompt included in the previous response. This prompt must come from error handling within the code that handles the intent.

Providing help

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • افتحي <Invocation Name>.

When prompted to respond, say "مساعدة".

For a simple skill that gives a complete response even with no specific intent (such as the Space Geek sample), invoke the help intent directly:

  • ابحثي <Invocation Name> عن مساعدة.

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

For more about designing help for your skill, see the Alexa Design Guide.
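A help response meeting the expectations above can be sketched as follows. The `handle_help_intent` function and the "Daily Horoscopes" wording are illustrative, not part of any SDK; the point is that the text explains what the skill does, mentions how to exit, ends with a question, and leaves the session open:

```python
# Hypothetical AMAZON.HelpIntent handler for a horoscope skill.
def handle_help_intent():
    help_text = ("Daily Horoscopes reads today's horoscope for any zodiac "
                 "sign. You can say a sign, such as Leo, or say stop to "
                 "exit. Which sign would you like?")
    return {
        "outputSpeech": {"type": "PlainText", "text": help_text},
        # Re-prompt played if the user does not answer the question.
        "reprompt": {"outputSpeech": {"type": "PlainText",
                                      "text": "Which sign would you like?"}},
        # Keep the session open so the user can answer.
        "shouldEndSession": False,
    }
```

Compare this with a bare launch prompt such as "Welcome to Daily Horoscopes": the help response adds what the skill does and how to exit, which is what the certification test checks for.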

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • Open <Invocation Name>.

When prompted to respond, say "help".

For a simple skill that gives a complete response even with no specific intent, invoke the help intent directly:

  • Vraag <Invocation Name> om hulp

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

For more about designing help for your skill, see the Alexa Design Guide.

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • Open <Invocation Name>.

When prompted to respond, say "help".

For a simple skill that gives a complete response even with no specific intent (such as the Space Geek sample), invoke the help intent directly:

  • Ask <Invocation Name> for help.

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

For more about designing help for your skill, see the Alexa Design Guide.

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Test Expected Results

1.

Invoke the skill without specifying an intent, for example:

  • Ouvre <Invocation Name>.

When prompted to respond, say "aide-moi".

For a simple skill that gives a complete response even with no specific intent (such as the Space Geek sample), invoke the help intent directly:

  • Demande à <Invocation Name> de l'aide.

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

For more about designing help for your skill, see the Alexa Design Guide.

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Tests and expected results

1.

Invoke the skill without specifying an intent, for example:

  • Öffne <Invocation Name>.

When prompted to respond, say "Hilfe".

For a simple skill that gives a complete response even with no specific intent (such as the Space Geek sample), invoke the help intent directly:

  • Frage <Invocation Name> nach Hilfe.

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

See the Alexa Design Guide for guidelines and examples for contextual help.

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Tests and expected results

1.

Invoke the skill without specifying an intent, for example:

  • <Invocation Name> खोलो.

When prompted to respond, say "मदद करो".

For a simple skill that gives a complete response even with no specific intent (such as the Space Geek sample), invoke the help intent directly:

  • <Invocation Name> खोलो और मदद करो.

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

For more about designing help for your skill, see the Alexa Design Guide.

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Tests and expected results

1.

Invoke the skill without specifying an intent, for example:

  • Apri <Invocation Name>.

When prompted to respond, say "aiuto".

For a simple skill that gives a complete response even with no specific intent (such as the Secchione Spaziale sample), invoke the help intent directly:

  • Chiedi a <Invocation Name> di aiutarmi

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

For more about designing help for your skill, see the Alexa Design Guide.

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Tests and expected results

1.

Invoke the skill without specifying an intent, for example:

  • <Invocation Name>を開いて

When prompted to respond, say "ヘルプ."

For a simple skill that gives a complete response even with no specific intent (such as the 宇宙の豆知識 sample), invoke the help intent directly:

  • <Invocation Name>でヘルプ

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

For more about designing help for your skill, see the Alexa Design Guide.

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Tests and expected results

1.

Invoke the skill without specifying an intent, for example:

  • Abre <Invocation Name>.

When prompted to respond, say "ajuda".

For a simple skill that gives a complete response even with no specific intent (such as the Meu Táxi sample), invoke the help intent directly:

  • Peça ao <Invocation Name> por ajuda.

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

For more about designing help for your skill, see the Alexa Design Guide.

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents.

This test verifies that this intent exists and provides useful information.

Tests and expected results

1.

Invoke the skill without specifying an intent, for example:

  • Abre <Invocation Name>.

When prompted to respond, say "ayuda".

For a simple skill that gives a complete response even with no specific intent (such as the Mi Taxi sample), invoke the help intent directly:

  • Pídele a <Invocation Name> ayuda.

The help response:

  • Provides instructions to help the user navigate the skill's core functionality.
  • Is more informative than the prompt users hear when launching the skill with no intent. For example, the help prompt could explain more about what the skill does or inform users how to exit the skill.
  • Educates users on what the skill can do, as opposed to what they need to say in order for the skill to function.
  • Ends with a question prompting the user to complete their request.
  • Leaves the session open to get a response from the user.

See the Alexa Design Guide for guidelines and examples for contextual help.

Stopping and canceling

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "إلغاء", "الغي", "انهي", and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. Handle these as follows:

  • AMAZON.CancelIntent: In most cases, this should just exit the skill. However, you can map it to alternate functionality if it makes sense for your skill. See Implementing the Built-in Intents.
  • AMAZON.StopIntent: Your skill must implement this intent and shouldEndSession must be true or null in the response.
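The handling described above can be sketched as a simple intent router over the raw request/response JSON shapes. This is a hedged sketch, not the required implementation: the function names and the farewell text are illustrative, and a real skill would typically use an ASK SDK handler class instead.

```python
def speech_response(text, should_end_session):
    """Build a minimal raw JSON response body."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": should_end_session,
        },
    }

def handle_intent(request_envelope):
    """Route the built-in stop and cancel intents."""
    intent_name = request_envelope["request"]["intent"]["name"]
    if intent_name == "AMAZON.StopIntent":
        # Optional farewell text-to-speech; the session must end,
        # so shouldEndSession is true.
        return speech_response("Goodbye.", should_end_session=True)
    if intent_name == "AMAZON.CancelIntent":
        # Most skills simply exit here too, unless "cancel" maps to
        # skill-specific behavior such as canceling an order.
        return speech_response("Okay, canceled.", should_end_session=True)
    return None  # other intents are handled elsewhere
```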
Tests and expected results

1.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "بطلي."

The skill can respond with text-to-speech and then must exit.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

2.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "اليكسا بطلي" to interrupt the response.

After the wake word interrupts Alexa, the skill can respond with text-to-speech and then must exit.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

3.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "الغي."

One of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "cancel." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

4.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "اليكسا الغي" to interrupt the response.

After the wake word interrupts Alexa, one of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "cancel." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

5.

Invoke any intent that starts the skill session. While the session is open, say "انهي." This ends the session and sends your skill a SessionEndedRequest.

The skill closes without returning an error response.
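For the SessionEndedRequest case above, note that any speech in the response is not played back to the user; the handler should only clean up and exit without raising an error. A minimal sketch (the function name is illustrative) might look like:

```python
import logging

def handle_session_ended(request_envelope):
    """Clean up when the session ends; no speech is played for this request."""
    reason = request_envelope["request"].get("reason")
    # Typical reasons include USER_INITIATED, ERROR, and
    # EXCEEDED_MAX_REPROMPTS.
    logging.info("Session ended: %s", reason)
    # Return an empty response rather than raising, so the skill
    # closes without an error response.
    return {"version": "1.0", "response": {}}
```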

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "stop," "annuleer," "laat maar," and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. Handle these as follows:

  • AMAZON.CancelIntent: In most cases, this should just exit the skill. However, you can map it to alternate functionality if it makes sense for your skill. See Implementing the Built-in Intents.
  • AMAZON.StopIntent: Your skill must implement this intent and shouldEndSession must be true or null in the response.
Tests and expected results

1.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "stop."

The skill can respond with text-to-speech and then must exit.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

2.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, stop" to interrupt the response.

After the wake word interrupts Alexa, the skill can respond with text-to-speech and then must exit.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

3.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "annuleer."

One of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "annuleer." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

4.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, annuleer" to interrupt the response.

After the wake word interrupts Alexa, one of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "annuleer." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

5.

Invoke any intent that starts the skill session. While the session is open, say "verlaten." This ends the session and sends your skill a SessionEndedRequest.

The skill closes without returning an error response.

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "stop," "cancel," "never mind," and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. Handle these as follows:

  • AMAZON.CancelIntent: In most cases, this should just exit the skill. However, you can map it to alternate functionality if it makes sense for your skill. See Implementing the Built-in Intents.
  • AMAZON.StopIntent: Your skill must implement this intent and shouldEndSession must be true or null in the response.
Tests and expected results

1.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "stop."

The skill can respond with text-to-speech and then must exit.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

2.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, stop" to interrupt the response.

After the wake word interrupts Alexa, the skill can respond with text-to-speech and then must exit.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

3.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "cancel."

One of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "cancel." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

4.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, cancel" to interrupt the response.

After the wake word interrupts Alexa, one of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "cancel." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

5.

Invoke any intent that starts the skill session. While the session is open, say "Exit." This ends the session and sends your skill a SessionEndedRequest.

The skill closes without returning an error response.

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "stop," "cancel," "never mind," and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. Handle these as follows:

  • AMAZON.CancelIntent: In most cases, this should just exit the skill. However, you can map it to alternate functionality if it makes sense for your skill. See Implementing the Built-in Intents.
  • AMAZON.StopIntent: Your skill must implement this intent and shouldEndSession must be true or null in the response.
Tests and expected results

1.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "arrête."

The skill can respond with text-to-speech and then must exit.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

2.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, arrête" to interrupt the response.

After the wake word interrupts Alexa, the skill can respond with text-to-speech and then must exit.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

3.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "annule."

One of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "annule." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

4.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, annule" to interrupt the response.

After the wake word interrupts Alexa, one of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "annule." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

5.

Invoke any intent that starts the skill session. While the session is open, say "Quitte." This ends the session and sends your skill a SessionEndedRequest.

The skill closes without returning an error response.

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "stopp," "abbrechen," "vergiss es," and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. Handle these as follows:

  • AMAZON.CancelIntent: In most cases, this should just exit the skill. However, you can map it to alternate functionality if it makes sense for your skill. See Implementing the Built-in Intents.
  • AMAZON.StopIntent: Your skill must implement this intent and shouldEndSession must be true or null in the response.
Tests and expected results

1.

Start the skill and invoke an intent that prompts the user for a response. After hearing the prompt, say "Stopp."

The skill can respond with text-to-speech and then must exit.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

2.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, Stopp" to interrupt the response.

After the wake word interrupts Alexa, the skill can respond with text-to-speech and then must exit.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

3.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "abbrechen."

One of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "abbrechen." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

4.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, abbrechen" to interrupt the response.

After the wake word interrupts Alexa, one of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "abbrechen." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

5.

Invoke any intent that starts the skill session. While the session is open, say "schließen." This ends the session and sends your skill a SessionEndedRequest.

The skill closes without returning an error response.

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "बंद कर," "चुप," "छोड़ो," "रद्द करें," "cancel कर दो," and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. Handle these as follows:

  • AMAZON.CancelIntent: In most cases, this should just exit the skill. However, you can map it to alternate functionality if it makes sense for your skill. See Implementing the Built-in Intents.
  • AMAZON.StopIntent: Your skill must implement this intent and shouldEndSession must be true or null in the response.
Tests and expected results

1.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "बंद करो."

The skill can respond with text-to-speech and then must exit.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

2.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, बंद कर" to interrupt the response.

After the wake word interrupts Alexa, the skill can respond with text-to-speech and then must exit.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

3.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "रद्द करें."

One of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "रद्द करें." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

4.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, रद्द करें" to interrupt the response.

After the wake word interrupts Alexa, one of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "रद्द करें." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

5.

Invoke any intent that starts the skill session. While the session is open, say "बाहर निकलो." This ends the session and sends your skill a SessionEndedRequest.

The skill closes without returning an error response.

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "smettila," "annulla," "non importa," and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. Handle these as follows:

  • AMAZON.CancelIntent: In most cases, this should just exit the skill. However, you can map it to alternate functionality if it makes sense for your skill. See Implementing the Built-in Intents.
  • AMAZON.StopIntent: Your skill must implement this intent and shouldEndSession must be true or null in the response.
Tests and expected results

1.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "smettila."

The skill can respond with text-to-speech and then must exit.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

2.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, smettila" to interrupt the response.

After the wake word interrupts Alexa, the skill can respond with text-to-speech and then must exit.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

3.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "annulla."

One of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "annulla." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

4.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, annulla" to interrupt the response.

After the wake word interrupts Alexa, one of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "annulla." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

5.

Invoke any intent that starts the skill session. While the session is open, say "esci." This ends the session and sends your skill a SessionEndedRequest.

The skill closes without returning an error response.

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "ストップ", "キャンセル", and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. Handle these as follows:

  • AMAZON.CancelIntent: In most cases, this should just exit the skill. However, you can map it to alternate functionality if it makes sense for your skill. See Implementing the Built-in Intents.
  • AMAZON.StopIntent: Your skill must implement this intent and shouldEndSession must be true or null in the response.
Tests and expected results

1.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "ストップ."

The skill can respond with text-to-speech and then must exit.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

2.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "アレクサ、ストップ" to interrupt the response.

After the wake word interrupts Alexa, the skill can respond with text-to-speech and then must exit.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

3.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "キャンセル."

One of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "キャンセル." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

4.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "アレクサ、キャンセル" to interrupt the response.

After the wake word interrupts Alexa, one of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "キャンセル." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

5.

Invoke any intent that starts the skill session. While the session is open, say "終了." This ends the session and sends your skill a SessionEndedRequest.

The skill closes without returning an error response.

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "pare," "cancela," "anula," and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. Handle these as follows:

  • AMAZON.CancelIntent: In most cases, this should just exit the skill. However, you can map it to alternate functionality if it makes sense for your skill. See Implementing the Built-in Intents.
  • AMAZON.StopIntent: Your skill must implement this intent and shouldEndSession must be true or null in the response.
Tests and expected results

1.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "parar."

The skill can respond with text-to-speech and then must exit.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

2.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, parar" to interrupt the response.

After the wake word interrupts Alexa, the skill can respond with text-to-speech and then must exit.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

3.

Start the skill and invoke an intent that prompts the user for a response.

After hearing the prompt, say "cancela."

One of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "cancela." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If the skill responds to all requests with a complete response and never provides a prompt, skip this test.

4.

Invoke an intent that responds with lengthy text-to-speech. As soon as Alexa begins speaking the response, say "Alexa, cancela" to interrupt the response.

After the wake word interrupts Alexa, one of the following occurs:

  • The skill exits.
  • The skill returns a response that is appropriate to the skill's functionality. The response also makes sense in the context of the request to "cancela." For example, a skill that places orders could send back a reply confirming that the user's order has been canceled.

If all of the skill's responses are too short to reasonably interrupt, skip this test.

5.

Invoke any intent that starts the skill session. While the session is open, say "Sair." This ends the session and sends your skill a SessionEndedRequest.

The skill closes without returning an error response.
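A SessionEndedRequest, as delivered in test 5, is one request a skill cannot answer with speech; it exists only so the skill can clean up. The dispatcher below is a hedged sketch of that behavior (raw JSON, no SDK); the reason codes noted in the comment are the documented values, but the surrounding dispatch logic is invented for illustration.

```python
# Sketch: a request dispatcher that treats SessionEndedRequest as
# cleanup-only. A skill can't return speech for this request, so the
# handler just logs the reason and returns an empty response body.

def dispatch(request_envelope):
    request = request_envelope["request"]
    if request["type"] == "SessionEndedRequest":
        # The reason may be USER_INITIATED, ERROR, or
        # EXCEEDED_MAX_REPROMPTS; log it, release any per-session
        # resources, and return nothing speakable.
        print("Session ended:", request.get("reason"))
        return {"version": "1.0", "response": {}}
    # ...LaunchRequest, IntentRequest, and other types handled here.
    return {"version": "1.0", "response": {"shouldEndSession": False}}
```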

Name-free interaction

If you have chosen to implement the CanFulfillIntentRequest interface, you must verify that a response to the CanFulfillIntentRequest interface call isn't blank, and that it's in the expected structure. For more details, see Understand Name-free Interaction for Custom Skills.
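To make "not blank and in the expected structure" concrete, the sketch below builds a canFulfillIntent response body. The slot name passed in the usage test is hypothetical; `canFulfill` and `canUnderstand` take the string values "YES", "NO", or "MAYBE".

```python
# Sketch of a non-blank CanFulfillIntentRequest response in the
# expected shape: a canFulfillIntent object with an overall verdict
# plus a per-slot canUnderstand/canFulfill verdict.

def can_fulfill_response(slot_verdicts):
    """slot_verdicts maps slot name -> (canUnderstand, canFulfill)."""
    return {
        "version": "1.0",
        "response": {
            "canFulfillIntent": {
                "canFulfill": "YES",
                "slots": {
                    name: {"canUnderstand": u, "canFulfill": f}
                    for name, (u, f) in slot_verdicts.items()
                },
            }
        },
    }
```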

Appendix: Sample utterances and slot type values (deprecated)

If all your slots use the newer slot types with the AMAZON namespace, such as AMAZON.DATE, you can skip this test.

In previous versions of the Alexa Skills Kit, it was necessary to include slot values showing different ways of phrasing the slot data in your sample utterances. For example, sample utterances for a DATE slot look like this:

OneshotTideIntent when is high tide on {january first|Date}
OneshotTideIntent when is high tide {tomorrow|Date}
OneshotTideIntent when is high tide {saturday|Date}
...(many more utterances showing different ways to say the date)

If your skill still uses this syntax for the built-in slot types, review the sample slot values in your sample utterances.
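For comparison, with the newer built-in slot types the sample utterances reference the slot by name only, and Alexa supplies the value recognition. The intent and slot names below mirror the deprecated example above; the slot would be declared with type AMAZON.DATE in the intent schema.

```
OneshotTideIntent when is high tide on {Date}
OneshotTideIntent when is high tide {Date}
```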

Test 1: Inspect the intent schema to identify all slot types, and then inspect the slot type values found in the sample utterances.

Expected result: The slot type values provide sufficient variety for good recognition.

  • NUMBER: provide multiple ways of stating integer numbers, and include samples showing the full range of numbers you expect. For example, include "ten," "one hundred," and several samples in between. If you expect the numbers to fall only within a small range, include every number within that range as a sample value in an utterance.
  • DATE: provide both relative and absolute date samples (for example, "today", "tomorrow", "september first", "june twenty sixth twenty fifteen"). If you expect a certain set of phrases to be more likely than others, include samples of those phrases.
  • TIME: provide samples of stating the time ("three thirty p. m.").
  • DURATION: provide samples of indicating different time periods ("five minutes", "ten days", "four years").


Last updated: May 01, 2024