View Operational Metrics for Smart Home and Video Skills


Operational metrics provide valuable insights into the performance of smart home and video skills. By monitoring key data, such as latency and success rate, you can identify and address issues that might impact the user experience. Regular analysis of these metrics allows you to tune and debug your skill so that it operates reliably and accurately.

Research shows that customers lapse due to poor operational performance. High latency, low success rate, and skill outages cause an uptick in bad customer ratings. Use these metrics to monitor and improve your skill's performance and build a strong customer base.

To troubleshoot operational issues, use the Capability Directives page to download message ID logs. For more details, see Download Message IDs for Troubleshooting.

View operational metrics

You can view smart home and video operational metrics on the Analytics page in the Alexa developer console. Operational metrics, such as latency and success rate, are available for device discovery, capability directives, and state reporting. Each metric includes visualizations, such as charts and grids. You can export graphics in PNG or JPG format, or download the raw data in CSV format.

In the Operational Metrics section, you can view the following pages:

  • Discovery – Alexa sends a discovery request to your skill to discover your customer's smart devices. The Discovery page shows the latency and success rate graphs for all discovery requests to your skill.

    Successful discovery is key to customer retention. If a customer can't discover their device, for example, because discovery takes too long or fails, they give up and don't connect the device to Alexa. Use these metrics to analyze discovery issues.

    For more details about discovery, see Alexa Discovery interface.

  • Capability Directives – When a user asks Alexa to control their smart device, Alexa sends a directive request to your skill to trigger a capability, for example, turning on a light. For more details about capability directives, see the documentation for each interface that your skill supports. On this page, you can download logs for troubleshooting.

    The Capability Directives page shows the following tabs:
    • Overview – Shows a summary view of latency, success rate, and user-perceived errors for all capabilities.
    • Latency – Shows latency data for the aggregate of all capability interfaces and for each capability interface supported by your skill. Each per-capability latency graph shows the aggregate across all directives and the breakdown per directive for all directives supported by the capability.
    • Success Rate – Shows success rate data for the aggregate of all capability interfaces and for each capability interface supported by your skill. Each per-capability success-rate graph shows the aggregate across all directives and the breakdown per directive for all directives supported by the capability.
    • Error – Shows user-perceived errors by error type for the aggregate of all capability interfaces for the last 90 days. These metrics include errors that impact customers.
  • Reporting State – Alexa sends a ReportState directive to your skill to request the state of an endpoint. The Reporting State page shows the latency and success rate graphs for all the report state requests to your skill. For more details about report state, see State Reporting for a Smart Home Skill.

  • Change Report – When the state of a device changes for any reason, the device reports that change to Alexa with a ChangeReport event. Then, Alexa provides the status change to the user in the Alexa app. The Change Report page shows accuracy metrics that reflect the percentage of time that the ChangeReport event matched a previous StateReport response. For more details about change reporting, see Report state in a ChangeReport.
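
To make the change-reporting flow concrete, the following abridged sketch (shown here as a Python dictionary standing in for the JSON payload) gives the general shape of a ChangeReport event for a light that a user turned on at the wall. The identifiers, token, and timestamp are placeholders; refer to the Alexa.ChangeReport documentation for the authoritative event structure.

```python
# Abridged sketch of a ChangeReport event (payload version 3) for a light that
# was turned on physically. Identifiers and the timestamp are placeholders.
change_report = {
    "event": {
        "header": {
            "namespace": "Alexa",
            "name": "ChangeReport",
            "messageId": "<message-id>",
            "payloadVersion": "3",
        },
        "endpoint": {
            "scope": {"type": "BearerToken", "token": "<access-token>"},
            "endpointId": "<endpoint-id>",
        },
        "payload": {
            "change": {
                "cause": {"type": "PHYSICAL_INTERACTION"},
                "properties": [
                    {
                        "namespace": "Alexa.PowerController",
                        "name": "powerState",
                        "value": "ON",
                        "timeOfSample": "2025-01-01T12:00:00Z",
                        "uncertaintyInMilliseconds": 500,
                    }
                ],
            }
        },
    },
    "context": {
        "properties": [
            # Other, unchanged properties of the endpoint go here.
        ]
    },
}
```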

The following image shows an example Overview tab on the Capability Directives page on the Alexa developer console.

Sample overview page that shows latency, success rate, and UPE summary charts.
Operational metrics on the Capability Directives page

Interpret operational metrics

Operational metrics help you track the performance of your skill and analyze errors that impact customers.

Accuracy rate

Accuracy rate indicates the percentage of time that the ChangeReport event matched a previous StateReport response. Accuracy rate metrics include both manual and Alexa-directed state changes, for example, when a user physically turns on a light or asks Alexa to turn it on. Accuracy data is available on the Change Report page.
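
As a rough illustration of the calculation, and not Amazon's exact aggregation method, you can think of accuracy as the fraction of change reports whose reported value agrees with the corresponding state report:

```python
# Illustrative only: accuracy as the fraction of reported changes that agree
# with the corresponding reported state for the same property.
reports = [
    # (value from StateReport, value from ChangeReport)
    ("ON", "ON"),
    ("OFF", "OFF"),
    ("ON", "OFF"),   # mismatch
    ("OFF", "OFF"),
]

matches = sum(1 for state_value, change_value in reports if state_value == change_value)
accuracy_rate = matches / len(reports) * 100
print(f"Accuracy rate: {accuracy_rate:.1f}%")  # 75.0%, below the 97 percent target
```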

To maintain Works with Alexa certification, your product should achieve 97 percent change reporting accuracy. For details, see Recommended performance targets.

Success rate

Success rate is the total number of successful responses Alexa receives from your skill divided by the total number of requests Alexa sends to your skill. Success rate translates to availability.

Alexa counts a request as successful when the request reaches the skill, the skill handles it, and the skill returns a successful response. If an outage prevents Alexa from reaching the skill, or the skill returns an ambiguous response, such as INTERNAL_ERROR, the success rate drops. Respond with a specific error code so that Alexa can interpret the response. To analyze the reasons for availability failures, use the error metrics.

Success rate graphs show percentage success and request volume over time. Success rate metrics are available on the Capability Directives, Discovery, and Reporting State pages.

To maintain Works with Alexa certification, maintain 99.93 percent availability annually. For details, see Availability requirements.
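
As a back-of-the-envelope sketch, the following shows the success-rate calculation and roughly how much downtime a 99.93 percent annual availability target leaves:

```python
# Success rate = successful responses / total requests.
successful_responses = 998_450
total_requests = 1_000_000
success_rate = successful_responses / total_requests * 100
print(f"Success rate: {success_rate:.2f}%")  # compare against the 99.93 percent target

# A 99.93 percent availability target leaves a small annual downtime budget.
availability_target = 99.93
hours_per_year = 365 * 24
downtime_budget_hours = (100 - availability_target) / 100 * hours_per_year
print(f"Annual downtime budget: about {downtime_budget_hours:.1f} hours")  # about 6.1 hours
```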

Error

A high rate of user-perceived errors (UPEs) means your skill fails to complete the user-requested action to control a smart home device or service, which causes friction for users. These faults stem from skill errors, Alexa errors, and fatal errors.

UPE data is available on the Capability Directives page.

Amazon provides the following UPE metrics:

  • User-Perceived Error Type Summary – List of error types and an aggregated count of affected utterances. Error types include Alexa errors, skill errors, and fatal errors. Click an error type to see when the error occurred over the last 30 days.
  • User-perceived errors per million: All Capabilities – Aggregated view of all error types per million requests in the selected region over the selected time interval and aggregation period.
  • User-perceived errors: Error type – List of error types under the chart.
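
Errors per million is a normalized error rate, so you can compare periods with different traffic volumes; a quick sketch of the calculation:

```python
# User-perceived errors per million requests (illustrative counts).
upe_count = 42
request_count = 1_200_000
upe_per_million = upe_count / request_count * 1_000_000
print(f"UPEs per million requests: {upe_per_million:.1f}")  # 35.0
```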

The following image shows an example Error tab on the Capability Directives page.

Sample error page that shows user-perceived error summary, errors per million requests, and errors by type.
Error tab on Capability Directives page that shows user-perceived errors

Error types

User-perceived error types fall into the following categories:

  • Skill errors – Errors returned in the skill response to Alexa. For a comprehensive list of error types and their descriptions, see the Alexa.ErrorResponse Interface reference. For skill-related errors, Alexa tries to provide the best speech output that informs the user of the situation and suggests actions to take. The more specific the error type, the more helpful the response. For an example of the response shape, see the sketch after this list.
  • Alexa errors – Errors that occur on the Alexa side after Alexa attempts to process the response from your skill. Here Alexa indicates an inability to complete the request. Alexa errors include the following exceptions:
    • SKILL_RESPONSE_TIMEOUT_EXCEPTION – Exception thrown when a skill times out on an Alexa request to the skill.
    • INVALID_SKILL_RESPONSE_EXCEPTION – Exception thrown when Alexa receives an invalid response from the skill.
  • Fatal errors – Unexpected and non-recoverable errors that impact customers, such as outages and infrastructure failures. These errors lead to silence, a ba-dump sound, or the message, "Sorry, something went wrong."
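
The following abridged sketch (a Python dictionary standing in for the JSON payload) shows the general shape of an Alexa.ErrorResponse that reports a specific error type. The identifiers are placeholders; see the Alexa.ErrorResponse Interface reference for the exact fields and error types your skill should use.

```python
# Abridged sketch of an Alexa.ErrorResponse indicating the endpoint couldn't be
# reached. Returning a specific type, instead of a generic INTERNAL_ERROR, lets
# Alexa give the user a more helpful answer. Identifiers are placeholders.
error_response = {
    "event": {
        "header": {
            "namespace": "Alexa",
            "name": "ErrorResponse",
            "messageId": "<message-id>",
            "correlationToken": "<correlation-token>",
            "payloadVersion": "3",
        },
        "endpoint": {"endpointId": "<endpoint-id>"},
        "payload": {
            "type": "ENDPOINT_UNREACHABLE",
            "message": "Unable to reach the device because it appears to be offline.",
        },
    }
}
```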

Latency

Latency is the time in milliseconds from when Alexa sends a request to your skill until Alexa receives a response from your skill. For devices such as smart lights, the requested action should feel as instantaneous as flipping a light switch. For battery-powered devices, the response might take longer due to the limitations of the power source. In both cases, if Alexa doesn't receive a response within eight seconds, Alexa might report an error without a clear reason. Even worse, if the action succeeded but the response timed out, Alexa might erroneously report a failure, leaving the user confused and unsatisfied with the experience.
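
One way to stay inside the eight-second window is to bound your own downstream call and return a specific error when the device doesn't answer in time. The following is a minimal sketch, assuming a hypothetical call_device_cloud helper that stands in for your skill's call to the device cloud; it isn't part of any Alexa SDK, and a production handler would build a full Response or ErrorResponse event.

```python
import concurrent.futures
import time

DEVICE_CALL_TIMEOUT_SECONDS = 5  # stay well below Alexa's eight-second window

def call_device_cloud(directive):
    # Hypothetical stand-in for your skill's call to the device cloud.
    time.sleep(10)  # simulate a device that never answers in time
    return "ON"

def handle_directive(directive):
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = executor.submit(call_device_cloud, directive)
    try:
        new_state = future.result(timeout=DEVICE_CALL_TIMEOUT_SECONDS)
        return {"powerState": new_state}  # build your normal Response event here
    except concurrent.futures.TimeoutError:
        # Return a specific error instead of letting Alexa time out and guess.
        return {
            "event": {
                "header": {"namespace": "Alexa", "name": "ErrorResponse", "payloadVersion": "3"},
                "payload": {
                    "type": "ENDPOINT_UNREACHABLE",
                    "message": "The device didn't respond in time.",
                },
            }
        }
    finally:
        executor.shutdown(wait=False)  # don't block on the stuck call

print(handle_directive({"name": "TurnOn"}))
```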

Amazon measures latency with two types of statistics:

  • Percentile (P) – Percentile, such as P99, P90, and P50, indicates how a value compares to others within the same period. For example, P90 is the 90th percentile and means that 90 percent of the data within the period is lower than this value and 10 percent of the data is higher than this value. Average indicates the sum of values divided by the total number of data points.
  • Trimmed Mean (TM) – Trimmed mean is the average of a set of measurements after including or discarding values indicated by a number or range, such as TM95 and TM95:99. TM metrics reflect latency at volume more accurately than single percentile-based metrics. The following examples show TM metrics:
    • TM95, with no specified ending range, is the average latency after discarding the highest five percent of latency data. TM95 latency is an indicator of overall latency performance because it discards the outlier high latency values.
    • TM95:99, with a range of 95–99 percent, focuses on a segment of the highest latency values collected, averaging the latency values that fall in the specified range. Use this data to identify edge cases of high latency and to understand peak latency as compared to a more inclusive metric, such as TM95.
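
To see how these statistics relate, the following sketch computes them over simulated latency data; it uses a simple nearest-rank percentile and isn't Amazon's exact aggregation method.

```python
import random

random.seed(7)
# Simulated per-request latencies in milliseconds: mostly fast, a few slow outliers.
latencies_ms = ([random.gauss(400, 120) for _ in range(1000)]
                + [random.uniform(2000, 8000) for _ in range(20)])

def percentile(values, p):
    # Nearest-rank percentile: the value below which p percent of the data falls.
    data = sorted(values)
    k = max(0, min(len(data) - 1, round(p / 100 * len(data)) - 1))
    return data[k]

def trimmed_mean(values, lower_pct, upper_pct):
    # Average of the values whose rank falls between lower_pct and upper_pct.
    data = sorted(values)
    lo = int(len(data) * lower_pct / 100)
    hi = int(len(data) * upper_pct / 100)
    band = data[lo:hi]
    return sum(band) / len(band)

print(f"P50     : {percentile(latencies_ms, 50):7.1f} ms")  # typical request
print(f"P90     : {percentile(latencies_ms, 90):7.1f} ms")  # where the slow tail begins
print(f"TM95    : {trimmed_mean(latencies_ms, 0, 95):7.1f} ms")   # mean after dropping the worst 5%
print(f"TM95:99 : {trimmed_mean(latencies_ms, 95, 99):7.1f} ms")  # mean of the 95th-99th percentile band
```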

Latency graphs show latency and request volume over time. Latency metrics are available on the Capability Directives, Discovery, and Reporting State pages. Use the Choose Data menu (the three vertical dots in a circle icon) on the latency graphs to choose different percentile and trimmed mean statistics or the average, and to toggle request volume on and off. A single graph can display multiple statistics at the same time.

To maintain Works with Alexa certification, for capability directive responses, your product should have a maximum latency of 1000 milliseconds for P90 and 800 milliseconds for P50. For details, see Recommended performance targets.

The following image shows example latency and success rate graphs for the last 30 days, aggregated daily.

Dashboard shows summary Discovery operational metrics.
Operational metrics on the Discovery page
