Understanding Task Verification Automation

Prerequisites

  • A Domino account with access to the Quests section

  • Access to a quest with user submissions

  • Basic understanding of Domino's automation system

Overview

Domino's quest system uses automated verification to validate user submissions. When reviewing quest claims, you can access the underlying automation runs to understand exactly how tasks were validated, diagnose issues, and gain insights into user interactions. This guide explains how to interpret automation run data and use it for troubleshooting verification issues.

Accessing Automation Runs

When reviewing quest claims, each task shows its verification status and provides access to the underlying automation that performed the validation.

  1. Navigate to your quest's claims panel

  2. Click on any claim to view the detailed submission

  3. For each task, look for the robot icon button next to the status indicator

  4. Click the robot icon to open the automation run details

The robot icon provides access to the automation run that validated each task

If a task shows no automation run link, it might indicate a problem with the quest setup or that a preliminary check failed before the automation could run.

Understanding Automation Run Information

The automation run toolbar provides an overview of the execution process.

Run Toolbar Elements

When you open an automation run, you'll see a toolbar with key information:

  • Run ID: Unique identifier for the automation execution

  • Status: Current state (Success, Failed, Warning, or Running)

  • Start Time: When the verification process began

  • Duration: How long the automation took to complete

  • Tasks Used: Number of tasks this run consumed from your available quota

The run toolbar provides an overview of the verification process
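
For reference, the sketch below shows one way this run metadata could be represented, assuming a hypothetical TypeScript model. The field names are illustrative only, not Domino's actual data format.

```typescript
// Hypothetical shape of an automation run's toolbar metadata.
// Field names are illustrative; Domino's actual data model may differ.
type RunStatus = "Success" | "Failed" | "Warning" | "Running";

interface AutomationRun {
  runId: string;      // Unique identifier for this execution
  status: RunStatus;  // Current state of the run
  startTime: string;  // ISO 8601 timestamp of when verification began
  durationMs: number; // Time taken to complete, in milliseconds
  tasksUsed: number;  // Tasks consumed from the available quota
}

const exampleRun: AutomationRun = {
  runId: "run_8f3a2c",
  status: "Success",
  startTime: "2024-05-01T12:34:56Z",
  durationMs: 1840,
  tasksUsed: 3,
};
```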

If an automation has multiple runs (one per claim), you can navigate between them:

  • Use the left arrow to view previous attempts

  • Use the right arrow to view more recent attempts

Visualizing the Automation Flow

The automation editor displays the entire verification flow, highlighting the path that was taken during execution.

Understanding the Verification Path

Verification automations follow a standard pattern:

  1. Begin with the Task Submitted trigger

  2. Process through various steps that evaluate the submission

  3. End at either a Task Completed or Task Failed action

The highlighted path shows exactly how the user's submission was processed

Steps that were executed during the run are highlighted, giving you a clear visual representation of the path the automation took to reach its conclusion.
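
To make the pattern concrete, here is a minimal sketch of such a flow expressed in TypeScript. The trigger, step, and action names mirror the concepts above, but the function and types are hypothetical, not Domino's actual implementation.

```typescript
// Hypothetical sketch of a verification flow: the Task Submitted trigger
// feeds processing steps, and the run ends at Task Completed or Task Failed.
interface Submission {
  userId: string;
  payload: string; // What the user submitted, e.g. a link
}

type Verdict =
  | { action: "TaskCompleted" }
  | { action: "TaskFailed"; reason: string };

function onTaskSubmitted(submission: Submission): Verdict {
  // Step 1: basic shape check on the submission
  if (submission.payload.trim().length === 0) {
    return { action: "TaskFailed", reason: "Submission was empty." };
  }
  // Step 2: evaluate the quest-specific condition
  if (!submission.payload.startsWith("https://")) {
    return { action: "TaskFailed", reason: "Expected a link starting with https://." };
  }
  // All conditions passed: end at the Task Completed action
  return { action: "TaskCompleted" };
}
```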

Examining Step Data

Each step in the automation processes specific data. You can examine this data to understand what happened during verification.

Accessing Step Traces

To view detailed information about a specific step:

  1. Click on any step in the automation flow

  2. Navigate to the Traces tab in the step editor

  3. View the execution history for that specific step

Interpreting Trace Information

The Traces tab shows:

  • Status indicator: Success, Error, or Warning for each execution

  • Duration: How long the step took to execute

  • Data In: What information was provided to the step

  • Data Out: What information was produced by the step

The Traces tab shows detailed input and output data for each step execution
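
The fields above could map onto a record like this hypothetical sketch (the shape is an assumption for illustration, not Domino's actual trace format):

```typescript
// Hypothetical shape of a single step trace entry.
interface StepTrace {
  status: "Success" | "Error" | "Warning"; // Outcome of this execution
  durationMs: number;                      // How long the step took
  dataIn: Record<string, unknown>;         // Information provided to the step
  dataOut: Record<string, unknown>;        // Information produced by the step
}

const exampleTrace: StepTrace = {
  status: "Success",
  durationMs: 230,
  dataIn: { submittedUrl: "https://example.com/post/123" },
  dataOut: { urlIsReachable: true, httpStatus: 200 },
};
```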

Common Verification Scenarios

Successful Verification

In a successful verification:

  • The automation flows from the Task Submitted trigger through the processing steps

  • All conditions evaluate to the expected values

  • The automation reaches the Task Completed action

  • The status shows as Success with a green indicator

Failed Verification

In a failed verification:

  • The automation flows from the Task Submitted trigger through the processing steps

  • A condition evaluates to an unexpected value

  • The automation reaches the Task Failed action

  • The status shows as Failed with a red indicator

  • The failure reason is captured and displayed to the user
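
As an illustration, a failed run might surface a result along these lines, where the reason recorded at the Task Failed action is what the user sees (a hypothetical object, with a made-up step name and message):

```typescript
// Hypothetical failed-verification result: the reason captured at the
// Task Failed action is what gets displayed back to the user.
const failedResult = {
  status: "Failed" as const,
  failedAt: "Check follower count", // Step whose condition evaluated unexpectedly
  reason: "Your account must have at least 10 followers to complete this task.",
};

console.log(failedResult.reason); // Shown to the user alongside the red indicator
```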

Error During Verification

When an error occurs:

  • A step encounters an unexpected issue

  • The automation cannot proceed as designed

  • If no error handling path exists, the entire run fails

  • The task validation also fails as a result

  • The error message is displayed in the step trace

Troubleshooting Verification Issues

When users report problems with task verification, the automation run provides valuable diagnostic information.

No Automation Run Available

If there's no automation run link for a task:

  • Check if the task has a valid trigger configuration

  • Verify that all required connections are established

  • Ensure the quest is properly published and active

  • Check for preliminary validation errors in the quest setup

Unexpected Validation Results

If a task is being validated incorrectly:

  1. Examine the data inputs to see what the user actually submitted

  2. Check the conditions in your automation to ensure they correctly evaluate the submission

  3. Verify that all connections to external services are working properly

  4. Review the error messages if any step failed during execution
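
One practical technique, sketched below under the assumption that you can copy the Data In values out of a step trace: replay your condition locally against the exact input the user submitted to see why it evaluated the way it did.

```typescript
// Hypothetical replay of a condition against the Data In captured in a trace.
// The field name and condition are made up for illustration.
const dataIn = { submittedUrl: "http://example.com/post/123" }; // Copied from the trace
const condition = (url: string): boolean => url.startsWith("https://");

// Prints false: the user submitted http, not https, which explains the failure.
console.log(condition(dataIn.submittedUrl));
```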

Common Error Patterns

  • API rate limiting: External services may temporarily reject requests

  • Missing user data: Required user information might not be available

  • Invalid input format: User provided data in an unexpected format

  • Timeout errors: Operations took too long to complete
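
These patterns might translate into user-facing messages via a mapping like the following hypothetical sketch (the error codes are assumptions, not values Domino actually emits):

```typescript
// Hypothetical classifier that maps the common error patterns above to
// messages a user can act on.
function describeVerificationError(err: { code: string }): string {
  switch (err.code) {
    case "RATE_LIMITED":
      return "The external service is rate limiting requests; try again shortly.";
    case "MISSING_USER_DATA":
      return "Required account information was not available.";
    case "INVALID_FORMAT":
      return "The submitted data was not in the expected format.";
    case "TIMEOUT":
      return "The verification took too long to complete.";
    default:
      return "An unexpected error occurred during verification.";
  }
}
```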

Optimizing Validation Automations

Based on insights from automation runs, you can improve your verification processes.

Adding Error Handling

If you notice runs failing without reaching a Task Failed step:

  1. Identify where errors commonly occur

  2. Add error handling paths that direct to Task Failed with helpful messages

  3. Include conditions that check for common error states
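
A minimal sketch of that idea, assuming a hypothetical step function and verdict type (not Domino's actual API): wrap error-prone steps so that failures are routed to Task Failed with an actionable message instead of aborting the run.

```typescript
// Hypothetical error-handling path: instead of letting an unexpected error
// abort the whole run, catch it and direct the flow to Task Failed with a
// message the user can act on.
type Verdict =
  | { action: "TaskCompleted" }
  | { action: "TaskFailed"; reason: string };

async function runStepWithErrorHandling(
  step: () => Promise<boolean>, // Hypothetical step that checks a condition
): Promise<Verdict> {
  try {
    const passed = await step();
    return passed
      ? { action: "TaskCompleted" }
      : { action: "TaskFailed", reason: "The submission did not meet the task requirements." };
  } catch (err) {
    // Known error state: surface a helpful, specific message
    return {
      action: "TaskFailed",
      reason: `Verification could not be completed: ${
        err instanceof Error ? err.message : "unknown error"
      }.`,
    };
  }
}
```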

Refining Validation Logic

Use the step trace data to improve your validation criteria:

  • Identify edge cases that users are encountering

  • Adjust conditions to handle various input formats

  • Add data transformation steps to normalize user inputs
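
For instance, a normalization step along these lines (a hypothetical sketch; the handle format is just an example) can absorb common input variations before your conditions run:

```typescript
// Hypothetical data-transformation step that normalizes user input so
// downstream conditions see a consistent format.
function normalizeSubmittedHandle(raw: string): string {
  return raw
    .trim()            // Strip accidental whitespace
    .replace(/^@/, "") // Accept "@alice" and "alice" alike
    .toLowerCase();    // Allow case-insensitive comparison downstream
}

// "  @Alice " and "alice" now compare equal after normalization.
console.log(normalizeSubmittedHandle("  @Alice ") === "alice"); // true
```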
