Understanding the task verification automation
Prerequisites
You need a Domino account with access to the Quests section
Access to a quest with user submissions
Basic understanding of Domino's automation system
Domino's quest system uses automated verification to validate user submissions. When reviewing quest claims, you can access the underlying automation runs to understand exactly how tasks were validated, diagnose issues, and gain insights into user interactions. This guide explains how to interpret automation run data and use it for troubleshooting verification issues.
When reviewing quest claims, each task shows its verification status and provides access to the underlying automation that performed the validation.
Navigate to your quest's claims panel
Click on any claim to view the detailed submission
For each task, look for the robot icon button next to the status indicator
Click the robot icon to open the automation run details
The automation run toolbar provides an overview of the execution process.
When you open an automation run, you'll see a toolbar with key information (a conceptual sketch follows the list):
Run ID: Unique identifier for the automation execution
Status: Current state (Success, Failed, Warning, or Running)
Start Time: When the verification process began
Duration: How long the automation took to complete
Tasks Used: Number of tasks used by the automation from your available quota
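Conceptually, these toolbar fields map to a small run-metadata record. The sketch below models them as a TypeScript interface; the field names and types are illustrative assumptions, not Domino's actual API.

```ts
// Hypothetical shape of the metadata shown in the run toolbar.
// Field names and types are illustrative, not Domino's actual API.
interface AutomationRun {
  runId: string;                                         // Run ID: unique identifier for this execution
  status: "Success" | "Failed" | "Warning" | "Running";  // Status: current state of the run
  startTime: Date;                                       // Start Time: when verification began
  durationMs: number;                                    // Duration: total execution time
  tasksUsed: number;                                     // Tasks Used: quota consumed by this run
}
```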
If an automation has multiple runs (one per claim), you can navigate between them:
Use the left arrow to view previous attempts
Use the right arrow to view more recent attempts
Restart Functionality
The restart button allows you to rerun the verification if needed. This is particularly useful during quest development or when troubleshooting inconsistent verifications.
The automation editor displays the entire verification flow, highlighting the path that was taken during execution.
Verification automations follow a standard pattern, sketched in code after this list:
Begin with the Task Submitted trigger
Process through various steps that evaluate the submission
End at either a Task Completed or Task Failed action
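As a rough mental model, this pattern can be expressed as a function that walks the processing steps in order. Everything here, the Submission and Outcome types and the step signature, is hypothetical; real Domino automations are built visually, not in code.

```ts
// A minimal sketch of the standard verification pattern; all names
// are illustrative, not Domino's actual automation model.
type Submission = { userId: string; payload: unknown };
type Outcome = { action: "Task Completed" | "Task Failed"; reason?: string };

function runVerification(
  submission: Submission,
  steps: Array<(s: Submission) => boolean>,
): Outcome {
  // Entry point corresponds to the "Task Submitted" trigger.
  for (const step of steps) {
    if (!step(submission)) {
      // A condition evaluated to an unexpected value: end at "Task Failed".
      return { action: "Task Failed", reason: "A verification condition was not met" };
    }
  }
  // All conditions passed: end at "Task Completed".
  return { action: "Task Completed" };
}
```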
Each step in the automation processes specific data. You can examine this data to understand what happened during verification.
To view detailed information about a specific step:
Click on any step in the automation flow
Navigate to the Traces tab in the step editor
View the execution history for that specific step
The Traces tab shows:
Status indicator: Success, error, or warning for each execution
Duration: How long the step took to execute
Data In: What information was provided to the step
Data Out: What information was produced by the step
Debugging Tips
Compare the "Data In" and "Data Out" tabs to understand how information was transformed at each step. This is crucial for identifying where verification rules might not be working as expected.
In a successful verification:
The automation flows from Task Submitted to various processing steps
All conditions evaluate to the expected values
The automation reaches the Task Completed action
The status shows as Success with a green indicator
In a failed verification:
The automation flows from Task Submitted to various processing steps
A condition evaluates to an unexpected value
The automation reaches the Task Failed action
The status shows as Failed with a red indicator
The failure reason is captured and displayed to the user
When an error occurs (see the sketch after this list):
A step encounters an unexpected issue
The automation cannot proceed as designed
If no error handling path exists, the entire run fails
The task validation also fails as a result
The error message is displayed in the step trace
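To make the contrast with a failed verification concrete, here is a sketch of an unhandled step error. The fetchFollowerCount step and its error are invented for illustration:

```ts
// Hypothetical step that calls an external service and throws on failure.
function fetchFollowerCount(handle: string): number {
  throw new Error("429 Too Many Requests"); // e.g. the service rate-limits us
}

try {
  fetchFollowerCount("@example");
} catch (err) {
  // With no error-handling path in the automation, the run stops here:
  // the task validation fails, and this message appears only in the step
  // trace rather than as a friendly Task Failed reason for the user.
  console.error("Step trace error:", (err as Error).message);
}
```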
When users report problems with task verification, the automation run provides valuable diagnostic information.
If there's no automation run link for a task:
Check if the task has a valid trigger configuration
Verify that all required connections are established
Ensure the quest is properly published and active
Check for preliminary validation errors in the quest setup
Missing automation runs often indicate configuration issues at the quest or task level rather than problems with user submissions.
If a task is being validated incorrectly:
Examine the data inputs to see what the user actually submitted
Check the conditions in your automation to ensure they correctly evaluate the submission
Verify that all connections to external services are working properly
Review the error messages if any step failed during execution
Common causes of verification errors include the following (see the retry sketch after this list):
API rate limiting: External services may temporarily refuse connections
Missing user data: Required user information might not be available
Invalid input format: User provided data in an unexpected format
Timeout errors: Operations took too long to complete
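Several of these causes are transient, so verification steps that call external services often benefit from a timeout plus retry with backoff. A minimal sketch, assuming the Node 18+ global fetch; none of this is built-in Domino behavior:

```ts
// Illustrative timeout-plus-retry wrapper for flaky external calls;
// the URL, limits, and backoff schedule are assumptions for the sketch.
async function fetchJson(url: string, timeoutMs: number): Promise<unknown> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs); // guards against timeout errors
  try {
    const res = await fetch(url, { signal: controller.signal });
    if (res.status === 429) throw new Error("Rate limited");     // API rate limiting
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();
  } finally {
    clearTimeout(timer);
  }
}

async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      await new Promise((r) => setTimeout(r, 2 ** i * 1000)); // exponential backoff
    }
  }
  throw lastErr;
}
```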
Based on insights from automation runs, you can improve your verification processes.
If you notice runs failing without reaching a Task Failed step:
Identify where errors commonly occur
Add error handling paths that direct to Task Failed with helpful messages
Include conditions that check for common error states
Well-designed error handling ensures users receive meaningful feedback even when unexpected issues occur during validation.
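Here is a minimal sketch of what such a path can look like, using the same hypothetical Outcome shape as earlier; the helpful, user-facing wording of the messages is the part worth copying, not the code itself:

```ts
// Hypothetical error-handling path; not Domino's automation API.
type Outcome = { action: "Task Completed" | "Task Failed"; reason?: string };

function verifyWithErrorHandling(check: () => boolean): Outcome {
  try {
    return check()
      ? { action: "Task Completed" }
      : { action: "Task Failed", reason: "Submission did not meet the task criteria." };
  } catch (err) {
    // Error-handling path: instead of letting the run die with no feedback,
    // route unexpected issues to Task Failed with a helpful message.
    return {
      action: "Task Failed",
      reason: `Verification could not complete (${(err as Error).message}). Please try again shortly.`,
    };
  }
}
```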
Use the step trace data to improve your validation criteria (a normalization sketch follows this list):
Identify edge cases that users are encountering
Adjust conditions to handle various input formats
Add data transformation steps to normalize user inputs
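For example, a normalization step might accept the several ways users commonly enter the same value. This is a sketch under the assumption that the task collects a social handle; the accepted formats are illustrative, not Domino rules:

```ts
// Illustrative normalization for user-submitted handles; the formats
// accepted here are assumptions about common edge cases.
function normalizeHandle(raw: string): string {
  let value = raw.trim().toLowerCase();
  value = value.replace(/^https?:\/\/(www\.)?x\.com\//, ""); // full profile URL
  value = value.replace(/^@/, "");                           // leading "@"
  return value;
}

// "@Example", "https://x.com/Example", and "  example " all normalize
// to "example", so one condition can validate every variant.
console.log(normalizeHandle("https://x.com/Example")); // -> "example"
```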