
Understanding the task verification automation

Prerequisites

  • You need a Domino account with access to the Quests section

  • Access to a quest with user submissions

  • Basic understanding of Domino's automation system

Overview

Domino's quest system uses automated verification to validate user submissions. When reviewing quest claims, you can access the underlying automation runs to understand exactly how tasks were validated, diagnose issues, and gain insights into user interactions. This guide explains how to interpret automation run data and use it for troubleshooting verification issues.

Accessing Automation Runs

When reviewing quest claims, each task shows its verification status and provides access to the underlying automation that performed the validation.

Finding the Automation Link

  1. Navigate to your quest's claims panel

  2. Click on any claim to view the detailed submission

  3. For each task, look for the robot icon button next to the status indicator

  4. Click the robot icon to open the automation run details

If a task shows no automation run link, it might indicate a problem with the quest setup or that a preliminary check failed before the automation could run.

Understanding Automation Run Information

The automation run toolbar provides an overview of the execution process.

Run Toolbar Elements

When you open an automation run, you'll see a toolbar with key information (sketched in code after this list):

  • Run ID: Unique identifier for the automation execution

  • Status: Current state (Success, Failed, Warning, or Running)

  • Start Time: When the verification process began

  • Duration: How long the automation took to complete

  • Tasks Used: Number of tasks used by the automation from your available quota
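
To make these fields concrete, here is a minimal TypeScript sketch of what a run record could look like; the type and field names are assumptions for illustration, not Domino's actual data model.

```typescript
// Hypothetical shape of an automation run, mirroring the toolbar fields
// above; the names are illustrative, not Domino's actual data model.
type RunStatus = "Success" | "Failed" | "Warning" | "Running";

interface AutomationRun {
  runId: string;      // Unique identifier for this execution
  status: RunStatus;  // Current state of the run
  startTime: Date;    // When the verification process began
  durationMs: number; // How long the automation took to complete
  tasksUsed: number;  // Tasks consumed from your available quota by this run
}
```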

Navigating Between Runs

If an automation has multiple runs (one per claim), you can navigate between them:

  • Use the left arrow to view previous attempts

  • Use the right arrow to view more recent attempts

Restart Functionality

The restart button allows you to rerun the verification if needed. This is particularly useful during quest development or when troubleshooting inconsistent verifications.

Visualizing the Automation Flow

The automation editor displays the entire verification flow, highlighting the path that was taken during execution.

Understanding the Verification Path

Verification automations follow a standard pattern:

  1. Begin with the Task Submitted trigger

  2. Process through various steps that evaluate the submission

  3. End at either a Task Completed or Task Failed action

Steps that were executed during the run are highlighted, giving you a clear visual representation of the path the automation took to reach its conclusion.
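
As a rough mental model, the pattern can be sketched in TypeScript pseudocode. Everything here (the Submission and Step types, the stepsFor() helper) is a hypothetical stand-in for the steps you configure in the editor, not Domino's engine:

```typescript
// Illustrative sketch of the standard verification pattern.
interface Submission {
  taskId: string;
  payload: unknown; // Whatever the user submitted for the task
}

interface Step {
  evaluate: (payload: unknown) => boolean; // One condition in the flow
  failureMessage: string;                  // Shown to the user on failure
}

type Outcome =
  | { action: "TaskCompleted" }
  | { action: "TaskFailed"; reason: string };

// Stand-in for the steps configured in the quest's automation.
function stepsFor(taskId: string): Step[] {
  return [
    {
      evaluate: (p) => typeof p === "string" && p.length > 0,
      failureMessage: "Submission was empty",
    },
  ];
}

function verify(submission: Submission): Outcome {
  // 1. The Task Submitted trigger hands the submission to the flow.
  for (const step of stepsFor(submission.taskId)) {
    // 2. Each step evaluates some aspect of the submission.
    if (!step.evaluate(submission.payload)) {
      // 3a. A condition that evaluates unexpectedly routes to Task Failed.
      return { action: "TaskFailed", reason: step.failureMessage };
    }
  }
  // 3b. All conditions passed: the flow ends at Task Completed.
  return { action: "TaskCompleted" };
}
```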

Examining Step Data

Each step in the automation processes specific data. You can examine this data to understand what happened during verification.

Accessing Step Traces

To view detailed information about a specific step:

  1. Click on any step in the automation flow

  2. Navigate to the Traces tab in the step editor

  3. View the execution history for that specific step

Interpreting Trace Information

The Traces tab shows:

  • Status indicator: Success, error, or warning for each execution

  • Duration: How long the step took to execute

  • Data In: What information was provided to the step

  • Data Out: What information was produced by the step

Debugging Tips

Compare the "Data In" and "Data Out" tabs to understand how information was transformed at each step. This is crucial for identifying where verification rules might not be working as expected.
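
As an illustration, assuming step traces shaped like the fields above (the StepTrace type and printTraceDiff() are hypothetical), a side-by-side dump makes transformation bugs easier to spot:

```typescript
// Hypothetical trace record mirroring the Traces tab fields; the shape is
// an assumption for illustration, not Domino's API.
interface StepTrace {
  status: "success" | "error" | "warning";
  durationMs: number;
  dataIn: Record<string, unknown>;  // What the step received
  dataOut: Record<string, unknown>; // What the step produced
}

// Print how each step transformed the data, to spot where a rule misfires.
function printTraceDiff(traces: StepTrace[]): void {
  traces.forEach((t, i) => {
    console.log(`Step ${i + 1} [${t.status}] ${t.durationMs}ms`);
    console.log("  in: ", JSON.stringify(t.dataIn));
    console.log("  out:", JSON.stringify(t.dataOut));
  });
}
```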

Common Verification Scenarios

Successful Verification

In a successful verification:

  • The automation flows from Task Submitted to various processing steps

  • All conditions evaluate to the expected values

  • The automation reaches the Task Completed action

  • The status shows as Success with a green indicator

Failed Verification

In a failed verification:

  • The automation flows from Task Submitted to various processing steps

  • A condition evaluates to an unexpected value

  • The automation reaches the Task Failed action

  • The status shows as Failed with a red indicator

  • The failure reason is captured and displayed to the user

Error During Verification

When an error occurs:

  • A step encounters an unexpected issue

  • The automation cannot proceed as designed

  • If no error handling path exists, the entire run fails

  • The task validation also fails as a result

  • The error message is displayed in the step trace

Troubleshooting Verification Issues

When users report problems with task verification, the automation run provides valuable diagnostic information.

No Automation Run Available

If there's no automation run link for a task:

  • Check if the task has a valid trigger configuration

  • Verify that all required connections are established

  • Ensure the quest is properly published and active

  • Check for preliminary validation errors in the quest setup

Missing automation runs often indicate configuration issues at the quest or task level rather than problems with user submissions.

Unexpected Validation Results

If a task is being validated incorrectly:

  1. Examine the data inputs to see what the user actually submitted

  2. Check the conditions in your automation to ensure they correctly evaluate the submission

  3. Verify that all connections to external services are working properly

  4. Review the error messages if any step failed during execution

Common Error Patterns

  • API rate limiting: External services may temporarily reject requests when limits are exceeded

  • Missing user data: Required user information might not be available

  • Invalid input format: The user provided data in an unexpected format

  • Timeout errors: Operations took too long to complete
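
If you find yourself triaging many failed runs, signals like these can be recognized mechanically. A rough sketch, noting that the HTTP 429 status and TimeoutError name are general API conventions rather than Domino specifics:

```typescript
// Rough classification of the failure signals above; the status code and
// error name are common API conventions, not Domino specifics.
function classifyError(err: { status?: number; name?: string; message: string }): string {
  if (err.status === 429) return "API rate limiting";      // HTTP 429 Too Many Requests
  if (err.name === "TimeoutError") return "Timeout error"; // Operation took too long
  if (/missing|not found/i.test(err.message)) return "Missing user data";
  return "Invalid input format or other error";
}
```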

Optimizing Validation Automations

Based on insights from automation runs, you can improve your verification processes.

Adding Error Handling

If you notice runs failing without reaching a Task Failed step:

  1. Identify where errors commonly occur

  2. Add error handling paths that direct to Task Failed with helpful messages

  3. Include conditions that check for common error states

Well-designed error handling ensures users receive meaningful feedback even when unexpected issues occur during validation.
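
A minimal sketch of the idea, assuming the hypothetical Outcome shape from the earlier flow sketch: wrap the fragile condition so an exception still routes to Task Failed with a helpful message.

```typescript
// Illustrative error-handling wrapper: a failing external check still routes
// to Task Failed with a user-facing message instead of crashing the run.
type Outcome =
  | { action: "TaskCompleted" }
  | { action: "TaskFailed"; reason: string };

async function checkedStep(
  check: () => Promise<boolean>, // The fragile condition, e.g. an external API call
  errorMessage: string,          // Helpful message for the error path
): Promise<Outcome> {
  try {
    const ok = await check();
    return ok
      ? { action: "TaskCompleted" }
      : { action: "TaskFailed", reason: "The condition was not met" };
  } catch {
    // Without this path, the whole run fails and the user gets no feedback.
    return { action: "TaskFailed", reason: errorMessage };
  }
}
```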

Refining Validation Logic

Use the step trace data to improve your validation criteria (see the normalization sketch after this list):

  • Identify edge cases that users are encountering

  • Adjust conditions to handle various input formats

  • Add data transformation steps to normalize user inputs
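
For instance, a small normalization pass (illustrative only; the URL pattern is hypothetical) can make a comparison tolerant of common input variations:

```typescript
// Illustrative normalization so that "@Alice ", "alice", and a profile URL
// all compare equal before a condition evaluates them.
function normalizeHandle(input: string): string {
  let s = input.trim().toLowerCase();
  s = s.replace(/^https?:\/\/(www\.)?example\.com\/@?/, ""); // strip a hypothetical profile URL prefix
  s = s.replace(/^@/, "");                                   // strip a leading @
  return s;
}

console.log(normalizeHandle("  @Alice ") === normalizeHandle("https://example.com/alice")); // true
```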
