Running Tests

Learn how to execute tests in Verex, understand test statuses, and track test results.

Triggering a Test Run

Users can trigger tests from a Test Suite. To run a test:

  1. Navigate to a Test Suite.
  2. Click Run Tests.
  3. Ensure the test has defined criteria.
  4. Verify that your organization has enough Evaluation Tokens for execution.
  5. (Optional) Override the Test URL (defaults to the Test Suite’s URL).
  6. Start the test run.

The system estimates the minimum Evaluation Tokens required to run the test (or the entire suite). If the organization lacks sufficient Evaluation Tokens, the run will not start.
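This pre-flight check can be sketched as a simple balance comparison. The function name and parameters below are illustrative only, assuming an integer token balance; they are not part of any real Verex API.

```python
def can_start_run(token_balance: int, estimated_minimum: int) -> bool:
    """Hypothetical pre-flight check: a run starts only if the
    organization's Evaluation Token balance covers the estimated
    minimum for the test (or suite)."""
    return token_balance >= estimated_minimum


# Example: a balance of 10 tokens covers an estimate of 7, but not 12.
print(can_start_run(10, 7))   # True
print(can_start_run(10, 12))  # False
```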

Test Execution Process & Statuses

Once triggered, a test follows these statuses:

  1. Pending – The test is waiting for the test runner to pick it up (usually within a few seconds).
  2. Running – The test is actively executing.
  3. Passed – The test met all criteria successfully.
  4. Failed – The test did not meet its criteria or encountered an issue.
  5. Stopped – The test was manually interrupted by the user before completion.

Users can stop a test at any time, but Evaluation Tokens already consumed will not be refunded. A test consumes at least one Evaluation Token the moment it starts, even if it is stopped immediately.
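The lifecycle above can be sketched as a small state set in which Passed, Failed, and Stopped are terminal. The enum and helper below are an illustrative model of the documented statuses, not code from Verex itself.

```python
from enum import Enum


class TestStatus(Enum):
    """The statuses a test moves through, as documented above."""
    PENDING = "pending"   # waiting for the runner to pick it up
    RUNNING = "running"   # actively executing
    PASSED = "passed"     # met all criteria
    FAILED = "failed"     # missed criteria or hit an issue
    STOPPED = "stopped"   # manually interrupted by the user


# Passed, Failed, and Stopped end the run; Pending and Running do not.
TERMINAL_STATUSES = {TestStatus.PASSED, TestStatus.FAILED, TestStatus.STOPPED}


def is_finished(status: TestStatus) -> bool:
    """Return True once a test has reached a terminal status."""
    return status in TERMINAL_STATUSES
```

A monitoring loop, for instance, would poll until `is_finished` returns True.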

Evaluation Token Consumption

  • The system cannot predict exact Evaluation Token usage before a test starts, but provides an estimate.
  • Evaluation Token consumption depends on the complexity and duration of the test.
  • Refer to the Evaluation Tokens documentation for more details.

Organization-Specific Limitations

Each organization has plan-specific limitations, such as:

  • Maximum Steps per Test – Caps the number of actions a test can take.
  • Maximum Test Duration – Limits how long a test can run before timing out.

For full details on plan-based limitations, visit our Pricing Page.
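A minimal sketch of how such limits could be enforced, checking both the step cap and the duration timeout; the parameter names and example limit values are illustrative, not actual Verex plan numbers.

```python
def within_plan_limits(steps_taken: int, elapsed_seconds: float,
                       max_steps: int, max_duration_seconds: float) -> bool:
    """Hypothetical check: a run stays alive only while it is under
    both its plan's Maximum Steps per Test and Maximum Test Duration."""
    return steps_taken <= max_steps and elapsed_seconds <= max_duration_seconds


# Example with made-up limits of 50 steps and a 300-second timeout:
print(within_plan_limits(20, 120.0, max_steps=50, max_duration_seconds=300.0))  # True
print(within_plan_limits(51, 120.0, max_steps=50, max_duration_seconds=300.0))  # False (too many steps)
print(within_plan_limits(20, 301.0, max_steps=50, max_duration_seconds=300.0))  # False (timed out)
```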

Viewing Test Run Details

Each executed test generates a Test Run, providing insights into:

  • AI evaluation steps – See how the AI processed and validated the test criteria.
  • Thinking logic – Understand the AI’s decision-making process.
  • Screenshots – Track interactions and website rendering.
  • Interactive elements – View and toggle UI elements that the AI interacted with.
  • Browser console logs – View debugging details.

Running a Test Suite

Executing a Test Suite Run triggers all included tests and consolidates results into one suite-level execution report. This is useful for:

  • CI/CD Pipelines – Automating quality checks before deployment.
  • Bulk test execution – Running multiple tests efficiently.

For details on integrating tests into CI/CD workflows, see Integrations.
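A common CI/CD pattern is to fail the pipeline whenever any test in the suite did not pass. The sketch below assumes suite results arrive as plain status strings; it is an illustration of the gating logic, not a real Verex integration.

```python
def suite_exit_code(results: list[str]) -> int:
    """Hypothetical CI gate: return 0 (success) only when every test
    in the suite passed; any failed or stopped test fails the build."""
    return 0 if results and all(r == "passed" for r in results) else 1


# Example: one failed test is enough to break the pipeline.
print(suite_exit_code(["passed", "passed", "passed"]))  # 0
print(suite_exit_code(["passed", "failed", "passed"]))  # 1
```

In a pipeline script, this return value would typically be passed to `sys.exit()` so the CI system marks the stage as failed.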
