AQA Flow

TL;DR

Use AQA Flow when you need Rosetta-guided automated UI test work tied to a real TestRail case or QA scenario. The workflow gathers TestRail and Confluence context, clarifies assertions, analyzes existing test architecture, identifies selectors without guessing, implements the test, then stops so you can run it and return the report.

This is a strict sequential workflow. Phases build on each other, agents/aqa-state.md is updated after each phase, and the coding agent must not skip ahead. Mandatory user interaction happens in Phase 2, Phase 6, Phase 7, and Phase 8. Phase 4 asks for page HTML only when frontend code or stable selectors are not available.
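The real schema of agents/aqa-state.md is defined by the workflow itself; as a rough illustration only, the state file the agent keeps updated tracks something like this (every value below is invented):

```markdown
# AQA State (illustrative sketch — the actual schema comes from the workflow)

- Test: aqa-checkout-confirmation
- TestRail case: C12345
- Current phase: 4 (Selector Identification)
- Completed: 1 Data Collection, 2 Requirements Clarification, 3 Code Analysis
- Pending user input: page HTML for the order summary panel
```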

When To Use This Workflow

When Not To Use This Workflow

Before You Start

Prepare the inputs this workflow explicitly depends on:

You also get better results when the project already has strong shared Rosetta context. Keep shared setup current per the Usage Guide, especially docs/CONTEXT.md, docs/ARCHITECTURE.md, and docs/TECHSTACK.md.

How To Start

Typical prompts:

Automate TestRail case C12345 for the checkout confirmation flow.
Create UI automation for the registration success scenario using TestRail case 5678 and Confluence page https://...
Analyze this failing automated test report for case C9012 and prepare corrections.
Extend the existing checkout automation with a new TestRail scenario and reuse current Page Objects.

How Rosetta Shapes This Workflow

Rosetta provides the instructions. The coding agent executes them. Rosetta itself does not read your source code or test data.

For this workflow, the always-active Rosetta behavior changes the user experience in these ways:

Workflow At A Glance

| Phase | What you provide | What the coding agent does | What you get | Mandatory workflow stop |
| --- | --- | --- | --- | --- |
| 1. Data Collection | TestRail case, Confluence reference | Reads external QA/business context and creates the test plan | agents/plans/aqa-&lt;test-name&gt;.md, initial agents/aqa-state.md | None |
| 2. Requirements Clarification | Answers about assertions, data, edge cases, scope | Turns vague steps into explicit, measurable assertions | Updated test plan with assertions, edge cases, test data rules | Mandatory user answers before Phase 3 |
| 3. Code Analysis | Repository test code, project docs, user instruction files | Analyzes framework, conventions, Page Objects, similar tests, helpers, optional frontend code | Updated test plan with architecture findings and target test location | None |
| 4. Selector Identification | Frontend code if available, otherwise page HTML when requested | Maps test steps to UI elements and identifies missing selectors without guessing | Selector map, page-source request if needed, updated plan/state | Mandatory user input only if selectors cannot be grounded from code |
| 5. Selector Implementation | Approved selector set | Adds selectors or Page Object methods using current project conventions | Updated Page Objects and test plan | None |
| 6. Test Implementation | Approved assertions and reusable test architecture | Implements the automated test and stops before execution analysis | Test file plus updated plan/state | Mandatory user execution before Phase 7 |
| 7. Test Report Analysis | Test report path, logs, or output | Reads report, classifies failures, analyzes root causes, inspects page source for selector errors | Failure analysis and recommended actions | Mandatory user handoff of report/output |
| 8. Test Corrections | Explicit approval for proposed fixes | Prepares fixes, waits for approval, applies approved changes, updates state | Corrected test/Page Objects and re-test guidance | Explicit approval required before changes |

Recommended review still matters throughout the workflow, but those checks are advisory checkpoints, not extra mandatory stops.
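The strict sequencing and mandatory stops in the table can be modeled as a tiny state machine. This is an illustrative sketch only, not part of the workflow's implementation; the phase names come from the table above, while the `PhaseGate` class and its API are hypothetical:

```python
# Illustrative model of AQA Flow's strict phase sequencing.
# Phases 2, 6, 7, and 8 gate progress on user input, matching the table.

PHASES = [
    "Data Collection",
    "Requirements Clarification",  # mandatory user answers
    "Code Analysis",
    "Selector Identification",     # user input only if selectors can't be grounded
    "Selector Implementation",
    "Test Implementation",         # user must run the test
    "Test Report Analysis",        # user hands off the report
    "Test Corrections",            # explicit user approval required
]
MANDATORY_STOPS = {1, 5, 6, 7}  # zero-based indices of user-gated phases

class PhaseGate:
    def __init__(self):
        self.current = 0

    def advance(self, user_input_received=False):
        """Move to the next phase; refuse to skip a mandatory stop."""
        if self.current in MANDATORY_STOPS and not user_input_received:
            raise RuntimeError(f"Blocked: {PHASES[self.current]} needs user input")
        self.current += 1
        return PHASES[self.current]

gate = PhaseGate()
gate.advance()  # Phase 1 -> Phase 2; Data Collection has no stop
```

The point of the sketch is the `advance` guard: the agent may never move past a user-gated phase on its own.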

Workflow Overview

```mermaid
flowchart TD
    A[Start AQA request]:::start --> B[Phase 1 Data Collection]:::phase
    B --> C[Phase 2 Requirements Clarification]:::phase
    C --> C1{User answered?}:::hitl
    C1 -- No --> C2[Wait for answers]:::action
    C2 --> C1
    C1 -- Yes --> D[Phase 3 Code Analysis]:::phase
    D --> E[Phase 4 Selector Identification]:::phase
    E --> E1{Selectors grounded from frontend code?}:::hitl
    E1 -- Yes --> F[Phase 5 Selector Implementation]:::phase
    E1 -- No --> E2[Request page HTML and wait]:::action
    E2 --> E3[Analyze provided page sources]:::phase
    E3 --> F
    F --> G[Phase 6 Test Implementation]:::phase
    G --> G1[User runs test]:::action
    G1 --> H[Phase 7 Test Report Analysis]:::phase
    H --> I[Phase 8 Test Corrections]:::phase
    I --> I1{User approved fixes?}:::hitl
    I1 -- No --> I2[Revise proposal or stop]:::action
    I2 --> I1
    I1 -- Yes --> J[Apply approved changes]:::phase
    J --> K{Re-run tests pass?}:::hitl
    K -- Yes --> L[Workflow complete]:::done
    K -- No --> H
```

Interaction Flow

```mermaid
sequenceDiagram
    autonumber
    participant U as User
    participant R as Rosetta Instructions
    participant A as Coding Agent
    participant X as External Systems
    participant F as Workspace Files

    U->>A: Request automated QA work
    R-->>A: Enforce sequential phases, no assumptions, state tracking
    A->>X: Read TestRail case and Confluence context
    A->>F: Create agents/plans/aqa-test-name.md and agents/aqa-state.md
    A->>U: Ask clarification questions for assertions, scope, data, edge cases
    U->>A: Provide answers
    A->>F: Update plan with explicit assertions
    A->>F: Analyze project_description.md, user instructions, tests, Page Objects, helpers
    alt Frontend selectors available
        A->>F: Record selectors from frontend code
    else Selectors missing
        A->>U: Request page HTML for specific elements
        U->>F: Add files under agents/aqa/TICKET-KEY/page-sources/
        A->>F: Read page source files and choose selectors
    end
    A->>F: Update Page Objects and implement test
    A->>U: Stop and ask user to run the test
    U->>A: Provide report path, logs, or output
    A->>F: Analyze failures and root causes
    A->>U: Present proposed corrections for approval
    U->>A: Explicitly approve or request changes
    A->>F: Apply approved corrections and update state
    A->>U: Return re-test guidance and final status
```

Phases

Phase 1: Data Collection

Goal:

What you provide:

What the coding agent does:

Artifacts:

Recommended review:

Phase 2: Requirements Clarification

Goal:

What you provide:

What the coding agent does:

Artifacts:

Recommended review:
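Per the summary table, Phase 2 turns vague steps such as "verify the order is confirmed" into explicit, measurable assertions. A hedged sketch of what "explicit" means in practice; the field names, banner text, and id format below are invented for illustration, not taken from any real project:

```python
# Vague: "verify the order is confirmed"
# Explicit: each observable outcome becomes its own measurable check.
import re

def check_order_confirmation(page_state):
    """Return a list of failed assertion descriptions (empty list = pass)."""
    failures = []
    if page_state.get("banner_text") != "Order confirmed":
        failures.append("confirmation banner text mismatch")
    if not re.fullmatch(r"ORD-\d{6}", page_state.get("order_id", "")):
        failures.append("order id does not match ORD-NNNNNN format")
    if page_state.get("cart_count") != 0:
        failures.append("cart was not emptied after checkout")
    return failures

confirmed = {"banner_text": "Order confirmed", "order_id": "ORD-123456", "cart_count": 0}
print(check_order_confirmation(confirmed))  # -> []
```

Each check is binary and reportable on its own, which is what makes the Phase 2 output usable in Phases 6 and 7.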

Phase 3: Code Analysis

Goal:

What you provide:

What the coding agent does:

Artifacts:

Recommended review:

Phase 4: Selector Identification

Goal:

What you provide:

What the coding agent does:

Artifacts:

Recommended review:
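The summary table describes Phase 4's output as a selector map that flags unknown elements instead of guessing. A minimal sketch of that idea; the step names and selector values are hypothetical:

```python
# Maps test steps to selectors; None marks elements that could not be
# grounded from frontend code and must be requested as page HTML.
selector_map = {
    "open checkout":        "[data-testid='checkout-button']",
    "fill email":           "#email",
    "submit order":         "[data-testid='submit-order']",
    "read confirmation id": None,  # not found in frontend code — do not guess
}

missing = [step for step, sel in selector_map.items() if sel is None]
if missing:
    print(f"Requesting page HTML for: {', '.join(missing)}")
```

The `None` entries are what trigger the Phase 4 mandatory stop: the agent asks for page sources rather than inventing a selector.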

Phase 5: Selector Implementation

Goal:

What you provide:

What the coding agent does:

Artifacts:

Recommended review:
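Phase 5 adds selectors or Page Object methods following the project's existing conventions. Assuming a Python Page Object style for illustration (the driver interface below is a stub, not a real automation framework, and the selectors are invented):

```python
class CheckoutPage:
    """Illustrative Page Object; selectors here are hypothetical examples."""
    SUBMIT_ORDER = "[data-testid='submit-order']"
    CONFIRMATION_BANNER = ".confirmation-banner"

    def __init__(self, driver):
        self.driver = driver  # any object exposing click() and text_of()

    def submit_order(self):
        self.driver.click(self.SUBMIT_ORDER)

    def confirmation_text(self):
        return self.driver.text_of(self.CONFIRMATION_BANNER)

class StubDriver:
    """Records interactions so the Page Object works without a browser."""
    def __init__(self):
        self.clicked = []
    def click(self, selector):
        self.clicked.append(selector)
    def text_of(self, selector):
        return "Order confirmed"

page = CheckoutPage(StubDriver())
page.submit_order()
```

Keeping selectors as named constants on the Page Object is what lets Phase 8 correct a broken selector in one place.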

Phase 6: Test Implementation

Goal:

What you provide:

What the coding agent does:

Artifacts:

Recommended review:

Phase 7: Test Report Analysis

Goal:

What you provide:

What the coding agent does:

Artifacts:

Recommended review:
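Phase 7 classifies failures before any fix is proposed. A rough sketch of that triage; the error strings are invented and real reports vary by framework, so treat this as a model of the classification step, not actual workflow code:

```python
def classify_failure(message):
    """Crude triage of a failure message into a likely root-cause bucket."""
    msg = message.lower()
    if "no such element" in msg or "selector" in msg:
        return "selector error"    # candidate for page-source inspection
    if "timeout" in msg or "timed out" in msg:
        return "timing/wait issue"
    if "assertionerror" in msg or "expected" in msg:
        return "assertion failure" # may be a real product bug
    return "unclassified"

print(classify_failure("TimeoutError: waiting for element timed out"))
# -> timing/wait issue
```

Separating selector errors from assertion failures matters because only the former justify re-inspecting page source in this phase.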

Phase 8: Test Corrections

Goal:

What you provide:

What the coding agent does:

Artifacts:

Recommended review:

How To Review Results

Review each handoff like a QA and test-automation lead, not like a passive approver. These are recommended review checkpoints, not additional mandatory workflow stops beyond the ones listed in the summary table.

If the page-source request, test plan, or correction proposal is vague, stop the workflow and ask for a more explicit version. This workflow is only reliable when the review loop is used seriously.

Workflow-Specific Customization

These customizations materially improve AQA Flow:

Artifacts You Will Get

Common artifacts from this workflow:

Common artifact content:

Conditional outputs:

Common Mistakes

Source Files

Authoritative source workflow and phases: