
Usage Guide

Who is this for? Engineers, leads, and architects using Rosetta in their daily work.

When should I read this? After Quick Start. When you want to understand what Rosetta offers and how to use each flow.

For terminology and mental model, see Overview. For setup, see Quick Start or Installation.


How Rosetta Works

Describe what you need in plain language. Rosetta handles the rest.

  1. Your AI coding agent loads Rosetta’s bootstrap rules automatically
  2. Rosetta classifies your request (coding, research, init, etc.)
  3. The matching workflow, skills, and guardrails load into context
  4. The agent executes with the right instructions, approval gates, and safety constraints

No special syntax. No commands to memorize. Progressive disclosure keeps context clean: only what the current task needs gets loaded.
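
Conceptually, the routing above can be pictured as a keyword dispatch that loads only the matched workflow's instructions. The sketch below is purely illustrative: the classifier, keyword lists, and instruction strings are invented for this example and are not Rosetta internals.

```python
# Illustrative sketch only: a toy classifier that routes a request to a
# workflow and loads just that workflow's instructions (progressive
# disclosure). All names and keywords here are hypothetical.
WORKFLOW_KEYWORDS = {
    "coding": ["add", "fix", "implement", "refactor"],
    "research": ["research", "investigate", "compare"],
    "init": ["initialize", "init"],
}

INSTRUCTIONS = {
    "coding": "phases: discovery, tech plan, review plan, implementation...",
    "research": "phases: context load, prompt craft, execute, finalize...",
    "init": "phases: context, shells, discovery, rules, patterns...",
}

def classify(request: str) -> str:
    words = request.lower().split()
    for workflow, keywords in WORKFLOW_KEYWORDS.items():
        if any(k in words for k in keywords):
            return workflow
    return "ad-hoc"  # fallback: the adaptive meta-workflow

def load_context(request: str) -> dict:
    workflow = classify(request)
    # Only the matched workflow's instructions enter the context window;
    # everything else stays out until a task actually needs it.
    return {"workflow": workflow,
            "instructions": INSTRUCTIONS.get(workflow, "")}
```

For example, `load_context("Fix the race condition in payment processing")` would route to the coding workflow and load only its instructions.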

Workflows

Rosetta classifies your request and loads the matching workflow. Each workflow defines phases, produces traceable artifacts, and enforces approval gates (⏸) where decisions matter.

Init Workspace

Sets up a new or existing repository for AI-assisted development. Handles fresh repos, upgrades, and plugin mode.

Phases:

  1. Context — detect workspace mode (fresh, upgrade, plugin) and build file inventory
  2. Shells — generate IDE/agent shell files from KB schemas
  3. Discovery — produce TECHSTACK.md, CODEMAP.md, DEPENDENCIES.md
  4. Rules — configure local agent rules (optional, when explicit all-local rules requested)
  5. Patterns — extract recurring coding and architectural patterns
  6. Documentation — create CONTEXT.md, ARCHITECTURE.md, IMPLEMENTATION.md, ASSUMPTIONS.md
  7. ⏸ Questions — clarifying questions about gaps and assumptions
  8. Verification — completeness check and catch-up for missed artifacts

Examples:

"Initialize this repository using Rosetta"
"Initialize subagents and workflows"

For composite workspaces, init each repository separately, then init at workspace level.
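
Assuming default locations, a freshly initialized repository might end up with a layout like the following (exact paths can differ by setup; this tree is illustrative, built from the artifact names above):

```
repo/
├── TECHSTACK.md        # detected languages, frameworks, tooling
├── CODEMAP.md          # map of the codebase
├── DEPENDENCIES.md     # external and internal dependencies
├── CONTEXT.md          # what the project is and what matters
├── ARCHITECTURE.md     # how the system is structured
├── IMPLEMENTATION.md   # implementation conventions
└── ASSUMPTIONS.md      # recorded assumptions and open questions
```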

Self Help

Answers questions about Rosetta itself. If you decide to act, it hands off to the real workflow without leaving the session.

Phases:

  1. List capabilities — catalog all workflows, skills, and agents from the KB
  2. Match and acquire — find capabilities matching your question, load their descriptions
  3. ⏸ Guide — explain matched capabilities and offer to launch the real workflow
  4. Handoff — transfer to the matching workflow if you accept (optional)

Examples:

"What workflows are available?"
"How do I use the research flow?"
"What can Rosetta help me with?"

Coding

The main development workflow. Scales with task size: phases marked (M,L) run only for medium and large tasks, so small tasks skip them; phases marked (all) always run.

Phases:

  1. Discovery — gather context, codebase, dependencies, affected areas (M,L)
  2. Tech plan — architect defines specs, contracts, interfaces, and execution plan (all)
  3. Review plan — reviewer inspects specs and plan against intent (M,L)
  4. ⏸ User review plan — you approve the plan before implementation (all)
  5. Implementation — engineer executes the approved plan (all)
  6. Review code — reviewer inspects implementation against specs (all)
  7. Impl validation — validator runs actual checks against specs (M,L)
  8. ⏸ User review impl — you review the implementation (all)
  9. Tests — engineer writes and runs tests, 80%+ coverage (all)
  10. Review tests — reviewer inspects test coverage and quality (M,L)
  11. Final validation — end-to-end verification (M,L)

Examples:

"Add password reset functionality"
"Fix the race condition in payment processing"
"Implement the notification service"

Requirements Authoring

Produces structured, testable, approved requirements. Saves to docs/REQUIREMENTS/.

Phases:

  1. Discovery — collect project and scope signals
  2. Research — gather standards, prior decisions, and domain context
  3. ⏸ Intent capture — capture what you actually need, surface assumptions
  4. Outline — propose MECE (mutually exclusive, collectively exhaustive) requirement layout
  5. ⏸ Draft — author atomic requirement units with per-requirement approval
  6. Validate — check correctness, conflicts, gaps, and contradictions
  7. ⏸ Deliver — finalize requirement artifacts with traceability matrix

Examples:

"Define requirements for the checkout flow covering discount codes, tax, and retries"
"Write requirements for the user onboarding experience"

Research

Deep, project-grounded investigation using meta-prompting. Every claim backed by evidence.

Phases:

  1. Context load — gather research scope from project context
  2. ⏸ Prompt craft — architect builds an optimized research prompt for your approval
  3. Execute research — dedicated subagent runs the investigation
  4. Finalize — deliver documented analysis with grounded references

Examples:

"Research best practices for microservices authentication"
"Investigate OAuth 2.0 implementation options for our stack"
"Compare event sourcing vs CRUD for our order service"

Ad-hoc

Adaptive meta-workflow for tasks that do not fit a fixed structure. Constructs a custom execution plan from building blocks and adapts mid-execution. Good for cross-cutting work, experiments, or anything that spans multiple concerns.

Building blocks: discover, reason, plan, execute, review, validate.

Phases:

  1. Analyze — classify request and select building blocks
  2. Build plan — compose execution plan from selected blocks
  3. ⏸ Review plan — plan reviewer validates approach (medium, large tasks)
  4. Execute plan — run steps with plan manager tracking (loops until done)
  5. Review and summarize — final review and delivery

Examples:

"Ad-hoc: write a quick script to parse these CSV files"
"Refactor the logging across three services"

Code Analysis PRO

Systematic understanding of existing codebases. Distinguishes small and large analysis targets.

  • Small: component identification, pattern analysis, logic flow, sequence diagrams, dependency mapping
  • Large: per-module deep analysis (business logic, architecture, design patterns, data, quality), then cross-module summary with business process mapping

Examples:

"Explain how the authentication system works"
"What is the architecture of the payment module?"

Automated QA PRO

Test automation workflow with approval gate before implementation.

Phases:

  1. Requirements analysis and existing code review
  2. Test framework identification (Jest, Playwright, JUnit, etc.)
  3. Test scenario generation (20-50 scenarios)
  4. Detailed test case specification (Given-When-Then)
  5. ⏸ Approve test cases before implementation
  6. Implementation and execution with coverage reporting

Examples:

"Write tests for the user registration feature"
"Create QA automation for the checkout flow"

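
The Given-When-Then specifications in phase 4 follow the standard Gherkin shape. A test case for the registration example might read like this (feature, scenario, and wording are hypothetical, invented for illustration):

```
Feature: User registration

  Scenario: Reject registration with an already-used email
    Given a user account already exists for "alice@example.com"
    When a visitor submits the registration form with "alice@example.com"
    Then the form is rejected with an "email already in use" error
    And no new account is created
```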
Test Case Generation PRO

Generates test cases from Jira tickets and Confluence documentation.

Phases:

  1. Data collection (Jira, Confluence via MCP)
  2. Gap, contradiction, and ambiguity analysis
  3. ⏸ Clarification questions
  4. Structured requirements document
  5. Test scenario generation
  6. TestRail export

Examples:

"Generate test cases for PROJ-123"
"Create test scenarios from EPIC-789 and export to TestRail"

Modernization PRO

Large-scale code conversions, upgrades, and re-architecture.

Phases:

  1. Reuse analysis
  2. Old code analysis
  3. Testing (optional)
  4. Grouping
  5. Cross-project analysis
  6. Implementation mapping
  7. Review
  8. Implementation

Pattern detection drives consistency across the transformation.

Examples:

"Migrate from Java 8 to Java 21"
"Re-architect monolith to microservices"

External Library PRO

Onboards private or external libraries for AI understanding. Uses Repomix for codebase analysis, generates compressed documentation, publishes to the knowledge base, and extracts usage patterns.

Examples:

"Teach AI about our internal authentication library"
"Document the shared utilities package"

Coding Agents Prompting PRO

Specialized workflow for authoring and adapting prompts for AI coding agents. Built for teams that create or maintain instruction sets for AI tools.

Phases:

  1. Discover — gather context about the target coding agent and its environment
  2. Extract intake — capture intent and requirements from the user
  3. Blueprint — design the prompt architecture and structure
  4. ⏸ Author — draft each prompt with iterative user approval (loop per prompt)
  5. ⏸ Harden — review and harden each prompt for edge cases (loop per prompt)
  6. Simulate — trace prompt execution to verify behavior
  7. Validate — quality validation of the complete prompt set

Always Active

Every request benefits from these regardless of workflow.

Customization

Custom overrides work in all installation modes. You do not need to modify any Rosetta files.

Project Context Files

The single most effective way to improve AI output. These files tell the AI what your project is, how it works, and what matters. Run initialization to generate them, then customize.

The more your team invests in these three files, the fewer follow-up questions Rosetta asks and the better the output gets. See Installation — Workspace Files Created for the full list of files Rosetta manages.

Custom Rules

Add project-specific rules alongside Rosetta without touching its files.

| IDE / Agent | Core rules file | Additional rules |
| --- | --- | --- |
| Cursor | .cursor/rules/agents.mdc | .cursor/rules/*.mdc |
| Claude Code | CLAUDE.md | .claude/rules/*.md |
| GitHub Copilot | .github/copilot-instructions.md | |
| Windsurf | .windsurf/rules/*.md | All .md files auto-load |
| JetBrains (Junie + AI Assistant) | .aiassistant/rules/agents.md | .junie/guidelines.md |
| Antigravity / Google IDX | .agent/rules/agents.md | .agent/rules/*.md |
| OpenCode | AGENTS.md | .opencode/agent/*.md |
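
For example, a Cursor team could drop an additional rule file next to Rosetta's own. The file name and contents below are invented for illustration, and the frontmatter fields follow Cursor's rule-file format; check your IDE's documentation for the exact schema it expects:

```
---
description: Team style conventions
alwaysApply: true
---
- All API handlers live under src/api/ and return typed responses.
- Never log request bodies; they may contain PII.
- Prefer the shared HttpError class over ad-hoc error objects.
```

Because this file sits in the additional-rules location (.cursor/rules/*.mdc), it loads alongside Rosetta's core rules without modifying them.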

MCPs

MCPs (Model Context Protocol servers) give the AI eyes and hands beyond the codebase.

Bold entries are strongly recommended. The rest depend on your project needs.

Skills

Reusable units of work that workflows and subagents invoke. Each skill focuses on one type of task.

| Skill | What it does |
| --- | --- |
| Coding | Implementation with KISS/SOLID/DRY principles, multi-environment awareness, systematic validation |
| Testing | Thorough, isolated, idempotent tests with 80% minimum coverage and scenario-driven testing |
| Tech Specs | Clear, testable specifications defining target state architecture, contracts, and interfaces |
| Planning | Execution-ready plans from approved specs using sequenced WBS and HITL checkpoints |
| Reasoning | Structured meta-cognitive reasoning using canonical 7D for complex problems |
| Questioning | Targeted clarification questions when high-impact unknowns block safe execution |
| Debugging | Root cause investigation before attempting fixes for errors, test failures, unexpected behavior |
| Load Context | Fast, automated loading of current project context for planning and understanding user intent |
| Reverse Engineering | Extract what a system does and why from source files, stripped of implementation details |
| Requirements Authoring | Atomic requirement units with EARS format, explicit user approval, and traceability |
| Requirements Use | Consume approved requirements to drive planning, implementation, and validation |
| Coding Agents Prompt Adaptation | Adapt prompts from one coding agent/IDE to another while preserving intent and strategy |
| Large Workspace Handling | Partition large workspaces (50+ files) into scoped subagent tasks |
| Init Workspace Context | Classify initialization mode and build existing file inventory |
| Init Workspace Discovery | Produce TECHSTACK.md, CODEMAP.md, DEPENDENCIES.md from workspace analysis |
| Init Workspace Documentation | Create CONTEXT.md, ARCHITECTURE.md, IMPLEMENTATION.md, ASSUMPTIONS.md, MEMORY.md |
| Init Workspace Patterns | Extract recurring coding and architectural patterns into reusable templates |
| Init Workspace Rules | Create local cached agent rules configured for IDE/OS/project context |
| Init Workspace Shells | Generate IDE/CodingAgent shell files from KB schemas |
| Init Workspace Verification | Verify initialization completeness and run catch-up for missed artifacts |
| Backward Compatibility PRO | Ensure changes preserve backward compatibility |
| Code Review PRO | Structured code review against standards and intent |
| Context Engineering PRO | Advanced context construction and optimization |
| Data Generation PRO | Generate test data and synthetic datasets |
| Design PRO | System and API design patterns |
| Discovery PRO | Deep codebase and domain discovery |
| Documentation PRO | Technical documentation authoring |
| Git PRO | Git operations and workflow management |
| Large File Handling PRO | Process files too large for single-pass context |
| Plan Review PRO | Review execution plans for completeness and risk |
| Prompt Diagnosis PRO | Diagnose and fix underperforming prompts |
| Research PRO | Systematic deep research using meta-prompting with grounded references and self-validation |
| Scenarios Generation PRO | Generate test scenarios from requirements |
| Security PRO | Security analysis and vulnerability assessment |
| Simulation PRO | Simulate prompt execution for validation |
| Technical Summarization PRO | Concise technical summaries of complex content |
| Template Execution PRO | Execute parameterized prompt templates |
| Coding Agents Prompt Authoring PRO | Author, update, and validate prompts for AI coding agents with analytics artifacts |
| Coding Agents Farm PRO | Orchestrate multiple coding agents in parallel on isolated git worktrees |
| Natural Writing PRO | Clear, human-sounding text without AI cliches or marketing hype |

Agents

Workflows delegate phases to specialized subagents. Each has a focused role, its own context window, and access to relevant skills. The orchestrator coordinates sequence, state, and approvals.

| Agent | Role |
| --- | --- |
| Discoverer | Lightweight. Gathers context from codebase and external sources before any work begins |
| Executor | Lightweight. Runs simple commands and summarizes results to prevent context overflow |
| Planner | Produces sequenced execution plans scaled to request size with quality gates |
| Architect | Transforms requirements into technical specifications and architecture decisions |
| Engineer | Executes implementation and testing tasks |
| Reviewer | Inspects artifacts against intent and contracts, provides recommendations |
| Validator | Verifies implementation through actual execution and evidence-based validation |
| Analyst PRO | Business and technical requirements analysis |
| Orchestrator PRO | Manages a team of subagents, owns delegation quality end-to-end |
| Researcher PRO | Deep research with grounded references and systematic exploration |
| Prompt Engineer PRO | Authors and adapts prompt artifacts under explicit HITL approvals |

In Practice

Feature Development

You: "Add password reset functionality"

What happens:
1. Rosetta loads the coding workflow
2. Agent reads CONTEXT.md and ARCHITECTURE.md
3. Agent discovers existing auth code and email service
4. Creates tech spec in plans/PASSWORD-RESET/
5. Creates implementation plan
6. ⏸ Waits for your approval
7. Implements the feature
8. Separate reviewer inspects the code
9. Writes tests (80%+ coverage)
10. Validator verifies against specs

Requirements Before Building

You: "Define requirements for the checkout flow"

What happens:
1. Rosetta loads the requirements workflow
2. Agent researches your codebase and asks clarifying questions
3. Drafts atomic requirements in EARS format
4. ⏸ You approve each requirement individually
5. Validates for conflicts, gaps, and contradictions
6. Delivers to docs/REQUIREMENTS/ with traceability matrix
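
EARS (Easy Approach to Requirements Syntax) constrains each requirement to a small set of sentence templates, such as "When <trigger>, the <system> shall <response>". Two requirements for the checkout example might look like this (the IDs, timing values, and wording are hypothetical, shown only to illustrate the format):

```
REQ-CHK-001 (event-driven):
  When the customer applies a discount code, the checkout service shall
  validate the code against the active promotions list within 500 ms.

REQ-CHK-002 (unwanted behavior):
  If tax calculation fails, then the checkout service shall block order
  submission and display a retry option.
```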

Project Initialization

You: "Initialize this repository using Rosetta"

What happens:
1. Agent scans your tech stack, dependencies, and project structure
2. Generates TECHSTACK.md, CODEMAP.md, DEPENDENCIES.md
3. Creates CONTEXT.md and ARCHITECTURE.md
4. ⏸ Asks clarifying questions about your project
5. Verifies all generated docs

Research

You: "Investigate OAuth 2.0 options for our stack"

What happens:
1. Rosetta loads the research workflow
2. Agent reads your project context
3. Crafts an optimized research prompt
4. ⏸ You approve the research direction
5. Dedicated subagent runs the investigation
6. Delivers documented analysis with grounded references

How Rosetta Protects You

These rules are always active. They cannot be turned off.

| Rule | What it means |
| --- | --- |
| Approval before action | Produces a plan and waits for your explicit approval before making changes |
| No data deletion | Never deletes data from servers or generates scripts that do so |
| Sensitive data protection | Personal, financial, and regulated data is masked and never shared or logged |
| Bounded scope | Tasks kept to a manageable size (up to 2 hours of work, 15 files, spec files under 350 lines) |
| Tracks assumptions | When something is unclear, asks rather than guesses |
| Risk assessment | Checks for access to dangerous tools (databases, cloud, S3) and assigns a risk level. High risk requires confirmation. Critical risk blocks execution |
| SDLC only | All requests must be development-related. No personal or private chats |
| Context monitoring | Warns at 65% context usage and escalates at 75% to prevent degraded output |
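
The risk assessment rule behaves like a small gate. The sketch below is an illustrative reconstruction, not Rosetta's actual logic: the tool categories and scoring are assumptions, while the gate behavior (high risk requires confirmation, critical risk blocks execution) matches the rule as documented above.

```python
# Illustrative risk gate: access to dangerous tools raises the risk
# level; high risk needs user confirmation, critical risk blocks.
# Tool names and scores are hypothetical, not Rosetta internals.
DANGEROUS_TOOLS = {"database": 2, "cloud": 2, "s3": 2, "shell": 1}

def risk_level(tools: set) -> str:
    score = sum(DANGEROUS_TOOLS.get(t, 0) for t in tools)
    if score >= 4:
        return "critical"
    if score >= 2:
        return "high"
    return "low"

def gate(tools: set, user_confirmed: bool = False) -> str:
    level = risk_level(tools)
    if level == "critical":
        return "blocked"                # critical risk blocks execution
    if level == "high" and not user_confirmed:
        return "awaiting-confirmation"  # high risk requires confirmation
    return "proceed"
```

So a task touching only the editor proceeds, a task with database access pauses for confirmation, and a task combining database and S3 access is blocked outright in this sketch.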

Plugins

Rosetta is distributed as plugins for Claude Code and Cursor.

See Installation — Plugin-Based Installation for install commands.

Best Practices

Video Tutorials

Setup:

Configuration:

Workflows:

These videos were recorded in different IDEs to show that Rosetta works everywhere.

Getting Help


PRO: In development or available with the enterprise edition.