Usage Guide
Who is this for? Engineers, leads, and architects using Rosetta in their daily work.
When should I read this? After Quick Start. When you want to understand what Rosetta offers and how to use each flow.
For terminology and mental model, see Overview. For setup, see Quick Start or Installation.
How Rosetta Works
Describe what you need in plain language. Rosetta handles the rest.
- Your AI coding agent loads Rosetta’s bootstrap rules automatically
- Rosetta classifies your request (coding, research, init, etc.)
- The matching workflow, skills, and guardrails load into context
- The agent executes with the right instructions, approval gates, and safety constraints
No special syntax. No commands to memorize. Progressive disclosure keeps context clean: only what the current task needs gets loaded.
Workflows
Rosetta classifies your request and loads the matching workflow. Each workflow defines phases, produces traceable artifacts, and enforces approval gates (⏸) where decisions matter.
Init Workspace
Sets up a new or existing repository for AI-assisted development. Handles fresh repos, upgrades, and plugin mode.
Phases:
- Context — detect workspace mode (fresh, upgrade, plugin) and build file inventory
- Shells — generate IDE/agent shell files from KB schemas
- Discovery — produce TECHSTACK.md, CODEMAP.md, DEPENDENCIES.md
- Rules — configure local agent rules (optional, when explicit all-local rules requested)
- Patterns — extract recurring coding and architectural patterns
- Documentation — create CONTEXT.md, ARCHITECTURE.md, IMPLEMENTATION.md, ASSUMPTIONS.md
- ⏸ Questions — clarifying questions about gaps and assumptions
- Verification — completeness check and catch-up for missed artifacts
"Initialize this repository using Rosetta"
"Initialize subagents and workflows"
For composite workspaces, init each repository separately, then init at workspace level.
Self Help
Answers questions about Rosetta itself. If you decide to act, it hands off to the real workflow without leaving the session.
Phases:
- List capabilities — catalog all workflows, skills, and agents from the KB
- Match and acquire — find capabilities matching your question, load their descriptions
- ⏸ Guide — explain matched capabilities and offer to launch the real workflow
- Handoff — transfer to the matching workflow if you accept (optional)
"What workflows are available?"
"How do I use the research flow?"
"What can Rosetta help me with?"
Coding
The main development workflow. Scales with task size: small tasks skip phases marked (M,L).
Phases:
- Discovery — gather context, codebase, dependencies, affected areas (M,L)
- Tech plan — architect defines specs, contracts, interfaces, and execution plan (all)
- Review plan — reviewer inspects specs and plan against intent (M,L)
- ⏸ User review plan — you approve the plan before implementation (all)
- Implementation — engineer executes the approved plan (all)
- Review code — reviewer inspects implementation against specs (all)
- Impl validation — validator runs actual checks against specs (M,L)
- ⏸ User review impl — you review the implementation (all)
- Tests — engineer writes and runs tests, 80%+ coverage (all)
- Review tests — reviewer inspects test coverage and quality (M,L)
- Final validation — end-to-end verification (M,L)
"Add password reset functionality"
"Fix the race condition in payment processing"
"Implement the notification service"
Requirements Authoring
Produces structured, testable, approved requirements. Saves to docs/REQUIREMENTS/.
Phases:
- Discovery — collect project and scope signals
- Research — gather standards, prior decisions, and domain context
- ⏸ Intent capture — capture what you actually need, surface assumptions
- Outline — propose MECE (mutually exclusive, collectively exhaustive) requirement layout
- ⏸ Draft — author atomic requirement units with per-requirement approval
- Validate — check correctness, conflicts, gaps, and contradictions
- ⏸ Deliver — finalize requirement artifacts with traceability matrix
"Define requirements for the checkout flow covering discount codes, tax, and retries"
"Write requirements for the user onboarding experience"
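The requirement units this workflow drafts follow the EARS format (see the Requirements Authoring skill below). As a rough illustration of what an approved unit can look like — the IDs and wording here are hypothetical examples, not actual Rosetta output:

```text
REQ-CHK-001 (event-driven)
  When a customer applies a discount code at checkout, the system shall
  validate the code against active promotions before recalculating the total.

REQ-CHK-002 (unwanted behavior)
  If payment authorization fails, then the system shall retry up to 3 times
  with exponential backoff before surfacing an error to the customer.
```

Each unit is atomic (one testable behavior) and is approved individually during the Draft phase.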
Research
Deep, project-grounded investigation using meta-prompting. Every claim backed by evidence.
Phases:
- Context load — gather research scope from project context
- ⏸ Prompt craft — architect builds an optimized research prompt for your approval
- Execute research — dedicated subagent runs the investigation
- Finalize — deliver documented analysis with grounded references
"Research best practices for microservices authentication"
"Investigate OAuth 2.0 implementation options for our stack"
"Compare event sourcing vs CRUD for our order service"
Ad-hoc
Adaptive meta-workflow for tasks that do not fit a fixed structure. Constructs a custom execution plan from building blocks and adapts mid-execution. Good for cross-cutting work, experiments, or anything that spans multiple concerns.
Building blocks: discover, reason, plan, execute, review, validate.
Phases:
- Analyze — classify request and select building blocks
- Build plan — compose execution plan from selected blocks
- ⏸ Review plan — plan reviewer validates approach (medium, large tasks)
- Execute plan — run steps with plan manager tracking (loops until done)
- Review and summarize — final review and delivery
"Ad-hoc: write a quick script to parse these CSV files"
"Refactor the logging across three services"
Code Analysis PRO
Systematic understanding of existing codebases. Distinguishes small and large analysis targets.
- Small: component identification, pattern analysis, logic flow, sequence diagrams, dependency mapping
- Large: per-module deep analysis (business logic, architecture, design patterns, data, quality), then cross-module summary with business process mapping
"Explain how the authentication system works"
"What is the architecture of the payment module?"
Automated QA PRO
Test automation workflow with approval gate before implementation.
Phases:
- Requirements analysis and existing code review
- Test framework identification (Jest, Playwright, JUnit, etc.)
- Test scenario generation (20-50 scenarios)
- Detailed test case specification (Given-When-Then)
- ⏸ Approve test cases before implementation
- Implementation and execution with coverage reporting
"Write tests for the user registration feature"
"Create QA automation for the checkout flow"
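The Given-When-Then specifications from the fourth phase are plain behavioral statements. A hypothetical case for the registration prompt above (the feature name and steps are illustrative, not generated output):

```gherkin
Feature: User registration

  Scenario: Rejects a duplicate email address
    Given a user already exists with email "alice@example.com"
    When a visitor submits the registration form with that email
    Then the registration is rejected
    And the response explains that the email is already in use
```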
Test Case Generation PRO
Generates test cases from Jira tickets and Confluence documentation.
Phases:
- Data collection (Jira, Confluence via MCP)
- Gap, contradiction, and ambiguity analysis
- ⏸ Clarification questions
- Structured requirements document
- Test scenario generation
- TestRail export
"Generate test cases for PROJ-123"
"Create test scenarios from EPIC-789 and export to TestRail"
Modernization PRO
Large-scale code conversions, upgrades, and re-architecture.
Phases:
- Reuse analysis
- Old code analysis
- Testing (optional)
- Grouping
- Cross-project analysis
- Implementation mapping
- Review
- Implementation
Pattern detection drives consistency across the transformation.
"Migrate from Java 8 to Java 21"
"Re-architect monolith to microservices"
External Library PRO
Onboards private or external libraries for AI understanding. Uses Repomix for codebase analysis, generates compressed documentation, publishes to the knowledge base, and extracts usage patterns.
"Teach AI about our internal authentication library"
"Document the shared utilities package"
Coding Agents Prompting PRO
Specialized workflow for authoring and adapting prompts for AI coding agents. Built for teams that create or maintain instruction sets for AI tools.
Phases:
- Discover — gather context about the target coding agent and its environment
- Extract intake — capture intent and requirements from the user
- Blueprint — design the prompt architecture and structure
- ⏸ Author — draft each prompt with iterative user approval (loop per prompt)
- ⏸ Harden — review and harden each prompt for edge cases (loop per prompt)
- Simulate — trace prompt execution to verify behavior
- Validate — quality validation of the complete prompt set
Always Active
Every request benefits from these regardless of workflow.
- Execution policies enforce plan-driven work, incremental validation, and memory-based self-learning. The agent consults agents/MEMORY.md during planning and records lessons learned. See Architecture — Workspace Files for the full file list.
- HITL and questioning rules govern how the agent interacts with you. Questions are batched (5-10 per round), prioritized by impact, each targeting a single decision. If something is unclear, Rosetta stops and asks.
- Subagent orchestration defines how work gets delegated. Subagents start with fresh context, receive explicit scope boundaries, and return concise results. Independent work runs in parallel.
Customization
Custom overrides work in all installation modes. You do not need to modify any Rosetta files.
Project Context Files
The single most effective way to improve AI output. These files tell the AI what your project is, how it works, and what matters. Run initialization to generate them, then customize.
- docs/CONTEXT.md (the why) — purpose, business context, design principles, key workflows, constraints
- docs/ARCHITECTURE.md (the how) — system structure, component relationships, data flow, deployment
- docs/TECHSTACK.md (the what) — technologies, frameworks, tools, and reasoning behind each choice
The more your team invests in these three files, the fewer follow-up questions Rosetta asks and the better the output gets. See Installation — Workspace Files Created for the full list of files Rosetta manages.
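If you are filling these in by hand rather than through initialization, a minimal docs/CONTEXT.md might look like the sketch below. The section names and project details are illustrative suggestions, not a schema Rosetta enforces:

```markdown
# Project Context

## Purpose
Internal billing service that reconciles invoices against payment-provider events.

## Business Context
The finance team is the primary consumer; accuracy matters more than latency.

## Design Principles
- Idempotent event handling
- No direct writes to the ledger outside the reconciliation job

## Key Workflows
Invoice import -> event matching -> discrepancy report

## Constraints
PCI scope: card data never enters this service.
```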
Custom Rules
Add project-specific rules alongside Rosetta without touching its files.
| IDE / Agent | Core rules file | Additional rules |
|---|---|---|
| Cursor | .cursor/rules/agents.mdc | .cursor/rules/*.mdc |
| Claude Code | CLAUDE.md | .claude/rules/*.md |
| GitHub Copilot | .github/copilot-instructions.md | |
| Windsurf | .windsurf/rules/*.md | All .md files auto-load |
| JetBrains (Junie + AI Assistant) | .aiassistant/rules/agents.md | .junie/guidelines.md |
| Antigravity / Google IDX | .agent/rules/agents.md | .agent/rules/*.md |
| OpenCode | AGENTS.md | .opencode/agent/*.md |
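For example, a team-specific rule for Cursor can live next to Rosetta's file as .cursor/rules/team-conventions.mdc. The filename and rule content below are illustrative; the frontmatter fields follow Cursor's rule-file format:

```markdown
---
description: Team conventions for service code
globs: ["services/**/*.ts"]
alwaysApply: false
---

- All public functions must have JSDoc comments.
- Use the shared logger from packages/logging; never console.log.
- New endpoints require an entry in docs/API.md.
```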
Recommended MCP Servers
MCPs give the AI eyes and hands beyond the codebase.
- Context7 — up-to-date library documentation
- Playwright MCP — interact with web pages through structured accessibility snapshots
- Chrome DevTools — full browser control with console, network tab, snapshots
- GitNexus — indexes any codebase into a knowledge graph
- Figma MCP — Figma integration so AI can see designs directly
- Jira & Confluence MCP — tickets, comments, and documentation
- Fetch — retrieve and process content from APIs and web pages
- Repomix MCP — documentation for AI to use existing client libraries
- DeepWiki — up-to-date documentation
- Database MCPs — read schema, read data
Bold entries are strongly recommended. The rest depend on your project needs.
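Most of these servers are wired up the same way: a command-based entry in your client's MCP configuration. As a sketch for Claude Code's project-level .mcp.json — package names and arguments below are what these servers currently publish, but check each server's own docs for the exact invocation:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```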
Skills
Reusable units of work that workflows and subagents invoke. Each skill focuses on one type of task.
| Skill | What it does |
|---|---|
| Coding | Implementation with KISS/SOLID/DRY principles, multi-environment awareness, systematic validation |
| Testing | Thorough, isolated, idempotent tests with 80% minimum coverage and scenario-driven testing |
| Tech Specs | Clear, testable specifications defining target state architecture, contracts, and interfaces |
| Planning | Execution-ready plans from approved specs using sequenced WBS and HITL checkpoints |
| Reasoning | Structured meta-cognitive reasoning using canonical 7D for complex problems |
| Questioning | Targeted clarification questions when high-impact unknowns block safe execution |
| Debugging | Root cause investigation before attempting fixes for errors, test failures, unexpected behavior |
| Load Context | Fast, automated loading of current project context for planning and understanding user intent |
| Reverse Engineering | Extract what a system does and why from source files, stripped of implementation details |
| Requirements Authoring | Atomic requirement units with EARS format, explicit user approval, and traceability |
| Requirements Use | Consume approved requirements to drive planning, implementation, and validation |
| Coding Agents Prompt Adaptation | Adapt prompts from one coding agent/IDE to another while preserving intent and strategy |
| Large Workspace Handling | Partition large workspaces (50+ files) into scoped subagent tasks |
| Init Workspace Context | Classify initialization mode and build existing file inventory |
| Init Workspace Discovery | Produce TECHSTACK.md, CODEMAP.md, DEPENDENCIES.md from workspace analysis |
| Init Workspace Documentation | Create CONTEXT.md, ARCHITECTURE.md, IMPLEMENTATION.md, ASSUMPTIONS.md, MEMORY.md |
| Init Workspace Patterns | Extract recurring coding and architectural patterns into reusable templates |
| Init Workspace Rules | Create local cached agent rules configured for IDE/OS/project context |
| Init Workspace Shells | Generate IDE/CodingAgent shell files from KB schemas |
| Init Workspace Verification | Verify initialization completeness and run catch-up for missed artifacts |
| Backward Compatibility PRO | Ensure changes preserve backward compatibility |
| Code Review PRO | Structured code review against standards and intent |
| Context Engineering PRO | Advanced context construction and optimization |
| Data Generation PRO | Generate test data and synthetic datasets |
| Design PRO | System and API design patterns |
| Discovery PRO | Deep codebase and domain discovery |
| Documentation PRO | Technical documentation authoring |
| Git PRO | Git operations and workflow management |
| Large File Handling PRO | Process files too large for single-pass context |
| Plan Review PRO | Review execution plans for completeness and risk |
| Prompt Diagnosis PRO | Diagnose and fix underperforming prompts |
| Research PRO | Systematic deep research using meta-prompting with grounded references and self-validation |
| Scenarios Generation PRO | Generate test scenarios from requirements |
| Security PRO | Security analysis and vulnerability assessment |
| Simulation PRO | Simulate prompt execution for validation |
| Technical Summarization PRO | Concise technical summaries of complex content |
| Template Execution PRO | Execute parameterized prompt templates |
| Coding Agents Prompt Authoring PRO | Author, update, and validate prompts for AI coding agents with analytics artifacts |
| Coding Agents Farm PRO | Orchestrate multiple coding agents in parallel on isolated git worktrees |
| Natural Writing PRO | Clear, human-sounding text without AI cliches or marketing hype |
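The Coding Agents Farm skill above relies on isolated git worktrees for parallelism. The isolation itself is standard git behavior, sketched here with a throwaway repository (branch and directory names are illustrative):

```shell
set -e
# Sketch: create a demo repo, then give two "agents" isolated worktrees.
git init -q farm-demo && cd farm-demo
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"
# Each agent works on its own branch in its own directory,
# sharing the same object store as the main checkout.
git worktree add -q ../agent-1 -b agent-1/feature-a
git worktree add -q ../agent-2 -b agent-2/feature-b
git worktree list
```

Changes made in one worktree never touch the files another agent is editing, while commits land in the same repository.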
Agents
Workflows delegate phases to specialized subagents. Each has a focused role, its own context window, and access to relevant skills. The orchestrator coordinates sequence, state, and approvals.
| Agent | Role |
|---|---|
| Discoverer | Lightweight. Gathers context from codebase and external sources before any work begins |
| Executor | Lightweight. Runs simple commands and summarizes results to prevent context overflow |
| Planner | Produces sequenced execution plans scaled to request size with quality gates |
| Architect | Transforms requirements into technical specifications and architecture decisions |
| Engineer | Executes implementation and testing tasks |
| Reviewer | Inspects artifacts against intent and contracts, provides recommendations |
| Validator | Verifies implementation through actual execution and evidence-based validation |
| Analyst PRO | Business and technical requirements analysis |
| Orchestrator PRO | Manages a team of subagents, owns delegation quality end-to-end |
| Researcher PRO | Deep research with grounded references and systematic exploration |
| Prompt Engineer PRO | Authors and adapts prompt artifacts under explicit HITL approvals |
In Practice
Feature Development
You: "Add password reset functionality"
What happens:
1. Rosetta loads the coding workflow
2. Agent reads CONTEXT.md and ARCHITECTURE.md
3. Agent discovers existing auth code and email service
4. Creates tech spec in plans/PASSWORD-RESET/
5. Creates implementation plan
6. ⏸ Waits for your approval
7. Implements the feature
8. Separate reviewer inspects the code
9. Writes tests (80%+ coverage)
10. Validator verifies against specs
Requirements Before Building
You: "Define requirements for the checkout flow"
What happens:
1. Rosetta loads the requirements workflow
2. Agent researches your codebase and asks clarifying questions
3. Drafts atomic requirements in EARS format
4. ⏸ You approve each requirement individually
5. Validates for conflicts, gaps, and contradictions
6. Delivers to docs/REQUIREMENTS/ with traceability matrix
Project Initialization
You: "Initialize this repository using Rosetta"
What happens:
1. Agent scans your tech stack, dependencies, and project structure
2. Generates TECHSTACK.md, CODEMAP.md, DEPENDENCIES.md
3. Creates CONTEXT.md and ARCHITECTURE.md
4. ⏸ Asks clarifying questions about your project
5. Verifies all generated docs
Research
You: "Investigate OAuth 2.0 options for our stack"
What happens:
1. Rosetta loads the research workflow
2. Agent reads your project context
3. Crafts an optimized research prompt
4. ⏸ You approve the research direction
5. Dedicated subagent runs the investigation
6. Delivers documented analysis with grounded references
How Rosetta Protects You
These rules are always active. They cannot be turned off.
| Rule | What it means |
|---|---|
| Approval before action | Produces a plan and waits for your explicit approval before making changes |
| No data deletion | Never deletes data from servers or generates scripts that do so |
| Sensitive data protection | Personal, financial, and regulated data is masked and never shared or logged |
| Bounded scope | Tasks kept to a manageable size (up to 2 hours of work, 15 files, spec files under 350 lines) |
| Tracks assumptions | When something is unclear, asks rather than guesses |
| Risk assessment | Checks for access to dangerous tools (databases, cloud, S3) and assigns a risk level. High risk requires confirmation. Critical risk blocks execution |
| SDLC only | All requests must be development-related. No personal or private chats |
| Context monitoring | Warns at 65% context usage and escalates at 75% to prevent degraded output |
Plugins
Rosetta is distributed as plugins for Claude Code and Cursor.
- core — 20 skills, 7 agents, 5 workflows, 11 rules, 7 IDE templates. Full OSS foundation bundled locally.
- grid — 4 skills, 2 agents, 2 workflows, 2 rules. Enterprise extensions (requires core).
- rosetta — bootstrap rule and MCP connection only. Smallest footprint, all instructions loaded from MCP on demand.
See Installation — Plugin-Based Installation for install commands.
Best Practices
- Talk naturally. Describe what you need. Rosetta figures out the right workflow.
- Be specific. More context means better output and fewer questions. “Define requirements for the checkout flow covering discount codes, tax calculation, and payment retries” beats “Write requirements for checkout.”
- Read plans before approving. The plan is your last checkpoint before work begins. Check scope, approach, and what will change.
- Answer questions fully. When Rosetta asks, it targets a specific gap. Short answers lead to incomplete solutions.
- Write requirements first. The requirements workflow prevents scope creep and gives you a clear acceptance baseline.
- Invest in context files. CONTEXT.md and ARCHITECTURE.md benefit every developer on the project.
- Point Rosetta at existing specs. Reference requirements, API contracts, or design documents in CONTEXT.md. Rosetta uses them as constraints instead of generating assumptions.
- Clean up dead code before onboarding. Unused code confuses AI the same way it confuses new developers.
- Do not approve plans you have not read. The approval gate only protects you if you use it.
- Do not delete files in docs/. They are Rosetta’s project knowledge. Deleting them means starting over.
Video Tutorials
Setup:
- Install Using MCP (3 min)
- Install without MCP (2 min)
- Initialize Repo (4 min)
Configuration:
Workflows:
- Code, Validate, QA, Integration Testing, E2E testing
- Code Comprehension
- Help, Research, and Modernization
These videos were recorded in different IDEs to show that Rosetta works everywhere.
Getting Help
Related Docs
- Overview — mental model and terminology
- Quick Start — zero to working setup
- Installation — all setup modes and environment variables
- Architecture — system structure, components, data flow
- Deployment — org-wide deployment
- Contributing — fastest path to a merged PR
- Troubleshooting — symptom-first diagnosis
PRO marks capabilities that are in development or available with the enterprise edition.