Operator Runbook

Quick Start (Demo Mode)

git clone https://github.com/akshan-main/revcat-agent-advocate.git
cd revcat-agent-advocate
pip install -e .
export DEMO_MODE=true
export ANTHROPIC_API_KEY=sk-ant-...
revcat-advocate demo-run
python -m http.server -d site_output 8000

Quick Start (Real Mode)

git clone https://github.com/akshan-main/revcat-agent-advocate.git
cd revcat-agent-advocate
pip install -e .
export REVENUECAT_API_KEY=sk_your_key
export ANTHROPIC_API_KEY=sk-ant-...
export DEMO_MODE=false
revcat-advocate ingest-docs
revcat-advocate write-content --topic "Your Topic" --type tutorial
revcat-advocate build-site

Safety Gates

FlagDefaultEffect
DRY_RUNtrueNo external posts, no GitHub issues. Drafts only
ALLOW_WRITESfalseBlocks POST/PUT/DELETE to RevenueCat API
DEMO_MODEfalseUses mock API responses and local fixture data for testing

Governance tested with a 17-case red-team suite covering prompt injection, PII extraction, competitor bashing, and brand safety.

Autonomy Model

Runs AutonomouslyOperator-Gated
  • Documentation ingestion and indexing
  • Hybrid search (BM25 + semantic vectors)
  • Content drafting with citations
  • Content verification (link checks, hash matching)
  • Product feedback generation from doc analysis
  • Growth experiment execution
  • Site builds
  • Ledger recording and chain verification
  • Publishing content to the live site (DRY_RUN)
  • Posting tweets (DRY_RUN + critic gate)
  • API write operations (ALLOW_WRITES)
  • Deployment to GitHub Pages (explicit deploy command)
  • GitHub issue creation (DRY_RUN)

Reliability Engineering

Pre-Execution Action Firewall

External actions are checked against applicable gates before execution:

LayerGateBlocks
1. ConfigDRY_RUN / ALLOW_WRITESAll external writes, API mutations, social posts
2. SafetySafetyError exceptionPOST/PUT/DELETE to RC API, write-capable MCP tools
3. Publish GateBanned phrase + citation checkDeployment if letter contains overclaims or too few citations
4. Tweet CriticProgrammatic + LLM reviewTweets that fail length, code detection, or quality checks
5. Governance Suite17-case red-teamPrompt injection, PII leaks, competitor attacks, brand violations

External actions are gated by applicable layers. DRY_RUN/ALLOW_WRITES and the tweet critic are enforced in code; the YAML firewall provides additional configurable rules for actions routed through the agent core.

Failure Categories and Retry Strategy

Failure TypeExampleStrategy
Transient API429 rate limit, 502 gatewayExponential backoff, 3 retries with jitter
Auth failure401 expired tokenFail fast, log to ledger, surface to operator
Content qualityPublish gate rejects letterRegen loop: up to N attempts with gate feedback injected into prompt
Tweet qualityCritic rejects draftRewrite loop: up to 3 rewrites with critic feedback
Doc fetchPage 404 or changedSkip page, log changed SHA256, continue ingest
LLM refusalClaude declines generationLog refusal reason, mark run as failed, no retry

Rollback Criteria

  • Ledger chain break: If verify-ledger reports breaks, the site build includes a red "BROKEN" badge. Deployment proceeds but the break is visible to reviewers.
  • Publish gate failure: submit command will not deploy if the letter fails gate checks after max regen attempts. The previous deployed version remains live.
  • Tweet rejection: If the critic agent rejects a tweet 3 times, it is logged as "skipped" and never posted. No partial posts.
  • Failed run: Every failed run is logged to the ledger with success=0. Failed runs are shown on the scorecard with timestamps for post-mortem.

Eval and Observability

  • Hash-chained ledger: Every CLI command produces a RunEntry with inputs, tool calls, sources, outputs, and SHA256 hash linking to the previous entry. Tamper detection is automatic.
  • Citation verification: Content verifier HEAD-checks every cited URL, matches quoted snippets against cached doc hashes, and validates code snippet syntax.
  • Scorecard metrics: Success rate, word count, citation count, experiment outcomes, and failure log are computed live from the database on every site build.
  • Staged rollout: build-site renders locally for inspection. deploy is a separate, explicit command. The submit pipeline gates deployment on publish-gate pass.

Post-Cycle Enforcement

After every autonomous agent cycle, the following actions run in code, not prompt — the agent cannot skip them:

ActionConditionContract
Dev.to stats syncconditional(has_devto)Only runs when DEVTO_API_KEY is configured. Pulls views, reactions, comments for all published articles back to Turso. Skipped silently when no key is set.
Lesson recording warningalwaysIf the agent did not call record_lesson during the cycle, a warning is logged. The cycle still succeeds — but the gap is visible in the ledger.
Site rebuildalwaysRuns via build-site in the weekly CI workflow after the agent cycle completes.
Ledger verificationalwaysRuns via verify-ledger in CI. If the hash chain is broken, the site shows a red badge.

This is a deliberate contract: post_cycle_devto_sync = conditional(has_devto). The agent decides what to create; the code decides what happens after.

Where the Receipts Are

  • /content: Posts have a Sources section with cited doc URLs. SHA256 hashes shown when available.
  • /experiments: Every experiment has hypothesis, metric, and results.
  • /feedback: Feedback items include repro steps and evidence links where available.

Commands

CommandDescription
ingest-docsDownload RC docs, fetch .md mirrors, build search index
write-content --topic "..." --type tutorialGenerate content with citations and verification
run-experiment --name programmatic-seoStart a growth experiment
generate-feedback --count 3Generate product feedback from doc analysis
tweet --topic "..."Draft tweets (DRY_RUN gated)
scan-githubScan RC repos for issues, draft responses
scan-redditScan subreddits, draft responses
competitive-digestCompetitive intel from public pages
analyze-docsDocumentation quality analysis
repro-testAPI/MCP repro scenarios
weekly-reportWeekly activity summary
build-siteBuild static site
deploy --repo owner/nameDeploy to GitHub Pages
chatInteractive doc-grounded chat
serveHTTP API server
mcp-serveMCP server for other agents
auto --interval 6hScheduled task loop (content, experiments, feedback, site build)
demo-runFull pipeline end-to-end

Reproduce a Post

revcat-advocate write-content --topic "Charts API for Agents" --type tutorial
# Output: site_output/content/charts-api-for-agents/index.md
# Includes: outline, citations, code snippets, verification results

Reproduce an Experiment

revcat-advocate run-experiment --name programmatic-seo
# Output: SEO pages in site_output/content/
# DB record in growth_experiments table

Architecture

Docs (LLM Index + .md mirrors)
    -> Knowledge Engine (BM25 + RAG hybrid search)
        -> Content Engine (Claude API + citation verification)
        -> Growth Engine (experiments + programmatic SEO)
        -> Feedback Engine (doc analysis + repro harness)
        -> Social (GitHub, Reddit, X; all DRY_RUN gated)
    -> Governance (red-team suite, safety gates)
        -> Static Site (GitHub Pages)

Claude Code Skills

Clone this repo into any project and get RevenueCat developer tools as Claude Code slash commands. These aren't wrappers. Each skill adds a new capability that combines the agent's ingested doc knowledge base with your actual codebase to solve real developer problems.

Try it

git clone https://github.com/akshan-main/revcat-agent-advocate.git
cd revcat-agent-advocate && pip install -e .

# In Claude Code, inside any project:
/review-rc          # Review your RC integration against docs
/migrate            # Generate migration plan from StoreKit/Stripe/Adapty
/paywall            # Generate paywall UI code for your platform
/debug-webhook      # Paste a webhook payload, get plain-English explanation
/rc-audit           # Full integration audit with scored report
/pricing-strategy   # Pricing optimization based on category benchmarks
/search-docs        # Search ingested RC docs with hybrid RAG
/quiz               # Product comprehension quiz on your code changes

MCP Server Integration

The agent also runs as an MCP server with 22 tools. MCP clients like Claude Desktop and Claude Code can connect and use the agent's capabilities.

Connect via stdio (local)

claude mcp add revcat-agent-advocate -- revcat-advocate mcp-serve

Connect via SSE (remote)

revcat-advocate mcp-serve --transport sse --port 8090
claude mcp add revcat-agent-advocate -t http -- http://localhost:8090/mcp

Available MCP Tools (22)

ToolDescription
search_docsHybrid RAG search over ingested doc pages
ask_questionDoc-grounded Q&A with citations
suggest_topicsSuggest content topics based on doc coverage
generate_contentContent pipeline: outline → draft (no auto-verify)
generate_feedback_mcpDoc analysis → structured feedback
run_experiment_mcpStart a growth experiment (hypothesis + artifacts)
verify_ledgerHash chain integrity check
get_agent_statsSystem statistics and counts
get_architectureFull architecture and tech stack
list_contentAll generated content pieces
list_experimentsExperiment registry and results
list_feedbackFiled product feedback
get_content_bodyFull article markdown by slug
get_experiment_detailsExperiment hypothesis, inputs, results
get_feedback_detailsFeedback repro steps and evidence
get_ledger_entriesRecent ledger entries with hashes
get_weekly_reportWeekly activity summary
read_source_fileRead source files in the agent's codebase (sensitive files blocked)
list_source_filesBrowse the agent's source tree
list_skillsList available developer skills with capabilities and scopes
run_skillPrepare a skill's context, prompt, and scoped tools for the caller to drive
run_skill_chainChain multiple skills in sequence, passing context forward

RevenueCat MCP (upstream)

claude mcp add revenuecat -t http -- https://mcp.revenuecat.ai/mcp \
  --header "Authorization: Bearer YOUR_API_KEY"