Customizing Agents

Adapt agent behavior to your project’s needs.


Table of contents

  1. Overview
  2. Agent Anatomy
  3. Common Customizations
    1. Add Project-Specific Patterns
    2. Change Technology Stack
    3. Add Quality Checks
    4. Customize Test Framework
  4. Creating Custom Agents
    1. When to Create a Custom Agent
    2. Template for New Agents
  5. Agent Composition Patterns
    1. Pattern 1: Sequential Agents
    2. Pattern 2: Parallel Agents
    3. Pattern 3: Conditional Agents
  6. Configuration Files
    1. Contract Files (docs/contracts/*.yml)
    2. CLAUDE.md (Project Instructions)
    3. Agent Metadata (Optional)
  7. Testing Agent Customizations
    1. Dry Run
    2. Small Test Case
    3. Verify Output
  8. Common Pitfalls
    1. ❌ Overly Specific Rules
    2. ❌ Contradictory Rules
    3. ❌ Vague Triggers
  9. Example: Custom Deployment Agent
  10. Best Practices

Overview

Specflow agents are prompt-based: each agent is defined by a markdown file in scripts/agents/. You can customize their behavior by editing these prompts.

No code changes required. Just edit the markdown.


Agent Anatomy

Every agent has this structure:

```markdown
# Agent: agent-name

## Role
Brief description of what this agent does

## Trigger Conditions
- When to invoke this agent
- User phrases that indicate need

## Inputs
What this agent needs to function

## Process
Step-by-step workflow the agent follows

## Outputs
What this agent produces

## Quality Gates
Success criteria

## Rules
Non-negotiable constraints
```

Common Customizations

1. Add Project-Specific Patterns

Example: Your project uses a custom repository pattern

File: scripts/agents/migration-builder.md

Add to Rules section:

```markdown
## Rules
- All migrations MUST use snake_case table names
- All timestamps MUST include timezone: `timestamptz`
- All foreign keys MUST have ON DELETE CASCADE (project requirement)
- All tables MUST have created_at, updated_at, deleted_at columns
```

Result: Agent now follows your conventions automatically.
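
Rules like these can also be spot-checked outside the agent. A minimal sketch, assuming plain-SQL migration files; the regexes and messages are illustrative, not part of Specflow:

```python
import re

# Illustrative checks mirroring the example rules above (assumptions,
# not part of Specflow): snake_case table names and timestamptz columns.
def check_migration(sql: str) -> list[str]:
    """Return human-readable violations found in a migration's SQL text."""
    violations = []
    for table in re.findall(r"CREATE TABLE\s+(\w+)", sql, re.IGNORECASE):
        if not re.fullmatch(r"[a-z][a-z0-9_]*", table):
            violations.append(f"table {table!r} is not snake_case")
    # A bare `timestamp` (no tz suffix) violates the timezone rule.
    if re.search(r"\btimestamp\b", sql, re.IGNORECASE):
        violations.append("use timestamptz, not timestamp")
    return violations
```

Running something like this over your migrations in CI gives a second line of defense if the agent drifts from the prompt.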


2. Change Technology Stack

Example: You use MySQL instead of PostgreSQL

File: scripts/agents/migration-builder.md

Update Process section:

````markdown
## Process

### Step 1: Analyze Requirements
...

### Step 2: Generate MySQL Migration

```sql
-- MySQL syntax (not PostgreSQL)
CREATE TABLE employees (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  INDEX idx_name (name)
);
```
````

Result: Agent generates MySQL-compatible migrations.


3. Add Quality Checks

Example: Enforce TypeScript strict mode

File: scripts/agents/frontend-builder.md

Add to Quality Gates:

```markdown
## Quality Gates
- [ ] All files pass `tsc --noEmit --strict`
- [ ] No `any` types except in type definitions
- [ ] All props interfaces exported
- [ ] All components have displayName
```

Result: Agent verifies strict TypeScript compliance.


4. Customize Test Framework

Example: You use Vitest instead of Jest

File: scripts/agents/test-runner.md

Update Commands Used section:

````markdown
## Commands Used

```bash
# Run all tests
pnpm vitest run

# Run specific test
pnpm vitest run src/__tests__/contracts/

# Watch mode
pnpm vitest watch
```
````

Result: Agent uses Vitest commands.


Creating Custom Agents

When to Create a Custom Agent

Create a new agent when you have a repeated workflow that:

  • Requires multiple steps
  • Has clear inputs/outputs
  • Has quality gates
  • Is triggered by specific user phrases

Examples:

  • Custom deployment agent
  • Custom notification agent
  • Custom data migration agent

Template for New Agents

Create `scripts/agents/your-agent-name.md`:

````markdown
# Agent: your-agent-name

## Role
[One sentence: what this agent does]

## Trigger Conditions
- User says: "[trigger phrase 1]", "[trigger phrase 2]"
- User provides: [required inputs]

## Inputs
- **Input 1:** Description
- **Input 2:** Description

## Process

### Step 1: [First Step]
[What happens]

```bash
# Commands
command here
```

### Step 2: [Second Step]
[What happens]

## Outputs
- **Output 1:** Description
- **Output 2:** Description

## Quality Gates
- [ ] Gate 1: Description
- [ ] Gate 2: Description

## Rules
1. RULE 1 (why it matters)
2. RULE 2 (why it matters)
````
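
A quick way to test a new agent file is to check it against the shared anatomy. A minimal sketch (the required-section list comes from the anatomy shown earlier; Specflow itself performs no such validation):

```python
# Sketch: verify an agent prompt contains every section from the shared
# anatomy. The section list mirrors this guide, not a Specflow built-in.
REQUIRED_SECTIONS = [
    "Role", "Trigger Conditions", "Inputs",
    "Process", "Outputs", "Quality Gates", "Rules",
]

def missing_sections(agent_md: str) -> list[str]:
    """Return anatomy sections absent from an agent markdown prompt."""
    headings = {line.removeprefix("## ").strip()
                for line in agent_md.splitlines()
                if line.startswith("## ")}
    return [s for s in REQUIRED_SECTIONS if s not in headings]
```

An empty result means the file at least has the full skeleton before you start iterating on its content.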

Agent Composition Patterns

Pattern 1: Sequential Agents

One agent spawns another:

```markdown
# Agent: orchestrator

## Process
### Step 3: Spawn Database Agent
1. Read `scripts/agents/migration-builder.md`
2. Task("Build migration", "{prompt + task}", "general-purpose")
3. Wait for completion

### Step 4: Spawn Test Agent
1. Read `scripts/agents/test-runner.md`
2. Task("Run tests", "{prompt + task}", "general-purpose")
```

Pattern 2: Parallel Agents

One agent spawns multiple agents at once:

```markdown
# Agent: parallel-orchestrator

## Process
### Step 2: Spawn All Agents (Parallel)
[Single Message]:
  Task("Database", "{migration-builder prompt + task}", "general-purpose")
  Task("Frontend", "{frontend-builder prompt + task}", "general-purpose")
  Task("Backend", "{edge-function-builder prompt + task}", "general-purpose")
```

Pattern 3: Conditional Agents

Agent decides which other agent to invoke:

```markdown
# Agent: smart-router

## Process
### Step 2: Route to Specialist
If database changes needed:
  - Invoke migration-builder
Else if Edge Function needed:
  - Invoke edge-function-builder
Else:
  - Invoke frontend-builder
```
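
The smart-router's decision is plain trigger matching, which can be sketched as a lookup (agent names are from this guide; the keyword lists are illustrative assumptions):

```python
# Sketch of the smart-router decision: first matching trigger wins.
# Keyword lists are illustrative assumptions, not Specflow's actual triggers.
ROUTES = [
    ("migration-builder", ("migration", "table", "schema")),
    ("edge-function-builder", ("edge function", "endpoint", "api")),
]

def route(request: str) -> str:
    """Pick a specialist agent for a user request; frontend is the fallback."""
    text = request.lower()
    for agent, keywords in ROUTES:
        if any(k in text for k in keywords):
            return agent
    return "frontend-builder"
```

Ordering matters: put the most specific specialist first, and keep the fallback agent last.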

Configuration Files

Some agents read configuration from your project:

1. Contract Files (docs/contracts/*.yml)

Agents like specflow-writer generate these based on issues.

2. CLAUDE.md (Project Instructions)

Global project context all agents inherit.

Add project-specific rules:

```markdown
## Project-Specific Rules
- We use Tailwind, never inline styles
- We use React Query, never SWR
- We use Supabase, never Firebase
```

3. Agent Metadata (Optional)

Create scripts/agents/.metadata.json:

```json
{
  "project": "your-project",
  "stack": "React + Vite + Supabase",
  "conventions": {
    "tableNames": "snake_case",
    "componentNames": "PascalCase",
    "testFiles": "*.spec.ts"
  }
}
```

Agents can read this for context.
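
One way to use it: a wrapper that folds the metadata into each agent's prompt context. A sketch assuming the shape shown above (`metadata_context` and `load_context` are hypothetical helpers, not Specflow APIs):

```python
import json
from pathlib import Path

# Hypothetical helpers: turn .metadata.json (shape shown above)
# into a few plain-text context lines for an agent prompt.
def metadata_context(meta: dict) -> str:
    lines = [f"Stack: {meta['stack']}"]
    lines += [f"{k}: {v}" for k, v in meta.get("conventions", {}).items()]
    return "\n".join(lines)

def load_context(path: str = "scripts/agents/.metadata.json") -> str:
    return metadata_context(json.loads(Path(path).read_text()))
```

The resulting lines can be prepended to any agent prompt so conventions live in one file instead of being repeated per agent.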


Testing Agent Customizations

1. Dry Run

Invoke agent with --dry-run flag (if supported):

```bash
# Conceptual - adjust for your setup
claude-code task run migration-builder --dry-run
```

2. Small Test Case

Run on a single, simple issue first:

```markdown
# Issue #999: Test Agent Customization
Simple test to verify agent follows new rules.

## Acceptance Criteria
- [ ] Agent creates file following new naming convention
```

3. Verify Output

Check agent output matches expectations:

  • File names follow conventions
  • Code patterns match rules
  • Quality gates enforced
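
The first two checks can be scripted. A sketch using the conventions from the example metadata earlier (snake_case migrations, PascalCase components, *.spec.ts tests); the patterns are assumptions for this guide's example stack:

```python
import re

# Sketch: spot-check generated file names against the example conventions.
# The patterns are illustrative assumptions, not Specflow built-ins.
def follows_convention(filename: str) -> bool:
    if filename.endswith(".ts") and "/__tests__/" in filename:
        return filename.endswith(".spec.ts")              # testFiles: *.spec.ts
    if filename.endswith(".tsx"):
        stem = filename.rsplit("/", 1)[-1].removesuffix(".tsx")
        return re.fullmatch(r"[A-Z][A-Za-z0-9]*", stem) is not None  # PascalCase
    if filename.endswith(".sql"):
        stem = filename.rsplit("/", 1)[-1].removesuffix(".sql")
        return re.fullmatch(r"[a-z0-9_]+", stem) is not None         # snake_case
    return True
```

Run it over the files the agent produced in your test issue before trusting the customization on real work.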

Common Pitfalls

❌ Overly Specific Rules

Bad:

- Table `employees` MUST have column `full_name_with_title`

Good:

- All tables MUST follow snake_case naming

Why: Specific rules don’t generalize to other tables.


❌ Contradictory Rules

Bad:

- All files MUST be under 200 lines
- All logic MUST be in single file (no splitting)

Why: These rules conflict.


❌ Vague Triggers

Bad:

```markdown
## Trigger Conditions
- When needed
```

Good:

```markdown
## Trigger Conditions
- User says: "create migration", "add table", "update schema"
- Files changed: `supabase/migrations/*.sql`
```

Why: Clear triggers = correct agent invocation.


Example: Custom Deployment Agent

File: scripts/agents/deploy-to-vercel.md

````markdown
# Agent: deploy-to-vercel

## Role
Deploy application to Vercel with environment variables and preview URLs

## Trigger Conditions
- User says: "deploy to vercel", "push to production", "create preview"
- After: All tests pass, contracts verified

## Inputs
- **Environment:** `production` or `preview`
- **Branch:** Git branch to deploy
- **Env Vars:** List of environment variables to set

## Process

### Step 1: Verify Prerequisites

```bash
# Check Vercel CLI installed
vercel --version

# Check logged in
vercel whoami
```

### Step 2: Deploy

```bash
# Production
vercel --prod

# Preview
vercel
```

### Step 3: Set Environment Variables

```bash
vercel env add DATABASE_URL production
vercel env add API_KEY production
```

### Step 4: Verify Deployment
- Check deployment URL returns 200
- Run smoke tests against deployed URL

## Outputs
- **Deployment URL:** https://your-app.vercel.app
- **Preview URL:** https://your-app-git-branch.vercel.app
- **Deployment logs:** Link to Vercel dashboard

## Quality Gates
- [ ] Deployment succeeds (exit code 0)
- [ ] Health check returns 200
- [ ] Environment variables set
- [ ] SSL certificate valid

## Rules
1. NEVER deploy if tests failing
2. ALWAYS create preview for non-main branches
3. ALWAYS tag production deployments in Git
````

Usage:

Task("Deploy to Vercel production", "{deploy-to-vercel.md prompt}\n\n---\n\nBranch: main, Environment: production", "general-purpose")
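
The `{deploy-to-vercel.md prompt}` placeholder is just the agent file's contents joined to the task with the `---` separator. A hypothetical helper that mirrors that usage line:

```python
# Hypothetical helper mirroring the usage line above:
# agent prompt, then "---", then the task-specific context.
def compose_prompt(agent_prompt: str, task: str) -> str:
    return f"{agent_prompt.strip()}\n\n---\n\n{task}"
```

In practice the first argument would be the file contents, e.g. `Path("scripts/agents/deploy-to-vercel.md").read_text()`.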

Best Practices

  1. Start small: Customize one agent at a time
  2. Test in isolation: Run agent on simple issues first
  3. Document changes: Add comments explaining custom rules
  4. Version control: Commit agent prompts to Git
  5. Share patterns: If a customization works well, contribute it back to the Specflow repo