First Run

You've installed orchex and configured your MCP client. Now let's execute your first parallel orchestration.

What You'll Build

A simple two-stream orchestration that creates a greeting module and its tests. The test stream depends on the module, so orchex's automatic dependency resolution runs the two streams as sequential waves.

Wave 1: [greeting]        → Creates src/greeting.ts
Wave 2: [greeting-test]   → Creates tests/greeting.test.ts, runs tests

Step 1: Set Your API Key

Export your LLM provider's API key:

# Pick one:
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="AI..."
export DEEPSEEK_API_KEY="sk-..."

orchex auto-detects the provider from whichever key is set.
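If you're not sure which key is active in your current shell, a quick check like the one below lists every known provider key that's exported. This is just a local sanity check; it is not orchex's actual detection logic.

```shell
# Print each known provider key that is set in the current environment.
# Illustrative sanity check only; not orchex's actual detection logic.
detect_provider_keys() {
  for key in ANTHROPIC_API_KEY OPENAI_API_KEY GOOGLE_API_KEY DEEPSEEK_API_KEY; do
    # ${!key} is bash indirect expansion: the value of the variable named by $key
    [ -n "${!key}" ] && echo "$key is set"
  done
  return 0
}

detect_provider_keys
```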

Step 2: Ask Your Assistant

Open your AI assistant (Claude Code, Cursor, or Windsurf) in a project directory and say:

Use orchex to create a greeting module with two streams:
1. A "greeting" stream that creates src/greeting.ts with a greet(name) function
2. A "greeting-test" stream that creates tests/greeting.test.ts with tests for greet()

Your assistant will call the orchex MCP tools automatically.

Step 3: What Happens Behind the Scenes

The assistant calls two orchex tools:

`orchex.init` — Define the orchestration

orchex.init({
  feature: "hello-world",
  streams: {
    "greeting": {
      name: "Greeting Module",
      owns: ["src/greeting.ts"],
      plan: "Create a greeting module that exports a greet(name: string): string function. It should return 'Hello, {name}!' for any input."
    },
    "greeting-test": {
      name: "Greeting Tests",
      owns: ["tests/greeting.test.ts"],
      reads: ["src/greeting.ts"],
      deps: ["greeting"],
      plan: "Write vitest tests for the greet function. Test with various names and edge cases.",
      verify: ["npx vitest run tests/greeting.test.ts"]
    }
  }
});

Notice the key properties:

  • owns — Each stream declares exclusive file ownership
  • reads — The test stream reads the source file but cannot modify it
  • deps — The test stream depends on the greeting stream
  • verify — Runs after the stream completes to check correctness
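Put together, a stream definition follows a shape like the interface below. The field names are inferred from the example call above, not copied from orchex's published types, so treat the exact shape as an assumption.

```typescript
// Inferred shape of a stream definition. Illustrative only: field names are
// taken from the example above, not from orchex's actual type exports.
interface StreamDef {
  name: string;        // human-readable label for logs and output
  plan: string;        // natural-language instructions for the stream's LLM
  owns: string[];      // files this stream has exclusive write access to
  reads?: string[];    // files it may read but never modify
  deps?: string[];     // stream IDs that must complete in an earlier wave
  verify?: string[];   // shell commands run after the stream finishes
}

// The test stream from Step 3, typed against this shape:
const greetingTest: StreamDef = {
  name: "Greeting Tests",
  plan: "Write vitest tests for the greet function.",
  owns: ["tests/greeting.test.ts"],
  reads: ["src/greeting.ts"],
  deps: ["greeting"],
  verify: ["npx vitest run tests/greeting.test.ts"],
};
```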

`orchex.execute` — Run all waves

orchex.execute({ mode: "auto" });

orchex resolves the dependency graph and executes:

Wave 1: greeting        → LLM creates src/greeting.ts
         ↓ (completes)
Wave 2: greeting-test   → LLM creates tests/greeting.test.ts
                         → Runs: npx vitest run tests/greeting.test.ts
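The wave grouping above is just topological layering of the `deps` graph: a stream is ready once all of its dependencies have completed. A minimal sketch of the idea (not orchex's actual scheduler):

```typescript
// Group streams into waves by repeatedly taking every stream whose deps are
// all done. Minimal sketch of topological layering, not orchex's scheduler.
function planWaves(deps: Record<string, string[]>): string[][] {
  const waves: string[][] = [];
  const done = new Set<string>();
  let remaining = Object.keys(deps);
  while (remaining.length > 0) {
    // A stream is ready when every dependency has finished in a prior wave.
    const ready = remaining.filter((s) => deps[s].every((d) => done.has(d)));
    if (ready.length === 0) throw new Error("dependency cycle");
    waves.push(ready);
    for (const s of ready) done.add(s);
    remaining = remaining.filter((s) => !done.has(s));
  }
  return waves;
}

planWaves({ "greeting": [], "greeting-test": ["greeting"] });
// → [["greeting"], ["greeting-test"]]
```

Streams that land in the same wave have no dependency relationship, which is exactly what makes parallel execution safe.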

Step 4: Check the Output

After execution, you'll have two new files:

src/greeting.ts:

export function greet(name: string): string {
  return `Hello, ${name}!`;
}

tests/greeting.test.ts:

import { describe, it, expect } from 'vitest';
import { greet } from '../src/greeting';

describe('greet', () => {
  it('greets by name', () => {
    expect(greet('World')).toBe('Hello, World!');
  });

  it('handles empty string', () => {
    expect(greet('')).toBe('Hello, !');
  });
});

Step 5: What Just Happened

You just experienced orchex's core workflow:

  1. Streams — Two independent units of work, each with clear file ownership
  2. Waves — orchex automatically grouped streams into execution waves based on dependencies
  3. Ownership — The test stream could read src/greeting.ts but not modify it
  4. Verification — The test stream ran vitest after writing code to confirm correctness
  5. Self-healing — If the tests had failed, orchex would have generated a fix stream automatically (up to 3 attempts)
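The self-healing behavior in point 5 can be pictured as a bounded retry loop. The sketch below is purely illustrative: `runStream` and `generateFixStream` are hypothetical stand-ins, and orchex's real failure categorization and fix-stream generation are more involved.

```typescript
// Bounded retry loop, illustrative only. Both callbacks are hypothetical
// stand-ins for orchex internals.
function runWithHealing(
  runStream: () => boolean,       // run stream + verify; true means verify passed
  generateFixStream: () => void,  // produce a targeted fix before retrying
  maxAttempts = 3,
): boolean {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (runStream()) return true;          // verification passed
    if (attempt < maxAttempts) generateFixStream();
  }
  return false;                            // give up after maxAttempts
}
```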

Try Something Bigger

Now try a three-stream orchestration with a deeper dependency graph:

Use orchex to add a calculator module:
- "types" stream: Create src/calculator/types.ts with Operation type and CalcResult interface
- "calculator" stream: Create src/calculator/index.ts with add, subtract, multiply, divide functions (depends on types)
- "calculator-test" stream: Create tests/calculator.test.ts (depends on calculator)

This creates a three-wave dependency chain, with each stream building on the last:

Wave 1: [types]           → Foundation types
Wave 2: [calculator]      → Implementation (needs types)
Wave 3: [calculator-test] → Tests (needs implementation)
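Behind that prompt, the assistant's init call would look much like the one in Step 3, just with one more stream. The definitions below are a hand-written sketch of what it might generate, not captured output:

```typescript
// Hand-written sketch of the three-stream definition; orchex may slice the
// prompt differently in practice.
const streams = {
  "types": {
    name: "Calculator Types",
    owns: ["src/calculator/types.ts"],
    plan: "Define an Operation type and a CalcResult interface.",
  },
  "calculator": {
    name: "Calculator Implementation",
    owns: ["src/calculator/index.ts"],
    reads: ["src/calculator/types.ts"],
    deps: ["types"],
    plan: "Implement add, subtract, multiply, divide using the shared types.",
  },
  "calculator-test": {
    name: "Calculator Tests",
    owns: ["tests/calculator.test.ts"],
    reads: ["src/calculator/index.ts"],
    deps: ["calculator"],
    plan: "Write vitest tests for all four operations.",
    verify: ["npx vitest run tests/calculator.test.ts"],
  },
};
```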

Scaling Up: `orchex learn`

For larger features, writing stream definitions by hand gets tedious. `orchex learn` parses a markdown plan document and generates stream definitions automatically:

Use orchex learn on my plan document at docs/plans/auth-feature.md

orchex analyzes the document, infers dependencies from file references, and generates properly sliced streams with ownership, ready to execute. See the orchex learn tutorial for details.
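One way to picture the inference step: treat the first section of the plan that mentions a file as its owner, and give any later section that mentions the same file a dependency on that owner. The toy sketch below illustrates the idea only; orchex's real parser is certainly more sophisticated.

```typescript
// Toy sketch of inferring stream deps from file references in a plan
// document. Illustrative only; not orchex's actual parser.
type Section = { id: string; text: string };

function inferDeps(sections: Section[]): Record<string, string[]> {
  const pathRe = /[\w./-]+\.(?:ts|tsx|js|md)/g;
  const owner = new Map<string, string>();   // file path -> first section to mention it
  const deps: Record<string, string[]> = {};
  for (const s of sections) {
    deps[s.id] = [];
    for (const file of s.text.match(pathRe) ?? []) {
      const o = owner.get(file);
      if (o === undefined) {
        owner.set(file, s.id);               // first mention: this section owns the file
      } else if (o !== s.id && !deps[s.id].includes(o)) {
        deps[s.id].push(o);                  // later mention: depend on the owner
      }
    }
  }
  return deps;
}
```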

Common Issues

"No LLM API key found"

Set one of the provider environment variables. See Configure MCP.

Tests fail on first run

This is normal — orchex's self-healing kicks in automatically. It categorizes the failure, generates a targeted fix stream, and retries (up to 3 times).

Stream times out

The default timeout is generous, but complex streams with large file contexts can exceed it. Add timeoutMs to the stream definition to increase it.
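For example, assuming the field takes milliseconds as its name suggests, a long-running stream might be declared like this (the value is arbitrary; the other fields mirror the Step 3 example):

```typescript
// timeoutMs raises the per-stream limit; 600_000 ms = 10 minutes.
// The value is arbitrary and the surrounding fields mirror the Step 3 example.
const slowStream = {
  name: "Greeting Module",
  owns: ["src/greeting.ts"],
  plan: "Create a greeting module with a greet(name) function.",
  timeoutMs: 600_000,
};
```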

Next Steps

  • Streams — Deep dive into stream definitions and lifecycle
  • Waves — Understand wave planning and parallelism
  • Ownership — Why file ownership matters for parallel agents
  • orchex learn — Turn markdown plans into parallel streams