Workflows

Workflows compose typed deployments into a single logical application. A workflow is a JSON graph where nodes reference deployments and edges define intent-based routing conditions.

Key design principle

The LLM is the router. Workflows compile to an orchestrator system prompt and MCP tool registrations. There is no runtime engine, no LangGraph, no Temporal -- the orchestrator LLM reasons about user intent and calls MCP tools to route requests.

What Is a Workflow?

A workflow connects multiple deployment types into a unified application:

flowchart LR
    User["User"] --> Orch["Orchestrator<br/>(model)"]
    Orch -->|"billing intent"| Billing["Billing Agent<br/>(model)"]
    Orch -->|"support intent"| Support["Support Agent<br/>(model)"]
    Orch -->|"chatwoot webhook"| Bridge["Chatwoot Bridge<br/>(bridge)"]
    Bridge --> Support
    UI["Chat UI<br/>(app)"] --> Orch

Each node in the graph is a typed deployment (full, model, app, or bridge). The orchestrator node is always a model deployment whose system prompt and MCP tools are generated from the workflow definition.

Workflow JSON Schema

A workflow is defined as a JSON document with nodes and edges:

{
  "id": "wf-customer-support",
  "name": "Customer Support Pipeline",
  "version": "1.0.0",
  "nodes": [
    {
      "id": "orchestrator",
      "deployment_ref": "env-orch-abc123",
      "type": "model",
      "role": "orchestrator",
      "config": {
        "model_id": "meta-llama/Llama-3.3-70B",
        "system_prompt_prefix": "You are a customer support router."
      }
    },
    {
      "id": "billing-agent",
      "deployment_ref": "env-billing-def456",
      "type": "model",
      "config": {
        "model_id": "meta-llama/Llama-3.1-8B"
      }
    },
    {
      "id": "support-agent",
      "deployment_ref": "env-support-ghi789",
      "type": "model",
      "config": {
        "model_id": "meta-llama/Llama-3.3-70B"
      }
    },
    {
      "id": "chatwoot-bridge",
      "deployment_ref": "env-bridge-jkl012",
      "type": "bridge",
      "config": {
        "bridge_type": "chatwoot",
        "webhook_url": "https://chatwoot.example.com/webhooks"
      }
    },
    {
      "id": "chat-ui",
      "deployment_ref": "env-ui-mno345",
      "type": "app",
      "config": {
        "app_id": "open-webui"
      }
    }
  ],
  "edges": [
    {
      "from": "chat-ui",
      "to": "orchestrator"
    },
    {
      "from": "chatwoot-bridge",
      "to": "orchestrator"
    },
    {
      "from": "orchestrator",
      "to": "billing-agent",
      "condition": {
        "intent": "billing",
        "description": "Route billing inquiries, invoice questions, and payment issues"
      }
    },
    {
      "from": "orchestrator",
      "to": "support-agent",
      "condition": {
        "intent": "support",
        "description": "Route technical support, troubleshooting, and general help"
      }
    }
  ]
}

Node Fields

| Field | Required | Description |
| --- | --- | --- |
| `id` | Yes | Unique node identifier within the workflow |
| `deployment_ref` | Yes | Reference to an existing LucidEnvironment |
| `type` | Yes | Deployment type: `full`, `model`, `app`, or `bridge` |
| `role` | No | Special role: `orchestrator` (one per workflow) |
| `config` | No | Node-specific configuration overrides |

Edge Fields

| Field | Required | Description |
| --- | --- | --- |
| `from` | Yes | Source node ID |
| `to` | Yes | Target node ID |
| `condition` | No | Routing condition (only for edges from the orchestrator) |
| `condition.intent` | No | Named intent that triggers this route |
| `condition.description` | No | Natural-language description for the orchestrator's system prompt |
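The tables above imply a handful of structural invariants: unique node IDs, exactly one orchestrator, edges that reference declared nodes, and conditions only on edges leaving the orchestrator. The sketch below checks them; it is illustrative, not the platform's actual validation code, and the function name is an assumption.

```python
# Sketch of the structural invariants implied by the node and edge
# tables. Field names follow the workflow JSON schema above; the
# validation logic itself is illustrative.

def validate_workflow(workflow: dict) -> list[str]:
    errors = []
    nodes = workflow.get("nodes", [])
    edges = workflow.get("edges", [])

    node_ids = [n["id"] for n in nodes]
    if len(node_ids) != len(set(node_ids)):
        errors.append("node ids must be unique within the workflow")

    orchestrators = [n for n in nodes if n.get("role") == "orchestrator"]
    if len(orchestrators) != 1:
        errors.append("exactly one node must have role 'orchestrator'")
    orch_id = orchestrators[0]["id"] if orchestrators else None

    known = set(node_ids)
    for e in edges:
        if e["from"] not in known or e["to"] not in known:
            errors.append(f"edge {e['from']} -> {e['to']} references an unknown node")
        if "condition" in e and e["from"] != orch_id:
            errors.append(f"edge from {e['from']} has a condition but is not from the orchestrator")
    return errors
```

Running this against a draft workflow before deployment catches topology mistakes that would otherwise surface only as an `error` status.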

How Compilation Works

When a workflow is deployed, the platform compiles the graph into two artifacts:

  1. Orchestrator system prompt -- Generated from edge conditions, describing available intents and when to route to each downstream deployment
  2. MCP tool registrations -- Each downstream node becomes an MCP tool the orchestrator can call

flowchart TD
    WF["Workflow JSON"] --> Compiler["Workflow Compiler"]
    Compiler --> SP["System Prompt<br/><i>You can route to: billing-agent<br/>(billing inquiries), support-agent<br/>(technical support)...</i>"]
    Compiler --> Tools["MCP Tools<br/><i>call_billing_agent(message)<br/>call_support_agent(message)<br/>...</i>"]
    SP --> LLM["Orchestrator LLM"]
    Tools --> LLM

The orchestrator LLM receives a user message, reasons about the intent, and calls the appropriate MCP tool. The tool invocation is a standard MCP call to the downstream deployment's API.
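The two compilation artifacts can be sketched as a single pass over the orchestrator's outgoing edges: each condition's description becomes a bullet in the system prompt, and each target node becomes a tool name. This is an illustrative reconstruction; the real compiler's output format is only specified to the extent shown in the diagram above.

```python
# Illustrative sketch of the compilation step: edge conditions become
# bullet points in the orchestrator's system prompt, and each routed-to
# node becomes an MCP tool name (e.g. billing-agent -> call_billing_agent).

def compile_workflow(workflow: dict) -> tuple[str, list[str]]:
    orch = next(n for n in workflow["nodes"] if n.get("role") == "orchestrator")
    lines = [
        orch.get("config", {}).get("system_prompt_prefix", ""),
        "",
        "Route the user's message with one of these tools:",
    ]
    tools = []
    for e in workflow["edges"]:
        # Only conditioned edges leaving the orchestrator become routes.
        if e["from"] != orch["id"] or "condition" not in e:
            continue
        tool = "call_" + e["to"].replace("-", "_")
        tools.append(tool)
        lines.append(f"- {tool}: {e['condition']['description']}")
    return "\n".join(lines), tools
```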

No runtime engine

Compilation produces a system prompt and tool registrations -- nothing more. The orchestrator is a standard model deployment. This means workflow behavior is debuggable with the same tools you use for any LLM (trace inspection, prompt engineering, observability auditor).

Workflow Statuses

| Status | Description |
| --- | --- |
| `draft` | Workflow is defined but not yet deployed. Nodes may reference environments that do not yet exist. |
| `deploying` | The platform is provisioning environments and compiling the orchestrator. |
| `active` | All nodes are running and the orchestrator is accepting traffic. |
| `stopped` | The workflow has been manually stopped. Node environments remain provisioned but receive no traffic. |
| `error` | One or more nodes failed to deploy. Check node-level status for details. |

Workflow Passports

A workflow generates a composite AI Passport that aggregates attestations from all participating nodes:

  • Each node contributes its own TEE attestation and auditor claims
  • The workflow passport includes a graph hash proving the deployed topology matches the declared workflow
  • Edge conditions are included as evidence, so relying parties can verify the routing logic

{
  "passportId": "pass-wf-abc123",
  "type": "workflow",
  "workflowId": "wf-customer-support",
  "graphHash": "sha256:abc123...",
  "nodeAttestations": [
    {
      "nodeId": "orchestrator",
      "passportId": "pass-orch-def456",
      "tee": { "type": "AMD_SEV_SNP" },
      "auditors": ["guardrails", "observability"]
    },
    {
      "nodeId": "billing-agent",
      "passportId": "pass-billing-ghi789",
      "tee": { "type": "AMD_SEV_SNP" },
      "auditors": ["guardrails", "pii", "observability"]
    }
  ],
  "edges": [
    { "from": "orchestrator", "to": "billing-agent", "intent": "billing" }
  ],
  "signature": { "algorithm": "Ed25519", "value": "base64..." }
}

Verify a workflow passport the same way as a standard passport:

lucid passport verify pass-wf-abc123

Example: Customer Support with Intent Routing

A common pattern is a customer support workflow where a single orchestrator routes inbound messages to specialized agents:

  1. Chat UI (app) -- Web interface for live agents and customers
  2. Chatwoot Bridge (bridge) -- Receives webhooks from Chatwoot and translates them into OpenAI-format messages
  3. Orchestrator (model) -- Classifies user intent and routes to the correct agent
  4. Billing Agent (model) -- Handles invoice lookups, payment disputes, account upgrades
  5. Support Agent (model) -- Handles troubleshooting, how-to questions, bug reports

The orchestrator's compiled system prompt looks like:

You are a customer support router. Analyze the user's message and route it
to the appropriate agent:

- call_billing_agent: Use for billing inquiries, invoice questions, and
  payment issues
- call_support_agent: Use for technical support, troubleshooting, and
  general help

Always route the full user message. Do not attempt to answer directly.

When a user asks "Why was I charged twice?", the orchestrator calls call_billing_agent with the message. The billing agent processes it with its own auditor chain (PII detection, guardrails) and returns a response through the orchestrator.
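For the billing route above, the compiled MCP tool registration might look like the following. This is a hypothetical shape: MCP tools declare a name, description, and JSON Schema for their input, and the single `message` parameter mirrors the prompt's instruction to always route the full user message.

```python
# Hypothetical MCP tool registration for the billing route. The
# description is taken from the edge condition in the workflow JSON;
# the exact schema the platform emits is an assumption.
billing_tool = {
    "name": "call_billing_agent",
    "description": "Route billing inquiries, invoice questions, and payment issues",
    "inputSchema": {
        "type": "object",
        "properties": {
            "message": {
                "type": "string",
                "description": "Full user message to forward to the billing agent",
            },
        },
        "required": ["message"],
    },
}
```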

Next Steps