
Claude Agent SDK Executor

Native Claude executor for Ark agents, built on the Claude Agent SDK. It provides built-in tool access (Read, Write, Edit, Bash, Grep, Glob) with per-session filesystem isolation and optional OTEL tracing.

Overview

The Claude Agent SDK Executor provides:

  • Built-in Tool Access - Read, Write, Edit, Bash, Grep, Glob available out of the box
  • MCP Tool Support - MCP servers assigned to the agent are connected automatically alongside built-in tools
  • Session Persistence - Multi-turn conversations via ClaudeSDKClient with explicit session resume
  • Filesystem Isolation - Each conversationId gets its own directory at /data/sessions/<conversationId>/
  • Sandbox Isolation - Optional per-conversation container isolation via agent-sandbox scheduler mode
  • OTEL Tracing - Optional observability via openinference-instrumentation-claude-agent-sdk
  • A2A Protocol - Compliant with the Agent-to-Agent protocol for seamless Ark integration
  • ExecutionEngine CRD - Registers as a Kubernetes-native execution engine

Deployment Modes

| Mode | Description | Use when |
| --- | --- | --- |
| Standalone (default in Helm) | Single executor pod with PVC | Simple setups, single-tenant |
| Scheduler (default in DevSpace) | Per-conversation sandbox pods | Multi-tenant, security isolation needed |

Install

ark install marketplace/executors/executor-claude-agent-sdk

Or with DevSpace (scheduler mode by default):

```shell
cd executors/claude-agent-sdk
devspace deploy                # scheduler mode
devspace deploy -p standalone  # standalone mode
```

Or with Helm:

```shell
# Standalone (default)
helm install executor-claude-agent-sdk ./chart -n default --create-namespace

# Scheduler mode
helm install executor-claude-agent-sdk ./chart -n default --create-namespace --set scheduler.enabled=true
```

Access via:

  • Port forward: kubectl port-forward svc/executor-claude-agent-sdk 8000:8000
  • Health check: curl http://localhost:8000/health

Prerequisites

Model CRD (Required)

Create a Model CRD with your Anthropic configuration. The executor reads the API key, model name, and optional base URL from the Model CRD at request time:

```yaml
apiVersion: ark.mckinsey.com/v1
kind: Model
metadata:
  name: claude-sonnet
spec:
  model:
    value: claude-sonnet-4-6
  type: anthropic
  config:
    anthropic:
      apiKey:
        valueFrom:
          secretKeyRef:
            name: my-anthropic-secret
            key: api-key
      # Optional: custom base URL for proxies or private endpoints
      # baseUrl: https://proxy.internal.example.com
```

Reference the Model from your Agent CRD via spec.model.ref: claude-sonnet.

OTEL Tracing (Optional)

Enable observability by creating the otel-environment-variables secret:

```shell
kubectl create secret generic otel-environment-variables \
  --from-literal=OTEL_EXPORTER_OTLP_ENDPOINT=http://phoenix.phoenix.svc.cluster.local:6006/v1/traces \
  --from-literal=OTEL_EXPORTER_OTLP_HEADERS='Authorization=Bearer <token>'
```

Supported backends include Phoenix, Langfuse, Honeycomb, Jaeger, and any OTLP-compatible endpoint.
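Per the OpenTelemetry environment-variable convention, `OTEL_EXPORTER_OTLP_HEADERS` is a comma-separated list of `key=value` pairs. A sketch of how such a value is conventionally split into a header dict (the parsing helper is ours, not the executor's code):

```python
# Sketch of parsing OTEL_EXPORTER_OTLP_HEADERS ("k1=v1,k2=v2") into a
# header dict; parse_otlp_headers is an illustrative helper.
import os

def parse_otlp_headers(raw: str) -> dict[str, str]:
    """Split only on the first '=' per pair, so values may contain spaces."""
    headers = {}
    for pair in raw.split(","):
        pair = pair.strip()
        if not pair:
            continue
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers

raw = os.environ.get("OTEL_EXPORTER_OTLP_HEADERS", "Authorization=Bearer <token>")
print(parse_otlp_headers(raw))
```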

ExecutionEngine Resource

The executor registers as an ExecutionEngine with A2A protocol:

```yaml
apiVersion: ark.mckinsey.com/v1prealpha1
kind: ExecutionEngine
metadata:
  name: executor-claude-agent-sdk
spec:
  address:
    valueFrom:
      serviceRef:
        name: executor-claude-agent-sdk
  description: "Claude Agent SDK Executor - A2A server with built-in tool access"
```

Creating Agents

Reference the execution engine in your agent spec:

```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Agent
metadata:
  name: my-claude-agent
spec:
  executionEngine:
    name: executor-claude-agent-sdk
  model:
    ref: claude-sonnet
  prompt: |
    You are a helpful assistant with access to filesystem tools.
    You can read, write, and edit files in your working directory.
```

Scheduler Mode

The scheduler provides per-conversation sandbox isolation using kubernetes-sigs/agent-sandbox. Each conversation gets its own pod running the unchanged executor image.

Prerequisites

Install the agent-sandbox controller (core + extensions):

```shell
VERSION=v0.2.1
kubectl apply -f https://github.com/kubernetes-sigs/agent-sandbox/releases/download/${VERSION}/manifest.yaml
kubectl apply -f https://github.com/kubernetes-sigs/agent-sandbox/releases/download/${VERSION}/extensions.yaml
```

How It Works

  1. The Ark controller sends an A2A message to the scheduler
  2. The scheduler extracts contextId and looks up (or creates) a sandbox for that conversation
  3. The request is proxied to the sandbox pod’s A2A endpoint
  4. The response is relayed back to the controller

Routing state is stored on SandboxClaim labels and annotations — the scheduler is stateless and supports multiple replicas.

Session Identity

The scheduler owns session identity. For new conversations, omit conversationId from the Query — the scheduler generates a UUID4 and injects it into the A2A message. The executor echoes it back, and the Ark controller stores it in Query.status.conversationId. Use this value for follow-up queries to route back to the same sandbox.

Non-UUID4 contextId values are rejected with a 400 error.
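A sketch of the version check implied here, assuming standard UUID parsing (note that parsing must come first, then a version comparison, since `uuid.UUID(..., version=4)` coerces rather than validates):

```python
# Sketch of a UUID4 validity check like the one the scheduler applies.
import uuid

def is_uuid4(value: str) -> bool:
    try:
        return uuid.UUID(value).version == 4
    except ValueError:
        return False

assert is_uuid4(str(uuid.uuid4()))
assert not is_uuid4("not-a-uuid")       # malformed: rejected
assert not is_uuid4(str(uuid.uuid1()))  # valid UUID, wrong version: rejected
```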

Scheduler Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| scheduler.enabled | Enable scheduler mode | false |
| scheduler.config.sessionIdleTTL | Idle session timeout (seconds) | 1800 |
| scheduler.config.shutdownPolicy | Delete or Retain expired sandboxes | Delete |
| scheduler.config.sandboxReadyTimeout | Sandbox readiness timeout (seconds) | 60 |
| scheduler.config.maxActiveSandboxes | Max concurrent sandbox pods (0 = unlimited) | 0 |
| scheduler.warmPool.enabled | Enable pre-warmed sandbox pool | false |
| scheduler.warmPool.replicas | Number of warm pool pods | 2 |

Configuration is hot-reloadable via ConfigMap — changes apply without restart.

Known Limitations

  • No streaming support: The proxy buffers the full upstream response. A2A message/stream (SSE) is not supported; use message/send only.

Session Behavior

Each conversation gets an isolated filesystem directory:

```
/data/sessions/
├── <conversationId-1>/        ← Agent's working directory
│   └── (agent-created files)
├── <conversationId-2>/
└── ...

~/.claude/projects/
├── -data-sessions-<conversationId-1>/   ← SDK session state (auto-managed)
│   └── <session-id>.jsonl
└── ...
```
  • New conversation: Omit conversationId — the scheduler generates a UUID4. A fresh directory is created and the SDK starts a new session.
  • Continued conversation: Set conversationId to the value from Query.status.conversationId. The SDK resumes the previous session by explicit session ID.
  • Standalone mode: Session data survives pod restarts via a PersistentVolumeClaim
  • Scheduler mode: Session data lives on the sandbox pod’s ephemeral filesystem for the conversation lifetime
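The two directory layouts above relate by a simple path transformation: the SDK names its projects directory after the working directory path with `/` replaced by `-` (a Claude Code on-disk convention; this sketch is descriptive, not the executor's code):

```python
# Sketch of how a conversationId maps to the agent's working directory and
# the SDK's session-state directory; helper names are illustrative.
from pathlib import Path

SESSIONS_ROOT = Path("/data/sessions")

def working_dir(conversation_id: str) -> Path:
    """Per-conversation working directory handed to the agent."""
    return SESSIONS_ROOT / conversation_id

def sdk_project_dir(conversation_id: str) -> str:
    """Directory name under ~/.claude/projects/ for this conversation."""
    return str(working_dir(conversation_id)).replace("/", "-")

cid = "4f8c0d5e-1234-4abc-9def-0123456789ab"
print(working_dir(cid))      # /data/sessions/4f8c0d5e-...
print(sdk_project_dir(cid))  # -data-sessions-4f8c0d5e-...
```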

MCP Tools

Agents can reference MCP-type tools in their spec. The executor resolves the backing MCPServer connection info (url, transport, headers) via the ark-sdk query extension and connects to each server using the Claude Agent SDK’s native MCP support.

How it works

  1. The ark-sdk query extension resolves Tool CRDs → MCPServer CRDs, groups tools by server, and dereferences secrets in headers
  2. The executor maps each MCPServerConfig into the SDK’s mcp_servers option (renaming transport → type)
  3. Each server’s tool list acts as an allowlist — only tools the agent references are available, even if the server exposes more
  4. Built-in tools (Read, Write, Edit, Bash, Grep, Glob) remain available alongside MCP tools
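Steps 2 and 3 above can be sketched as a small mapping function. The field names (`url`, `transport`) follow the connection info described above; the helper name and dict shapes are assumptions, not the executor's actual code:

```python
# Hypothetical sketch of mapping resolved MCPServer info into the SDK's
# mcp_servers option (transport -> type) plus a per-agent tool allowlist.
def to_sdk_options(servers: dict[str, dict],
                   tools_by_server: dict[str, list[str]]):
    mcp_servers = {}
    allowed_tools = []
    for name, cfg in servers.items():
        entry = dict(cfg)
        # The SDK expects "type" where Ark's config says "transport".
        entry["type"] = entry.pop("transport")
        mcp_servers[name] = entry
        # Only tools the agent references are exposed, addressed as
        # mcp__<server>__<tool>, even if the server offers more.
        allowed_tools += [f"mcp__{name}__{t}"
                          for t in tools_by_server.get(name, [])]
    return mcp_servers, allowed_tools

servers = {"github-mcp": {"url": "http://github-mcp:8080", "transport": "http"}}
tools = {"github-mcp": ["search_repos"]}
mcp_servers, allowed = to_sdk_options(servers, tools)
print(mcp_servers["github-mcp"]["type"])  # http
print(allowed)                            # ['mcp__github-mcp__search_repos']
```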

Example

Define an MCPServer and Tool CRDs:

```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: MCPServer
metadata:
  name: github-mcp
spec:
  address:
    value: "http://github-mcp.default.svc.cluster.local:8080"
  transport: http
  timeout: 30s
  headers:
    - name: Authorization
      valueFrom:
        secretKeyRef:
          name: github-token
          key: token
---
apiVersion: ark.mckinsey.com/v1alpha1
kind: Tool
metadata:
  name: github-mcp-search-repos
spec:
  type: mcp
  mcp:
    mcpServerRef:
      name: github-mcp
    toolName: search_repos
```

Reference the tools in your agent:

```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Agent
metadata:
  name: my-claude-agent
spec:
  executionEngine:
    name: executor-claude-agent-sdk
  model:
    ref: claude-sonnet
  prompt: |
    You are a helpful assistant with access to GitHub.
  tools:
    - name: github-mcp-search-repos
```

The Claude subprocess will have access to the search_repos tool via mcp__github-mcp__search_repos, in addition to all built-in tools.

Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| app.image.repository | Container image | ghcr.io/mckinsey/agents-at-scale-marketplace/executor-claude-agent-sdk |
| service.port | Service port | 8000 |
| executionEngine.enabled | Register ExecutionEngine CRD | true |
| httpRoute.enabled | Enable Gateway API HTTPRoute | false |
| persistence.enabled | Enable PVC for sessions (standalone mode) | true |
| persistence.size | PVC size | 10Gi |
| scheduler.enabled | Enable scheduler mode with sandbox isolation | false |

Environment Variables

Model name and API key are configured via the Model CRD (see Prerequisites).

| Variable | Description | Default |
| --- | --- | --- |
| HOST | Server bind address | 0.0.0.0 |
| PORT | Server port | 8000 |
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP endpoint (enables tracing) | Disabled |
| OTEL_EXPORTER_OTLP_HEADERS | OTLP authentication headers | None |

Troubleshooting

Service Not Starting

Check pod status and logs:

```shell
kubectl get pods -l app=executor-claude-agent-sdk
kubectl logs -l app=executor-claude-agent-sdk
```

Model Config Error

If requests fail with a “provider ‘anthropic’” error, verify the Model CRD exists and is referenced from the Agent CRD:

```shell
kubectl get models
kubectl get agent my-claude-agent -o yaml | grep modelRef
```

ExecutionEngine Not Registered

Verify the ExecutionEngine CRD exists:

kubectl get executionengines

Health Check Failing

Test the health endpoint directly:

```shell
kubectl port-forward svc/executor-claude-agent-sdk 8000:8000
curl http://localhost:8000/health
```

Uninstall

Using DevSpace:

```shell
cd executors/claude-agent-sdk
devspace purge
```

Using Helm:

helm uninstall executor-claude-agent-sdk -n default

Note: The PVC is not deleted on uninstall. To remove session data:

kubectl delete pvc executor-claude-agent-sdk-sessions -n default
