Upgrading
Ark uses Semantic Versioning. The technical details are in the semantic versioning specification. Briefly, the rules are:
- The `v0.1.x` version line is the initial, API-unstable development phase of the project; breaking changes can occur on any new release.
- From `v0.2.x` onwards, breaking changes are fully documented in this guide, with migration instructions.
- Patch releases (e.g. `v0.2.1` to `v0.2.2` within the `v0.2.x` line) do not introduce breaking changes, and should have no special instructions to follow.
- Minor releases (e.g. `v0.2.2` to `v0.3.0`) introduce small changes. Minor workarounds might be needed, and the changelog should be checked.
- Minor releases prior to `v1.0.0` (e.g. `v0.3.0` to `v0.4.0`) may introduce backwards-incompatible behavior changes, which are documented in this guide with migration instructions.
A detailed record of every change is documented in CHANGELOG.md.
v0.1.50
Broker Service Rename
The ark-cluster-memory Helm release has been renamed to ark-cluster-broker.
If using the Ark CLI: run `ark install`; it handles uninstalling the old release automatically.
If using Helm directly: you must uninstall the old release first, otherwise Helm cannot adopt the shared `ark-config-streaming` ConfigMap:
```
Error: Unable to continue with install: ConfigMap "ark-config-streaming" in namespace "default"
exists and cannot be imported into the current release: invalid ownership metadata;
annotation validation error: key "meta.helm.sh/release-name" must equal "ark-broker":
current value is "ark-cluster-memory"
```

Migration (Helm users only):

```bash
helm uninstall ark-cluster-memory
```

Then proceed with installing the new version. The `ark-cluster-broker` release will be created.
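For Helm users, the uninstall is safe to script as part of the upgrade; a minimal sketch, assuming the release lives in the default namespace (adjust `-n` to wherever Ark is deployed):

```bash
# Remove the old release only if it is still installed.
if helm status ark-cluster-memory -n default >/dev/null 2>&1; then
  helm uninstall ark-cluster-memory -n default
fi
# Then install the new version as usual - the ark-cluster-broker release will be created.
```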
Session ID and Conversation ID
Prior to v0.1.50, sessionId was used for both query grouping (telemetry) and memory/conversation continuity. This caused issues when users wanted to group queries by workflow but not share conversation memory.
From v0.1.50, these are separate concepts:
- `sessionId`: Groups related queries for tracking and telemetry (e.g., workflow ID)
- `conversationId`: Associates queries with a specific memory thread for context continuity
The conversationId is generated server-side when a query uses memory. To continue a conversation, pass back the conversationId from a previous query response.
This is a breaking change if you relied on sessionId for memory continuity. Queries with the same sessionId no longer automatically share conversation context.
Migration:
If you were using sessionId to maintain conversation context across queries:
- On the first query, note the `conversationId` from the response
- Pass `conversationId` on subsequent queries to continue the conversation
```yaml
# Query with memory - conversationId generated server-side
apiVersion: ark.mckinsey.com/v1alpha1
kind: Query
spec:
  input: "What did we discuss?"
  target:
    name: my-agent
  memory:
    name: ark-broker
  sessionId: "workflow-123"       # For telemetry grouping
  conversationId: "conv-456-789"  # For memory continuity (from previous response)
```

See #550 and #596 for details.
MCP Server Path Configuration
Prior to v0.1.50, the Ark controller automatically appended /mcp or /sse to MCP server addresses based on the transport type. This prevented users from connecting to MCP servers with custom endpoint paths.
From version v0.1.50, MCP server addresses are used exactly as specified. You must include the full path in your MCPServer configuration.
This is a breaking change for existing MCPServer resources that rely on automatic path appending.
Symptoms after upgrade:
- MCPServer shows `Available=False`
- Event: `Warning ClientCreationFailed Failed to create MCP client`
- Condition reason: `ClientCreationFailed`
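To confirm the failure, inspect the MCPServer resource and its events; a sketch (the lowercase resource name `mcpserver` and the event filter are assumptions, so adjust for your cluster):

```bash
# Show the MCPServer's conditions, including the ClientCreationFailed reason
kubectl describe mcpserver my-mcp-server

# List recent events for client creation failures
kubectl get events --field-selector reason=ClientCreationFailed
```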
Migration:
Update your MCPServer resources to include the endpoint path:
```yaml
# Before (path auto-appended)
spec:
  address:
    valueFrom:
      serviceRef:
        name: my-mcp-server
        port: http
  transport: http
```
```yaml
# After (explicit path required)
spec:
  address:
    valueFrom:
      serviceRef:
        name: my-mcp-server
        port: http
        path: /mcp  # Add this for HTTP transport
  transport: http
```

For direct value addresses:
```yaml
# Before
address:
  value: "http://my-server:8080"

# After
address:
  value: "http://my-server:8080/mcp"
```

Use `path: /sse` for SSE transport.
Model Provider Field
Prior to v0.1.50, the spec.type field on Models specified the AI provider (openai, azure, bedrock).
From v0.1.50, provider and type are separate fields:
- `spec.provider`: The model provider (`openai`, `azure`, or `bedrock`)
- `spec.type`: The model type (`completions` for an OpenAI V1 Chat Completions API compliant model)
New models should use the new format:
```yaml
spec:
  provider: openai
  type: completions  # optional, defaults to completions
```

Existing models using the old format (such as `spec.type: openai`) will show as unavailable with a clear error message:

```
provider is required - update model to migrate 'openai' from spec.type to spec.provider
```

On create or update, models are automatically migrated to the new format by the admission webhook.
The automatic migration will be removed in v1.0.0. Update your manifests to the new format before upgrading to v1.0.0.
To migrate existing models:
```bash
# View current models - PROVIDER column may be empty for old format, and models
# which have not been migrated will have the 'type' set to openai/azure/etc.
kubectl get models
# e.g:
# NAME       TYPE          PROVIDER   MODEL   AVAILABLE   AGE
# my-model   openai                   gpt-4   False       2d16h

# Trigger migration by annotating all models
kubectl annotate models --all 'ark.mckinsey.com/migrate-provider-0.1.50=done' --overwrite

# Verify migration - PROVIDER column now populated and TYPE set correctly.
kubectl get models
# e.g:
# NAME       TYPE          PROVIDER   MODEL   AVAILABLE   AGE
# my-model   completions   openai     gpt-4   True        2d16h
```

This is a breaking change for existing models that have not been updated. They will remain unavailable until migrated.
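If your model manifests live in source control, update them to the new format rather than relying on the webhook migration; a sketch of the change (other Model fields are unchanged and omitted):

```yaml
# Before (old format)
spec:
  type: openai

# After (new format)
spec:
  provider: openai
  type: completions  # optional, defaults to completions
```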
Query Single Target
Prior to v0.1.50, Query resources supported multiple targets via spec.targets[]:
```yaml
# Old format (no longer supported)
spec:
  input: "What is 2+2?"
  targets:
    - type: agent
      name: math-agent
    - type: agent
      name: backup-agent
```

From v0.1.50, Query resources support a single target via `spec.target`:
```yaml
# New format
spec:
  input: "What is 2+2?"
  target:
    type: agent
    name: math-agent
```

This is a breaking change. Existing Query manifests using `spec.targets[]` must be updated to use `spec.target`.
To query multiple targets, create one Query per target and check the results individually.
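For example, the two-target Query above becomes two separate Query resources; a sketch reusing the agents from the old-format example (the metadata names are hypothetical):

```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Query
metadata:
  name: math-query
spec:
  input: "What is 2+2?"
  target:
    type: agent
    name: math-agent
---
apiVersion: ark.mckinsey.com/v1alpha1
kind: Query
metadata:
  name: math-query-backup
spec:
  input: "What is 2+2?"
  target:
    type: agent
    name: backup-agent
```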
Agent Tool Type Deprecation
Prior to v0.1.50, agents referenced Tool resources using type: custom:
```yaml
# Old format (deprecated)
spec:
  tools:
    - type: custom
      name: get-coordinates   # References an HTTP Tool
    - type: custom
      name: github-read-file  # References an MCP Tool
```

From v0.1.50, agents should use explicit tool types that match the Tool resource type:
```yaml
# New format
spec:
  tools:
    - type: http
      name: get-coordinates   # HTTP tool
    - type: mcp
      name: github-read-file  # MCP tool
    - type: agent
      name: sub-agent-tool    # Agent tool
    - type: team
      name: research-team     # Team tool
    - type: builtin
      name: noop              # Builtin tool
```

This is a non-breaking change. `type: custom` continues to work but shows a deprecation warning. The controller now validates that the declared type matches the Tool CRD type (except for `custom`, which bypasses validation for backwards compatibility).
`type: custom` will be removed in v1.0.0. Update your agent manifests to use explicit tool types.
To find agents using the deprecated `type: custom`:
```bash
# List agents with 'type: custom' tools
kubectl get agents -A -o json | jq -r '.items[] | select(.spec.tools[]?.type == "custom") | "\(.metadata.namespace)/\(.metadata.name)"'
```

Migration:
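To see every Tool's declared type at a glance before editing manifests, kubectl's custom-columns output can help; a sketch using the same `.spec.type` field as the per-tool check below:

```bash
# List all Tool resources with their declared types across namespaces
kubectl get tools -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.spec.type
```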
To determine the correct type for each tool, check the Tool resource:
```bash
# Check a tool's type
kubectl get tool <tool-name> -o jsonpath='{.spec.type}'
```

Most `type: custom` tools are MCP tools and should be updated to `type: mcp`. Update your agent manifests accordingly:
```yaml
# Before
tools:
  - type: custom
    name: github-read-file

# After
tools:
  - type: mcp
    name: github-read-file
```

v0.1.34
Agent Model References
Prior to v0.1.34, if an Agent had no `modelRef` specified, the model named `default` was assumed.
This led to some challenges:
- When viewing agent details (e.g. with `kubectl get agents`), the agent model was not shown, which could make it unclear that the model was in fact implied to be `default`
- It was impossible to differentiate between an agent that was using the `default` model and an agent that has no model set (for example, A2A agents have no model)
- A bug meant that A2A agents would show as `Available=False` if there was no `default` model or if the `default` model was not healthy
From version v0.1.34, if an Agent has no modelRef specified:
- On admission or update, if the agent is not an A2A agent (i.e. it does not have the `ark.mckinsey.com/a2a-server-name` annotation set) and no `modelRef` is specified, then `modelRef` is set to `default`
- A2A agents' availability is no longer incorrectly associated with the `default` model
This is a non-breaking change when you create or update your resources (applying agents via YAML, APIs, and so on). It is a breaking change for existing agents running in your cluster that have no `modelRef`: until such an agent is reconciled, its model will not be set to `default`.
To migrate existing agents, simply update them. The controller will reconcile the agent and set the correct model. You can annotate agents to trigger an update:
```bash
# Show agents. We have one with a model, one without, and one a2a agent.
kubectl get agents
# e.g:
# NAME            MODEL           AVAILABLE   AGE
# code-reviewer   claude-4-opus   True        2d16h
# team-leader                     True        2d16h
# a2a-agent                       True        2d16h

# Update each agent by adding an annotation. Agents without a model will have it
# set to 'default'. Use '--all-namespaces' if you want to update the entire
# cluster.
kubectl annotate agents --all 'ark.mckinsey.com/migrate-0.1.34=done' --overwrite

# Show agents - the default model is now set for non-a2a agents.
kubectl get agents
# e.g:
# NAME            MODEL           AVAILABLE   AGE
# code-reviewer   claude-4-opus   True        2d16h
# team-leader     default         True        2d16h
# a2a-agent                       True        2d16h
```
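If you would rather not depend on the defaulting behavior at all, set the model explicitly in your agent manifests; a sketch, assuming `modelRef` takes a `name` field (verify against your Agent CRD):

```yaml
spec:
  modelRef:
    name: default  # or any other Model in the cluster
  # ...rest of the agent spec unchanged
```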