# LangChain Executor
LangChain-based execution engine for Ark agents with optional Retrieval-Augmented Generation (RAG) support. Communicates using the A2A (Agent-to-Agent) protocol.
## Overview
The LangChain Executor provides:
- LangChain Framework Integration - Execute agents using LangChain with OpenAI and Azure OpenAI models
- Conversation History - Server-side conversation memory keyed by `conversationId`, using LangChain's `ChatMessageHistory`
- RAG Support - Retrieval-Augmented Generation with FAISS vector search for code-aware responses
- A2A Protocol - Compliant with the Agent-to-Agent protocol for seamless integration with Ark
- ExecutionEngine CRD - Registers as a Kubernetes-native execution engine
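As a rough illustration of the conversation-history feature (not the executor's actual implementation, which uses LangChain's `ChatMessageHistory`), server-side memory keyed by `conversationId` follows this pattern:

```python
from collections import defaultdict

# Illustrative sketch only: shows the conversationId-keyed memory pattern,
# not the executor's real LangChain-based code.
class ConversationStore:
    def __init__(self):
        # conversationId -> ordered list of (role, content) messages
        self._histories = defaultdict(list)

    def add_message(self, conversation_id: str, role: str, content: str):
        self._histories[conversation_id].append((role, content))

    def get_history(self, conversation_id: str):
        # An unseen conversationId simply starts with an empty history.
        return list(self._histories[conversation_id])

store = ConversationStore()
store.add_message("conv-1", "user", "Hello")
store.add_message("conv-1", "assistant", "Hi! How can I help?")
store.add_message("conv-2", "user", "Unrelated question")

print(len(store.get_history("conv-1")))  # 2
print(len(store.get_history("conv-2")))  # 1
```

Because histories are held per `conversationId`, two clients using different ids never see each other's messages.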
## Install

```bash
ark install marketplace/executors/executor-langchain
```

Or with DevSpace:

```bash
cd executors/langchain
devspace deploy
```

Or with Helm:

```bash
helm install executor-langchain ./chart -n default --create-namespace
```

Access via:

- Port forward: `kubectl port-forward svc/executor-langchain 8000:8000`
- Health check: `curl http://localhost:8000/health`
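Beyond the health check, the server speaks the A2A protocol over JSON-RPC. A hedged sketch of a `message/send` request body follows; the field names track the public A2A specification, but consult the executor's agent card for the exact endpoint and supported fields:

```python
import json

# Hedged sketch of an A2A "message/send" JSON-RPC request. Field names
# follow the public A2A spec; the contextId usage as the conversation
# key is an assumption based on this executor's conversation support.
payload = {
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello, agent"}],
            "messageId": "msg-1",
            "contextId": "conv-1",  # ties the message to a server-side conversation
        }
    },
}

body = json.dumps(payload)
print(json.loads(body)["method"])  # message/send
```

POSTing a body like this to the forwarded service (for example with `curl`) would exercise the same path Ark uses when it routes agent queries to the executor.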
## ExecutionEngine Resource
The LangChain executor registers as an ExecutionEngine using the A2A protocol:

```yaml
apiVersion: ark.mckinsey.com/v1prealpha1
kind: ExecutionEngine
metadata:
  name: executor-langchain
spec:
  address:
    valueFrom:
      serviceRef:
        name: executor-langchain
  description: "LangChain Executor - A2A server with RAG support"
```

## Creating Agents
Reference the execution engine in your agent spec:

```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Agent
metadata:
  name: my-langchain-agent
spec:
  executionEngine:
    name: executor-langchain
  modelRef:
    name: my-model
  prompt: |
    You are a helpful assistant.
```

## RAG Support
Enable RAG for an agent by adding the `langchain: rag` label:
```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Agent
metadata:
  name: my-rag-agent
  labels:
    langchain: rag
spec:
  executionEngine:
    name: executor-langchain
  modelRef:
    name: my-model
  prompt: |
    You are an expert developer assistant with deep knowledge of the codebase.
    When RAG context is provided, use it to give accurate, specific answers.
```

When RAG is enabled, the engine indexes local Python files and provides relevant code context via FAISS vector search.
## Custom Embeddings Model
Specify a custom embeddings model via the `langchain-embeddings-model` label:

```yaml
metadata:
  labels:
    langchain: rag
    langchain-embeddings-model: text-embedding-3-small
```

## Configuration
| Parameter | Description | Default |
|---|---|---|
| `app.image.repository` | Container image | `ghcr.io/mckinsey/agents-at-scale-marketplace/executor-langchain` |
| `service.port` | Service port | `8000` |
| `executionEngine.enabled` | Register ExecutionEngine CRD | `true` |
| `httpRoute.enabled` | Enable Gateway API HTTPRoute | `false` |
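These parameters map to Helm chart values and can be overridden at install time. A hypothetical `values-override.yaml` (the file name and the chosen values are illustrative, not defaults you must change):

```yaml
# values-override.yaml (illustrative overrides for the chart parameters above)
app:
  image:
    repository: ghcr.io/mckinsey/agents-at-scale-marketplace/executor-langchain
service:
  port: 8000
executionEngine:
  enabled: true
httpRoute:
  enabled: true
```

Apply it with `helm install executor-langchain ./chart -n default -f values-override.yaml`, or set individual values inline with `--set`, e.g. `--set httpRoute.enabled=true`.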
## Environment Variables

| Variable | Description | Default |
|---|---|---|
| `HOST` | Server bind address | `0.0.0.0` |
| `PORT` | Server port | `8000` |
## Supported Model Types

| Type | Description |
|---|---|
| `openai` | OpenAI API with configurable properties |
| `azure` | Azure OpenAI with deployment URLs |
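For orientation, a model resource for the `openai` type might look like the following. The `spec` layout here is an assumption for illustration only; consult the Ark Model documentation for the authoritative field names:

```yaml
# Illustrative only: field names below are assumptions, not the
# authoritative Ark Model schema.
apiVersion: ark.mckinsey.com/v1alpha1
kind: Model
metadata:
  name: my-model        # matches the modelRef used in the agent specs above
spec:
  type: openai          # or "azure" for Azure OpenAI deployment URLs
  model:
    value: gpt-4o       # hypothetical model name
```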
## Troubleshooting

### Service Not Starting

Check pod status and logs:

```bash
kubectl get pods -l app=executor-langchain
kubectl logs -l app=executor-langchain
```

### ExecutionEngine Not Registered

Verify the ExecutionEngine CRD exists:

```bash
kubectl get executionengines
```

### Health Check Failing

Test the health endpoint directly:

```bash
kubectl port-forward svc/executor-langchain 8000:8000
curl http://localhost:8000/health
```

## Uninstall
Using DevSpace:

```bash
cd executors/langchain
devspace purge
```

Using Helm:

```bash
helm uninstall executor-langchain -n default
```