
LangChain Executor

LangChain-based execution engine for Ark agents with optional Retrieval-Augmented Generation (RAG) support. Communicates using the A2A (Agent-to-Agent) protocol.

Overview

The LangChain Executor provides:

  • LangChain Framework Integration - Execute agents using LangChain with OpenAI and Azure OpenAI models
  • Conversation History - Server-side conversation memory keyed by conversationId, using LangChain’s ChatMessageHistory
  • RAG Support - Retrieval-Augmented Generation with FAISS vector search for code-aware responses
  • A2A Protocol - Compliant with the Agent-to-Agent protocol for seamless integration with Ark
  • ExecutionEngine CRD - Registers as a Kubernetes-native execution engine
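The conversation-memory bullet above can be sketched in a few lines. This is an illustrative stand-in, not the executor's actual code: the real service uses LangChain's ChatMessageHistory, while a plain dict of message lists keyed by conversationId shows the same server-side pattern.

```python
# Minimal sketch of server-side conversation memory keyed by conversationId.
# The real executor uses LangChain's ChatMessageHistory; a dict of message
# lists stands in here. All names are illustrative.
from collections import defaultdict

_histories = defaultdict(list)  # conversationId -> list of (role, text)

def get_history(conversation_id):
    """Return (and lazily create) the history for one conversation."""
    return _histories[conversation_id]

def record_turn(conversation_id, user_msg, assistant_msg):
    """Append one user/assistant exchange to the conversation's history."""
    history = get_history(conversation_id)
    history.append(("user", user_msg))
    history.append(("assistant", assistant_msg))

record_turn("conv-1", "Hello", "Hi there!")
record_turn("conv-1", "What did I say?", "You said: Hello")
print(len(get_history("conv-1")))  # each turn adds two messages
```

Because the history lives on the server and is keyed by conversationId, clients only need to resend the same ID to continue a conversation.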

Install

ark install marketplace/executors/executor-langchain

Or with DevSpace:

cd executors/langchain
devspace deploy

Or with Helm:

helm install executor-langchain ./chart -n default --create-namespace

Access via:

  • Port forward: kubectl port-forward svc/executor-langchain 8000:8000
  • Health check: curl http://localhost:8000/health

ExecutionEngine Resource

The LangChain executor registers as an ExecutionEngine resource that communicates over the A2A protocol:

apiVersion: ark.mckinsey.com/v1prealpha1
kind: ExecutionEngine
metadata:
  name: executor-langchain
spec:
  address:
    valueFrom:
      serviceRef:
        name: executor-langchain
  description: "LangChain Executor - A2A server with RAG support"

Creating Agents

Reference the execution engine in your agent spec:

apiVersion: ark.mckinsey.com/v1alpha1
kind: Agent
metadata:
  name: my-langchain-agent
spec:
  executionEngine:
    name: executor-langchain
  modelRef:
    name: my-model
  prompt: |
    You are a helpful assistant.

RAG Support

Enable RAG for an agent by adding the langchain: rag label:

apiVersion: ark.mckinsey.com/v1alpha1
kind: Agent
metadata:
  name: my-rag-agent
  labels:
    langchain: rag
spec:
  executionEngine:
    name: executor-langchain
  modelRef:
    name: my-model
  prompt: |
    You are an expert developer assistant with deep knowledge of the codebase.
    When RAG context is provided, use it to give accurate, specific answers.

When RAG is enabled, the engine indexes local Python files and provides relevant code context via FAISS vector search.
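The index-then-retrieve flow described above can be sketched without the real dependencies. This is a toy stand-in: the actual engine uses OpenAI embeddings and FAISS, whereas here a bag-of-words embedding and brute-force cosine search make the control flow visible and runnable.

```python
# Hedged sketch of the RAG flow: embed code chunks, then retrieve the
# nearest chunks for a query. The real executor uses OpenAI embeddings and
# a FAISS index; a toy token-count embedding and linear scan stand in.
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a bag of lowercase alphanumeric tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(chunks):
    """Pre-embed every chunk (FAISS would store these vectors)."""
    return [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(index, query, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

index = build_index([
    "def health(): return {'status': 'ok'}",
    "def add(a, b): return a + b",
    "class Agent: ...",
])
print(retrieve(index, "health status endpoint", k=1))
```

The retrieved chunks are what gets injected into the prompt as "RAG context" for the agent to ground its answer on.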

Custom Embeddings Model

Specify a custom embeddings model via the langchain-embeddings-model label:

metadata:
  labels:
    langchain: rag
    langchain-embeddings-model: text-embedding-3-small
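The two labels above are the per-agent switches for RAG. A hedged sketch of how an executor might read them follows; the label names come from this page, but the parsing helper and its fallback behavior are assumptions, not the executor's actual code.

```python
# Illustrative parsing of the agent labels documented above.
# Assumption: an absent embeddings label means "use the engine default".
def rag_settings(labels):
    """Return (rag_enabled, embeddings_model_or_None) from agent labels."""
    enabled = labels.get("langchain") == "rag"
    model = labels.get("langchain-embeddings-model")  # None -> engine default
    return enabled, model

print(rag_settings({"langchain": "rag",
                    "langchain-embeddings-model": "text-embedding-3-small"}))
```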

Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| app.image.repository | Container image | ghcr.io/mckinsey/agents-at-scale-marketplace/executor-langchain |
| service.port | Service port | 8000 |
| executionEngine.enabled | Register ExecutionEngine CRD | true |
| httpRoute.enabled | Enable Gateway API HTTPRoute | false |

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| HOST | Server bind address | 0.0.0.0 |
| PORT | Server port | 8000 |
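These variables are typically resolved once at startup. The sketch below shows the standard pattern with the documented defaults; the function name and the exact entrypoint are illustrative, not taken from the executor's source.

```python
# How HOST/PORT are conventionally consumed at startup (illustrative).
import os

def server_config(env=None):
    """Resolve bind address and port, falling back to documented defaults."""
    env = os.environ if env is None else env
    host = env.get("HOST", "0.0.0.0")
    port = int(env.get("PORT", "8000"))
    return host, port

print(server_config({}))
# The real service would then pass these to its ASGI runner, e.g.
# uvicorn.run(app, host=host, port=port).
```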

Supported Model Types

| Type | Description |
| --- | --- |
| openai | OpenAI API with configurable properties |
| azure | Azure OpenAI with deployment URLs |
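Dispatching on these two types usually looks like a simple branch. The sketch below returns plain dicts so it runs without credentials; in the real executor the branches would construct LangChain ChatOpenAI and AzureChatOpenAI clients, and the property names here are assumptions.

```python
# Hedged sketch of model-type dispatch for the two supported types.
# Plain dicts stand in for LangChain ChatOpenAI / AzureChatOpenAI clients.
def build_model_config(model_type, **props):
    """Map a model type plus its properties to a provider config."""
    if model_type == "openai":
        # real code: ChatOpenAI(model=props["model"], api_key=...)
        return {"provider": "openai", **props}
    if model_type == "azure":
        # real code: AzureChatOpenAI(azure_endpoint=props["deployment_url"], ...)
        return {"provider": "azure", **props}
    raise ValueError("unsupported model type: %s" % model_type)

print(build_model_config("openai", model="gpt-4o"))
```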

Troubleshooting

Service Not Starting

Check pod status and logs:

kubectl get pods -l app=executor-langchain
kubectl logs -l app=executor-langchain

ExecutionEngine Not Registered

Verify the ExecutionEngine CRD exists:

kubectl get executionengines

Health Check Failing

Test the health endpoint directly:

kubectl port-forward svc/executor-langchain 8000:8000
curl http://localhost:8000/health

Uninstall

Using DevSpace:

cd executors/langchain
devspace purge

Using Helm:

helm uninstall executor-langchain -n default
