# LangChain Execution Engine
LangChain-based execution engine for processing agent queries with optional RAG support.
ARK includes a LangChain execution engine that provides specialized processing capabilities beyond standard AI model interactions, including Retrieval-Augmented Generation (RAG), custom chains, and LangChain framework integration.
## Installation

```bash
make executor-langchain-install
```
## Features
- LangChain framework integration
- Optional RAG (Retrieval-Augmented Generation) support via agent labels
- Memory persistence for stateful conversations
- Compatible with all Model providers (Azure OpenAI, OpenAI, Ollama); see the sketch below
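As a rough sketch of provider configuration, a Model resource for OpenAI might look like the following. The spec fields shown (`type`, `model`, `config`) and the secret name are assumptions for illustration; check the Model documentation for the authoritative schema:

```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Model
metadata:
  name: default
spec:
  # Field names below are illustrative assumptions, not the authoritative schema.
  type: openai
  model:
    value: gpt-4o
  config:
    openai:
      apiKey:
        valueFrom:
          secretKeyRef:
            name: openai-secret # assumed secret name
            key: api-key
```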
## Local Development

```bash
make executor-langchain-dev
```
## RAG Support
Enable RAG for an agent by adding the label:
```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Agent
metadata:
  name: my-agent
  labels:
    langchain: rag
spec:
  executionEngine:
    name: langchain-engine
  prompt: |
    You are an expert Python developer assistant with deep knowledge of the codebase.
    When RAG context is provided, use it to give accurate, specific answers about the code.
    Reference specific functions, classes, and modules when relevant.
    Provide code examples from the indexed codebase when helpful.
```
When RAG is enabled, the engine indexes local Python files and provides relevant code context to the agent.
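For example, a Query targeting the RAG-enabled agent can ask about the indexed code. The query name and input below are illustrative:

```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Query
metadata:
  name: rag-code-query # illustrative name
spec:
  input: "Which module implements the request retry logic?"
  targets:
    - type: agent
      name: my-agent
```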
## Memory Support
To enable stateful conversations with memory persistence, you must install a Memory resource. The PostgreSQL Memory Service provides a production-ready memory implementation that can be used with this execution engine.
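As a minimal sketch, a Memory resource wired to the PostgreSQL memory service might look like the following; the `address` field and service URL are assumptions for illustration, so consult the PostgreSQL Memory documentation for the actual spec:

```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Memory
metadata:
  name: postgres-memory
spec:
  # Illustrative assumption: the spec points at the in-cluster address
  # exposed by the PostgreSQL Memory Service.
  address:
    value: http://postgres-memory.default.svc.cluster.local:8080
```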
Example query with memory:
```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Query
metadata:
  name: my-query
spec:
  input: "Remember that my name is Alice"
  targets:
    - type: agent
      name: my-agent
  memory:
    name: postgres-memory # Reference to the installed Memory resource
  sessionId: "alice-session"
```
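A follow-up query that reuses the same `sessionId` lets the agent recall the earlier turn; the name and input here are illustrative:

```yaml
apiVersion: ark.mckinsey.com/v1alpha1
kind: Query
metadata:
  name: my-query-followup # illustrative name
spec:
  input: "What is my name?"
  targets:
    - type: agent
      name: my-agent
  memory:
    name: postgres-memory
  sessionId: "alice-session" # same session as the first query
```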
## Next Steps
- PostgreSQL Memory - Persistent memory storage
- A2A Gateway - Agent-to-agent communication