Langfuse Service

Langfuse provides comprehensive observability for AI applications, offering detailed tracing, monitoring, and analytics for your ARK agents and teams. This guide covers setting up and using Langfuse with ARK.

Overview

Langfuse integration with ARK provides:

  • LLM Call Tracing - Track all model interactions and API calls
  • Cost Monitoring - Monitor API usage and associated costs
  • Performance Analytics - Response times, throughput, and latency metrics
  • Session Management - Group related interactions and conversations
  • Error Tracking - Identify and debug issues in agent workflows

Installation

Quick Start

Deploy Langfuse as part of your ARK installation:

# From repository root - install Langfuse with headless initialization
make langfuse-install

# Show credentials and environment variables
make langfuse-credentials

# Open dashboard (automatically logs you in)
make langfuse-dashboard

Headless Initialization

Langfuse is automatically configured with:

  • Organization: ark
  • Project: ark
  • User: ark@ark.com / password123
  • API Keys: Pre-configured for OpenTelemetry integration

The installation automatically:

  1. Deploys Langfuse to the telemetry namespace
  2. Configures OTEL environment variables for ARK services
  3. Restarts ARK controller with proper telemetry configuration
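
To confirm these steps completed, a quick check along these lines should work (a sketch; the secret and deployment names match those used in the troubleshooting section below):

# Langfuse pods running in the telemetry namespace
kubectl get pods -n telemetry

# OTEL environment variables secret present for ARK services
kubectl get secret otel-environment-variables -n ark-system

# ARK controller restarted and healthy with the new telemetry configuration
kubectl rollout status deployment/ark-controller-manager -n ark-system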

Accessing Langfuse Dashboard

Local Development Access

The make langfuse-dashboard command automatically opens the dashboard and logs you in. Alternatively, you can access it manually:

# Option 1: minikube tunnel (exposes on standard ports)
minikube tunnel
# Access via: http://localhost/langfuse

# Option 2: port-forward nginx ingress (custom port)
kubectl port-forward service/nginx-ingress 8080:80 -n ark-system
# Access via: http://localhost:8080/langfuse

# Option 3: direct port-forward to langfuse service
kubectl port-forward service/langfuse-web 5264:3000 -n telemetry
# Access via: http://localhost:5264

Gateway API Access

Langfuse is configured with a Gateway API HTTPRoute for namespace-based access:

# Access via nip.io DNS pattern
open http://langfuse.telemetry.127.0.0.1.nip.io:8080
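
To check that the route responds without opening a browser, something like this should do (an illustrative check, assuming the Gateway listener is on port 8080 as in the URL above):

# Expect an HTTP response from the Langfuse web UI via the Gateway
curl -I http://langfuse.telemetry.127.0.0.1.nip.io:8080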

Configuration

OTEL Environment Variables

The installation automatically configures these OpenTelemetry environment variables:

# Automatically configured by make langfuse-install
OTEL_EXPORTER_OTLP_ENDPOINT=http://langfuse-web.telemetry.svc.cluster.local:3000/api/public/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Basic <base64-encoded-credentials>
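
The Authorization value is a standard HTTP Basic credential. As a sketch, assuming it is derived from the pre-configured public and secret keys listed below, it could be reproduced like this:

# Illustrative only: encode "<public-key>:<secret-key>" as base64 for the OTLP header
echo -n "lf_pk_1234567890:lf_sk_1234567890" | base64
# Use the output as: OTEL_EXPORTER_OTLP_HEADERS=Authorization=Basic <output>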

Service Configuration

Langfuse is deployed with these default settings:

  • Namespace: telemetry
  • Service Name: langfuse-web
  • Port: 3000
  • Public Key: lf_pk_1234567890
  • Secret Key: lf_sk_1234567890
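
To confirm these defaults in a running cluster, a quick check might look like this:

# The langfuse-web service should expose port 3000 in the telemetry namespace
kubectl get service langfuse-web -n telemetry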

Automatic OTEL Header Deployment

The installation automatically deploys OTEL configuration to:

  • ark-system namespace
  • default namespace

This enables automatic telemetry collection from ARK services.
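
To verify that the configuration landed in both namespaces, a check like this should suffice (the secret name matches the one referenced in the troubleshooting section):

# The OTEL configuration secret should exist in both namespaces
kubectl get secret otel-environment-variables -n ark-system
kubectl get secret otel-environment-variables -n default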

Using Langfuse Dashboard

Dashboard Overview

Once logged in, the Langfuse dashboard provides:

Traces View

  • Complete execution traces for ARK controller operations
  • Model calls and responses
  • Timing and performance metrics

Sessions View

  • Grouped interactions and conversations
  • Session-level analytics

Models View

  • Model usage statistics
  • Cost breakdown by model
  • Performance comparisons

Viewing ARK Traces

After installation, you should immediately see a 'startup' trace from the ARK controller. This confirms that telemetry is working correctly.

Traces include:

  • Controller startup events
  • Query execution flows
  • Model interactions
  • Tool executions
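
Beyond the dashboard, traces can also be listed programmatically. As a sketch, assuming the Langfuse public REST API is reachable through the direct port-forward shown earlier and accepts the pre-configured keys as Basic auth:

# List recent traces via the Langfuse public API (requires the langfuse-web port-forward on 5264)
curl -s -u lf_pk_1234567890:lf_sk_1234567890 http://localhost:5264/api/public/traces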

Troubleshooting

Langfuse Service Issues

# Check Langfuse pod status
kubectl get pods -l app=langfuse -n telemetry

# View Langfuse logs
kubectl logs -l app=langfuse -n telemetry

# Check service connectivity
kubectl exec -it <ark-controller-pod> -- curl http://langfuse-web.telemetry.svc.cluster.local:3000/health

OTEL Configuration Issues

# Check OTEL environment variables
kubectl get secret otel-environment-variables -n ark-system -o yaml

# Verify ARK controller has OTEL config
kubectl describe deployment ark-controller-manager -n ark-system | grep -A 10 envFrom

# Check telemetry initialization logs
kubectl logs deployment/ark-controller-manager -n ark-system | grep telemetry

Dashboard Access Issues

# Check if Langfuse is ready
kubectl get deployment langfuse-web -n telemetry

# Test direct port-forward access
kubectl port-forward service/langfuse-web 5264:3000 -n telemetry

# Check Gateway API configuration
kubectl get httproute langfuse-telemetry -n telemetry -o yaml

Uninstalling

To remove Langfuse and clean up OTEL configuration:

# Uninstall Langfuse and clean up OTEL secrets
make langfuse-uninstall

This will:

  1. Remove Langfuse from the telemetry namespace
  2. Delete OTEL environment variable secrets from ark-system namespace
  3. Clean up installation stamps
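
To confirm the cleanup, a couple of checks along these lines should work (expect no Langfuse pods and a NotFound error for the secret once removal has completed):

# Langfuse resources should be gone from the telemetry namespace
kubectl get pods -n telemetry

# The OTEL environment variable secret should be removed from ark-system
kubectl get secret otel-environment-variables -n ark-system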