There is a tension in building AI applications. Visual tools are fast to prototype with but you eventually hit their ceiling — you need to do something they do not support and suddenly you are stuck. Code-first frameworks give you full control but mean slow iteration and a steep learning curve for non-engineers on your team.
Langflow occupies a different position: it is a visual builder where every component is actual Python code. You drag and drop to build workflows, but you can click into any node and edit the source. The visual representation and the code are the same thing. When you need to go deeper, you can.
146,000 GitHub stars. Actively maintained. Available as a desktop app, a pip package, and a Docker image. This guide walks through setup, a real workflow, and the five features worth knowing.
Before We Start: What You Need
Langflow is a Python application. You need Python 3.10 or later.
Requirements
─────────────────────────────────────────
Python   3.10 or later
RAM      4 GB minimum (8 GB recommended)
Disk     2 GB for Langflow + dependencies
OS       Linux, macOS, Windows
Langflow also supports:
- Docker — for containerised deployments
- Langflow Desktop — a native desktop app for Windows and macOS (no Python setup needed)
For this guide, we use the Python install path.
Installation
Option A: pip (recommended for development)
pip install langflow -U
Start Langflow:
langflow run
Option B: uv (faster, better dependency isolation)
pip install uv
uv pip install langflow -U
uv run langflow run
Option C: Docker
docker run -it --rm \
  -p 7860:7860 \
  -v langflow-data:/app/langflow \
  langflowai/langflow:latest
Accessing Langflow
All three options open Langflow at:
http://127.0.0.1:7860
On first load, you will see the Langflow canvas — an empty workspace ready for components.
How Langflow Works: The Concepts
Before building, understand three core concepts:
Langflow concepts
──────────────────────────────────────────────────────────
Component     A single node in the graph.
              Each component is a Python class.
              Examples: ChatInput, OpenAI, ChromaDB, TextOutput

Flow          A directed graph of connected components.
              Data flows from left to right along the edges.

Playground    A built-in chat interface to test your flow
              without leaving Langflow.
──────────────────────────────────────────────────────────
A simple flow looks like this:
[ChatInput] ──► [OpenAI LLM] ──► [ChatOutput]
You connect outputs to inputs by dragging from one node’s output port to another node’s input port. The data type of the output must match the input — Langflow colour-codes ports by type and will not let you make invalid connections.
Your First Flow: A Working Chatbot
Let us build a basic chatbot with memory in five steps.
Step 1 — Add a Chat Input node
On the canvas, click the + button or drag from the component sidebar. Find Chat Input under the Inputs section and add it to the canvas.
This node represents the user’s message.
Step 2 — Add a model node
Search for OpenAI or Ollama in the component sidebar. Add it to the canvas.
Configure it:
- For OpenAI: paste your API key
- For Ollama: set the base URL to http://localhost:11434 and choose your model (a quick reachability check is sketched below)
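If the model dropdown comes up empty or the node cannot reach Ollama, it helps to confirm the server is reachable before debugging the flow itself. A minimal sketch, assuming a default local Ollama install (/api/tags is Ollama's standard model-listing endpoint):

import requests

# List the models the local Ollama server has pulled.
# /api/tags is Ollama's model-listing endpoint.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json()["models"]])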
Step 3 — Add a Memory node
Find Message History in the sidebar. Add it. This component stores the conversation history and passes it to the model.
Step 4 — Add a Chat Output node
Add Chat Output — this is where the final response goes.
Step 5 — Connect the nodes
Draw edges between nodes in this order:
[Chat Input]
      │
      │ message
      ▼
[Message History] ◄──────────────────────────┐
      │                                      │
      │ history                              │
      ▼                                      │
  [OpenAI] ──── response ────────────────────┘
      │         (store response)
      │ output
      ▼
[Chat Output]
Click the Playground button (top right of the canvas) to test it immediately.
The Top 5 Features
Feature 1: Visual + Code Hybrid — Edit Any Component’s Source
Every component in Langflow is a Python class. You can view and edit any component's source by clicking the Code button on its node.
What you can do:
- Drag and drop to prototype quickly
- Click "Code" on any node to see exactly what it does
- Edit the Python source to add custom logic
- Create entirely custom components and save them to your library
- Export the entire flow as Python code
Here is what a component looks like under the hood:
from langflow.custom import Component
from langflow.inputs import StrInput, SecretStrInput
from langflow.template import Output

class MyCustomComponent(Component):
    display_name = "My Custom Node"
    description = "Does something useful"

    inputs = [
        StrInput(name="input_text", display_name="Input Text"),
        SecretStrInput(name="api_key", display_name="API Key"),
    ]
    outputs = [
        Output(display_name="Result", name="result", method="process"),
    ]

    def process(self) -> str:
        # Your custom logic here
        return f"Processed: {self.input_text}"
Save this as a .py file, place it in Langflow's custom components directory (the path is typically configured with the LANGFLOW_COMPONENTS_PATH environment variable; check your version's settings), and it appears in the sidebar just like built-in components.
Why this matters: You never hit a ceiling. If a built-in component does not quite do what you need, you edit it. No framework lock-in, no workarounds — just Python.
Feature 2: Interactive Playground With Step-by-Step Execution
The Playground is not just a chat interface. You can run your flow in step-by-step mode — pause at each node, inspect inputs and outputs, and see exactly what data is flowing through the pipeline.
Step-by-step execution view
──────────────────────────────────────────────────────────
[Chat Input]         Input:  "What is RAG?"
      ↓              Status: ✓ Complete
[Prompt Template]    Input:  user_message, chat_history
      ↓              Status: ✓ Complete
[OpenAI]             Input:  formatted prompt (4 tokens)
      ↓              Status: ⟳ Running...
[Chat Output]        Input:  —
                     Status: ⟳ Waiting
──────────────────────────────────────────────────────────
You can inspect the exact string that was passed to the LLM, the exact response it returned, and the latency of each step. This is debugging information that is genuinely useful — not just a spinner that turns green.
Why this matters: When your flow gives a wrong answer, step-by-step mode shows you exactly where it went wrong. Was the retrieval step returning irrelevant chunks? Was the prompt template malformed? Was the LLM itself the problem? You can see each step’s input and output and diagnose precisely.
Feature 3: Export as REST API or MCP Server
A flow you build in Langflow can be deployed in two ways without writing any additional code.
As a REST API:
Every flow in Langflow gets a unique API endpoint automatically:
curl -X POST http://127.0.0.1:7860/api/v1/run/YOUR_FLOW_ID \
  -H "Content-Type: application/json" \
  -d '{
    "input_value": "What is the refund policy?",
    "output_type": "chat"
  }'
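The same call from Python, as a minimal sketch using the requests library (YOUR_FLOW_ID is the same placeholder as in the curl example, and the payload mirrors it):

import requests

# POST the same payload as the curl example above.
resp = requests.post(
    "http://127.0.0.1:7860/api/v1/run/YOUR_FLOW_ID",
    json={"input_value": "What is the refund policy?", "output_type": "chat"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())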
You can also export the flow as Python code (File → Export → Python) and integrate it directly into any backend.
As an MCP Server:
Langflow can deploy any flow as an MCP (Model Context Protocol) server. This means your Langflow workflows become tools that MCP-compatible clients — like Claude Desktop — can call directly.
MCP architecture with Langflow
──────────────────────────────────────────────────────────
Claude Desktop
      │
      │ MCP protocol
      ▼
Langflow MCP Server
      │
      │ runs your flow
      ▼
Your workflow
(retrieval → LLM → response)
──────────────────────────────────────────────────────────
To deploy a flow as an MCP server:
- Build and test your flow in Langflow
- Go to Deploy → MCP Server
- Copy the MCP server config
- Add it to your Claude Desktop config file (claude_desktop_config.json)
Your flow now appears as a tool in Claude Desktop.
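The snippet you copy drops into the standard mcpServers block that Claude Desktop expects. The command and args below are placeholders, not Langflow's actual values; use exactly what the Deploy pane gives you:

{
  "mcpServers": {
    "langflow": {
      "command": "<command from the Deploy pane>",
      "args": ["<args from the Deploy pane>"]
    }
  }
}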
Why this matters: MCP is becoming the standard way AI applications expose capabilities to other AI tools. Langflow gives you the easiest path to publishing your workflows as MCP tools — no additional server code needed.
Feature 4: Multi-Agent Orchestration
Langflow supports building flows where multiple agents work together. One agent can spawn another, pass results between agents, or coordinate parallel workstreams.
Multi-agent example: Research and Write
──────────────────────────────────────────────────────────
[User Input: "Write a report on RAG frameworks"]
      │
      ▼
[Orchestrator Agent]
      │
      ├──► [Research Agent]
      │         ├── Tool: Web Search
      │         ├── Tool: RAG Knowledge Base
      │         └── Returns: research notes
      │
      ├──► [Writer Agent]
      │         ├── Input: research notes from Research Agent
      │         └── Returns: formatted report
      │
      ▼
[Chat Output: final report]
──────────────────────────────────────────────────────────
Each agent can have its own:
- System prompt
- Tool set
- Memory / conversation history
- LLM model (you can use different models for different agents)
The orchestrator decides when to call each sub-agent based on the task.
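Stripped of the canvas, the control flow reduces to something like the sketch below. This is a framework-agnostic illustration, not Langflow's API: call_llm and the two agent functions are hypothetical stand-ins for the Agent components you would wire visually.

def call_llm(system_prompt: str, user_message: str) -> str:
    # Hypothetical placeholder for a provider call (OpenAI, Ollama, ...).
    raise NotImplementedError

def research_agent(task: str) -> str:
    # Specialised for tool use and information gathering.
    return call_llm("You are a research agent. Gather facts and sources.", task)

def writer_agent(task: str, notes: str) -> str:
    # Specialised for synthesis and formatting.
    return call_llm(
        "You are a writer. Produce a formatted report.",
        f"Task: {task}\n\nResearch notes:\n{notes}",
    )

def orchestrator(task: str) -> str:
    notes = research_agent(task)      # gather material first
    return writer_agent(task, notes)  # then synthesise the report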
Why this matters: Single-agent systems hit limits on complex tasks. A research-plus-writing task benefits from having a specialised research agent (optimised for tool use and information gathering) and a separate writing agent (optimised for synthesis and formatting). Multi-agent flows let you compose specialised agents rather than overloading one.
Feature 5: Observability With LangSmith and LangFuse
Langflow integrates with two monitoring platforms:
Monitoring integrations
─────────────────────────────────────────────────────────
LangSmith    Full trace of every LLM call
             Token usage, latency, input/output
             Evaluation and testing tools

LangFuse     Open-source alternative
             Self-hostable observability
             Prompt versioning and management
─────────────────────────────────────────────────────────
To enable LangSmith:
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="ls__your_key"
langflow run
Every flow execution is now traced in your LangSmith dashboard.
To enable LangFuse (self-hosted):
export LANGFUSE_HOST="http://your-langfuse-server"
export LANGFUSE_PUBLIC_KEY="pk-..."
export LANGFUSE_SECRET_KEY="sk-..."
langflow run
Both integrations work at the flow level — you do not need to modify individual components.
Why this matters: Without observability, debugging a misbehaving flow in production is guesswork. With traces, you can see the exact input and output of every LLM call, identify which step is degrading, and fix it without reproducing the issue locally.
Exporting and Integrating Flows
Once you have built a flow, you have three integration paths:
Integration options
─────────────────────────────────────────────────────────
1. REST API      Call the flow from any backend
                 POST /api/v1/run/{flow_id}

2. Python SDK    Import and run the flow in Python
                 via langflow.load.run_flow_from_json
                 (example below)

3. MCP Server    Expose the flow to Claude Desktop
                 and other MCP clients
─────────────────────────────────────────────────────────
Flows are stored as JSON files — you can version-control them in git, share them with your team, and import them into any Langflow instance.
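For the Python SDK path, a minimal sketch, assuming the run_flow_from_json helper that recent Langflow releases document (the loading API has changed between versions, so check the one you have installed):

from langflow.load import run_flow_from_json

# Run the exported flow end to end with a single input.
results = run_flow_from_json(
    flow="my-flow.json",
    input_value="What is the refund policy?",
)
print(results)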
Security Notes: Known CVEs
Langflow has had security vulnerabilities in recent versions. If you are deploying Langflow anywhere accessible from the internet, this section is important.
CVE history (versions 1.6.0 to 1.6.3)
──────────────────────────────────────────────────────────
CVE-2025-68477   Arbitrary file read via crafted request
CVE-2025-68478   Environment variable exposure (including .env)
CVE-2025-57760   Remote code execution in certain configurations
──────────────────────────────────────────────────────────
All three were fixed in version 1.7.1. If you are on 1.6.x, upgrade immediately:
pip install langflow -U
# or
uv pip install langflow -U
Verify your version:
langflow --version
# Should be 1.7.1 or later
Additional security guidance for network-accessible deployments:
- Always run behind a reverse proxy (nginx, Caddy) with HTTPS
- Enable authentication (Settings → General → Authentication)
- Do not expose port 7860 directly to the internet
- Use a firewall to restrict access to known IPs
Why this matters: Langflow runs arbitrary Python code via its Code components. A vulnerability that allows unauthorised code execution in such a system has serious consequences. Version 1.7.1 fixes the specific file-read, environment-exposure, and code-execution bugs, but the general principle remains: keep it updated and behind authentication.
Troubleshooting
Langflow starts but flows do not save
Check disk space and permissions on the data directory:
df -h ~/.langflow
ls -la ~/.langflow
Langflow stores flows in SQLite by default at ~/.langflow/langflow.db. If the directory is not writable, flows will appear to save but will not persist.
Components show as “unknown” after upgrade
Langflow component APIs change between versions. After upgrading, open each flow and look for nodes with yellow warning indicators — these need to be reconfigured for the new version.
Ollama model not found
Langflow connects to Ollama at http://localhost:11434 by default. If you changed Ollama's port, update the base URL in the model component settings. If Langflow itself runs in Docker, localhost resolves inside the container; on Docker Desktop (macOS and Windows), http://host.docker.internal:11434 typically reaches Ollama on the host.
Performance is slow on large flows
Large flows with many LLM nodes can be slow because each node makes a separate API call. Use streaming mode (enable in Settings) to get partial responses faster, and consider batching operations in custom components where possible.
What Is Great, What Is Good, What Still Needs Work
What is great
Visual plus code is the right hybrid. The ability to see your workflow as a graph AND edit any component’s Python source AND export as code is genuinely unique. You get the speed of visual building without the ceiling of pure no-code tools.
MCP server export. Deploying a Langflow workflow as an MCP server is the easiest path to publishing AI capabilities to tools like Claude Desktop. No additional server code, no custom API — just deploy and configure.
146,000 GitHub stars. This translates directly into components, tutorials, and community templates. Whatever you want to build, someone has probably shared a starting point.
What is good
Desktop app option. Langflow Desktop for Windows and macOS means non-Python users can run Langflow without any command-line setup. Useful for product managers or designers who want to prototype workflows.
Good observability integrations. LangSmith and LangFuse integrations work at the environment variable level — no code changes to enable tracing.
What still needs work
Security track record. Three significant CVEs in versions 1.6.0 to 1.6.3, including an arbitrary file read, environment variable exposure, and a remote code execution, show that security is not yet a strong point. The 1.7.1 release fixes them, but a pattern of security issues in rapid-release open-source projects bears watching.
Migration between versions. Flows saved in older versions sometimes break in newer ones. The migration path is not well documented, and fixing a broken flow after an upgrade often requires rebuilding affected nodes manually.
Release stability. The project moves fast — which is good for features, but means releases sometimes introduce regressions. Pin your version in production and test upgrades in a separate environment before applying.
Summary
Langflow is the strongest open-source option if you want visual workflow building without giving up code control. The hybrid approach — graph view plus editable Python source plus code export — is the right design for serious AI application development.
The five features worth remembering:
| Feature | Why it matters |
|---|---|
| Visual + code hybrid | Build fast visually, go deep with Python when needed |
| Step-by-step execution | Diagnose exactly where a flow fails |
| MCP server export | Deploy workflows as tools for Claude Desktop and MCP clients |
| Multi-agent orchestration | Compose specialised agents for complex tasks |
| LangSmith + LangFuse | Production-grade tracing without code changes |
To get started:
pip install langflow -U
langflow run
Then open http://127.0.0.1:7860 and build your first flow.
Remember: always run version 1.7.1 or later. Check langflow --version before deploying anywhere.
The GitHub repo is at github.com/langflow-ai/langflow. With 146,000 stars and the MCP server export feature, it is worth adding to your AI toolkit.