Updated March 2026 · Workflow Builder · No-Code

Dify Workflows Guide 2026: Build AI Automation Pipelines

Dify Workflows are visual, node-based AI pipelines that let you connect LLMs, databases, APIs, and code in a drag-and-drop canvas. This guide covers every node type, walks you through building your first workflow, and shows three production-ready pipeline examples.

What Are Dify Workflows?

A Dify Workflow is a visual pipeline builder where you chain nodes together to automate multi-step AI tasks. Think of it like n8n or Zapier — but built AI-native, with first-class nodes for calling LLMs, searching knowledge bases, and handling structured outputs.

Unlike a Chatbot (which responds conversationally turn-by-turn), a Workflow runs a fixed pipeline from a defined input to a defined output. You trigger it, it executes every node in sequence (or in parallel), and returns a result. This makes workflows ideal for automation, document processing, data enrichment, and any task where the process is deterministic.

Under the hood, Dify Workflows use a directed acyclic graph (DAG) execution engine. Each node receives outputs from upstream nodes as variables, processes them, and passes results downstream. You get full observability — execution logs, node-level latency, token counts — all built in.

Visual drag-and-drop canvas
AI-native LLM nodes
RAG knowledge retrieval built-in
Python & JavaScript code nodes
HTTP Request node for any external API
Conditional branching (If/Else)
Full execution observability
REST API trigger for automation

Core Node Types

Every Dify Workflow is built from these fundamental node types. Understanding each one is the key to designing effective pipelines.

Start

The entry point of every workflow. You define the input variables here — for example, a text field called article_text, a file upload, or a URL. All subsequent nodes can reference these variables.

Tip: Keep inputs minimal and typed. Use "text" for strings, "number" for numeric inputs, "select" for dropdowns.

LLM

The core AI node. It calls any configured model (GPT-4o, Claude, Gemini, Llama, etc.) with a system prompt and user prompt. You can inject variables from any upstream node using {{variable_name}} syntax.

Tip: Use structured output mode (JSON schema) when downstream nodes need to parse the LLM response programmatically.

Knowledge Retrieval

Searches your Dify Knowledge Bases using vector similarity (RAG). Pass a query string and get back the most relevant document chunks. Connect the retrieved context to an LLM node for grounded, factual answers.

Tip: Set the top-K parameter (how many chunks to retrieve) based on your context window budget. 3-5 is usually optimal.

Code

Execute Python or JavaScript directly in the workflow. Use it to parse JSON, transform strings, compute values, filter arrays, or do anything a script can do. Input variables from upstream nodes are available as local variables.

Tip: Code nodes run in a sandboxed environment. No network access — use HTTP Request nodes for external calls.
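As a sketch of what a Code node body can look like, here is a Python function that parses the JSON an upstream LLM node produced. Dify's Python Code nodes expect a main function that returns a dict of output variables (check the docs for your Dify version); the variable and field names here are illustrative:

```python
import json

def main(llm_text: str) -> dict:
    """Code node sketch: parse JSON produced by an upstream LLM node.
    Upstream variables arrive as function arguments; the returned dict
    becomes this node's output variables."""
    # LLMs sometimes wrap JSON in markdown fences, so strip them defensively.
    cleaned = llm_text.strip().removeprefix("```json").removesuffix("```").strip()
    try:
        data = json.loads(cleaned)
    except json.JSONDecodeError:
        # Surface the parse failure so a downstream If/Else node can route it.
        return {"ok": False, "headline": "", "points": []}
    return {
        "ok": True,
        "headline": data.get("headline", ""),
        "points": data.get("points", []),
    }
```

Pairing a flag like ok with an If/Else node gives you a clean error-handling branch instead of passing malformed data downstream.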

HTTP Request

Make any REST API call — GET, POST, PUT, DELETE. Configure headers, query params, and request body using workflow variables. The response (JSON, text, or raw) is available to downstream nodes.

Tip: Store sensitive API keys in Dify environment variables, not hardcoded in the node config.

If/Else

Conditional branching. Evaluate any expression (string comparison, numeric threshold, regex match, contains check) and route the workflow to different branches. You can add multiple "Else If" conditions for complex routing logic.

Tip: Use If/Else to handle error cases — e.g., if the LLM confidence is low, route to a fallback response instead of returning a guess.

Template Transform

Transform and format data using Jinja2 templates. Combine multiple variables into a single string, format dates, loop over lists, and apply conditional logic — all without writing a full Code node.

Tip: Great for building dynamic prompts that combine multiple upstream outputs before passing them to an LLM node.
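As a sketch, a Template Transform node that assembles a grounded prompt from retrieved chunks and a question might look like this (the variable names and the chunk fields are illustrative, not fixed by Dify):

```jinja
You are a support assistant. Use only the context below.

{% for chunk in retrieved_chunks %}
[{{ loop.index }}] {{ chunk.content }}
{% endfor %}

Question: {{ question }}
{% if language %}Answer in {{ language }}.{% endif %}
```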

Variable Aggregator

Merge outputs from multiple parallel branches into a single variable. Essential when your workflow splits into parallel paths (e.g., calling two different LLMs simultaneously) and you need to combine results before the End node.

Tip: Use "array" mode to collect all branch outputs into a list, then process with a Code node.
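Following that tip, a downstream Code node could merge the aggregated branch outputs like this (a sketch: the input name and item shape are assumptions, with each item taken to be one branch's text output):

```python
def main(branch_outputs: list) -> dict:
    """Combine outputs collected by a Variable Aggregator in "array" mode."""
    # Drop empty entries (e.g. a branch skipped by If/Else routing).
    texts = [t.strip() for t in branch_outputs if t and t.strip()]
    # Deduplicate while preserving order, then join for the End node.
    seen, unique = set(), []
    for t in texts:
        if t not in seen:
            seen.add(t)
            unique.append(t)
    return {"merged": "\n\n---\n\n".join(unique), "count": len(unique)}
```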

End

The terminal node. Defines what the workflow returns — one or more output variables. When triggered via API, these outputs are returned in the JSON response. When used in a Chatflow, the End node content is displayed to the user.

Tip: You can have multiple End nodes (one per If/Else branch) to return different outputs based on the routing path taken.

Building Your First Workflow

Let's build a simple article summarizer — the classic "Hello World" of AI pipelines. It takes an article text as input and returns a 3-bullet summary.

Step 1: Create a new Workflow app

In Dify Studio, click "+ Create App" → select "Workflow". Give it a name like "Article Summarizer" and click Create. You'll land on the canvas with an empty Start node and End node.

Step 2: Configure the Start node

Click the Start node. Add an input variable: name it article_text, set type to "Paragraph" (long text). This is what the user (or API caller) will provide when triggering the workflow.

Step 3: Add an LLM node

Click the "+" button on the canvas (or drag from the node panel). Add an LLM node. Select your model (e.g., GPT-4o Mini). In the user prompt field, type: Summarize the following article in exactly 3 bullet points. Be concise. {{article_text}}

Step 4: Connect Start → LLM → End

Drag a connection from the Start node's output handle to the LLM node's input. Then connect the LLM node output to the End node. The End node should output the LLM node's text variable.

Step 5: Test it

Click "Run" in the top bar. A test panel appears on the right. Paste any article text into the article_text field and click Run. You'll see the output and full execution trace — node by node, with token counts and latency.

What you built: Start (article_text input) → LLM (summarize prompt) → End (3-bullet summary). This same pattern scales to any text-in, text-out pipeline.

Workflow vs Chatbot vs Agent — When to Use Each

Dify offers three app types. Choosing the right one matters — they serve fundamentally different use cases.

Feature | Workflow | Chatbot | Agent
Interaction model | Single run: input → output | Multi-turn conversation | Dynamic tool selection
Determinism | High — fixed pipeline | Medium — LLM decides | Low — agent decides
Best for | Automation, batch processing | Customer support, Q&A | Research, task completion
Debuggability | Excellent — full trace | Good — conversation logs | Harder — dynamic steps
API triggerable | Yes — REST endpoint | Yes — chat API | Yes — chat API
Knowledge retrieval | Yes — via node | Yes — via context | Yes — via tool
External API calls | Yes — HTTP Request node | Limited — via plugins | Yes — via tools
Parallel execution | Yes — native support | No | No
Rule of thumb: If the process is the same every time and you can draw a flowchart of it, use a Workflow. If users need back-and-forth conversation, use a Chatbot. If the AI needs to decide which tools to call, use an Agent.

3 Practical Workflow Examples

These three pipelines cover the most common real-world Dify Workflow patterns, and each follows a structure that holds up well in production.

Example 1: Content Summarization Pipeline

Takes any article text and produces a structured summary with a headline, 3 key points, and a one-sentence takeaway. Useful for content teams, newsletter editors, and research assistants.

Start (article_text) → LLM (summarize + structure) → End (summary output)

LLM Prompt: "You are a content editor. Given the article below, return: (1) a compelling headline, (2) exactly 3 key bullet points, (3) a one-sentence takeaway. Format as JSON. Article: {{article_text}}"

Example 2: Customer Support Triage

Classifies incoming support tickets by intent (billing, technical, general), then routes each category to a specialized LLM response tailored for that topic. Automating this first-pass triage can significantly reduce manual escalations.

Start (ticket_text) → LLM (classify intent) → If/Else (route by category) → LLM (specialized response) → End

If/Else logic: If intent == "billing" → billing LLM (knows pricing, refund policy). If intent == "technical" → tech LLM (knows product docs). Else → general support LLM.

Example 3: Document Q&A Pipeline

Takes a question and retrieves the most relevant document chunks from your knowledge base, then passes them to an LLM for a grounded, citation-backed answer. Perfect for legal docs, technical manuals, and internal wikis.

Start (question) → Knowledge Retrieval (top-5 chunks) → LLM (answer with context) → End (answer)

LLM Prompt: "Answer the question using ONLY the context provided. If the answer isn't in the context, say so. Context: {{retrieved_chunks}} — Question: {{question}}"

Running Workflows via API

Every published Dify Workflow gets a REST API endpoint automatically. This is how you integrate workflows into your applications, trigger them from schedulers, or chain multiple workflows together.

POST https://your-dify-instance/v1/workflows/run

curl -X POST 'https://your-dify-instance/v1/workflows/run' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {
      "article_text": "Your article content goes here..."
    },
    "response_mode": "blocking",
    "user": "user-123"
  }'

Response (blocking mode)

{
  "task_id": "abc-123",
  "workflow_run_id": "xyz-456",
  "data": {
    "outputs": {
      "result": "• Key point 1\n• Key point 2\n• Key point 3"
    },
    "status": "succeeded",
    "elapsed_time": 2.34,
    "total_tokens": 312
  }
}

blocking

Waits for the full result before responding. Best for short workflows under 30s.

streaming

Returns a Server-Sent Events stream. Best for long workflows or streaming LLM output to a UI.
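The same blocking-mode call can be issued from Python. A minimal client sketch using only the standard library, with the endpoint and response shape taken from the curl and JSON examples above (your base URL, API key, and input names will differ):

```python
import json
import urllib.request

def extract_outputs(response: dict) -> dict:
    """Pull the output variables out of a blocking-mode response body."""
    data = response.get("data", {})
    if data.get("status") != "succeeded":
        raise RuntimeError(f"workflow run failed: {data.get('status')}")
    return data.get("outputs", {})

def run_workflow(base_url: str, api_key: str, inputs: dict,
                 user: str = "user-123") -> dict:
    """POST to /v1/workflows/run in blocking mode and return the outputs."""
    payload = json.dumps({
        "inputs": inputs,
        "response_mode": "blocking",
        "user": user,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/workflows/run",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return extract_outputs(json.load(resp))
```

For the summarizer built earlier, the call would look like run_workflow("https://your-dify-instance", key, {"article_text": "..."}) and the summary would come back under the End node's output variable name.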

Tips for Production Workflows

Getting a workflow to run once is easy. Getting it to run reliably at scale requires a few extra considerations:

Use structured outputs from LLM nodes

Enable JSON mode on your LLM nodes and define a schema. This makes parsing by downstream Code nodes reliable and eliminates hallucinated formatting.
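For instance, the summarizer from Example 1 could constrain its LLM node with a schema along these lines (the field names are illustrative):

```json
{
  "type": "object",
  "properties": {
    "headline": { "type": "string" },
    "points": {
      "type": "array",
      "items": { "type": "string" },
      "minItems": 3,
      "maxItems": 3
    },
    "takeaway": { "type": "string" }
  },
  "required": ["headline", "points", "takeaway"]
}
```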

Add error handling with If/Else

Check the output of critical nodes before proceeding. If an HTTP Request returns a 4xx status, route to an error-handling branch rather than letting it silently corrupt downstream data.

Keep LLM prompts focused

In a workflow, each LLM node should do one thing well. Avoid "mega-prompts" that try to classify, summarize, and format in a single call — split them into separate LLM nodes for better reliability and debuggability.

Monitor token usage per workflow run

Dify logs token counts per node. Identify your most expensive nodes and consider using smaller models (e.g., GPT-4o Mini) for classification tasks where reasoning depth is less important.

Test edge cases with the built-in runner

Use Dify's built-in test runner to batch-test your workflow with multiple input variations before deploying. Save test cases for regression testing when you update the workflow.

Pin your model versions

When a workflow is in production, pin to a specific model version (e.g., gpt-4o-2024-08-06) rather than "latest". Model updates can silently change output behavior.

Frequently Asked Questions

What is a Dify Workflow?

A Dify Workflow is a visual, node-based AI pipeline. You connect nodes (LLM, code, HTTP requests, knowledge retrieval, conditions) on a canvas to automate multi-step tasks. Unlike chatbots, workflows run a fixed pipeline from input to output — ideal for automation and batch processing.

When should I use a Workflow instead of a Chatbot?

Use a Workflow for automation tasks with defined inputs/outputs (summarizing documents, processing data, triage). Use a Chatbot for open-ended conversations. Use an Agent when you need dynamic tool selection. Workflows are deterministic and easier to debug.

Can I run Dify Workflows automatically on a schedule?

Dify Workflows can be triggered via API, which you can call from any scheduler (cron, n8n, Zapier, GitHub Actions). There is no built-in scheduler in Dify itself, but the API trigger makes scheduling straightforward with any external tool.

Can Dify Workflows call external APIs?

Yes. The HTTP Request node lets your workflow call any REST API — fetch data from external services, send webhooks, interact with third-party platforms. Combine it with Code nodes to transform API responses before passing them to an LLM.

Host Dify and Start Building Workflows

Dify Workflows run best on a self-hosted instance where you control the compute, have no credit limits, and can process unlimited workflow runs. Get started for as little as €3.79/month on Hetzner, or use a one-click managed deployment on Elestio.

Deploy on Hetzner → One-Click on Elestio → Compare All Hosting Options