Using Dify with FahemAI

Dify is an open-source large language model (LLM) application development platform. It combines the concepts of Backend as Service and LLMOps, enabling developers to quickly build production-grade generative AI applications.

FahemAI integrates with Dify to support three types of applications:

  • Chat Assistant (including Chatflow)
  • Agent
  • Workflow

Creating an Application in Dify

  1. Follow the Dify documentation to deploy Dify and create your application.
  2. Once published, navigate to the Access API page in Dify.
  3. Generate an API Key.

Important: Save both the API Server URL and the API Key. You will need these to configure the connection in FahemAI.

Network Configuration
  • Cloud Version: Use the standard API URL provided by Dify.
  • Self-Hosted: Use your Dify service address as the base-url in FahemAI, appending /v1 to the path.
  • Docker Deployment: If running both on the same host via Docker, ensure they share a network (e.g., langbot-network). Set the base-url to http://dify-nginx/v1.
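
The `/v1` suffix in the self-hosted case is easy to forget or duplicate. A minimal sketch of normalizing the address before saving it as the base-url (the helper name is our own, not part of FahemAI):

```python
def normalize_base_url(address: str) -> str:
    """Return a Dify base-url with exactly one trailing /v1 path segment.

    `address` is the raw self-hosted service address, e.g.
    "http://dify-nginx" or "https://dify.example.com/".
    """
    address = address.rstrip("/")
    if not address.endswith("/v1"):
        address += "/v1"
    return address
```

For the Docker case above, `normalize_base_url("http://dify-nginx")` yields `http://dify-nginx/v1`, which matches the recommended base-url.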

Configuring FahemAI

To connect your Dify application:

  1. Open the Pipelines section in FahemAI.
  2. Add a new pipeline or edit an existing one.
  3. Go to the AI Capability settings.
  4. Select Dify as the provider and enter your saved API Server URL and API Key.
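
Under the hood, the saved URL and key are used to call Dify's application API. As a rough illustration, a blocking chat request to Dify's `/chat-messages` endpoint is assembled like this (the helper function and default user ID are illustrative, not FahemAI internals):

```python
def build_chat_request(base_url: str, api_key: str, query: str,
                       user: str = "fahemai-user"):
    """Assemble the URL, headers, and JSON body for a Dify chat call.

    `base_url` is the saved API Server URL (ending in /v1) and
    `api_key` is the key generated on Dify's Access API page.
    """
    url = f"{base_url.rstrip('/')}/chat-messages"
    headers = {
        "Authorization": f"Bearer {api_key}",  # Dify uses bearer-token auth
        "Content-Type": "application/json",
    }
    payload = {
        "inputs": {},                  # app input variables, if any
        "query": query,                # the user's message
        "response_mode": "blocking",   # or "streaming" for SSE output
        "user": user,                  # a stable end-user identifier
    }
    return url, headers, payload
```

Sending this payload with any HTTP client against your configured base-url is a quick way to confirm the credentials work before wiring them into a pipeline.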

Workflow Output Key

If you are connecting a Dify Workflow application, you must specify the output key so that FahemAI captures the response correctly.

  • Recommended Key: Use summary as the key to pass the output content from Dify.
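
To illustrate why the key matters: a blocking workflow run returns its results inside an outputs object keyed by variable name, so FahemAI needs to know which key holds the text. A sketch of that lookup, assuming a response shaped roughly like `{"data": {"outputs": {"summary": "..."}}}`:

```python
def extract_workflow_output(response: dict, key: str = "summary") -> str:
    """Pull the configured output key from a Dify workflow response.

    Raises KeyError if the workflow did not emit the expected key,
    which is the failure mode when the output key is misconfigured.
    """
    outputs = response.get("data", {}).get("outputs", {})
    if key not in outputs:
        raise KeyError(f"workflow response has no '{key}' output")
    return outputs[key]
```

If your workflow's End node exposes its result under `summary`, the default lookup succeeds; any other variable name would need a matching key on the FahemAI side.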

Output Processing

  • Agents & Workflows: If track-function-calls is enabled in FahemAI, intermediate tool calls (e.g., calling function xxx) are displayed to the user as they happen.
  • Chatflow: For Chat Assistant applications using Workflow Orchestration, FahemAI outputs only the final text returned by the Answer node, regardless of the tracking settings.
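
The two behaviors above can be sketched as a filter over Dify's streamed events. This is a simplified model, not FahemAI's actual implementation: the event names (`agent_thought`, `message`) follow Dify's streaming format, and `track_function_calls` mirrors the FahemAI setting:

```python
def render_events(events, track_function_calls: bool):
    """Yield the text a user would see for a stream of Dify events.

    `events` is an iterable of parsed SSE payload dicts. Tool-call
    events are surfaced only when tracking is enabled; answer text
    is always passed through.
    """
    for event in events:
        if event.get("event") == "agent_thought":
            # Intermediate tool call: shown only when tracking is on.
            if track_function_calls and event.get("tool"):
                yield f"calling function {event['tool']}"
        elif event.get("event") == "message":
            # Final answer text: always shown.
            yield event.get("answer", "")
```

With tracking enabled, a stream containing a tool call followed by an answer produces both lines; with tracking disabled, only the answer text reaches the user.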