# Using Dify with FahemAI
Dify is an open-source large language model (LLM) application development platform. It combines the concepts of Backend as Service and LLMOps, enabling developers to quickly build production-grade generative AI applications.
FahemAI integrates with Dify to support three types of applications:
- Chat Assistant (including Chatflow)
- Agent
- Workflow
## Creating an Application in Dify
- Follow the Dify documentation to deploy Dify and create your application.
- Once published, navigate to the Access API page in Dify.
- Generate an API Key.
Important: Save both the API Server URL and the API Key. You will need these to configure the connection in FahemAI.
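Once you have both values, it can be useful to sanity-check them before configuring FahemAI. The sketch below builds a request against Dify's `/chat-messages` endpoint; the URL and key shown are placeholders for your own values, and the helper name is ours, not part of either product.

```python
# Hypothetical sketch: build a request to Dify's chat-messages endpoint to
# verify an API Server URL and API Key. The payload shape follows Dify's
# published API; the URL and key below are placeholders.
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, query: str) -> urllib.request.Request:
    """Build a POST request against Dify's /chat-messages endpoint."""
    payload = {
        "inputs": {},
        "query": query,
        "response_mode": "blocking",
        "user": "fahemai-healthcheck",  # arbitrary caller id
    }
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/chat-messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it (not done here) would be: urllib.request.urlopen(build_chat_request(...))
req = build_chat_request("https://api.dify.ai/v1", "app-xxxx", "ping")
print(req.full_url)
```

A `200` response from this endpoint confirms the URL and key are usable before you touch the FahemAI side.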
## Network Configuration
- Cloud Version: Use the standard API URL provided by Dify.
- Self-Hosted: Use your Dify service address as the `base-url` in FahemAI, appending `/v1` to the path.
- Docker Deployment: If running both on the same host via Docker, ensure they share a network (e.g., `langbot-network`). Set the `base-url` to `http://dify-nginx/v1`.
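The `base-url` rules above can be sketched as a small helper (the function name is our own, for illustration only):

```python
# Sketch of the base-url rules above: append /v1 to a self-hosted Dify
# address, without duplicating it if the address already ends in /v1.
def dify_base_url(service_address: str) -> str:
    """Normalize a Dify service address into a FahemAI base-url."""
    addr = service_address.rstrip("/")
    return addr if addr.endswith("/v1") else addr + "/v1"

print(dify_base_url("http://dify-nginx"))         # Docker shared-network case
print(dify_base_url("https://dify.example.com/"))  # self-hosted case
```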
## Configuring FahemAI
To connect your Dify application:
- Open the Pipelines section in FahemAI.
- Add a new pipeline or edit an existing one.
- Go to the AI Capability settings.
- Select Dify as the provider and enter your saved API Server URL and API Key.
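As a mental model, the settings entered in steps 3–4 amount to the following. This is a hypothetical representation only; FahemAI's actual configuration schema is not documented here, so treat every key as illustrative.

```python
# Hypothetical pipeline AI Capability settings, shown as a plain dict.
# FahemAI's real schema may differ; keys and values are illustrative.
pipeline_ai_capability = {
    "provider": "dify",
    "base-url": "https://api.dify.ai/v1",  # API Server URL saved earlier
    "api-key": "app-xxxx",                 # API Key generated in Dify
    "app-type": "chat",                    # chat | agent | workflow
}

print(pipeline_ai_capability["provider"])
```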
## Workflow Output Key
If you are connecting a Dify Workflow application, you must specify the output key so that FahemAI captures the response correctly.
- Recommended Key: Use `summary` as the key to pass the output content from Dify.
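To see why the key matters, here is a sketch of reading a workflow run response, assuming the workflow writes its result to an output variable named `summary` (the recommended key above). The response shape is modeled on Dify's blocking-mode `/workflows/run` format; the sample text is invented.

```python
# Sketch: extracting the configured output key from a Dify workflow run
# response. The nested data/outputs shape is modeled on Dify's workflow API;
# the sample text is invented for illustration.
workflow_response = {
    "data": {
        "status": "succeeded",
        "outputs": {"summary": "The quarterly report shows a 12% increase."},
    }
}

def extract_output(response: dict, key: str = "summary") -> str:
    """Pull the configured output key from a workflow run response."""
    outputs = response.get("data", {}).get("outputs", {})
    if key not in outputs:
        raise KeyError(f"workflow did not emit output key {key!r}")
    return outputs[key]

print(extract_output(workflow_response))
```

If the workflow's End node does not expose a variable under the configured key, there is nothing for the connector to return, which is why the key on both sides must match.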
## Output Processing
- Agents & Workflows: If you enable `track-function-calls` in FahemAI, the system will display intermediate tool calls (e.g., `calling function xxx`) to the user as they happen.
- ChatFlow: For Chat Assistant applications using Workflow Orchestration, FahemAI will only output the final text returned by the Answer node, regardless of the tracking settings.
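The two behaviors above can be illustrated with a small sketch. The event names and rendering logic here are our own simplification, not FahemAI's actual implementation:

```python
# Illustrative sketch of the two output modes described above. Event types
# ("tool_call", "answer") and the rendering logic are invented for this
# example; they are not FahemAI's real event model.
def render_events(events, track_function_calls: bool):
    """Yield the lines a user would see for a stream of app events."""
    for event in events:
        if event["type"] == "tool_call" and track_function_calls:
            yield f"calling function {event['name']}"  # intermediate step
        elif event["type"] == "answer":
            yield event["text"]                        # final Answer node text

stream = [
    {"type": "tool_call", "name": "web_search"},
    {"type": "answer", "text": "Here is what I found."},
]
print(list(render_events(stream, track_function_calls=True)))
print(list(render_events(stream, track_function_calls=False)))
```

With tracking enabled the user sees the intermediate `calling function web_search` line followed by the answer; with it disabled, only the final answer text is shown.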