Overview
The DeepSeek node for n8n enables users to interact with DeepSeek AI's chat models. It allows you to send a series of messages (as a conversation) to an AI model and receive generated responses, making it suitable for building conversational bots, automating customer support, or integrating advanced language understanding into workflows.
Common scenarios:
- Automating Q&A or helpdesk tasks.
- Generating creative content or summaries based on user prompts.
- Building interactive assistants that require context-aware responses.
Example:
You can use this node to pass a sequence of user and system messages to a DeepSeek model and get a relevant AI-generated reply, which can then be used in subsequent workflow steps.
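For illustration, the conversation the node sends is an ordered list of role/content messages, with each reply appended to the history so later turns stay context-aware. A minimal sketch in Python (the message shape follows the common chat-completions convention; the exact payload the node builds internally is an assumption here):

```python
# Sketch: a conversation as a sequence of role/content messages.
# Each entry pairs a role (system, user, or assistant) with the message text.
messages = [
    {"role": "system", "content": "You are a helpful support assistant."},
    {"role": "user", "content": "How do I reset my password?"},
]

def add_turn(history, assistant_reply, next_user_message):
    """Extend the conversation with the model's reply and the next user turn,
    preserving context for the following request."""
    return history + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": next_user_message},
    ]
```

Feeding the extended list back into the node on the next workflow run is what gives the assistant memory of earlier turns.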
Properties
Below are the input properties supported by the node for the Chat resource and its default operation:
| Display Name | Type | Description |
|---|---|---|
| Model | options | The model that will generate the completion. |
| Prompt | fixedCollection | A collection of messages forming the conversation. Each message includes: |
| - Role | options | The role of the message sender: Assistant, System, or User. |
| - Content | string | The actual message text. |
| Simplify | boolean | Whether to return a simplified version of the response instead of the raw data. |
| Options | collection | Additional parameters to control the model's output: |
| - Frequency Penalty | number | Penalizes new tokens based on their frequency in the text so far. |
| - Maximum Number of Tokens | number | The maximum number of tokens to generate in the completion. |
| - Presence Penalty | number | Penalizes new tokens based on whether they appear in the text so far. |
| - Sampling Temperature | number | Controls randomness; lower values make output more deterministic. |
| - Top P | number | Controls diversity via nucleus sampling. |
| - Response Format | json | Specifies the format that the model must output. |
| - Logprobs | boolean | Whether to return log probabilities of the output tokens. |
| - Top Logprobs | number | Number of most likely tokens to return at each token position (requires Logprobs to be enabled). |
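These options correspond to parameters in DeepSeek's OpenAI-compatible chat-completions request body. A hedged sketch of how the node's display names might map onto wire-format parameter names (the mapping follows the OpenAI-style convention and is an assumption, not taken from the node's source):

```python
def build_chat_request(model, messages, options=None):
    """Assemble a chat-completions request body from node-style options.

    Keys of `options` use the node's display names; the wire names are
    assumed to follow the OpenAI-compatible convention.
    """
    options = options or {}
    body = {"model": model, "messages": messages}
    mapping = {
        "frequency_penalty": "Frequency Penalty",
        "max_tokens": "Maximum Number of Tokens",
        "presence_penalty": "Presence Penalty",
        "temperature": "Sampling Temperature",
        "top_p": "Top P",
        "response_format": "Response Format",
        "logprobs": "Logprobs",
        "top_logprobs": "Top Logprobs",
    }
    for wire_name, display_name in mapping.items():
        if display_name in options:
            body[wire_name] = options[display_name]
    return body
```

Options left unset are simply omitted from the request, so the API falls back to its defaults.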
Output
JSON:
- If Simplify is enabled (default): the output is a simplified array of the choices returned by the model, typically containing only the generated message(s):

```json
[
  {
    "message": {
      "role": "assistant",
      "content": "AI-generated response here"
    }
  }
]
```

- If Simplify is disabled: the full raw response from the DeepSeek API is returned, which may include additional metadata such as usage statistics and model information.

Binary Data:
This node does not output binary data.
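The effect of Simplify can be sketched as a function that keeps only each choice's message and drops the surrounding metadata (the exact fields the node keeps are an assumption based on the description above):

```python
def simplify_response(raw):
    """Reduce a raw chat-completions response to its choices,
    keeping only each choice's 'message' and dropping metadata
    such as usage statistics and model information."""
    return [{"message": choice["message"]} for choice in raw.get("choices", [])]

# A hypothetical raw response, shaped like an OpenAI-compatible reply.
raw = {
    "id": "example-id",
    "model": "deepseek-chat",
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "AI-generated response here"}}
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 8},
}
```

With Simplify disabled, downstream workflow steps would instead receive the full `raw` object, including the `usage` block.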
Dependencies
- External Service: Requires access to the DeepSeek API.
- API Key: You must configure valid DeepSeek API credentials in n8n under the name deepSeekApi.
- n8n Configuration: No special environment variables are required beyond standard credential setup.
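Behind the scenes, the configured credential typically ends up as a bearer token on each request. A sketch of what those headers might look like (bearer auth is the OpenAI-compatible convention; treat the exact header format as an assumption):

```python
def auth_headers(api_key):
    """Build request headers for a DeepSeek API call.

    Bearer authorization follows the OpenAI-compatible convention;
    the node handles this internally via its credential setup.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```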
Troubleshooting
Common Issues:
- Invalid API Key: An error occurs if the provided API key is missing or incorrect. Ensure your credentials are set up correctly in n8n.
- Model Not Selected: If no model is chosen, the request may fail. Always select a valid model from the dropdown.
- Malformed Prompt: If the prompt/messages structure is invalid (e.g., missing roles or content), the API may reject the request.
- Token Limits Exceeded: If you request more tokens than allowed by the selected model, you'll receive an error. Adjust the "Maximum Number of Tokens" accordingly.
Error Messages & Resolutions:
- "401 Unauthorized": Check your API credentials.
- "400 Bad Request": Review your input fields, especially the prompt/message structure and options.
- "429 Too Many Requests": You have hit rate limits; try again later or reduce request frequency.
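A common way to handle these cases in a calling workflow is to retry only the rate-limit error with backoff, and treat credential or request errors as fatal. A hypothetical helper sketching that decision (the status codes come from the list above; the backoff policy is an assumption):

```python
RETRYABLE = {429}   # rate-limited: retrying later can succeed
FATAL = {400, 401}  # fix the request or credentials instead of retrying

def should_retry(status_code, attempt, max_attempts=3):
    """Return (retry?, delay_seconds) for a failed request.

    429 is retried with exponential backoff (1s, 2s, 4s, ...);
    400 and 401 are never retried, since the input or credentials
    must be corrected first.
    """
    if status_code in RETRYABLE and attempt < max_attempts:
        return True, 2 ** attempt
    return False, 0
```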