Overview
This node logs interactions with a Large Language Model (LLM) to the Langfuse service. It captures both the input sent to the LLM and the output received, then sends this data as a trace and span to Langfuse for monitoring and analysis. This is useful for tracking LLM usage, debugging, and performance monitoring in workflows that involve AI text generation or processing.
Practical examples include:
- Logging prompts and responses from an AI chatbot to analyze conversation quality.
- Tracking inputs and outputs of automated content generation for auditing.
- Monitoring LLM inference calls within complex automation pipelines.
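The core flow described above can be sketched as follows. Note this is a hypothetical illustration: `FakeLangfuseClient`, `logToLangfuse`, and the record shapes are stand-ins invented for this example, not the real `langfuse` SDK API or the node's actual implementation.

```typescript
// Minimal stand-in for a Langfuse client (illustrative only; the real
// `langfuse` SDK exposes trace/span creation with richer options).
interface SpanRecord {
  name: string;
  input: string;
  output: string;
}

interface TraceRecord {
  name: string;
  spans: SpanRecord[];
}

class FakeLangfuseClient {
  traces: TraceRecord[] = [];

  trace(name: string): TraceRecord {
    const t: TraceRecord = { name, spans: [] };
    this.traces.push(t);
    return t;
  }

  span(trace: TraceRecord, rec: SpanRecord): void {
    trace.spans.push(rec);
  }
}

// Hypothetical per-item logic: create a trace named after the workflow,
// attach one span holding the LLM input and output, and report success.
function logToLangfuse(
  client: FakeLangfuseClient,
  workflowName: string,
  llmInput: string,
  llmOutput: string,
): { success: boolean; workflowName: string } {
  const trace = client.trace(workflowName);
  client.span(trace, { name: "llm-call", input: llmInput, output: llmOutput });
  return { success: true, workflowName };
}

const client = new FakeLangfuseClient();
const result = logToLangfuse(client, "Example Workflow", "Hello?", "Hi there!");
console.log(JSON.stringify(result));
```

The returned object mirrors the node's output JSON documented below, while the trace and span carry the LLM input/output to Langfuse.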
Properties
| Name | Meaning |
|---|---|
| Action | The operation to perform. Currently supports only "Log To Langfuse". |
| LLM Input | The text input sent to the Large Language Model. |
| LLM Output | The text output received from the Large Language Model. |
Output
The node outputs JSON objects with the following structure:
- `success`: A boolean indicating whether logging to Langfuse was successful.
- `workflowName`: The name of the current workflow where the node is executed.
- `error` (optional): If an error occurs and the node is set to continue on failure, this field contains the error message.
No binary data is produced by this node.
Example output JSON:
```json
{
  "success": true,
  "workflowName": "Example Workflow"
}
```
or, in case of error (if continuing on fail):
```json
{
  "error": "Error message here"
}
```
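Downstream nodes can branch on which of these two shapes an item has. A minimal sketch, assuming the output structure above (`isLogError` is a hypothetical helper, not part of the node):

```typescript
// The two possible item shapes emitted by the node.
type LangfuseLogResult =
  | { success: boolean; workflowName: string }
  | { error: string };

// Hypothetical type guard: an item with an `error` field is a
// continue-on-fail error item; otherwise it is a successful log.
function isLogError(item: LangfuseLogResult): item is { error: string } {
  return "error" in item;
}

const ok: LangfuseLogResult = { success: true, workflowName: "Example Workflow" };
const failed: LangfuseLogResult = { error: "Error message here" };
console.log(isLogError(ok), isLogError(failed)); // false true
```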
Dependencies
- Requires the external `langfuse` JavaScript SDK package.
- Needs environment variables configured for authentication and endpoint:
- A public API key credential.
- A secret API key credential.
- An optional base URL for the Langfuse service host.
These must be set in the n8n environment before running the node.
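A sketch of loading that configuration. It assumes the Langfuse JavaScript SDK's conventional variable names (`LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, `LANGFUSE_BASEURL`); verify the exact names against your deployment, as this node's credentials may be wired differently.

```typescript
interface LangfuseConfig {
  publicKey: string;
  secretKey: string;
  baseUrl: string;
}

// Hypothetical loader: fail fast when the required keys are absent,
// and fall back to Langfuse Cloud when no base URL is set.
function loadLangfuseConfig(
  env: Record<string, string | undefined>,
): LangfuseConfig {
  const publicKey = env.LANGFUSE_PUBLIC_KEY;
  const secretKey = env.LANGFUSE_SECRET_KEY;
  if (!publicKey || !secretKey) {
    throw new Error("[Langfuse] Error: missing API key credentials");
  }
  return {
    publicKey,
    secretKey,
    baseUrl: env.LANGFUSE_BASEURL ?? "https://cloud.langfuse.com",
  };
}

const cfg = loadLangfuseConfig({
  LANGFUSE_PUBLIC_KEY: "pk-test",
  LANGFUSE_SECRET_KEY: "sk-test",
});
console.log(cfg.baseUrl);
```

Failing fast on missing keys surfaces misconfiguration immediately instead of producing the authentication errors described in Troubleshooting below.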
Troubleshooting
Common issues:
- Missing or incorrect API keys will cause authentication failures.
- Network connectivity problems can prevent sending data to Langfuse.
- Improperly formatted inputs may cause errors during trace creation.
Error messages:
- Errors logged as `[Langfuse] Error:` followed by details indicate issues with the Langfuse client or network.
- If the node throws an error, ensure the environment variables are correctly set and valid.
- Use the "Continue On Fail" option to handle errors gracefully without stopping the workflow.