OpenAI Chat Model with Langfuse

For advanced usage with an AI chain

Overview

This node integrates OpenAI's chat models with Langfuse tracing capabilities for advanced AI workflows. It enables users to send prompts to an OpenAI chat model while simultaneously capturing detailed trace metadata via Langfuse, which is useful for monitoring, debugging, and analyzing AI interactions.

Common scenarios include:

  • Building conversational AI agents or chatbots that require detailed usage analytics.
  • Running AI chains where traceability of requests and responses is critical.
  • Experimenting with different OpenAI models while tracking performance and usage patterns.

Practical example:

  • A customer support chatbot powered by GPT-4 that logs each conversation session and user ID to Langfuse for later analysis of user behavior and model performance.
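
Under the hood, the node combines an OpenAI chat model with a Langfuse callback handler. The sketch below shows the general pattern using the @langchain/openai and langfuse-langchain packages; the node performs this wiring itself, and the model name, session ID, and user ID are placeholders.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { CallbackHandler } from "langfuse-langchain";

async function main() {
  // Langfuse callback handler: every model call it is attached to is
  // recorded as a trace (prompt, completion, token usage, latency).
  const langfuseHandler = new CallbackHandler({
    publicKey: process.env.LANGFUSE_PUBLIC_KEY,
    secretKey: process.env.LANGFUSE_SECRET_KEY,
    baseUrl: "https://cloud.langfuse.com",
    sessionId: "support-session-42", // groups related traces into one session
    userId: "user-1234",             // attributes the traces to a user
  });

  const model = new ChatOpenAI({
    model: "gpt-4o",
    temperature: 0.7,
    apiKey: process.env.OPENAI_API_KEY,
  });

  // Passing the handler as a callback is what makes the call appear in Langfuse.
  const response = await model.invoke(
    "Summarize the customer's last message in one sentence.",
    { callbacks: [langfuseHandler] }
  );
  console.log(response.content);
}

main();
```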

Properties

  • Credential: An API key credential for authenticating with OpenAI and Langfuse services.
  • Langfuse Metadata: A collection of metadata fields attached to Langfuse traces:
    • Custom Metadata (JSON): Optional extra metadata as a JSON object.
    • Session ID: Identifier used to group related traces into a session.
    • User ID: Optional identifier for attributing traces to a user.
  • Model: The OpenAI chat model to use for generating completions. It can be selected from a list or specified by ID.
  • Options: Additional options, mapped to model parameters in the sketch after this list:
    • Base URL: Override the default OpenAI API base URL.
    • Frequency Penalty: Penalizes repeated tokens.
    • Max Retries: Number of retry attempts for failed requests.
    • Maximum Tokens: Maximum number of tokens to generate.
    • Presence Penalty: Penalizes tokens that are already present.
    • Reasoning Effort: Controls reasoning token usage (low, medium, high).
    • Response Format: Text or JSON output.
    • Sampling Temperature: Controls randomness of the output.
    • Timeout: Maximum request time in milliseconds.
    • Top P: Controls output diversity via nucleus sampling.
  • Notice: Informational notice about using the JSON response format and model compatibility.
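
For orientation, the Options above map roughly onto ChatOpenAI constructor fields as in the sketch below; the field names are assumptions based on the @langchain/openai package, and the base URL is a placeholder. Reasoning Effort and Response Format are passed through separate model parameters not shown here.

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Rough mapping of the node's Options to ChatOpenAI constructor fields.
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0.2,      // Sampling Temperature
  topP: 1,               // Top P (nucleus sampling)
  frequencyPenalty: 0,   // Frequency Penalty
  presencePenalty: 0,    // Presence Penalty
  maxTokens: 1024,       // Maximum Tokens
  maxRetries: 2,         // Max Retries
  timeout: 60_000,       // Timeout in ms
  configuration: {
    baseURL: "https://api.example.com/v1", // Base URL override (placeholder endpoint)
  },
});
```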

Output

The node outputs data on the ai_languageModel output channel. The main output is the JSON response from the OpenAI chat model, which contains the generated text or JSON object depending on the selected response format.

If the JSON response format is enabled, the model is constrained to return valid JSON, provided the prompt includes the word "json" as OpenAI requires (see the sketch below).

No binary data output is produced by this node.
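
As an illustration of JSON mode, the sketch below assumes the @langchain/openai package and its modelKwargs pass-through; OpenAI requires the prompt to mention "json" when JSON mode is enabled.

```typescript
import { ChatOpenAI } from "@langchain/openai";

async function main() {
  // JSON mode constrains the model to emit a single valid JSON object,
  // but the word "json" must appear somewhere in the prompt.
  const model = new ChatOpenAI({
    model: "gpt-4o-mini",
    modelKwargs: { response_format: { type: "json_object" } },
  });

  const response = await model.invoke(
    "Return the order status as json with keys `orderId` and `status`."
  );

  // The content is a JSON string; parse it before using it downstream.
  const parsed = JSON.parse(response.content as string);
  console.log(parsed);
}

main();
```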

Dependencies

  • Requires an API key credential that provides access to both OpenAI and Langfuse services (a hypothetical credential shape is sketched after this list).
  • Uses Langfuse SDK to create callback handlers for tracing AI calls.
  • Supports overriding the OpenAI API base URL for custom or non-OpenAI endpoints.
  • Requires an n8n environment configured with the appropriate credentials and network access.
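
As an illustration only, the combined credential could carry fields along these lines; the actual field names are defined by the node's credential type and may differ.

```typescript
// Hypothetical shape of the combined OpenAI + Langfuse credential.
// Real field names are defined by the node's credential type.
interface OpenAiLangfuseCredential {
  openAiApiKey: string;       // OpenAI API key
  openAiBaseUrl?: string;     // optional override for custom or OpenAI-compatible endpoints
  langfusePublicKey: string;  // Langfuse public key
  langfuseSecretKey: string;  // Langfuse secret key
  langfuseBaseUrl?: string;   // e.g. https://cloud.langfuse.com or a self-hosted URL
}
```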

Troubleshooting

  • Common issues:

    • Invalid or missing API credentials will cause authentication failures.
    • Using JSON response format without including the word "json" in the prompt may result in invalid JSON output.
    • Selecting unsupported models when overriding the base URL can lead to incompatibility errors.
    • Network timeouts can occur if the timeout value is set too low or there are connectivity issues.
  • Error messages and resolutions:

    • Authentication errors: Verify API keys and credential configuration.
    • JSON parsing errors: Ensure the prompt includes the "json" keyword and use a model that supports JSON mode (models released after November 2023).
    • Request timeout: Increase the timeout setting or check network stability.
    • Model not found or unsupported: Confirm model ID and compatibility with the chosen base URL.
