Embeddings Google Vertex Extended (GKE WLI)

Use Google Vertex AI Embeddings with output dimensions support

Overview

This node generates text embeddings using Google Vertex AI's embedding models, with extended support for specifying output dimensions. It is designed to convert input text into numerical vector representations that capture semantic meaning, which can then be used in various AI and machine learning workflows.

Common scenarios where this node is beneficial include:

  • Enhancing search functionality by converting documents or queries into embeddings for similarity comparison.
  • Clustering or classifying text data based on semantic content.
  • Integrating with vector stores or databases for efficient retrieval of related documents.
  • Supporting downstream AI tasks such as semantic similarity detection or document retrieval.

Practical example: You have a collection of customer feedback texts and want to find similar feedback entries or cluster them by topic. This node can generate embeddings for each feedback text, which you can then use with a vector database or clustering algorithm.
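
As a rough illustration of that workflow, the sketch below (plain TypeScript, separate from the node itself) shows how embedding vectors returned by this node could be compared with cosine similarity to rank feedback entries. The cosineSimilarity and mostSimilar helpers and the corpus shape are illustrative assumptions, not part of the node's API.

// Illustrative helpers, not part of the node: compare embedding vectors
// produced by this node using cosine similarity.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored feedback embeddings against a query embedding, highest score first.
function mostSimilar(
  query: number[],
  corpus: { text: string; embedding: number[] }[],
): { text: string; score: number }[] {
  return corpus
    .map((entry) => ({ text: entry.text, score: cosineSimilarity(query, entry.embedding) }))
    .sort((a, b) => b.score - a.score);
}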

Properties

  • Model Name: The specific Google Vertex AI model used to generate embeddings, for example text-embedding-004 or text-multilingual-embedding-002. See the Google Vertex AI Embeddings docs for details.
  • Output Dimensions: The number of dimensions for the output embeddings. Set to 0 to use the model's default dimensionality. Only supported by certain models, such as text-embedding-004.
  • Options: Additional options for embedding generation:
    • Region: The geographic region where the model is deployed (default: us-central1).
    • Task Type: The type of task for which the embeddings are generated. Possible values: Retrieval Document, Retrieval Query, Semantic Similarity, Classification, Clustering. Defaults to Retrieval Document.
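
For orientation, a configuration of this node might look like the object below; the key names (modelName, outputDimensions, region, taskType) and the RETRIEVAL_DOCUMENT value are assumptions inferred from the property labels above, not the node's verified internal schema.

// Hypothetical parameter values; key names are inferred from the property
// labels above and may not match the node's actual schema.
const embeddingNodeParameters = {
  modelName: 'text-embedding-004',   // Model Name
  outputDimensions: 256,             // set to 0 to keep the model's default dimensionality
  options: {
    region: 'us-central1',           // Region
    taskType: 'RETRIEVAL_DOCUMENT',  // Task Type (Retrieval Document)
  },
};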

Output

The node outputs an array of embeddings corresponding to the input texts. Each embedding is a numeric vector representing the semantic features of the input text.

  • The output is available under the json field of the node's output data.
  • Each item in the output corresponds to one input text and contains an array of numbers representing the embedding vector.
  • The node does not output binary data.

Example output JSON snippet for one embedding:

{
  "embedding": [0.123, -0.456, 0.789, ...]
}
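
To show how this output shape might be consumed downstream, here is a sketch for an n8n Code node; it assumes each incoming item exposes its vector under json.embedding, as in the snippet above.

// Sketch for a downstream n8n Code node: collect the embedding vector from
// each incoming item and pass it along, e.g. for insertion into a vector store.
const vectors = $input.all().map((item) => item.json.embedding);

return vectors.map((embedding, index) => ({ json: { index, embedding } }));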

Dependencies

  • Requires access to Google Cloud services with appropriate authentication.
  • Uses Google authentication libraries to obtain OAuth tokens for API calls.
  • The node expects an environment where it can reach either:
    • A local internal endpoint (http://vertex-gemini.gemini.svc.cluster.local:5000/embedding) for embedding requests if the model is gemini-embedding-001.
    • Or the official Google Vertex AI REST API endpoint for other models.
  • Requires an API key credential or OAuth token with permissions to call Google Vertex AI APIs.
  • The user must configure credentials in n8n to allow authenticated requests to Google Cloud.
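
For context, the authenticated call against the official endpoint might look roughly like the sketch below, which uses google-auth-library and the public Vertex AI predict API. The project ID, region, and model are placeholders, and the node's actual implementation may differ.

import { GoogleAuth } from 'google-auth-library';

// Minimal sketch of an authenticated embedding request to the public
// Vertex AI REST API; placeholders throughout, and the node's real
// implementation may differ. Requires Node 18+ for the global fetch.
async function embed(text: string): Promise<number[]> {
  const projectId = 'my-gcp-project'; // placeholder
  const region = 'us-central1';
  const model = 'text-embedding-004';

  const auth = new GoogleAuth({ scopes: ['https://www.googleapis.com/auth/cloud-platform'] });
  const client = await auth.getClient();
  const { token } = await client.getAccessToken();

  const url =
    `https://${region}-aiplatform.googleapis.com/v1/projects/${projectId}` +
    `/locations/${region}/publishers/google/models/${model}:predict`;

  const response = await fetch(url, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      instances: [{ content: text, task_type: 'RETRIEVAL_DOCUMENT' }],
      parameters: { outputDimensionality: 256 }, // only honored by models that support it
    }),
  });

  if (!response.ok) {
    throw new Error(`Google Vertex AI API error: ${response.status} - ${await response.text()}`);
  }

  const data = await response.json();
  return data.predictions[0].embeddings.values;
}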

Troubleshooting

  • API errors: Errors such as Google Vertex AI API error: <status> - <message> usually indicate an authentication problem or an incorrect project ID, region, or model name.
    • Verify that the API credentials are valid and have sufficient permissions.
    • Check that the specified region matches the deployment location of the model.
    • Confirm the model name is correct and supported.
  • Empty or missing embeddings: If embeddings are empty or missing, ensure the input text is non-empty and properly formatted.
  • Network issues: The node may fail if it cannot reach the internal or external API endpoints. Ensure network connectivity and firewall rules allow outbound requests.
  • Unsupported output dimensions: Setting Output Dimensions to a non-zero value for a model that does not support it will either have no effect or cause an error. Use this setting only with compatible models such as text-embedding-004.
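
As a small illustration of the configuration and empty-input checks above, a hypothetical pre-flight helper (not part of the node) could validate the inputs before any request is sent.

// Hypothetical pre-flight validation, not part of the node: drop empty texts
// and fail fast on an obviously incomplete configuration.
function validateInputs(texts: string[], model: string, region: string): string[] {
  if (!model || !region) {
    throw new Error('Model name and region must be configured for Google Vertex AI.');
  }
  const nonEmpty = texts.map((t) => t.trim()).filter((t) => t.length > 0);
  if (nonEmpty.length === 0) {
    throw new Error('No non-empty input texts were provided for embedding.');
  }
  return nonEmpty;
}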

Links and References
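
  • Google Vertex AI Embeddings documentation: https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings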
