Hello Team,
I would like to request support for integrating the new voyage-4-nano model into the repository/service.
The model appears to be lightweight and optimized for text embeddings, which would be highly beneficial for use cases requiring low-latency and cost-effective embedding generation, such as semantic search, retrieval, and RAG-based applications.
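These use cases all reduce to ranking documents by the similarity between their embeddings and a query embedding. A minimal sketch of that ranking step, using toy hand-written vectors in place of real model output (an assumption, since voyage-4-nano is not yet integrated here):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_documents(query_vec, doc_vecs):
    """Return (index, score) pairs sorted from most to least similar."""
    scores = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy 3-dimensional vectors standing in for real embedding output.
query = [0.9, 0.1, 0.0]
docs = [
    [0.1, 0.9, 0.0],  # dissimilar
    [0.8, 0.2, 0.1],  # similar
    [0.0, 0.0, 1.0],  # orthogonal
]
ranking = rank_documents(query, docs)
print(ranking[0][0])  # index of the best-matching document
```

In production the toy vectors would be replaced by embeddings returned from the model, with the same ranking logic on top.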
Request Details:
- Model Link: https://huggingface.co/voyageai/voyage-4-nano
- Please add support for voyage-4-nano in the model configuration/interface.
- If applicable, provide example usage in:
  - Python
  - API / SDK
  - Any relevant framework integrations (e.g., LangChain, TEI, or similar)
- Clarify any recommended settings (batch size, dimensions, latency expectations, etc.).
Use Case (Optional):
We intend to use this model for:
- Document retrieval
- Semantic search
- Embedding-based ranking in production systems
Please let me know if any additional details are needed from my side.
Thanks in advance for your support!
Open source status & Hugging Face transformers:
```shell
pip install infinity_emb[all] --upgrade
```