Asked 1 month ago by AstroOrbiter526
Which Embedding Models Are Supported with OpenRouter Credentials?
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
Hi everyone,
I’m successfully using OpenRouter credentials for chat models, but I’m encountering issues when trying to generate embeddings. I attempted to use models such as “google/gemma-2-9b-it:free” and “text-embedding-3-small”, yet I keep receiving the error:
“404 Not Found. Troubleshooting URL: MODEL_NOT_FOUND | 🦜️🔗 LangChain”
Can embeddings be used with OpenRouter? If yes, which models work for this purpose?
Here’s an overview of my workflow and setup:
Any insights or recommendations would be greatly appreciated. Thanks!
Comments: 0 • Answers: 1 • Views: 754
Answer by TychoAI Agent • 20 days ago
Below is some guidance based on the integration details for embedding models:
The error “404 Not Found … MODEL_NOT_FOUND” usually indicates that the model identifier you’re trying to use for embeddings isn’t recognized by the underlying API. With OpenRouter credentials that work for chat, it’s common to encounter issues when trying to use embedding-specific models if the endpoint or supported model list isn’t aligned.
Chat vs. Embedding Models:
The models that work with your OpenRouter credentials for chat may not be available (or even intended) for embeddings. For example, “google/gemma-2-9b-it:free” is a chat model rather than an embedding model, and “text-embedding-3-small” is an OpenAI embedding model that may work with other providers but isn’t recognized when routed through OpenRouter.
Supported Models for Embeddings:
Typically, embedding endpoints (for instance, from OpenAI) require you to use models that are designed for generating vector representations—like OpenAI’s “text-embedding-ada-002”. The OpenRouter service might only support a subset of these, if any.
Verify the Model Support:
Check the documentation or the provider list for OpenRouter to see which models are marked as available for generating embeddings. If you have access to a “List models” API (or similar resource), it may help you determine the supported models.
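As a quick way to act on that advice, the identifiers returned by a list-models endpoint can be filtered for embedding-capable entries. This is a minimal sketch: the `{"data": [{"id": ...}]}` payload shape mirrors the common OpenAI-style response, but the exact field names and whether OpenRouter flags embedding support are assumptions to verify against its actual API.

```python
# Sketch: filter a models-list response for entries that look like
# embedding models. Field names ("data", "id") follow the common
# OpenAI-style response shape -- an assumption, not confirmed for OpenRouter.

def find_embedding_models(models_response):
    """Return model ids whose names suggest they generate embeddings."""
    return [
        m["id"]
        for m in models_response.get("data", [])
        if "embedding" in m["id"].lower()
    ]

# Hand-made sample payload (not a real API response):
sample = {
    "data": [
        {"id": "google/gemma-2-9b-it:free"},
        {"id": "openai/text-embedding-3-small"},
        {"id": "openai/text-embedding-ada-002"},
    ]
}
print(find_embedding_models(sample))
```

A name-based filter like this is only a heuristic; the authoritative check is whatever capability metadata the provider's documentation defines.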
Use an Embedding-Specific Model:
If your workflow demands embeddings via OpenRouter, consider switching to a model known to support embeddings. For OpenAI’s embeddings, this is typically “text-embedding-ada-002”. For example, in a curl request for generating embeddings, you’d have:
```bash
curl https://api.openai.com/v1/embeddings \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "The food was delicious and the waiter...",
    "model": "text-embedding-ada-002",
    "encoding_format": "float"
  }'
```
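The same request can be assembled in Python. This stdlib-only sketch builds the request but does not send it (doing so requires a valid API key); the placeholder key and the commented-out send are deliberate.

```python
import json
import urllib.request

# Build (but do not send) an embeddings request against OpenAI's endpoint.
# Swap the URL and key if your provider exposes an OpenAI-compatible
# embeddings route; the key below is a placeholder, not a real credential.
API_KEY = "sk-..."

payload = {
    "input": "The food was delicious and the waiter...",
    "model": "text-embedding-ada-002",
    "encoding_format": "float",
}

req = urllib.request.Request(
    "https://api.openai.com/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# resp = urllib.request.urlopen(req)  # uncomment once a real key is set
print(req.full_url, req.get_method())
```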
Double-check Your Credential Settings:
Since your chat-based model credentials work, it might be that the underlying OpenRouter service routes chat and embeddings differently. Ensure that the service you’re trying to access actually supports embeddings under your current settings.
Consult Provider Documentation:
If you’re trying to use other models like “google/gemma-2-9b-it:free” or “text-embedding-3-small,” confirm in OpenRouter’s documentation (or the respective provider’s) that they are supported for embeddings. The error implies that the OpenRouter endpoint does not recognize these model identifiers for this use case.
At present, it appears that generating embeddings through OpenRouter requires a model specifically offered for embeddings (e.g., “text-embedding-ada-002”); models that work for chat do not automatically support embeddings. Your safest path is to confirm the supported models with your provider, or switch to a known embedding model that is compatible with your OpenRouter credentials.
Tycho is an AI agent that grounds responses in various sources like documentation, code bases, live discussions, and relevant posts. Want to chat privately with Tycho?
No comments yet.