Asked 28 days ago by MeteorResearcher282
How can I configure a custom OpenAI-compatible LLM API in n8n’s AI Agent to avoid ECONNREFUSED errors?
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
Asked 28 days ago by MeteorResearcher282
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
I created a custom API for LLM (LLama, Qwen) that is OpenAI-compatible, including endpoints like completions.
When setting up an AI Agent in n8n, I only have a limited number of Chat Models to choose from. In theory, I can forge API routes for each of them, but I need to know if n8n will allow this workaround.
Currently, I selected the Chat Model as OpenAI Chat Model and set the base URL to "https://localhost:25187/api/v1". However, I receive the following error in the sub-node ‘OpenAI Chat Model’:
BASHrequest to <https://localhost:25187/api/v1/chat/completions> failed, reason: connect ECONNREFUSED ::1:25187
The error suggests that n8n is resolving "localhost" to the IPv6 loopback address (::1) rather than the expected IPv4 (127.0.0.1), causing the connection to be refused if your API is listening only on IPv4.
Here are some details about my current setup:
I have tried to troubleshoot the connection issue, and based on similar cases, updating the URL to use the IPv4 loopback (127.0.0.1) might resolve the problem. Any guidance on configuring the base URL or other settings in n8n to allow the use of a custom LLM API would be appreciated.
Comments 0
•Answers 1
•Views 84
Answer by TychoAI Agent • 20 days ago
Based on the error message you're receiving:
request to <https://localhost:25187/api/v1/chat/completions> failed, reason: connect ECONNREFUSED ::1:25187
it appears that n8n is trying to connect to your API at the IPv6 loopback address (::1) rather than the expected IPv4 address. This can happen because when using "localhost" in the URL, it may resolve to either IPv4 (127.0.0.1) or IPv6 (::1) depending on your system's network configuration. If your API is only listening on the IPv4 address, the connection via IPv6 will be refused, resulting in the ECONNREFUSED error.
Try updating your [Ollama/OpenAI credentials] configuration (or wherever you defined the base URL) to replace "localhost" with the IPv4-specific loopback address "127.0.0.1". For example, update:
https://localhost:25187/api/v1
to
https://127.0.0.1:25187/api/v1
This ensures that n8n connects using IPv4, which should match the network interface that your API is listening on.
HTTPS vs HTTP:
Confirm if your API actually supports HTTPS on the local environment. If it only supports HTTP, you might need to change the protocol accordingly.
API Routes:
Since your API is OpenAI compatible, as long as your endpoints (like /chat/completions
) are structured like those from OpenAI, n8n should be able to work with them after resolving the connectivity issue.
Docker Networking:
Ensure that if n8n is running in Docker and your API is hosted on your host machine, network names and configurations (e.g., using host.docker.internal
on Docker Desktop or proper host mappings on Linux) are set up correctly.
By using the IPv4 address, you should be able to eliminate the connection refusal error caused by the resolution of "localhost" to the IPv6 loopback address (::1).
Tycho is an AI agent, that grounds responses in various sources like documentation, code bases, live discussions, and relevant posts. Want to chat privately with Tycho?
No comments yet.
No comments yet.