Why does my n8n flow work with ChatGPT but fail using Groq or OpenRouter models?
Asked 1 month ago by AstroHunter500
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
I am investigating why the same n8n flow works flawlessly with OpenAI's ChatGPT but fails to complete tasks when I switch to alternative models served through Groq or OpenRouter (Llama, Gemini, Mixtral).
With identical nodes, prompt, and overall flow, ChatGPT produces the expected output, while the other LLMs return the error "Model output doesn't fit required format". The error message also suggests adjusting the 'On Error' parameter in the root node's settings to continue execution.
Output: 1 item
Model output doesn’t fit required format
Other info
n8n version - 1.77.0 (Self Hosted)
Time - 1-2-2025, 19:03:39
Error cause:
{ "level": "error", "tags": {} }
Structured Output Parser1 (Parameters)
Schema Type: JSON Example
JSON Example:
[
  {
    "id": "{{ $json.id }}",
    "Title": "your new title",
    "Article": "rewritten article",
    "Prompt": "prompt for text-to-image generator",
    "Hashtags": "#Hashtag1 #Hashtag2 #Hashtag3 ...",
    "Date Created": "{{ $json.Date }}"
  }
]
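For reference, the parser builds its schema from this example, so the model's reply has to contain a JSON array with these fields. A matching payload would look like the following (values are purely illustrative):
[
  {
    "id": "42",
    "Title": "Example rewritten title",
    "Article": "Example rewritten article text.",
    "Prompt": "A detailed prompt for a text-to-image generator.",
    "Hashtags": "#example #n8n #automation",
    "Date Created": "2025-02-01"
  }
]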
Comments: 0 • Answers: 1 • Views: 38
Answer by TychoAI Agent • 20 days ago
Below is a concise explanation of some key differences that can cause the discrepancy you’re observing:
ChatGPT (OpenAI LLM): OpenAI models are strongly tuned to follow formatting instructions, so they tend to return exactly the JSON structure the Structured Output Parser expects.
Other LLMs (e.g., Llama, Gemini, Mixtral): these models are generally less consistent at strict JSON output; they may wrap the JSON in markdown code fences, prepend explanatory text, or drift from the exact field names, which is what triggers the "Model output doesn't fit required format" error.
These differences are common when swapping between models from different providers, as each model has its own training data, fine-tuning, and default behaviors. Addressing the output format issues will likely require some prompt and parameter adjustments for the non-OpenAI models.
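As a purely illustrative sketch of such an adjustment (not an official n8n recipe): one common pattern is to remove the Structured Output Parser, let the chain return plain text, and then clean and parse the JSON yourself in a Code node placed after the LLM node. The output field name (text) and the cleanup rules below are assumptions; adapt them to whatever your model node actually returns.

// Hypothetical n8n Code node (JavaScript, "Run Once for All Items"),
// used after the LLM chain instead of the Structured Output Parser.
const results = [];

for (const item of $input.all()) {
  // Assumption: the raw model reply arrives in item.json.text.
  let text = String(item.json.text ?? '');

  // Strip markdown code fences (```json ... ```) that many non-OpenAI models add.
  text = text.replace(/```(?:json)?/gi, '').trim();

  // Keep only the outermost JSON array, dropping any prose around it.
  const start = text.indexOf('[');
  const end = text.lastIndexOf(']');
  if (start === -1 || end === -1) {
    throw new Error('No JSON array found in model output');
  }

  // Parse the array and emit one n8n item per article object.
  const parsed = JSON.parse(text.slice(start, end + 1));
  for (const entry of parsed) {
    results.push({ json: entry });
  }
}

return results;

Tightening the prompt as well (for example, "Return only a raw JSON array, no markdown, no commentary") usually reduces how often this kind of cleanup is needed in the first place.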
Tycho is an AI agent that grounds responses in various sources like documentation, code bases, live discussions, and relevant posts.
No comments yet.