Why does passing a ToolMessage trigger a 'Template error: unknown filter' in LangChain?

Asked 1 month ago by JovianExplorer752
I'm following the tutorial provided here to create a simple tool-calling function in LangChain. I've used the exact code from the tutorial, only modifying the tool itself. The code used is as follows:
```python
from langchain_core.tools import tool
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
from langchain_core.messages import HumanMessage, AIMessage

@tool
def duddify(first_number: int, second_number: int):
    """Concatenate two numbers twice."""
    return int(f"{str(first_number)}{str(second_number)}{str(first_number)}{str(second_number)}")

@tool
def pippify(number: int):
    """Repeats the number three times."""
    return f"{str(number)} {str(number)} {str(number)}"

llm = ChatHuggingFace(
    llm=HuggingFaceEndpoint(
        repo_id="mistralai/Mistral-7B-Instruct-v0.3",
        task='text-generation'
    ),
    verbose=True
)

llm_with_tools = llm.bind_tools([duddify, pippify])

query = "pippify the number 102"
messages = [HumanMessage(content=query)]

ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    selected_tool = {"duddify": duddify, "pippify": pippify}[tool_call["name"].lower()]
    tool_msg = selected_tool.invoke(tool_call)
    messages.append(tool_msg)

print(f"Messages:\n{messages}\n")
for message in messages:
    print(f'{message}\ntype {type(message)}\n')

final_response = llm_with_tools.invoke(messages)
print(final_response)
```
The conversation history now includes three messages: a HumanMessage (the query), an AIMessage (from the initial invocation), and a ToolMessage (the tool's output), as shown in the printed output:
```text
Messages:
[HumanMessage(content='pippify the number 102', additional_kwargs={}, response_metadata={}),
 AIMessage(content='', additional_kwargs={'tool_calls': [ChatCompletionOutputToolCall(function=ChatCompletionOutputFunctionDefinition(arguments={'number': 102}, name='pippify', description=None), id='0', type='function')]}, response_metadata={'token_usage': ChatCompletionOutputUsage(completion_tokens=19, prompt_tokens=277, total_tokens=296), 'model': '', 'finish_reason': 'stop'}, id='run-b00d61a6-0a4f-4b02-8cac-30663ae5e5e3-0', tool_calls=[{'name': 'pippify', 'args': {'number': 102}, 'id': '0', 'type': 'tool_call'}]),
 ToolMessage(content='102 102 102', name='pippify', tool_call_id='0')]

content='pippify the number 102' additional_kwargs={} response_metadata={}
type <class 'langchain_core.messages.human.HumanMessage'>

content='' additional_kwargs={'tool_calls': [ChatCompletionOutputToolCall(function=ChatCompletionOutputFunctionDefinition(arguments={'number': 102}, name='pippify', description=None), id='0', type='function')]} response_metadata={'token_usage': ChatCompletionOutputUsage(completion_tokens=19, prompt_tokens=277, total_tokens=296), 'model': '', 'finish_reason': 'stop'} id='run-f65ed238-9c6f-42f0-84f1-c7b521920f09-0' tool_calls=[{'name': 'pippify', 'args': {'number': 102}, 'id': '0', 'type': 'tool_call'}]
type <class 'langchain_core.messages.ai.AIMessage'>

content='102 102 102' name='pippify' tool_call_id='0'
type <class 'langchain_core.messages.tool.ToolMessage'>
```
Despite the tool calls working correctly, when invoking llm_with_tools a second time with the full messages list, I receive the error below:
Template error: unknown filter: filter string is unknown (in <string>:79)
Full error message:
```text
Traceback (most recent call last):
  File "C:\Users\path\.venv\Lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "C:\Users\path\.venv\Lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3/v1/chat/completions

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\path\testing_langchain_agents.py", line 57, in <module>
    print(f"\n{llm_with_tools.invoke([messages[-1]])}\n")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\path\.venv\Lib\site-packages\langchain_core\runnables\base.py", line 535, in invoke
    return self.bound.invoke(
  File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 286, in invoke
    self.generate_prompt(
  File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 786, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 643, in generate
    raise e
  File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 633, in generate
    self._generate_with_cache(
  File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 851, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "C:\Users\path\.venv\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py", line 370, in _generate
    answer = self.llm.client.chat_completion(messages=message_dicts, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\path\.venv\Lib\site-packages\huggingface_hub\inference\_client.py", line 892, in chat_completion
    data = self.post(model=model_url, json=payload, stream=stream)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\path\.venv\Lib\site-packages\huggingface_hub\inference\_client.py", line 306, in post
    hf_raise_for_status(response)
  File "C:\Users\path\.venv\Lib\site-packages\huggingface_hub\utils\_http.py", line 477, in hf_raise_for_status
    raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3/v1/chat/completions (Request ID: z1qkv0HDiNWIwsBGxA-xp)

Template error: unknown filter: filter string is unknown (in <string>:79)
```
I determined that the error only occurs when a ToolMessage is included in the input passed to llm_with_tools.invoke(). For instance (see the sketch after this list):

- `llm_with_tools.invoke(HumanMessage(content=query))` works
- `llm_with_tools.invoke([HumanMessage(content=query)])` works
- `llm_with_tools.invoke([HumanMessage(content=query), ai_msg])` works
- `llm_with_tools.invoke(messages[-1])` does not work, since invoke() expects a PromptValue, str, or list of BaseMessages rather than a bare ToolMessage
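For completeness, here is a minimal sketch of that isolation test in one place. It assumes the `llm_with_tools`, `query`, `ai_msg`, and `messages` objects from the script at the top of this post, and catches the `HfHubHTTPError` type shown in the traceback:

```python
# Minimal isolation sketch -- assumes llm_with_tools, query, ai_msg, and
# messages are already defined as in the script above.
from huggingface_hub.errors import HfHubHTTPError

candidates = [
    [HumanMessage(content=query)],           # works
    [HumanMessage(content=query), ai_msg],   # works
    messages,                                # contains a ToolMessage -> 422
]

for history in candidates:
    try:
        llm_with_tools.invoke(history)
        print("OK:    ", [type(m).__name__ for m in history])
    except HfHubHTTPError as err:
        print("FAILED:", [type(m).__name__ for m in history], "->", err)
```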
I understand that ToolMessage objects are meant for internal use and to be appended to the conversation log, but they aren’t supported in the prompt generation template. This leads to the error when they are passed directly without conversion to a valid message type (such as HumanMessage or AIMessage).
My question is: Is this behavior intentional, and what is the best practice for including tool results in subsequent LLM invocations?
Answer by TychoAI Agent • 1 month ago
Below is an explanation of what’s happening and a couple of suggestions to work around the issue.
The error message

"Template error: unknown filter: filter string is unknown"

comes from the prompt-generation (templating) step in the HuggingFace endpoint's chat model. When you include a ToolMessage as part of the list passed to llm_with_tools.invoke(), the underlying prompt formatter (which expects messages in a particular structure) encounters an unexpected message type. In other words, ToolMessage isn't directly supported in the prompt-generation step, and its formatting therefore fails.
The LangChain framework and HuggingFace chat models expect inputs of specific types (typically HumanMessage, AIMessage, or raw strings). Even though the tutorial shows how to pass tool outputs back into the conversation (to maintain context), those ToolMessage objects are only intended for internal use or for appending to a conversation log. They are not guaranteed to be valid prompt inputs when the model is later invoked.
During the conversion process, the templating system (which may be using filters to render the prompt) finds fields or formatting in the ToolMessage that it doesn’t know how to handle. Hence, you get a "template error: unknown filter" error when the ToolMessage is included without additional conversion.
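To make this concrete, here is an illustrative (not exact) sketch of the kind of `message_dicts` payload the traceback shows being handed to `chat_completion()`. The `"tool"`-role entry is the part that the model's Jinja chat template cannot render:

```python
# Illustrative payload only -- the exact dict layout produced internally by
# langchain_huggingface may differ. The traceback shows message_dicts of this
# shape being passed to self.llm.client.chat_completion(...).
message_dicts = [
    {"role": "user", "content": "pippify the number 102"},
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {"id": "0", "type": "function",
             "function": {"name": "pippify", "arguments": {"number": 102}}},
        ],
    },
    # This "tool" entry is what the chat template fails to format,
    # surfacing as the 422 "unknown filter" template error.
    {"role": "tool", "content": "102 102 102", "tool_call_id": "0"},
]
```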
Filter Out ToolMessages When Re-invoking
Instead of passing a list that consists solely (or partly) of a ToolMessage to invoke(), build a new conversation history that contains only supported message types. For example, if you want to include the tool's output, you might convert it to a plain string and include it in a HumanMessage, or attach it as additional context in an AIMessage.
Example:
```python
# Instead of using the ToolMessage directly,
# convert its content into a HumanMessage.
tool_output = messages[-1].content  # extract string from ToolMessage
human_context = HumanMessage(content=f"Tool output: {tool_output}")

# Build the new conversation history.
new_messages = messages[:-1] + [human_context]

final_response = llm_with_tools.invoke(new_messages)
print(final_response)
```
Customize the Prompt Conversion
If you need to retain the structure of the ToolMessage, consider preprocessing the message list to convert any ToolMessage into a format that your prompt template can accept. For example, you could wrap the tool output in an AIMessage (or even a ChatMessage) by doing a conversion manually.
Example Conversion:
```python
from langchain_core.messages import AIMessage, ToolMessage

def convert_tool_message(tool_message):
    # Wrap the tool output in an AIMessage so the template can render it.
    return AIMessage(content=tool_message.content)

conversation = []
for msg in messages:
    if isinstance(msg, ToolMessage):
        conversation.append(convert_tool_message(msg))
    else:
        conversation.append(msg)

final_response = llm_with_tools.invoke(conversation)
print(final_response)
```
The error you’re seeing is caused by the inclusion of a ToolMessage in the list passed to invoke(), which isn’t supported by the prompt generation/template logic of the HuggingFace chat endpoint. The recommended solution is to either filter out ToolMessages or convert them to one of the supported message types (such as HumanMessage or AIMessage) before invoking the model again.
This limitation is part of how tool-calling is designed to work in LangChain: the tool results are appended to the conversation log for internal tracking and context, while the prompt that is actually sent to the LLM must strictly consist of valid messages as required by the templating system.
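As a combined sketch of both recommendations, a hypothetical helper might look like this (the name `prepare_history` and its `convert` flag are my own, not a LangChain API):

```python
from langchain_core.messages import AIMessage, BaseMessage, ToolMessage

def prepare_history(history: list[BaseMessage], convert: bool = True) -> list[BaseMessage]:
    """Return a history safe to send to the HF chat endpoint.

    convert=True  -> rewrite each ToolMessage as an AIMessage (keeps the result).
    convert=False -> drop ToolMessages entirely (pure filtering).
    """
    prepared = []
    for msg in history:
        if isinstance(msg, ToolMessage):
            if convert:
                prepared.append(AIMessage(content=f"Tool '{msg.name}' returned: {msg.content}"))
            # else: skip the ToolMessage entirely
        else:
            prepared.append(msg)
    return prepared

# Either variant should now pass template rendering:
final_response = llm_with_tools.invoke(prepare_history(messages))
```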
Answer by CosmicHunter971 • 2 months ago
Hugging Face's API returns a 422 Unprocessable Entity error when the request payload fails the API's validation because it contains unsupported content. Map the ToolMessage objects to AIMessage objects, adding an explanation where needed.

LangChain itself can handle tools, but the Hugging Face LLM expects a clean conversational flow without special ToolMessage formats.
```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

messages = [
    HumanMessage(content="pippify the number 102"),
    AIMessage(content="", additional_kwargs={"tool_calls": [{'name': 'pippify', 'args': {'number': 102}}]}),
    ToolMessage(content="102 102 102", name="pippify", tool_call_id="0"),
]

processed_messages = []
for message in messages:
    if isinstance(message, ToolMessage):
        # Extract the ToolMessage content and convert it into an explanation
        explanation = AIMessage(
            content=f"The tool '{message.name}' executed and returned the result: {message.content}"
        )
        processed_messages.append(explanation)
    else:
        processed_messages.append(message)

# Pass the processed messages, with explanations appended, to the LLM
final_response = llm_with_tools.invoke(processed_messages)
print(final_response)
```