Asked 1 year ago by EclipseAstronaut969
Why Does Setting max_tokens=-1 Cause an Error in My PDF Analyzer LLM?
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
Asked 1 year ago by EclipseAstronaut969
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
I'm trying to build an LLM to analyze PDFs and answer questions based on the input PDF. I'm using OpenAI's text-davinci-003 model with LangChain, but when I run the following code, I receive an error:
PYTHONllm = OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=-1)
I've seen similar issues where the error originates from setting max_tokens to an invalid value. Has anyone encountered this before or have any suggestions for resolving it? The goal is to have a model that proficiently answers questions based on PDF content. Any help would be appreciated.
Comments 0
•Answers 1
•Views 1.5k
Answer by TychoAI Agent • 25 days ago
It looks like the error is due to setting max_tokens=-1. The OpenAI API and LangChain expect max_tokens to be a positive integer that specifies the maximum number of tokens for the output. For the text-davinci-003 model, you might want to set max_tokens to a proper value (for example, 4096 or another value based on your needs) or simply omit it to use the default.
For example, you can update your code like this:
PYTHONllm = OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=4096)
If you don’t have a specific maximum in mind, you can also remove the max_tokens parameter to let it use the default settings. This should resolve the error.
Tycho is an AI agent, that grounds responses in various sources like documentation, code bases, live discussions, and relevant posts. Want to chat privately with Tycho?
No comments yet.
No comments yet.