Asked 1 year ago by AstralCosmonaut128
Why doesn't the OpenAI assistants API return a reliable answer in my Python script?
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
Asked 1 year ago by AstralCosmonaut128
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
Hello Everyone
I am using the new OpenAI assistants API in Python. The code runs without errors, but I never receive an answer. Interestingly, it sometimes works in debug mode or when run as a .ipynb file, yet fails in a standard script.
Here is the code I am using:
PYTHONfrom openai import OpenAI client = OpenAI(api_key=“Key”) def get_prompt(question): assistant = client.beta.assistants.create( name="Math tutor", instructions="I am a math tutor. I can help you with your math homework.", tools=[{"type": "code_interpreter"}], model = "gpt-4-1106-preview" ) thread = client.beta.threads.create() print(thread) message = client.beta.threads.messages.create( thread_id=thread.id, role = "user", content = question ) print(message) run = client.beta.threads.runs.create( thread_id=thread.id, assistant_id=assistant.id, ) run = client.beta.threads.runs.retrieve( thread_id=thread.id, run_id=run.id, ) messages = client.beta.threads.messages.list( thread_id=thread.id, ) for message in reversed(messages.data): print(message.role + ": " + message.content[0].text.value)
answer = get_prompt(“What is the best time to visit Paris?”)
print(answer)
I appreciate any help in getting a reliable return from the API.
Comments 0
•Answers 1
•Views 1.8k
Answer by TychoAI Agent • 25 days ago
Below is some guidance on troubleshooting your code:
Your function currently prints the results but does not return any value. For example, at the end of your function you could gather the final message text and return it:
PYTHONdef get_prompt(question): assistant = client.beta.assistants.create( name="Math tutor", instructions="I am a math tutor. I can help you with your math homework.", tools=[{"type": "code_interpreter"}], model="gpt-4-1106-preview" ) thread = client.beta.threads.create() print("Thread created:", thread) message = client.beta.threads.messages.create( thread_id=thread.id, role="user", content=question ) print("User message:", message) run = client.beta.threads.runs.create( thread_id=thread.id, assistant_id=assistant.id, ) run = client.beta.threads.runs.retrieve( thread_id=thread.id, run_id=run.id, ) messages = client.beta.threads.messages.list( thread_id=thread.id, ) # Extract the awaited assistant reply from the messages. # (Adjust the extraction depending on how the message object is structured) for msg in reversed(messages.data): if msg.role == "assistant": assistant_reply = msg.content[0].text.value print("Assistant:", assistant_reply) return assistant_reply return None
Sometimes these API operations are asynchronous. It might be that the assistant’s reply isn't ready by the time you retrieve messages. Consider adding a polling mechanism or a slight delay before retrieving the run result:
PYTHONimport time # Right after creating the run, consider waiting a few seconds time.sleep(2) # Adjust the sleep duration as needed.
Your experience of varied behavior between debug mode and execution in a notebook may relate to timing differences. In notebooks or debug mode, execution might be slower, giving the API enough time to process the assistant's run before you query messages. In a script, consider using explicit waits or checking the run status before retrieving messages.
Ensure you’re using standard quotation marks (") for strings. Sometimes curly quotes (“ ”) can cause issues in Python.
These changes should help you get a reliable return from the API call. Let me know if you have any further questions on using the OpenAI API or if you'd like additional details on handling asynchronous responses.
Tycho is an AI agent, that grounds responses in various sources like documentation, code bases, live discussions, and relevant posts. Want to chat privately with Tycho?
No comments yet.
No comments yet.