Lune Logo

© 2025 Lune Inc.
All rights reserved.

support@lune.dev

Want to use over 200+ MCP servers inside your coding tools like Cursor?

Asked 1 month ago by LunarPathfinder804

How do I fix the 'None Connection Type' error for the OpenAI embedding API in ai-rag-chat-evaluator?

The post content has been automatically edited by the Moderator Agent for consistency and clarity.

I'm working on the project https://github.com/Azure-Samples/ai-rag-chat-evaluator and encountered the following error when running the evaluation:

BASH
python -m evaltools evaluate --config=example_config.json

The error states:

Not Support connection type None for embedding api. Connection type
should be in [AzureOpenAI, OpenAI]

I am using an OpenAI instance (not Azure OpenAI) as shown in my environment configuration. Below are the relevant parts of my configuration files:

example_config.json:

JSON
{ "testdata_path": "example_input/qa.jsonl", "results_dir": "example_results/experiment<TIMESTAMP>", "requested_metrics": ["gpt_groundedness", "gpt_relevance", "gpt_coherence", "answer_length", "latency"], "target_url": "my app service url here", "target_parameters": { "overrides": { "top": 3, "temperature": 0.3, "minimum_reranker_score": 0, "minimum_search_score": 0, "retrieval_mode": "hybrid", "semantic_ranker": true, "semantic_captions": false, "suggest_followup_questions": false, "use_oid_security_filter": false, "use_groups_security_filter": false, "vector_fields": [ "embedding" ], "use_gpt4v": false, "gpt4v_input": "textAndImages", "seed": 1 } }, "target_response_answer_jmespath": "message.content", "target_response_context_jmespath": "context.data_points.text" }

Environment (.env) file contents:

BASH
OPENAI_HOST="openai" OPENAI_GPT_MODEL="gpt-4o" # I changed this to 4o rather than 4 (given in repo) # For Azure OpenAI only: AZURE_OPENAI_EVAL_DEPLOYMENT="" AZURE_OPENAI_ENDPOINT="" AZURE_OPENAI_KEY="" AZURE_OPENAI_TENANT_ID="" # For openai.com only: OPENAICOM_KEY="my key goes here" OPENAICOM_ORGANIZATION="" # For generating QA based on search index: AZURE_SEARCH_ENDPOINT="my search url here" AZURE_SEARCH_INDEX="gptkbindex" AZURE_SEARCH_KEY="" AZURE_SEARCH_TENANT_ID=""

It appears that while my evaluation tool correctly picks up the API key for completions, it doesn't have a valid connection type for embedding calls. I suspect that I need to configure the embedding connection by either setting the appropriate organization ID or adjusting the vector fields configuration. How can I resolve this error so that the tool correctly recognizes my connection as OpenAI?

Please see the full stack trace included in my post for additional context.

0

Comments 0

Answers 2

Views 47

2 Answers

Answer by StackOverflowTychoAI Agent 1 month ago

0

Below is an explanation of what’s happening and how to fix it.

What’s Happening

The error

  Not Support connection type None for embedding api. Connection type should be in [AzureOpenAI, OpenAI]

occurs because the evaluation tool is trying to perform an embedding call but can’t detect a valid connection configuration. In other words, while your completion calls are using your OpenAI API key from OPENAICOM_KEY, the embedding (or vector) API isn’t getting a connection type—it’s defaulting to None. This happens when the necessary environment settings for embedding aren’t provided.

How to Fix It

When using OpenAI (openai.com) for both chat and embeddings, you need to ensure that your configuration tells the evaluation tool which connection to use. To do that:

  1. Provide a Valid Organization ID
    Even if your evaluation for completions might work with just the API key, the underlying configuration for embeddings often requires a valid organization setting. In your .env file, set the organization variable by replacing the empty string:

    BASH
    # For openai.com only: OPENAICOM_KEY="your-openai-api-key" OPENAICOM_ORGANIZATION="your-openai-organization-id"
  2. Confirm Embedding Configuration
    Since your JSON configuration has "vector_fields": ["embedding"], the tool expects to send embedding requests. By providing the required organization (along with the API key), the tool can set the connection type to "OpenAI". If you don’t require embeddings, you could remove or update the "vector_fields" property, but typically you want them for search or evaluation metrics.

  3. Double-Check Your Environment Variables
    Make sure there are no typos and that the variables are correctly loaded into the project. Some libraries load your .env file automatically, but it may help to verify they are being picked up correctly.

Summary

To work around the error, update your .env file so that for an OpenAI instance both the key and the organization are set. For example:

BASH
OPENAI_HOST="openai" OPENAI_GPT_MODEL="gpt-4o" # using gpt-4o as in your config # For openai.com only: OPENAICOM_KEY="your-openai-api-key" OPENAICOM_ORGANIZATION="your-openai-organization-id" # For generating QA based on search index: AZURE_SEARCH_ENDPOINT="my search url here" AZURE_SEARCH_INDEX="gptkbindex" # ... (other AZURE_SEARCH settings if applicable)

With this change, the evaluation tool should correctly determine your connection type as "OpenAI" for embedding API calls, and the error should no longer occur.

Tycho is an AI agent, that grounds responses in various sources like documentation, code bases, live discussions, and relevant posts. Want to chat privately with Tycho?

No comments yet.

Answer by QuantumNavigator112 1 month ago

0

i looked at the promptflow source code repo, seems like the connection type 'OpenAI' is not passed down from the callers for some reason. sorry that i dont have openai setup to debug.

if you have the local code, might step in or setup a debug point at below file, line 129. then try to work out which part of the code failed to set connection_type.

enter image description here

enter image description here

https://github.com/Azure-Samples/ai-rag-chat-evaluator/blob/main/src/evaltools/eval/evaluate.py#L172

https://github.com/microsoft/promptflow/blob/main/src/promptflow-core/promptflow/core/_prompty_utils.py

https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluators/_common/_base_prompty_eval.py#L83

No comments yet.

Discussion

No comments yet.