Python: Raise exceptions when services are not set up in integration test workflow (#9874)

### Motivation and Context

Our integration tests should run every test case unless a case is
explicitly marked as skipped or expected to fail (xfail). Currently, our
setup permits test cases to be skipped when the required environment
variables are not configured. While this approach is beneficial for
local testing, since not all developers have access to all the necessary
resources, it presents challenges in our pipeline: changes to
environment variables can lead to test cases being skipped without
notice, potentially allowing issues to go undetected until a specific
connector is reported as problematic.

### Description


The `is_service_setup_for_testing` helper now also reads the
`raise_if_not_set` argument from an environment variable
(`INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION`). When
`INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION` is not set and the argument
is not passed in, the method does not raise, allowing people to run the
integration tests locally.

When `INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION` is set to `true` or
`raise_if_not_set` is explicitly set to `True`, the helper raises on
missing environment variables, and test collection fails.

`INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION` is set to `true` in our
pipeline environment.
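
For illustration, here is a minimal sketch (not part of this PR) of how a test module consumes the helper under the new behavior; `is_service_setup_for_testing` and the `skipif` pattern come from the changed files below, while the test function itself is a hypothetical placeholder:

```python
import os

import pytest

from tests.utils import is_service_setup_for_testing

# Locally (INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION unset): returns False when the
# variable is missing, so the test below is skipped.
# In the pipeline (INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION=true): the helper raises
# at module import instead, so test collection fails and nothing is skipped silently.
google_ai_setup: bool = is_service_setup_for_testing(["GOOGLE_AI_API_KEY"])


@pytest.mark.skipif(not google_ai_setup, reason="Google AI not setup.")
def test_google_ai_service_reachable():
    # Hypothetical test body for illustration only.
    assert os.environ["GOOGLE_AI_API_KEY"]
```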

### Contribution Checklist

<!-- Before submitting this PR, please make sure: -->

- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution
Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
and the [pre-submission formatting
script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts)
raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄
TaoChenOSU authored Dec 3, 2024
1 parent 3b8a7c2 commit 30b67ec
Showing 9 changed files with 43 additions and 39 deletions.
1 change: 1 addition & 0 deletions .github/workflows/python-integration-tests.yml
@@ -21,6 +21,7 @@ env:
   # Configure a constant location for the uv cache
   UV_CACHE_DIR: /tmp/.uv-cache
   Python_Integration_Tests: Python_Integration_Tests
+  INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION: ${{ true }}
   AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME: ${{ vars.AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME }} # azure-text-embedding-ada-002
   AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ vars.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
   AZURE_OPENAI_TEXT_DEPLOYMENT_NAME: ${{ vars.AZURE_OPENAI_TEXT_DEPLOYMENT_NAME }}
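Note: `${{ true }}` is a GitHub Actions expression; when assigned to an `env` entry it is rendered as the string `true`, which matches the string comparison the helper performs (see `python/tests/utils.py` below).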
2 changes: 1 addition & 1 deletion python/tests/integration/audio_to_text/audio_to_text_test_base.py
@@ -11,7 +11,7 @@
 # There is only the whisper model available on Azure OpenAI for audio to text. And that model is
 # only available in the North Switzerland region. Therefore, the endpoint is different than the one
 # we use for other services.
-azure_setup = is_service_setup_for_testing(["AZURE_OPENAI_AUDIO_TO_TEXT_ENDPOINT"], raise_if_not_set=False)
+azure_setup = is_service_setup_for_testing(["AZURE_OPENAI_AUDIO_TO_TEXT_ENDPOINT"])


 class AudioToTextTestBase:
3 changes: 2 additions & 1 deletion python/tests/integration/audio_to_text/test_audio_to_text.py
@@ -6,7 +6,7 @@

 from semantic_kernel.connectors.ai.audio_to_text_client_base import AudioToTextClientBase
 from semantic_kernel.contents import AudioContent
-from tests.integration.audio_to_text.audio_to_text_test_base import AudioToTextTestBase
+from tests.integration.audio_to_text.audio_to_text_test_base import AudioToTextTestBase, azure_setup

 pytestmark = pytest.mark.parametrize(
     "service_id, audio_content, expected_text",
@@ -21,6 +21,7 @@
             "azure_openai",
             AudioContent.from_audio_file(os.path.join(os.path.dirname(__file__), "../../", "assets/sample_audio.mp3")),
             ["hi", "how", "are", "you", "doing"],
+            marks=pytest.mark.skipif(not azure_setup, reason="Azure Audio to Text not setup."),
             id="azure_openai",
         ),
     ],
26 changes: 11 additions & 15 deletions python/tests/integration/completions/chat_completion_test_base.py
@@ -56,21 +56,17 @@
 # There is no single model in Ollama that supports both image and tool call in chat completion
 # We are splitting the Ollama test into three services: chat, image, and tool call. The chat model
 # can be any model that supports chat completion. Also, Ollama is only available on Linux runners in our pipeline.
-ollama_setup: bool = is_service_setup_for_testing(
-    ["OLLAMA_CHAT_MODEL_ID"], raise_if_not_set=False
-) and is_test_running_on_supported_platforms(["Linux"])
-ollama_image_setup: bool = is_service_setup_for_testing(
-    ["OLLAMA_CHAT_MODEL_ID_IMAGE"], raise_if_not_set=False
-) and is_test_running_on_supported_platforms(["Linux"])
-ollama_tool_call_setup: bool = is_service_setup_for_testing(
-    ["OLLAMA_CHAT_MODEL_ID_TOOL_CALL"], raise_if_not_set=False
-) and is_test_running_on_supported_platforms(["Linux"])
-google_ai_setup: bool = is_service_setup_for_testing(
-    ["GOOGLE_AI_API_KEY", "GOOGLE_AI_GEMINI_MODEL_ID"], raise_if_not_set=False
-)
-vertex_ai_setup: bool = is_service_setup_for_testing(
-    ["VERTEX_AI_PROJECT_ID", "VERTEX_AI_GEMINI_MODEL_ID"], raise_if_not_set=False
-)
+ollama_setup: bool = is_service_setup_for_testing(["OLLAMA_CHAT_MODEL_ID"]) and is_test_running_on_supported_platforms([
+    "Linux"
+])
+ollama_image_setup: bool = is_service_setup_for_testing([
+    "OLLAMA_CHAT_MODEL_ID_IMAGE"
+]) and is_test_running_on_supported_platforms(["Linux"])
+ollama_tool_call_setup: bool = is_service_setup_for_testing([
+    "OLLAMA_CHAT_MODEL_ID_TOOL_CALL"
+]) and is_test_running_on_supported_platforms(["Linux"])
+google_ai_setup: bool = is_service_setup_for_testing(["GOOGLE_AI_API_KEY", "GOOGLE_AI_GEMINI_MODEL_ID"])
+vertex_ai_setup: bool = is_service_setup_for_testing(["VERTEX_AI_PROJECT_ID", "VERTEX_AI_GEMINI_MODEL_ID"])
 onnx_setup: bool = is_service_setup_for_testing(
     ["ONNX_GEN_AI_CHAT_MODEL_FOLDER"], raise_if_not_set=False
 )  # Tests are optional for ONNX
10 changes: 5 additions & 5 deletions python/tests/integration/completions/test_text_completion.py
@@ -33,11 +33,11 @@
 from tests.integration.completions.completion_test_base import CompletionTestBase, ServiceType
 from tests.utils import is_service_setup_for_testing, is_test_running_on_supported_platforms, retry

-ollama_setup: bool = is_service_setup_for_testing(
-    ["OLLAMA_TEXT_MODEL_ID"], raise_if_not_set=False
-) and is_test_running_on_supported_platforms(["Linux"])
-google_ai_setup: bool = is_service_setup_for_testing(["GOOGLE_AI_API_KEY"], raise_if_not_set=False)
-vertex_ai_setup: bool = is_service_setup_for_testing(["VERTEX_AI_PROJECT_ID"], raise_if_not_set=False)
+ollama_setup: bool = is_service_setup_for_testing(["OLLAMA_TEXT_MODEL_ID"]) and is_test_running_on_supported_platforms([
+    "Linux"
+])
+google_ai_setup: bool = is_service_setup_for_testing(["GOOGLE_AI_API_KEY"])
+vertex_ai_setup: bool = is_service_setup_for_testing(["VERTEX_AI_PROJECT_ID"])
 onnx_setup: bool = is_service_setup_for_testing(
     ["ONNX_GEN_AI_TEXT_MODEL_FOLDER"], raise_if_not_set=False
 )  # Tests are optional for ONNX
14 changes: 5 additions & 9 deletions
@@ -41,15 +41,11 @@
 mistral_ai_setup: bool = is_service_setup_for_testing(
     ["MISTRALAI_API_KEY", "MISTRALAI_EMBEDDING_MODEL_ID"], raise_if_not_set=False
 )  # We don't have a MistralAI deployment
-google_ai_setup: bool = is_service_setup_for_testing(
-    ["GOOGLE_AI_API_KEY", "GOOGLE_AI_EMBEDDING_MODEL_ID"], raise_if_not_set=False
-)
-vertex_ai_setup: bool = is_service_setup_for_testing(
-    ["VERTEX_AI_PROJECT_ID", "VERTEX_AI_EMBEDDING_MODEL_ID"], raise_if_not_set=False
-)
-ollama_setup: bool = is_service_setup_for_testing(
-    ["OLLAMA_EMBEDDING_MODEL_ID"], raise_if_not_set=False
-) and is_test_running_on_supported_platforms(["Linux"])
+google_ai_setup: bool = is_service_setup_for_testing(["GOOGLE_AI_API_KEY", "GOOGLE_AI_EMBEDDING_MODEL_ID"])
+vertex_ai_setup: bool = is_service_setup_for_testing(["VERTEX_AI_PROJECT_ID", "VERTEX_AI_EMBEDDING_MODEL_ID"])
+ollama_setup: bool = is_service_setup_for_testing([
+    "OLLAMA_EMBEDDING_MODEL_ID"
+]) and is_test_running_on_supported_platforms(["Linux"])


 class EmbeddingServiceTestBase:
3 changes: 2 additions & 1 deletion python/tests/integration/text_to_audio/test_text_to_audio.py
@@ -5,7 +5,7 @@

 from semantic_kernel.connectors.ai.text_to_audio_client_base import TextToAudioClientBase
 from semantic_kernel.contents import AudioContent
-from tests.integration.text_to_audio.text_to_audio_test_base import TextToAudioTestBase
+from tests.integration.text_to_audio.text_to_audio_test_base import TextToAudioTestBase, azure_setup

 pytestmark = pytest.mark.parametrize(
     "service_id, text",
@@ -18,6 +18,7 @@
         pytest.param(
             "azure_openai",
             "Hello World!",
+            marks=pytest.mark.skipif(not azure_setup, reason="Azure Audio to Text not setup."),
             id="azure_openai",
         ),
     ],
6 changes: 4 additions & 2 deletions python/tests/integration/text_to_audio/text_to_audio_test_base.py
@@ -10,7 +10,7 @@

 # TTS model on Azure model is not available in regions at which we have chat completion models.
 # Therefore, we need to use a different endpoint for testing.
-is_service_setup_for_testing(["AZURE_OPENAI_TEXT_TO_AUDIO_ENDPOINT"])
+azure_setup = is_service_setup_for_testing(["AZURE_OPENAI_TEXT_TO_AUDIO_ENDPOINT"])


 class TextToAudioTestBase:
@@ -21,5 +21,7 @@ def services(self) -> dict[str, TextToAudioClientBase]:
         """Return text-to-audio services."""
         return {
             "openai": OpenAITextToAudio(),
-            "azure_openai": AzureTextToAudio(endpoint=os.environ["AZURE_OPENAI_TEXT_TO_AUDIO_ENDPOINT"]),
+            "azure_openai": AzureTextToAudio(endpoint=os.environ["AZURE_OPENAI_TEXT_TO_AUDIO_ENDPOINT"])
+            if azure_setup
+            else None,
         }
17 changes: 12 additions & 5 deletions python/tests/utils.py
@@ -41,18 +41,25 @@ async def retry(
     return None


-def is_service_setup_for_testing(env_var_names: list[str], raise_if_not_set: bool = True) -> bool:
+def is_service_setup_for_testing(env_var_names: list[str], raise_if_not_set: bool | None = None) -> bool:
     """Check if the environment variables are set and not empty.
-    By default, this function raises an exception if the environment variable is not set.
-    This is to make sure we throw before starting any tests and we cover all services in our pipeline.
     Returns True if all the environment variables in the list are set and not empty. Otherwise, returns False.
     This method can also be configured to raise an exception if the environment variables are not set.
-    For local testing, you can set `raise_if_not_set=False` to avoid the exception.
+    By default, this function does not raise an exception if the environment variables in the list are not set.
+    To raise an exception, set the environment variable `INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION` to `true`,
+    or set the `raise_if_not_set` parameter to `True`.
+    On local testing, not raising an exception is useful to avoid the need to set up all services.
+    On CI, the environment variables should be set, and the tests should fail if they are not set.
     Args:
         env_var_names (list[str]): Environment variable names.
-        raise_if_not_set (bool): Raise exception if the environment variable is not set.
+        raise_if_not_set (bool | None): Raise an exception if the environment variables are not set.
     """
+    if raise_if_not_set is None:
+        raise_if_not_set = os.getenv("INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION", "false").lower() == "true"
+
     def does_env_var_exist(env_var_name):
         exist = env_var_name in os.environ and os.environ[env_var_name]
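The hunk above is cut off before the body of `does_env_var_exist`. For context, a plausible reconstruction of the full helper after this change is sketched below; the raising branch and the exception type are assumptions inferred from the docstring, since the diff does not show them:

```python
import os


def is_service_setup_for_testing(env_var_names: list[str], raise_if_not_set: bool | None = None) -> bool:
    """Check if the environment variables are set and not empty."""
    # Resolve the default from the pipeline environment, as added in this PR.
    if raise_if_not_set is None:
        raise_if_not_set = os.getenv("INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION", "false").lower() == "true"

    def does_env_var_exist(env_var_name: str) -> bool:
        exist = bool(env_var_name in os.environ and os.environ[env_var_name])
        if not exist and raise_if_not_set:
            # Assumed raising branch (not visible in the truncated diff): failing here,
            # at module import time, makes pytest abort collection rather than skip tests.
            raise KeyError(f"Environment variable {env_var_name} is not set.")
        return exist

    return all(does_env_var_exist(name) for name in env_var_names)
```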
