Feat/add api tests #5

Closed
cgoncalves94 wants to merge 11 commits into main from feat/add-api-tests

Conversation

@cgoncalves94 (Owner)

This pull request introduces automated testing for the project by adding a GitHub Actions workflow, configuring development dependencies in pyproject.toml, and setting up test fixtures for FastAPI in conftest.py. These changes aim to streamline testing and ensure compatibility across multiple Python versions.

Automated Testing Setup:

  • .github/workflows/run-tests.yml: Added a GitHub Actions workflow to run Python tests on pull requests targeting the main or master branches. The workflow tests against Python versions 3.9, 3.10, and 3.11, installs dependencies using Poetry, and runs tests with pytest.

Development Dependencies Configuration:

  • pyproject.toml: Added a [tool.poetry.group.dev.dependencies] section to include pytest, pytest-asyncio, and httpx as development dependencies for testing.
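As a rough sketch, the added section plausibly looks like the following (the caret version constraints are assumptions for illustration, not the PR's actual pins):

```toml
[tool.poetry.group.dev.dependencies]
pytest = "^8.0"
pytest-asyncio = "^0.23"
httpx = "^0.27"
```

Grouping these under `tool.poetry.group.dev.dependencies` keeps them out of production installs (`poetry install --without dev`).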

Test Fixtures for FastAPI:

  • tests/conftest.py: Introduced pytest fixtures for testing FastAPI applications. Added an async_client fixture using httpx.AsyncClient for asynchronous tests and a placeholder client fixture for potential synchronous tests.

This commit introduces a suite of tests for the API endpoints defined in `src/backend/routes.py`.

Key changes include:

1.  **Testing Environment Setup**:
    *   Added `pytest`, `pytest-asyncio`, and `httpx` as development dependencies in `pyproject.toml`.
    *   Created a `tests` directory with `tests/conftest.py` to provide an `async_client` fixture for FastAPI testing.
    *   Created `tests/__init__.py`.

2.  **Test Coverage for Endpoints**:
    *   `GET /api/`: Basic health check.
    *   `POST /api/upload-document/`: Tests for successful uploads (.txt, .md), unsupported file types, and error handling by mocking the `ingest_file` utility.
    *   `GET /api/new-thread`: Ensures new thread ID generation.
    *   `GET /api/conversations-list`: Tests retrieval of conversation lists, including empty lists and error scenarios, by mocking `langgraph_client.threads.search`.
    *   `GET /api/conversations/{thread_id}`: Verifies retrieval of specific conversations and correct parsing of diverse message types (user, assistant, tool calls, tool results). This involved mocking `langgraph_client.threads.create` and `langgraph_client.threads.get_state`.
    *   `POST /api/conversations/{thread_id}/send-message`: Tests message sending functionality, mocking `langgraph_client.runs.create`.
    *   `GET /api/conversations/{thread_id}/stream-message`: Implements a detailed test for Server-Sent Events (SSE) by mocking the `langgraph_client.runs.join_stream` async generator and validating the stream content and event types.
    *   `DELETE /api/conversations/{thread_id}`: Tests deletion of specific threads, covering success and "not found" cases by mocking `langgraph_client.threads.get_state` and `langgraph_client.threads.delete`.

All tests are asynchronous and utilize `httpx.AsyncClient`. Mocking of the `langgraph_client` and other internal utilities is used extensively to ensure tests are isolated and repeatable.
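The streaming test's core trick, replacing the `langgraph_client.runs.join_stream` async generator with a canned one, can be sketched with the standard library alone. The event names and payload shapes below are illustrative assumptions, not the real LangGraph wire format:

```python
import asyncio
from typing import AsyncIterator

async def fake_join_stream() -> AsyncIterator[dict]:
    # Canned stand-in for langgraph_client.runs.join_stream: yields a few
    # partial chunks followed by a completion event.
    for chunk in [
        {"event": "messages/partial", "data": [{"content": "Hel"}]},
        {"event": "messages/partial", "data": [{"content": "Hello"}]},
        {"event": "messages/complete", "data": [{"content": "Hello"}]},
    ]:
        yield chunk

async def collect_events(stream: AsyncIterator[dict]) -> list:
    # Consume the stream the way the SSE endpoint would, one event at a time.
    return [chunk async for chunk in stream]

events = asyncio.run(collect_events(fake_join_stream()))
print(len(events), events[-1]["event"])  # 3 messages/complete
```

The test then asserts on both the number of emitted events and their types, which is exactly what validating the SSE response body amounts to once the generator is mocked.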

The test for the `DELETE /api/conversations` (delete all threads) endpoint was planned but not implemented in this commit.

This commit completes the test suite for the API endpoints and introduces a GitHub Actions workflow for CI.

Changes include:

1.  **Tests for `DELETE /api/conversations`**:
    *   Added tests for the "delete all conversations" endpoint.
    *   Covers scenarios such as successful deletion of multiple threads, no threads existing, failure during the search for threads, and failure of individual delete operations within the loop.
    *   Utilizes mocking for `langgraph_client.threads.search` and `langgraph_client.threads.delete`.

2.  **GitHub Actions Workflow (`.github/workflows/run-tests.yml`)**:
    *   Added a workflow to automatically run tests on pull requests targeting `main` and `master` branches.
    *   The workflow uses a matrix strategy to test across Python versions 3.9, 3.10, and 3.11.
    *   It checks out the code, sets up Python, installs Poetry, installs project dependencies using `poetry install`, and then runs `pytest`.
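A minimal sketch of such a workflow follows; the step names, action versions, and the exact Poetry invocation are assumptions rather than the PR's verbatim file:

```yaml
name: Run Tests

on:
  pull_request:
    branches: [main, master]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Poetry
        run: pipx install poetry
      - name: Install dependencies
        run: poetry install
      - name: Run tests
        run: poetry run pytest
```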

This ensures that all API endpoints are tested and that tests are automatically executed as part of the PR process to maintain code quality.
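The "delete all conversations" flow exercised by the new tests can be mirrored with `unittest.mock.AsyncMock` stand-ins. The `delete_all_threads` helper and the thread payloads below are hypothetical, for illustration only:

```python
import asyncio
from unittest.mock import AsyncMock

# AsyncMock stand-ins for the two langgraph_client.threads methods the tests patch.
threads = AsyncMock()
threads.search.return_value = [{"thread_id": "t1"}, {"thread_id": "t2"}]
threads.delete.return_value = None

async def delete_all_threads() -> int:
    # Hypothetical endpoint logic: find every thread, then delete each in a loop.
    found = await threads.search()
    for thread in found:
        await threads.delete(thread["thread_id"])
    return len(found)

deleted = asyncio.run(delete_all_threads())
print(deleted)  # 2
```

A failure of an individual delete can then be simulated with `threads.delete.side_effect = Exception(...)` to drive the error-path scenarios described above.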

@gemini-code-assist (bot) left a comment


Hello @cgoncalves94, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team,

Gemini here, providing a summary of this pull request by @cgoncalves94. The primary goal of this PR is to establish a foundational automated testing setup for the project's API. This involves configuring the necessary development dependencies, creating reusable test fixtures specifically for testing FastAPI applications, and integrating these tests into a GitHub Actions workflow for continuous integration.

The changes introduce pytest for running tests, pytest-asyncio for handling asynchronous tests with FastAPI, and httpx for making asynchronous HTTP requests to the API endpoints during testing. A dedicated test file (tests/test_routes.py) has been added containing numerous tests that cover various API routes, including file uploads, thread management (creation, listing, deletion), sending messages, and streaming responses. Mocking is extensively used in these tests to isolate the API logic from external dependencies like the LangGraph client and file ingestion functions, allowing for focused and reliable testing of the endpoint behavior and error handling.

Highlights

  • Automated Testing Setup: Introduces automated API testing using pytest.
  • CI Workflow: Adds a GitHub Actions workflow (.github/workflows/run-tests.yml) to automatically run tests on pull requests targeting main or master.
  • Development Dependencies: Configures pytest, pytest-asyncio, and httpx as development dependencies in pyproject.toml.
  • FastAPI Test Fixtures: Adds pytest fixtures in tests/conftest.py for easily creating asynchronous test clients (async_client) for the FastAPI application.
  • Comprehensive API Tests: Adds extensive tests in tests/test_routes.py covering /api/, /api/upload-document/, /api/new-thread, /api/conversations-list, /api/conversations/{thread_id}, /api/conversations, and /api/conversations/{thread_id}/stream-message endpoints, including success and error scenarios, and using mocking for external services.

Changelog

  • .github/workflows/run-tests.yml
    • Added new file.
    • Configured GitHub Actions workflow to run tests on pull requests.
    • Set up testing across Python versions 3.9, 3.10, and 3.11.
    • Included steps for checking out code, setting up Python, installing dependencies with Poetry, and running tests with pytest.
  • pyproject.toml
    • Added pytest, pytest-asyncio, and httpx to development dependencies.
  • tests/conftest.py
    • Created new file.
    • Added pytest fixtures for FastAPI testing, including an async_client using httpx.AsyncClient.
  • tests/test_routes.py
    • Created new file.
    • Added numerous asynchronous tests for various API endpoints.
    • Implemented mocking for external dependencies (ingest_file, langgraph_client) to isolate API logic.
    • Tested success cases, file type handling, ingestion errors, thread management, message sending, and stream parsing.


API endpoints stand,
Tested now, across the land.
Async calls fly,
Mocked services comply,
Confidence is close at hand.



@gemini-code-assist (bot) left a comment

Code Review

This pull request is a valuable addition, introducing a comprehensive suite of automated API tests using GitHub Actions, pytest, and httpx. This significantly enhances the project's maintainability and reliability. The tests cover a good range of scenarios for the API routes, including success cases, various error conditions, and the complex message streaming logic. The use of mocking is appropriate and well-implemented.

Overall, the quality of the tests is high, and they demonstrate a good understanding of the system under test. I have a couple of suggestions for minor improvements to further enhance clarity and robustness.

Summary of Findings

  • Test Fixture Clarity: The base_url in the async_client fixture in tests/conftest.py could be set to "http://testserver" instead of "http://127.0.0.1:8000" to more accurately reflect that httpx.AsyncClient(app=app) handles requests internally rather than hitting a live network address. This is a medium severity suggestion for improved clarity.
  • Test Data Specificity: A comment in tests/test_routes.py (line 407) regarding a message type (AIMessageChunk vs AssistantMessage) introduces slight uncertainty. Clarifying the expected type from LangGraph would improve test robustness and maintainability. This is a medium severity suggestion.
  • Placeholder Fixture (Not Commented): The client fixture in tests/conftest.py is a placeholder with a pass statement. While noted in the PR description, if synchronous tests are not planned imminently, it could be removed to avoid unused code. (Severity: low, not commented due to review settings).
  • Minor Formatting (Not Commented): There are minor formatting inconsistencies (extra spaces) in tests/test_routes.py when patching multiple mocks in with statements (e.g., line 258). (Severity: low, not commented due to review settings).
  • Debug Code in Tests (Not Commented): The test_stream_message_success_and_logic function in tests/test_routes.py includes print statements for debugging (lines 486-488). These are generally removed from final test code. (Severity: low, not commented due to review settings).

Merge Readiness

This pull request introduces a valuable and well-structured set of API tests, significantly improving the project's robustness. The tests are comprehensive and cover many important scenarios.

I've identified a couple of medium severity areas for potential improvement related to clarity in the test fixture setup and specificity in test data assumptions. Addressing these points would further strengthen the test suite.

As an AI, I am not authorized to approve pull requests. I recommend that the author considers the suggestions provided. Once addressed, the PR should be in excellent shape for a final review and merge by a human maintainer.

Comment thread: tests/conftest.py (Outdated)

```python
@pytest.fixture(scope="function")  # "function" scope so each test gets an isolated client
async def async_client():
    # AsyncClient(app=app) routes requests to the app in-process; no live server runs
    async with AsyncClient(app=app, base_url="http://127.0.0.1:8000") as ac:
        yield ac
```


Severity: medium

The base_url here is set to "http://127.0.0.1:8000". When AsyncClient is initialized with the app argument (as done here), it routes requests internally to the FastAPI application without making actual network calls. Using "http://127.0.0.1:8000" might be slightly misleading, as it could imply that a live server is running and being targeted on that port, which isn't the case for these tests.

A more common convention for base_url in this scenario (when app is provided) is "http://testserver". This makes it clearer that the client is interacting with a test instance of the app.

Could we consider changing this to "http://testserver" for better clarity regarding the test setup?

Suggested change:

```python
# before
async with AsyncClient(app=app, base_url="http://127.0.0.1:8000") as ac:  # Ensure base_url matches your test server setup
# after
async with AsyncClient(app=app, base_url="http://testserver") as ac:  # base_url for internal routing with app
```

Comment thread: tests/test_routes.py (Outdated)
google-labs-jules (bot) and others added 9 commits on May 22, 2025 at 00:56
This commit modifies the `.github/workflows/run-tests.yml` workflow to address issues with `poetry install` failures.

Key changes:
- Updated the Python version matrix to `["3.11", "3.12"]` to align with your project's `requires-python = ">=3.11"` setting in `pyproject.toml`.
- Added a step to clear Poetry's cache before installation to prevent issues with stale cached data.
- Included steps to print debug information (Python version, Poetry version, relevant `pyproject.toml` lines, and `poetry env info`) to aid in diagnosing CI failures.
- Added the `--verbose` flag to the `poetry install` command for more detailed output.

These changes aim to make the CI test runs more robust and provide better diagnostics if dependency resolution issues persist.
I've modified the GitHub Actions test workflow (`run-tests.yml`) to remove the `poetry.lock` file before running `poetry install`. This is intended to force Poetry to perform a fresh dependency resolution based on the `pyproject.toml` file, which should help resolve persistent version conflicts observed in CI, particularly the incorrect Python version constraint being inferred.

The `pyproject.toml` specifies `requires-python = ">=3.11"`, and this change aims to ensure the lock file does not interfere with this declaration during CI runs.
To ensure these `backend.*` imports are resolved correctly during
testing, I updated `pyproject.toml` to include:
```toml
[tool.pytest.ini_options]
pythonpath = ["src"]
```
This adds the `src` directory to pytest's import path (via the
built-in `pythonpath` option, available since pytest 7.0), allowing
modules under `src` (like `backend`) to be imported as if they are
top-level packages.
@cgoncalves94 deleted the feat/add-api-tests branch on May 22, 2025 at 01:40