
Add benchmark with extensions #3723

Merged · 9 commits · Dec 12, 2024
Conversation

@patrick91 (Member) commented Dec 12, 2024

Description

Types of Changes

  • Core
  • Bugfix
  • New feature
  • Enhancement/optimization
  • Documentation

Issues Fixed or Closed by This PR

Checklist

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have read the CONTRIBUTING document.
  • I have added tests to cover my changes.
  • I have tested the changes and verified that they work and don't break anything (as well as I can manage).

Summary by Sourcery

Add a new benchmark test for executing queries with extensions and update the GitHub Actions workflow to use a newer version of the CodSpeed action. Refactor function names in the pyinstrument test for better clarity.

New Features:

  • Introduce a new benchmark test for executing queries with extensions in the test suite.

Enhancements:

  • Refactor function names in the pyinstrument test to improve clarity and readability.

CI:

  • Update the GitHub Actions workflow to use CodSpeedHQ/action@v3.2.0 for running benchmarks.

Tests:

  • Add a new test file for benchmarking query execution with different extensions.

@sourcery-ai bot (Contributor) commented Dec 12, 2024

Reviewer's Guide by Sourcery

This PR adds benchmark tests for evaluating GraphQL query execution performance with different extensions. The implementation includes refactoring of existing pyinstrument tests, updates to dependencies, and improvements to test configurations.
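The core measurement pattern described here — driving an async GraphQL execution from a synchronous benchmark callable — can be sketched without strawberry itself. This is a minimal stand-in: `execute_query` and `run` are illustrative names, not the PR's actual code, and the real benchmark passes `run` to pytest-codspeed's `benchmark` fixture.

```python
import asyncio


async def execute_query(count: int) -> list:
    # Stand-in for schema.execute(...): produce `count` items asynchronously,
    # mimicking a parametrized GraphQL query over a list field.
    return [{"index": i} for i in range(count)]


def run(count: int) -> list:
    # pytest-codspeed measures a plain callable, so the async execution is
    # driven to completion with asyncio.run inside the measured function.
    return asyncio.run(execute_query(count))
```

In the real test, the callable closes over a schema built with the extensions under test, so each parametrization measures the same query with a different extension stack.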

ER diagram for dependency updates

erDiagram
    PYTEST ||--o{ PYTEST_CODSPEED : uses
    PYTEST_CODSPEED {
        string version "^3.0.0"
        string python ">=3.9"
    }
    note for PYTEST_CODSPEED "Updated version and Python compatibility for pytest-codspeed."

Class diagram for test configuration changes

classDiagram
    class Session {
        +run_always(command: str, *args)
        +_session: Session
    }
    class TestConfiguration {
        +tests_starlette(session: Session, gql_core: str)
        +test_pydantic(session: Session, pydantic: str, gql_core: str)
    }
    Session --> TestConfiguration
    note for TestConfiguration "Updated test configurations to ignore benchmarks and refactored Starlette tests."
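The test-configuration change noted above boils down to passing an extra ignore flag to pytest from the nox session. A hedged sketch of that idea, using a hypothetical helper (the function name and exact flags are assumptions, not the noxfile's code):

```python
def pytest_args(ignore_benchmarks: bool = True) -> list:
    # Hypothetical helper mirroring the noxfile change: regular test sessions
    # skip the benchmark suite, which runs separately under pytest-codspeed.
    args = ["pytest", "tests/"]
    if ignore_benchmarks:
        args.append("--ignore=tests/benchmarks")
    return args
```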

File-Level Changes

  • Added new benchmark tests for GraphQL query execution with extensions
    (tests/benchmarks/test_execute_with_extensions.py)
      • Created a new benchmark test file with parametrized tests for different numbers of items
      • Implemented test cases for no extensions, SimpleExtension, and ResolveExtension
      • Added support for async execution in benchmark tests
  • Refactored pyinstrument extension tests for better resource management
    (tests/extensions/test_pyinstrument.py)
      • Improved temporary file handling using a context manager
      • Renamed test functions for better clarity
      • Updated assertions to be more precise
  • Updated project dependencies and configurations
    (pyproject.toml, .github/workflows/test.yml, noxfile.py)
      • Upgraded pytest-codspeed to version >=3.0.0
      • Updated the CodSpeed GitHub Action to v3.2.0
      • Modified the Starlette test configuration
      • Added the benchmark tests directory to the ignore list in pydantic tests
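The pyinstrument refactor's "temporary file handling using a context manager" can be illustrated with a stdlib-only sketch; this shows the pattern, not the PR's exact test code, and `roundtrip_report` is a hypothetical name:

```python
import json
import tempfile
from pathlib import Path


def roundtrip_report(data: dict) -> dict:
    # Sketch of the context-manager pattern: the temporary directory (and the
    # report file inside it) is removed even if an assertion fails mid-block,
    # which is the resource-management win over manual file creation.
    with tempfile.TemporaryDirectory() as tmp_dir:
        report = Path(tmp_dir) / "results.json"
        report.write_text(json.dumps(data))
        return json.loads(report.read_text())
```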


codspeed-hq bot commented Dec 12, 2024

CodSpeed Performance Report

Merging #3723 will not alter performance

Comparing benchmark/extensions (e504448) with main (6553c9e)

🎉 Hooray! pytest-codspeed just leveled up to 3.1.0!

A heads-up, this is a breaking change and it might affect your current performance baseline a bit. But here's the exciting part - it's packed with new, cool features and promises improved result stability 🥳!
Curious about what's new? Visit our releases page to delve into all the awesome details about this new version.

Summary

✅ 15 untouched benchmarks
🆕 6 new benchmarks

Benchmarks breakdown

Benchmark                                           main   benchmark/extensions   Change
🆕 test_execute[with_no_extensions-items_1000]      N/A    66.7 ms                N/A
🆕 test_execute[with_no_extensions-items_100]       N/A    11.1 ms                N/A
🆕 test_execute[with_resolveextension-items_1000]   N/A    183.2 ms               N/A
🆕 test_execute[with_resolveextension-items_100]    N/A    22.2 ms                N/A
🆕 test_execute[with_simpleextension-items_1000]    N/A    66.9 ms                N/A
🆕 test_execute[with_simpleextension-items_100]     N/A    11.1 ms                N/A

codecov bot commented Dec 12, 2024

Codecov Report

Attention: Patch coverage is 95.83333% with 2 lines in your changes missing coverage. Please review.

Project coverage is 97.01%. Comparing base (6553c9e) to head (e504448).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3723      +/-   ##
==========================================
+ Coverage   97.00%   97.01%   +0.01%     
==========================================
  Files         501      502       +1     
  Lines       33490    33520      +30     
  Branches     5592     5598       +6     
==========================================
+ Hits        32487    32521      +34     
- Misses        791      793       +2     
+ Partials      212      206       -6     

@botberry (Member) commented Dec 12, 2024

Apollo Federation Subgraph Compatibility Results

Feature              Federation 1 / Federation 2 Support
_service             🟢
@key (single)        🟢
@key (multi)         🟢
@key (composite)     🟢
repeatable @key      🟢
@requires            🟢
@provides            🟢
federated tracing    🔲
@link                🟢
@shareable           🟢
@tag                 🟢
@override            🟢
@inaccessible        🟢
@composeDirective    🟢
@interfaceObject     🟢


@patrick91 force-pushed the benchmark/extensions branch from 486368a to ff5758a (December 12, 2024 13:28)
@patrick91 force-pushed the benchmark/extensions branch from 1a86482 to dd29dcf (December 12, 2024 13:32)
@patrick91 patrick91 marked this pull request as ready for review December 12, 2024 17:46
@botberry (Member):
Hi, thanks for contributing to Strawberry 🍓!

We noticed that this PR is missing a RELEASE.md file. We use that file to automatically create releases here on GitHub and, most importantly, on PyPI!

So as soon as this PR is merged, a release will be made 🚀.

Here's an example of RELEASE.md:

Release type: patch

Description of the changes, ideally with some examples, if adding a new feature.

Release type can be one of patch, minor or major. We use semver, so make sure to pick the appropriate type. If in doubt feel free to ask :)

Here's the tweet text:

🆕 Release (next) is out! Thanks to @patrick91 for the PR 👏

Get it here 👉 https://strawberry.rocks/release/(next)

@patrick91 patrick91 merged commit 882e989 into main Dec 12, 2024
108 of 111 checks passed
@patrick91 patrick91 deleted the benchmark/extensions branch December 12, 2024 17:47
@sourcery-ai bot (Contributor) left a comment


Hey @patrick91 - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Please fill out the PR template sections (especially the Types of Changes and Checklist) to help reviewers understand the scope and status of your changes.

Here's what I looked at during the review:
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟡 Testing: 1 issue found
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines +41 to +50
def test_execute(
    benchmark: BenchmarkFixture, items: int, extensions: List[SchemaExtension]
):
    schema = strawberry.Schema(query=Query, extensions=extensions)

    def run():
        return asyncio.run(
            schema.execute(items_query, variable_values={"count": items})
        )


issue (testing): Missing validation of benchmark results

The benchmark test doesn't verify the correctness of the returned data. Consider adding assertions to ensure the query results are valid while measuring performance.
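One hedged way to address this feedback is to assert on the result inside (or right after) the benchmarked callable. The sketch below uses a stdlib stand-in for `schema.execute(...)` that mirrors a GraphQL result's `data`/`errors` shape; `execute` and `run_and_validate` are illustrative names, not the PR's code:

```python
import asyncio


async def execute(count: int) -> dict:
    # Stand-in for schema.execute(...); mirrors a GraphQL result's shape.
    return {
        "data": {"items": [{"index": i} for i in range(count)]},
        "errors": None,
    }


def run_and_validate(count: int) -> dict:
    result = asyncio.run(execute(count))
    # Checking the result alongside the timing guards against benchmarking
    # an execution path that silently errors out.
    assert result["errors"] is None
    assert len(result["data"]["items"]) == count
    return result
```

In the real test, a cheap check such as `assert result.errors is None` keeps validation overhead negligible relative to the measured query execution.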
