Add benchmark with extensions #3723
Conversation
Reviewer's Guide by Sourcery

This PR adds benchmark tests for evaluating GraphQL query execution performance with different extensions. The implementation includes refactoring of existing pyinstrument tests, updates to dependencies, and improvements to test configurations.

ER diagram for dependency updates

```mermaid
erDiagram
    PYTEST ||--o{ PYTEST_CODSPEED : uses
    PYTEST_CODSPEED {
        string version "^3.0.0"
        string python ">=3.9"
    }
    note for PYTEST_CODSPEED "Updated version and Python compatibility for pytest-codspeed."
```

Class diagram for test configuration changes

```mermaid
classDiagram
    class Session {
        +run_always(command: str, *args)
        +_session: Session
    }
    class TestConfiguration {
        +tests_starlette(session: Session, gql_core: str)
        +test_pydantic(session: Session, pydantic: str, gql_core: str)
    }
    Session --> TestConfiguration
    note for TestConfiguration "Updated test configurations to ignore benchmarks and refactored Starlette tests."
```
CodSpeed Performance Report

Merging #3723 will not alter performance. 🎉 Hooray!

| | Benchmark | main | benchmark/extensions | Change |
|---|---|---|---|---|
| 🆕 | test_execute[with_no_extensions-items_1000] | N/A | 66.7 ms | N/A |
| 🆕 | test_execute[with_no_extensions-items_100] | N/A | 11.1 ms | N/A |
| 🆕 | test_execute[with_resolveextension-items_1000] | N/A | 183.2 ms | N/A |
| 🆕 | test_execute[with_resolveextension-items_100] | N/A | 22.2 ms | N/A |
| 🆕 | test_execute[with_simpleextension-items_1000] | N/A | 66.9 ms | N/A |
| 🆕 | test_execute[with_simpleextension-items_100] | N/A | 11.1 ms | N/A |
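A quick back-of-the-envelope reading of the 1000-item rows above (this is just arithmetic on the reported numbers, nothing measured here):

```python
# Figures taken from the 1000-item rows of the CodSpeed table (in ms).
no_extensions_ms = 66.7       # test_execute[with_no_extensions-items_1000]
resolve_extension_ms = 183.2  # test_execute[with_resolveextension-items_1000]

# Overhead the resolve extension adds, spread over the 1000 resolved items.
overhead_per_item_ms = (resolve_extension_ms - no_extensions_ms) / 1000
print(f"~{overhead_per_item_ms:.4f} ms resolve-extension overhead per item")
```

By contrast, the simple-extension row at 1000 items (66.9 ms) is within noise of the no-extension baseline, so the cost is concentrated in the per-resolver hook.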
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main    #3723      +/-   ##
==========================================
+ Coverage   97.00%   97.01%   +0.01%
==========================================
  Files         501      502       +1
  Lines       33490    33520      +30
  Branches     5592     5598       +6
==========================================
+ Hits        32487    32521      +34
- Misses        791      793       +2
+ Partials      212      206       -6
```
Apollo Federation Subgraph Compatibility Results
The branch was updated twice during review:
- 486368a to ff5758a
- 1a86482 to dd29dcf
Hi, thanks for contributing to Strawberry 🍓! We noticed that this PR is missing a release file. So as soon as this PR is merged, a release will be made 🚀. Here's an example of a release file:

```
Release type: patch

Description of the changes, ideally with some examples, if adding a new feature.
```

Release type can be one of patch, minor or major. We use semver, so make sure to pick the appropriate type. If in doubt feel free to ask :)
Hey @patrick91 - I've reviewed your changes - here's some feedback:
Overall Comments:
- Please fill out the PR template sections (especially the Types of Changes and Checklist) to help reviewers understand the scope and status of your changes.
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟡 Testing: 1 issue found
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
```python
def test_execute(
    benchmark: BenchmarkFixture, items: int, extensions: List[SchemaExtension]
):
    schema = strawberry.Schema(query=Query, extensions=extensions)

    def run():
        return asyncio.run(
            schema.execute(items_query, variable_values={"count": items})
        )
```
issue (testing): Missing validation of benchmark results
The benchmark test doesn't verify the correctness of the returned data. Consider adding assertions to ensure the query results are valid while measuring performance.
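The suggestion above can be sketched as follows. This is a stdlib-only stand-in, not the PR's code: `ExecutionResult`, `execute_items_query`, and the `benchmark` callable below are hypothetical stubs mimicking the shapes of strawberry's result object and the pytest-codspeed fixture (assumed here to pass through the wrapped function's return value, as pytest-benchmark's fixture does), so the validation pattern can be shown in isolation:

```python
import asyncio
from typing import Any, Callable, Optional


class ExecutionResult:
    """Minimal stand-in mirroring the shape of strawberry's result object."""

    def __init__(self, data: Any, errors: Optional[list] = None) -> None:
        self.data = data
        self.errors = errors


async def execute_items_query(count: int) -> ExecutionResult:
    # Stand-in for schema.execute(items_query, variable_values={"count": count}).
    return ExecutionResult(data={"items": [{"index": i} for i in range(count)]})


def benchmark(fn: Callable[[], ExecutionResult]) -> ExecutionResult:
    # Stand-in for the fixture: the real one times repeated calls;
    # here it just invokes the callable once and passes the result through.
    return fn()


def test_execute(items: int = 100) -> None:
    def run() -> ExecutionResult:
        return asyncio.run(execute_items_query(items))

    result = benchmark(run)
    # The reviewer's point: assert on the data, not only the timing.
    assert result.errors is None
    assert len(result.data["items"]) == items


test_execute(100)
```

Because the benchmarked callable returns its result, the assertions run after the measured section and do not distort the timing itself.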
Description
Types of Changes
Issues Fixed or Closed by This PR
Checklist
Summary by Sourcery
Add a new benchmark test for executing queries with extensions and update the GitHub Actions workflow to use a newer version of the CodSpeed action. Refactor function names in the pyinstrument test for better clarity.
New Features:
Enhancements:
CI:
Tests: