implemented test for shadow classification #112
Conversation
Hi @dicaearchus, looks good! Very nice to have tests on the accuracy of the shadow transforms.
Just a couple of specific comments below and two general comments here:
- the PEP8 check reports some inconsistencies; you can see the specific violations in the output of the GitHub action (see the snippet below for reproducing them locally)
- the newly introduced test fails (see here); this seems to be due to the testdata files not being included in the repository?
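On the PEP8 point: assuming the GitHub action runs flake8 (a common choice; the exact tool is not visible here), the violations can be reproduced locally with something like:

```
pip install flake8
# run from the repository root; reports PEP8 violations per file and line
flake8 .
```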
Conventionally, you add all the tests referring to a Python file 'X.py' in a file called 'test_X.py'. This lets you immediately identify the test file for a given function, so this file should actually be called test_shadow_transforms.py.
This has now changed with 71adebe.
```python
methods = list(methods[i] for i in [0, 5, 6, 7, 8, 10, 11, 12])

I, M = _get_shadow_test_imagery()
for method in methods:
    ...  # loop body elided in the diff view
```
Here you are running the same test code on a list of parameters. Using a for loop works perfectly fine, so no need to change anything here, but just so you know: you can achieve the same result with a pytest fixture using parameters (see here). The advantage of the latter approach is that you get a separate test for each of the parameters, which makes it easier to identify failing tests from the output (with the for loop, you only see the overall test failing, and it can be trickier to find out which values are problematic).
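For illustration, a parametrized-fixture version of the loop above might look roughly like this (a sketch only: `run_shadow_transform` is a hypothetical stand-in for the body of the current for loop):

```python
import pytest

@pytest.fixture(params=[0, 5, 6, 7, 8, 10, 11, 12])
def method(request):
    # pytest generates one test instance per parameter value
    return request.param

def test_shadow_transform(method):
    I, M = _get_shadow_test_imagery()
    # hypothetical stand-in for the current loop body
    assert run_shadow_transform(method, I, M) is not None
```

A failing parameter then shows up in the pytest report with its value in the test id, e.g. `test_shadow_transform[5]`, so the problematic method is named directly.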
The tests fail because I uploaded the files to the master branch, but they are not present in this branch. Hence, is a rebase needed to resolve this @fnattino?
You could rebase, or, as I typically do, "merge" to bring the additional commits on master into this branch as well:

```
# make sure you are on the feature branch
git checkout test-shadow-detection
# bring new commits of master inside this branch
git merge master
```
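For reference, the rebase route mentioned above would be roughly:

```
# make sure you are on the feature branch
git checkout test-shadow-detection
# replay this branch's commits on top of the current master
git rebase master
```

Merge keeps the branch history as-is and adds a merge commit; rebase rewrites the branch's commits, so it requires a force-push if the branch has already been pushed to the remote.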