
implemented test for shadow classification #112

Merged
merged 3 commits into master from test-shadow-detection, Nov 28, 2023

Conversation

dicaearchus
Contributor

No description provided.

@dicaearchus dicaearchus requested a review from fnattino November 21, 2023 13:21
Contributor

@fnattino fnattino left a comment


Hi @dicaearchus, looks good! Very nice to have tests on the accuracy of the shadow transforms.

Just a couple of specific comments below and two general comments here:

  • the PEP8 check reports some inconsistencies; you can see the specific violations in the output of the GitHub action
  • the newly introduced test fails (see here); it seems to be due to the test data files not being included in the repository?

Contributor

Conventionally, you add all the tests referring to Python file 'X.py' in a file called 'test_X.py'. This allows you to immediately identify the test file referring to a given function. So this file should actually be called test_shadow_transforms.py.

Contributor Author

This has now been changed with 71adebe.

methods = list(methods[i] for i in [0, 5, 6, 7, 8, 10, 11, 12])

I, M = _get_shadow_test_imagery()
for method in methods:
Contributor

Here you are running the same test code on a list of parameters. Using a for loop works perfectly fine, so there is no need to change anything here, but just so you know: you can achieve the same result with a parametrized pytest test (see here). The advantage of that approach is that you get a separate test for each parameter, which makes it easier to identify failing cases from the output (with the for loop, you only see the overall test failing, and it can be trickier to find out which value or values are problematic).
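For illustration, the parametrized version could look like the sketch below. The two `method_*` functions are toy stand-ins invented for this example; in the actual test they would be the shadow transform methods selected from the `methods` list above.

```python
import pytest

# Toy stand-ins for the shadow transform methods; the real ones would be
# imported from the package under test.
def method_entropy(image):
    return [1.0 - p for p in image]

def method_ratio(image):
    return [p / 2.0 for p in image]

# One test case per method: pytest reports e.g. test_shadow_range[entropy]
# and test_shadow_range[ratio] separately, so a failing method is
# immediately visible in the test output.
@pytest.mark.parametrize(
    "method", [method_entropy, method_ratio], ids=["entropy", "ratio"]
)
def test_shadow_range(method):
    image = [0.1, 0.4, 0.9]
    result = method(image)
    assert all(0.0 <= v <= 1.0 for v in result)
```

Running `pytest -v` then lists each parametrized case on its own line, instead of a single pass/fail for the whole loop.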

@dicaearchus
Contributor Author

The test fails because I uploaded the files to the master branch, and they are not present in this branch. Hence, is a rebase needed to resolve this, @fnattino?

@dicaearchus
Contributor Author

i.e.: git rebase master develop

@fnattino
Contributor

You could do a rebase; I typically do a "merge" instead, to bring the additional commits on master into this branch as well:

# make sure you are on the feature branch
git checkout test-shadow-detection
# bring new commits of master inside this branch
git merge master

@dicaearchus dicaearchus merged commit e236979 into master Nov 28, 2023
@dicaearchus dicaearchus deleted the test-shadow-detection branch November 28, 2023 09:55