Compare output with result description for framework tests #91
base: master
Conversation
<ParameterList name="result_description">
  <Parameter name="ExpectedPressure" type="double" value="0.0060690998"/>
  <Parameter name="ExpectedPressureTolerance" type="double" value="1e-6"/>
The tolerance is quite high compared to the value. Can we set this to a smaller value and still have the tests pass?
I use the current output as the ExpectedPressure. I can set it by printing the current output using std::setprecision(16) and then use a lower tolerance. I am not sure what a good value for this tolerance is; should I use 1e-10? The test still passes with this tolerance. I also intentionally broke the code by using an incorrect value for the expected result, and the test catches it.
- std::chrono::duration_cast<std::chrono::seconds>(finish - start).count();
+ std::chrono::duration_cast<std::chrono::duration<double>>(finish - start).count();
This seems unrelated to this merge request. Was this a bug you found on the side?
No, it is not a bug. Previously, the elapsed time was printed in whole seconds, so for very short runs it showed zero seconds. I therefore changed this to print the elapsed time as a floating-point value in seconds.
  double& ElasticComplianceCorrection, double& GridSize, double& Tolerance, double& Delta,
  std::string& TopologyFilePath, int& Resolution, double& InitialTopologyStdDeviation,
  const std::string& inputFileName, bool& RandomTopologyFlag, double& Hurst, bool& RandomSeedFlag,
- int& RandomGeneratorSeed, bool& WarmStartingFlag, int& MaxIteration, bool& PressureGreenFunFlag)
+ int& RandomGeneratorSeed, bool& WarmStartingFlag, int& MaxIteration, bool& PressureGreenFunFlag,
+ double& ExpectedPressure, double& ExpectedPressureTolerance)
This is getting quite long. This should be replaced with a parameter container in the future, to improve readability and usability.
// Test for correct output if the result_description is given in the input file
if (ExpectedPressure >= 0)
{
  TEUCHOS_TEST_FOR_EXCEPTION(std::abs(pressure - ExpectedPressure) > ExpectedPressureTolerance,
This is the wrong test macro. You are testing whether the code raises a std::runtime_error. You would need something like TEST_FLOATING_EQUALITY.
Side note: When implementing tests, always try to intentionally break the code and see if the test catches it. I assume in this case if you put the expected pressure to a wrong value the test will still pass.
Okay, I will change the test macro. However, the current test macro worked without any issues: I did intentionally break the code with an incorrect expected value, and it does catch it.
Description and Context
Previously, the output from the code was not compared to a result description for the framework tests; the tests passed as long as they ran through without errors. In this MR, a result description is added to the input file, the computed output is compared against it, and an error is thrown if the results do not match.
Related Issues and Pull Requests
How Has This Been Tested?
Running ctest with incorrect results in the result_description shows up as a failed test. Not providing a result_description in the input file does not cause an error.
Checklist
Additional Information
Interested Parties / Possible Reviewers
@isteinbrecher @mayrmt