ENH(sr3): refactor threshold and thresholder property #548
Conversation
- move to private function
- rename lam to regularization_weight
- remove cad
- add docstring
- add validation
Force-pushed from e6de7cc to e49304d.
Thanks Watcharin! We can talk about this today, but this PR should target
Awesome! Now check the rest of examples/ for Python files with SR3(threshold=...).
Don't forget the ipynb files! Run examples/publish_notebook.py on the example scripts.
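For instance, a quick scan could look like this (a hypothetical one-off helper for this review, not a script in the repo):

```python
# Hypothetical one-off check: list example files still using the old keyword.
from pathlib import Path

for pattern in ("*.py", "*.ipynb"):
    for path in sorted(Path("examples").rglob(pattern)):
        if "SR3(threshold=" in path.read_text():
            print(path)
```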
```python
# Build the cvxpy objective for target i: least-squares data fit plus the
# selected sparsity penalty (lam holds the regularization weights).
if regularizer == "l1":
    cost = cp.sum_squares(x[:, i] - x @ xi) + cp.sum(
        cp.multiply(lam, cp.abs(xi))
    )
elif regularizer == "weighted_l1":
    cost = cp.sum_squares(x[:, i] - x @ xi) + cp.sum(
        cp.multiply(lam[:, i], cp.abs(xi))
    )
elif regularizer == "l2":
    cost = cp.sum_squares(x[:, i] - x @ xi) + cp.sum(
        cp.multiply(lam, xi**2)
    )
elif regularizer == "weighted_l2":
    cost = cp.sum_squares(x[:, i] - x @ xi) + cp.sum(
        cp.multiply(lam[:, i], xi**2)
    )
```
Can we find a way to use _calculate_penalty?
The logic is slightly different from ConstrainedSR3. I'm not sure if we can combine them.
If it doesn't work, override the superclass _calculate_penalty()
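For illustration, the override could take roughly this shape (a sketch only: the signature of _calculate_penalty and the idea of passing the sliced weight column are assumptions, not code from this PR):

```python
import cvxpy as cp


def _calculate_penalty(regularizer, lam, xi):
    """Sketch: return the cvxpy penalty term for one coefficient vector xi.

    Assumed convention: for the weighted variants the caller passes the
    per-target weight column (lam[:, i]); otherwise lam is a scalar weight.
    """
    if regularizer in ("l1", "weighted_l1"):
        return cp.sum(cp.multiply(lam, cp.abs(xi)))
    if regularizer in ("l2", "weighted_l2"):
        return cp.sum(cp.multiply(lam, xi**2))
    raise ValueError(f"unknown regularizer: {regularizer}")
```

The if/elif chain above would then reduce to a single line, cost = cp.sum_squares(x[:, i] - x @ xi) + _calculate_penalty(regularizer, weight, xi), with weight = lam[:, i] for the weighted variants.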
- rename to reg_weight and regularizer
- update all related test cases
- fix all child classes

- reg_weight to reg_weight_lam
- nu to relax_coeff_nu
Force-pushed from 80bf1e1 to 5e6b777.
# Conflicts:
#   examples/1_feature_overview/example.ipynb
#   examples/1_feature_overview/example.py
#   pysindy/optimizers/constrained_sr3.py
#   pysindy/optimizers/sr3.py
#   pysindy/optimizers/stable_linear_sr3.py
#   test/test_optimizers.py
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff            @@
##           master     #548      +/-  ##
=========================================
+ Coverage   94.23%   94.64%    +0.40%
=========================================
  Files          37       37
  Lines        4045     4018       -27
=========================================
- Hits         3812     3803        -9
+ Misses        233      215       -18
```

☔ View full report in Codecov by Sentry.
Almost done with this! Mostly just a linting issue
The threshold parameter is specific to the l0 norm. This PR generalizes SR3 to other norms by moving the threshold calculation into a separate function and using a generic regularization weight instead.
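For example, where users previously wrote SR3(threshold=0.1, thresholder="l1"), the refactored interface would look roughly like this (values are illustrative; the parameter names follow the renames recorded in the commits above and are not otherwise verified):

```python
from pysindy import SINDy
from pysindy.optimizers import SR3

# Parameter names per this PR's renames: thresholder -> regularizer,
# reg_weight -> reg_weight_lam, nu -> relax_coeff_nu. Values are illustrative.
opt = SR3(reg_weight_lam=0.1, regularizer="l1", relax_coeff_nu=1.0)
model = SINDy(optimizer=opt)
```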