Implement caching of the mass for the PowerSphericalPotentialwCutoff potential #720
Hi @jobovy, here is the second set of changes to close off #701.
I've taken two different approaches in two commits; I'm happy to rebase this branch to contain whichever you prefer.
In 020eb77 I've implemented the caching of the mass in the `Rforce` and `zforce` functions. I left the Planar functions as-is (including the `nargs`/`potentialArgs`) because I don't think there would be any performance benefit from caching in the Planar case, but just let me know if you'd prefer the caching to be included for the Planar functions as well for consistency (or if I'm mistaken about the lack of benefit).

In 84da14b I instead implemented the caching inside the `mass` function itself. This requires that the `potentialArgs` be passed to the `mass` function, but it reduces the duplicate code in each of the calling functions. In this case the caching needs to be set up for any situation in which the `mass` function is going to be called, i.e. including the Planar case.
I guess the tradeoff is that the first option is potentially more flexible, whereas the second option is a bit tidier and reduces some repetition in the code.
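For concreteness, here is a minimal Python sketch of the two caching patterns; the real change lives in the C extension, so every name below is a hypothetical stand-in rather than galpy's actual code:

```python
# Minimal Python sketch of the two caching strategies (hypothetical names;
# the real implementation is in the C extension and uses potentialArgs).

class CachedArgs:
    """Stand-in for the scratch space in potentialArgs used as a one-value cache."""
    def __init__(self):
        self.cached_r = None
        self.cached_mass = None

def mass_enclosed(r):
    # Stand-in for the expensive enclosed-mass integral of the
    # power-law-with-cutoff density (dummy value for illustration).
    return r ** 3

# Approach 1 (020eb77): cache in each calling force function, e.g. Rforce;
# zforce would carry the same few lines of caching logic.
def Rforce(R, z, args):
    r = (R ** 2 + z ** 2) ** 0.5
    if args.cached_r != r:
        args.cached_r = r
        args.cached_mass = mass_enclosed(r)
    return -args.cached_mass * R / r ** 3  # spherical: F_R = -M(r) R / r^3 (G = 1)

# Approach 2 (84da14b): cache inside the mass function itself, so every
# caller (including the Planar functions) benefits without extra code.
def mass(r, args):
    if args.cached_r != r:
        args.cached_r = r
        args.cached_mass = mass_enclosed(r)
    return args.cached_mass

def zforce(R, z, args):
    r = (R ** 2 + z ** 2) ** 0.5
    return -mass(r, args) * z / r ** 3
```

In both cases the win presumably comes from `Rforce` and `zforce` being evaluated at the same `(R, z)` within an integrator step, so the second call reuses the cached mass instead of redoing the integral.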
In terms of performance both commits are the same; compared to the `main` branch I see a 25% performance improvement on a simple test case, comparing timings on the current `main` branch against this change.
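This isn't the test case from the PR, just a hypothetical sketch of how such a timing comparison could be set up, using bulge-like parameters for the potential and galpy's C orbit integrator:

```python
# Hypothetical timing sketch (not the original test case from this PR):
# time a C-integrated orbit in a PowerSphericalPotentialwCutoff potential
# on main vs. this branch.
import timeit

import numpy
from galpy.orbit import Orbit
from galpy.potential import PowerSphericalPotentialwCutoff

pot = PowerSphericalPotentialwCutoff(alpha=1.8, rc=1.9 / 8.0, normalize=0.05)
ts = numpy.linspace(0.0, 100.0, 10001)

def run():
    # [R, vR, vT, z, vz, phi] in galpy's natural units
    o = Orbit([1.0, 0.1, 1.1, 0.1, 0.1, 0.0])
    o.integrate(ts, pot, method="dopr54_c")

print(timeit.timeit(run, number=10), "s for 10 integrations")
```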
Finally, I haven't implemented any caching for the Python integrator, but I'm happy to include that here if you like (on that front, `functools.lru_cache` is an interesting option).
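As a hedged sketch of what that could look like on the Python side (hypothetical, not part of this PR): memoize the enclosed-mass evaluation so that consecutive force calls at the same radius reuse the previous result.

```python
from functools import lru_cache

def _mass_enclosed(r):
    # Stand-in for the expensive enclosed-mass integral.
    return r ** 3

@lru_cache(maxsize=1)  # a one-entry cache mirrors the single cached value in the C code
def cached_mass(r):
    # lru_cache keys on the exact float r, which is enough here because the
    # radial and vertical forces are evaluated at the same (R, z) within a step.
    return _mass_enclosed(r)
```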