Memory efficiency of CashValue_ME_EX4 #56
-
I am trying to prepare a publication about the benchmarks repository in a journal such as the Journal of Open Source Software. It will discuss the memory and time complexity of actuarial modeling, focusing on the basic term and CashValue_ME_EX4 models. A coauthor reports that lifelib is consuming more memory than Julia, and I believe it is related to this issue.

Did some more reading, and I see that the blog post linked in this thread reports that the implementation of the model appears to be O(P*T).
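As a rough back-of-the-envelope illustration of why O(P*T) matters for memory (the sizes below are hypothetical, chosen only to show the scaling, not taken from the benchmarks):

```python
# Hypothetical sizes, only to illustrate O(P*T) memory scaling.
P = 100_000          # model points
T = 12 * 30          # monthly projection steps over 30 years
bytes_per_float = 8  # float64

# One cached (P, T) block of intermediate results:
one_variable_mb = P * T * bytes_per_float / 1e6
print(f"{one_variable_mb:.0f} MB per cached variable")   # ~288 MB

# A model keeping, say, 20 such variables alive at once:
print(f"{20 * one_variable_mb / 1e3:.1f} GB total")      # ~5.8 GB
```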
-
The vectorized models (those with an '_M' suffix) are designed to load extensive data into memory. This approach allows us to pass large datasets down to the native level, where homogeneous primitive operations can be applied to all elements simultaneously via vectorized or matrix operations. This is an area where NumPy excels, though at the cost of significant memory consumption. So this high memory usage is an expected outcome rather than an unforeseen issue.
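A minimal sketch of the kind of vectorization being described (illustrative NumPy code, not lifelib's actual formulas; the names, sizes, and the 0.999 survivorship factor are made up): each formula returns one array covering all model points, and memoizing every time step keeps roughly P * T values alive.

```python
import numpy as np

P = 100_000                      # hypothetical number of model points
T = 12 * 30                      # hypothetical number of projection steps

premium_pp = np.full(P, 100.0)   # one premium per model point
_cache = {}                      # memoized per-step results, kept alive like model cells

def pols_if(t):
    """In-force policies at time t, computed for all model points at once."""
    if t not in _cache:
        # a single vectorized multiply replaces a loop over P policies
        _cache[t] = np.ones(P) if t == 0 else pols_if(t - 1) * 0.999
    return _cache[t]

def premiums(t):
    """Premium income at time t, again one array of length P."""
    return premium_pp * pols_if(t)

total = sum(float(premiums(t).sum()) for t in range(T))
print(f"cached arrays: {len(_cache)}, "
      f"~{len(_cache) * P * 8 / 1e6:.0f} MB held by pols_if alone")
```

Because every per-step array is retained by the cache, peak memory grows with P * T even though each individual operation is fast.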
-
Code like this seems to have an impact on memory efficiency.
I imagine that the approach described in this blog post (https://modelx.io/blog/2022/03/26/running-model-while-saving-memory/) would allow some slight modifications that improve memory efficiency considerably.
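To make the idea concrete, here is a generic sketch of the kind of change I have in mind (this is not the modelx API, just an illustration with the same made-up names as above): instead of memoizing every time step forever, keep only the previous step's state and discard values once downstream results have been computed, so peak memory stays O(P) rather than O(P*T).

```python
import numpy as np

P = 100_000                      # hypothetical number of model points
T = 12 * 30                      # hypothetical number of projection steps
premium_pp = np.full(P, 100.0)

def run_with_rolling_state():
    """Project forward keeping only the previous step's state in memory."""
    pols = np.ones(P)            # in-force at t = 0
    total = 0.0
    for t in range(T):
        total += float((premium_pp * pols).sum())   # consume this step's result
        pols = pols * 0.999                         # then overwrite it for t + 1
    # peak memory is O(P): one or two length-P arrays, not P * T cached values
    return total

print(run_with_rolling_state())
```

If I understand the blog post correctly, the effect it describes is similar: values that are no longer needed are cleared during the run instead of being held for the whole projection.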