Gonna make a discussion here to continue this thread, since it's not really nutpie-related. So, for the coregion kernel, you wouldn't really use it on its own. It multiplies another kernel elementwise (quick description of the model here), so I don't think it's possible to take advantage of the low-rank structure.
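To make that concrete, here's a quick numpy sketch (shapes and names are made up for illustration, this isn't the PyMC implementation): the coregion matrix `B = W @ W.T + diag(kappa)` is low rank plus diagonal on its own, so solves against it alone could use Woodbury-style identities, but nothing of that structure survives the elementwise product with the base kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_outputs, rank = 50, 3, 1

# Coregion matrix B = W @ W.T + diag(kappa): low rank plus diagonal,
# the structure that would be exploitable if B were used on its own.
W = rng.normal(size=(n_outputs, rank))
kappa = rng.uniform(0.1, 1.0, size=n_outputs)
B = W @ W.T + np.diag(kappa)

# Hypothetical inputs: one feature column, plus an output index per row
X = rng.normal(size=(n, 1))
idx = rng.integers(0, n_outputs, size=n)

# An exponential base kernel (any stationary kernel makes the same point)
def expker(X, ls=1.0):
    return np.exp(-np.abs(X - X.T) / ls)

# Full ICM covariance: elementwise product of the indexed coregion matrix
# with the base kernel. The Hadamard product is generally full rank, so
# B's low-rank-plus-diagonal structure is gone.
K_full = B[np.ix_(idx, idx)] * expker(X)

print(np.linalg.matrix_rank(K_full), "of", n)  # full rank
```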
BUT there are definitely loads of opportunities to take advantage of matrix structure! Another example is the one you showed me, @aseyboldt, with the tridiagonal solver here. I'm less sure about attaching these optimizations to the covariance objects though, because many of them also depend on the structure of the inputs X. I talked to @ricardoV94 recently about moving some of the GP and covariance stuff into PyTensor, so that PyTensor can apply optimizations like these as graph rewrites.
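To show the kind of payoff at stake, here's a toy comparison (my own example, not the solver from that thread) of a dense solve against a banded solve for a tridiagonal SPD matrix, which drops the cost from O(n³) to O(n):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, solveh_banded

rng = np.random.default_rng(1)
n = 2000

# A symmetric positive definite tridiagonal matrix. (For instance, the
# precision matrix of a 1D Gauss-Markov process is tridiagonal.)
main = np.full(n, 2.5)
off = np.full(n - 1, -1.0)
y = rng.normal(size=n)

# Dense Cholesky solve: O(n^3) time, O(n^2) memory
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
x_dense = cho_solve(cho_factor(A), y)

# Banded solve: O(n) time and memory. solveh_banded takes the upper
# bands in LAPACK layout: row 0 is the superdiagonal, row 1 the diagonal.
ab = np.zeros((2, n))
ab[0, 1:] = off
ab[1, :] = main
x_banded = solveh_banded(ab, y)

assert np.allclose(x_dense, x_banded)
```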
I could use this discussion to compile a list of papers I know of about different GP speedups? The other dimension here is approximations: an approximation might be faster than a particular exact linear algebra trick, even when one applies.
I'm wondering if a LinearOperator idea like this or this would make sense in PyTensor then, since I'd imagine it would be useful outside of GPs too. Also talked to @daniel-dodd about some similar stuff recently.
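For reference, a minimal sketch of the idea using scipy's `LinearOperator` as a stand-in (the low-rank-plus-diagonal operator is just my example, not a proposed PyTensor API): the operator only exposes a matvec, so iterative solvers work without ever building the n × n matrix:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
n, rank = 1000, 5

W = rng.normal(size=(n, rank))
d = rng.uniform(1.0, 2.0, size=n)

# A = W @ W.T + diag(d), represented only through its action on a vector.
# Each matvec costs O(n * rank) rather than the O(n^2) of a dense matmul,
# and the n x n matrix is never materialized.
A = LinearOperator(
    shape=(n, n),
    matvec=lambda v: W @ (W.T @ v) + d * v,
)

b = rng.normal(size=n)
x, info = cg(A, b)   # conjugate gradients only needs matvecs
assert info == 0     # 0 means CG converged
```

A PyTensor version could presumably carry structure like this through the graph, so rewrites could dispatch to the right factorization or solver.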