Hi @ctallec,
Great package! Thanks for writing it.
I recently found a bug that I wanted to seek your advice on. When I Variationalize an LSTM model, I get an error when moving it to CUDA. For example, here's the working version:
import torch.nn as nn
import pyvarinf

net = nn.LSTM(10, 10)
net.cuda()
This works fine. Then,
net = nn.LSTM(10, 10)
net = pyvarinf.Variationalize(net)
net.cuda()
This gives the bug:
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-14-0c80e37b52e0> in <module>()
1 net = nn.LSTM(10, 10)
2 net = pyvarinf.Variationalize(net)
----> 3 net.cuda()
3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py in flatten_parameters(self)
94 Otherwise, it's a no-op.
95 """
---> 96 any_param = next(self.parameters()).data
97 if not any_param.is_cuda or not torch.backends.cudnn.is_acceptable(any_param):
98 return
StopIteration:
Any idea what this is from? Maybe I shouldn't be using this with an RNN-like model?
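For what it's worth, here's my guess at the mechanism (a minimal sketch, assuming Variationalize unregisters the wrapped module's nn.Parameters, e.g. by re-storing them as plain tensors): nn.LSTM.cuda() calls flatten_parameters(), which does next(self.parameters()), and that raises StopIteration on any module whose parameter list is empty:

```python
import torch.nn as nn

# A module with no registered nn.Parameters, standing in for what the
# wrapped LSTM may look like after Variationalize strips its parameters
# (assumption about pyvarinf's internals, not confirmed from its source).
class NoParams(nn.Module):
    def forward(self, x):
        return x

m = NoParams()
try:
    # flatten_parameters() does exactly this to grab "any" parameter.
    next(m.parameters())
except StopIteration:
    print("StopIteration: module has no registered parameters")
```

If that's right, the error isn't RNN-specific per se; it's just that nn.LSTM is the one module that inspects its own parameters inside .cuda().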
Thanks!
Miles