
Replacing channel_id by source_id, as multiprocessed simulations are crashing on this line #762

Closed
RCagnol wants to merge 7 commits into NeuralEnsemble:master from RCagnol:master

Conversation

@RCagnol commented Oct 20, 2022

I noticed that the new version of Neo uses channel_id to get the ids of the neurons in a SpikeTrainList object. Multiprocessed NEST simulations crash on this line of code; replacing channel_id with source_id there seems to solve the issue.

@coveralls commented Oct 20, 2022

coverage: 71.733% (+0.06%) from 71.675% when pulling e34477f on RCagnol:master into 4ac102b on NeuralEnsemble:master.

@apdavison (Member)

channel_id appears nowhere else in the PyNN codebase, whereas source_id appears in several places. This makes me think that a more extensive PR might be needed.

@RCagnol (Author) commented Jan 2, 2023

Indeed, I didn't correctly identify the source of the problem when creating this fix. In the _get_current_segment() method, when the spike times are contained in a tuple, the SpikeTrainList object is created (here: https://github.com/NeuralEnsemble/PyNN/blob/master/pyNN/recording/__init__.py#L289 ) without any annotations corresponding to the source_id. I'll update the PR.
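To illustrate the idea (the ids and spike times below are made up, and this is not the actual PyNN code, just a sketch of the annotation mechanism):

```python
import neo

# Hypothetical recorded data: cell id -> spike times in ms (made-up values)
spike_times = {7: [1.0, 4.5], 12: [2.2]}

spiketrains = []
for cell_id, times in spike_times.items():
    st = neo.SpikeTrain(times, units="ms", t_stop=10.0)
    st.annotate(source_id=cell_id)  # attach the neuron id as an annotation
    spiketrains.append(st)

# Downstream code can now recover the neuron ids from the annotations,
# without relying on channel_id.
assert [st.annotations["source_id"] for st in spiketrains] == [7, 12]
```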

@RCagnol (Author) commented Jan 11, 2023

Do you have any insight into why the tests are failing in the GitHub Actions workflow? This confuses me because all NEST tests pass when I run them locally (Linux, Python 3.8.10, NEST 3.3), and the cause of the failures is not displayed since the workflow is cancelled before finishing.

@apdavison (Member) commented Jan 11, 2023

It seems that the tests are now using NumPy 1.24, which removes numpy.bool: see https://numpy.org/devdocs/release/1.24.0-notes.html#expired-deprecations

=========================== short test summary info ===========================
FAILED test/unittests/test_random.py::ParallelTests::test_parallel_safe_with_mask - AttributeError: module 'numpy' has no attribute 'bool'
FAILED test/unittests/test_random.py::ParallelTests::test_parallel_unsafe_with_mask - AttributeError: module 'numpy' has no attribute 'bool'
==== 2 failed, 470 passed, 209 skipped, 56 warnings in 1632.33s (0:27:12) =====
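For example (nothing PyNN-specific here, just the NumPy change):

```python
import numpy as np

mask = np.array([True, False, True])

# np.bool was a deprecated alias for the builtin bool; on NumPy 1.24+
# touching it raises: AttributeError: module 'numpy' has no attribute 'bool'
# mask.dtype == np.bool

# Portable replacements: the builtin bool, or np.bool_ for the NumPy scalar type
assert mask.dtype == bool
assert mask.dtype == np.bool_
```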

@RCagnol (Author) commented Jan 11, 2023

Thanks for the answer. At first I didn't think that the presence of 'np.bool' was the reason almost all the NEST tests were failing, but I just noticed that NEST 3.3 faces the same problem: nest/nest-simulator#2571
Therefore the tests would still fail (besides the workflow 'Test on windows-latest with Python 3.8', which skips the NEST tests) even after replacing 'np.bool' with 'bool' in PyNN.
So the solution would probably be to use numpy 1.23 in the workflow until NEST 3.4 comes out?
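e.g. something like this wherever the workflow installs its Python dependencies (I'm guessing at the exact install step):

```
pip install "numpy<1.24"
```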

@apdavison (Member)

Hi @RCagnol, I extended your solution and made a new PR, #797.

@apdavison closed this Apr 30, 2024