Nonseq #245

Status: Open. Wants to merge 393 commits into base: develop.

Commits (393 total; diff shown from 151 commits)

e203855
refactor: using self._get_conv_output_shape() in place of previous in…
Willian-Girao Sep 13, 2024
72c8e6e
Remove obsolete files dynapcnn_layer_v2.py and dynapcnn_layer_old.py
bauerfe Sep 17, 2024
f9a797d
Simplify DynapcnnLayer by removing _pool_layers attribute.
bauerfe Sep 17, 2024
dda052d
DynapcnnLayer: Reintroduce methods get_output_shape and zero_grad
bauerfe Sep 17, 2024
210a34d
Keep DynapcnnCompatibleNetwork for now. (remove in future release to …
bauerfe Sep 17, 2024
05546a4
Minor modifications to dynapcnn.py
bauerfe Sep 17, 2024
7c3f7a2
Minor changes to NIRGraphExtractor.py:
bauerfe Sep 17, 2024
6a04d88
Rename NIRGraphExtractor.py to nir_graph_extractor.py
bauerfe Sep 17, 2024
9986b0b
Refactor NIRtoDynapcnnNetworkGraph._get_edges_from_nir
bauerfe Sep 17, 2024
a18a060
Refactor nir_graph_extractor.py
bauerfe Sep 26, 2024
e138673
NIRtoDynapcnnNetworkGraph properties return copies so that original o…
bauerfe Sep 26, 2024
f48f43f
Refactor edge_handler
bauerfe Sep 27, 2024
0a6b616
Improve type hints in edges handler
bauerfe Sep 27, 2024
8fcf594
Run black and isort
bauerfe Sep 27, 2024
3b78e69
Graph extractor removes nodes in place
bauerfe Oct 2, 2024
8e3c7cd
Fix indentation of DynapcnnCompatibleNetwork
bauerfe Oct 2, 2024
32bbfed
Remove dependency on DynapcnnNetwork for Dynapcnn Config builder to p…
bauerfe Oct 2, 2024
4ab8492
Fix multiple minor bugs causing graph extractor test to fail
bauerfe Oct 2, 2024
8c28e61
DynapcnnNetwork does not need to keep track of removed nodes anymore
bauerfe Oct 2, 2024
847ff1b
Merge branch 'nonseq' of github.com:synsense/sinabs into nonseq
bauerfe Oct 2, 2024
c48687e
Fix set merger in GraphExtractor
bauerfe Oct 2, 2024
414e7a7
Merge branch 'nonseq' of github.com:synsense/sinabs into nonseq
bauerfe Oct 2, 2024
83be978
DynapcnnNetwork does not need to copy entry nodes from tracer
bauerfe Oct 2, 2024
0359610
Properly update modules map when removing nodes from graph extractor
bauerfe Oct 2, 2024
1b02d23
Nir Graph Extractor: Rename remove-nodes method
bauerfe Oct 2, 2024
6c59b3a
Fix dynapcnnlayer test to use layer handler
bauerfe Oct 2, 2024
8052cde
Unified name GraphExtractor
bauerfe Oct 2, 2024
5a1c6d1
Simplify graph extraction from NIR
bauerfe Oct 2, 2024
57abd11
Simplify name-to-module map generation
bauerfe Oct 2, 2024
defd334
Minor change to exception: Sequential is Module
bauerfe Oct 2, 2024
8efe58f
Fix failing graph extractor test
bauerfe Oct 2, 2024
80c9ef9
Refactor GraphExtractor.remove_nodes_by_class method
bauerfe Oct 2, 2024
86889b8
Remove need for merge_handler: Remove merge nodes directly as "ignore…
bauerfe Oct 3, 2024
80dcff1
GraphExtractor does formal verification of extracted graph.
bauerfe Oct 3, 2024
d7d0e93
General linting and minor refactoring
bauerfe Oct 3, 2024
46078a4
sinabs_edges_utils becomes connectivity_specs
bauerfe Oct 3, 2024
19d4ebd
Fix minor bugs
bauerfe Oct 3, 2024
aec0951
Remove obsolete graph_tracer.py
bauerfe Oct 3, 2024
dde3b78
Tidy up edge types
bauerfe Oct 3, 2024
9aeb937
Rename module_map to indx_2_module_map for consistency and clarity
bauerfe Oct 3, 2024
3b5180c
Node to dcnnl mapping: support pooling-pooling edges. Work independen…
bauerfe Oct 3, 2024
c3b5fcf
Fix bugs from previous commit
bauerfe Oct 3, 2024
088157b
Try reducing io shape extraction effort
bauerfe Oct 4, 2024
b4fabcf
(WIP) Update dynapcnn layer instantiation
bauerfe Oct 4, 2024
a44d40e
Fix non-optimal unpacking syntax
bauerfe Oct 4, 2024
6195461
Fix syntax bug
bauerfe Oct 4, 2024
5ec0ae5
Fix indentation
bauerfe Oct 4, 2024
a173f1a
Run black and isort
bauerfe Oct 4, 2024
8c6ec25
Fix missing import
bauerfe Oct 4, 2024
452af1a
Rerun black
bauerfe Oct 4, 2024
befc244
(WIP) Update dynapcnn layer instantiation: handling of input shapes f…
bauerfe Oct 8, 2024
af0922f
Remove methods from DynapcnnLayerHandler that will be obsolete after …
bauerfe Oct 8, 2024
03cb842
Fix type hint for `remove_nodes_by_class` method
bauerfe Oct 9, 2024
e84e2fa
Refactor DynapcnnLayer generation
bauerfe Oct 9, 2024
95e6c6d
Bugfix: layer_info always has "destinations" entry
bauerfe Oct 9, 2024
3cbd1d6
Fix bugs related to DynapcnnLayer refactoring
bauerfe Oct 9, 2024
c9ed7a2
(WIP): Update dynapcnn layer tests
bauerfe Oct 9, 2024
ebb2be3
(WIP): Update DynapcnnLayer tests
bauerfe Oct 10, 2024
fb0e661
Finish updating dynapcnn layer unit tests
bauerfe Oct 11, 2024
8d4c340
Separate dynapcnn_layer_utils module
bauerfe Oct 11, 2024
39192e9
Enable pooling without subsequent destination layer
bauerfe Oct 14, 2024
111a1a0
Final layer destinations get unique negative integers
bauerfe Oct 14, 2024
7875238
(WIP) DynapcnnNetwork forward pass happens in DynapcnnNetworkModule. …
bauerfe Oct 14, 2024
ed6345b
Rerun black
bauerfe Oct 14, 2024
537c7a7
Add complete type hint to DynapcnnNetwork.forward
bauerfe Oct 14, 2024
227b484
Update dynapcnn network unit tests
bauerfe Oct 15, 2024
bb3e211
Ensure exit layers generate output by setting destination None
bauerfe Oct 15, 2024
25a14d2
Remove need for DynapcnnLayerHandler (WIP)
bauerfe Oct 15, 2024
3fd4106
Temporarily add dynapcnn_layer_handler definition again to prevent im…
bauerfe Oct 15, 2024
1d17010
DynapcnnNetworkModule using torch compatible ModuleDict
bauerfe Oct 15, 2024
8c020b2
Update dynapcnn layer tests
bauerfe Oct 15, 2024
5082450
Move layer and network-module instantiation to graph-extractor
bauerfe Oct 15, 2024
70e649e
Make optional for
bauerfe Oct 17, 2024
2d3d32c
doc
Willian-Girao Oct 22, 2024
482499e
Restore original dynapcnn layer attribute names conv_layer and spk_layer
bauerfe Oct 22, 2024
7b63aca
(WIP) Remove dependency on DynapcnnLayerHandler for deployment
bauerfe Oct 22, 2024
5e59539
Update class definitions for ConfigBuilder child classes
bauerfe Oct 23, 2024
98eb42f
Remove dynapcnn_layer_handler.py
bauerfe Oct 23, 2024
62f1e88
Replace `chip_layers_ordering` by layer2core_map.
bauerfe Oct 23, 2024
0d77c73
Update DynapcnnNetwork layer monitoring
bauerfe Oct 23, 2024
59c3a42
Remove now obsolete methods `get_output_core_id` and `get_input_core_…
bauerfe Oct 23, 2024
7887583
Fix minor import issues
bauerfe Oct 24, 2024
7280e8b
Fix import related issues in tests
bauerfe Oct 24, 2024
864a2ff
GraphExtractor: maintain NIRTorch node naming scheme
bauerfe Oct 24, 2024
30b03d0
Edges handler: Make all edge types except weight-neuron optional
bauerfe Oct 24, 2024
82b7c1e
Minor code cleanup in edges handler
bauerfe Oct 24, 2024
eeefbdc
DynapcnnNetwork: Bring back methods `make_config` and `is_compatible_…
bauerfe Oct 24, 2024
704e13e
DynapcnnNetwork: Fix dynapcnn_layers attribute lookup
bauerfe Oct 24, 2024
2f87ef2
DynapcnnNetwork: Add method `has_dvs_layer`
bauerfe Oct 24, 2024
de5ba33
DynapcnnNetworkModule: Try saving `dynapcnn_layers` with integer indi…
bauerfe Oct 24, 2024
d970f18
Fix bugs in mapping
bauerfe Oct 24, 2024
25fb659
Move to utils. New function: . Fix handling of pooling in deployment
bauerfe Oct 24, 2024
a7bbc36
Remove redundant warning for monitoring pooled layers
bauerfe Oct 24, 2024
c9e6caa
minor edit
Willian-Girao Oct 25, 2024
90f9575
Integrate tests for sequential models in test_dynapcnnnetwork
bauerfe Oct 25, 2024
8495b4e
Fix test_auto_mapping
bauerfe Oct 25, 2024
ff68bc1
(WIP - DVS input)
Willian-Girao Oct 25, 2024
6a0bdca
(WIP - DVS input)
Willian-Girao Oct 25, 2024
e9bf8a1
(WIP - DVS input)
Willian-Girao Oct 25, 2024
d0ae8a5
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
cb63e6f
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
49c013a
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
5366513
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
460e7c6
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
bcaec92
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
eedc62d
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
e6aafe0
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
5003086
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
d0c5389
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
6a807ef
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
3711bc0
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
f301599
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
0915c1a
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
8a6553e
WIP DVS - DVS node not given
Willian-Girao Oct 28, 2024
0fa7bf6
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
cf5be86
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
acba916
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
d7f6be3
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
5b3edb2
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
c945a41
WIP DVS - DVS node not given
Willian-Girao Oct 29, 2024
22b69c6
WIP - DVS node not given
Willian-Girao Oct 29, 2024
beade18
WIP - DVS node not given
Willian-Girao Oct 29, 2024
89200d9
WIP - DVS node not given
Willian-Girao Oct 29, 2024
94a67c4
WIP - DVS node not given
Willian-Girao Oct 29, 2024
3a5e419
WIP - DVS node not given
Willian-Girao Oct 29, 2024
430af98
DONE - DVS node not given
Willian-Girao Oct 29, 2024
2988ce2
DONE - DVS node given
Willian-Girao Oct 30, 2024
7b37885
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
c9e5e70
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
a709914
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
b9cc18c
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
3428c49
WIP - SW forward with DVS
Willian-Girao Oct 30, 2024
986bdfb
DONE - SW forward with DVS
Willian-Girao Oct 30, 2024
51c65a9
Graph extraction: Ignore some classes right away
bauerfe Oct 30, 2024
7ef74e8
fix doorbell test
bauerfe Oct 30, 2024
7e22662
More meaningful exceptions for invalid graph structures
bauerfe Oct 30, 2024
ef02dab
Fix dynapcnn layer scaling
bauerfe Oct 30, 2024
1866a7c
Infer shape after removing flatten
bauerfe Oct 30, 2024
b75885d
Fix behavior when entry nodes are removed from graph
bauerfe Oct 30, 2024
2c60c14
Update unit tests
bauerfe Oct 30, 2024
b68e588
DONE - chip deployment with DVS
Willian-Girao Oct 30, 2024
e3ad6a5
notebooks used to validate deployment with DVS
Willian-Girao Oct 30, 2024
b1b4ec7
notebooks used to validate deployment with DVS
Willian-Girao Oct 30, 2024
5835dae
Merge branch 'nonseq' of github.com:synsense/sinabs into nonseq
bauerfe Oct 31, 2024
e30d67f
Merge local diverging changes
bauerfe Oct 31, 2024
b81cbf1
Fix dynapcnnnetwork test
bauerfe Oct 31, 2024
9c9ec26
Ensure functioning across different nirtorch versions
bauerfe Oct 31, 2024
b2e9214
Fix unit tests
bauerfe Oct 31, 2024
a55e4ef
Merge nonseq
bauerfe Oct 31, 2024
a79addf
DONE - DVSLayer->pooling edge
Willian-Girao Oct 31, 2024
577509a
DONE - DVSLayer->pooling edge
Willian-Girao Oct 31, 2024
02ca1b2
(WIP) Minor refactoring of new dvs layer support
bauerfe Oct 31, 2024
c23acf8
Merge branch 'nonseq_dvs' of github.com:synsense/sinabs into nonseq_dvs
bauerfe Oct 31, 2024
b24edb8
Fix TorchGraph handlers
bauerfe Oct 31, 2024
b3e2ad3
Minor changes to DVS part
bauerfe Oct 31, 2024
2970d66
Update `extend_readout_layer` function to work with new DynapcnnNetwork
bauerfe Oct 31, 2024
3d4793f
Update failing unit tests in `test_large_net`
bauerfe Oct 31, 2024
bb0e132
Add `memory_summary` back to DynapcnnNetwork
bauerfe Oct 31, 2024
6b1a927
WIP improving DVS setup
Willian-Girao Nov 1, 2024
fbf7fff
WIP improving DVS setup
Willian-Girao Nov 1, 2024
a9b45f7
DONE - improving DVS setup
Willian-Girao Nov 1, 2024
41cfc04
Merge branch 'nonseq' of https://github.com/synsense/sinabs into nons…
Willian-Girao Nov 1, 2024
327cfa1
(WIP) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
bf8d450
(WIP) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
10b887d
(WIP) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
50991af
(WIP) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
5ad1211
(DONE) Merge Conv2d with BatchNorm2d
Willian-Girao Nov 4, 2024
14f647b
(DONE) Merge Linear with BatchNorm1d
Willian-Girao Nov 4, 2024
1ec5bd0
Minor revisions in nir graph. Fix merge_polarities
bauerfe Nov 5, 2024
4e297e3
GraphExtractor: Tidy up init method
bauerfe Nov 5, 2024
3cdf548
DynapcnnNetwork: Don't send DVS node info to config builder
bauerfe Nov 5, 2024
6469d91
Minor revisions in dynapcnn layer utils
bauerfe Nov 5, 2024
5d8beee
dynapcnn layer utils: minor syntax improvements
bauerfe Nov 5, 2024
641e4ac
(WIP) DVS layer gets index 'dvs'
bauerfe Nov 5, 2024
6c9e3cc
DVS layer info in separate dict
bauerfe Nov 5, 2024
ef2c6d7
Reformat
bauerfe Nov 5, 2024
c196f98
Remove obsolete functions
bauerfe Nov 5, 2024
3fa8bcf
DVSLayer: Remove redundant comparisons
bauerfe Nov 5, 2024
f914374
Edges handler: Proper checks before merging dvs and pooling layers
bauerfe Nov 5, 2024
ccfc5d2
Make deepcopy of provided DVSLayers
bauerfe Nov 5, 2024
f39833a
Merge nonseq
bauerfe Nov 5, 2024
22edb5f
Reduce code redundancy in batch norm merging
bauerfe Nov 5, 2024
4764923
Blacken edges handler. Fix import in graph extractor
bauerfe Nov 5, 2024
799e2c1
Run black on tests
bauerfe Nov 5, 2024
0a45350
Run Black
bauerfe Nov 5, 2024
5139f1c
Remove outdated functions
bauerfe Nov 5, 2024
cf3a318
Handle edge case where snn is a sequential with only a dvslayer
bauerfe Nov 5, 2024
36bceb8
More meaningful exceptions
bauerfe Nov 5, 2024
c5bdfdf
New test file for networks that should fail
bauerfe Nov 6, 2024
291bba7
Ensure network state is maintained when generating dynapcnn network
bauerfe Nov 6, 2024
37d08d1
Test for incorrect node types
bauerfe Nov 6, 2024
ff39502
Fix a range of bugs
bauerfe Nov 6, 2024
72040d2
WIP: Merge dvs pooling layer
bauerfe Nov 6, 2024
1a5eb27
Fix issues for graphs with DVSLayer and batch norm.
bauerfe Nov 8, 2024
beca062
Fix handling of isolated layers.
bauerfe Nov 8, 2024
660d102
Fix issues related to DVS. Ensure only IAFSqueeze is used in Dynapcnn…
bauerfe Nov 8, 2024
afab026
Correctly handle dvs_input False when dvs layer is provided: Disable …
bauerfe Nov 8, 2024
eb173f1
Improve docstring of DynapcnnNetwork to explain behavior of `dvs_input`.
bauerfe Nov 8, 2024
189cfb8
WIP: Fix DVS input unit tests.
bauerfe Nov 8, 2024
a2bbb4c
(WIP): Fix doorbell tests.
bauerfe Nov 8, 2024
eec090d
Fix doorbell test
bauerfe Nov 8, 2024
876962f
Sort dynapcnn layers in network by key
bauerfe Nov 8, 2024
9460edc
Further bugfixes and improved readability of dynapcnn network repr.
bauerfe Nov 8, 2024
8897d24
Fix monitoring. Enable monitoring exit layers with -1
bauerfe Nov 8, 2024
2680ac1
Properly copy DVSLayer when instantiating DynapcnnNetwork. Fix DVS in…
bauerfe Nov 12, 2024
1cb7b5c
Reintroduce missing methods of DynapcnnNetwork: `reset_states`, `zero…
bauerfe Nov 12, 2024
f1e308d
Support model with only DVS
bauerfe Nov 12, 2024
b8827fa
Fix unit test `test_single_neuron....py`
bauerfe Nov 12, 2024
4d46656
Fix speckmini unit test
bauerfe Nov 12, 2024
681987f
Fix neuron leak unit test
bauerfe Nov 12, 2024
9cdb712
Provide more meaningful error when specific device is not found
bauerfe Nov 12, 2024
57333ce
Remove duplicate test
bauerfe Nov 12, 2024
c0291e1
Remove obsolete TODO
bauerfe Nov 12, 2024
a794647
Minor fixes.
bauerfe Nov 12, 2024
10e1bab
Undo erroneous comment
bauerfe Nov 12, 2024
205077d
Run black and isort
bauerfe Nov 12, 2024
db0b9b5
Fix 'deque' type hint
bauerfe Nov 12, 2024
9fbdf81
Try resolving "Subscripted generics cannot be used with class and ins…
bauerfe Nov 12, 2024
c23ad4e
Re-run black
bauerfe Nov 12, 2024
1954b61
Fix initialization issue
bauerfe Nov 13, 2024
9a194ad
Fix non-deterministic dynapcnn-network test
bauerfe Nov 13, 2024
347e225
Improve numerical robustness of input diff hook unit test
bauerfe Nov 13, 2024
e8fb2c7
Merge branch 'develop' into nonseq
ssinhaleite Feb 5, 2025
2889fe3
Remove author name from files
ssinhaleite Feb 5, 2025
ac00985
Format code
ssinhaleite Feb 5, 2025
7d620cf
Fix typo
ssinhaleite Feb 5, 2025
d91ba3d
Remove author name from files
ssinhaleite Feb 5, 2025
9f13403
Format code
ssinhaleite Feb 5, 2025
efd9866
Fix use of deprecated method
ssinhaleite Feb 5, 2025
c053666
Fix use of deprecated method
ssinhaleite Feb 5, 2025
b7767eb
Format code
ssinhaleite Feb 5, 2025
e0d25c8
Add updated versions of python and torch
ssinhaleite Feb 5, 2025
0ea8a78
Update parameter to load model on torch
ssinhaleite Feb 5, 2025
b104b83
Rename variable: map > info
ssinhaleite Feb 24, 2025
0927065
Change order of parameter to match previous implementation
ssinhaleite Feb 24, 2025
2ae7085
Remove author tag from file
ssinhaleite Feb 24, 2025
ff1799c
Update error messages
ssinhaleite Feb 24, 2025
69b8fe6
Update sinabs/backend/dynapcnn/chips/dynapcnn.py
ssinhaleite Feb 24, 2025
e3fb3f1
Update sinabs/backend/dynapcnn/dynapcnn_layer_utils.py
ssinhaleite Feb 24, 2025
e0f2c98
Update sinabs/backend/dynapcnn/dynapcnn_layer_utils.py
ssinhaleite Feb 24, 2025
0c2ed4e
Update sinabs/backend/dynapcnn/dynapcnn_layer_utils.py
ssinhaleite Feb 24, 2025
52f7e46
Fix typo
ssinhaleite Feb 24, 2025
47d9ebb
Update comment
ssinhaleite Feb 24, 2025
bfce288
Fix typo
ssinhaleite Feb 24, 2025
56511a8
Fix typos
ssinhaleite Feb 24, 2025
2cd49ff
Update comment
ssinhaleite Feb 24, 2025
c1e700c
Add requirement needed for tests
ssinhaleite Feb 25, 2025
5b25fda
Move import to outside method
ssinhaleite Feb 25, 2025
fa489d3
Update variable name dlcnn_map > dlcnn_info
ssinhaleite Feb 25, 2025
f8f36c4
Check if torch v2 works to fix the CI
ssinhaleite Feb 25, 2025
830 changes: 830 additions & 0 deletions examples/dynapcnn_network/snn_deployment.ipynb

Large diffs are not rendered by default.
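
Since the notebook diff is not rendered, here is a minimal sketch of the deployment flow such an example typically exercises, assuming the public sinabs-dynapcnn API (the SNN architecture, input shape, and device string are placeholders, not taken from the notebook):

import torch.nn as nn
from sinabs.backend.dynapcnn import DynapcnnNetwork
from sinabs.layers import IAFSqueeze

# Small sequential SNN standing in for the notebook's model.
snn = nn.Sequential(
    nn.Conv2d(2, 8, kernel_size=3, stride=1, bias=False),
    IAFSqueeze(batch_size=1),
    nn.AvgPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, stride=1, bias=False),
    IAFSqueeze(batch_size=1),
)

# Convert to a chip-deployable network; discretize quantizes weights and
# thresholds to chip precision.
dynapcnn_net = DynapcnnNetwork(
    snn, input_shape=(2, 34, 34), discretize=True, dvs_input=False
)

# Port the generated configuration to a connected dev kit (placeholder name).
dynapcnn_net.to(device="speck2fmodule:0")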

11 changes: 10 additions & 1 deletion sinabs/backend/dynapcnn/__init__.py
@@ -1,5 +1,14 @@
from .dynapcnn_network import ( # second one for compatibility purposes
from .dynapcnn_network import (
DynapcnnCompatibleNetwork,
DynapcnnNetwork,
)

from .dynapcnn_layer import (
DynapcnnLayer,
)

from .dynapcnn_layer_handler import (
DynapcnnLayerHandler,
)

from .dynapcnn_visualizer import DynapcnnVisualizer
194 changes: 106 additions & 88 deletions sinabs/backend/dynapcnn/chips/dynapcnn.py
@@ -10,9 +10,10 @@
from sinabs.backend.dynapcnn.config_builder import ConfigBuilder
from sinabs.backend.dynapcnn.dvs_layer import DVSLayer, expand_to_pair
from sinabs.backend.dynapcnn.dynapcnn_layer import DynapcnnLayer
from sinabs.backend.dynapcnn.dynapcnn_network import DynapcnnNetwork
from sinabs.backend.dynapcnn.dynapcnn_layer_handler import DynapcnnLayerHandler
from sinabs.backend.dynapcnn.mapping import LayerConstraints


class DynapcnnConfigBuilder(ConfigBuilder):
@classmethod
def get_samna_module(cls):
@@ -44,26 +45,26 @@ def set_kill_bits(cls, layer: DynapcnnLayer, config_dict: dict) -> dict:
"""
config_dict = copy.deepcopy(config_dict)

if layer.conv_layer.bias is not None:
(weights, biases) = layer.conv_layer.parameters()
if layer.conv.bias is not None:
(weights, biases) = layer.conv.parameters()
else:
(weights,) = layer.conv_layer.parameters()
biases = torch.zeros(layer.conv_layer.out_channels)
(weights,) = layer.conv.parameters()
biases = torch.zeros(layer.conv.out_channels)

config_dict["weights_kill_bit"] = (~weights.bool()).tolist()
config_dict["biases_kill_bit"] = (~biases.bool()).tolist()

# - Neuron states
if not layer.spk_layer.is_state_initialised():
if not layer.spk.is_state_initialised():
# then we assign no initial neuron state to DYNAP-CNN.
f, h, w = layer.get_neuron_shape()
neurons_state = torch.zeros(f, w, h)
elif layer.spk_layer.v_mem.dim() == 4:
elif layer.spk.v_mem.dim() == 4:
# 4-dimensional states should be the norm when there is a batch dim
neurons_state = layer.spk_layer.v_mem.transpose(2, 3)[0]
neurons_state = layer.spk.v_mem.transpose(2, 3)[0]
else:
raise ValueError(
f"Current v_mem (shape: {layer.spk_layer.v_mem.shape}) of spiking layer not understood."
f"Current v_mem (shape: {layer.spk.v_mem.shape}) of spiking layer not understood."
)

config_dict["neurons_value_kill_bit"] = (
@@ -73,12 +74,12 @@ def set_kill_bits(cls, layer: DynapcnnLayer, config_dict: dict) -> dict:
return config_dict

@classmethod
def get_dynapcnn_layer_config_dict(cls, layer: DynapcnnLayer):
def get_dynapcnn_layer_config_dict(cls, layer: DynapcnnLayer, layer_handler: DynapcnnLayerHandler, all_handlers: dict) -> dict:
config_dict = {}
config_dict["destinations"] = [{}, {}]

# Update the dimensions
channel_count, input_size_y, input_size_x = layer.input_shape
channel_count, input_size_y, input_size_x = layer.in_shape
dimensions = {"input_shape": {}, "output_shape": {}}
dimensions["input_shape"]["size"] = {"x": input_size_x, "y": input_size_y}
dimensions["input_shape"]["feature_count"] = channel_count
@@ -90,148 +91,164 @@ def get_dynapcnn_layer_config_dict(cls, layer: DynapcnnLayer):
dimensions["output_shape"]["size"]["x"] = w
dimensions["output_shape"]["size"]["y"] = h
dimensions["padding"] = {
"x": layer.conv_layer.padding[1],
"y": layer.conv_layer.padding[0],
"x": layer.conv.padding[1],
"y": layer.conv.padding[0],
}
dimensions["stride"] = {
"x": layer.conv_layer.stride[1],
"y": layer.conv_layer.stride[0],
"x": layer.conv.stride[1],
"y": layer.conv.stride[0],
}
dimensions["kernel_size"] = layer.conv_layer.kernel_size[0]
dimensions["kernel_size"] = layer.conv.kernel_size[0]

if dimensions["kernel_size"] != layer.conv_layer.kernel_size[1]:
if dimensions["kernel_size"] != layer.conv.kernel_size[1]:
raise ValueError("Conv2d: Kernel must have same height and width.")
config_dict["dimensions"] = dimensions
# Update parameters from convolution
if layer.conv_layer.bias is not None:
(weights, biases) = layer.conv_layer.parameters()
if layer.conv.bias is not None:
(weights, biases) = layer.conv.parameters()
else:
(weights,) = layer.conv_layer.parameters()
biases = torch.zeros(layer.conv_layer.out_channels)
(weights,) = layer.conv.parameters()
biases = torch.zeros(layer.conv.out_channels)
weights = weights.transpose(2, 3) # Need this to match samna convention
config_dict["weights"] = weights.int().tolist()
config_dict["biases"] = biases.int().tolist()
config_dict["leak_enable"] = biases.bool().any()
# config_dict["weights_kill_bit"] = torch.zeros_like(weights).bool().tolist()
# config_dict["biases_kill_bit"] = torch.zeros_like(biases).bool().tolist()

# Update parameters from the spiking layer

# - Neuron states
if not layer.spk_layer.is_state_initialised():
if not layer.spk.is_state_initialised():
# then we assign no initial neuron state to DYNAP-CNN.
f, h, w = layer.get_neuron_shape()
neurons_state = torch.zeros(f, w, h)
elif layer.spk_layer.v_mem.dim() == 4:
elif layer.spk.v_mem.dim() == 4:
# 4-dimensional states should be the norm when there is a batch dim
neurons_state = layer.spk_layer.v_mem.transpose(2, 3)[0]
neurons_state = layer.spk.v_mem.transpose(2, 3)[0]
else:
raise ValueError(
f"Current v_mem (shape: {layer.spk_layer.v_mem.shape}) of spiking layer not understood."
f"Current v_mem (shape: {layer.spk.v_mem.shape}) of spiking layer not understood."
)

# - Resetting vs returning to 0
if isinstance(layer.spk_layer.reset_fn, sinabs.activation.MembraneReset):
if isinstance(layer.spk.reset_fn, sinabs.activation.MembraneReset):
return_to_zero = True
elif isinstance(layer.spk_layer.reset_fn, sinabs.activation.MembraneSubtract):
elif isinstance(layer.spk.reset_fn, sinabs.activation.MembraneSubtract):
return_to_zero = False
else:
raise Exception(
"Unknown reset mechanism. Only MembraneReset and MembraneSubtract are currently understood."
)

# if (not return_to_zero) and self.spk_layer.membrane_subtract != self.spk_layer.threshold:
# warn(
# "SpikingConv2dLayer: Subtraction of membrane potential is always by high threshold."
# )
if layer.spk_layer.min_v_mem is None:
if layer.spk.min_v_mem is None:
min_v_mem = -(2**15)
else:
min_v_mem = int(layer.spk_layer.min_v_mem)
min_v_mem = int(layer.spk.min_v_mem)
config_dict.update(
{
"return_to_zero": return_to_zero,
"threshold_high": int(layer.spk_layer.spike_threshold),
"threshold_high": int(layer.spk.spike_threshold),
"threshold_low": min_v_mem,
"monitor_enable": False,
"neurons_initial_value": neurons_state.int().tolist(),
# "neurons_value_kill_bit" : torch.zeros_like(neurons_state).bool().tolist()
}
)
# Update parameters from pooling
if layer.pool_layer is not None:
config_dict["destinations"][0]["pooling"] = expand_to_pair(
layer.pool_layer.kernel_size
)[0]
config_dict["destinations"][0]["enable"] = True
else:
pass

# set destinations config based on the destination nodes of the nodes within this `dcnnl`.
destinations = []
for node_id, destination_nodes in layer_handler.nodes_destinations.items():
for dest_node in destination_nodes:
core_id = DynapcnnLayerHandler.find_nodes_core_id(dest_node, all_handlers)
kernel_size = layer_handler.get_pool_kernel_size(node_id)

dest_data = {
'layer': core_id,
'enable': True,
'pooling': expand_to_pair(kernel_size if kernel_size else 1),
}

destinations.append(dest_data)
config_dict["destinations"] = destinations

# Set kill bits
config_dict = cls.set_kill_bits(layer=layer, config_dict=config_dict)

return config_dict
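
For orientation, each element appended to `destinations` in the loop above is a plain dict; a made-up example of one entry (keys follow the diff, values are invented):

dest_data = {
    "layer": 3,         # core id of the destination, via find_nodes_core_id()
    "enable": True,
    "pooling": (2, 2),  # pooling kernel size expanded to a pair
}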

@classmethod
def write_dynapcnn_layer_config(
cls, layer: DynapcnnLayer, chip_layer: "CNNLayerConfig"
):
"""Write a single layer configuration to the dynapcnn conf object.
def write_dynapcnn_layer_config(cls, layer: DynapcnnLayer, chip_layer: "CNNLayerConfig", layer_handler: DynapcnnLayerHandler, all_handlers: dict) -> None:
""" Write a single layer configuration to the dynapcnn conf object. Uses the data in `layer` to configure a `CNNLayerConfig` to be
deployed on chip.

Parameters
----------
layer:
The dynapcnn layer to write the configuration for
chip_layer: CNNLayerConfig
DYNAPCNN configuration object representing the layer to which
configuration is written.
- layer (DynapcnnLayer): the layer for which the configuration will be written.
- chip_layer (CNNLayerConfig): configuration object representing the layer to which configuration is written.
- layer_handler (DynapcnnLayerHandler): ...
- all_handlers (dict): ...
"""
config_dict = cls.get_dynapcnn_layer_config_dict(layer=layer)
# Update configuration of the DYNAPCNN layer

# extract from the DynapcnnLayer the configuration variables for its CNNLayerConfig.
config_dict = cls.get_dynapcnn_layer_config_dict(layer=layer, layer_handler=layer_handler, all_handlers=all_handlers)

# update configuration of the DYNAPCNN layer.
chip_layer.dimensions = config_dict["dimensions"]
config_dict.pop("dimensions")
for i in range(len(config_dict["destinations"])):
if "pooling" in config_dict["destinations"][i]:
chip_layer.destinations[i].pooling = config_dict["destinations"][i][
"pooling"
]
config_dict.pop("destinations")

# set the destinations configuration.
for i in range(len(config_dict['destinations'])):
chip_layer.destinations[i].layer = config_dict['destinations'][i]['layer']
chip_layer.destinations[i].enable = config_dict['destinations'][i]['enable']
chip_layer.destinations[i].pooling = config_dict['destinations'][i]['pooling']

config_dict.pop('destinations')

# set remaining configuration.
for param, value in config_dict.items():
try:
setattr(chip_layer, param, value)
except TypeError as e:
raise TypeError(f"Unexpected parameter {param} or value. {e}")

@classmethod
def build_config(cls, model: "DynapcnnNetwork", chip_layers: List[int]):
layers = model.sequence
def build_config(cls, model: DynapcnnNetwork) -> DynapcnnConfiguration:
""" Uses `DynapcnnLayer` objects to configure their equivalent chip core via a `CNNLayerConfig` object that is built
using the `DynapcnnLayer` properties.

Parameters
----------
- model (DynapcnnNetwork): network instance used to read out `DynapcnnLayer` instances.

Returns
----------
- config (DynapcnnConfiguration): an instance of a `DynapcnnConfiguration`.
"""
config = cls.get_default_config()

has_dvs_layer = False
i_cnn_layer = 0 # Instantiate an iterator for the cnn cores
for i, chip_equivalent_layer in enumerate(layers):
if isinstance(chip_equivalent_layer, DVSLayer):
chip_layer = config.dvs_layer
cls.write_dvs_layer_config(chip_equivalent_layer, chip_layer)
has_dvs_layer = True
elif isinstance(chip_equivalent_layer, DynapcnnLayer):
chip_layer = config.cnn_layers[chip_layers[i_cnn_layer]]
cls.write_dynapcnn_layer_config(chip_equivalent_layer, chip_layer)
i_cnn_layer += 1
else:
# in our generated network there is a spurious layer...
# should never happen
raise TypeError("Unexpected layer in the model")
if not isinstance(model, DynapcnnNetwork):
raise ValueError(f"`model` has to be of type DynapcnnNetwork, but is {type(model)}.")

has_dvs_layer = False # TODO DVSLayer not supported yet.

if i == len(layers) - 1:
# last layer
chip_layer.destinations[0].enable = False
else:
# Set destination layer
chip_layer.destinations[0].layer = chip_layers[i_cnn_layer]
chip_layer.destinations[0].enable = True
# Loop over layers in network and write corresponding configurations
for layer_index, ith_dcnnl in model.layers_mapper.items():
if isinstance(ith_dcnnl, DVSLayer):
# TODO DVSLayer not supported yet.
pass

elif isinstance(ith_dcnnl, DynapcnnLayer):
# retrieve assigned core from the handler of this DynapcnnLayer (`ith_dcnnl`) instance.
chip_layer = config.cnn_layers[model.layers_handlers[layer_index].assigned_core]
# write core configuration.
cls.write_dynapcnn_layer_config(ith_dcnnl, chip_layer, model.layers_handlers[layer_index], model.layers_handlers)

else:
# shouldn't happen since type checks are made previously.
raise TypeError(f"Layer (index {layer_index}) is unexpected in the model: \n{ith_dcnnl}")

if not has_dvs_layer:
# TODO DVSLayer not supported yet.
config.dvs_layer.pass_sensor_events = False
else:
config.dvs_layer.pass_sensor_events = False

return config
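
Per the commit "Replace `chip_layers_ordering` by layer2core_map", the user-facing counterpart of the core lookup in this loop is a mapping from layer index to chip core. A hedged sketch of the corresponding call (the argument name comes from the commit message; the "auto" value and the explicit dict are assumptions, not verified against the final API):

# Assumed usage, based on commit messages in this PR:
config = dynapcnn_net.make_config(layer2core_map="auto")
# or with an explicit {layer_index: core_id} mapping:
config = dynapcnn_net.make_config(layer2core_map={0: 0, 1: 1, 2: 2})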
@@ -293,12 +310,13 @@ def monitor_layers(cls, config: "DynapcnnConfiguration", layers: List):
config.dvs_layer.monitor_enable = True
if config.dvs_layer.pooling.x != 1 or config.dvs_layer.pooling.y != 1:
warn(
f"DVS layer has pooling and is being monitored. "
"DVS layer has pooling and is being monitored. "
"Note that pooling will not be reflected in the monitored events."
)
monitor_layers.remove("dvs")
for lyr_indx in monitor_layers:
config.cnn_layers[lyr_indx].monitor_enable = True

if any(
dest.pooling != 1 for dest in config.cnn_layers[lyr_indx].destinations
):
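
The monitoring hunk above pairs with the commit "Fix monitoring. Enable monitoring exit layers with -1". A hedged sketch of the corresponding call, assuming `make_config` keeps its `monitor_layers` argument (the -1 shorthand for exit layers is taken from the commit message):

# Monitor the DVS layer plus all exit layers of the network:
config = dynapcnn_net.make_config(monitor_layers=["dvs", -1])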
6 changes: 3 additions & 3 deletions sinabs/backend/dynapcnn/chips/speck2cmini.py
@@ -1,4 +1,4 @@
from typing import List
from typing import List, Dict

import samna
from samna.speck2cMini.configuration import SpeckConfiguration
@@ -29,8 +29,8 @@ def get_output_buffer(cls):
return samna.BasicSinkNode_speck2c_mini_event_output_event()

@classmethod
def get_dynapcnn_layer_config_dict(cls, layer: DynapcnnLayer):
config_dict = super().get_dynapcnn_layer_config_dict(layer=layer)
def get_dynapcnn_layer_config_dict(cls, layer: DynapcnnLayer, layers_mapper: Dict[int, DynapcnnLayer]) -> dict:
config_dict = super().get_dynapcnn_layer_config_dict(layer=layer, layers_mapper=layers_mapper)
config_dict.pop("weights_kill_bit")
config_dict.pop("biases_kill_bit")
config_dict.pop("neurons_value_kill_bit")
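
The chip-specific builders in the files that follow override the same hook: they take the DYNAP-CNN config dict from the parent class and, where the hardware lacks kill-bit registers (as for speck2cmini above and speck2f below), strip those fields. A minimal sketch of that pattern (the class name is invented; the signature and popped keys mirror the speck2cmini diff):

from typing import Dict

from sinabs.backend.dynapcnn.chips.dynapcnn import DynapcnnConfigBuilder
from sinabs.backend.dynapcnn.dynapcnn_layer import DynapcnnLayer

class MyChipConfigBuilder(DynapcnnConfigBuilder):  # invented name
    @classmethod
    def get_dynapcnn_layer_config_dict(
        cls, layer: DynapcnnLayer, layers_mapper: Dict[int, DynapcnnLayer]
    ) -> dict:
        config_dict = super().get_dynapcnn_layer_config_dict(
            layer=layer, layers_mapper=layers_mapper
        )
        # Hypothetical chip without kill-bit registers: drop those fields.
        config_dict.pop("weights_kill_bit")
        config_dict.pop("biases_kill_bit")
        config_dict.pop("neurons_value_kill_bit")
        return config_dict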
6 changes: 4 additions & 2 deletions sinabs/backend/dynapcnn/chips/speck2e.py
@@ -5,6 +5,8 @@

from .dynapcnn import DynapcnnConfigBuilder

from typing import Dict

# Since most of the configuration is identical to DYNAP-CNN, we can simply inherit this class


@@ -30,6 +32,6 @@ def set_kill_bits(cls, layer: DynapcnnLayer, config_dict: dict) -> dict:
return config_dict

@classmethod
def get_dynapcnn_layer_config_dict(cls, layer: DynapcnnLayer):
config_dict = super().get_dynapcnn_layer_config_dict(layer=layer)
def get_dynapcnn_layer_config_dict(cls, layer: DynapcnnLayer, layers_mapper: Dict[int, DynapcnnLayer]) -> dict:
config_dict = super().get_dynapcnn_layer_config_dict(layer=layer, layers_mapper=layers_mapper)
return config_dict
6 changes: 4 additions & 2 deletions sinabs/backend/dynapcnn/chips/speck2f.py
@@ -5,6 +5,8 @@

from .dynapcnn import DynapcnnConfigBuilder

from typing import Dict

# Since most of the configuration is identical to DYNAP-CNN, we can simply inherit this class


@@ -26,8 +28,8 @@ def get_output_buffer(cls):
return samna.BasicSinkNode_speck2f_event_output_event()

@classmethod
def get_dynapcnn_layer_config_dict(cls, layer: DynapcnnLayer):
config_dict = super().get_dynapcnn_layer_config_dict(layer=layer)
def get_dynapcnn_layer_config_dict(cls, layer: DynapcnnLayer, layers_mapper: Dict[int, DynapcnnLayer]) -> dict:
config_dict = super().get_dynapcnn_layer_config_dict(layer=layer, layers_mapper=layers_mapper)
config_dict.pop("weights_kill_bit")
config_dict.pop("biases_kill_bit")
config_dict.pop("neurons_value_kill_bit")