Refactoring of RTL/HLS component integration #928

Merged Mar 27, 2024 · 336 commits

Commits
8416358
[CustomOp] Move stream declaration for hls code into hlsbackend
auphelia Jan 30, 2024
3a0da24
[CustomOp] Move dataout cpp function to hlsbackend
auphelia Jan 30, 2024
bf5de4d
Revert "[TBS] Clean up branch for HLS variant only"
Jan 30, 2024
36603f6
[tests] add rtl impl style to threshold test
Jan 30, 2024
8843c0e
[CustomOp] Add Thresholding RTL Class
Jan 30, 2024
7b272bd
[CustomOp] Clean up tests and move dynamic mode in swg hw abstraction…
auphelia Jan 31, 2024
2cafe59
[Tests] Fix linting for swg dynamic test
auphelia Jan 31, 2024
a0e5639
[CustomOp] rtl threshold must inherit abstraction and rtlbackend
Jan 31, 2024
33ed740
[CustomOp] Remove duplicate inherited functions and attributes from t…
Jan 31, 2024
5f05460
[Test] add helper functions
Feb 1, 2024
ff3d60c
[Test] RTL test skip cppsim exec mode
Feb 1, 2024
0e7dbcd
Merge remote-tracking branch 'origin/refactor/rtl_integration' into r…
Feb 1, 2024
c54d32c
[Pyverilator] update to new rtlsim_multi_io implementation
Feb 1, 2024
2110334
[CustomOp] overload thresholding rtl code_generation_ipgen function
Feb 1, 2024
be5ae02
[tests] relax rtlsim cycle count match
Feb 1, 2024
f0dcec3
[hlsbackend]: update limit HLS axi streams (8k-1)
mmrahorovic Jan 11, 2024
5176eb7
[mvau hls]: refactored MVAU_hls custom_op
mmrahorovic Jan 26, 2024
b7480bb
[refactor]: call to base_op_type method instead of custom_op type
mmrahorovic Jan 26, 2024
4f707d8
[hls custom-op]: add mvau_hls
mmrahorovic Jan 26, 2024
7c6065c
[hw custom-op]: refactor MVAU
mmrahorovic Jan 26, 2024
0cb2d59
[VVAU hw custom-op]: add base_op_type method
mmrahorovic Jan 26, 2024
627639a
[transform]: add transformation to infer MVAU hw custom-op
mmrahorovic Jan 26, 2024
cd3d431
[test mvau]: modified to support new custom-ops
mmrahorovic Jan 26, 2024
0348a7c
[vvau hls]: add custom op to dict
mmrahorovic Feb 1, 2024
b2c10d8
[vvu hw-op]: refactored hw custom-op VVAU
mmrahorovic Feb 1, 2024
f7d0ad9
[vvau hls-op]: refactored HLS custom-op VVAU
mmrahorovic Feb 1, 2024
f9b8fbc
[convert-to-hw]: added transformations to infer binary-MVAU and VVAU
mmrahorovic Feb 1, 2024
8be157c
[mvau/vvau hw-op]: remove duplicate node attribute
mmrahorovic Feb 1, 2024
445cfa6
[hw vvau]: rename specific method to more generic name
mmrahorovic Feb 2, 2024
e33104e
[hw vvau]: minor bugfix to node execution
mmrahorovic Feb 2, 2024
6884030
[test]: extend vvau test to simulate HW custom-op as well
mmrahorovic Feb 2, 2024
8aaec4b
[hw mvau]: minor bugfix to node execution and cleaned up code
mmrahorovic Feb 2, 2024
07f977e
[test]: cleaned up mvau test
mmrahorovic Feb 2, 2024
3466e88
[hw mvau]: minor bugfix
mmrahorovic Feb 2, 2024
496869f
updated copyright header
mmrahorovic Feb 2, 2024
2910aca
[hls vvau]: add execute_node function
mmrahorovic Feb 2, 2024
b4fb604
[CustomOp] re arrange threshold mem_mode related attributes to match …
Feb 2, 2024
9ec0a3d
[rtllib] Remove threshold IP Package
Feb 2, 2024
3fd5260
[CustomOp] do not allow ram_style for threshold RTL
Feb 2, 2024
c8281c3
Revert "[Pyverilator] update to new rtlsim_multi_io implementation"
Feb 2, 2024
7fd0cee
Merge pull request #971 from mmrahorovic/refactor/hls_mvu_vvu
auphelia Feb 2, 2024
3f3b7c5
[CustomOp] Thresholding node must explicitly reset_rtlsim
Feb 2, 2024
84ec9ea
[CustomOp/Transform] Fix linting and cleanup
auphelia Feb 2, 2024
05881df
[Util] Introduce new functions to check if node is hls or rtl
auphelia Feb 7, 2024
94a2ff3
[Tests] First cleanup over tests to update to new flow
auphelia Feb 7, 2024
d24ef63
[CustomOp] Thresholding Generate Param
Feb 8, 2024
c4b7b4b
[Tests/transforms] Cleanup tests and transforms for new flow
auphelia Feb 8, 2024
0fe2e30
[Tests] Update infer data layout test
auphelia Feb 9, 2024
40cfe01
[Builder/Transform] Update builder and transformations according to n…
auphelia Feb 9, 2024
7c3ccd3
[Tests] Change cnv dictionary for bnn pynq test
auphelia Feb 9, 2024
91d5839
Merge dev into refactor/rtl_integration
auphelia Feb 9, 2024
64c0c7d
[Tests] Update folding test
auphelia Feb 9, 2024
ba56a2d
[Tests] Update fifo and ipstitch test to new flow
auphelia Feb 12, 2024
79ef071
[CustomOp] Fix typo in HLS SWG LUT estimation
auphelia Feb 13, 2024
5b10b98
[Tests] Update cybsec mlp test to new flow
auphelia Feb 13, 2024
100d281
[hw mvau]: remove dsp/lut estimation functions, modified how ip gets …
mmrahorovic Feb 13, 2024
3a36ef1
[hls mvau]: added lut/dsp estimation functions, instantiate_ip method…
mmrahorovic Feb 13, 2024
4266e08
[test]: added GiveUniqueNodeNames transform and changed RTLsim test p…
mmrahorovic Feb 13, 2024
5dfc440
post linting
mmrahorovic Feb 13, 2024
a6a3d4c
[tests] Split threshold runtime tests to runtime read and write tests
Feb 13, 2024
9c96192
[CustomOp] Zero pad row of threshold weight dat file
Feb 13, 2024
d460dac
Merge remote-tracking branch 'origin/refactor/rtl_integration' into r…
Feb 13, 2024
526e71f
[hls mvau]: minor style change
mmrahorovic Feb 16, 2024
1091ce9
[Builder] Expose swg expection for FIFOs to build args
auphelia Feb 16, 2024
462a79c
linting
mmrahorovic Feb 16, 2024
f31f844
[IPStitching] Check if node has hls or rtl backend
auphelia Feb 16, 2024
11b4370
Merge pull request #980 from mmrahorovic/bugfix/mvu_hls_refactor
auphelia Feb 16, 2024
91679a1
[MVAU] Shorten op type MatrixVectorActivation to MVAU
auphelia Feb 16, 2024
b99035a
[MVAU/Tests] Change rtlsim function in MVAU execute node
auphelia Feb 19, 2024
e29485a
[Tests] Change tests to use new op type for MVAU
auphelia Feb 19, 2024
7429ee6
[CustomOp] Zero Pad threshold weights file between channel folds
Feb 20, 2024
b8b7baf
[Tests] Fix MVAU test with large depth decoupled mode
auphelia Feb 20, 2024
8220852
[NB] First cleanup over notebooks
auphelia Feb 20, 2024
c5ca128
[NB] Update cybersec notebooks
auphelia Feb 20, 2024
0928d31
[test]: added extra tests to RTL-based MVAU
mmrahorovic Feb 20, 2024
ef8157c
Reply to readbacks from padded memory areas.
preusser Feb 21, 2024
a395fc7
[Transform/Analysis] Cleanup usage of is_fpgadataflow_node
auphelia Feb 21, 2024
34716ba
[tests] only check hls model analysis on hls modules
Feb 22, 2024
2bf40ca
[tests] increase folding config for threshold tests
Feb 22, 2024
c09005b
[tests] rename threshold weight files for distributed testing
Feb 22, 2024
0f03e37
[CustomOp] threshold stage loop starts from 0
Feb 22, 2024
da909e2
Merge remote-tracking branch 'origin/refactor/rtl_integration' into r…
Feb 22, 2024
c4e57da
[tests] convert to hw test for thresholding layers
Feb 22, 2024
666356a
[CustomOp] Update copyright headers for thresholding
Feb 23, 2024
8937811
[CustomOp] Move calc_tmem to abstraction layer
Feb 23, 2024
ce14ea2
[RTL layers] Default to parent execute node function for cppsim
auphelia Feb 23, 2024
2438831
[CustomOps] threshold mem_mode for HLS variant only
Feb 23, 2024
e3e8c97
[Transform] Clean up SpecializeLayers transform
auphelia Feb 23, 2024
55671ac
[Transform] Cleanup InsertDWC check if node is dwc node
auphelia Feb 23, 2024
b60dc42
[RTL layers] Remove warning for cppsim
auphelia Feb 23, 2024
e7c1e5f
[CustomOp] restructure class methods from class hierachy
Feb 28, 2024
d16d493
[CustomOp] Remove redudent methods from thresholding rtl
Mar 1, 2024
d612c29
[CustomOp] clean up threshold weight generation
Mar 1, 2024
503efe7
[CustomOps] make weight files during HDL file generation
Mar 1, 2024
2c50994
[tests] threshold test get the right impl_style
Mar 1, 2024
d48c711
[CustomOp] Add doc string for memutil function
Mar 1, 2024
9b281e1
Merge remote-tracking branch 'origin/refactor/rtl_integration' into r…
Mar 4, 2024
ff3458b
[build dataflow]: add fpgapart as argument to SpecializeLayers transform
mmrahorovic Mar 4, 2024
4f4385f
[hls mvau]: remove duplicate method
mmrahorovic Mar 4, 2024
055c8fe
[hw mvau]: move get_verilog_top_module_intf_names to hw-op abstractio…
mmrahorovic Mar 4, 2024
fd0f796
added MVAU_rtl custom-op
mmrahorovic Mar 4, 2024
91a8c00
[transform]: minor fix to extracting op_type from node, added fpgapar…
mmrahorovic Mar 4, 2024
d7f8714
[transform]: added fpgapart as attribute and functions to determine w…
mmrahorovic Mar 4, 2024
11d0c5c
[util]: added function to check if device is part of Versal family
mmrahorovic Mar 4, 2024
ea6fb35
[rtl mvu/vvu]: rtl compute core, flow control and axi wrapper for MVU…
mmrahorovic Mar 4, 2024
b295329
[tb]: testbench for replay_buffer and mvu/vvu layers
mmrahorovic Mar 4, 2024
7cf62c7
[Tests] Specialize layers before checksum hook insertion
auphelia Mar 4, 2024
83fe7e8
[rtl mvu]: added MVU_rtl layer
mmrahorovic Mar 4, 2024
4f19aa4
[Tests] Fix for cppsim with impl style rtl in SWG
auphelia Mar 4, 2024
649c428
[test]: added mvau_rtl test case
mmrahorovic Mar 4, 2024
58cfbb4
Merge remote-tracking branch 'upstream/refactor/rtl_integration' into…
mmrahorovic Mar 4, 2024
87f551f
[Pre-commit] Run linting
auphelia Mar 5, 2024
f8c987c
[RTL layers] Pass model by default to generate hdl functionality and …
auphelia Mar 5, 2024
3244048
[Thresholding HLS] Clean up weightstream width functions
auphelia Mar 5, 2024
bf17bc3
[Threshold RTL] Remove unused generate params fct
auphelia Mar 5, 2024
8a68327
[Thresholding] Code clean for generation of hw compatible tensor
auphelia Mar 5, 2024
4e244a7
[Tests] Add comment to params for thresholding test
auphelia Mar 5, 2024
ac56bae
[NBs] Update folding notebook
auphelia Mar 5, 2024
4a3eeda
[Thresholding RTL] Add doc strings to class methods
Mar 6, 2024
f759400
[tests] functional validation thresholding to_hw transform
Mar 6, 2024
741fcfe
Merge dev in refactor/rtl_integration
auphelia Mar 6, 2024
06607d5
[mem mode] Refactor mem_mode argument
auphelia Mar 6, 2024
4b3737a
[NBs] Cleanup advanced builder nb and add placeholder for specialize …
auphelia Mar 6, 2024
c79f364
[Threshold RTL] Remove redundent functions
Mar 6, 2024
999ed82
[mvau]: renamed method to more generic name
mmrahorovic Mar 6, 2024
209b81c
[rtl mvau]: add CPPsim functionality (fall back to MVAU exec)
mmrahorovic Mar 6, 2024
0f216a7
[specialize layers]: minor bugfix and removed VVU-related support
mmrahorovic Mar 6, 2024
dd0369c
[test]: added RTL-MVAU CPPsim test
mmrahorovic Mar 6, 2024
41b7615
[tests] remove util test
Mar 7, 2024
105e089
Merge remote-tracking branch 'origin/refactor/rtl_integration' into r…
Mar 7, 2024
216cb0d
[tests] Dont skip BIPOLAR test for thresholding
Mar 7, 2024
0014745
[Thresholding] bipolar type do not require negative activation
Mar 7, 2024
95b51ba
[refactor] linting
Mar 7, 2024
0d240f6
[rtl mvau]: added methods related to RTL file retrieval and corrected…
mmrahorovic Mar 7, 2024
943dcf3
updated copyright header
mmrahorovic Mar 7, 2024
dde16a9
[transform]: renamed variable
mmrahorovic Mar 7, 2024
8986c23
[rtlbackend]: added additional parameters to generate_hdl
mmrahorovic Mar 7, 2024
ee5312e
[rtl op]: extended generate_hdl argument list
mmrahorovic Mar 7, 2024
b69a0fd
[rtlbackend]: extended argument list of abstractmethod accordingly
mmrahorovic Mar 7, 2024
366db07
[mvau]: renamed method to more generic name
mmrahorovic Mar 8, 2024
4334dd9
minor fix to abstractmethod parameters
mmrahorovic Mar 8, 2024
0929798
minor fix to comment
mmrahorovic Mar 8, 2024
eb6e0ae
[test]: cleaned up test and minor modifications for supporting RTL-op
mmrahorovic Mar 8, 2024
5f7e9ae
[test]: minor change to get_nodes_by_op_type call
mmrahorovic Mar 8, 2024
4bb2e88
updated PyVerilator commit hash
mmrahorovic Mar 8, 2024
dbd715d
[rtl mvau]: updated DSP resource estimates
mmrahorovic Mar 8, 2024
8859d81
[transform]: added additional check for rtl-MVAU and added is_versal …
mmrahorovic Mar 8, 2024
38a3d05
Merge branch 'dev' into refactor/rtl_integration
auphelia Mar 8, 2024
eab06ee
Merge remote-tracking branch 'upstream/refactor/rtl_integration' into…
mmrahorovic Mar 8, 2024
f61ced8
[Tests] Set mem_mode only if impl_style=hls for thresholding
auphelia Mar 8, 2024
5fe519d
[Thresholding] Rename mem mode to internal_decoupled
auphelia Mar 8, 2024
b67281b
[transform]: add default empty string to fpgapart
mmrahorovic Mar 8, 2024
dfc1b20
[transform]: minor fix to how fpgapart is propagated
mmrahorovic Mar 8, 2024
e1a18c7
[Tests] Update runtime thresholding test with new mem mode
auphelia Mar 8, 2024
0883e56
Merge pull request #968 from Xilinx/refactor/threshold_rtl
auphelia Mar 8, 2024
62b1655
[transform]: minor fix to infer right MVAU type
mmrahorovic Mar 8, 2024
f35dbf8
[Tests] Remove mem_mode from conversion to hw in end2end tests
auphelia Mar 8, 2024
c9b1d37
[Tests] Extend check to cover all cases for cppsim rtl swg
auphelia Mar 11, 2024
96712fd
[RTL Thresholding] Temporarily defaulting to HLS variant in conversion
auphelia Mar 11, 2024
acd4c55
[NBs] Update tfc end2end notebooks to reflect new flow
auphelia Mar 11, 2024
561e69b
[NBs] Fix linting for verification svg
auphelia Mar 11, 2024
c4aa418
[mvu]: updated comments and removed mvu_vvu_lut module
mmrahorovic Mar 12, 2024
07ac1c9
[Thresholding] Add NC case to HW op execution fct
auphelia Mar 12, 2024
68ea110
[NBs] Update cnv end2end and advanced builder settings notebook
auphelia Mar 12, 2024
9aab2a4
[Docs] Update auto generated docs files
auphelia Mar 14, 2024
13afb71
updated mvu_rtl checker
mmrahorovic Mar 14, 2024
f1d4c2c
[rtl mvau]: added more info to assertion message
mmrahorovic Mar 14, 2024
15ce083
minor fix to if-branch
mmrahorovic Mar 14, 2024
cfdf0bc
minor fix to if-branch
mmrahorovic Mar 14, 2024
f87d290
[tests]: fixed assert statement for fifo characterization
mmrahorovic Mar 14, 2024
a6e4376
[tests]: fixed assert statement for fifo characterization
mmrahorovic Mar 14, 2024
79ca572
[rtl mvau]: update mem_mode options
mmrahorovic Mar 14, 2024
e2e0a4c
[tests]: clean-up
mmrahorovic Mar 14, 2024
f38e7ed
[rtl mvau]: update mem_mode options
mmrahorovic Mar 14, 2024
7c8dc6d
[tests]: clean-up
mmrahorovic Mar 14, 2024
a17bb19
[renaming]: renamed VectorVectorActivation to VVAU due to buffer over…
mmrahorovic Mar 14, 2024
8a48cac
[hls vvau]: renamed layer and added method to instantiate ip
mmrahorovic Mar 14, 2024
400c043
[rtl vvau]: RTL VVAU custom-op
mmrahorovic Mar 14, 2024
8424342
[vvau]: changed weight file generation and execution_node; accounted …
mmrahorovic Mar 14, 2024
0f25d43
[transform]: added support for converting to VVAU-RTL layer
mmrahorovic Mar 14, 2024
94f0830
[test]: added test for RTL-VVAU
mmrahorovic Mar 14, 2024
3fe3e06
Broadcast quantization scale to channel dimension
Mar 13, 2024
88f59b3
Broadcast per tensor threshold weights to all channels
Mar 15, 2024
e7d5af3
Revert "Broadcast quantization scale to channel dimension"
Mar 15, 2024
aac490b
Merge remote-tracking branch 'upstream/refactor/rtl_integration' into…
mmrahorovic Mar 15, 2024
61257b1
Merge remote-tracking branch 'upstream/refactor/rtl_integration' into…
mmrahorovic Mar 15, 2024
c0a1d73
[mvau]: update mem_mode name
mmrahorovic Mar 15, 2024
a43d96c
[mvau]: update mem_mode name
mmrahorovic Mar 15, 2024
73bfb34
[vvau]: moved/added get_verilog_top_module_intf_names to HW-custom op
mmrahorovic Mar 15, 2024
b2a87d6
cleaned up comments and obsolete methods
mmrahorovic Mar 15, 2024
9eb746a
[mvau]: set default resType to auto
mmrahorovic Mar 15, 2024
c3bfa3f
[folding]: add MVAU_rtl in auto-folding
mmrahorovic Mar 15, 2024
2f2db73
[transform]: added comments and extra check to prevent binaryxnor_mod…
mmrahorovic Mar 15, 2024
fdbfc88
Merge remote-tracking branch 'origin/refactor/rtl_ops' into refactor/…
mmrahorovic Mar 15, 2024
b050024
[HWop/Tests] Cleanup of unsused fct in HWCustomOp and invalid skippin…
auphelia Mar 15, 2024
f61aa0d
add MVAU_rtl extension
mmrahorovic Mar 15, 2024
e4caf06
update comments
mmrahorovic Mar 15, 2024
aa5b46f
Merge remote-tracking branch 'origin/refactor/rtl_ops' into refactor/…
mmrahorovic Mar 15, 2024
6f07732
cleaned up with pre-commit
mmrahorovic Mar 15, 2024
76c0b3b
Merge pull request #995 from mmrahorovic/refactor/rtl_ops
auphelia Mar 15, 2024
f0a3cd0
Merge remote-tracking branch 'origin/refactor/rtl_ops' into refactor/…
mmrahorovic Mar 15, 2024
bd16f2e
[Tests] Fix checks for tests if converted to RTL MVU
auphelia Mar 19, 2024
f70f531
[Tests] Update tests
auphelia Mar 19, 2024
3524e16
[Tests] Add minimize accumulator width to deconv test
auphelia Mar 19, 2024
6bb4d65
Merge remote-tracking branch 'upstream/refactor/rtl_integration' into…
mmrahorovic Mar 19, 2024
9c8406b
[transform]: updated comment VVU-RTL checker
mmrahorovic Mar 19, 2024
212c44a
[transform]: fix to default to HLS MVAU if bit-width < 4
mmrahorovic Mar 20, 2024
f7e1a83
[rtl vvau]: removed unused methods
mmrahorovic Mar 20, 2024
7704654
renamed VectorVectorActivation_{hls,rtl} to VVAU_{hls,rtl}
mmrahorovic Mar 20, 2024
d8b251c
[transform]: fix to default to HLS VVAU if bit-width < 4
mmrahorovic Mar 20, 2024
a0c7b58
Merge pull request #1005 from mmrahorovic/refactor/spec_mvu_rtl
auphelia Mar 20, 2024
6870a4b
Merge remote-tracking branch 'upstream/refactor/rtl_integration' into…
auphelia Mar 20, 2024
ff31d9f
[Tests] Infer RTL VVAUs in end2end mobilenet test
auphelia Mar 20, 2024
e0cfeee
[Thresholding rtl] Update template wrapper file names to match top mo…
Mar 20, 2024
1e71186
Update comment
Mar 20, 2024
755dacb
[transform]: unsigned weights currently not supported
mmrahorovic Mar 21, 2024
87820f9
Merge pull request #1008 from mmrahorovic/fix/rtl-mvu-signed-weights
auphelia Mar 21, 2024
e361d6d
Merge pull request #1002 from Xilinx/bugfix/broadcast_threshold
auphelia Mar 21, 2024
04242f9
Merge pull request #1006 from Xilinx/bugfix/rtl_wrapper
auphelia Mar 21, 2024
a6020e9
Merge remote-tracking branch 'upstream/refactor/rtl_integration' into…
mmrahorovic Mar 21, 2024
7f29a42
[transform]: RTL-VVU exclude unsigned weights
mmrahorovic Mar 21, 2024
a4a2ae4
[Tests] Update bnn pynq to use rtl components - thresh and swg
auphelia Mar 21, 2024
d9a1bd9
Merge remote-tracking branch 'origin/refactor/rtl_ops_vvu' into refac…
mmrahorovic Mar 21, 2024
942735d
[Tests] Remove mem mode setting for RTL Thresholding
auphelia Mar 21, 2024
2fc9590
[Thresholding] Use new wrapper name in prepare rtlsim
auphelia Mar 22, 2024
7b30c63
Merge pull request #1000 from mmrahorovic/refactor/rtl_ops_vvu
auphelia Mar 22, 2024
5df52b9
Fix linting
auphelia Mar 22, 2024
e1c326d
[Docs] First sweep to update the documentation
auphelia Mar 22, 2024
fc6877b
[Thresholding RTL] Prepend dummy threshold for narrow range quantization
Mar 22, 2024
f5ca9f2
Merge pull request #1010 from Xilinx/bugfix/narrow_quant_threshold
auphelia Mar 25, 2024
85baad0
[Test] Apply parallelism independent if it is HLS or RTL variant
auphelia Mar 25, 2024
7b50f16
[Docs] Update manually written docs
auphelia Mar 25, 2024
b59e851
[transform]: remove resType selection of VVAU
mmrahorovic Mar 25, 2024
e8ae3c4
[tests]: renamed VectorVectorActivation to VVAU
mmrahorovic Mar 25, 2024
6226ab5
Merge pull request #1012 from mmrahorovic/bugfix/vvau_renaming
auphelia Mar 25, 2024
e057fc9
[Docs] Update top level markdown files
auphelia Mar 25, 2024
7b13840
[Docs] Fix typo in CONTRIBUTING markdown
auphelia Mar 25, 2024
d4fbd21
[Docs] Update AUTHORS md
auphelia Mar 25, 2024
6d03694
[Tests/Docs] Set SWG to HLS for depthwise conv cppsim tests
auphelia Mar 25, 2024
00e6e51
[Deps] Update dockerfile with new copyright header
auphelia Mar 25, 2024
1e97e97
[Tests] Force HLS components for special case cnv-2-2 on u250 and pyn…
Mar 26, 2024
aa361f5
[rtl swg]: interleave channels for CPPsim
mmrahorovic Mar 26, 2024
87d11c6
[vvau]: RTL-swg in cppsim now interleaves channels -- updated 'pe' se…
mmrahorovic Mar 26, 2024
42852df
[tests]: remove defaulting SWG to HLS
mmrahorovic Mar 26, 2024
61b27b4
Merge pull request #1014 from mmrahorovic/fix/rtl_swg_cppsim
auphelia Mar 26, 2024
86e28e4
[Tests] Enable interleaving of output for dw only
auphelia Mar 26, 2024
cf8e9df
Merge pull request #1015 from Xilinx/testing/force_hls_cnv
auphelia Mar 26, 2024
84654a3
Fix linting
auphelia Mar 26, 2024
9507e23
[Thresholding RTL] extract RAM trigger to json config
Mar 27, 2024
b9d4e62
Merge pull request #1016 from Xilinx/bugfix/thresh_json
auphelia Mar 27, 2024
6 changes: 6 additions & 0 deletions AUTHORS.rst
@@ -28,3 +28,9 @@ Contributors
* Matthias Gehre (@mgehre-amd)
* Hugo Le Blevec (@hleblevec)
* Patrick Geel (@patrickgeel)
+* John Monks (@jmonks-amd)
+* Tim Paine (@timkpaine)
+* Linus Jungemann (@LinusJungemann)
+* Shashwat Khandelwal (@shashwat1198)
+* Ian Colbert (@i-colbert)
+* Rachit Garg (@rstar900)
10 changes: 0 additions & 10 deletions CHANGELOG.rst

This file was deleted.

56 changes: 55 additions & 1 deletion CONTRIBUTING.md
@@ -29,6 +29,60 @@ Please follow the steps below and be sure that your contribution complies with o
1. The <a href="https://github.com/Xilinx/finn" target="_blank">main branch</a> should always be treated as stable and clean. Only hot fixes may be pull-requested against it, and a hot fix should be critical enough that, without it, major functionality would break.
2. For new features, smaller bug fixes, doc updates, and many other fixes, users should pull request against the <a href="https://github.com/Xilinx/finn/tree/dev" target="_blank">development branch</a>.

-3. We will review your contribution and, if any additional fixes or modifications are
+3. Sign Your Work

Please use the *Signed-off-by* line at the end of your patch, which indicates that you accept the Developer Certificate of Origin (DCO) defined by https://developercertificate.org/ and reproduced below:

```
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or

(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or

(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.

(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```

You can enable Signed-off-by automatically by adding the `-s` flag to the `git commit` command.

Here is an example Signed-off-by line, which indicates that the contributor accepts the DCO:

```
This is my commit message

Signed-off-by: Jane Doe <jane.doe@example.com>
```

4. We will review your contribution and, if any additional fixes or modifications are
necessary, may provide feedback to guide you. When accepted, your pull request will
be merged to the repository. If you have more questions please contact us.
3 changes: 2 additions & 1 deletion LICENSE.txt
@@ -1,4 +1,5 @@
-Copyright (c) 2020, Xilinx
+Copyright (C) 2020-2022, Xilinx, Inc.
+Copyright (C) 2022-2024, Advanced Micro Devices, Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
7 changes: 3 additions & 4 deletions README.md
@@ -2,13 +2,12 @@



-<img align="left" src="https://raw.githubusercontent.com/Xilinx/finn/github-pages/docs/img/finn-stack.png" alt="drawing" style="margin-right: 20px" width="250"/>
+<img align="left" src="https://raw.githubusercontent.com/Xilinx/finn/github-pages/docs/img/finn-stack.PNG" alt="drawing" style="margin-right: 20px" width="250"/>

[![GitHub Discussions](https://img.shields.io/badge/discussions-join-green)](https://github.com/Xilinx/finn/discussions)
[![ReadTheDocs](https://readthedocs.org/projects/finn/badge/?version=latest&style=plastic)](http://finn.readthedocs.io/)

-FINN is an experimental framework from Xilinx Research Labs to explore deep neural network
-inference on FPGAs.
+FINN is an experimental framework from Integrated Communications and AI Lab of AMD Research & Advanced Development to explore deep neural network inference on FPGAs.
It specifically targets <a href="https://github.com/maltanar/qnn-inference-examples" target="_blank">quantized neural
networks</a>, with emphasis on
generating dataflow-style architectures customized for each network.
@@ -28,7 +27,7 @@ Please see the [Getting Started](https://finn.readthedocs.io/en/latest/getting_s

## Documentation

-You can view the documentation on [readthedocs](https://finn.readthedocs.io) or build them locally using `python setup.py doc` from inside the Docker container. Additionally, there is a series of [Jupyter notebook tutorials](https://github.com/Xilinx/finn/tree/main/notebooks), which we recommend running from inside Docker for a better experience.
+You can view the documentation on [readthedocs](https://finn.readthedocs.io). Additionally, there is a series of [Jupyter notebook tutorials](https://github.com/Xilinx/finn/tree/main/notebooks), which we recommend running from inside Docker for a better experience.

## Community

5 changes: 3 additions & 2 deletions docker/Dockerfile.finn
@@ -1,4 +1,5 @@
-# Copyright (c) 2021, Xilinx
+# Copyright (C) 2021-2022, Xilinx, Inc.
+# Copyright (C) 2022-2024, Advanced Micro Devices, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -27,7 +28,7 @@
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

FROM ubuntu:jammy-20230126
-LABEL maintainer="Yaman Umuroglu <yamanu@xilinx.com>"
+LABEL maintainer="Jakoba Petri-Koenig <jakoba.petri-koenig@amd.com>, Yaman Umuroglu <yaman.umuroglu@amd.com>"

ARG XRT_DEB_VERSION="xrt_202220.2.14.354_22.04-amd64-xrt"

10 changes: 5 additions & 5 deletions docs/finn/brevitas_export.rst
@@ -8,11 +8,11 @@ Brevitas Export
:scale: 70%
:align: center

-FINN expects an ONNX model as input. This can be a model trained with `Brevitas <https://github.com/Xilinx/brevitas>`_. Brevitas is a PyTorch library for quantization-aware training and the FINN Docker image comes with several `example Brevitas networks <https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq>`_. Brevitas provides an export of a quantized network in ONNX representation in several flavors.
-Two of the Brevitas-exported ONNX variants can be ingested by FINN:
-
-* FINN-ONNX: Quantized weights exported as tensors with additional attributes to mark low-precision datatypes. Quantized activations exported as MultiThreshold nodes.
-* QONNX: All quantization is represented using Quant, BinaryQuant or Trunc nodes. QONNX must be converted into FINN-ONNX by :py:mod:`finn.transformation.qonnx.convert_qonnx_to_finn`
+FINN expects an ONNX model as input. This can be a model trained with `Brevitas <https://github.com/Xilinx/brevitas>`_. Brevitas is a PyTorch library for quantization-aware training and the FINN Docker image comes with several `example Brevitas networks <https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq>`_.
+Brevitas provides an export of a quantized network in QONNX representation, which is the format that can be ingested by FINN.
+In a QONNX graph, all quantization is represented using Quant, BinaryQuant or Trunc nodes.
+QONNX must be converted into FINN-ONNX by :py:mod:`finn.transformation.qonnx.convert_qonnx_to_finn`. FINN-ONNX is the intermediate representation (IR) FINN uses internally.
+In this IR, quantized weights are indicated through tensors with additional attributes to mark low-precision datatypes and quantized activations are expressed as MultiThreshold nodes.

To work with either type of ONNX model, it is loaded into a :ref:`modelwrapper` provided by FINN.
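As a rough illustration of what a MultiThreshold activation computes — a plain-NumPy sketch, not FINN's actual implementation; the function name and tensor shapes here are assumptions for illustration:

```python
import numpy as np

def multithreshold(x, thresholds):
    """Sketch of MultiThreshold semantics: per channel, the output is
    the number of thresholds that the input meets or exceeds.

    x          : shape (channels,), one value per channel
    thresholds : shape (channels, n_thresholds), ascending per row
    """
    # Compare each input against its channel's thresholds and count the hits.
    return np.sum(x[:, np.newaxis] >= thresholds, axis=1)

# A 2-bit unsigned activation needs 3 thresholds per channel.
thresholds = np.array([[0.5, 1.5, 2.5],
                       [1.0, 2.0, 3.0]])
x = np.array([1.7, 0.2])
print(multithreshold(x, thresholds))  # -> [2 0]
```

A quantized activation function thus reduces to a compare-and-count operation, which is the kind of logic the HLS and RTL thresholding variants in this pull request realize in hardware.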

48 changes: 28 additions & 20 deletions docs/finn/command_line.rst
@@ -20,15 +20,15 @@ two command line entry points for productivity and ease-of-use:
Jupyter notebook as a starting point, visualizing the model at intermediate
steps and adding calls to new transformations as needed.
Once you have a working flow, you can implement a command line entry for this
-by using the "advanced mode" described here.
+by using the "advanced mode".


Simple dataflow build mode
--------------------------

This mode is intended for simpler networks whose topologies resemble the
FINN end-to-end examples.
-It runs a fixed build flow spanning tidy-up, streamlining, HLS conversion
+It runs a fixed build flow spanning tidy-up, streamlining, HW conversion
and hardware synthesis.
It can be configured to produce different outputs, including stitched IP for
integration in Vivado IPI as well as bitfiles.
@@ -43,7 +43,9 @@ To use it, first create a folder with the necessary configuration and model file
3. Create a JSON file with the build configuration. It must be named ``dataflow_build_dir/dataflow_build_config.json``.
Read more about the build configuration options on :py:mod:`finn.builder.build_dataflow_config.DataflowBuildConfig`.
You can find an example .json file under ``src/finn/qnn-data/build_dataflow/dataflow_build_config.json``
-4. (Optional) create a JSON file with the folding configuration. It must be named ``dataflow_build_dir/folding_config.json``.
+4. (Optional) create a JSON file with the specialize layers configuration. It must be named ``dataflow_build_dir/specialize_layers_config.json``
+You can find an example .json file under ``src/finn/qnn-data/build_dataflow/specialize_layers_config.json``.
+5. (Optional) create a JSON file with the folding configuration. It must be named ``dataflow_build_dir/folding_config.json``.
You can find an example .json file under ``src/finn/qnn-data/build_dataflow/folding_config.json``.
Instead of specifying the folding configuration, you can use the `target_fps` option in the build configuration
to control the degree of parallelization for your network.
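The build configuration file from step 3 can be sketched as follows. The key names mirror common options of :py:mod:`finn.builder.build_dataflow_config.DataflowBuildConfig`, but treat the exact fields and values as assumptions to check against the example file shipped under ``src/finn/qnn-data/build_dataflow/``:

```python
import json
from pathlib import Path

# Field names follow DataflowBuildConfig; verify them against the shipped
# example dataflow_build_config.json before relying on this sketch.
build_cfg = {
    "output_dir": "output_tfc_w1a1_Pynq-Z1",
    "synth_clk_period_ns": 10.0,
    "board": "Pynq-Z1",
    "shell_flow_type": "vivado_zynq",
    "target_fps": 100000,
    "generate_outputs": ["estimate_reports", "stitched_ip", "bitfile"],
}

build_dir = Path("dataflow_build_dir")
build_dir.mkdir(exist_ok=True)
# The build entry point expects this exact file name inside the build folder.
cfg_file = build_dir / "dataflow_build_config.json"
cfg_file.write_text(json.dumps(build_cfg, indent=2))
```

Setting ``target_fps`` instead of supplying a ``folding_config.json`` lets the builder pick the parallelization automatically, as described above.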
@@ -59,25 +61,28 @@ as it goes through numerous steps:

.. code-block:: none

-    Building dataflow accelerator from /home/maltanar/sandbox/build_dataflow/model.onnx
+    Building dataflow accelerator from build_dataflow/model.onnx
Outputs will be generated at output_tfc_w1a1_Pynq-Z1
Build log is at output_tfc_w1a1_Pynq-Z1/build_dataflow.log
-    Running step: step_tidy_up [1/16]
-    Running step: step_streamline [2/16]
-    Running step: step_convert_to_hls [3/16]
-    Running step: step_create_dataflow_partition [4/16]
-    Running step: step_target_fps_parallelization [5/16]
-    Running step: step_apply_folding_config [6/16]
-    Running step: step_generate_estimate_reports [7/16]
-    Running step: step_hls_codegen [8/16]
-    Running step: step_hls_ipgen [9/16]
-    Running step: step_set_fifo_depths [10/16]
-    Running step: step_create_stitched_ip [11/16]
-    Running step: step_measure_rtlsim_performance [12/16]
-    Running step: step_make_pynq_driver [13/16]
-    Running step: step_out_of_context_synthesis [14/16]
-    Running step: step_synthesize_bitfile [15/16]
-    Running step: step_deployment_package [16/16]
+    Running step: step_qonnx_to_finn [1/19]
+    Running step: step_tidy_up [2/19]
+    Running step: step_streamline [3/19]
+    Running step: step_convert_to_hw [4/19]
+    Running step: step_create_dataflow_partition [5/19]
+    Running step: step_specialize_layers [6/19]
+    Running step: step_target_fps_parallelization [7/19]
+    Running step: step_apply_folding_config [8/19]
+    Running step: step_minimize_bit_width [9/19]
+    Running step: step_generate_estimate_reports [10/19]
+    Running step: step_hw_codegen [11/19]
+    Running step: step_hw_ipgen [12/19]
+    Running step: step_set_fifo_depths [13/19]
+    Running step: step_create_stitched_ip [14/19]
+    Running step: step_measure_rtlsim_performance [15/19]
+    Running step: step_out_of_context_synthesis [16/19]
+    Running step: step_synthesize_bitfile [17/19]
+    Running step: step_make_pynq_driver [18/19]
+    Running step: step_deployment_package [19/19]


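Because each step is announced on a line of the form shown above, the build log can be inspected programmatically to see how far a run has progressed. A small self-contained sketch (the sample lines are copied from the output above):

```python
import re

# Example lines as they appear in build_dataflow.log (see above).
log_text = """\
Running step: step_qonnx_to_finn [1/19]
Running step: step_tidy_up [2/19]
Running step: step_streamline [3/19]
"""

step_re = re.compile(r"Running step: (\w+) \[(\d+)/(\d+)\]")

def completed_steps(text):
    """Return (step_name, index, total) tuples for every announced step."""
    return [(m.group(1), int(m.group(2)), int(m.group(3)))
            for m in step_re.finditer(text)]

steps = completed_steps(log_text)
print(steps[-1])  # -> ('step_streamline', 3, 19)
```

In practice you would read ``build_dataflow.log`` from the output directory instead of the inline sample text.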
You can read a brief description of what each step does on
@@ -99,6 +104,7 @@ The following outputs will be generated regardless of which particular outputs a
* ``build_dataflow.log`` is the build logfile that will contain any warnings/errors
* ``time_per_step.json`` will report the time (in seconds) each build step took
* ``final_hw_config.json`` will contain the final (after parallelization, FIFO sizing etc) hardware configuration for the build
* ``template_specialize_layers_config.json`` is an example json file that can be used to set the specialize layers config
* ``intermediate_models/`` will contain the ONNX file(s) produced after each build step
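``time_per_step.json`` lends itself to quick post-processing, e.g. to find which steps dominate the build time. The sketch below assumes the file is a flat mapping from step name to seconds; the names and durations are made up for illustration:

```python
import json

# Hypothetical contents of time_per_step.json: step name -> seconds.
# The real file is produced by the build flow; this stand-in only
# illustrates the assumed flat-mapping schema.
time_per_step = {
    "step_tidy_up": 1.2,
    "step_hw_ipgen": 840.5,
    "step_synthesize_bitfile": 2310.9,
}
with open("time_per_step.json", "w") as f:
    json.dump(time_per_step, f)

with open("time_per_step.json") as f:
    times = json.load(f)

# Rank steps by duration, slowest first.
slowest = sorted(times.items(), key=lambda kv: kv[1], reverse=True)
for name, secs in slowest[:3]:
    print(f"{name}: {secs:.1f} s")
```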


@@ -206,3 +212,5 @@ You can launch the desired custom build flow using:
This will mount the specified folder into the FINN Docker container and launch
the build flow. If ``<name-of-build-flow>`` is not specified it will default to ``build``
and thus execute ``build.py``. If it is specified, it will be ``<name-of-build-flow>.py``.

If you would like to learn more about advanced builder settings, please have a look at `our tutorial about this topic <https://github.com/Xilinx/finn/blob/main/notebooks/advanced/4_advanced_builder_settings.ipynb>`_.
2 changes: 1 addition & 1 deletion docs/finn/conf.py
@@ -19,7 +19,7 @@
# -- Project information -----------------------------------------------------

project = "FINN"
copyright = "2020-2022, Xilinx, 2022-2024, AMD"
author = "Y. Umuroglu and J. Petri-Koenig"


31 changes: 12 additions & 19 deletions docs/finn/developers.rst
@@ -10,7 +10,7 @@ Power users may also find this information useful.
Prerequisites
================

Before starting to do development on FINN it is a good idea to start
with understanding the basics as a user. Going through all of the
:ref:`tutorials` is strongly recommended if you haven't already done so.
Additionally, please review the documentation available on :ref:`internals`.
@@ -61,7 +61,7 @@ further detailed below:
Docker images
===============

If you want to add new dependencies (packages, repos) to FINN it is
important to understand how we handle this in Docker.

The finn.dev image is built and launched as follows:
@@ -70,7 +70,7 @@ The finn.dev image is built and launched as follows:

2. run-docker.sh launches the build of the Docker image with `docker build` (unless ``FINN_DOCKER_PREBUILT=1``). Docker image is built from docker/Dockerfile.finn using the following steps:

* Base: Ubuntu 22.04 LTS image
* Set up apt dependencies: apt-get install a few packages (e.g. for verilator)
* Set up pip dependencies: Python packages FINN depends on are listed in requirements.txt, which is copied into the container and pip-installed. Some additional packages (such as Jupyter and Netron) are also installed.
* Install XRT deps, if needed: For Vitis builds we need to install the extra dependencies for XRT. This is only triggered if the image is built with the INSTALL_XRT_DEPS=1 argument.
@@ -84,9 +84,9 @@ The finn.dev image is built and launched as follows:

4. Entrypoint script (docker/finn_entrypoint.sh) upon launching container performs the following:

* Source Vivado settings64.sh from specified path to make vivado and vitis_hls available.
* Download board files into the finn root directory, unless they already exist or ``FINN_SKIP_BOARD_FILES=1``.
* Source Vitis settings64.sh if Vitis is mounted.

5. Depending on the arguments to run-docker.sh a different application is launched. run-docker.sh notebook launches a Jupyter server for the tutorials, whereas run-docker.sh build_custom and run-docker.sh build_dataflow trigger a dataflow build (see documentation). Running without arguments yields an interactive shell. See run-docker.sh for other options.

@@ -106,7 +106,7 @@ Linting
We use a pre-commit hook to auto-format Python code and check for issues.
See https://pre-commit.com/ for installation. Once you have pre-commit, you can install
the hooks into your local clone of the FINN repo.
It is recommended to do this **on the host** and not inside the Docker container:

::

@@ -119,7 +119,7 @@ you may have to fix it manually, then run `git commit` once again.
The checks are configured in .pre-commit-config.yaml under the repo root.

Testing
========

Tests are vital to keep FINN running. All the FINN tests can be found at https://github.com/Xilinx/finn/tree/main/tests.
These tests can be roughly grouped into three categories:
@@ -132,7 +132,7 @@ These tests can be roughly grouped into three categories:

Additionally, qonnx, brevitas and finn-hlslib also include their own test suites.
The full FINN compiler test suite
(which will take several hours to run) can be executed
by:

::
@@ -146,7 +146,7 @@ requiring Vivado or as slow-running tests:

bash ./run-docker.sh quicktest

When developing a new feature it is useful to be able to run just a single test,
or a group of tests that e.g. share the same prefix.
You can do this inside the Docker container
from the FINN root directory as follows:
@@ -178,16 +178,9 @@ FINN provides two types of documentation:
* manually written documentation, like this page
* autogenerated API docs from Sphinx

Everything is built using Sphinx.

The documentation is built online by readthedocs:

* finn.readthedocs.io contains the docs from the master branch
* finn-dev.readthedocs.io contains the docs from the dev branch
6 changes: 5 additions & 1 deletion docs/finn/end_to_end_flow.rst
@@ -2,7 +2,11 @@
End-to-End Flow
***************

The following image shows an example end-to-end flow in FINN for a PYNQ board.
Please note that you can build an IP block for your neural network **for every Xilinx-AMD FPGA**, but we only provide automatic system integration for a limited number of boards.
However, you can use Vivado to integrate an IP block generated by FINN into your own design.

The example flow in this image starts from a trained PyTorch/Brevitas network and goes all the way to a running FPGA accelerator.
As you can see in the picture, FINN is highly modular: the flow can be stopped at any point and the intermediate result used for further processing or other purposes. This enables a wide range of users to benefit from FINN, even if they do not use the whole flow.

.. image:: ../../notebooks/end2end_example/bnn-pynq/finn-design-flow-example.svg