This repository has been archived by the owner on Apr 18, 2023. It is now read-only.
Test Environments:
• DNNL and Discrete IE-MKLDNN tested on e306860
• Integrated IE-MKLDNN tested on 6f1b378
• Native OpenVINO IE-MKLDNN tested with openvino toolkit 2020.3
• CPU: Intel i7-1065G7 @ 1.30 GHz (1.50 GHz)
• OS: Windows
There seems to be a larger gap versus native when executing small models, e.g. MobileNet (70% of native) and SqueezeNet (50% of native). I suspect the reason is the ratio of execution I/O to compute time: small models have relatively short compute time, so the fixed per-inference I/O cost dominates. It looks like there is room to improve the execution I/O implementation.
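The I/O-vs-compute intuition above can be sketched with a toy model. The overhead and compute numbers below are assumptions chosen only to illustrate the trend, not measurements from these runs:

```python
# Toy model (hypothetical numbers, not measured data): with a fixed
# per-inference execution I/O overhead, the achievable fraction of native
# throughput shrinks as the model's compute time shrinks.

def fraction_of_native(compute_ms: float, overhead_ms: float) -> float:
    """Relative throughput, assuming native pays no extra per-call I/O cost."""
    return compute_ms / (compute_ms + overhead_ms)

# Assumed latencies: same 2 ms I/O overhead for both models.
large = fraction_of_native(compute_ms=50.0, overhead_ms=2.0)  # large model
small = fraction_of_native(compute_ms=3.0, overhead_ms=2.0)   # small model

print(f"large model: {large:.0%} of native")  # ~96% of native
print(f"small model: {small:.0%} of native")  # 60% of native
```

This matches the pattern in the numbers above: the shorter the compute time, the more the fixed I/O cost eats into relative performance, which is why optimizing the execution I/O path matters most for small models.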