Commit dfeb465

Sync with r3.0 (#1592)

Authored by: WafaaT, yanbing-j, jiayisunx, luis-real, ma-pineda
Signed-off-by: Felipe Leza Alvarez <109559376+flezaalv@users.noreply.github.com>
Signed-off-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>
Signed-off-by: Felipe Leza Alvarez <felipe.leza.alvarez@intel.com>
Co-authored-by: YanbingJiang <yanbing.jiang@intel.com>
Co-authored-by: jiayisunx <jiayi.sun@intel.com>
Co-authored-by: Real Novo, Luis <luis.real.novo@intel.com>
Co-authored-by: ma-pineda <110496466+ma-pineda@users.noreply.github.com>
Co-authored-by: jojivk-intel-nervana <jojimon.varghese@intel.com>
Co-authored-by: xiaofeij <xiaofei.jiang@intel.com>
Co-authored-by: WeizhuoZhang-intel <weizhuo.zhang@intel.com>
Co-authored-by: jianan-gu <jianan.gu@intel.com>
Co-authored-by: liangan1 <liangang.zhang@intel.com>
Co-authored-by: leslie-fang-intel <leslie.fang@intel.com>
Co-authored-by: zhuhaozhe <haozhe.zhu@intel.com>
Co-authored-by: lerealno <112975902+lerealno@users.noreply.github.com>
Co-authored-by: gera-aldama <111396864+gera-aldama@users.noreply.github.com>
Co-authored-by: xiangdong <40376367+zxd1997066@users.noreply.github.com>
Co-authored-by: Jitendra Patil <jitendra.patil@intel.com>
Co-authored-by: Srikanth Ramakrishna <srikanth.ramakrishna@intel.com>
Co-authored-by: mahathis <36486206+Mahathi-Vatsal@users.noreply.github.com>
Co-authored-by: Clayne Robison <clayne.b.robison@intel.com>
Co-authored-by: akhilgoe <114951738+akhilgoe@users.noreply.github.com>
Co-authored-by: Jesus Herrera Ledon <110855758+j3su5pro-intel@users.noreply.github.com>
Co-authored-by: Felipe Leza Alvarez <109559376+flezaalv@users.noreply.github.com>
Co-authored-by: aagalleg <alberto.gallegos.muro@intel.com>
Co-authored-by: Gerardo Dominguez <gerardo.dominguez.aldama@intel.com>
Co-authored-by: Leza Alvarez, Felipe <felipe.leza.alvarez@intel.com>
Co-authored-by: Miguel Pineda <miguel.pineda.juarez@intel.com>
Co-authored-by: sachinmuradi <sachin.muradi@intel.com>
Co-authored-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>
Co-authored-by: Chunyuan WU <chunyuan.wu@intel.com>
Co-authored-by: Om Thakkar <om.thakkar@intel.com>
Co-authored-by: Ashiq Imran <ashiq.imran@intel.com>
Co-authored-by: Mahmoud Abuzaina <mahmoud.abuzaina@intel.com>
Co-authored-by: Kanvi Khanna <kanvi.khanna@intel.com>
Co-authored-by: Cao E <e.cao@intel.com>
Co-authored-by: Syed Shahbaaz Ahmed <syed.shahbaaz.ahmed@intel.com>
Co-authored-by: ellie-jan <110058044+ellie-jan@users.noreply.github.com>
Co-authored-by: zofia <110436990+zufangzhu@users.noreply.github.com>
Co-authored-by: Lu Teng <teng.lu@intel.com>
Co-authored-by: Mao, Yunfei <yunfei.mao@intel.com>
Co-authored-by: yisonzhu <107918054+yisonzhu@users.noreply.github.com>
Co-authored-by: DiweiSun <105627594+DiweiSun@users.noreply.github.com>
Co-authored-by: zengxian <xiangdong.zeng@intel.com>
Co-authored-by: Mahathi Vatsal <mahathi.vatsal.salopanthula@intel.com>
Co-authored-by: Tyler Titsworth <tyler.titsworth@intel.com>
Co-authored-by: okhleif-IL <87550612+okhleif-IL@users.noreply.github.com>
Co-authored-by: Harsha Ramayanam <harsha.ramayanam@intel.com>
Co-authored-by: jianyizh <jianyi.zhang@intel.com>
Co-authored-by: nhatle <105756286+nhatleSummer22@users.noreply.github.com>
Co-authored-by: Sharvil Shah <shahsharvil96@gmail.com>
Co-authored-by: Gopi Krishna Jha <96072995+gopikrishnajha@users.noreply.github.com>
1 parent 56789bd commit dfeb465

File tree: 666 files changed, +67753 −43813 lines


.gitignore (+1)

@@ -3,6 +3,7 @@
 *.pyc
 .DS_Store
 **.log
+pretrained/
 .pytest*
 .venv*
 .coverage
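The added `pretrained/` pattern ignores any directory named `pretrained`, at any depth in the tree. A quick sanity check with `git check-ignore`, sketched in a throwaway repo (the `demo` directory and `model.bin` file are hypothetical):

```shell
# Create a throwaway repo containing only the new ignore pattern.
git init -q demo && cd demo
printf 'pretrained/\n' > .gitignore

# Any file under a pretrained/ directory should now be ignored;
# check-ignore prints the matching pattern and path when it is.
mkdir -p pretrained && touch pretrained/model.bin
git check-ignore -v pretrained/model.bin
```

Because the pattern ends with `/`, it matches only directories, so a plain file named `pretrained` would still be tracked.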

README.md (+16 −11)

@@ -1,27 +1,29 @@
-# Model Zoo for Intel® Architecture
+# Intel® AI Reference Models
 
 This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors and Intel® Data Center GPUs.
 
-Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html).
+Containers for running the workloads can be found at the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html).
 
-## Purpose of the Model Zoo
+[Intel® AI Reference Models in a Jupyter Notebook](/notebooks/README.md) is also available for the [listed workloads](/notebooks/README.md#supported-models)
 
-- Demonstrate the AI workloads and deep learning models Intel has optimized and validated to run on Intel hardware
-- Show how to efficiently execute, train, and deploy Intel-optimized models
-- Make it easy to get started running Intel-optimized models on Intel hardware in the cloud or on bare metal
+## Purpose of Intel® AI Reference Models
+
+Intel optimizes popular deep learning frameworks such as TensorFlow* and PyTorch* by contributing to the upstream projects. Additional optimizations are built into plugins/extensions such as the [Intel Extension for Pytorch*](https://github.com/intel/intel-extension-for-pytorch) and the [Intel Extension for TensorFlow*](https://github.com/intel/intel-extension-for-tensorflow). Popular neural network models running against common datasets are the target workloads that drive these optimizations.
+
+The purpose of the Intel® AI Reference Models repository (and associated containers) is to quickly replicate the complete software environment that demonstrates the best-known performance of each of these target model/dataset combinations. When executed in optimally-configured hardware environments, these software environments showcase the AI capabilities of Intel platforms.
 
 ***DISCLAIMER: These scripts are not intended for benchmarking Intel platforms.
 For any performance and/or benchmarking information on specific Intel platforms, visit [https://www.intel.ai/blog](https://www.intel.ai/blog).***
 
 Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the [Intel Global Human Rights Principles](https://www.intel.com/content/www/us/en/policy/policy-human-rights.html). Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right.
 
 ## License
-The Model Zoo for Intel® Architecture is licensed under [Apache License Version 2.0](https://github.com/IntelAI/models/blob/master/LICENSE).
+The Intel® AI Reference Models is licensed under [Apache License Version 2.0](https://github.com/intel/ai-reference-models/blob/master/LICENSE).
 
 ## Datasets
 To the extent that any public datasets are referenced by Intel or accessed using tools or code on this site those datasets are provided by the third party indicated as the data source. Intel does not create the data, or datasets, and does not warrant their accuracy or quality. By accessing the public dataset(s) you agree to the terms associated with those datasets and that your use complies with the applicable license.
 
-Please check the list of datasets used in Model Zoo for Intel® Architecture in [datasets directory](/datasets).
+Please check the list of datasets used in Intel® AI Reference Models in [datasets directory](/datasets).
 
 Intel expressly disclaims the accuracy, adequacy, or completeness of any public datasets, and is not liable for any errors, omissions, or defects in the data, or for any reliance on the data. Intel is not liable for any liability or damages relating to your use of public datasets.
 
@@ -30,7 +32,7 @@ The model documentation in the tables below have information on the
 prerequisites to run each model. The model scripts run on Linux. Certain
 models are also able to run using bare metal on Windows. For more information
 and a list of models that are supported on Windows, see the
-[documentation here](/docs/general/Windows.md#using-intel-model-zoo-on-windows-systems).
+[documentation here](/docs/general/Windows.md#using-intel-ai-reference-models-on-windows-systems).
 
 Instructions available to run on [Sapphire Rapids](https://www.intel.com/content/www/us/en/newsroom/opinion/updates-next-gen-data-center-platform-sapphire-rapids.html#gs.blowcx).
 
@@ -73,7 +75,6 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please chec
 
 | Model | Framework | Mode | Model Documentation | Benchmark/Test Dataset |
 | -------------------------------------------------------- | ---------- | ----------| ------------------- | ---------------------- |
-| [3D U-Net](https://arxiv.org/pdf/1606.06650.pdf) | TensorFlow | Inference | [FP32](/benchmarks/image_segmentation/tensorflow/3d_unet/inference/fp32/README.md) | [BRATS 2018](https://github.com/IntelAI/models/tree/master/benchmarks/image_segmentation/tensorflow/3d_unet/inference/fp32#datasets) |
 | [3D U-Net MLPerf*](https://arxiv.org/pdf/1606.06650.pdf) | TensorFlow | Inference | [FP32 BFloat16 Int8](/benchmarks/image_segmentation/tensorflow/3d_unet_mlperf/inference/README.md) | [BRATS 2019](https://www.med.upenn.edu/cbica/brats2019/data.html) |
 | [3D U-Net MLPerf*](https://arxiv.org/pdf/1606.06650.pdf) [Sapphire Rapids](https://www.intel.com/content/www/us/en/newsroom/opinion/updates-next-gen-data-center-platform-sapphire-rapids.html#gs.blowcx) | Tensorflow | Inference | [FP32 BFloat16 Int8 BFloat32](/quickstart/image_segmentation/tensorflow/3d_unet_mlperf/inference/cpu/README_SPR_Baremetal.md) | [BRATS 2019](https://www.med.upenn.edu/cbica/brats2019/data.html) |
 | [MaskRCNN](https://arxiv.org/abs/1703.06870) | TensorFlow | Inference | [FP32](/benchmarks/image_segmentation/tensorflow/maskrcnn/inference/fp32/README.md) | [MS COCO 2014](https://github.com/IntelAI/models/tree/master/benchmarks/image_segmentation/tensorflow/maskrcnn/inference/fp32#datasets-and-pretrained-model) |
 
@@ -114,7 +115,6 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please chec
 
 | Model | Framework | Mode | Model Documentation | Benchmark/Test Dataset |
 | ----------------------------------------------------- | ---------- | ----------| ------------------- | ---------------------- |
-| [Faster R-CNN](https://arxiv.org/pdf/1506.01497.pdf) | TensorFlow | Inference | [Int8](/benchmarks/object_detection/tensorflow/faster_rcnn/inference/int8/README.md) [FP32](/benchmarks/object_detection/tensorflow/faster_rcnn/inference/fp32/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) |
 | [R-FCN](https://arxiv.org/pdf/1605.06409.pdf) | TensorFlow | Inference | [Int8 FP32](/benchmarks/object_detection/tensorflow/rfcn/inference/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) |
 | [SSD-MobileNet*](https://arxiv.org/pdf/1704.04861.pdf)| TensorFlow | Inference | [Int8 FP32 BFloat16](/benchmarks/object_detection/tensorflow/ssd-mobilenet/inference/README.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) |
 | [SSD-MobileNet*](https://arxiv.org/pdf/1704.04861.pdf) [Sapphire Rapids](https://www.intel.com/content/www/us/en/newsroom/opinion/updates-next-gen-data-center-platform-sapphire-rapids.html#gs.blowcx) | TensorFlow | Inference | [Int8 FP32 BFloat16 BFloat32](/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/cpu/README_SPR_baremetal.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) |
 
@@ -145,6 +145,9 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please chec
 | [Wide & Deep Large Dataset](https://arxiv.org/pdf/1606.07792.pdf) | TensorFlow | Training | [FP32](/benchmarks/recommendation/tensorflow/wide_deep_large_ds/training/README.md) | [Large Kaggle Display Advertising Challenge dataset](https://github.com/IntelAI/models/tree/master/benchmarks/recommendation/tensorflow/wide_deep_large_ds/training/fp32#dataset) |
 | [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Inference | [FP32 Int8 BFloat16 BFloat32](/quickstart/recommendation/pytorch/dlrm/inference/cpu/README.md) | [Criteo Terabyte](/quickstart/recommendation/pytorch/dlrm/inference/cpu/README.md#datasets) |
 | [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Training | [FP32 BFloat16 BFloat32](/quickstart/recommendation/pytorch/dlrm/training/cpu/README.md) | [Criteo Terabyte](/quickstart/recommendation/pytorch/dlrm/training/cpu/README.md#datasets) |
+| [DLRM v2](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Inference | [FP32 FP16 BFloat16 BFloat32 Int8](/quickstart/recommendation/pytorch/torchrec_dlrm/inference/cpu/README.md) | [Criteo 1TB Click Logs dataset](/quickstart/recommendation/pytorch/torchrec_dlrm/inference/cpu#datasets) |
+| [DLRM v2](https://arxiv.org/pdf/1906.00091.pdf) | PyTorch | Training | [FP32 FP16 BFloat16 BFloat32](/quickstart/recommendation/pytorch/torchrec_dlrm/training/cpu/README.md) | [Random dataset](/quickstart/recommendation/pytorch/torchrec_dlrm/training/cpu#datasets) |
+| [MEMREC-DLRM](https://arxiv.org/pdf/2305.07205.pdf) | PyTorch | Inference | [FP32](/quickstart/recommendation/pytorch/memrec_dlrm/inference/cpu/README.md) | [Criteo Terabyte](/quickstart/recommendation/pytorch/memrec_dlrm/inference/cpu/README.md#datasets) |
 
 ### Text-to-Speech
 
@@ -189,6 +192,8 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please chec
 | [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Training | Max Series | [BFloat16](/quickstart/language_modeling/pytorch/bert_large/training/gpu/README.md) |
 | [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | Max Series | [FP32 FP16](/quickstart/language_modeling/tensorflow/bert_large/inference/gpu/README.md) |
 | [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Training | Max Series | [BFloat16](/quickstart/language_modeling/tensorflow/bert_large/training/gpu/README.md) |
+| [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | TensorFlow | Inference | Max Series | [FP16](/quickstart/recommendation/pytorch/torchrec_dlrm/inference/gpu/README.md) |
+| [DLRM](https://arxiv.org/pdf/1906.00091.pdf) | TensorFlow | Training | Max Series | [BFloat16](/quickstart/recommendation/pytorch/torchrec_dlrm/training/gpu/README.md) |
 
 ## How to Contribute
 If you would like to add a new benchmarking script, please use [this guide](/Contribute.md).
