DL Boost
{{short description|Marketing name by Intel}}
{{Third-party|date=March 2021}}
Intel's Deep Learning Boost (DL Boost) is a marketing name for instruction set architecture (ISA) features on x86-64 processors designed to improve the performance of deep learning tasks such as training and inference.<ref>"Intel Deep Learning Boost" Product Overview, [https://www.intel.com/content/dam/www/public/us/en/documents/product-overviews/dl-boost-product-overview.pdf], p. 3.</ref>
== Features ==
DL Boost consists of two sets of features:
* AVX-512 VNNI, 4VNNIW, or AVX-VNNI: fused multiply-accumulate instructions on 8-bit and 16-bit integers, mainly for convolutional neural networks (illustrated in the first sketch below).
* AVX-512 BF16: lower-precision bfloat16 floating-point numbers for generally faster computation. Operations provided include conversion to/from float32 and dot product (illustrated in the second sketch below).
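A minimal sketch of the VNNI-style multiply-accumulate, assuming a compiler and CPU with AVX-512F and AVX-512 VNNI support (for example, GCC or Clang with <code>-mavx512vnni</code>); the data values are illustrative only. The <code>VPDPBUSD</code> instruction multiplies unsigned 8-bit by signed 8-bit elements and accumulates groups of four products into 32-bit integers, replacing the longer multiply/widen/add sequence needed without VNNI.
<syntaxhighlight lang="c">
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t a[64];
    int8_t  b[64];
    for (int i = 0; i < 64; i++) { a[i] = 1; b[i] = 2; }

    __m512i va  = _mm512_loadu_si512(a);
    __m512i vb  = _mm512_loadu_si512(b);
    __m512i acc = _mm512_setzero_si512();

    /* Each 32-bit lane accumulates four u8*s8 products in one instruction. */
    acc = _mm512_dpbusd_epi32(acc, va, vb);

    /* Horizontal sum of the sixteen 32-bit accumulators. */
    printf("dot product = %d\n", _mm512_reduce_add_epi32(acc));
    return 0;
}
</syntaxhighlight>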
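A minimal sketch of the BF16 operations, assuming a compiler and CPU with AVX-512 BF16 support (for example, GCC or Clang with <code>-mavx512bf16</code>); the data values are illustrative only. Two float32 vectors are converted to bfloat16 with <code>VCVTNE2PS2BF16</code>, then <code>VDPBF16PS</code> multiplies pairs of bf16 elements and accumulates into float32.
<syntaxhighlight lang="c">
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float x[32], y[32];
    for (int i = 0; i < 32; i++) { x[i] = 1.5f; y[i] = 2.0f; }

    /* Convert 2x16 float32 values into one vector of 32 bfloat16 values. */
    __m512bh xb = _mm512_cvtne2ps_pbh(_mm512_loadu_ps(x + 16), _mm512_loadu_ps(x));
    __m512bh yb = _mm512_cvtne2ps_pbh(_mm512_loadu_ps(y + 16), _mm512_loadu_ps(y));

    /* Each float32 lane accumulates two bf16*bf16 products. */
    __m512 acc = _mm512_dpbf16_ps(_mm512_setzero_ps(), xb, yb);

    printf("dot product = %f\n", _mm512_reduce_add_ps(acc));
    return 0;
}
</syntaxhighlight>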
DL Boost features were first introduced with the Cascade Lake microarchitecture.
A TensorFlow-based benchmark run on Google Cloud Platform's Compute Engine showed improved performance and reduced cost compared with earlier CPUs and with GPUs, especially at small batch sizes.<ref>Samantha Gurriero, "Machine Learning Optimisation: What is the Best Hardware on GCP?", Datatonic, [https://datatonic.com/insights/machine-learning-optimisation-gcp-intel/].</ref>
== Notes ==
{{Reflist}}
== External links ==
* [https://ai.intel.com/intel-deep-learning-boost Deep Learning Boost] at Intel
* Andres Rodrigues et al., "Lower Numerical Precision Deep Learning Inference and Training", Intel white paper, [https://www.intel.com/content/www/us/en/artificial-intelligence/solutions/lower-numerical-precision-deep-learning-inference-and-training.html]
* [https://indico.cern.ch/event/595059/contributions/2499304/attachments/1430242/2196659/Intel_and_ML_Talk_HansPabst.pdf Intel and ML] (2017), from Intel's Developer Relations Division