{{Short description|Family of computer vision models}}
{{Infobox software
| name = EfficientNet
| developer = Google AI
| released = May 2019
| repo = {{URL|https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet}}
| programming language = Python
| license = Apache License 2.0
| website = [https://research.google/blog/efficientnet-improving-accuracy-and-efficiency-through-automl-and-model-scaling/ Google AI Blog]
}}
'''EfficientNet''' is a family of convolutional neural networks (CNNs) for computer vision published by researchers at Google AI in 2019.{{Citation |last1=Tan |first1=Mingxing |title=EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks |date=2020-09-11 |arxiv=1905.11946 |last2=Le |first2=Quoc V.}} Its key innovation is compound scaling, which uniformly scales the network's depth, width, and input resolution using a single compound coefficient.
EfficientNet models have been adopted in various computer vision tasks, including image classification, object detection, and segmentation.
== Compound scaling ==
EfficientNet introduces compound scaling: instead of scaling one dimension of the network at a time, such as depth (number of layers), width (number of channels), or resolution (input image size), it uses a single compound coefficient <math>\phi</math> to scale all three dimensions simultaneously. Specifically, given a baseline network, the depth, width, and resolution are scaled according to the following equations:
<math display="block">\begin{aligned}
\text{depth multiplier: } d &= \alpha^{\phi} \\
\text{width multiplier: } w &= \beta^{\phi} \\
\text{resolution multiplier: } r &= \gamma^{\phi}
\end{aligned}</math>
subject to
<math display="block">\alpha \cdot \beta^2 \cdot \gamma^2 \approx 2, \qquad \alpha \geq 1,\; \beta \geq 1,\; \gamma \geq 1.</math>
The condition <math>\alpha \cdot \beta^2 \cdot \gamma^2 \approx 2</math> ensures that increasing <math>\phi</math> by 1 increases the total FLOPs of running the network on an image by a factor of approximately 2, since FLOPs grow roughly as <math>d \cdot w^2 \cdot r^2</math>. The hyperparameters <math>\alpha</math>, <math>\beta</math>, and <math>\gamma</math> are determined by a small grid search with <math>\phi</math> fixed at 1. The original paper suggested 1.2, 1.1, and 1.15, respectively.
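The effect of the compound coefficient can be illustrated with a short calculation. The following Python sketch (illustrative only, not taken from the official repository; the function name is made up) computes the three multipliers and the resulting FLOPs factor for a few values of <math>\phi</math>, using the <math>\alpha</math>, <math>\beta</math>, <math>\gamma</math> values suggested in the paper.
<syntaxhighlight lang="python">
# Illustrative sketch of compound scaling (not the reference implementation).
# ALPHA, BETA, GAMMA are the grid-searched constants reported in the paper.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scaling(phi):
    """Return (depth, width, resolution) multipliers for compound coefficient phi."""
    depth = ALPHA ** phi        # multiplies the number of layers
    width = BETA ** phi         # multiplies the number of channels
    resolution = GAMMA ** phi   # multiplies the input image side length
    return depth, width, resolution

if __name__ == "__main__":
    for phi in range(4):
        d, w, r = compound_scaling(phi)
        # FLOPs grow roughly as d * w**2 * r**2 = (ALPHA * BETA**2 * GAMMA**2) ** phi,
        # which is approximately 2 ** phi.
        flops_factor = d * w ** 2 * r ** 2
        print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, "
              f"resolution x{r:.2f}, FLOPs x{flops_factor:.2f}")
</syntaxhighlight>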
Architecturally, they optimized the choice of modules by neural architecture search (NAS), and found that the inverted bottleneck convolution (which they called MBConv) used in MobileNet worked well.
The EfficientNet family is a stack of MBConv layers, with shapes determined by the compound scaling. The original publication comprised eight models, EfficientNet-B0 through EfficientNet-B7, with increasing model size and accuracy. EfficientNet-B0 is the baseline network, and the subsequent models are obtained by scaling it up through increasing values of <math>\phi</math>.
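As a rough illustration of the building block, the following Keras sketch assembles one inverted-bottleneck (MBConv) block with squeeze-and-excitation. The defaults here (expansion ratio 6, swish activation, squeeze ratio 0.25) are assumptions for the sketch, and details of the official implementation such as stochastic depth and custom initialization are omitted; this is not the reference code.
<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow.keras import layers

def mbconv_block(x, out_channels, expand_ratio=6, kernel_size=3, stride=1, se_ratio=0.25):
    """Simplified inverted-bottleneck (MBConv) block with squeeze-and-excitation."""
    in_channels = int(x.shape[-1])
    expanded = in_channels * expand_ratio if expand_ratio != 1 else in_channels
    h = x

    # 1x1 expansion: widen the representation before the depthwise convolution.
    if expand_ratio != 1:
        h = layers.Conv2D(expanded, 1, padding="same", use_bias=False)(h)
        h = layers.BatchNormalization()(h)
        h = layers.Activation("swish")(h)

    # Depthwise convolution operates on the expanded ("inverted") representation.
    h = layers.DepthwiseConv2D(kernel_size, strides=stride, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    h = layers.Activation("swish")(h)

    # Squeeze-and-excitation: reweight channels using pooled global context.
    se = layers.GlobalAveragePooling2D(keepdims=True)(h)
    se = layers.Conv2D(max(1, int(in_channels * se_ratio)), 1, activation="swish")(se)
    se = layers.Conv2D(expanded, 1, activation="sigmoid")(se)
    h = layers.Multiply()([h, se])

    # 1x1 projection back down to the block's output width.
    h = layers.Conv2D(out_channels, 1, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)

    # Residual connection when spatial size and channel count are unchanged.
    if stride == 1 and in_channels == out_channels:
        h = layers.Add()([h, x])
    return h

# Example: one block applied to a 112x112 feature map with 16 channels.
inputs = tf.keras.Input(shape=(112, 112, 16))
outputs = mbconv_block(inputs, out_channels=16)
model = tf.keras.Model(inputs, outputs)
</syntaxhighlight>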
== Variants ==
Through further NAS, EfficientNet has been adapted for fast inference on edge TPUs{{Cite web |date=August 6, 2019 |title=EfficientNet-EdgeTPU: Creating Accelerator-Optimized Neural Networks with AutoML |url=https://research.google/blog/efficientnet-edgetpu-creating-accelerator-optimized-neural-networks-with-automl/ |access-date=2024-10-18 |website=research.google |language=en}} and on datacenter TPU or GPU clusters.{{Citation |last1=Li |first1=Sheng |title=Searching for Fast Model Families on Datacenter Accelerators |date=2021-02-10 |arxiv=2102.05610 |last2=Tan |first2=Mingxing |last3=Pang |first3=Ruoming |last4=Li |first4=Andrew |last5=Cheng |first5=Liqun |last6=Le |first6=Quoc |last7=Jouppi |first7=Norman P.}}
EfficientNetV2 was published in June 2021. The architecture was improved by a further NAS search over a larger set of convolutional layer types.{{Citation |last1=Tan |first1=Mingxing |title=EfficientNetV2: Smaller Models and Faster Training |date=2021-06-23 |arxiv=2104.00298 |last2=Le |first2=Quoc V.}} It also introduced a progressive training method that gradually increases the image size during training and applies regularization techniques such as dropout, RandAugment,{{Cite journal |last1=Cubuk |first1=Ekin D. |last2=Zoph |first2=Barret |last3=Shlens |first3=Jonathon |last4=Le |first4=Quoc V. |date=2020 |title=Randaugment: Practical Automated Data Augmentation With a Reduced Search Space |url=https://openaccess.thecvf.com/content_CVPRW_2020/html/w40/Cubuk_Randaugment_Practical_Automated_Data_Augmentation_With_a_Reduced_Search_Space_CVPRW_2020_paper.html |pages=702–703|arxiv=1909.13719 }} and Mixup.{{Citation |last1=Zhang |first1=Hongyi |title=mixup: Beyond Empirical Risk Minimization |date=2018-04-27 |arxiv=1710.09412 |last2=Cisse |first2=Moustapha |last3=Dauphin |first3=Yann N. |last4=Lopez-Paz |first4=David}} The authors claim that this approach mitigates the accuracy drops often associated with progressive resizing.
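A minimal sketch of the progressive-training idea, assuming a simple linear schedule for image size and dropout (the stage counts, value ranges, and exact schedules in the paper differ), is shown below.
<syntaxhighlight lang="python">
# Illustrative progressive-learning schedule in the spirit of EfficientNetV2:
# image size and regularization strength both increase as training proceeds.
# The stage count and value ranges here are assumptions, not the paper's settings.

def progressive_schedule(stage, num_stages=4,
                         min_size=128, max_size=300,
                         min_dropout=0.1, max_dropout=0.3):
    """Return (image_size, dropout_rate) for a given training stage."""
    t = stage / max(1, num_stages - 1)   # interpolation factor in [0, 1]
    image_size = int(min_size + t * (max_size - min_size))
    dropout = min_dropout + t * (max_dropout - min_dropout)
    return image_size, dropout

for stage in range(4):
    size, drop = progressive_schedule(stage)
    print(f"stage {stage}: train at {size}x{size} with dropout {drop:.2f}")
</syntaxhighlight>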
== See also ==

== References ==
{{Reflist}}

== External links ==
* [https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling (Google AI Blog)]
{{Google AI}}{{Differentiable computing}}