Inception (deep learning architecture)
{{short description|Family of convolutional neural networks}}
{{Infobox software
| name = Inception
| logo =
| screenshot =
| screenshot size =
| caption =
| author = Google AI
| developer =
| released = 2014
| latest release version = v4
| latest release date = 2017
| repo = {{URL|https://github.com/tensorflow/models/blob/master/research/slim/README.md}}
| programming language =
| operating system =
| replaces =
| replaced_by =
| genre =
| license = Apache 2.0
| website =
}}
Inception{{Cite book |last1=Szegedy |first1=Christian |last2=Wei Liu |last3=Yangqing Jia |last4=Sermanet |first4=Pierre |last5=Reed |first5=Scott |last6=Anguelov |first6=Dragomir |last7=Erhan |first7=Dumitru |last8=Vanhoucke |first8=Vincent |last9=Rabinovich |first9=Andrew |chapter=Going deeper with convolutions |date=June 2015 |title=2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |chapter-url=https://ieeexplore.ieee.org/document/7298594 |publisher=IEEE |pages=1–9 |doi=10.1109/CVPR.2015.7298594 |isbn=978-1-4673-6964-0|arxiv=1409.4842 }} is a family of convolutional neural networks (CNNs) for computer vision, introduced by researchers at Google in 2014 as GoogLeNet (later renamed Inception v1). The series was historically important as an early CNN that separates the stem (data ingest), body (data processing), and head (prediction), an architectural design that persists in all modern CNNs.{{Cite book |last1=Zhang |first1=Aston |title=Dive into deep learning |last2=Lipton |first2=Zachary |last3=Li |first3=Mu |last4=Smola |first4=Alexander J. |date=2024 |publisher=Cambridge University Press |isbn=978-1-009-38943-3 |location=Cambridge New York Port Melbourne New Delhi Singapore |chapter=8.4. Multi-Branch Networks (GoogLeNet) |chapter-url=https://d2l.ai/chapter_convolutional-modern/googlenet.html}}
== Version history ==
=== Inception v1 ===
[[File:GoogLeNet_architecture.svg|thumb|The GoogLeNet (Inception v1) architecture]]
In 2014, a team at Google developed the GoogLeNet architecture, an instance of which won the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).[https://www.kaggle.com/models/google/inception-v1 Official repo of Inception V1 on Kaggle, published by Google.]
The name came from the LeNet of 1998, since both LeNet and GoogLeNet are CNNs. The team also called it "Inception" after the "we need to go deeper" internet meme, a phrase from the film ''Inception'' (2010). Because later versions were released, the original architecture was retroactively renamed "Inception v1".
The models and the code were released under the Apache 2.0 license on GitHub.{{cite web |title=google/inception |date=2024-08-19 |url=https://github.com/google/inception?tab=readme-ov-file |access-date=2024-08-19 |publisher=Google}}
[[File:Inception-v3_model_module.png|thumb|An Inception module used in Inception v3]]
[[File:Inception_dimension-reduced_module.svg|thumb|The dimension-reduced Inception module]]
The Inception v1 architecture is a deep CNN composed of 22 layers. Most of these layers were "Inception modules". The original paper stated that Inception modules are a "logical culmination" of Network in Network{{cite arXiv |last1=Lin |first1=Min |title=Network In Network |date=2014-03-04 |eprint=1312.4400 |last2=Chen |first2=Qiang |last3=Yan |first3=Shuicheng|class=cs.NE }} and (Arora et al., 2014).{{Cite journal |last1=Arora |first1=Sanjeev |last2=Bhaskara |first2=Aditya |last3=Ge |first3=Rong |last4=Ma |first4=Tengyu |date=2014-01-27 |title=Provable Bounds for Learning Some Deep Representations |url=https://proceedings.mlr.press/v32/arora14.html |journal=Proceedings of the 31st International Conference on Machine Learning |publisher=PMLR |pages=584–592}}
Since Inception v1 is deep, it suffered from the vanishing gradient problem. The team solved it by using two "auxiliary classifiers", which are linear-softmax classifiers inserted at 1/3-deep and 2/3-deep within the network; the loss function used during training is a weighted sum of all three, with the auxiliary losses discounted by a factor of 0.3:
<math>L = L_{\text{main}} + 0.3 \, L_{\text{aux},1} + 0.3 \, L_{\text{aux},2}</math>
The auxiliary classifiers were removed after training was complete. The vanishing gradient problem was later addressed more generally by the ResNet architecture.
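The weighted-sum loss can be illustrated with a short sketch. The following is a minimal illustration, not Google's released code: it assumes TensorFlow/Keras, and the layer sizes are placeholders rather than the published GoogLeNet configuration; only the 0.3 auxiliary weights come from the paper.
<syntaxhighlight lang="python">
# Minimal sketch of auxiliary classifiers (assumes TensorFlow/Keras;
# layer sizes are placeholders, not the published GoogLeNet configuration).
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(64, 7, strides=2, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
# First auxiliary classifier, roughly 1/3 of the way into the network.
aux1 = layers.Dense(1000, activation="softmax", name="aux1")(
    layers.GlobalAveragePooling2D()(x))
x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
# Second auxiliary classifier, roughly 2/3 of the way in.
aux2 = layers.Dense(1000, activation="softmax", name="aux2")(
    layers.GlobalAveragePooling2D()(x))
x = layers.GlobalAveragePooling2D()(x)
main = layers.Dense(1000, activation="softmax", name="main")(x)

model = Model(inputs, [main, aux1, aux2])
# Weighted sum L = L_main + 0.3 L_aux1 + 0.3 L_aux2 during training.
model.compile(optimizer="sgd", loss="categorical_crossentropy",
              loss_weights=[1.0, 0.3, 0.3])
</syntaxhighlight>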
The architecture consists of three parts stacked on top of one another:
- The stem (data ingestion): The first few convolutional layers perform data preprocessing and downscale the input image.
- The body (data processing): A long sequence of Inception modules performs the bulk of the data processing.
- The head (prediction): The final fully connected layer and softmax produce a probability distribution over the image classes.
This structure is used in most modern CNN architectures.
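The decomposition can be sketched directly as a composition of three sub-networks. The sketch below assumes TensorFlow/Keras; module counts and channel widths are illustrative placeholders, not Inception v1's actual configuration.
<syntaxhighlight lang="python">
# Sketch of the stem/body/head decomposition (assumes TensorFlow/Keras;
# channel widths are illustrative, not Inception v1's configuration).
from tensorflow import keras
from tensorflow.keras import layers

stem = keras.Sequential([  # data ingestion: downscale the input image
    keras.Input(shape=(224, 224, 3)),
    layers.Conv2D(64, 7, strides=2, padding="same", activation="relu"),
    layers.MaxPooling2D(3, strides=2, padding="same"),
])
body = keras.Sequential([  # data processing: Inception modules in the real network
    layers.Conv2D(192, 3, padding="same", activation="relu"),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
])
head = keras.Sequential([  # prediction: pooled features -> class probabilities
    layers.GlobalAveragePooling2D(),
    layers.Dense(1000, activation="softmax"),
])
model = keras.Sequential([stem, body, head])
</syntaxhighlight>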
=== Inception v2 ===
Inception v2 was released in 2015, in a paper that is more famous for proposing batch normalization.{{Cite journal |last1=Ioffe |first1=Sergey |last2=Szegedy |first2=Christian |date=2015 |title=Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift |url=https://proceedings.mlr.press/v37/ioffe15.html |journal=Proceedings of the 32nd International Conference on Machine Learning |publisher=PMLR |pages=448–456 |arxiv=1502.03167}}[https://www.kaggle.com/models/google/inception-v2 Official repo of Inception V2 on Kaggle, published by Google.] It had 13.6 million parameters.
It improves on Inception v1 by adding batch normalization and removing dropout and local response normalization, which the authors found became unnecessary once batch normalization was used.
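The resulting layer pattern can be sketched as follows; this assumes TensorFlow/Keras and is an illustration of the pattern, not the released model code.
<syntaxhighlight lang="python">
# Sketch of the v2-style layer pattern (assumes TensorFlow/Keras):
# convolution followed by batch normalization, with no dropout and no
# local response normalization.
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal([8, 28, 28, 64])  # a batch of feature maps
y = layers.Conv2D(96, 3, padding="same", use_bias=False)(x)
y = layers.BatchNormalization()(y)     # normalizes each channel over the batch
y = layers.ReLU()(y)
</syntaxhighlight>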
=== Inception v3 ===
{{Anchor|Inception v3}}Inception v3 was released in 2016.{{Cite journal |last1=Szegedy |first1=Christian |last2=Vanhoucke |first2=Vincent |last3=Ioffe |first3=Sergey |last4=Shlens |first4=Jon |last5=Wojna |first5=Zbigniew |date=2016 |title=Rethinking the Inception Architecture for Computer Vision |journal=Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |url=https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.html |pages=2818–2826}}[https://www.kaggle.com/models/google/inception-v3/ Official repo of Inception V3 on Kaggle, published by Google.] It improves on Inception v2 by using factorized convolutions.
As an example, a single 5×5 convolution can be factored into two 3×3 convolutions stacked on top of each other. Both have a receptive field of size 5×5. The 5×5 convolution kernel has 25 parameters, compared to just 18 in the factorized version; thus the 5×5 convolution is strictly more expressive than the factorized version. However, this extra power is not necessarily needed. Empirically, the research team found that factorized convolutions helped.
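The parameter counts can be verified with a short sketch (assuming TensorFlow/Keras, with a single input and output channel so the arithmetic stays visible):
<syntaxhighlight lang="python">
# Sketch of the parameter-count comparison (assumes TensorFlow/Keras).
# With one input and one output channel, a 5x5 kernel has 25 weights,
# while two stacked 3x3 kernels have 2 * 9 = 18.
from tensorflow import keras
from tensorflow.keras import layers

five_by_five = keras.Sequential([
    keras.Input(shape=(32, 32, 1)),
    layers.Conv2D(1, 5, use_bias=False),
])
factorized = keras.Sequential([
    keras.Input(shape=(32, 32, 1)),
    layers.Conv2D(1, 3, use_bias=False),
    layers.Conv2D(1, 3, use_bias=False),
])
print(five_by_five.count_params())  # 25
print(factorized.count_params())    # 18
</syntaxhighlight>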
It also uses a form of dimension reduction by concatenating the output from a convolutional layer and a pooling layer. As an example, a tensor of size 35×35×320 can be downscaled by a convolution with stride 2 to 17×17×320, and by maxpooling with stride 2 to 17×17×320. These are then concatenated to a tensor of size 17×17×640.
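A minimal sketch of this parallel downscaling, assuming TensorFlow/Keras and using the tensor sizes from the example above:
<syntaxhighlight lang="python">
# Sketch of parallel dimension reduction (assumes TensorFlow/Keras):
# a stride-2 convolution and a stride-2 max-pool are applied to the same
# tensor and concatenated along the channel axis.
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal([1, 35, 35, 320])
conv_path = layers.Conv2D(320, 3, strides=2)(x)   # -> (1, 17, 17, 320)
pool_path = layers.MaxPooling2D(3, strides=2)(x)  # -> (1, 17, 17, 320)
y = layers.Concatenate(axis=-1)([conv_path, pool_path])
print(y.shape)                                    # (1, 17, 17, 640)
</syntaxhighlight>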
Other than this, it also removed the lowest auxiliary classifier during training; the team found that the auxiliary head worked as a form of regularization rather than a cure for vanishing gradients.
They also proposed label-smoothing regularization in classification. For an image with label <math>c</math>, instead of making the model predict the one-hot distribution <math>\delta_c</math>, they made the model predict the smoothed distribution <math>(1-\epsilon)\delta_c + \frac{\epsilon}{K}</math>, where <math>K</math> is the total number of classes.
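Label smoothing is available as a built-in option in modern frameworks. The sketch below assumes TensorFlow/Keras, whose loss applies exactly the smoothed target described above; <math>\epsilon = 0.1</math> is the value used in the Inception v3 paper.
<syntaxhighlight lang="python">
# Sketch of label smoothing (assumes TensorFlow/Keras; epsilon = 0.1).
# Keras replaces the one-hot target with
# (1 - epsilon) * one_hot + epsilon / K inside the loss.
import tensorflow as tf

loss_fn = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
y_true = tf.one_hot([3], depth=10)  # hard label c = 3 among K = 10 classes
logits = tf.random.normal([1, 10])
y_pred = tf.nn.softmax(logits)      # the model's predicted distribution
print(float(loss_fn(y_true, y_pred)))
</syntaxhighlight>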
=== Inception v4 ===
In 2017, the team released Inception v4, Inception ResNet v1, and Inception ResNet v2.{{Cite journal |last1=Szegedy |first1=Christian |last2=Ioffe |first2=Sergey |last3=Vanhoucke |first3=Vincent |last4=Alemi |first4=Alexander |date=2017-02-12 |title=Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning |url=https://ojs.aaai.org/index.php/aaai/article/view/11231 |journal=Proceedings of the AAAI Conference on Artificial Intelligence |volume=31 |issue=1 |arxiv=1602.07261 |doi=10.1609/aaai.v31i1.11231 |issn=2374-3468}}
Inception v4 is an incremental update with even more factorized convolutions, and other modifications that were empirically found to improve benchmarks.
Inception ResNet v1 and v2 are both modifications of Inception v4, where residual connections are added to each Inception module, inspired by the ResNet architecture.{{Cite conference |last1=He |first1=Kaiming |last2=Zhang |first2=Xiangyu |last3=Ren |first3=Shaoqing |last4=Sun |first4=Jian |date=10 Dec 2015 |title=Deep Residual Learning for Image Recognition |arxiv=1512.03385}}
=== Xception ===
Xception ("Extreme Inception") was published in 2017.{{Cite journal |last=Chollet |first=Francois |date=2017 |title=Xception: Deep Learning With Depthwise Separable Convolutions |journal=Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |url=https://openaccess.thecvf.com/content_cvpr_2017/html/Chollet_Xception_Deep_Learning_CVPR_2017_paper.html |pages=1251–1258}} It is a linear stack of depthwise separable convolution layers with residual connections. The design was proposed on the hypothesis that in a CNN, the cross-channels correlations and spatial correlations in the feature maps can be entirely decoupled.
Training each network took 3 days on 60 K80 GPUs, or approximately 0.5 petaflop/s-days.{{Cite web |date=2022-06-09 |title=AI and compute |url=https://openai.com/index/ai-and-compute/ |access-date=2025-04-28 |website=openai.com |language=en-US}}
== References ==
{{Reflist}}
== External links ==
- A list of all Inception models released by Google: {{Cite web |title=models/research/slim/README.md at master · tensorflow/models |url=https://github.com/tensorflow/models/blob/master/research/slim/README.md#pre-trained-models |access-date=2024-10-19 |website=GitHub |language=en}}
{{Google AI}}
{{Differentiable computing}}