
{{Short description|Data analysis technique}}

{{Machine learning}}

{{Tone|date=February 2024}}

'''Data augmentation''' is a statistical technique that allows maximum likelihood estimation from incomplete data.{{cite journal |last1=Dempster |first1=A.P. |last2=Laird |first2=N.M. |last3=Rubin |first3=D.B. |title=Maximum Likelihood from Incomplete Data Via the EM Algorithm |journal=Journal of the Royal Statistical Society. Series B (Methodological) |year=1977 |volume=39 |issue=1 |pages=1–22 |doi=10.1111/j.2517-6161.1977.tb01600.x |url=https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/j.2517-6161.1977.tb01600.x |access-date=2024-08-28 |archive-date=2022-10-10 |archive-url=https://web.archive.org/web/20221010051829/https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/j.2517-6161.1977.tb01600.x |url-status=live }}{{cite journal |last=Rubin |first=Donald |title=Comment: The Calculation of Posterior Distributions by Data Augmentation |journal=Journal of the American Statistical Association |year=1987 |volume=82 |issue=398 |doi=10.2307/2289460 |jstor=2289460 |url=https://www.jstor.org/stable/2289460 |access-date=2024-08-28 |archive-date=2024-08-07 |archive-url=https://web.archive.org/web/20240807015222/https://www.jstor.org/stable/2289460 |url-status=live }} It has important applications in Bayesian analysis,{{cite book |last=Jackman |first=Simon |title=Bayesian Analysis for the Social Sciences |year=2009 |publisher=John Wiley & Sons |isbn=978-0-470-01154-6 |url=https://www.wiley.com/en-au/Bayesian+Analysis+for+the+Social+Sciences-p-9780470011546 |pages=236}} and the technique is widely used in machine learning to reduce overfitting,{{cite journal |last1=Shorten |first1=Connor |last2=Khoshgoftaar |first2=Taghi M. |title=A survey on Image Data Augmentation for Deep Learning |journal=Journal of Big Data |year=2019 |volume=6 |issue=1 |pages=60 |doi=10.1186/s40537-019-0197-0 |doi-access=free }} which is achieved by training models on several slightly modified copies of the existing data.

== Synthetic oversampling techniques for traditional machine learning ==

{{main|Oversampling and undersampling in data analysis#Oversampling techniques for classification problems}}
Synthetic Minority Over-sampling Technique (SMOTE) is a method used to address imbalanced datasets in machine learning. In such datasets, the number of samples in different classes varies significantly, which biases model performance; for example, a medical diagnosis dataset might contain 90 samples from healthy individuals but only 10 samples from individuals with a particular disease, and traditional algorithms may then struggle to classify the minority class accurately. SMOTE rebalances the dataset by generating synthetic samples for the minority class: it randomly selects a minority-class sample and one of its nearest minority-class neighbors, then creates a new sample at a random point on the line segment joining the two. Repeating this process increases the representation of the minority class and can improve model performance.{{Cite journal |last1=Wang |first1=Shujuan |last2=Dai |first2=Yuntao |last3=Shen |first3=Jihong |last4=Xuan |first4=Jingxue |date=2021-12-15 |title=Research on expansion and classification of imbalanced data based on SMOTE algorithm |journal=Scientific Reports |language=en |volume=11 |issue=1 |pages=24039 |doi=10.1038/s41598-021-03430-5 |pmid=34912009 |issn=2045-2322|pmc=8674253 |bibcode=2021NatSR..1124039W }}
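
The interpolation at the heart of SMOTE can be sketched in a few lines of Python. The function below is a minimal illustration rather than a reference implementation; the function name and parameter values are illustrative, and production code would typically use a library implementation such as the one in the imbalanced-learn package.

<syntaxhighlight lang="python">
import numpy as np

def smote_sample(X_min, k=5, n_new=90, rng=None):
    """Generate synthetic minority-class samples by interpolating between
    a randomly chosen minority sample and one of its k nearest minority
    neighbors (the core idea of SMOTE)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    dist = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    # indices of the k nearest neighbors of each sample (excluding itself)
    neighbors = np.argsort(dist, axis=1)[:, 1:k + 1]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                                 # a minority sample
        j = neighbors[i, rng.integers(neighbors.shape[1])]  # one of its neighbors
        lam = rng.random()                                  # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)
</syntaxhighlight>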

== Data augmentation for image classification ==

When convolutional neural networks grew larger in the mid-1990s, there was a lack of data to train them on, especially since part of each dataset had to be held out for testing. It was proposed to perturb existing data with affine transformations to create new examples with the same labels,{{cite book|author=Yann Lecun |display-authors=etal |title=Learning algorithms for classification: A comparison on handwritten digit recognition |url=https://nyuscholars.nyu.edu/en/publications/learning-algorithms-for-classification-a-comparison-on-handwritte |website=nyuscholars.nyu.edu |access-date=14 May 2023 |format=Conference paper |year=1995|pages=261–276 |publisher=World Scientific }} an approach complemented by so-called elastic distortions in 2003,{{cite book | s2cid=4659176 | doi=10.1109/ICDAR.2003.1227801 | chapter=Best practices for convolutional neural networks applied to visual document analysis | title=Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings. | year=2003 | last1=Simard | first1=P.Y. | last2=Steinkraus | first2=D. | last3=Platt | first3=J.C. | volume=1 | pages=958–963 | isbn=0-7695-1960-1 }} and the technique was in widespread use by the 2010s.{{cite arXiv |title=Improving neural networks by preventing co-adaptation of feature detectors |eprint=1207.0580 |last1=Hinton |first1=Geoffrey E. |last2=Srivastava |first2=Nitish |last3=Krizhevsky |first3=Alex |last4=Sutskever |first4=Ilya |last5=Salakhutdinov |first5=Ruslan R. |class=cs.NE |year=2012}} Data augmentation can enhance CNN performance; it has also been used in CNN-based side-channel profiling attacks, where augmenting the training traces helps the attack succeed against jitter-based countermeasures.{{Cite book |last1=Cagli |first1=Eleonora |last2=Dumas |first2=Cécile |last3=Prouff |first3=Emmanuel |title=Cryptographic Hardware and Embedded Systems – CHES 2017 |chapter=Convolutional Neural Networks with Data Augmentation Against Jitter-Based Countermeasures: Profiling Attacks Without Pre-processing |date=2017 |editor-last=Fischer |editor-first=Wieland |editor2-last=Homma |editor2-first=Naofumi |chapter-url=https://zenodo.org/record/1404232 |series=Lecture Notes in Computer Science |volume=10529 |language=en |location=Cham |publisher=Springer International Publishing |pages=45–68 |doi=10.1007/978-3-319-66787-4_3 |isbn=978-3-319-66787-4|s2cid=54088207 }}

Data augmentation has become fundamental in image classification, enriching training dataset diversity to improve model generalization and performance. The evolution of this practice has introduced a broad spectrum of techniques, including geometric transformations, color space adjustments, and noise injection.{{Cite journal |last1=Shorten |first1=Connor |last2=Khoshgoftaar |first2=Taghi M. |date=2019-07-06 |title=A survey on Image Data Augmentation for Deep Learning |journal=Journal of Big Data |volume=6 |issue=1 |pages=60 |doi=10.1186/s40537-019-0197-0 |doi-access=free |issn=2196-1115}}

=== Geometric transformations ===

Geometric transformations alter the spatial properties of images to simulate different perspectives, orientations, and scales. Common techniques include the following (a code sketch follows the list):

* Rotation: rotating images by a specified angle to help models recognize objects at various orientations.
* Flipping: reflecting images horizontally or vertically to introduce variability in orientation.
* Cropping: removing sections of the image to focus on particular features or simulate closer views.
* Translation: shifting images in different directions to teach models positional invariance.
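
Deep learning frameworks expose these transformations directly. The sketch below uses the torchvision library as one possible implementation; the parameter values are illustrative choices, not recommended settings.

<syntaxhighlight lang="python">
import torchvision.transforms as T

# Apply random geometric perturbations to each training image;
# the magnitudes below are arbitrary examples.
geometric_augment = T.Compose([
    T.RandomRotation(degrees=15),                     # rotation
    T.RandomHorizontalFlip(p=0.5),                    # flipping
    T.RandomResizedCrop(size=224),                    # cropping (then rescaling)
    T.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # translation
])

# augmented = geometric_augment(image)  # image: a PIL image or tensor
</syntaxhighlight>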

=== Color space transformations ===

Color space transformations modify the color properties of images, addressing variations in lighting, color saturation, and contrast. Techniques include the following (a code sketch follows the list):

* Brightness adjustment: varying the image's brightness to simulate different lighting conditions.
* Contrast adjustment: changing the contrast to help models recognize objects under various levels of clarity.
* Saturation adjustment: altering saturation to prepare models for images with diverse color intensities.
* Color jittering: randomly adjusting brightness, contrast, saturation, and hue to introduce color variability.
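
These adjustments are likewise available as standard transforms. The sketch below again uses torchvision; the jitter ranges are illustrative values rather than canonical settings.

<syntaxhighlight lang="python">
import torchvision.transforms as T

# Randomly perturb brightness, contrast, saturation and hue.
color_augment = T.ColorJitter(
    brightness=0.4,  # brightness adjustment
    contrast=0.4,    # contrast adjustment
    saturation=0.4,  # saturation adjustment
    hue=0.1,         # hue jitter (must be at most 0.5)
)

# augmented = color_augment(image)
</syntaxhighlight>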

=== Noise injection ===

Injecting noise into images simulates real-world imperfections, teaching models to ignore irrelevant variations. Techniques include the following (a code sketch follows the list):

* Gaussian noise: adding Gaussian noise mimics sensor noise or graininess.
* Salt-and-pepper noise: setting random pixels to black or white simulates sensor dust or dead pixels.
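
Both kinds of noise are straightforward to add by hand. The sketch below assumes a greyscale image stored as a floating-point array scaled to [0, 1]; the noise levels are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def add_gaussian_noise(img, sigma=0.05, rng=None):
    """Additive Gaussian noise, mimicking sensor noise or graininess."""
    rng = np.random.default_rng(rng)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.02, rng=None):
    """Set a random fraction of pixels to 0 (pepper) or 1 (salt)."""
    rng = np.random.default_rng(rng)
    out = img.copy()
    corrupted = rng.random(img.shape) < amount   # pixels to overwrite
    out[corrupted] = rng.choice([0.0, 1.0], size=corrupted.sum())
    return out
</syntaxhighlight>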

== Data augmentation for signal processing ==

Residual bootstrap or block bootstrap methods can be used to generate additional series for time series augmentation.
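
A minimal sketch of the moving-block bootstrap is shown below; the block length is an illustrative tuning parameter, and a residual bootstrap would instead resample the residuals of a fitted model.

<syntaxhighlight lang="python">
import numpy as np

def block_bootstrap(series, block_len=20, rng=None):
    """Moving-block bootstrap: concatenate randomly chosen blocks of
    consecutive observations (sampled with replacement) into a surrogate
    series of the same length, preserving short-range dependence."""
    rng = np.random.default_rng(rng)
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]
</syntaxhighlight>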

=== Biological signals ===

Synthetic data augmentation is of paramount importance for machine learning classification, particularly for biological data, which tend to be high-dimensional and scarce. Applications in robotic control and augmentation for disabled and able-bodied subjects still rely mainly on subject-specific analyses. Data scarcity is notable in signal processing problems such as Parkinson's disease electromyography (EMG), where recordings are difficult to source. Zanini et al. showed that a generative adversarial network (in particular, a DCGAN) can perform style transfer to generate synthetic electromyographic signals that correspond to those exhibited by people with Parkinson's disease.{{cite journal|last1=Anicet Zanini|first1=Rafael|last2=Luna Colombini|first2=Esther|title=Parkinson's Disease EMG Data Augmentation and Simulation with DCGANs and Style Transfer|journal=Sensors|volume=20|issue=9|year=2020|pages=2605|issn=1424-8220|doi=10.3390/s20092605|pmid=32375217|pmc=7248755|bibcode=2020Senso..20.2605A |doi-access=free}}

Data augmentation is also important in electroencephalography (EEG). Wang et al. explored the use of deep convolutional neural networks for EEG-based emotion recognition and found that recognition performance improved when data augmentation was used.{{cite book|last1=Wang|first1=Fang|last2=Zhong|first2=Sheng-hua|last3=Peng|first3=Jianfeng|last4=Jiang|first4=Jianmin|last5=Liu|first5=Yan|title=MultiMedia Modeling |chapter=Data Augmentation for EEG-Based Emotion Recognition with Deep Convolutional Neural Networks|series=Lecture Notes in Computer Science|volume=10705|year=2018|pages=82–93|issn=0302-9743|doi=10.1007/978-3-319-73600-6_8|isbn=978-3-319-73599-3}}

A common approach is to generate synthetic signals by re-arranging components of real data. Lotte{{cite journal|last1=Lotte|first1=Fabien|title=Signal Processing Approaches to Minimize or Suppress Calibration Time in Oscillatory Activity-Based Brain–Computer Interfaces|journal=Proceedings of the IEEE|volume=103|issue=6|year=2015|pages=871–890|issn=0018-9219|doi=10.1109/JPROC.2015.2404941|s2cid=22472204|url=https://hal.inria.fr/hal-01159171/file/lotte_sigProcCalibReduction-final.pdf|access-date=2022-11-05|archive-date=2023-04-03|archive-url=https://web.archive.org/web/20230403215441/https://hal.inria.fr/hal-01159171/file/lotte_sigProcCalibReduction-final.pdf|url-status=live}} proposed a method of "artificial trial generation based on analogy", in which three data examples <math>x_1</math>, <math>x_2</math>, <math>x_3</math> are used to form an artificial example <math>x_\text{synthetic}</math> that is to <math>x_3</math> what <math>x_2</math> is to <math>x_1</math>: a transformation is estimated that makes <math>x_1</math> more similar to <math>x_2</math>, and the same transformation is then applied to <math>x_3</math> to generate <math>x_\text{synthetic}</math>. This approach was shown to improve the performance of a linear discriminant analysis classifier on three different datasets.
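
The analogy idea can be illustrated with a deliberately simplified sketch in which the transformation from <math>x_1</math> to <math>x_2</math> is approximated by an additive shift; this is only one possible instantiation of the idea and not the exact procedure of the cited paper.

<syntaxhighlight lang="python">
import numpy as np

def analogy_trial(x1, x2, x3):
    """Apply to x3 (an array) the transformation that maps x1 to x2,
    here approximated as a simple additive shift in signal space."""
    return x3 + (x2 - x1)

# x_synthetic = analogy_trial(x1, x2, x3)
</syntaxhighlight>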

Research shows that substantial gains can come from relatively simple techniques. For example, Freer{{cite journal|last1=Freer|first1=Daniel|last2=Yang|first2=Guang-Zhong|title=Data augmentation for self-paced motor imagery classification with C-LSTM|journal=Journal of Neural Engineering|volume=17|issue=1|year=2020|pages=016041|issn=1741-2552|doi=10.1088/1741-2552/ab57c0|pmid=31726440|bibcode=2020JNEng..17a6041F|hdl=10044/1/75376|s2cid=208034533 |hdl-access=free}} observed that introducing noise into gathered data to form additional data points improved the learning ability of several models that otherwise performed relatively poorly. Tsinganos et al.{{cite journal|last1=Tsinganos|first1=Panagiotis|last2=Cornelis|first2=Bruno|last3=Cornelis|first3=Jan|last4=Jansen|first4=Bart|last5=Skodras|first5=Athanassios|title=Data Augmentation of Surface Electromyography for Hand Gesture Recognition|journal=Sensors|volume=20|issue=17|year=2020|pages=4892|issn=1424-8220|doi=10.3390/s20174892|pmid=32872508|pmc=7506981|bibcode=2020Senso..20.4892T |doi-access=free}} studied magnitude warping, wavelet decomposition, and synthetic surface EMG models (generative approaches) for hand gesture recognition, finding classification performance increases of up to +16% when augmented data were introduced during training. More recently, data augmentation studies have focused on deep learning, and in particular on the ability of generative models to create artificial data that is then introduced during training of the classification model. In 2018, Luo et al.{{cite book|last1=Luo|first1=Yun|last2=Lu|first2=Bao-Liang|title=2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)|chapter=EEG Data Augmentation for Emotion Recognition Using a Conditional Wasserstein GAN|year=2018|volume=2018|pages=2535–2538|doi=10.1109/EMBC.2018.8512865|pmid=30440924|isbn=978-1-5386-3646-6|s2cid=53105445}} observed that useful EEG signal data could be generated by conditional Wasserstein generative adversarial networks (GANs) and then added to the training set in a classical train-test learning framework. The authors found that classification performance improved when such techniques were used.

=== Mechanical signals ===

Data augmentation has also been applied to the prediction of mechanical signals, with applications in areas such as renewable energy dispatch, 5G communication, and robotic control engineering.{{cite journal|last1=Yang|first1=Yang|title=Wind speed forecasting with correlation network pruning and augmentation: A two-phase deep learning method|journal=Renewable Energy|volume=198|issue=1|year=2022|pages=267–282|issn=0960-1481|doi=10.1016/j.renene.2022.07.125|arxiv=2306.01986 |bibcode=2022REne..198..267Y |s2cid=251511199 }} In 2022, Yang et al. integrated constraints, optimization, and control into a deep network framework based on data augmentation and data pruning with spatio-temporal data correlation, improving the interpretability, safety, and controllability of deep learning in real industrial projects through explicit mathematical programming equations and analytical solutions.


== References ==

{{reflist}}

{{data}}

{{Artificial intelligence navbox}}

Category:Machine learning