Audification

{{Short description|Technique for representing data as sound}}

Audification is an auditory display technique for representing a sequence of data values as sound. It has been defined as a "direct translation of a data waveform to the audible domain."{{Cite book|title=The Oxford Handbook of Computer Music|last=Dean|first=Roger|publisher=Oxford University Press|year=2009|isbn=9780195331615|location=New York|pages=321}} Audification interprets a data sequence, usually a time series, as an audio waveform in which the input data are mapped to sound pressure levels. Various signal processing techniques are used to assess data features. The technique allows the listener to hear periodic components as frequencies. Audification typically requires large data sets with periodic components.{{Citation|author1=Hermann, T.|title=Sound and meaning in auditory data display|url=http://www.dei.unipd.it/~musica/IM/P6_Hermann_04.pdf|journal=Proceedings of the IEEE|volume=92|number=4|pages=730–741|year=2004|publisher=IEEE|doi=10.1109/jproc.2004.825904|name-list-style=amp|author2=Ritter, H.|s2cid=12354787}}

Audification is most commonly applied to obtain the most direct and simple representation of the data as sound. In most cases it is used to render a data series audibly so that its structure can be perceived by ear and further information derived from it.
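The core mapping can be illustrated with a short sketch (an illustrative example only; the sample rate, output file name, and helper function are assumptions rather than details from the cited sources): the data series is centred, normalised, and written out directly as the sample values of an audio waveform, so that periodic structure in the data is heard as pitch.

<syntaxhighlight lang="python">
import numpy as np
from scipy.io import wavfile

def audify(series, sample_rate=8000, out_path="audification.wav"):
    """Map a numeric data series directly to sound pressure values (sketch)."""
    x = np.asarray(series, dtype=np.float64)
    x = x - x.mean()                      # remove the inaudible DC offset
    peak = np.max(np.abs(x))
    if peak > 0:
        x = x / peak                      # normalise to the range [-1, 1]
    wavfile.write(out_path, sample_rate, (x * 32767).astype(np.int16))

# A component repeating every 160 samples is heard as a 50 Hz tone
# when the series is played back at 8000 samples per second.
audify(np.sin(2 * np.pi * np.arange(80000) / 160))
</syntaxhighlight>

Because the data values are played back at an audio sample rate, the choice of playback rate determines how periodicities in the data map onto audible frequencies.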

History

The idea of audification was introduced in 1992 by Greg Kramer, initially as a sonification technique; this origin is one reason audification is still widely considered a type of sonification.

The goal of audification is to allow the listener to audibly experience the results of scientific measurements or simulations.

A 2005 study by Sandra Pauletto and Andy Hunt at the University of York suggested that listeners were able to detect attributes such as noise, repetitive elements, regular oscillations, discontinuities, and signal power in audifications of time-series data to a degree comparable with visual inspection of spectrograms.{{Citation |title=A comparison of audio & visual analysis of complex time-series data sets |journal=Proceedings of the 11th International Conference on Auditory Display (ICAD2005) |editor=Brazil, Eoin |year=2005 |pages=175–181 |url=http://www.icad.org/Proceedings/2005/PaulettoHunt2005.pdf |author1=Pauletto, S. |author2=Hunt, A. |name-list-style=amp}}

Applications

Applications include audification of seismic data{{Citation |title=Using audification in planetary seismology |journal=Proceedings of the 7th International Conference on Auditory Display (ICAD2001) |editor1=Hiipakka, J. |editor2=Zacharov, N. |editor3=Takala, T. |year=2001 |pages=227–230 |url=http://www.icad.org/Proceedings/2001/Dombois2001.pdf |author=Dombois, Florian}} and of human neurophysiological signals.{{Citation |title=Easy listening to sleep recordings: tools and examples |author=Olivan, J. |author2=Kemp, B. |author3=Roessen, M. |name-list-style=amp |journal=Sleep Medicine |volume=5 |number=6 |pages=601–603 |year=2004 |url=http://www.hsr.nl/bobkemp/papers/2004Easy%20listeningToSleep.pdf |doi=10.1016/j.sleep.2004.07.010 |pmid=15511709 |url-status=dead |archive-url=https://web.archive.org/web/20120425141834/http://www.hsr.nl/bobkemp/papers/2004Easy%20listeningToSleep.pdf |archive-date=2012-04-25}}

One example is the esophageal stethoscope, which amplifies naturally occurring sounds but cannot convey inherently noiseless variables such as the results of gas analysis.{{Cite journal|title=Advanced Patient Monitoring Displays: Tools for Continuous Informing|last1=Sanderson|first1=Penelope|last2=Watson|first2=Marcus|date=2005|journal=Anesthesia and Analgesia|last3=Russell|first3=W. John|s2cid=18818792|volume=101|issue=1|pages=161-8, table of contents|doi=10.1213/01.ANE.0000154080.67496.AE|pmid=15976225|doi-access=free}}

= Medicine =

Converting ultrasound to audible sound is a form of audification that can provide a means of echolocation.{{cite journal |last1=Davies |first1=T. Claire |last2=Pinder |first2=Shane D. |last3=Dodd |first3=George |last4=Burns |first4=Catherine M. |title=Where did that sound come from? Comparing the ability to localise using audification and audition |journal=Disability and Rehabilitation: Assistive Technology |date=March 2012 |volume=7 |issue=2 |pages=130–138 |doi=10.3109/17483107.2011.602172 |pmid=21923566 |url=https://pubmed.ncbi.nlm.nih.gov/21923566/}}{{Cite book|title=Audification of Ultrasound for Human Echolocation|last=Davies|first=Clare|publisher=Clare Davies|year=2008}} Other uses in the medical field include the stethoscope{{Cite web|url=https://www.maximintegrated.com/en/app-notes/index.mvp/id/4694|title=Introduction to Digital Stethoscopes and Electrical Component Selection Criteria - Tutorial - Maxim|website=www.maximintegrated.com|access-date=2019-05-07}} and the audification of EEG signals.{{Cite book|last1=Temko|first1=A.|last2=Marnane|first2=W.|last3=Boylan|first3=G.|last4=O'Toole|first4=J. M.|last5=Lightbody|first5=G.|title=2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society |chapter=Neonatal EEG audification for seizure detection |date=August 2014|volume=2014|pages=4451–4454|doi=10.1109/EMBC.2014.6944612|pmid=25570980|isbn=978-1-4244-7929-0|s2cid=18784120}}
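One common way of bringing an ultrasonic signal into the audible range is heterodyning: multiplying the recording by a local oscillator so that energy near a chosen centre frequency is shifted down to audible frequencies, then low-pass filtering. The sketch below illustrates this approach; it is not necessarily the method used in the cited studies, and the centre frequency and bandwidth values are assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import butter, lfilter

def heterodyne_to_audible(ultrasound, fs, centre_hz=40_000, audio_bw=4_000):
    """Shift a narrow ultrasonic band around centre_hz down to audible baseband."""
    t = np.arange(len(ultrasound)) / fs
    mixed = ultrasound * np.cos(2 * np.pi * centre_hz * t)  # shift the band towards 0 Hz
    b, a = butter(4, audio_bw / (fs / 2))                   # low-pass keeps the audible part
    audible = lfilter(b, a, mixed)
    peak = np.max(np.abs(audible))
    return audible / peak if peak > 0 else audible
</syntaxhighlight>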

= Music =

The development of electronic music can also be considered part of the history of audification, because every electronic instrument involves an electrical process that is made audible through a loudspeaker.

= Seismology =

Audification is of interest for research into auditory seismology and has been applied to earthquake prediction.{{Cite web|url=https://sos.allshookup.org/|title=Sounds of Seismic - Earth System Soundscape|website=sos.allshookup.org|access-date=2019-05-07}} Applications include using seismic data to differentiate bomb blasts from earthquakes.

The technique presents the seismic waves of earthquakes as sound alongside a visual representation, so that both the eyes and the ears contribute to understanding the data.
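A common way to audify seismic records is simple time compression: a trace recorded at a low sample rate is written out with a much higher playback rate, so that frequencies of fractions of a hertz are raised into the audible range. The sketch below illustrates the idea; the sample rates and the resulting speed-up factor are illustrative assumptions rather than values from the cited work.

<syntaxhighlight lang="python">
import numpy as np
from scipy.io import wavfile

def audify_seismogram(trace, recorded_rate=100.0, playback_rate=44100,
                      out_path="quake.wav"):
    """Time-compress a seismic trace by writing it at an audio playback rate."""
    x = np.asarray(trace, dtype=np.float64)
    x = x - x.mean()
    peak = np.max(np.abs(x))
    if peak > 0:
        x = x / peak
    wavfile.write(out_path, playback_rate, (x * 32767).astype(np.int16))
    # With these defaults the record is sped up 441 times, so a 0.1 Hz
    # seismic oscillation is heard at roughly 44 Hz.
    return playback_rate / recorded_rate
</syntaxhighlight>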

= NASA =

NASA has used audification to represent radio and plasma wave{{Cite journal|last1=Scarf|first1=F. L.|last2=Gurnett|first2=D. A.|last3=Kurth|first3=W. S.|last4=Coroniti|first4=F. V.|last5=Kennel|first5=C. F.|last6=Poynter|first6=R. L.|date=1987|title=Plasma wave measurements in the magnetosphere of Uranus|journal=Journal of Geophysical Research: Space Physics|volume=92|issue=A13|pages=15217–15224|doi=10.1029/JA092iA13p15217|bibcode=1987JGR....9215217S|issn=2156-2202}} measurements.{{Cite web|url=http://legacy.spa.aalto.fi/icad2001/proceedings/papers/dombois.pdf|title=Using Audification in planetary seismology|last=Dombois|first=Florian|website=Legacy}}

Sonification

Both sonification and audification are representational techniques in which data sets, or selected features of them, are mapped into audio signals.{{Cite arXiv|title=Direct Segmented Sonification of Characteristic Features of the Data Domain|last1=Vickers|first1=Paul|last2=Holdrich|first2=Robert|date=December 2017 |eprint = 1711.11368|class = cs.HC}} Audification is a kind of sonification, a term which encompasses all techniques for representing data in non-speech audio.{{citation needed|date=May 2017}} The relationship can be seen in the fact that sonifications in which data values directly define the audio signal are called audifications.{{Cite book|title=The Aesthetics of Scientific Data Representation: More than Pretty Pictures|last1=Philipsen|first1=Lotte|last2=Kjærgaard|first2=Rikke|publisher=Routledge|year=2018|isbn=9781138679375|location=New York}}

References