Computational audiology

Computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. Computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment.{{cite journal|last1=Winslow|first1=Raimond L.|last2=Trayanova|first2=Natalia|last3=Geman|first3=Donald|last4=Miller|first4=Michael I.|date=31 October 2012|title=Computational Medicine: Translating Models to Clinical Care|journal=Science Translational Medicine|volume=4|issue=158|pages=158rv11|doi=10.1126/scitranslmed.3003528|pmid=23115356|pmc=3618897}}

Overview

In contrast to traditional methods in audiology and hearing science research, computational audiology emphasizes predictive modeling and large-scale analytics ("big data") rather than inferential statistics and small-cohort hypothesis testing. The aim of computational audiology is to translate advances in hearing science, data science, information technology, and machine learning to clinical audiological care. Research to understand hearing function and auditory processing in humans as well as relevant animal species represents translatable work that supports this aim. Research and development to implement more effective diagnostics and treatments represent translational work that supports this aim.{{Cite journal|last=Gannon|first=Frank|date=November 2014|title=The steps from translatable to translational research|journal=EMBO Reports|volume=15|issue=11|pages=1107–1108|doi=10.15252/embr.201439587|issn=1469-221X|pmc=4253482|pmid=25296643}}

For people with hearing difficulties, tinnitus, hyperacusis, or balance problems, these advances might lead to more precise diagnoses, novel therapies, and advanced rehabilitation options including smart prostheses and e-Health/mHealth apps. For care providers, it can provide actionable knowledge and tools for automating part of the clinical pathway.{{cite journal |last1=Wasmann |first1=Jan-Willem A. |last2=Lanting |first2=Cris P. |last3=Huinck |first3=Wendy J. |last4=Mylanus |first4=Emmanuel A. M. |last5=van der Laak |first5=Jeroen W. M. |last6=Govaerts |first6=Paul J. |last7=Swanepoel |first7=De Wet |author-link7=De Wet Swanepoel |last8=Moore |first8=David R. |last9=Barbour |first9=Dennis L. |date=November–December 2021 |title=Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age |journal=Ear and Hearing |volume=42 |issue=6 |pages=1499–1507 |doi=10.1097/AUD.0000000000001041 |pmc=8417156 |pmid=33675587}}

The field is interdisciplinary and includes foundations in audiology, auditory neuroscience, computer science, data science, machine learning, psychology, signal processing, natural language processing, otology and vestibulology.

Applications

In computational audiology, models and algorithms are used to understand the principles that govern the auditory system, to screen for hearing loss, to diagnose hearing disorders, to provide rehabilitation, and to generate simulations for patient education, among other applications.

= Computational models of hearing, speech and auditory perception =

For decades, phenomenological & biophysical (computational) models have been developed to simulate characteristics of the human auditory system. Examples include models of the mechanical properties of the basilar membrane,{{Citation|last=De Boer|first=Egbert|title=Mechanics of the Cochlea: Modeling Efforts|date=1996|url=http://link.springer.com/10.1007/978-1-4612-0757-3_5|work=The Cochlea|series=Springer Handbook of Auditory Research|volume=8|pages=258–317|editor-last=Dallos|editor-first=Peter|place=New York, NY|publisher=Springer New York|doi=10.1007/978-1-4612-0757-3_5|isbn=978-1-4612-6891-8|access-date=2022-01-18|editor2-last=Popper|editor2-first=Arthur N.|editor3-last=Fay|editor3-first=Richard R.|url-access=subscription}} the electrically stimulated cochlea,{{Cite journal|last1=Frijns|first1=J. H. M.|last2=de Snoo|first2=S. L.|last3=Schoonhoven|first3=R.|date=1995-07-01|title=Potential distributions and neural excitation patterns in a rotationally symmetric model of the electrically stimulated cochlea|url=https://dx.doi.org/10.1016/0378-5955%2895%2900090-Q|journal=Hearing Research|language=en|volume=87|issue=1|pages=170–186|doi=10.1016/0378-5955(95)00090-Q|pmid=8567435|s2cid=4762235|issn=0378-5955|url-access=subscription}}{{Cite journal|last1=Rubinstein|first1=Jay T.|last2=Hong|first2=Robert|date=September 2003|title=Signal Coding in Cochlear Implants: Exploiting Stochastic Effects of Electrical Stimulation|url=https://doi.org/10.1177/00034894031120S904|journal=Annals of Otology, Rhinology & Laryngology|volume=112|issue=9_suppl|pages=14–19|doi=10.1177/00034894031120s904|pmid=14533839|s2cid=32157848|issn=0003-4894|url-access=subscription}} middle ear mechanics,{{Cite journal|last1=Sun|first1=Q.|last2=Gan|first2=R. Z.|last3=Chang|first3=K.-H.|last4=Dormer|first4=K. 
J.|date=2002-10-01|title=Computer-integrated finite element modeling of human middle ear|url=https://doi.org/10.1007/s10237-002-0014-z|journal=Biomechanics and Modeling in Mechanobiology|language=en|volume=1|issue=2|pages=109–122|doi=10.1007/s10237-002-0014-z|pmid=14595544|s2cid=8781577|issn=1617-7959|url-access=subscription}} bone conduction,{{Cite journal|last=Stenfelt|first=Stefan|date=2016-10-01|title=Model predictions for bone conduction perception in the human|url=https://www.sciencedirect.com/science/article/pii/S0378595515300769|journal=Hearing Research|series=MEMRO 2015 – Basic Science meets Clinical Otology|language=en|volume=340|pages=135–143|doi=10.1016/j.heares.2015.10.014|pmid=26657096|s2cid=4862153|issn=0378-5955}} and the central auditory pathway.{{Cite journal|date=2010|editor-last=Meddis|editor-first=Ray|editor2-last=Lopez-Poveda|editor2-first=Enrique A.|editor3-last=Fay|editor3-first=Richard R.|editor4-last=Popper|editor4-first=Arthur N.|title=Computational Models of the Auditory System|url=https://link.springer.com/book/10.1007/978-1-4419-5934-8|journal=Springer Handbook of Auditory Research|volume=35|language=en-gb|doi=10.1007/978-1-4419-5934-8|isbn=978-1-4419-1370-8|issn=0947-2657|url-access=subscription}} Saremi et al. 
(2016) compared 7 contemporary models including parallel filterbanks, cascaded filterbanks, transmission lines and biophysical models.{{Cite journal|last1=Saremi|first1=Amin|last2=Beutelmann|first2=Rainer|last3=Dietz|first3=Mathias|last4=Ashida|first4=Go|last5=Kretzberg|first5=Jutta|last6=Verhulst|first6=Sarah|date=September 2016|title=A comparative study of seven human cochlear filter models|url=http://dx.doi.org/10.1121/1.4960486|journal=The Journal of the Acoustical Society of America|volume=140|issue=3|pages=1618–1634|doi=10.1121/1.4960486|pmid=27914400|bibcode=2016ASAJ..140.1618S|issn=0001-4966|url-access=subscription}} More recently, convolutional neural networks (CNNs) have been constructed and trained that can replicate human auditory function{{Cite journal|last1=Kell|first1=Alexander J. E.|last2=Yamins|first2=Daniel L. K.|last3=Shook|first3=Erica N.|last4=Norman-Haignere|first4=Sam V.|last5=McDermott|first5=Josh H.|date=2018-05-02|title=A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy|journal=Neuron|language=English|volume=98|issue=3|pages=630–644.e16|doi=10.1016/j.neuron.2018.03.044|issn=0896-6273|pmid=29681533|s2cid=5084719|doi-access=free}} or complex cochlear mechanics with high accuracy.{{Cite journal|last1=Baby|first1=Deepak|last2=Van Den Broucke|first2=Arthur|last3=Verhulst|first3=Sarah|date=February 2021|title=A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications|journal=Nature Machine Intelligence|language=en|volume=3|issue=2|pages=134–143|doi=10.1038/s42256-020-00286-8|issn=2522-5839|pmc=7116797|pmid=33629031}} Although inspired by the interconnectivity of biological neural networks, the architecture of CNNs is distinct from the organization of the natural auditory system.
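The parallel-filterbank idea underlying several of these models can be illustrated very simply: an input sound is passed through a bank of bandpass filters whose centre frequencies follow the cochlear place-frequency map, and each channel's output represents the response of one cochlear place. The sketch below is a deliberately minimal Python illustration of this (a two-pole resonator per channel and the Greenwood map); it does not reproduce any of the published models compared by Saremi et al.

```python
import math

def greenwood_cf(x):
    # Greenwood place-frequency map for the human cochlea; x runs from 0 (apex) to 1 (base)
    return 165.4 * (10 ** (2.1 * x) - 1)

def resonator(signal, cf, fs, r=0.98):
    # Two-pole resonator centred at cf: a crude stand-in for one cochlear filter channel
    w0 = 2 * math.pi * cf / fs
    b1, b2 = 2 * r * math.cos(w0), -r * r
    y1 = y2 = 0.0
    out = []
    for s in signal:
        y = s + b1 * y1 + b2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

fs = 16000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(2048)]  # 1 kHz probe tone
cfs = [greenwood_cf(0.05 + 0.038 * i) for i in range(20)]            # 20 channels, all below Nyquist
energies = [sum(y * y for y in resonator(tone, cf, fs)) for cf in cfs]
best_cf = cfs[energies.index(max(energies))]
print(round(best_cf))  # the channel tuned nearest 1 kHz responds most strongly
```

The channel with the greatest output energy corresponds to the cochlear place most strongly excited by the tone, mirroring the tonotopic analysis performed by the basilar membrane.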

= e-Health / mHealth (connected hearing healthcare, wireless- and internet-based services) =

Online pure-tone audiometry (threshold or screening) tests, electrophysiological measures such as distortion-product otoacoustic emissions (DPOAEs), and speech-in-noise screening tests are becoming increasingly available as tools to promote awareness, enable accurate early identification of hearing loss across ages, monitor the effects of ototoxicity and noise, guide ear and hearing care decisions, and support clinicians.{{Cite journal|last1=Paglialonga|first1=Alessia|last2=Cleveland Nielsen|first2=Annette|last3=Ingo|first3=Elisabeth|last4=Barr|first4=Caitlin|last5=Laplante-Lévesque|first5=Ariane|date=2018-07-31|title=eHealth and the hearing aid adult patient journey: a state-of-the-art review|journal=BioMedical Engineering OnLine|volume=17|issue=1|pages=101|doi=10.1186/s12938-018-0531-3|issn=1475-925X|pmc=6069792|pmid=30064497 |doi-access=free }}{{Cite journal|last1=Frisby|first1=Caitlin|last2=Eikelboom|first2=Robert|last3=Mahomed-Asmail|first3=Faheema|last4=Kuper|first4=Hannah|last5=Swanepoel|first5=De Wet|date=2021-12-30|title=MHealth Applications for Hearing Loss: A Scoping Review|url=https://www.liebertpub.com/doi/10.1089/tmj.2021.0460|journal=Telemedicine and e-Health|volume=28 |issue=8 |pages=1090–1099 |doi=10.1089/tmj.2021.0460|pmid=34967683|s2cid=245567480|issn=1530-5627|hdl=2263/84486|hdl-access=free}} Smartphone-based tests have been proposed to detect middle ear fluid using acoustic reflectometry and machine learning.{{Cite journal |last1=Chan |first1=Justin |last2=Raju |first2=Sharat |last3=Nandakumar |first3=Rajalakshmi |last4=Bly |first4=Randall |last5=Gollakota |first5=Shyamnath |date=2019-05-15 |title=Detecting middle ear fluid using smartphones |journal=Science Translational Medicine |language=en |volume=11 |issue=492 |pages=eaav1102 |doi=10.1126/scitranslmed.aav1102 |pmid=31092691 |s2cid=155102882 |issn=1946-6234|doi-access=free }} Smartphone attachments have also been designed to perform tympanometry for acoustic evaluation of the eardrum and middle ear.{{Cite journal |last1=Chan |first1=Justin |last2=Najafi |first2=Ali |last3=Baker |first3=Mallory |last4=Kinsman |first4=Julie |last5=Mancl |first5=Lisa R. |last6=Norton |first6=Susan |last7=Bly |first7=Randall |last8=Gollakota |first8=Shyamnath |date=2022-06-16 |title=Performing tympanometry using smartphones |journal=Communications Medicine |language=en |volume=2 |issue=1 |page=57 |doi=10.1038/s43856-022-00120-9 |pmid=35721828 |pmc=9203539 |s2cid=249811632 |issn=2730-664X}}{{Cite web |last=Community |first=Nature Portfolio Bioengineering |date=2022-06-15 |title=Computing for Audiology: Smartphone tympanometer for diagnosing middle ear disorders |url=http://bioengineeringcommunity.nature.com/posts/computing-for-audiology-smartphone-tympanometer-for-diagnosing-middle-ear-disorders |access-date=2022-06-21 |website=Nature Portfolio Bioengineering Community |language=en}} Low-cost earphones attached to smartphones have also been prototyped to detect the faint otoacoustic emissions from the cochlea and perform neonatal hearing screening.{{Cite journal |last=Goodman |first=Shawn S. |date=2022-10-31 |title=Affordable hearing screening |url=https://www.nature.com/articles/s41551-022-00959-2 |journal=Nature Biomedical Engineering |volume=6 |issue=11 |language=en |pages=1199–1200 |doi=10.1038/s41551-022-00959-2 |pmid=36316370 |s2cid=253246312 |issn=2157-846X|url-access=subscription }}{{Cite journal |last1=Chan |first1=Justin |last2=Ali |first2=Nada |last3=Najafi |first3=Ali |last4=Meehan |first4=Anna |last5=Mancl |first5=Lisa R. |last6=Gallagher |first6=Emily |last7=Bly |first7=Randall |last8=Gollakota |first8=Shyamnath |date=2022-10-31 |title=An off-the-shelf otoacoustic-emission probe for hearing screening via a smartphone |journal=Nature Biomedical Engineering |volume=6 |issue=11 |language=en |pages=1203–1213 |doi=10.1038/s41551-022-00947-6 |pmid=36316369 |pmc=9717525 |issn=2157-846X}}
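DPOAE-based screening relies on a basic property of the healthy cochlea: when two primary tones at frequencies f1 and f2 are presented, cochlear nonlinearity generates a distortion product at 2f1 − f2. The following minimal simulation illustrates that principle with a toy cubic nonlinearity; it is not the signal processing used by the cited smartphone probes, and the coefficients are purely illustrative.

```python
import cmath
import math

fs = 8000
n = 2000                      # chosen so that all tones fall on exact DFT bins
f1, f2 = 1000, 1200           # primary tones (f2/f1 = 1.2, a common clinical ratio)
stim = [0.5 * math.sin(2 * math.pi * f1 * i / fs) +
        0.5 * math.sin(2 * math.pi * f2 * i / fs) for i in range(n)]

# Toy cochlear nonlinearity: a cubic term generates distortion products
resp = [s - 0.3 * s ** 3 for s in stim]

def tone_magnitude(sig, freq, fs):
    # Magnitude of a single DFT bin, evaluated directly
    acc = sum(x * cmath.exp(-2j * math.pi * freq * i / fs) for i, x in enumerate(sig))
    return abs(acc) / len(sig)

dp = 2 * f1 - f2              # 800 Hz, the classic DPOAE frequency
mag_dp = tone_magnitude(resp, dp, fs)
mag_ref = tone_magnitude(resp, 900, fs)   # nearby frequency with no expected component
print(mag_dp > 10 * mag_ref)  # True: a clear distortion product emerges at 2*f1 - f2
```

A screening device asks essentially the same question: is the measured energy at 2f1 − f2 reliably above the noise floor at neighbouring frequencies?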

= Big data and AI in audiology and hearing healthcare =

Collecting large numbers of audiograms (e.g. from databases from the National Institute for Occupational Safety and Health or NIOSH{{Cite journal|last1=Masterson|first1=Elizabeth A.|last2=Tak|first2=SangWoo|last3=Themann|first3=Christa L.|last4=Wall|first4=David K.|last5=Groenewold|first5=Matthew R.|last6=Deddens|first6=James A.|last7=Calvert|first7=Geoffrey M.|date=June 2013|title=Prevalence of hearing loss in the United States by industry|url=https://onlinelibrary.wiley.com/doi/10.1002/ajim.22082|journal=American Journal of Industrial Medicine|language=en|volume=56|issue=6|pages=670–681|doi=10.1002/ajim.22082|pmid=22767358|url-access=subscription}} or National Health and Nutrition Examination Survey or NHANES) provides researchers with opportunities to find patterns of hearing status in the population{{Cite journal|last1=Charih|first1=François|last2=Bromwich|first2=Matthew|last3=Mark|first3=Amy E.|last4=Lefrançois|first4=Renée|last5=Green|first5=James R.|date=December 2020|title=Data-Driven Audiogram Classification for Mobile Audiometry|journal=Scientific Reports|language=en|volume=10|issue=1|pages=3962|doi=10.1038/s41598-020-60898-3|issn=2045-2322|pmc=7054524|pmid=32127604|bibcode=2020NatSR..10.3962C}}{{Cite journal|last1=Cox|first1=Marco|last2=de Vries|first2=Bert|date=2021|title=Bayesian Pure-Tone Audiometry Through Active Learning Under Informed Priors|journal=Frontiers in Digital Health|volume=3|page=723348|doi=10.3389/fdgth.2021.723348|issn=2673-253X|pmc=8521968|pmid=34713188|doi-access=free}} or to train AI systems that can classify audiograms.{{Cite journal|last1=Crowson|first1=Matthew G.|last2=Lee|first2=Jong Wook|last3=Hamour|first3=Amr|last4=Mahmood|first4=Rafid|last5=Babier|first5=Aaron|last6=Lin|first6=Vincent|last7=Tucci|first7=Debara L.|last8=Chan|first8=Timothy C. 
Y.|date=2020-08-07|title=AutoAudio: Deep Learning for Automatic Audiogram Interpretation|url=https://doi.org/10.1007/s10916-020-01627-1|journal=Journal of Medical Systems|language=en|volume=44|issue=9|pages=163|doi=10.1007/s10916-020-01627-1|pmid=32770269|s2cid=221035573|issn=1573-689X}} Machine learning can also be used to model relationships between multiple factors, for example to predict depression from self-reported hearing loss{{Cite journal|last1=Crowson|first1=Matthew G.|last2=Franck|first2=Kevin H.|last3=Rosella|first3=Laura C.|last4=Chan|first4=Timothy C. Y.|date=July–August 2021|title=Predicting Depression From Hearing Loss Using Machine Learning|url=https://journals.lww.com/ear-hearing/Abstract/2021/07000/Predicting_Depression_From_Hearing_Loss_Using.21.aspx|journal=Ear and Hearing|language=en-US|volume=42|issue=4|pages=982–989|doi=10.1097/AUD.0000000000000993|pmid=33577219|s2cid=231901726|issn=1538-4667}} or to relate genetic profiles to self-reported hearing loss.{{Cite journal | journal=bioRxiv | last=Wells | first=Helena Rr. | last2=Freidin | first2=Maxim B. | last3=Zainul Abidin | first3=Fatin N. | last4=Payton | first4=Antony | last5=Dawes | first5=Piers | last6=Munro | first6=Kevin J. | last7=Morton | first7=Cynthia C. | last8=Moore | first8=David R. | last9=Dawson | first9=Sally J | last10=Williams | first10=Frances Mk.
|date=2019-02-14|title=Genome-wide association study identifies 44 independent genomic loci for self-reported adult hearing difficulty in the UK Biobank cohort|url=http://dx.doi.org/10.1101/549071|access-date=2022-01-20|doi=10.1101/549071|s2cid=92606662}} Hearing aids and wearables provide the option to monitor the soundscape of the user or log the usage patterns which can be used to automatically recommend settings that are expected to benefit the user.{{Cite journal|last1=Christensen|first1=Jeppe H.|last2=Saunders|first2=Gabrielle H.|last3=Porsbo|first3=Michael|last4=Pontoppidan|first4=Niels H.|title=The everyday acoustic environment and its association with human heart rate: evidence from real-world data logging with hearing aids and wearables|journal=Royal Society Open Science|year=2021|volume=8|issue=2|pages=201345|doi=10.1098/rsos.201345|pmc=8074664|pmid=33972852|bibcode=2021RSOS....801345C}}
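The audiogram-classification task performed by data-driven systems can be illustrated with a deliberately simple rule-based sketch that labels the configuration of a pure-tone audiogram. The frequencies, thresholds, and the 15 dB criterion below are illustrative; the cited systems instead learn such categories from large labelled datasets.

```python
# Toy audiogram classifier: labels the configuration of a pure-tone audiogram
# (thresholds in dB HL at standard octave frequencies) as flat, sloping, or rising.
# A hand-written stand-in for the data-driven classifiers cited above.

FREQS = [250, 500, 1000, 2000, 4000, 8000]  # Hz

def classify_audiogram(thresholds_db_hl):
    low = sum(thresholds_db_hl[:2]) / 2     # mean threshold at 250-500 Hz
    high = sum(thresholds_db_hl[-2:]) / 2   # mean threshold at 4-8 kHz
    if high - low > 15:
        return "sloping"    # worse at high frequencies (e.g. presbycusis, noise damage)
    if low - high > 15:
        return "rising"     # worse at low frequencies
    return "flat"

print(classify_audiogram([10, 10, 15, 30, 55, 60]))  # prints "sloping"
print(classify_audiogram([15, 15, 20, 20, 15, 20]))  # prints "flat"
```

Systems such as the cited deep-learning classifiers replace the hand-written rule with a model trained on thousands of labelled audiograms, which lets them handle noisy, atypical, and mixed configurations.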

= Computational approaches to improving hearing devices and auditory implants =

Computational methods to improve rehabilitation with auditory implants include algorithms for enhancing music perception,{{Cite journal|last1=Tahmasebi|first1=Sina|last2=Gajȩcki|first2=Tom|last3=Nogueira|first3=Waldo|date=2020|title=Design and Evaluation of a Real-Time Audio Source Separation Algorithm to Remix Music for Cochlear Implant Users|journal=Frontiers in Neuroscience|volume=14|page=434 |doi=10.3389/fnins.2020.00434|pmid=32508564|pmc=7248365 |issn=1662-453X|doi-access=free}} models of the electrode-neuron interface,{{Cite journal|last1=Garcia|first1=Charlotte|last2=Goehring|first2=Tobias|last3=Cosentino|first3=Stefano|last4=Turner|first4=Richard E.|last5=Deeks|first5=John M.|last6=Brochier|first6=Tim|last7=Rughooputh|first7=Taren|last8=Bance|first8=Manohar|last9=Carlyon|first9=Robert P.|date=2021-10-01|title=The Panoramic ECAP Method: Estimating Patient-Specific Patterns of Current Spread and Neural Health in Cochlear Implant Users|url=https://doi.org/10.1007/s10162-021-00795-2|journal=Journal of the Association for Research in Otolaryngology|language=en|volume=22|issue=5|pages=567–589|doi=10.1007/s10162-021-00795-2|issn=1438-7573|pmc=8476702|pmid=33891218}} and an AI-based cochlear implant fitting assistant.{{Cite journal|last1=Battmer|first1=Rolf-Dieter|last2=Borel|first2=Stephanie|last3=Brendel|first3=Martina|last4=Buchner|first4=Andreas|last5=Cooper|first5=Huw|last6=Fielden|first6=Claire|last7=Gazibegovic|first7=Dzemal|last8=Goetze|first8=Romy|last9=Govaerts|first9=Paul|last10=Kelleher|first10=Katherine|last11=Lenartz|first11=Thomas|date=2015-03-01|title=Assessment of 'Fitting to Outcomes Expert' FOX™ with new cochlear implant users in a multi-centre study|url=https://doi.org/10.1179/1754762814Y.0000000093|journal=Cochlear Implants International|volume=16|issue=2|pages=100–109|doi=10.1179/1754762814Y.0000000093|issn=1467-0100|pmid=25118042|s2cid=4674778|url-access=subscription}}
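Electrode-neuron interface models often represent how electrical current from each electrode contact spreads along the cochlea before reaching the neurons. A minimal one-dimensional sketch of this idea is shown below; the exponential decay and its space constant are purely illustrative, not a fitted patient parameter from any cited model.

```python
import math

# Minimal 1-D current-spread sketch for a cochlear implant electrode array:
# excitation at each neural position is the sum of exponentially decaying
# contributions from every active electrode.

def excitation_profile(positions_mm, electrodes, space_constant_mm=3.0):
    # electrodes: list of (position_mm, current) pairs
    return [sum(i * math.exp(-abs(x - p) / space_constant_mm) for p, i in electrodes)
            for x in positions_mm]

positions = [x * 0.5 for x in range(0, 61)]  # 0-30 mm along the cochlea, 0.5 mm steps
profile = excitation_profile(positions, [(10.0, 1.0), (14.0, 0.5)])
peak_pos = positions[profile.index(max(profile))]
print(peak_pos)  # prints 10.0: excitation peaks at the strongest electrode
```

Patient-specific versions of such models, like the cited Panoramic ECAP method, estimate the spread and the health of the neural population from measured evoked responses rather than assuming fixed constants.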

= Data-based investigations into hearing loss and tinnitus =

Online surveys processed with ML-based classification have been used to diagnose somatosensory tinnitus.{{Cite journal|last1=Michiels|first1=Sarah|last2=Cardon|first2=Emilie|last3=Gilles|first3=Annick|last4=Goedhart|first4=Hazel|last5=Vesala|first5=Markku|last6=Schlee|first6=Winfried|date=2021-07-14|title=Somatosensory Tinnitus Diagnosis: Diagnostic Value of Existing Criteria|url=http://dx.doi.org/10.1097/aud.0000000000001105|journal=Ear & Hearing|volume=43|issue=1|pages=143–149|doi=10.1097/aud.0000000000001105|pmid=34261856|hdl=1942/34681 |s2cid=235907109|issn=1538-4667|hdl-access=free}} Automated natural language processing (NLP) techniques, including unsupervised and supervised machine learning, have been used to analyze social media posts about tinnitus and to characterize the heterogeneity of its symptoms.{{Cite journal|last1=Palacios|first1=Guillaume|last2=Noreña|first2=Arnaud|last3=Londero|first3=Alain|date=2020|title=Assessing the Heterogeneity of Complaints Related to Tinnitus and Hyperacusis from an Unsupervised Machine Learning Approach: An Exploratory Study|journal=Audiology and Neurotology|volume=25|issue=4|pages=174–189|doi=10.1159/000504741|pmid=32062654|s2cid=211135952|issn=1420-3030|doi-access=free}}{{Cite web|date=2021-06-07|title=What can we learn about tinnitus from social media posts?|url=https://computationalaudiology.com/what-can-we-learn-about-tinnitus-from-social-media-posts/|access-date=2022-01-20|website=Computational Audiology|language=en-US}}
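Unsupervised analyses of this kind group free-text complaints by similarity before clustering them. The bag-of-words sketch below illustrates the first step with invented example posts and plain cosine similarity; the cited pipelines use far richer text representations and clustering methods.

```python
import math
from collections import Counter

# Toy bag-of-words similarity for short posts: a minimal stand-in for the
# unsupervised NLP analyses cited above.

def bow(text):
    # Word-count vector for a post (no stemming or stop-word removal)
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)   # Counter returns 0 for missing words
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

posts = [
    "ringing in my left ear gets louder at night",         # tinnitus loudness complaint
    "the ringing in both ears is louder when i am tired",  # tinnitus loudness complaint
    "everyday sounds feel painfully loud to me",           # hyperacusis-like complaint
]
sim_same = cosine(bow(posts[0]), bow(posts[1]))
sim_diff = cosine(bow(posts[0]), bow(posts[2]))
print(sim_same > sim_diff)  # True: similar complaints score as more similar
```

Clustering such similarity scores over thousands of posts is what lets unsupervised methods surface distinct subgroups of tinnitus and hyperacusis complaints.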

= Diagnostics for hearing problems, acoustics to facilitate hearing =

Machine learning has been applied to audiometry to create flexible, efficient estimation tools that determine an individual's auditory profile without excessive testing time.{{Cite journal|last1=Barbour|first1=Dennis L.|last2=Howard|first2=Rebecca T.|last3=Song|first3=Xinyu D.|last4=Metzger|first4=Nikki|last5=Sukesan|first5=Kiron A.|last6=DiLorenzo|first6=James C.|last7=Snyder|first7=Braham R. D.|last8=Chen|first8=Jeff Y.|last9=Degen|first9=Eleanor A.|last10=Buchbinder|first10=Jenna M.|last11=Heisey|first11=Katherine L.|date=July 2019|title=Online Machine Learning Audiometry|url=https://journals.lww.com/00003446-201907000-00014|journal=Ear & Hearing|language=en|volume=40|issue=4|pages=918–926|doi=10.1097/AUD.0000000000000669|issn=0196-0202|pmc=6476703|pmid=30358656}}{{Cite journal|last1=Schlittenlacher|first1=Josef|last2=Turner|first2=Richard E.|last3=Moore|first3=Brian C. J.|date=2018-07-01|title=Audiogram estimation using Bayesian active learning|url=https://asa.scitation.org/doi/10.1121/1.5047436|journal=The Journal of the Acoustical Society of America|volume=144|issue=1|pages=421–430|doi=10.1121/1.5047436|pmid=30075695|bibcode=2018ASAJ..144..421S|s2cid=51910371|issn=0001-4966|url-access=subscription}} Similarly, machine learning-based versions of other auditory tests, such as those for detecting dead regions in the cochlea or estimating equal-loudness contours,{{Cite journal|last1=Schlittenlacher|first1=Josef|last2=Moore|first2=Brian C. J.|date=2020|title=Fast estimation of equal-loudness contours using Bayesian active learning and direct scaling|url=https://www.jstage.jst.go.jp/article/ast/41/1/41_E19252/_article|journal=Acoustical Science and Technology|volume=41|issue=1|pages=358–360|doi=10.1250/ast.41.358|s2cid=214270892|doi-access=free}} have been created.
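Methods of this kind typically maintain a probabilistic estimate of the listener's threshold and place each next tone where it is expected to be informative. The sketch below illustrates the idea on a one-dimensional grid with a noiseless simulated listener; the grid, slope, and probe rule are invented for illustration, whereas the published methods model frequency and intensity jointly (e.g. with Gaussian processes) and account for response noise.

```python
import math

# Minimal Bayesian active-learning audiometry sketch: a posterior over
# candidate hearing thresholds is updated after each trial, and the next
# tone level is placed at the posterior mean.

GRID = list(range(0, 101))   # candidate thresholds, dB HL
SLOPE = 5.0                  # psychometric slope assumed by the model (dB)

def p_heard(level, threshold):
    # Logistic psychometric function: probability of a "heard" response
    return 1.0 / (1.0 + math.exp(-(level - threshold) / SLOPE))

def update(posterior, level, heard):
    like = [p_heard(level, t) if heard else 1 - p_heard(level, t) for t in GRID]
    post = [p * l for p, l in zip(posterior, like)]
    z = sum(post)
    return [p / z for p in post]

true_threshold = 42
posterior = [1.0 / len(GRID)] * len(GRID)   # flat prior
level = 50.0                                 # first probe tone
for _ in range(15):
    heard = level >= true_threshold          # simulated listener, no response noise
    posterior = update(posterior, level, heard)
    level = sum(t * p for t, p in zip(GRID, posterior))  # next probe: posterior mean
print(round(level))
```

After a handful of trials the probe levels bracket the true threshold, which is why such procedures need far fewer presentations than an exhaustive sweep of levels.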

= e-Research (remote testing, online experiments, new tools and frameworks) =

Examples of e-Research tools include the Remote Testing Wiki,{{Cite web|title=PP Remote Testing Wiki {{!}} Main / RemoteTesting|url=https://www.spatialhearing.org/remotetesting/Main/RemoteTesting|access-date=2022-01-20|website=www.spatialhearing.org}} Portable Automated Rapid Testing (PART), Ecological Momentary Assessment (EMA), and the NIOSH sound level meter. Additional tools can be found online.{{Cite web|title=Resources|url=https://computationalaudiology.com/resources/|access-date=2022-01-20|website=Computational Audiology|language=en-US}}

Software and tools

Software and large datasets are important for the development and adoption of computational audiology. Like many areas of scientific computing, computational audiology depends critically on open-source software and its continual maintenance, development, and advancement.{{Cite journal|last1=Fortunato|first1=Laura|last2=Galassi|first2=Mark|date=2021-05-17|title=The case for free and open source software in research and scholarship|url=https://royalsocietypublishing.org/doi/10.1098/rsta.2020.0079|journal=Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences|volume=379|issue=2197|pages=20200079|doi=10.1098/rsta.2020.0079|pmid=33775148|bibcode=2021RSPTA.37900079F|s2cid=232387092}}

Related fields

Computational biology, computational medicine, and computational pathology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science.

See also

{{Wikimedia Commons|Category:Audiology|Audiology}}

{{Wikiversity|Global Audiology}}

References