Language processing in the brain

{{Short description|How humans use words to communicate}}{{redirect|Language processing|the processing of language by computers|Natural language processing}}

{{Multiple issues|

{{Primary sources|date=October 2018|article}}

{{update|date=October 2018}}

}}

[[File:Neurolinguistics.png|thumb|Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.]]

In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.{{cite journal|last1=Seidenberg|first1=Mark S.|last2=Petitto|first2=Laura A. | name-list-style = vanc |date=1987|title=Communication, symbolic communication, and language: Comment on Savage-Rumbaugh, McDonald, Sevcik, Hopkins, and Rupert (1986) |journal=Journal of Experimental Psychology: General | volume=116 |issue=3 |pages=279–287 |doi=10.1037/0096-3445.116.3.279 |s2cid=18329599 }}

Throughout the 20th century, the dominant model{{cite journal | vauthors = Geschwind N | title = Disconnexion syndromes in animals and man. I | journal = Brain | volume = 88 | issue = 2 | pages = 237–94 | date = June 1965 | pmid = 5318481 | doi = 10.1093/brain/88.2.237 | department = review | doi-access = free }} for language processing in the brain was the Wernicke–Lichtheim–Geschwind model, which is based primarily on the analysis of brain-damaged patients. However, due to improvements in intra-cortical electrophysiological recordings of monkey and human brains, as well as non-invasive techniques such as fMRI, PET, MEG and EEG, an auditory pathway consisting of two parts{{cite journal | vauthors = Hickok G, Poeppel D | title = The cortical organization of speech processing | journal = Nature Reviews. Neuroscience | volume = 8 | issue = 5 | pages = 393–402 | date = May 2007 | pmid = 17431404 | doi = 10.1038/nrn2113 | s2cid = 6199399 | department = review }}{{cite journal | vauthors = Gow DW | title = The cortical organization of lexical knowledge: a dual lexicon model of spoken language processing | journal = Brain and Language | volume = 121 | issue = 3 | pages = 273–88 | date = June 2012 | pmid = 22498237 | doi = 10.1016/j.bandl.2012.03.005 | department = review | pmc = 3348354 }} has been revealed, and a two-streams model has been developed. In accordance with this model, two pathways connect the auditory cortex to the frontal lobe, each pathway accounting for different linguistic roles. The auditory ventral stream is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The auditory dorsal stream (ADS) in both humans and non-human primates is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory. In accordance with the 'from where to what' model of language evolution,{{cite journal | vauthors = Poliva O | title = From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans | journal = F1000Research | volume = 4 | pages = 67 | date = 2017-09-20 | pmid = 28928931 | pmc = 5600004 | doi = 10.12688/f1000research.6175.3 | department = review | doi-access = free }}{{cite journal | vauthors = Poliva O | title = From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language | journal = Frontiers in Neuroscience | volume = 10 | pages = 307 | date = 2016 | pmid = 27445676 | pmc = 4928493 | doi = 10.3389/fnins.2016.00307 | department = review | doi-access = free }} the reason the ADS is characterized by such a broad range of functions is that each function marks a different stage in language evolution. Material was copied from these sources, which are available under a [https://creativecommons.org/licenses/by/4.0/ Creative Commons Attribution 4.0 International License].

The division of the two streams first occurs in the auditory nerve, where the anterior branch enters the anterior cochlear nucleus in the brainstem, giving rise to the auditory ventral stream. The posterior branch enters the dorsal and posteroventral cochlear nuclei to give rise to the auditory dorsal stream.{{cite book | vauthors = Pickles JO | chapter = Chapter 1: Auditory pathways: anatomy and physiology | veditors = Aminoff MJ, Boller F, Swaab DF | title = Handbook of Clinical Neurology | volume = 129 | pages = 3–25 | date = 2015 | pmid = 25726260 | doi = 10.1016/B978-0-444-62630-1.00001-9 | isbn = 978-0-444-62630-1 | department = review }}{{rp|8}}

Language processing can also occur in relation to signed languages or written content.

Early neurolinguistics models

[[Image:Brain Surface Gyri.SVG|thumb|The angular gyrus is represented in orange, the supramarginal gyrus in yellow, Broca's area in blue, Wernicke's area in green, and the primary auditory cortex in pink.]]

Throughout the 20th century, our knowledge of language processing in the brain was dominated by the Wernicke–Lichtheim–Geschwind model.{{cite journal |vauthors=Lichtheim L|date=1885-01-01 |title=On Aphasia |journal=Brain |volume=7 |issue=4 |pages=433–484 |doi=10.1093/brain/7.4.433 |hdl=11858/00-001M-0000-002C-5780-B |hdl-access=free }}{{cite book |title=Der aphasische Symptomenkomplex |last=Wernicke |first=Carl |name-list-style=vanc |date=1974 |publisher=Springer Berlin Heidelberg |isbn=978-3-540-06905-8 |pages=1–70 }} The Wernicke–Lichtheim–Geschwind model is primarily based on research conducted on brain-damaged individuals who were reported to possess a variety of language-related disorders. In accordance with this model, words are perceived via a specialized word reception center (Wernicke's area) that is located in the left temporoparietal junction. This region then projects to a word production center (Broca's area) that is located in the left inferior frontal gyrus. Because almost all language input was thought to funnel via Wernicke's area and all language output to funnel via Broca's area, it became extremely difficult to identify the basic properties of each region. This lack of clear definition for the contribution of Wernicke's and Broca's regions to human language rendered it extremely difficult to identify their homologues in other primates.{{cite journal | vauthors = Aboitiz F, García VR | title = The evolutionary origin of the language areas in the human brain. A neuroanatomical perspective | journal = Brain Research. Brain Research Reviews | volume = 25 | issue = 3 | pages = 381–96 | date = December 1997 | pmid = 9495565 | doi = 10.1016/s0165-0173(97)00053-2 | s2cid = 20704891 }} With the advent of fMRI and its application to lesion mapping, however, it was shown that this model is based on incorrect correlations between symptoms and lesions.{{cite journal | vauthors = Anderson JM, Gilmore R, Roper S, Crosson B, Bauer RM, Nadeau S, Beversdorf DQ, Cibula J, Rogish M, Kortencamp S, Hughes JD, Gonzalez Rothi LJ, Heilman KM | title = Conduction aphasia and the arcuate fasciculus: A reexamination of the Wernicke–Geschwind model | journal = Brain and Language | volume = 70 | issue = 1 | pages = 1–12 | date = October 1999 | pmid = 10534369 | doi = 10.1006/brln.1999.2135 | s2cid = 12171982 }}{{cite journal | vauthors = DeWitt I, Rauschecker JP | title = Wernicke's area revisited: parallel streams and word processing | journal = Brain and Language | volume = 127 | issue = 2 | pages = 181–91 | date = November 2013 | pmid = 24404576 | doi = 10.1016/j.bandl.2013.09.014 | pmc=4098851}}{{cite journal | vauthors = Dronkers NF | title = The pursuit of brain-language relationships | journal = Brain and Language | volume = 71 | issue = 1 | pages = 59–61 | date = January 2000 | pmid = 10716807 | doi = 10.1006/brln.1999.2212 | s2cid = 7224731 }}{{cite journal | vauthors = Dronkers NF, Wilkins DP, Van Valin RD, Redfern BB, Jaeger JJ | title = Lesion analysis of the brain areas involved in language comprehension | journal = Cognition | volume = 92 | issue = 1–2 | pages = 145–77 | date = May 2004 | pmid = 15037129 | doi = 10.1016/j.cognition.2003.11.002 | hdl = 11858/00-001M-0000-0012-6912-A | s2cid = 10919645 | hdl-access = free }}{{cite journal | vauthors = Mesulam MM, Thompson CK, Weintraub S, Rogalski EJ | title = The Wernicke conundrum and the anatomy of language comprehension in primary progressive aphasia | journal = Brain | volume = 138 | issue = Pt 8 | pages = 2423–37 | date = 
August 2015 | pmid = 26112340 | doi = 10.1093/brain/awv154 | pmc = 4805066 }}{{cite journal | vauthors = Poeppel D, Emmorey K, Hickok G, Pylkkänen L | title = Towards a new neurobiology of language | journal = The Journal of Neuroscience | volume = 32 | issue = 41 | pages = 14125–31 | date = October 2012 | pmid = 23055482 | doi = 10.1523/jneurosci.3244-12.2012 | pmc = 3495005 }}{{cite journal | vauthors = Vignolo LA, Boccardi E, Caverni L | title = Unexpected CT-scan findings in global aphasia | journal = Cortex; A Journal Devoted to the Study of the Nervous System and Behavior | volume = 22 | issue = 1 | pages = 55–69 | date = March 1986 | pmid = 2423296 | doi = 10.1016/s0010-9452(86)80032-6 | s2cid = 4479679 | doi-access = free }} The refutation of such an influential and dominant model opened the door to new models of language processing in the brain.

Current neurolinguistics models

= Anatomy =

In the last two decades, significant advances have occurred in our understanding of the neural processing of sounds in primates. Initially through recordings of neural activity in the auditory cortices of monkeys{{cite journal | vauthors = Bendor D, Wang X | title = Cortical representations of pitch in monkeys and humans | journal = Current Opinion in Neurobiology | volume = 16 | issue = 4 | pages = 391–9 | date = August 2006 | pmid = 16842992 | doi = 10.1016/j.conb.2006.07.001 | pmc = 4325365 }}{{cite journal | vauthors = Rauschecker JP, Tian B, Hauser M | title = Processing of complex sounds in the macaque nonprimary auditory cortex | journal = Science | volume = 268 | issue = 5207 | pages = 111–4 | date = April 1995 | pmid = 7701330 | doi = 10.1126/science.7701330 | bibcode = 1995Sci...268..111R | s2cid = 19590708 }} and later through histological staining{{cite journal | vauthors = de la Mothe LA, Blumell S, Kajikawa Y, Hackett TA | title = Cortical connections of the auditory cortex in marmoset monkeys: core and medial belt regions | journal = The Journal of Comparative Neurology | volume = 496 | issue = 1 | pages = 27–71 | date = May 2006 | pmid = 16528722 | doi = 10.1002/cne.20923 | s2cid = 38393074 }}{{cite journal | vauthors = de la Mothe LA, Blumell S, Kajikawa Y, Hackett TA | title = Cortical connections of auditory cortex in marmoset monkeys: lateral belt and parabelt regions | journal = Anatomical Record | volume = 295 | issue = 5 | pages = 800–21 | date = May 2012 | pmid = 22461313 | doi = 10.1002/ar.22451 | pmc=3379817}}{{cite journal | vauthors = Kaas JH, Hackett TA | title = Subdivisions of auditory cortex and processing streams in primates | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 97 | issue = 22 | pages = 11793–9 | date = October 2000 | pmid = 11050211 | doi = 10.1073/pnas.97.22.11793 | pmc=34351| bibcode = 2000PNAS...9711793K | doi-access = free }} and fMRI scanning studies,{{cite journal | vauthors = Petkov CI, Kayser C, Augath M, Logothetis NK | title = Functional imaging reveals numerous fields in the monkey auditory cortex | journal = PLOS Biology | volume = 4 | issue = 7 | pages = e215 | date = July 2006 | pmid = 16774452 | doi = 10.1371/journal.pbio.0040215 | pmc=1479693 | doi-access = free }} three auditory fields were identified in the primary auditory cortex, and nine associative auditory fields were shown to surround them (Figure 1 top left). 
Anatomical tracing and lesion studies further indicated a separation between the anterior and posterior auditory fields, with the anterior primary auditory fields (areas R-RT) projecting to the anterior associative auditory fields (areas AL-RTL), and the posterior primary auditory field (area A1) projecting to the posterior associative auditory fields (areas CL-CM).{{cite journal | vauthors = Morel A, Garraghty PE, Kaas JH | title = Tonotopic organization, architectonic fields, and connections of auditory cortex in macaque monkeys | journal = The Journal of Comparative Neurology | volume = 335 | issue = 3 | pages = 437–59 | date = September 1993 | pmid = 7693772 | doi = 10.1002/cne.903350312 | s2cid = 22872232 }}{{cite journal | vauthors = Rauschecker JP, Tian B | title = Mechanisms and streams for processing of "what" and "where" in auditory cortex | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 97 | issue = 22 | pages = 11800–6 | date = October 2000 | pmid = 11050212 | doi = 10.1073/pnas.97.22.11800 | pmc=34352| bibcode = 2000PNAS...9711800R | doi-access = free }}{{cite journal | vauthors = Rauschecker JP, Tian B, Pons T, Mishkin M | title = Serial and parallel processing in rhesus monkey auditory cortex | journal = The Journal of Comparative Neurology | volume = 382 | issue = 1 | pages = 89–103 | date = May 1997 | pmid = 9136813 | doi = 10.1002/(sici)1096-9861(19970526)382:1<89::aid-cne6>3.3.co;2-y }} More recently, evidence has accumulated that indicates homology between the human and monkey auditory fields. In humans, histological staining studies revealed two separate auditory fields in the primary auditory region of Heschl's gyrus,{{cite journal | vauthors = Sweet RA, Dorph-Petersen KA, Lewis DA | title = Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus | journal = The Journal of Comparative Neurology | volume = 491 | issue = 3 | pages = 270–89 | date = October 2005 | pmid = 16134138 | doi = 10.1002/cne.20702 | s2cid = 40822276 }}{{cite journal | vauthors = Wallace MN, Johnston PW, Palmer AR | title = Histochemical identification of cortical areas in the auditory region of the human brain | journal = Experimental Brain Research | volume = 143 | issue = 4 | pages = 499–508 | date = April 2002 | pmid = 11914796 | doi = 10.1007/s00221-002-1014-z | s2cid = 24211906 }} By mapping the tonotopic organization of the human primary auditory fields with high-resolution fMRI and comparing it to the tonotopic organization of the monkey primary auditory fields, homology was established between the human anterior primary auditory field and monkey area R (denoted in humans as area hR), and between the human posterior primary auditory field and monkey area A1 (denoted in humans as area hA1).{{cite journal | vauthors = Da Costa S, van der Zwaag W, Marques JP, Frackowiak RS, Clarke S, Saenz M | title = Human primary auditory cortex follows the shape of Heschl's gyrus | journal = The Journal of Neuroscience | volume = 31 | issue = 40 | pages = 14067–75 | date = October 2011 | pmid = 21976491 | doi = 10.1523/jneurosci.2000-11.2011 | pmc = 6623669 | doi-access = free }}{{cite journal | vauthors = Humphries C, Liebenthal E, Binder JR | title = Tonotopic organization of human auditory cortex | journal = NeuroImage | volume = 50 | issue = 3 | pages = 1202–11 | date = April 2010 | pmid = 20096790 | doi = 10.1016/j.neuroimage.2010.01.046 | pmc=2830355}}{{cite journal | vauthors = Langers DR, van Dijk P | title = 
Mapping the tonotopic organization in human auditory cortex with minimally salient acoustic stimulation | journal = Cerebral Cortex | volume = 22 | issue = 9 | pages = 2024–38 | date = September 2012 | pmid = 21980020 | doi = 10.1093/cercor/bhr282 | pmc=3412441}}{{cite journal | vauthors = Striem-Amit E, Hertz U, Amedi A | title = Extensive cochleotopic mapping of human auditory cortical fields obtained with phase-encoding fMRI | journal = PLOS ONE | volume = 6 | issue = 3 | pages = e17832 | date = March 2011 | pmid = 21448274 | doi = 10.1371/journal.pone.0017832 | pmc=3063163| bibcode = 2011PLoSO...617832S | doi-access = free }}{{cite journal | vauthors = Woods DL, Herron TJ, Cate AD, Yund EW, Stecker GC, Rinne T, Kang X | title = Functional properties of human auditory cortical fields | journal = Frontiers in Systems Neuroscience | volume = 4 | pages = 155 | date = 2010 | pmid = 21160558 | doi = 10.3389/fnsys.2010.00155 | pmc=3001989| doi-access = free }} Intra-cortical recordings from the human auditory cortex further demonstrated similar patterns of connectivity to the auditory cortex of the monkey. Recordings from the surface of the auditory cortex (supra-temporal plane) reported that the anterior Heschl's gyrus (area hR) projects primarily to the middle-anterior superior temporal gyrus (mSTG-aSTG) and that the posterior Heschl's gyrus (area hA1) projects primarily to the posterior superior temporal gyrus (pSTG) and the planum temporale (area PT; Figure 1 top right).{{cite journal | vauthors = Gourévitch B, Le Bouquin Jeannès R, Faucon G, Liégeois-Chauvel C | title = Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas | journal = Hearing Research | volume = 237 | issue = 1–2 | pages = 1–18 | date = March 2008 | pmid = 18255243 | doi = 10.1016/j.heares.2007.12.003 | s2cid = 15271578 | url = https://www.hal.inserm.fr/inserm-00254870/file/Temporal_envelope_processing_in_the_human_auditory_cortex.pdf }}{{cite journal | vauthors = Guéguin M, Le Bouquin-Jeannès R, Faucon G, Chauvel P, Liégeois-Chauvel C | title = Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing | journal = Cerebral Cortex | volume = 17 | issue = 2 | pages = 304–13 | date = February 2007 | pmid = 16514106 | doi = 10.1093/cercor/bhj148 | pmc=2111045}} Consistent with connections from area hR to the aSTG and from hA1 to the pSTG is an fMRI study of a patient with impaired sound recognition (auditory agnosia), who showed reduced bilateral activation in areas hR and aSTG but spared activation in the mSTG-pSTG.{{cite journal | vauthors = Poliva O, Bestelmeyer PE, Hall M, Bultitude JH, Koller K, Rafal RD | title = Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus | journal = Cognitive and Behavioral Neurology | volume = 28 | issue = 3 | pages = 160–80 | date = September 2015 | pmid = 26413744 | doi = 10.1097/wnn.0000000000000072 | s2cid = 913296 | url = http://opus.bath.ac.uk/48370/1/Poliva_Rafal_Auditory_Agnosia_CBN_v28n3_Sept2015.pdf }} This connectivity pattern is also corroborated by a study that recorded activation from the lateral surface of the auditory cortex and reported simultaneous, non-overlapping activation clusters in the pSTG and mSTG-aSTG while listening to sounds.{{cite journal | vauthors = Chang EF, Edwards E, Nagarajan SS, Fogelson N, Dalal SS, Canolty RT, Kirsch HE, Barbaro NM, Knight 
RT | title = Cortical spatio-temporal dynamics underlying phonological target detection in humans | journal = Journal of Cognitive Neuroscience | volume = 23 | issue = 6 | pages = 1437–46 | date = June 2011 | pmid = 20465359 | doi = 10.1162/jocn.2010.21466 | pmc=3895406}}

Downstream to the auditory cortex, anatomical tracing studies in monkeys delineated projections from the anterior associative auditory fields (areas AL-RTL) to ventral prefrontal and premotor cortices in the inferior frontal gyrus (IFG){{cite journal | vauthors = Muñoz M, Mishkin M, Saunders RC | title = Resection of the medial temporal lobe disconnects the rostral superior temporal gyrus from some of its projection targets in the frontal lobe and thalamus | journal = Cerebral Cortex | volume = 19 | issue = 9 | pages = 2114–30 | date = September 2009 | pmid = 19150921 | doi = 10.1093/cercor/bhn236 | pmc=2722427}}{{cite journal | vauthors = Romanski LM, Bates JF, Goldman-Rakic PS | title = Auditory belt and parabelt projections to the prefrontal cortex in the rhesus monkey | journal = The Journal of Comparative Neurology | volume = 403 | issue = 2 | pages = 141–57 | date = January 1999 | pmid = 9886040 | doi = 10.1002/(sici)1096-9861(19990111)403:2<141::aid-cne1>3.0.co;2-v | s2cid = 42482082 }} and amygdala.{{cite journal | vauthors = Tanaka D | title = Thalamic projections of the dorsomedial prefrontal cortex in the rhesus monkey (Macaca mulatta) | journal = Brain Research | volume = 110 | issue = 1 | pages = 21–38 | date = June 1976 | pmid = 819108 | doi = 10.1016/0006-8993(76)90206-7 | s2cid = 21529048 }} Cortical recording and functional imaging studies in macaque monkeys further elaborated on this processing stream by showing that acoustic information flows from the anterior auditory cortex to the temporal pole (TP) and then to the IFG.{{cite journal | vauthors = Perrodin C, Kayser C, Logothetis NK, Petkov CI | title = Voice cells in the primate temporal lobe | journal = Current Biology | volume = 21 | issue = 16 | pages = 1408–15 | date = August 2011 | pmid = 21835625 | doi = 10.1016/j.cub.2011.07.028 | pmc=3398143| bibcode = 2011CBio...21.1408P }}{{cite journal | vauthors = Petkov CI, Kayser C, Steudel T, Whittingstall K, Augath M, Logothetis NK | title = A voice region in the monkey brain | journal = Nature Neuroscience | volume = 11 | issue = 3 | pages = 367–74 | date = March 2008 | pmid = 18264095 | doi = 10.1038/nn2043 | s2cid = 5505773 }}{{cite journal | vauthors = Poremba A, Malloy M, Saunders RC, Carson RE, Herscovitch P, Mishkin M | title = Species-specific calls evoke asymmetric activity in the monkey's temporal poles | journal = Nature | volume = 427 | issue = 6973 | pages = 448–51 | date = January 2004 | pmid = 14749833 | doi = 10.1038/nature02268 | bibcode = 2004Natur.427..448P | s2cid = 4402126 }}{{cite journal | vauthors = Romanski LM, Averbeck BB, Diltz M | title = Neural representation of vocalizations in the primate ventrolateral prefrontal cortex | journal = Journal of Neurophysiology | volume = 93 | issue = 2 | pages = 734–47 | date = February 2005 | pmid = 15371495 | doi = 10.1152/jn.00675.2004 }}{{cite journal | vauthors = Russ BE, Ackelson AL, Baker AE, Cohen YE | title = Coding of auditory-stimulus identity in the auditory non-spatial processing stream | journal = Journal of Neurophysiology | volume = 99 | issue = 1 | pages = 87–95 | date = January 2008 | pmid = 18003874 | doi = 10.1152/jn.01069.2007 | pmc=4091985}}{{cite journal | vauthors = Tsunada J, Lee JH, Cohen YE | title = Representation of speech categories in the primate auditory cortex | journal = Journal of Neurophysiology | volume = 105 | issue = 6 | pages = 2634–46 | date = June 2011 | pmid = 21346209 | doi = 10.1152/jn.00037.2011 | pmc = 3118748 }} This pathway is commonly referred to as the 
auditory ventral stream (AVS; Figure 1, bottom left-red arrows). In contrast to the anterior auditory fields, tracing studies reported that the posterior auditory fields (areas CL-CM) project primarily to dorsolateral prefrontal and premotor cortices (although some projections do terminate in the IFG).{{cite journal | vauthors = Cusick CG, Seltzer B, Cola M, Griggs E | title = Chemoarchitectonics and corticocortical terminations within the superior temporal sulcus of the rhesus monkey: evidence for subdivisions of superior temporal polysensory cortex | journal = The Journal of Comparative Neurology | volume = 360 | issue = 3 | pages = 513–35 | date = September 1995 | pmid = 8543656 | doi = 10.1002/cne.903600312 | s2cid = 42281619 }} Cortical recordings and anatomical tracing studies in monkeys further provided evidence that this processing stream flows from the posterior auditory fields to the frontal lobe via a relay station in the intra-parietal sulcus (IPS).{{cite journal | vauthors = Cohen YE, Russ BE, Gifford GW, Kiringoda R, MacLean KA | title = Selectivity for the spatial and nonspatial attributes of auditory stimuli in the ventrolateral prefrontal cortex | journal = The Journal of Neuroscience | volume = 24 | issue = 50 | pages = 11307–16 | date = December 2004 | pmid = 15601937 | doi = 10.1523/jneurosci.3935-04.2004 | pmc = 6730358 | doi-access = free }}{{cite journal|last=Deacon|first=Terrence W | name-list-style = vanc |date= February 1992 |title=Cortical connections of the inferior arcuate sulcus cortex in the macaque brain | journal=Brain Research|volume=573|issue=1|pages=8–26|doi=10.1016/0006-8993(92)90109-m|pmid=1374284 |s2cid=20670766 |issn=0006-8993}}{{cite journal | vauthors = Lewis JW, Van Essen DC | title = Corticocortical connections of visual, sensorimotor, and multimodal processing areas in the parietal lobe of the macaque monkey | journal = The Journal of Comparative Neurology | volume = 428 | issue = 1 | pages = 112–37 | date = December 2000 | pmid = 11058227 | doi = 10.1002/1096-9861(20001204)428:1<112::aid-cne8>3.0.co;2-9 | s2cid = 16153360 }}{{cite journal | vauthors = Roberts AC, Tomic DL, Parkinson CH, Roeling TA, Cutter DJ, Robbins TW, Everitt BJ | title = Forebrain connectivity of the prefrontal cortex in the marmoset monkey (Callithrix jacchus): an anterograde and retrograde tract-tracing study | journal = The Journal of Comparative Neurology | volume = 502 | issue = 1 | pages = 86–112 | date = May 2007 | pmid = 17335041 | doi = 10.1002/cne.21300 | s2cid = 18262007 }}{{cite journal | vauthors = Schmahmann JD, Pandya DN, Wang R, Dai G, D'Arceuil HE, de Crespigny AJ, Wedeen VJ | title = Association fibre pathways of the brain: parallel observations from diffusion spectrum imaging and autoradiography | journal = Brain | volume = 130 | issue = Pt 3 | pages = 630–53 | date = March 2007 | pmid = 17293361 | doi = 10.1093/brain/awl359 | doi-access = free }}{{cite journal | vauthors = Seltzer B, Pandya DN | title = Further observations on parieto-temporal connections in the rhesus monkey | journal = Experimental Brain Research | volume = 55 | issue = 2 | pages = 301–12 | date = July 1984 | pmid = 6745368 | doi = 10.1007/bf00237280 | s2cid = 20167953 }} This pathway is commonly referred to as the auditory dorsal stream (ADS; Figure 1, bottom left-blue arrows). 
Comparing the white matter pathways involved in communication in humans and monkeys with diffusion tensor imaging techniques indicates similar connections of the AVS and ADS in the two species (monkey, human{{cite journal | vauthors = Catani M, Jones DK, ffytche DH | title = Perisylvian language networks of the human brain | journal = Annals of Neurology | volume = 57 | issue = 1 | pages = 8–16 | date = January 2005 | pmid = 15597383 | doi = 10.1002/ana.20319 | s2cid = 17743067 | doi-access = free }}{{cite journal | vauthors = Frey S, Campbell JS, Pike GB, Petrides M | title = Dissociating the human language pathways with high angular resolution diffusion fiber tractography | journal = The Journal of Neuroscience | volume = 28 | issue = 45 | pages = 11435–44 | date = November 2008 | pmid = 18987180 | doi = 10.1523/jneurosci.2388-08.2008 | pmc = 6671318 | doi-access = free }}{{cite journal | vauthors = Makris N, Papadimitriou GM, Kaiser JR, Sorg S, Kennedy DN, Pandya DN | title = Delineation of the middle longitudinal fascicle in humans: a quantitative, in vivo, DT-MRI study | journal = Cerebral Cortex | volume = 19 | issue = 4 | pages = 777–85 | date = April 2009 | pmid = 18669591 | doi = 10.1093/cercor/bhn124 | pmc=2651473}}{{cite journal | vauthors = Menjot de Champfleur N, Lima Maldonado I, Moritz-Gasser S, Machi P, Le Bars E, Bonafé A, Duffau H | title = Middle longitudinal fasciculus delineation within language pathways: a diffusion tensor imaging study in human | journal = European Journal of Radiology | volume = 82 | issue = 1 | pages = 151–7 | date = January 2013 | pmid = 23084876 | doi = 10.1016/j.ejrad.2012.05.034 | url = http://repositorio.ufba.br/ri/handle/ri/13927 }}{{cite journal | vauthors = Turken AU, Dronkers NF | title = The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses | journal = Frontiers in Systems Neuroscience | volume = 5 | pages = 1 | date = 2011 | pmid = 21347218 | doi = 10.3389/fnsys.2011.00001 | pmc=3039157| doi-access = free }}{{cite journal | vauthors = Saur D, Kreher BW, Schnell S, Kümmerer D, Kellmeyer P, Vry MS, Umarova R, Musso M, Glauche V, Abel S, Huber W, Rijntjes M, Hennig J, Weiller C | title = Ventral and dorsal pathways for language | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 105 | issue = 46 | pages = 18035–40 | date = November 2008 | pmid = 19004769 | doi = 10.1073/pnas.0805234105 | pmc=2584675| bibcode = 2008PNAS..10518035S | doi-access = free }}). In humans, the pSTG was shown to project to the parietal lobe (sylvian parietal-temporal junction-inferior parietal lobule; Spt-IPL), and from there to dorsolateral prefrontal and premotor cortices (Figure 1, bottom right-blue arrows), and the aSTG was shown to project to the anterior temporal lobe (middle temporal gyrus-temporal pole; MTG-TP) and from there to the IFG (Figure 1 bottom right-red arrows).

= {{Anchor|Auditory ventral stream}} Auditory ventral stream =

The auditory ventral stream (AVS) connects the auditory cortex with the middle temporal gyrus and temporal pole, which in turn connects with the inferior frontal gyrus. This pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The functions of the AVS include the following.

== Sound recognition ==

Converging evidence indicates that the AVS is involved in recognizing auditory objects. At the level of the primary auditory cortex, recordings from monkeys showed a higher percentage of neurons selective for learned melodic sequences in area R than in area A1,{{cite journal | vauthors = Yin P, Mishkin M, Sutter M, Fritz JB | title = Early stages of melody processing: stimulus-sequence and task-dependent neuronal activity in monkey auditory cortical fields A1 and R | journal = Journal of Neurophysiology | volume = 100 | issue = 6 | pages = 3009–29 | date = December 2008 | pmid = 18842950 | doi = 10.1152/jn.00828.2007 | pmc=2604844}} and a study in humans demonstrated more selectivity for heard syllables in the anterior Heschl's gyrus (area hR) than in the posterior Heschl's gyrus (area hA1).{{cite journal | vauthors = Steinschneider M, Volkov IO, Fishman YI, Oya H, Arezzo JC, Howard MA | title = Intracortical responses in human and monkey primary auditory cortex support a temporal processing mechanism for encoding of the voice onset time phonetic parameter | journal = Cerebral Cortex | volume = 15 | issue = 2 | pages = 170–86 | date = February 2005 | pmid = 15238437 | doi = 10.1093/cercor/bhh120 | doi-access = free }} In downstream associative auditory fields, studies from both monkeys and humans reported that the border between the anterior and posterior auditory fields (Figure 1-area PC in the monkey and mSTG in the human) processes pitch attributes that are necessary for the recognition of auditory objects. Selectivity for conspecific vocalizations in the anterior auditory fields of monkeys was also demonstrated with intra-cortical recordings{{cite journal | vauthors = Russ BE, Ackelson AL, Baker AE, Cohen YE | title = Coding of auditory-stimulus identity in the auditory non-spatial processing stream | journal = Journal of Neurophysiology | volume = 99 | issue = 1 | pages = 87–95 | date = January 2008 | pmid = 18003874 | doi = 10.1152/jn.01069.2007 | pmc=4091985}} and functional imaging.{{cite journal | vauthors = Joly O, Pallier C, Ramus F, Pressnitzer D, Vanduffel W, Orban GA | title = Processing of vocalizations in humans and monkeys: a comparative fMRI study | journal = NeuroImage | volume = 62 | issue = 3 | pages = 1376–89 | date = September 2012 | pmid = 22659478 | doi = 10.1016/j.neuroimage.2012.05.070 | s2cid = 9441377 | url = https://hal.archives-ouvertes.fr/hal-02327524/file/ProcessingVocalizations.pdf }} One fMRI study in monkeys further demonstrated a role of the aSTG in the recognition of individual voices. 
The role of the human mSTG-aSTG in sound recognition was demonstrated via functional imaging studies that correlated activity in this region with isolation of auditory objects from background noise,{{cite journal | vauthors = Scheich H, Baumgart F, Gaschler-Markefski B, Tegeler C, Tempelmann C, Heinze HJ, Schindler F, Stiller D | title = Functional magnetic resonance imaging of a human auditory cortex area involved in foreground-background decomposition | journal = The European Journal of Neuroscience | volume = 10 | issue = 2 | pages = 803–9 | date = February 1998 | pmid = 9749748 | doi = 10.1046/j.1460-9568.1998.00086.x | s2cid = 42898063 }}{{cite journal | vauthors = Zatorre RJ, Bouffard M, Belin P | title = Sensitivity to auditory object features in human temporal neocortex | journal = The Journal of Neuroscience | volume = 24 | issue = 14 | pages = 3637–42 | date = April 2004 | pmid = 15071112 | doi = 10.1523/jneurosci.5458-03.2004 | pmc = 6729744 | doi-access = free }} and with the recognition of spoken words,{{cite journal | vauthors = Binder JR, Desai RH, Graves WW, Conant LL | title = Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies | journal = Cerebral Cortex | volume = 19 | issue = 12 | pages = 2767–96 | date = December 2009 | pmid = 19329570 | doi = 10.1093/cercor/bhp055 | pmc=2774390}}{{cite journal | vauthors = Davis MH, Johnsrude IS | title = Hierarchical processing in spoken language comprehension | journal = The Journal of Neuroscience | volume = 23 | issue = 8 | pages = 3423–31 | date = April 2003 | pmid = 12716950 | doi = 10.1523/jneurosci.23-08-03423.2003 | pmc = 6742313 | doi-access = free }}{{cite journal | vauthors = Liebenthal E, Binder JR, Spitzer SM, Possing ET, Medler DA | title = Neural substrates of phonemic perception | journal = Cerebral Cortex | volume = 15 | issue = 10 | pages = 1621–31 | date = October 2005 | pmid = 15703256 | doi = 10.1093/cercor/bhi040 | doi-access = free }}{{cite journal | vauthors = Narain C, Scott SK, Wise RJ, Rosen S, Leff A, Iversen SD, Matthews PM | title = Defining a left-lateralized response specific to intelligible speech using fMRI | journal = Cerebral Cortex | volume = 13 | issue = 12 | pages = 1362–8 | date = December 2003 | pmid = 14615301 | doi = 10.1093/cercor/bhg083 | doi-access = free }}{{cite journal | vauthors = Obleser J, Boecker H, Drzezga A, Haslinger B, Hennenlotter A, Roettinger M, Eulitz C, Rauschecker JP | title = Vowel sound extraction in anterior superior temporal cortex | journal = Human Brain Mapping | volume = 27 | issue = 7 | pages = 562–71 | date = July 2006 | pmid = 16281283 | doi = 10.1002/hbm.20201 | pmc = 6871493 }}{{cite journal | vauthors = Obleser J, Zimmermann J, Van Meter J, Rauschecker JP | title = Multiple stages of auditory speech perception reflected in event-related FMRI | journal = Cerebral Cortex | volume = 17 | issue = 10 | pages = 2251–7 | date = October 2007 | pmid = 17150986 | doi = 10.1093/cercor/bhl133 | doi-access = free }}{{cite journal | vauthors = Scott SK, Blank CC, Rosen S, Wise RJ | title = Identification of a pathway for intelligible speech in the left temporal lobe | journal = Brain | volume = 123 | issue = 12 | pages = 2400–6 | date = December 2000 | pmid = 11099443 | doi = 10.1093/brain/123.12.2400 | pmc=5630088}} voices,{{cite journal | vauthors = Belin P, Zatorre RJ | title = Adaptation to speaker's voice in right anterior temporal lobe | journal = NeuroReport | volume = 14 | issue = 16 | pages = 2105–2109 | date = 
November 2003 | pmid = 14600506 | doi = 10.1097/00001756-200311140-00019 | s2cid = 34183900 }} melodies,{{cite journal | vauthors = Benson RR, Whalen DH, Richardson M, Swainson B, Clark VP, Lai S, Liberman AM | title = Parametrically dissociating speech and nonspeech perception in the brain using fMRI | journal = Brain and Language | volume = 78 | issue = 3 | pages = 364–96 | date = September 2001 | pmid = 11703063 | doi = 10.1006/brln.2001.2484 | s2cid = 15328590 }}{{cite journal | vauthors = Leaver AM, Rauschecker JP | title = Cortical representation of natural complex sounds: effects of acoustic features and auditory object category | journal = The Journal of Neuroscience | volume = 30 | issue = 22 | pages = 7604–12 | date = June 2010 | pmid = 20519535 | doi = 10.1523/jneurosci.0296-10.2010 | pmc=2930617}} environmental sounds,{{cite journal | vauthors = Lewis JW, Phinney RE, Brefczynski-Lewis JA, DeYoe EA | title = Lefties get it "right" when hearing tool sounds | journal = Journal of Cognitive Neuroscience | volume = 18 | issue = 8 | pages = 1314–30 | date = August 2006 | pmid = 16859417 | doi = 10.1162/jocn.2006.18.8.1314 | s2cid = 14049095 }}{{cite journal | vauthors = Maeder PP, Meuli RA, Adriani M, Bellmann A, Fornari E, Thiran JP, Pittet A, Clarke S | title = Distinct pathways involved in sound recognition and localization: a human fMRI study | journal = NeuroImage | volume = 14 | issue = 4 | pages = 802–16 | date = October 2001 | pmid = 11554799 | doi = 10.1006/nimg.2001.0888 | s2cid = 1388647 | url = http://infoscience.epfl.ch/record/86832/files/Maeder2003_155.pdf }}{{cite journal | vauthors = Viceic D, Fornari E, Thiran JP, Maeder PP, Meuli R, Adriani M, Clarke S | title = Human auditory belt areas specialized in sound recognition: a functional magnetic resonance imaging study | journal = NeuroReport | volume = 17 | issue = 16 | pages = 1659–62 | date = November 2006 | pmid = 17047449 | doi = 10.1097/01.wnr.0000239962.75943.dd | s2cid = 14482187 | url = http://infoscience.epfl.ch/record/91044/files/00001756-200611060-00001.pdf }} and non-speech communicative sounds.{{cite journal | vauthors = Shultz S, Vouloumanos A, Pelphrey K | title = The superior temporal sulcus differentiates communicative and noncommunicative auditory signals | journal = Journal of Cognitive Neuroscience | volume = 24 | issue = 5 | pages = 1224–32 | date = May 2012 | pmid = 22360624 | doi = 10.1162/jocn_a_00208 | s2cid = 10784270 }} A meta-analysis of fMRI studies{{cite journal | vauthors = DeWitt I, Rauschecker JP | title = Phoneme and word recognition in the auditory ventral stream | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 109 | issue = 8 | pages = E505-14 | date = February 2012 | pmid = 22308358 | doi = 10.1073/pnas.1113427109 | pmc=3286918| bibcode = 2012PNAS..109E.505D | doi-access = free }} further demonstrated functional dissociation between the left mSTG and aSTG, with the former processing short speech units (phonemes) and the latter processing longer units (e.g., words, environmental sounds). 
A study that recorded neural activity directly from the left pSTG and aSTG reported that the aSTG, but not the pSTG, was more active when the patient listened to speech in her native language than in an unfamiliar foreign language.{{cite journal | vauthors = Lachaux JP, Jerbi K, Bertrand O, Minotti L, Hoffmann D, Schoendorff B, Kahane P | title = A blueprint for real-time functional mapping via human intracranial recordings | journal = PLOS ONE | volume = 2 | issue = 10 | pages = e1094 | date = October 2007 | pmid = 17971857 | doi = 10.1371/journal.pone.0001094 | pmc=2040217| bibcode = 2007PLoSO...2.1094L | doi-access = free }} Consistently, electro-stimulation to the aSTG of this patient resulted in impaired speech perception (see also{{cite journal | vauthors = Matsumoto R, Imamura H, Inouchi M, Nakagawa T, Yokoyama Y, Matsuhashi M, Mikuni N, Miyamoto S, Fukuyama H, Takahashi R, Ikeda A | title = Left anterior temporal cortex actively engages in speech perception: A direct cortical stimulation study | journal = Neuropsychologia | volume = 49 | issue = 5 | pages = 1350–1354 | date = April 2011 | pmid = 21251921 | doi = 10.1016/j.neuropsychologia.2011.01.023 | hdl = 2433/141342 | s2cid = 1831334 | hdl-access = free }}{{cite journal | vauthors = Roux FE, Miskin K, Durand JB, Sacko O, Réhault E, Tanova R, Démonet JF | title = Electrostimulation mapping of comprehension of auditory and visual words | journal = Cortex; A Journal Devoted to the Study of the Nervous System and Behavior | volume = 71 | pages = 398–408 | date = October 2015 | pmid = 26332785 | doi = 10.1016/j.cortex.2015.07.001 | s2cid = 39964328 }} for similar results). Intra-cortical recordings from the right and left aSTG further demonstrated that speech is processed laterally to music. An fMRI study of a patient with impaired sound recognition (auditory agnosia) due to brainstem damage also showed reduced activation in areas hR and aSTG of both hemispheres when the patient heard spoken words and environmental sounds. Recordings from the anterior auditory cortex of monkeys while maintaining learned sounds in working memory, and the debilitating effect of induced lesions to this region on working memory recall,{{cite journal | vauthors = Fritz J, Mishkin M, Saunders RC | title = In search of an auditory engram | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 102 | issue = 26 | pages = 9359–64 | date = June 2005 | pmid = 15967995 | doi = 10.1073/pnas.0503998102 | pmc=1166637| bibcode = 2005PNAS..102.9359F | doi-access = free }}{{cite journal | vauthors = Stepien LS, Cordeau JP, Rasmussen T | title = The effect of temporal lobe and hippocampal lesions on auditory and visual recent memory in monkeys |date=1960| journal=Brain|volume=83|issue=3|pages=470–489|doi=10.1093/brain/83.3.470|issn=0006-8950|doi-access=free}}{{cite journal | vauthors = Strominger NL, Oesterreich RE, Neff WD | title = Sequential auditory and visual discriminations after temporal lobe ablation in monkeys | journal = Physiology & Behavior | volume = 24 | issue = 6 | pages = 1149–56 | date = June 1980 | pmid = 6774349 | doi = 10.1016/0031-9384(80)90062-1 | s2cid = 7494152 }} further implicate the AVS in maintaining the perceived auditory objects in working memory. 
In humans, area mSTG-aSTG was also reported active during rehearsal of heard syllables with MEG{{cite journal | vauthors = Kaiser J, Ripper B, Birbaumer N, Lutzenberger W | title = Dynamics of gamma-band activity in human magnetoencephalogram during auditory pattern working memory | journal = NeuroImage | volume = 20 | issue = 2 | pages = 816–27 | date = October 2003 | pmid = 14568454 | doi = 10.1016/s1053-8119(03)00350-1 | s2cid = 19373941 }} and fMRI.{{cite journal | vauthors = Buchsbaum BR, Olsen RK, Koch P, Berman KF | title = Human dorsal and ventral auditory streams subserve rehearsal-based and echoic processes during verbal working memory | journal = Neuron | volume = 48 | issue = 4 | pages = 687–97 | date = November 2005 | pmid = 16301183 | doi = 10.1016/j.neuron.2005.09.029 | s2cid = 13202604 | doi-access = free }} The latter study further demonstrated that working memory in the AVS is for the acoustic properties of spoken words and that it is independent of working memory in the ADS, which mediates inner speech. Working memory studies in monkeys also suggest that in monkeys, in contrast to humans, the AVS is the dominant working memory store.{{cite journal | vauthors = Scott BH, Mishkin M, Yin P | title = Monkeys have a limited form of short-term memory in audition | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 109 | issue = 30 | pages = 12237–41 | date = July 2012 | pmid = 22778411 | doi = 10.1073/pnas.1209685109 | pmc=3409773| bibcode = 2012PNAS..10912237S | doi-access = free }}

In humans, downstream to the aSTG, the MTG and TP are thought to constitute the semantic lexicon, which is a long-term memory repository of audio-visual representations that are interconnected on the basis of semantic relationships (see also reviews discussing this topic). The primary evidence for this role of the MTG-TP is that patients with damage to this region (e.g., patients with semantic dementia or herpes simplex virus encephalitis) are reported{{cite journal | vauthors = Noppeney U, Patterson K, Tyler LK, Moss H, Stamatakis EA, Bright P, Mummery C, Price CJ | title = Temporal lobe lesions and semantic impairment: a comparison of herpes simplex virus encephalitis and semantic dementia | journal = Brain | volume = 130 | issue = Pt 4 | pages = 1138–47 | date = April 2007 | pmid = 17251241 | doi = 10.1093/brain/awl344 | doi-access = free }}{{cite journal | vauthors = Patterson K, Nestor PJ, Rogers TT | title = Where do you know what you know? The representation of semantic knowledge in the human brain | journal = Nature Reviews. Neuroscience | volume = 8 | issue = 12 | pages = 976–87 | date = December 2007 | pmid = 18026167 | doi = 10.1038/nrn2277 | s2cid = 7310189 }} with an impaired ability to describe visual and auditory objects and a tendency to commit semantic errors when naming objects (i.e., semantic paraphasia). Semantic paraphasias were also expressed by aphasic patients with left MTG-TP damage{{cite journal | vauthors = Schwartz MF, Kimberg DY, Walker GM, Faseyitan O, Brecher A, Dell GS, Coslett HB | title = Anterior temporal involvement in semantic word retrieval: voxel-based lesion-symptom mapping evidence from aphasia | journal = Brain | volume = 132 | issue = Pt 12 | pages = 3411–27 | date = December 2009 | pmid = 19942676 | doi = 10.1093/brain/awp284 | pmc=2792374}} and were shown to occur in non-aphasic patients after electro-stimulation to this region{{cite journal | vauthors = Hamberger MJ, McClelland S, McKhann GM, Williams AC, Goodman RR | title = Distribution of auditory and visual naming sites in nonlesional temporal lobe epilepsy patients and patients with space-occupying temporal lobe lesions | journal = Epilepsia | volume = 48 | issue = 3 | pages = 531–8 | date = March 2007 | pmid = 17326797 | doi = 10.1111/j.1528-1167.2006.00955.x | s2cid = 12642281 | doi-access = free }} or to the underlying white matter pathway.{{cite journal | vauthors = Duffau H | title = The anatomo-functional connectivity of language revisited. New insights provided by electrostimulation and tractography | journal = Neuropsychologia | volume = 46 | issue = 4 | pages = 927–34 | date = March 2008 | pmid = 18093622 | doi = 10.1016/j.neuropsychologia.2007.10.025 | s2cid = 40514753 }} Two meta-analyses of the fMRI literature also reported that the anterior MTG and TP were consistently active during semantic analysis of speech and text,{{cite journal | vauthors = Vigneau M, Beaucousin V, Hervé PY, Duffau H, Crivello F, Houdé O, Mazoyer B, Tzourio-Mazoyer N | title = Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing | journal = NeuroImage | volume = 30 | issue = 4 | pages = 1414–32 | date = May 2006 | pmid = 16413796 | doi = 10.1016/j.neuroimage.2005.11.002 | s2cid = 8870165 }} and an intra-cortical recording study correlated neural discharge in the MTG with the comprehension of intelligible sentences.{{cite journal | vauthors = Creutzfeldt O, Ojemann G, Lettich E | title = Neuronal activity in the human lateral temporal lobe. I. 
Responses to speech | journal = Experimental Brain Research | volume = 77 | issue = 3 | pages = 451–75 | date = October 1989 | pmid = 2806441 | doi = 10.1007/bf00249600 | s2cid = 19952034 | hdl = 11858/00-001M-0000-002C-89EA-3 | hdl-access = free }}

== Sentence comprehension ==

In addition to extracting meaning from sounds, the MTG-TP region of the AVS appears to have a role in sentence comprehension, possibly by merging concepts together (e.g., merging the concept 'blue' and 'shirt' to create the concept of a 'blue shirt'). The role of the MTG in extracting meaning from sentences has been demonstrated in functional imaging studies reporting stronger activation in the anterior MTG when proper sentences are contrasted with lists of words, sentences in a foreign or nonsense language, scrambled sentences, sentences with semantic or syntactic violations and sentence-like sequences of environmental sounds.{{cite journal | vauthors = Mazoyer BM, Tzourio N, Frak V, Syrota A, Murayama N, Levrier O, Salamon G, Dehaene S, Cohen L, Mehler J | title = The cortical representation of speech | journal = Journal of Cognitive Neuroscience | volume = 5 | issue = 4 | pages = 467–79 | date = October 1993 | pmid = 23964919 | doi = 10.1162/jocn.1993.5.4.467 | s2cid = 22265355 | url = http://www.archipel.uqam.ca/2942/1/The_cortical_representation_of_speech.pdf }}{{cite journal | vauthors = Humphries C, Love T, Swinney D, Hickok G | title = Response of anterior temporal cortex to syntactic and prosodic manipulations during sentence processing | journal = Human Brain Mapping | volume = 26 | issue = 2 | pages = 128–38 | date = October 2005 | pmid = 15895428 | doi = 10.1002/hbm.20148 | pmc = 6871757 }}{{cite journal | vauthors = Humphries C, Willard K, Buchsbaum B, Hickok G | title = Role of anterior temporal cortex in auditory sentence comprehension: an fMRI study | journal = NeuroReport | volume = 12 | issue = 8 | pages = 1749–52 | date = June 2001 | pmid = 11409752 | doi = 10.1097/00001756-200106130-00046 | s2cid = 13039857 }}{{cite journal | vauthors = Vandenberghe R, Nobre AC, Price CJ | title = The response of left temporal cortex to sentences | journal = Journal of Cognitive Neuroscience | volume = 14 | issue = 4 | pages = 550–60 | date = May 2002 | pmid = 12126497 | doi = 10.1162/08989290260045800 | s2cid = 21607482 }}{{cite journal | vauthors = Friederici AD, Rüschemeyer SA, Hahne A, Fiebach CJ | title = The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes | journal = Cerebral Cortex | volume = 13 | issue = 2 | pages = 170–7 | date = February 2003 | pmid = 12507948 | doi = 10.1093/cercor/13.2.170 | doi-access = free | hdl = 11858/00-001M-0000-0010-E3AB-B | hdl-access = free }}{{cite journal | pmid = 15809000 | doi = 10.1016/j.neuroimage.2004.12.013 | volume=25 | title=Language in context: emergent features of word, sentence, and narrative comprehension | year=2005 | journal=NeuroImage | pages=1002–15 | vauthors=Xu J, Kemeny S, Park G, Frattali C, Braun A| issue = 3 | s2cid = 25570583 }}{{cite journal | vauthors = Rogalsky C, Hickok G | title = Selective attention to semantic and syntactic features modulates sentence processing networks in anterior temporal cortex | journal = Cerebral Cortex | volume = 19 | issue = 4 | pages = 786–96 | date = April 2009 | pmid = 18669589 | pmc = 2651476 | doi = 10.1093/cercor/bhn126 }}{{cite journal | vauthors = Pallier C, Devauchelle AD, Dehaene S | title = Cortical representation of the constituent structure of sentences | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 108 | issue = 6 | pages = 2522–7 | date = February 2011 | pmid = 21224415 | pmc = 3038732 | doi = 10.1073/pnas.1018711108 | doi-access = free }} 
One fMRI study{{cite journal | vauthors = Brennan J, Nir Y, Hasson U, Malach R, Heeger DJ, Pylkkänen L | title = Syntactic structure building in the anterior temporal lobe during natural story listening | journal = Brain and Language | volume = 120 | issue = 2 | pages = 163–73 | date = February 2012 | pmid = 20472279 | pmc = 2947556 | doi = 10.1016/j.bandl.2010.04.002 }} in which participants were instructed to read a story further correlated activity in the anterior MTG with the amount of semantic and syntactic content each sentence contained. An EEG study{{cite journal| vauthors = Kotz SA, von Cramon DY, Friederici AD |date= October 2003 |title=Differentiation of syntactic processes in the left and right anterior temporal lobe: Event-related brain potential evidence from lesion patients |journal=Brain and Language|volume=87|issue=1|pages=135–136|doi=10.1016/s0093-934x(03)00236-0 |s2cid= 54320415 }} that contrasted cortical activity while reading sentences with and without syntactic violations in healthy participants and patients with MTG-TP damage, concluded that the MTG-TP in both hemispheres participate in the automatic (rule based) stage of syntactic analysis (ELAN component), and that the left MTG-TP is also involved in a later controlled stage of syntax analysis (P600 component). Patients with damage to the MTG-TP region have also been reported with impaired sentence comprehension.{{cite journal | vauthors = Martin RC, Shelton JR, Yaffee LS | title = Language processing and working memory: Neuropsychological evidence for separate phonological and semantic capacities. | journal = Journal of Memory and Language | date = February 1994 | volume = 33 | issue = 1 | pages = 83–111 | doi = 10.1006/jmla.1994.1005 }}{{cite journal | vauthors = Magnusdottir S, Fillmore P, den Ouden DB, Hjaltason H, Rorden C, Kjartansson O, Bonilha L, Fridriksson J | title = Damage to left anterior temporal cortex predicts impairment of complex syntactic processing: a lesion-symptom mapping study | journal = Human Brain Mapping | volume = 34 | issue = 10 | pages = 2715–23 | date = October 2013 | pmid = 22522937 | doi = 10.1002/hbm.22096 | pmc = 6869931 }} See review{{cite journal | vauthors = Bornkessel-Schlesewsky I, Schlesewsky M, Small SL, Rauschecker JP | title = Neurobiological roots of language in primate audition: common computational properties | journal = Trends in Cognitive Sciences | volume = 19 | issue = 3 | pages = 142–50 | date = March 2015 | pmid = 25600585 | pmc = 4348204 | doi = 10.1016/j.tics.2014.12.008 }} for more information on this topic.

== Bilaterality ==

In contradiction to the Wernicke–Lichtheim–Geschwind model, which holds that sound recognition occurs solely in the left hemisphere, studies that examined the properties of the right or left hemisphere in isolation via unilateral hemispheric anesthesia (i.e., the WADA procedure{{cite journal | vauthors = Hickok G, Okada K, Barr W, Pa J, Rogalsky C, Donnelly K, Barde L, Grant A | title = Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures | journal = Brain and Language | volume = 107 | issue = 3 | pages = 179–84 | date = December 2008 | pmid = 18976806 | doi = 10.1016/j.bandl.2008.09.006 | pmc=2644214}}) or intra-cortical recordings from each hemisphere provided evidence that sound recognition is processed bilaterally. Moreover, a study that instructed patients with disconnected hemispheres (i.e., split-brain patients) to match spoken words to written words presented to the right or left hemifields reported a right-hemisphere vocabulary that almost matches the left hemisphere in size{{cite journal|last=Zaidel|first=Eran | name-list-style = vanc |date= September 1976|title=Auditory Vocabulary of the Right Hemisphere Following Brain Bisection or Hemidecortication | journal=Cortex|volume=12|issue=3|pages=191–211|doi=10.1016/s0010-9452(76)80001-9|pmid=1000988 |s2cid=4479925 |issn=0010-9452|doi-access=free}} (the right-hemisphere vocabulary was equivalent to that of a healthy 11-year-old child). This bilateral recognition of sounds is also consistent with the finding that a unilateral lesion to the auditory cortex rarely results in a deficit in auditory comprehension (i.e., auditory agnosia), whereas a second lesion to the remaining hemisphere (which could occur years later) does.{{cite journal | vauthors = Poeppel D | date = October 2001 |title=Pure word deafness and the bilateral processing of the speech code | journal=Cognitive Science |volume=25 |issue=5 |pages=679–693 |doi=10.1016/s0364-0213(01)00050-7 |doi-access=free }}{{cite journal | vauthors = Ulrich G | title = Interhemispheric functional relationships in auditory agnosia. An analysis of the preconditions and a conceptual model | journal = Brain and Language | volume = 5 | issue = 3 | pages = 286–300 | date = May 1978 | pmid = 656899 | doi = 10.1016/0093-934x(78)90027-5 | s2cid = 33841186 }} Finally, as mentioned earlier, an fMRI scan of an auditory agnosia patient demonstrated bilateral reduced activation in the anterior auditory cortices, and electro-stimulation to these regions in both hemispheres resulted in impaired speech recognition.

= {{Anchor|Auditory dorsal stream}} Auditory dorsal stream =

The auditory dorsal stream connects the auditory cortex with the parietal lobe, which in turn connects with the inferior frontal gyrus. In both humans and non-human primates, the auditory dorsal stream is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory.

== Speech production ==

Studies of present-day humans have demonstrated a role for the ADS in speech production, particularly in the vocal expression of the names of objects. For instance, in a series of studies in which sub-cortical fibers were directly stimulated, interference in the left pSTG and IPL resulted in errors during object-naming tasks, and interference in the left IFG resulted in speech arrest. Magnetic interference in the pSTG and IFG of healthy participants also produced speech errors and speech arrest, respectively.{{cite journal | vauthors = Stewart L, Walsh V, Frith U, Rothwell JC | title = TMS produces two dissociable types of speech disruption | journal = NeuroImage | volume = 13 | issue = 3 | pages = 472–8 | date = March 2001 | pmid = 11170812 | doi = 10.1006/nimg.2000.0701 | s2cid = 10392466 | url = https://research.gold.ac.uk/60/1/PSY-Stewart2001h_GRO.pdf }}{{cite journal | vauthors = Acheson DJ, Hamidi M, Binder JR, Postle BR | title = A common neural substrate for language production and verbal working memory | journal = Journal of Cognitive Neuroscience | volume = 23 | issue = 6 | pages = 1358–67 | date = June 2011 | pmid = 20617889 | doi = 10.1162/jocn.2010.21519 | pmc=3053417}} One study has also reported that electrical stimulation of the left IPL caused patients to believe that they had spoken when they had not, and that IFG stimulation caused patients to unconsciously move their lips.{{cite journal | vauthors = Desmurget M, Reilly KT, Richard N, Szathmari A, Mottolese C, Sirigu A | title = Movement intention after parietal cortex stimulation in humans | journal = Science | volume = 324 | issue = 5928 | pages = 811–3 | date = May 2009 | pmid = 19423830 | doi = 10.1126/science.1169896 | bibcode = 2009Sci...324..811D | s2cid = 6555881 }} The contribution of the ADS to the process of articulating the names of objects could be dependent on the reception of afferents from the semantic lexicon of the AVS, as an intra-cortical recording study reported activation in the posterior MTG prior to activation in the Spt-IPL region when patients named objects in pictures.{{cite journal | vauthors = Edwards E, Nagarajan SS, Dalal SS, Canolty RT, Kirsch HE, Barbaro NM, Knight RT | title = Spatiotemporal imaging of cortical activation during verb generation and picture naming | journal = NeuroImage | volume = 50 | issue = 1 | pages = 291–301 | date = March 2010 | pmid = 20026224 | pmc = 2957470 | doi = 10.1016/j.neuroimage.2009.12.035 }} Intra-cortical electrical stimulation studies have also reported that electrical interference with the posterior MTG was correlated with impaired object naming.{{cite journal | vauthors = Boatman D, Gordon B, Hart J, Selnes O, Miglioretti D, Lenz F | title = Transcortical sensory aphasia: revisited and revised | journal = Brain | volume = 123 | issue = 8 | pages = 1634–42 | date = August 2000 | pmid = 10908193 | doi = 10.1093/brain/123.8.1634 | doi-access = free }}

Additionally, lesion studies of stroke patients have provided evidence supporting the role of the dorsal stream in speech production, as posited by the dual-stream model. Recent research using multivariate lesion/disconnectome symptom mapping has shown that lower scores on speech production tasks are associated with lesions and abnormalities in the left inferior parietal lobe and frontal lobe. These findings from stroke patients further support the involvement of the dorsal stream pathway in speech production, complementing the stimulation and interference studies in healthy participants.{{Cite journal |last1=Jiang |first1=Yaya |last2=Gong |first2=Gaolang |date=2024-01-23 |title=Common and distinct patterns underlying different linguistic tasks: multivariate disconnectome symptom mapping in poststroke patients |url=https://doi.org/10.1093/cercor/bhae008 |journal=Cerebral Cortex |language=en |volume=34 |issue=2 |doi=10.1093/cercor/bhae008 |pmid=38265297 |issn=1047-3211}}

== Vocal mimicry ==

Although sound perception is primarily ascribed to the AVS, the ADS appears to be associated with several aspects of speech perception. For instance, in a meta-analysis of fMRI studies (Turkeltaub and Coslett, 2010), in which the auditory perception of phonemes was contrasted with closely matching sounds, and the studies were rated for the required level of attention, the authors concluded that attention to phonemes correlates with strong activation in the pSTG-pSTS region. An intra-cortical recording study in which participants were instructed to identify syllables also correlated the hearing of each syllable with its own activation pattern in the pSTG.{{cite journal | vauthors = Chang EF, Rieger JW, Johnson K, Berger MS, Barbaro NM, Knight RT | title = Categorical speech representation in human superior temporal gyrus | language = En | journal = Nature Neuroscience | volume = 13 | issue = 11 | pages = 1428–32 | date = November 2010 | pmid = 20890293 | pmc = 2967728 | doi = 10.1038/nn.2641 }} The involvement of the ADS in both speech perception and production has been further illuminated in several pioneering functional imaging studies that contrasted speech perception with overt or covert speech production.{{Cite journal| vauthors = Buchsbaum BR, Hickok G, Humphries C | date= September 2001 |title=Role of left posterior superior temporal gyrus in phonological processing for speech perception and production |journal=Cognitive Science |volume=25|issue=5|pages=663–678|doi=10.1207/s15516709cog2505_2|issn=0364-0213|doi-access=free}}{{cite journal | vauthors = Wise RJ, Scott SK, Blank SC, Mummery CJ, Murphy K, Warburton EA | title = Separate neural subsystems within 'Wernicke's area' | journal = Brain | volume = 124 | issue = Pt 1 | pages = 83–95 | date = January 2001 | pmid = 11133789 | doi = 10.1093/brain/124.1.83 | doi-access = free }}{{cite journal | vauthors = Hickok G, Buchsbaum B, Humphries C, Muftuler T | title = Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt | journal = Journal of Cognitive Neuroscience | volume = 15 | issue = 5 | pages = 673–82 | date = July 2003 | pmid = 12965041 | doi = 10.1162/089892903322307393 }} These studies demonstrated that the pSTS is active only during the perception of speech, whereas area Spt is active during both the perception and production of speech. The authors concluded that the pSTS projects to area Spt, which converts the auditory input into articulatory movements.{{cite journal | vauthors = Warren JE, Wise RJ, Warren JD | title = Sounds do-able: auditory-motor transformations and the posterior temporal plane | journal = Trends in Neurosciences | volume = 28 | issue = 12 | pages = 636–43 | date = December 2005 | pmid = 16216346 | doi = 10.1016/j.tins.2005.09.010 | s2cid = 36678139 }}{{cite journal | vauthors = Hickok G, Poeppel D | title = The cortical organization of speech processing | journal = Nature Reviews. Neuroscience | volume = 8 | issue = 5 | pages = 393–402 | date = May 2007 | pmid = 17431404 | doi = 10.1038/nrn2113 | s2cid = 6199399 }} Similar results have been obtained in a study in which participants' temporal and parietal lobes were electrically stimulated. This study reported that electrically stimulating the pSTG region interferes with sentence comprehension and that stimulation of the IPL interferes with the ability to vocalize the names of objects.
The authors also reported that stimulation in area Spt and the inferior IPL induced interference during both object-naming and speech-comprehension tasks. The role of the ADS in speech repetition is also congruent with the results of the other functional imaging studies that have localized activation during speech repetition tasks to ADS regions.{{cite journal | vauthors = Karbe H, Herholz K, Weber-Luxenburger G, Ghaemi M, Heiss WD | title = Cerebral networks and functional brain asymmetry: evidence from regional metabolic changes during word repetition | journal = Brain and Language | volume = 63 | issue = 1 | pages = 108–21 | date = June 1998 | pmid = 9642023 | doi = 10.1006/brln.1997.1937 | s2cid = 31335617 }}{{cite journal | vauthors = Giraud AL, Price CJ | title = The constraints functional neuroimaging places on classical models of auditory word processing | journal = Journal of Cognitive Neuroscience | volume = 13 | issue = 6 | pages = 754–65 | date = August 2001 | pmid = 11564320 | doi = 10.1162/08989290152541421 | s2cid = 13916709 | url = https://hal.science/hal-03995087/file/giraud2001.pdf }}{{cite journal | vauthors = Graves WW, Grabowski TJ, Mehta S, Gupta P | title = The left posterior superior temporal gyrus participates specifically in accessing lexical phonology | journal = Journal of Cognitive Neuroscience | volume = 20 | issue = 9 | pages = 1698–710 | date = September 2008 | pmid = 18345989 | pmc = 2570618 | doi = 10.1162/jocn.2008.20113 }} An intra-cortical recording study that recorded activity throughout most of the temporal, parietal and frontal lobes also reported activation in the pSTG, Spt, IPL and IFG when speech repetition is contrasted with speech perception.{{cite journal | vauthors = Towle VL, Yoon HA, Castelle M, Edgar JC, Biassou NM, Frim DM, Spire JP, Kohrman MH | title = ECoG gamma activity during a language task: differentiating expressive and receptive speech areas | journal = Brain | volume = 131 | issue = Pt 8 | pages = 2013–27 | date = August 2008 | pmid = 18669510 | pmc = 2724904 | doi = 10.1093/brain/awn147 }} Neuropsychological studies have also found that individuals with speech repetition deficits but preserved auditory comprehension (i.e., conduction aphasia) suffer from circumscribed damage to the Spt-IPL area{{cite journal | vauthors = Selnes OA, Knopman DS, Niccum N, Rubens AB | title = The critical role of Wernicke's area in sentence repetition | journal = Annals of Neurology | volume = 17 | issue = 6 | pages = 549–57 | date = June 1985 | pmid = 4026225 | doi = 10.1002/ana.410170604 | s2cid = 12914191 }}{{cite journal | vauthors = Axer H, von Keyserlingk AG, Berks G, von Keyserlingk DG | title = Supra- and infrasylvian conduction aphasia | journal = Brain and Language | volume = 76 | issue = 3 | pages = 317–31 | date = March 2001 | pmid = 11247647 | doi = 10.1006/brln.2000.2425 | s2cid = 25406527 }}{{cite journal | vauthors = Bartha L, Benke T | title = Acute conduction aphasia: an analysis of 20 cases | journal = Brain and Language | volume = 85 | issue = 1 | pages = 93–108 | date = April 2003 | pmid = 12681350 | doi = 10.1016/s0093-934x(02)00502-3 | s2cid = 18466425 }}{{cite journal | vauthors = Baldo JV, Katseff S, Dronkers NF | title = Brain Regions Underlying Repetition and Auditory-Verbal Short-term Memory Deficits in Aphasia: Evidence from Voxel-based Lesion Symptom Mapping | journal = Aphasiology | volume = 26 | issue = 3–4 | pages = 338–354 | date = March 2012 | pmid = 24976669 | pmc = 4070523 | doi = 10.1080/02687038.2011.602391 
}}{{cite journal | vauthors = Baldo JV, Klostermann EC, Dronkers NF | title = It's either a cook or a baker: patients with conduction aphasia get the gist but lose the trace | journal = Brain and Language | volume = 105 | issue = 2 | pages = 134–40 | date = May 2008 | pmid = 18243294 | doi = 10.1016/j.bandl.2007.12.007 | s2cid = 997735 }}{{cite journal | vauthors = Fridriksson J, Kjartansson O, Morgan PS, Hjaltason H, Magnusdottir S, Bonilha L, Rorden C | title = Impaired speech repetition and left parietal lobe damage | journal = The Journal of Neuroscience | volume = 30 | issue = 33 | pages = 11057–61 | date = August 2010 | pmid = 20720112 | doi = 10.1523/jneurosci.1120-10.2010 | pmc=2936270}}{{cite journal | vauthors = Buchsbaum BR, Baldo J, Okada K, Berman KF, Dronkers N, D'Esposito M, Hickok G | title = Conduction aphasia, sensory-motor integration, and phonological short-term memory - an aggregate analysis of lesion and fMRI data | journal = Brain and Language | volume = 119 | issue = 3 | pages = 119–28 | date = December 2011 | pmid = 21256582 | doi = 10.1016/j.bandl.2010.12.001 | pmc=3090694}} or damage to the projections that emanate from this area and target the frontal lobe{{cite journal | vauthors = Yamada K, Nagakane Y, Mizuno T, Hosomi A, Nakagawa M, Nishimura T | title = MR tractography depicting damage to the arcuate fasciculus in a patient with conduction aphasia | journal = Neurology | volume = 68 | issue = 10 | pages = 789 | date = March 2007 | pmid = 17339591 | doi = 10.1212/01.wnl.0000256348.65744.b2 | doi-access = free }}{{cite journal | vauthors = Breier JI, Hasan KM, Zhang W, Men D, Papanicolaou AC | title = Language dysfunction after stroke and damage to white matter tracts evaluated using diffusion tensor imaging | journal = AJNR. American Journal of Neuroradiology | volume = 29 | issue = 3 | pages = 483–7 | date = March 2008 | pmid = 18039757 | pmc = 3073452 | doi = 10.3174/ajnr.A0846 }}{{cite journal | vauthors = Zhang Y, Wang C, Zhao X, Chen H, Han Z, Wang Y | title = Diffusion tensor imaging depicting damage to the arcuate fasciculus in patients with conduction aphasia: a study of the Wernicke–Geschwind model | journal = Neurological Research | volume = 32 | issue = 7 | pages = 775–8 | date = September 2010 | pmid = 19825277 | doi = 10.1179/016164109x12478302362653 | s2cid = 22960870 }}{{cite journal | vauthors = Jones OP, Prejawa S, Hope TM, Oberhuber M, Seghier ML, Leff AP, Green DW, Price CJ | title = Sensory-to-motor integration during auditory repetition: a combined fMRI and lesion study | journal = Frontiers in Human Neuroscience | volume = 8 | pages = 24 | date = 2014 | pmid = 24550807 | pmc = 3908611 | doi = 10.3389/fnhum.2014.00024 | doi-access = free }} Studies have also reported a transient speech repetition deficit in patients after direct intra-cortical electrical stimulation to this same region.{{cite journal | vauthors = Quigg M, Fountain NB | title = Conduction aphasia elicited by stimulation of the left posterior superior temporal gyrus | journal = Journal of Neurology, Neurosurgery, and Psychiatry | volume = 66 | issue = 3 | pages = 393–6 | date = March 1999 | pmid = 10084542 | doi = 10.1136/jnnp.66.3.393 | pmc=1736266}}{{cite journal | vauthors = Quigg M, Geldmacher DS, Elias WJ | title = Conduction aphasia as a function of the dominant posterior perisylvian cortex. 
Report of two cases | language = en-US | journal = Journal of Neurosurgery | volume = 104 | issue = 5 | pages = 845–8 | date = May 2006 | pmid = 16703895 | doi = 10.3171/jns.2006.104.5.845 }} Insight into the purpose of speech repetition in the ADS is provided by longitudinal studies of children that correlated the learning of foreign vocabulary with the ability to repeat nonsense words.{{Cite journal|last1=Service|first1=Elisabet|last2=Kohonen|first2=Viljo | name-list-style = vanc |date= April 1995 |title=Is the relation between phonological memory and foreign language learning accounted for by vocabulary acquisition? |journal=Applied Psycholinguistics | volume=16|issue=2|pages=155–172|doi=10.1017/S0142716400007062 |s2cid=143974128}}{{cite journal | vauthors = Service E | title = Phonology, working memory, and foreign-language learning | journal = The Quarterly Journal of Experimental Psychology. A, Human Experimental Psychology | volume = 45 | issue = 1 | pages = 21–50 | date = July 1992 | pmid = 1636010 | doi = 10.1080/14640749208401314 | s2cid = 43268252 }}

== Speech monitoring ==

In addition to repeating and producing speech, the ADS appears to have a role in monitoring the quality of the speech output. Neuroanatomical evidence suggests that the ADS is equipped with descending connections from the IFG to the pSTG that relay information about motor activity (i.e., corollary discharges) in the vocal apparatus (mouth, tongue, vocal folds). This feedback marks the sound perceived during speech production as self-produced and can be used to adjust the vocal apparatus to increase the similarity between the perceived and emitted calls. Evidence for descending connections from the IFG to the pSTG has been offered by a study that electrically stimulated the IFG during surgical operations and reported the spread of activation to the pSTG-pSTS-Spt region.{{cite journal | vauthors = Matsumoto R, Nair DR, LaPresto E, Najm I, Bingaman W, Shibasaki H, Lüders HO | title = Functional connectivity in the human language system: a cortico-cortical evoked potential study | journal = Brain | volume = 127 | issue = Pt 10 | pages = 2316–30 | date = October 2004 | pmid = 15269116 | doi = 10.1093/brain/awh246 | doi-access = free }} A study{{cite journal | vauthors = Kimura D, Watson N | title = The relation between oral movement control and speech | journal = Brain and Language | volume = 37 | issue = 4 | pages = 565–90 | date = November 1989 | pmid = 2479446 | doi = 10.1016/0093-934x(89)90112-0 | s2cid = 39913744 }} that compared the ability of aphasic patients with frontal, parietal or temporal lobe damage to quickly and repeatedly articulate a string of syllables reported that damage to the frontal lobe interfered with the articulation of both identical syllabic strings ("Bababa") and non-identical syllabic strings ("Badaga"), whereas patients with temporal or parietal lobe damage only exhibited impairment when articulating non-identical syllabic strings. Because the patients with temporal and parietal lobe damage were capable of repeating the syllabic string in the first task, their speech perception and production appear to be relatively preserved, and their deficit in the second task is therefore due to impaired monitoring. Demonstrating the role of the descending ADS connections in monitoring emitted calls, an fMRI study instructed participants to speak under normal conditions or when hearing a modified version of their own voice (delayed first formant) and reported that hearing a distorted version of one's own voice results in increased activation in the pSTG.{{cite journal | vauthors = Tourville JA, Reilly KJ, Guenther FH | title = Neural mechanisms underlying auditory feedback control of speech | journal = NeuroImage | volume = 39 | issue = 3 | pages = 1429–43 | date = February 2008 | pmid = 18035557 | pmc = 3658624 | doi = 10.1016/j.neuroimage.2007.09.054 }} Further demonstrating that the ADS facilitates motor feedback during mimicry is an intra-cortical recording study that contrasted speech perception and repetition. The authors reported that, in addition to activation in the IPL and IFG, speech repetition is characterized by stronger activation in the pSTG than during speech perception.

== Integration of phonemes with lip-movements ==

Although sound perception is primarily ascribed to the AVS, the ADS appears to be associated with several aspects of speech perception. For instance, in a meta-analysis of fMRI studies{{cite journal | vauthors = Turkeltaub PE, Coslett HB | title = Localization of sublexical speech perception components | journal = Brain and Language | volume = 114 | issue = 1 | pages = 1–15 | date = July 2010 | pmid = 20413149 | pmc = 2914564 | doi = 10.1016/j.bandl.2010.03.008 }} in which the auditory perception of phonemes was contrasted with closely matching sounds, and the studies were rated for the required level of attention, the authors concluded that attention to phonemes correlates with strong activation in the pSTG-pSTS region. An intra-cortical recording study in which participants were instructed to identify syllables also correlated the hearing of each syllable with its own activation pattern in the pSTG.{{cite journal | vauthors = Chang EF, Rieger JW, Johnson K, Berger MS, Barbaro NM, Knight RT | title = Categorical speech representation in human superior temporal gyrus | journal = Nature Neuroscience | volume = 13 | issue = 11 | pages = 1428–32 | date = November 2010 | pmid = 20890293 | pmc = 2967728 | doi = 10.1038/nn.2641 }} Consistent with the role of the ADS in discriminating phonemes, studies have ascribed the integration of phonemes and their corresponding lip movements (i.e., visemes) to the pSTS of the ADS. For example, an fMRI study{{cite journal | vauthors = Nath AR, Beauchamp MS | title = A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion | journal = NeuroImage | volume = 59 | issue = 1 | pages = 781–7 | date = January 2012 | pmid = 21787869 | pmc = 3196040 | doi = 10.1016/j.neuroimage.2011.07.024 }} has correlated activation in the pSTS with the McGurk illusion (in which hearing the syllable "ba" while seeing the viseme "ga" results in the perception of the syllable "da"). Another study has found that using magnetic stimulation to interfere with processing in this area further disrupts the McGurk illusion.{{cite journal | vauthors = Beauchamp MS, Nath AR, Pasalar S | title = fMRI-Guided transcranial magnetic stimulation reveals that the superior temporal sulcus is a cortical locus of the McGurk effect | journal = The Journal of Neuroscience | volume = 30 | issue = 7 | pages = 2414–7 | date = February 2010 | pmid = 20164324 | pmc = 2844713 | doi = 10.1523/JNEUROSCI.4865-09.2010 }} The association of the pSTS with the audio-visual integration of speech has also been demonstrated in a study that presented participants with pictures of faces and spoken words of varying quality.
The study reported that the pSTS selects for the combined increase of the clarity of faces and spoken words.{{cite journal | vauthors = McGettigan C, Faulkner A, Altarelli I, Obleser J, Baverstock H, Scott SK | title = Speech comprehension aided by multiple modalities: behavioural and neural interactions | journal = Neuropsychologia | volume = 50 | issue = 5 | pages = 762–76 | date = April 2012 | pmid = 22266262 | pmc = 4050300 | doi = 10.1016/j.neuropsychologia.2012.01.010 }} Corroborating evidence has been provided by an fMRI study{{cite journal | vauthors = Stevenson RA, James TW | title = Audiovisual integration in human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition | journal = NeuroImage | volume = 44 | issue = 3 | pages = 1210–23 | date = February 2009 | pmid = 18973818 | doi = 10.1016/j.neuroimage.2008.09.034 | s2cid = 8342349 }} that contrasted the perception of audio-visual speech with audio-visual non-speech (pictures and sounds of tools). This study reported the detection of speech-selective compartments in the pSTS. In addition, an fMRI study{{cite journal | vauthors = Bernstein LE, Jiang J, Pantazis D, Lu ZL, Joshi A | title = Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays | journal = Human Brain Mapping | volume = 32 | issue = 10 | pages = 1660–76 | date = October 2011 | pmid = 20853377 | pmc = 3120928 | doi = 10.1002/hbm.21139 }} that contrasted congruent audio-visual speech with incongruent speech (pictures of still faces) reported pSTS activation. For a review presenting additional converging evidence regarding the role of the pSTS and ADS in phoneme-viseme integration, see Campbell (2008).{{cite journal | vauthors = Campbell R | title = The processing of audio-visual speech: empirical and neural bases | journal = Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences | volume = 363 | issue = 1493 | pages = 1001–10 | date = March 2008 | pmid = 17827105 | pmc = 2606792 | doi = 10.1098/rstb.2007.2155 }}

Empirical research has demonstrated that visual lip movements enhance speech processing along the auditory dorsal stream, particularly in noisy conditions. A recent study{{Cite journal |last1=Zhang |first1=Lei |last2=Du |first2=Yi |date=August 2022 |title=Lip movements enhance speech representations and effective connectivity in auditory dorsal stream |url=https://linkinghub.elsevier.com/retrieve/pii/S105381192200430X |journal=NeuroImage |language=en |volume=257 |pages=119311 |doi=10.1016/j.neuroimage.2022.119311|pmid=35589000 |doi-access=free }} found that dorsal stream regions, including frontal speech motor areas and the supramarginal gyrus, show improved neural representations of speech sounds when visual lip movements are available.

== Phonological long-term memory ==

A growing body of evidence indicates that humans, in addition to having a long-term store for word meanings located in the MTG-TP of the AVS (i.e., the semantic lexicon), also have a long-term store for the names of objects located in the Spt-IPL region of the ADS (i.e., the phonological lexicon). For example, a study{{cite journal | vauthors = Schwartz MF, Faseyitan O, Kim J, Coslett HB | title = The dorsal stream contribution to phonological retrieval in object naming | journal = Brain | volume = 135 | issue = Pt 12 | pages = 3799–814 | date = December 2012 | pmid = 23171662 | pmc = 3525060 | doi = 10.1093/brain/aws300 }}{{cite journal | vauthors = Schwartz MF, Kimberg DY, Walker GM, Faseyitan O, Brecher A, Dell GS, Coslett HB | title = Anterior temporal involvement in semantic word retrieval: voxel-based lesion-symptom mapping evidence from aphasia | journal = Brain | volume = 132 | issue = Pt 12 | pages = 3411–27 | date = December 2009 | pmid = 19942676 | pmc = 2792374 | doi = 10.1093/brain/awp284 }} examining patients with damage to the AVS (MTG damage) or damage to the ADS (IPL damage) reported that MTG damage results in individuals incorrectly identifying objects (e.g., calling a "goat" a "sheep," an example of semantic paraphasia). Conversely, IPL damage results in individuals correctly identifying the object but incorrectly pronouncing its name (e.g., saying "gof" instead of "goat," an example of phonemic paraphasia). Semantic paraphasia errors have also been reported in patients receiving intra-cortical electrical stimulation of the AVS (MTG), and phonemic paraphasia errors have been reported in patients whose ADS (pSTG, Spt, and IPL) received intra-cortical electrical stimulation.{{cite journal|last=Ojemann|first=George A. | name-list-style = vanc |date= June 1983 |title=Brain organization for language from the perspective of electrical stimulation mapping |journal=Behavioral and Brain Sciences | volume=6|issue=2|pages=189–206|doi=10.1017/S0140525X00015491|s2cid=143189089 |issn=1469-1825}} Further supporting the role of the ADS in object naming is an MEG study that localized activity in the IPL during the learning and during the recall of object names.{{cite journal | vauthors = Cornelissen K, Laine M, Renvall K, Saarinen T, Martin N, Salmelin R | title = Learning new names for new objects: cortical effects as measured by magnetoencephalography | journal = Brain and Language | volume = 89 | issue = 3 | pages = 617–22 | date = June 2004 | pmid = 15120553 | doi = 10.1016/j.bandl.2003.12.007 | s2cid = 32224334 }} A study that induced magnetic interference in participants' IPL while they answered questions about an object reported that the participants were capable of answering questions regarding the object's characteristics or perceptual attributes but were impaired when asked whether the word contained two or three syllables.{{cite journal | vauthors = Hartwigsen G, Baumgaertner A, Price CJ, Koehnke M, Ulmer S, Siebner HR | title = Phonological decisions require both the left and right supramarginal gyri | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 107 | issue = 38 | pages = 16494–9 | date = September 2010 | pmid = 20807747 | pmc = 2944751 | doi = 10.1073/pnas.1008121107 | bibcode = 2010PNAS..10716494H | doi-access = free }} An MEG study has also correlated recovery from anomia (a disorder characterized by an impaired ability to name objects) with changes in IPL activation.{{cite journal | vauthors = Cornelissen K, Laine 
M, Tarkiainen A, Järvensivu T, Martin N, Salmelin R | title = Adult brain plasticity elicited by anomia treatment | journal = Journal of Cognitive Neuroscience | volume = 15 | issue = 3 | pages = 444–61 | date = April 2003 | pmid = 12729495 | doi = 10.1162/089892903321593153 | s2cid = 1597939 | url = https://aaltodoc.aalto.fi/handle/123456789/30995 }} Further supporting the role of the IPL in encoding the sounds of words are studies reporting that, compared to monolinguals, bilinguals have greater cortical density in the IPL but not the MTG.{{cite journal | vauthors = Mechelli A, Crinion JT, Noppeney U, O'Doherty J, Ashburner J, Frackowiak RS, Price CJ | title = Neurolinguistics: structural plasticity in the bilingual brain | journal = Nature | volume = 431 | issue = 7010 | pages = 757 | date = October 2004 | pmid = 15483594 | doi = 10.1038/431757a | bibcode = 2004Natur.431..757M | hdl = 11858/00-001M-0000-0013-D79B-1 | s2cid = 4338340 | hdl-access = free }}{{cite journal | vauthors = Green DW, Crinion J, Price CJ | title = Exploring cross-linguistic vocabulary effects on brain structures using voxel-based morphometry | journal = Bilingualism | volume = 10 | issue = 2 | pages = 189–199 | date = July 2007 | pmid = 18418473 | doi = 10.1017/S1366728907002933 | pmc=2312335}} Because evidence shows that, in bilinguals, different phonological representations of the same word share the same semantic representation,{{cite journal |last=Willness |first=Chelsea | name-list-style = vanc |date=2016-01-08 |title= The Oxford handbook of organizational climate and culture By Benjamin Schneider & Karen M. Barbera (Eds.) New York, NY: Oxford University Press, 2014. {{Text|ISBN}} 978-0-19-986071-5 | journal=British Journal of Psychology |department=Book Reviews |volume=107 |issue=1 |pages=201–202 |doi=10.1111/bjop.12170 }} this increase in density in the IPL verifies the existence of the phonological lexicon: the semantic lexicon of bilinguals is expected to be similar in size to the semantic lexicon of monolinguals, whereas their phonological lexicon should be twice the size. Consistent with this finding, cortical density in the IPL of monolinguals also correlates with vocabulary size.{{cite journal | vauthors = Lee H, Devlin JT, Shakeshaft C, Stewart LH, Brennan A, Glensman J, Pitcher K, Crinion J, Mechelli A, Frackowiak RS, Green DW, Price CJ | title = Anatomical traces of vocabulary acquisition in the adolescent brain | journal = The Journal of Neuroscience | volume = 27 | issue = 5 | pages = 1184–9 | date = January 2007 | pmid = 17267574 | doi = 10.1523/JNEUROSCI.4442-06.2007 | pmc=6673201}}{{cite journal | vauthors = Richardson FM, Thomas MS, Filippi R, Harth H, Price CJ | title = Contrasting effects of vocabulary knowledge on temporal and parietal brain structure across lifespan | journal = Journal of Cognitive Neuroscience | volume = 22 | issue = 5 | pages = 943–54 | date = May 2010 | pmid = 19366285 | pmc = 2860571 | doi = 10.1162/jocn.2009.21238 }} Notably, the functional dissociation of the AVS and ADS in object-naming tasks is supported by cumulative evidence from reading research showing that semantic errors are correlated with MTG impairment and phonemic errors with IPL impairment. 
Based on these associations, the semantic analysis of text has been linked to the inferior temporal gyrus and MTG, and the phonological analysis of text has been linked to the pSTG-Spt-IPL.{{cite journal | vauthors = Jobard G, Crivello F, Tzourio-Mazoyer N | title = Evaluation of the dual route theory of reading: a metanalysis of 35 neuroimaging studies | journal = NeuroImage | volume = 20 | issue = 2 | pages = 693–712 | date = October 2003 | pmid = 14568445 | doi = 10.1016/s1053-8119(03)00343-4 | s2cid = 739665 }}{{cite journal | vauthors = Bolger DJ, Perfetti CA, Schneider W | title = Cross-cultural effect on the brain revisited: universal structures plus writing system variation | language = fr | journal = Human Brain Mapping | volume = 25 | issue = 1 | pages = 92–104 | date = May 2005 | pmid = 15846818 | doi = 10.1002/hbm.20124 | pmc = 6871743 }}{{cite journal | vauthors = Brambati SM, Ogar J, Neuhaus J, Miller BL, Gorno-Tempini ML | title = Reading disorders in primary progressive aphasia: a behavioral and neuroimaging study | journal = Neuropsychologia | volume = 47 | issue = 8–9 | pages = 1893–900 | date = July 2009 | pmid = 19428421 | pmc = 2734967 | doi = 10.1016/j.neuropsychologia.2009.02.033 }}

== Phonological working memory ==

Working memory is often treated as the temporary activation of the representations stored in long-term memory that are used for speech (phonological representations). This sharing of resources between working memory and speech is evident from the finding{{cite journal |last1=Baddeley |first1=Alan |last2=Lewis |first2=Vivien |last3=Vallar |first3=Giuseppe | name-list-style = vanc | date = May 1984 |title=Exploring the Articulatory Loop |journal=The Quarterly Journal of Experimental Psychology Section A | volume=36|issue=2|pages=233–252|doi=10.1080/14640748408402157 |s2cid=144313607 |doi-access=free }}{{cite journal | vauthors = Cowan N | title = The magical number 4 in short-term memory: a reconsideration of mental storage capacity | journal = The Behavioral and Brain Sciences | volume = 24 | issue = 1 | pages = 87–114; discussion 114–85 | date = February 2001 | pmid = 11515286 | doi = 10.1017/S0140525X01003922 | doi-access = free }} that speaking during rehearsal results in a significant reduction in the number of items that can be recalled from working memory (articulatory suppression). The involvement of the phonological lexicon in working memory is also evidenced by the tendency of individuals to make more errors when recalling words from a recently learned list of phonologically similar words than from a list of phonologically dissimilar words (the phonological similarity effect). Studies have also found that speech errors committed during reading are remarkably similar to speech errors made during the recall of recently learned, phonologically similar words from working memory.{{cite journal | vauthors = Caplan D, Rochon E, Waters GS | title = Articulatory and phonological determinants of word length effects in span tasks | journal = The Quarterly Journal of Experimental Psychology.
A, Human Experimental Psychology | volume = 45 | issue = 2 | pages = 177–92 | date = August 1992 | pmid = 1410554 | doi = 10.1080/14640749208401323 | s2cid = 32594562 }} Patients with IPL damage have also been observed to exhibit both speech production errors and impaired working memory{{cite journal | vauthors = Waters GS, Rochon E, Caplan D |date=February 1992 |title=The role of high-level speech planning in rehearsal: Evidence from patients with apraxia of speech |journal=Journal of Memory and Language |volume=31 |issue=1 |pages=54–73 |doi=10.1016/0749-596x(92)90005-i }}{{cite journal | vauthors = Cohen L, Bachoud-Levi AC | title = The role of the output phonological buffer in the control of speech timing: a single case study | journal = Cortex; A Journal Devoted to the Study of the Nervous System and Behavior | volume = 31 | issue = 3 | pages = 469–86 | date = September 1995 | pmid = 8536476 | doi = 10.1016/s0010-9452(13)80060-3 | s2cid = 4480375 }}{{cite journal | vauthors = Shallice T, Rumiati RI, Zadini A | title = The selective impairment of the phonological output buffer | journal = Cognitive Neuropsychology | volume = 17 | issue = 6 | pages = 517–46 | date = September 2000 | pmid = 20945193 | doi = 10.1080/02643290050110638 | s2cid = 14811413 }}{{cite journal | vauthors = Shu H, Xiong H, Han Z, Bi Y, Bai X | title = The selective impairment of the phonological output buffer: evidence from a Chinese patient | journal = Behavioural Neurology | volume = 16 | issue = 2–3 | pages = 179–89 | date = 2005 | pmid = 16410633 | pmc = 5478832 | doi = 10.1155/2005/647871 | doi-access = free }} Finally, the view that verbal working memory is the result of temporarily activating phonological representations in the ADS is compatible with recent models describing working memory as the combination of maintaining representations in the mechanism of attention in parallel to temporarily activating representations in long-term memory.{{cite journal|last=Oberauer|first=Klaus | name-list-style = vanc |date=2002|title=Access to information in working memory: Exploring the focus of attention. 
|journal=Journal of Experimental Psychology: Learning, Memory, and Cognition | volume=28|issue=3|pages=411–421|doi=10.1037/0278-7393.28.3.411 |pmid=12018494 }}{{cite journal | vauthors = Unsworth N, Engle RW | title = The nature of individual differences in working memory capacity: active maintenance in primary memory and controlled search from secondary memory | journal = Psychological Review | volume = 114 | issue = 1 | pages = 104–32 | date = January 2007 | pmid = 17227183 | doi = 10.1037/0033-295x.114.1.104 }}{{cite journal| vauthors = Barrouillet P, Camos V |date=December 2012 |title=As Time Goes By |journal=Current Directions in Psychological Science | volume=21 |issue=6 |pages=413–419 |doi=10.1177/0963721412459513 |s2cid=145540189 |url=https://archive-ouverte.unige.ch/unige:88269 }} It has been argued that the role of the ADS in the rehearsal of lists of words is the reason this pathway is active during sentence comprehension.{{cite journal | vauthors = Bornkessel-Schlesewsky I, Schlesewsky M, Small SL, Rauschecker JP | title = Neurobiological roots of language in primate audition: common computational properties | journal = Trends in Cognitive Sciences | volume = 19 | issue = 3 | pages = 142–50 | date = March 2015 | pmid = 25600585 | pmc = 4348204 | doi = 10.1016/j.tics.2014.12.008 }} For a review of the role of the ADS in working memory, see Buchsbaum and D'Esposito (2008).{{cite journal | vauthors = Buchsbaum BR, D'Esposito M | title = The search for the phonological store: from loop to convolution | journal = Journal of Cognitive Neuroscience | volume = 20 | issue = 5 | pages = 762–78 | date = May 2008 | pmid = 18201133 | doi = 10.1162/jocn.2008.20501 | s2cid = 17878480 }}

Studies have shown that performance on phonological working memory tasks correlates with the properties of the left dorsal branch of the arcuate fasciculus (AF), a white matter pathway that contains two branches: a ventral branch connecting Wernicke's area with Broca's area, and a dorsal branch connecting the posterior temporal language regions with attention-regulating areas in the middle frontal gyrus. This dorsal branch appears to be particularly important for phonological working memory processes.{{Cite journal |last1=Barbeau |first1=Elise B |last2=Kousaie |first2=Shanna |last3=Brass |first3=Kanontienentha |last4=Descoteaux |first4=Maxime |last5=Petrides |first5=Michael |last6=Klein |first6=Denise |date=2023-06-29 |title=The importance of the dorsal branch of the arcuate fasciculus in phonological working memory |url=https://doi.org/10.1093/cercor/bhad226 |journal=Cerebral Cortex |language=en |volume=33 |issue=16 |pages=9554–9565 |doi=10.1093/cercor/bhad226 |pmid=37386707 |issn=1047-3211}}

File:From where to what.png

== Linguistic theories ==

Language-processing research informs theories of language. The primary theoretical question is whether linguistic structures follow from the brain structures or vice versa. Externalist models, such as Ferdinand de Saussure's structuralism, argue that language as a social phenomenon is external to the brain. The individual receives the linguistic system from the outside, and the given language shapes the individual's brain.{{cite book |last=de Saussure |first=Ferdinand |url=https://monoskop.org/images/0/0b/Saussure_Ferdinand_de_Course_in_General_Linguistics_1959.pdf |title=Course in general linguistics |date=1959 |publisher=Philosophy Library |isbn=9780231157278 |place=New York |author-link=Ferdinand de Saussure |access-date=2020-06-16 |archive-url=https://web.archive.org/web/20200414113626/https://monoskop.org/images/0/0b/Saussure_Ferdinand_de_Course_in_General_Linguistics_1959.pdf |archive-date=2020-04-14 |url-status=dead |orig-year=First published 1916}}

This idea is opposed by internalist models including Noam Chomsky's transformational generative grammar, George Lakoff's Cognitive Linguistics, and John A. Hawkins's efficiency hypothesis. According to Chomsky, language is acquired from an innate brain structure independently of meaning.{{cite book |last=Smith |first=Neil |title=Chomsky: Ideas and Ideals. |publisher=Cambridge University Press |year=2002 |isbn=0-521-47517-1 |edition=2nd}} Lakoff argues that language emerges from the sensory systems.{{cite journal |last=Lakoff |first=George |date=1990 |title=Invariance hypothesis: is abstract reasoning based on image-schemas? |journal=Cognitive Linguistics |volume=1 |issue=1 |pages=39–74 |doi=10.1515/cogl.1990.1.1.39 |s2cid=144380802}} Hawkins hypothesizes that cross-linguistically prevalent patterns are based on the brain's natural processing preferences.{{Cite book |last=Song |first=Jae Jung |title=Word Order |publisher=Cambridge University Press |year=2012 |isbn=9781139033930}}

Additionally, models inspired by Richard Dawkins's memetics, including Construction Grammar and Usage-Based Linguistics, advocate a two-way model arguing that the brain shapes language, and language shapes the brain.{{cite journal |last1=Christiansen |first1=Morten H. |last2=Chater |first2=Nick |date=2008 |title=Language as shaped by the brain |url=https://discovery.ucl.ac.uk/id/eprint/168484/1/download10.pdf |journal=Behavioral and Brain Sciences |volume=31 |issue=5 |pages=489–558 |doi=10.1017/S0140525X08004998 |pmid=18826669 |access-date=2020-12-22}}{{cite journal |last=Blackmore |first=Susan |date=2008 |title=Memes shape brains shape memes |url=https://www.academia.edu/3444108 |journal=Behavioral and Brain Sciences |volume=31 |issue=5 |pages=513 |doi=10.1017/S0140525X08005037 |access-date=2020-12-22}}

Evidence from neuroimaging studies points towards the externalist position. ERP studies suggest that language processing is based on the interaction of syntax and semantics, and the research does not support innate grammatical structures.{{cite journal |last1=Kluender |first1=R. |last2=Kutas |first2=M. |date=1993 |title=Subjacency as a processing phenomenon |url=http://kutaslab.ucsd.edu/people/kutas/pdfs/1993.LCP.573.pdf |journal=Language and Cognitive Processes |volume=8 |issue=4 |pages=573–633 |doi=10.1080/01690969308407588 |access-date=2020-02-28}}{{cite journal |last1=Barkley |first1=C. |last2=Kluender |first2=R. |last3=Kutas |first3=M. |date=2015 |title=Referential processing in the human brain: An Event-Related Potential (ERP) study |url=http://kutaslab.ucsd.edu/people/kutas/pdfs/2015.BR.143.pdf |journal=Brain Research |volume=1629 |pages=143–159 |doi=10.1016/j.brainres.2015.09.017 |pmid=26456801 |s2cid=17053154 |access-date=2020-02-28}} MRI studies suggest that the structural characteristics of the child's first language shape the processing connectome of the brain.{{cite journal |last1=Wei |first1=Xuehu |last2=Adamson |first2=Helyne |last3=Schwendemann |first3=Matthias |last4=Goucha |first4=Tómas |last5=Friederici |first5=Angela D. |last6=Anwander |first6=Alfred |date=19 February 2023 |title=Native language differences in the structural connectome of the human brain |journal=NeuroImage |volume=270 |issue=270 |pages=119955 |doi=10.1016/j.neuroimage.2023.119955 |pmid=36805092 |doi-access=free}} Processing research has failed to find support for the inverse idea that syntactic structures reflect the brain's natural processing preferences cross-linguistically.{{cite journal |last1=Koizumi |first1=Masatoshi |last2=Yasugi |first2=Yoshiho |last3=Tamaoka |first3=Katsuo |last4=Kiyama |first4=Sachiko |last5=Kim |first5=Jungho |last6=Ajsivinac Sian |first6=Juan Esteban |last7=García Mátzar |first7=Pedro Oscar |date=September 2014 |title=On the (non)universality of the preference for subject-object word order in sentence comprehension: A sentence-processing study in Kaqchikel Maya |url=https://www.jstor.org/stable/24672044 |journal=Language |volume=90 |issue=3 |pages=722–736 |doi=10.1353/lan.2014.0068 |jstor=24672044 |s2cid=146776347 |access-date=2023-05-15}}

== The evolution of language ==

The auditory dorsal stream also has non-language related functions, such as sound localization{{cite journal|vauthors=Miller LM, Recanzone GH|date=April 2009|title=Populations of auditory cortical neurons can accurately encode acoustic space across stimulus intensity|journal=Proceedings of the National Academy of Sciences of the United States of America|volume=106|issue=14|pages=5931–5|doi=10.1073/pnas.0901023106|pmid=19321750|pmc=2667094|bibcode=2009PNAS..106.5931M|doi-access=free}}{{cite journal|vauthors=Tian B, Reser D, Durham A, Kustov A, Rauschecker JP|date=April 2001|title=Functional specialization in rhesus monkey auditory cortex|journal=Science|volume=292|issue=5515|pages=290–3|doi=10.1126/science.1058911|pmid=11303104|bibcode=2001Sci...292..290T|s2cid=32846215}}{{cite journal|vauthors=Alain C, Arnott SR, Hevenor S, Graham S, Grady CL|date=October 2001|title="What" and "where" in the human auditory system|journal=Proceedings of the National Academy of Sciences of the United States of America|volume=98|issue=21|pages=12301–6|doi=10.1073/pnas.211209098|pmid=11572938|pmc=59809|bibcode=2001PNAS...9812301A|doi-access=free}}{{cite journal|vauthors=De Santis L, Clarke S, Murray MM|date=January 2007|title=Automatic and intrinsic auditory "what" and "where" processing in humans revealed by electrical neuroimaging|journal=Cerebral Cortex|volume=17|issue=1|pages=9–17|doi=10.1093/cercor/bhj119|pmid=16421326|doi-access=free}}{{cite journal | vauthors = Barrett DJ, Hall DA | title = Response preferences for "what" and "where" in human non-primary auditory cortex | journal = NeuroImage | volume = 32 | issue = 2 | pages = 968–77 | date = August 2006 | pmid = 16733092 | doi = 10.1016/j.neuroimage.2006.03.050 | s2cid = 19988467 }} and guidance of eye movements.{{cite journal|vauthors=Linden JF, Grunewald A, Andersen RA|date=July 1999|title=Responses to auditory stimuli in macaque lateral intraparietal area. II. Behavioral modulation|journal=Journal of Neurophysiology|volume=82|issue=1|pages=343–58|doi=10.1152/jn.1999.82.1.343|pmid=10400963|s2cid=5317446 }}{{cite journal|vauthors=Mazzoni P, Bracewell RM, Barash S, Andersen RA|date=March 1996|title=Spatially tuned auditory responses in area LIP of macaques performing delayed memory saccades to acoustic targets|journal=Journal of Neurophysiology|volume=75|issue=3|pages=1233–41|doi=10.1152/jn.1996.75.3.1233|pmid=8867131}} Recent studies also indicate a role of the ADS in localization of family/tribe members, as a study{{cite journal | vauthors = Lachaux JP, Jerbi K, Bertrand O, Minotti L, Hoffmann D, Schoendorff B, Kahane P | title = A blueprint for real-time functional mapping via human intracranial recordings | journal = PLOS ONE | volume = 2 | issue = 10 | pages = e1094 | date = October 2007 | pmid = 17971857 | pmc = 2040217 | doi = 10.1371/journal.pone.0001094 | bibcode = 2007PLoSO...2.1094L | doi-access = free }} that recorded from the cortex of an epileptic patient reported that the pSTG, but not aSTG, is selective for the presence of new speakers. 
An fMRI{{cite journal|vauthors=Jardri R, Houfflin-Debarge V, Delion P, Pruvo JP, Thomas P, Pins D|date=April 2012|title=Assessing fetal response to maternal speech using a noninvasive functional brain imaging technique|journal=International Journal of Developmental Neuroscience|volume=30|issue=2|pages=159–61|doi=10.1016/j.ijdevneu.2011.11.002|pmid=22123457|s2cid=2603226}} study of fetuses in their third trimester also demonstrated that area Spt is more selective to female speech than to pure tones, and that a sub-section of Spt is selective to the speech of their mother in contrast to unfamiliar female voices.

It is presently unknown why so many functions are ascribed to the human ADS. An attempt to unify these functions under a single framework was conducted in the 'From where to what' model of language evolution.{{cite journal | vauthors = Poliva O | title = From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans | journal = F1000Research | volume = 4 | pages = 67 | date = 2017-09-20 | pmid = 28928931 | doi = 10.12688/f1000research.6175.3 | pmc=5600004 | doi-access = free }}{{cite journal | vauthors = Poliva O | title = From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language | journal = Frontiers in Neuroscience | volume = 10 | pages = 307 | date = 2016-06-30 | pmid = 27445676 | doi = 10.3389/fnins.2016.00307 | pmc=4928493| doi-access = free }} In accordance with this model, each function of the ADS indicates a different intermediate phase in the evolution of language. The roles of sound localization and of integrating sound location with voices and auditory objects are interpreted as evidence that the origin of speech is the exchange of contact calls (calls used to report location in cases of separation) between mothers and offspring. The role of the ADS in the perception and production of intonations is interpreted as evidence that speech began by modifying the contact calls with intonations, possibly for distinguishing alarm contact calls from safe contact calls. The role of the ADS in encoding the names of objects (phonological long-term memory) is interpreted as evidence of a gradual transition from modifying calls with intonations to complete vocal control. The role of the ADS in the integration of lip movements with phonemes and in speech repetition is interpreted as evidence that spoken words were learned by infants mimicking their parents' vocalizations, initially by imitating their lip movements. The role of the ADS in phonological working memory is interpreted as evidence that the words learned through mimicry remained active in the ADS even when not spoken. This resulted in individuals capable of rehearsing a list of vocalizations, which enabled the production of words with several syllables. Further developments in the ADS enabled the rehearsal of lists of words, which provided the infrastructure for communicating with sentences.

== Sign language in the brain ==

Neuroscientific research has provided a scientific understanding of how sign language is processed in the brain. There are over 135 discrete sign languages around the world, which make use of different accents formed by separate areas of a country.{{Cite web|url=http://theconversation.com/what-sign-language-teaches-us-about-the-brain-29628|title=What sign language teaches us about the brain|last=Suri|first=Sana|website=The Conversation|date=25 July 2014 |language=en|access-date=2019-10-07}}

Using lesion analyses and neuroimaging, neuroscientists have discovered that, whether a language is spoken or signed, the brain processes it in a broadly similar manner with respect to which areas are used. Lesion analyses examine the consequences of damage to specific brain regions involved in language, while neuroimaging explores the regions that are engaged in the processing of language.

It was previously hypothesized that damage to Broca's area or Wernicke's area does not affect the perception of sign language; however, this is not the case. Studies have shown that damage to these areas produces results similar to those in spoken language, with sign errors being present and/or repeated. Both types of language are affected by damage to the left hemisphere of the brain rather than the right hemisphere, which usually deals with the arts.

There are clear patterns in how language is used and processed: sign language activates Broca's area, while the processing of sign language engages Wernicke's area, similar to spoken language.

There have been other hypotheses about the lateralization of the two hemispheres. Specifically, the right hemisphere was thought to contribute to the overall communication of a language globally, whereas the left hemisphere would be dominant in generating the language locally. Scientific American. (2002). Sign language in the brain. [Brochure]. Retrieved from http://lcn.salk.edu/Brochure/SciAM%20ASL.pdf Through research on aphasias, signers with right-hemisphere damage (RHD) were found to have problems maintaining the spatial portion of their signs, confusing similar signs produced at different locations, which are necessary to communicate with another person properly. Signers with left-hemisphere damage (LHD), on the other hand, had results similar to those of hearing patients. Furthermore, other studies have emphasized that sign language is represented bilaterally, but further research is needed to reach a conclusion.

== Writing in the brain ==

There is a comparatively small body of research on the neurology of reading and writing.{{cite journal | vauthors = Norton ES, Kovelman I, Petitto LA | title = Are There Separate Neural Systems for Spelling? New Insights into the Role of Rules and Memory in Spelling from Functional Magnetic Resonance Imaging | journal = Mind, Brain and Education | volume = 1 | issue = 1 | pages = 48–59 | date = March 2007 | pmid = 20011680 | pmc = 2790202 | doi = 10.1111/j.1751-228X.2007.00005.x }} Most of the studies performed deal with reading rather than writing or spelling, and the majority of both kinds focus solely on the English language.{{cite book |last1=Treiman |first1=Rebecca |last2=Kessler |first2=Brett | name-list-style = vanc |title=Writing Systems and Spelling Development |doi = 10.1002/9780470757642.ch7 |work=The Science of Reading: A Handbook |pages=120–134 |publisher=Blackwell Publishing Ltd |isbn=978-0-470-75764-2 | date = 2007 }} English orthography is less transparent than that of other languages using a Latin script. Another difficulty is that some studies focus on spelling words of English and omit the few logographic characters found in the script.

In terms of spelling, English words can be divided into three categories – regular, irregular, and “novel words” or “nonwords.” Regular words are those in which there is a regular, one-to-one correspondence between grapheme and phoneme in spelling. Irregular words are those in which no such correspondence exists. Nonwords are those that exhibit the expected orthography of regular words but do not carry meaning, such as nonce words and onomatopoeia.

An issue in the cognitive and neurological study of reading and spelling in English is whether a single-route or dual-route model best describes how literate speakers are able to read and write all three categories of English words according to accepted standards of orthographic correctness. Single-route models posit that lexical memory is used to store all spellings of words for retrieval in a single process. Dual-route models posit that lexical memory is employed to process irregular and high-frequency regular words, while low-frequency regular words and nonwords are processed using a sub-lexical set of phonological rules.
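The contrast between the two routes can be illustrated with a minimal sketch in Python. This is not a model from the literature discussed here; the lexicon, the phoneme-to-grapheme rules, and the example words are all invented for illustration, and a real dual-route account involves probabilistic competition between routes rather than a simple fallback.

<syntaxhighlight lang="python">
# Toy illustration of the dual-route idea (not a published cognitive model):
# a lexical route retrieves stored whole-word spellings, while a sub-lexical
# route assembles spellings from phoneme-to-grapheme rules. All data invented.

# Hypothetical lexical memory: whole-word spellings for irregular and
# high-frequency regular words.
LEXICON = {
    "joht": "yacht",   # irregular: no rule would yield this spelling
    "koht": "coat",    # high-frequency regular word, also stored
}

# Hypothetical sub-lexical phoneme-to-grapheme correspondences (simplified).
RULES = {"b": "b", "k": "c", "j": "y", "oh": "oa", "t": "t"}


def spell(phonemes):
    """Return a spelling for a phoneme sequence, preferring the lexical route."""
    key = "".join(phonemes)
    if key in LEXICON:             # lexical route: whole-word retrieval
        return LEXICON[key]
    # Sub-lexical route: assemble the spelling rule by rule, as the dual-route
    # account assumes for nonwords and low-frequency regular words.
    return "".join(RULES.get(p, p) for p in phonemes)


print(spell(["j", "oh", "t"]))   # -> "yacht" (irregular word, lexical route)
print(spell(["b", "oh", "t"]))   # -> "boat"  (nonword, rule-based route)
</syntaxhighlight>

A single-route account, by contrast, would handle both examples with a single retrieval mechanism (typically a learned mapping over orthographically and phonologically similar words) rather than a separate rule system.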

The single-route model for reading has found support in computer modelling studies, which suggest that readers identify words by their orthographic similarities to phonologically alike words. However, cognitive and lesion studies lean towards the dual-route model. Cognitive spelling studies on children and adults suggest that spellers employ phonological rules in spelling regular words and nonwords, while lexical memory is accessed to spell irregular words and high-frequency words of all types. Similarly, lesion studies indicate that lexical memory is used to store irregular words and certain regular words, while phonological rules are used to spell nonwords.

More recently, neuroimaging studies using positron emission tomography and fMRI have suggested a balanced model in which the reading of all word types begins in the visual word form area, but subsequently branches off into different routes depending upon whether or not access to lexical memory or semantic information is needed (which would be expected with irregular words under a dual-route model). A 2007 fMRI study found that subjects asked to produce regular words in a spelling task exhibited greater activation in the left posterior STG, an area used for phonological processing, while the spelling of irregular words produced greater activation of areas used for lexical memory and semantic processing, such as the left IFG and left SMG and both hemispheres of the MTG. Spelling nonwords was found to access members of both pathways, such as the left STG and bilateral MTG and ITG. Significantly, it was found that spelling induces activation in areas such as the left fusiform gyrus and left SMG that are also important in reading, suggesting that a similar pathway is used for both reading and writing.

Far less information exists on the cognition and neurology of non-alphabetic and non-English scripts. Every language has a morphological and a phonological component, either of which can be recorded by a writing system. Scripts recording words and morphemes are considered logographic, while those recording phonological segments, such as syllabaries and alphabets, are phonographic. Most systems combine the two and have both logographic and phonographic characters.

In terms of complexity, writing systems can be characterized as "transparent" or "opaque" and as "shallow" or "deep". A "transparent" system exhibits an obvious correspondence between grapheme and sound, while in an "opaque" system this relationship is less obvious. The terms "shallow" and "deep" refer to the extent to which a system's orthography represents morphemes as opposed to phonological segments. Systems that record larger morphosyntactic or phonological segments, such as logographic systems and syllabaries, put greater demand on the memory of users. It would thus be expected that an opaque or deep writing system would put greater demand on areas of the brain used for lexical memory than would a system with a transparent or shallow orthography.

== See also ==

== References ==