Face perception
{{Short description|Cognitive process of visually interpreting the human face}}
{{About|the cognitive process|the psychological phenomena of seeing faces in inanimate objects|Pareidolia|computer-based facial perception|Facial recognition system}}
{{Cognitive}}
Facial perception is an individual's understanding and interpretation of the face. Here, perception implies the presence of consciousness and hence excludes automated facial recognition systems. Although facial recognition is found in other species,{{Cite web|last=Pavelas|date=19 April 2021|title=Facial Recognition is an Easy Task for Animals|url=https://skybiometry.com/facial-recognition-easy-task-animals/|url-status=live|website=Sky Biometry|access-date=19 April 2021|archive-date=19 April 2021|archive-url=https://web.archive.org/web/20210419202741/https://skybiometry.com/facial-recognition-easy-task-animals/}} this article focuses on facial perception in humans.
The perception of facial features is an important part of social cognition.{{Cite book|last=Krawczyk|first=Daniel C.|title=Reasoning; The Neuroscience of How We Think|publisher=Academic Press|year=2018|isbn=978-0-12-809285-9|pages=283–311}} Information gathered from the face helps people identify each other, understand what they are thinking and feeling, anticipate their actions, recognize their emotions, build connections, and communicate through body language. Developing facial recognition is a necessary building block for complex societal constructs. Being able to perceive identity, mood, age, sex, and race lets people mold the way they interact with one another and understand their immediate surroundings.{{cite journal|last1=Quinn|first1=Kimberly A.|last2=Macrae|first2=C. Neil|date=November 2011|title=The face and person perception: Insights from social cognition: Categorizing faces|journal=British Journal of Psychology|volume=102|issue=4|pages=849–867|doi=10.1111/j.2044-8295.2011.02030.x|pmid=21988388}}{{cite journal|last1=Young|first1=Andrew W.|last2=Haan|first2=Edward H. F.|last3=Bauer|first3=Russell M.|date=March 2008|title=Face perception: A very special issue|journal=Journal of Neuropsychology|volume=2|issue=1|pages=1–14|doi=10.1348/174866407x269848|pmid=19334301}}{{cite book|last1=Kanwisher|first1=Nancy|title=Handbook of Neuroscience for the Behavioral Sciences|last2=Yovel|first2=Galit|year=2009|isbn=978-0-470-47850-9|chapter=Face Perception|doi=10.1002/9780470478509.neubb002043}}
Though facial perception is mainly considered to stem from visual intake, studies have shown that even people born blind can learn face perception without vision.{{Cite journal|last=Likova|first=Lora T.|date=19 April 2021|title=Learning face perception without vision: Rebound learning effect and hemispheric differences in congenital vs late-onset blindness|journal=IS&T Int Symp Electron Imaging|volume=2019|issue=12|pages=237-1-237-13|doi=10.2352/ISSN.2470-1173.2019.12.HVEI-237|pmid=31633079|pmc=6800090}} Studies have supported the notion of a specialized mechanism for perceiving faces.
Overview
Theories about the processes involved in adult face perception have largely come from two sources: research on normal adult face perception and the study of impairments in face perception caused by brain injury or neurological illness.
= Bruce & Young model =
File:Bruce & Young Model of Face Recognition-1986 .png
One of the most widely accepted theories of face perception argues that understanding faces involves several stages:{{cite journal|last=Bruce|first=V.|author2=Young, A|year=1986|title=Understanding Face Recognition|journal=British Journal of Psychology|volume=77|issue=3|pages=305–327|doi=10.1111/j.2044-8295.1986.tb02199.x|pmid=3756376|doi-access=free}} from basic perceptual manipulations on the sensory information to derive details about the person (such as age, gender or attractiveness), to being able to recall meaningful details such as their name and any relevant past experiences of the individual.
This model, developed by Vicki Bruce and Andrew Young in 1986, argues that face perception involves independent sub-processes working in unison.
- A "view centered description" is derived from the perceptual input. Simple physical aspects of the face are used to work out age, gender or basic facial expressions. Most analysis at this stage is on feature-by-feature basis.
- This initial information is used to create a structural model of the face, which allows it to be compared to other faces in memory. This explains why the same person from a novel angle can still be recognized (see Thatcher effect).{{Cite encyclopedia|title=Facial Recognition|encyclopedia=Corsini Encyclopedia of Psychology|date=30 January 2010|doi=10.1002/9780470479216.corpsy0342|isbn=978-0-470-47921-6|last2=Lindsay|first2=Roderick|last1=Mansour|first1=Jamal|pages=1–2 }}
- The structurally encoded representation is transferred to theoretical "face recognition units" that are used with "personal identity nodes" to identify a person through information from semantic memory. The ability to produce someone's name when presented with their face has been shown to be selectively damaged in some cases of brain injury, suggesting that naming may be a separate process from being able to produce other information about a person.
=Traumatic brain injury and neurological illness=
Following brain damage, faces can appear severely distorted. A wide variety of distortions can occur: features can droop, enlarge, or become discolored, or the entire face can appear to shift relative to the head. This condition is known as prosopometamorphopsia (PMO). In half of the reported cases, distortions are restricted to either the left or the right side of the face; this form of PMO is called hemi-prosopometamorphopsia (hemi-PMO). Hemi-PMO often results from lesions to the splenium, which connects the right and left hemispheres. In the other half of reported cases, features on both sides of the face appear distorted.{{cite web |last1=Duchaine |first1=Brad |title=Understanding Prosopometamorphopsia (PMO) |url=https://prosopometamorphopsia.faceblind.org/}}
Perceiving facial expressions can involve many areas of the brain, and damaging certain parts of the brain can cause specific impairments in one's ability to perceive a face. As stated earlier, research on the impairments caused by brain injury or neurological illness has helped develop our understanding of cognitive processes. The study of prosopagnosia (an impairment in recognizing faces that is usually caused by brain injury) has been particularly helpful in understanding how normal face perception might work. Individuals with prosopagnosia may differ in their abilities to understand faces, and it has been the investigation of these differences which has suggested that several-stage theories might be correct.
Brain imaging studies typically show a great deal of activity in an area of the temporal lobe known as the fusiform gyrus, an area also known to cause prosopagnosia when damaged (particularly when damage occurs on both sides). This evidence has led to a particular interest in this area and it is sometimes referred to as the fusiform face area (FFA) for that reason.{{cite journal|last1=Kanwisher|first1=Nancy|last2=McDermott|first2=Josh|last3=Chun|first3=Marvin M.|date=1 June 1997|title=The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception|journal=The Journal of Neuroscience|volume=17|issue=11|pages=4302–11|doi=10.1523/JNEUROSCI.17-11-04302.1997|pmc=6573547|pmid=9151747}}
While certain areas of the brain respond selectively to faces, facial processing involves many neural networks, including visual and emotional processing systems. Patients with prosopagnosia provide neuropsychological support for a specialized face perception mechanism: their brain damage impairs facial perception while their perception of objects remains intact. The face inversion effect provides behavioral support for a specialized mechanism, as people tend to show greater deficits in task performance when prompted to react to an inverted face than to an inverted object.{{citation needed|date=April 2021}}
Electrophysiological support comes from the finding that the N170 and M170 responses tend to be face-specific. Neuroimaging studies, such as those with PET and fMRI, have shown support for a specialized facial processing mechanism, as they have identified regions of the fusiform gyrus that have higher activation during face perception tasks than during other visual perception tasks. Novel optical illusions such as the flashed face distortion effect, in which scientific phenomenology outpaces neurological theory, also provide areas for research.
Difficulties in facial emotion processing can also be seen in individuals with traumatic brain injury, in both diffuse axonal injury and focal brain injury.{{cite journal|last1=Yassin|first1=Walid|last2=Callahan|first2=Brandy L.|last3=Ubukata|first3=Shiho|last4=Sugihara|first4=Genichi|last5=Murai|first5=Toshiya|last6=Ueda|first6=Keita|date=16 April 2017|title=Facial emotion recognition in patients with focal and diffuse axonal injury|journal=Brain Injury|volume=31|issue=5|pages=624–630|doi=10.1080/02699052.2017.1285052|pmid=28350176|s2cid=4488184}}
Early development
Despite numerous studies, there is no widely accepted time-frame in which the average human develops the ability to perceive faces.
=Ability to discern faces from other objects=
Many studies have found that infants will give preferential attention to faces in their visual field, indicating they can discern faces from other objects.
- While infants will often show particular interest in faces at around three months of age, that preference slowly disappears, re-emerges late during the first year, and slowly declines once more over the next two years of life.{{cite journal|last1=Libertus|first1=Klaus|last2=Landa|first2=Rebecca J.|last3=Haworth|first3=Joshua L.|title=Development of Attention to Faces during the First 3 Years: Influences of Stimulus Type|journal=Frontiers in Psychology|date=17 November 2017|volume=8|pages=1976|doi=10.3389/fpsyg.2017.01976|pmid=29204130|pmc=5698271 |doi-access=free}}
- While infants show a preference for faces as they grow older (specifically between one and four months of age), this interest can be inconsistent.{{cite book|last1=Maurer|first1=D.|year=1985|chapter=Infants' Perception of Facedness|pages=73–100|editor1-last=Field|editor1-first=Tiffany|editor2-last=Fox|editor2-first=Nathan A.|title=Social Perception in Infants|publisher=Ablex Publishing Corporation|isbn=978-0-89391-231-4 }}
- Infants turning their heads towards faces or face-like images suggests rudimentary facial processing capacities.{{cite journal|last1=Morton|first1=John|last2=Johnson|first2=Mark H.|title=CONSPEC and CONLERN: A two-process theory of infant face recognition.|journal=Psychological Review|date=1991|volume=98|issue=2|pages=164–181|doi=10.1037/0033-295x.98.2.164|pmid=2047512|citeseerx=10.1.1.492.8978 }}{{cite journal|last1=Fantz|first1=Robert L.|title=The Origin of Form Perception|journal=Scientific American|date=May 1961|volume=204|issue=5|pages=66–73|doi=10.1038/scientificamerican0561-66|pmid=13698138|bibcode=1961SciAm.204e..66F }}
- The re-emergence of interest in faces at three months is likely influenced by a child's motor abilities.{{cite journal|last1=Libertus|first1=Klaus|last2=Needham|first2=Amy|title=Reaching experience increases face preference in 3-month-old infants: Face preference and motor experience|journal=Developmental Science|date=November 2011|volume=14|issue=6|pages=1355–64|doi=10.1111/j.1467-7687.2011.01084.x|pmid=22010895|pmc=3888836 }}{{cite journal|last1=Libertus|first1=Klaus|last2=Needham|first2=Amy|title=Face preference in infancy and its relation to motor activity|journal=International Journal of Behavioral Development|date=November 2014|volume=38|issue=6|pages=529–538|doi=10.1177/0165025414535122|s2cid=19692579 }}
=Ability to detect emotion in the face=
File:Emotions according to the Atlas of Personality, Emotion and Behaviour.svg
At around seven months of age, infants show the ability to discern faces by emotion. However, whether they have fully developed emotion recognition is unclear. Discerning visual differences in facial expressions is different from understanding the valence of a particular emotion.
- Seven-month-olds seem capable of associating emotional prosodies with facial expressions. When presented with a happy or angry face, followed by an emotionally neutral word read in a happy or angry tone, their event-related potentials follow different patterns. Happy faces followed by angry vocal tones produce more changes than the other incongruous pairing, while there was no such difference between happy and angry congruous pairings. The greater reaction implies that infants held greater expectations of a happy vocal tone after seeing a happy face than an angry tone following an angry face.{{Cite journal|title = Crossmodal integration of emotional information from face and voice in the infant brain|journal = Developmental Science|volume = 9|issue = 3|pages = 309–315|date=May 2006|doi = 10.1111/j.1467-7687.2006.00494.x|pmid = 16669802|last2 = Striano|last3 = Friederici|author1 = Tobias Grossmann|s2cid = 41871753|author-link1 = Tobias Grossmann|doi-access = free}}
- By the age of seven months, children are able to recognize an angry or fearful facial expression, perhaps because of the threat-salient nature of the emotion. Newborns, however, are not yet aware of the emotional content encoded within facial expressions.{{cite journal|last1=Farroni|first1=Teresa|last2=Menon|first2=Enrica|last3=Rigato|first3=Silvia|last4=Johnson|first4=Mark H.|title=The perception of facial expressions in newborns|journal=European Journal of Developmental Psychology|date=March 2007|volume=4|issue=1|pages=2–13|doi=10.1080/17405620601046832|pmid=20228970|pmc=2836746 }}
- Infants can comprehend facial expressions as social cues representing the feelings of other people before they are a year old. Seven-month-old infants show greater negative central components to angry faces that are looking directly at them than to angry faces looking elsewhere, although the gaze of fearful faces produces no difference. In addition, two event-related potentials in the posterior part of the brain are differently aroused by the two negative expressions tested. These results indicate that infants at this age can partially understand the higher level of threat from anger directed at them; they also showed activity in the occipital areas.
{{Cite journal|last1=Hoehl|first1=Stefanie|last2=Striano|first2=Tricia|title=Neural processing of eye gaze and threat-related emotional facial expressions in infancy|journal=Child Development|volume=79|issue=6|pages=1752–60|date=November–December 2008|doi=10.1111/j.1467-8624.2008.01223.x|pmid=19037947|s2cid=766343}}
- Five-month-olds, when presented with an image of a fearful expression and a happy expression, exhibit similar event-related potentials for both. However, when seven-month-olds are given the same treatment, they focus more on the fearful face. This result indicates increased cognitive focus toward fear that reflects the threat-salient nature of the emotion.{{cite journal|last1=Peltola|first1=Mikko J.|last2=Leppänen|first2=Jukka M.|last3=Mäki|first3=Silja|last4=Hietanen|first4=Jari K.|title=Emergence of enhanced attention to fearful faces between 5 and 7 months of age|journal=Social Cognitive and Affective Neuroscience|date=1 June 2009|volume=4|issue=2|pages=134–142|doi=10.1093/scan/nsn046|pmid=19174536|pmc=2686224 }} Seven-month-olds regard happy and sad faces as distinct emotive categories.{{Cite journal|title = Categorical representation of facial expressions in the infant brain|journal = Infancy|volume = 14|issue = 3|pages = 346–362|date=May 2009|doi = 10.1080/15250000902839393|pmid = 20953267|last1 = Leppanen|first1 = Jukka|last2 = Richmond|first2 = Jenny|last3 = Vogel-Farley|first3 = Vanessa|last4 = Moulson|first4 = Margaret|last5 = Nelson|first5 = Charles|pmc = 2954432}}
- By seven months, infants are able to use facial expressions to understand others' behavior. Seven-month-olds appear to use facial cues to understand the motives of other people in ambiguous situations, as shown in a study where infants watched the experimenter's face longer if the experimenter took a toy from them and maintained a neutral expression, as opposed to if the experimenter made a happy expression.{{Cite journal|last1=Striano|first1=Tricia|last2=Vaish|first2=Amrisha|title=Seven- to 9-month-old infants use facial expressions to interpret others' actions|journal=British Journal of Developmental Psychology|volume=24|pages=753–760|year=2010|doi=10.1348/026151005X70319|issue=4|s2cid=145375636 }} Infants' responses to faces vary depending on factors including facial expression and eye gaze direction.
- Emotions likely play a large role in our social interactions. The perception of a positive or negative emotion on a face affects the way that an individual perceives and processes that face. A face that is perceived to have a negative emotion is processed in a less holistic manner than a face displaying a positive emotion.{{cite journal|last=Curby|first=K.M.|author2=Johnson, K.J.|author3=Tyson A.|title=Face to face with emotion: Holistic face processing is modulated by emotional state|journal=Cognition and Emotion|year= 2012|volume= 26|issue= 1|pages= 93–102|doi= 10.1080/02699931.2011.555752|pmid= 21557121|s2cid=26475009}}
- While seven-month-olds have been found to focus more on fearful faces, a study found that "happy expressions elicit enhanced sympathetic arousal in infants", both when facial expressions were presented subliminally and when infants were consciously aware of the stimulus.{{Cite journal|last1=Jessen|first1=Sarah|last2=Altvater-Mackensen|first2=Nicole|last3=Grossmann|first3=Tobias|date=1 May 2016|title=Pupillary responses reveal infants' discrimination of facial emotions independent of conscious perception|journal=Cognition|volume=150|pages=163–9|doi=10.1016/j.cognition.2016.02.010|pmid=26896901|s2cid=1096220}} An infant's reaction thus does not appear to depend on conscious awareness of the stimulus.
=Ability to recognize familiar faces=
It is unclear when humans develop the ability to recognize familiar faces. Studies have varying results, and may depend on multiple factors (such as continued exposure to particular faces during a certain time period).
- Early perceptual experience is crucial to the development of adult visual perception, including the ability to identify familiar people and comprehend facial expressions. The capacity to discern between faces, like language{{how so|date=April 2021}}, appears to have broad potential early in life that is whittled down to the kinds of faces experienced during that period.{{Cite journal|author=Charles A. Nelson|author-link=Charles A. Nelson|date=March–June 2001|title=The development and neural bases of face recognition|journal=Infant and Child Development|volume=10|issue=1–2|pages=3–18|citeseerx=10.1.1.130.8912|doi=10.1002/icd.239}}
- The neural substrates of face perception in infants are similar to those of adults, but the limits of child-safe imaging technology currently obscure specific information from subcortical areas{{Cite journal|title = Distinct differences in the pattern of hemodynamic response to happy and angry facial expressions in infants--a near-infrared spectroscopic study|journal = NeuroImage|volume = 54|issue = 2|pages = 1600–6|date=January 2011|doi = 10.1016/j.neuroimage.2010.09.021|pmid = 20850548|last2 = Otsuka|last3 = Kanazawa|last4 = Yamaguchi|last5 = Kakigi|author1 = Emi Nakato|s2cid = 11147913|author-link1 = Emi Nakato}} like the amygdala, which is active in adult facial perception. Infants also showed activity near the fusiform gyrus.
- Healthy adults likely process faces via a retinotectal (subcortical) pathway.{{cite journal|author1=Awasthi B|author2=Friedman J|author3=Williams, MA|title=Processing of low spatial frequency faces at periphery in choice reaching tasks|journal=Neuropsychologia|volume = 49|issue = 7|pages = 2136–41|year = 2011|doi = 10.1016/j.neuropsychologia.2011.03.003|pmid=21397615|s2cid=7501604 }}
- Infants can discern between macaque faces at six months of age, but, without continued exposure, cannot do so at nine months of age. If they were shown photographs of macaques during this three-month period, they were more likely to retain this ability.{{Cite journal|title = Plasticity of face processing in infancy|journal = Proceedings of the National Academy of Sciences of the United States of America|volume = 102|issue = 14|pages = 5297–5300|date=April 2005|doi = 10.1073/pnas.0406627102|pmid = 15790676|pmc = 555965|last2 = Scott|last3 = Kelly|last4 = Shannon|last5 = Nicholson|last6 = Coleman|last7 = Nelson|author1 = O. Pascalis|author-link1 = O. Pascalis|bibcode = 2005PNAS..102.5297P |doi-access = free}}
- Faces "convey a wealth of information that we use to guide our social interactions".{{cite journal|last=Jeffery|first=L.|author2=Rhodes, G.|title=Insights into the development of face recognition mechanisms revealed by face aftereffects|journal=British Journal of Psychology|year= 2011|volume=102|issue=4|pages=799–815|doi=10.1111/j.2044-8295.2011.02066.x|pmid=21988385}} They also found that the neurological mechanisms responsible for face recognition are present by age five. Children process faces is similar to that of adults, but adults process faces more efficiently. The may be because of advancements in memory and cognitive functioning.
- Interest in the social world is increased by interaction with the physical environment. Researchers found that training three-month-old infants to reach for objects with Velcro-covered "sticky mitts" increased the attention they paid to faces, compared to moving objects through their hands and compared to control groups.
=Ability to 'mimic' faces=
A commonly disputed topic is the age at which we can mimic facial expressions.
- Infants as young as two days are capable of mimicking an adult, able to note details like mouth and eye shape as well as move their own muscles to produce similar patterns.{{cite journal|last1=Field|first1=T.|last2=Woodson|first2=R|last3=Greenberg|first3=R|last4=Cohen|first4=D|title=Discrimination and imitation of facial expression by neonates|journal=Science|date=8 October 1982|volume=218|issue=4568|pages=179–181|doi=10.1126/science.7123230|pmid=7123230|bibcode=1982Sci...218..179F }}
- However, the idea that such young infants can mimic facial expressions was disputed by Susan S. Jones, who argued that infants are unaware of the emotional content encoded within facial expressions and found that they are not able to imitate facial expressions until their second year of life. She also found that mimicry emerged at different ages.{{cite journal|last1=Jones|first1=Susan S.|title=The development of imitation in infancy|journal=Philosophical Transactions of the Royal Society B: Biological Sciences|date=27 August 2009|volume=364|issue=1528|pages=2325–35|doi=10.1098/rstb.2009.0045|pmid=19620104|pmc=2865075 }}
Neuroanatomy
= Key areas of the brain =
File:Fusiform face area face recognition.jpg
Facial perception has neuroanatomical correlates in the brain.
The fusiform face area (BA37, Brodmann area 37) is located in the lateral fusiform gyrus. It is thought that this area is involved in holistic processing of faces and is sensitive to the presence of facial parts as well as the configuration of these parts. The fusiform face area is also necessary for successful face detection and identification. This is supported by fMRI activation and studies on prosopagnosia, which involves lesions in the fusiform face area.{{cite journal|last1=Liu|first1=Jia|last2=Harris|first2=Alison|last3=Kanwisher|first3=Nancy|date=January 2010|title=Perception of Face Parts and Face Configurations: An fMRI Study|journal=Journal of Cognitive Neuroscience|volume=22|issue=1|pages=203–211|doi=10.1162/jocn.2009.21203|pmc=2888696|pmid=19302006}}{{cite journal|last1=Rossion|first1=B.|date=1 November 2003|title=A network of occipito-temporal face-sensitive areas besides the right middle fusiform gyrus is necessary for normal face processing|journal=Brain|volume=126|issue=11|pages=2381–95|doi=10.1093/brain/awg241|pmid=12876150}}{{cite journal |last1=McCarthy |first1=Gregory |last2=Puce |first2=Aina |author-link2=Aina Puce |last3=Gore |first3=John C. 
|last4=Allison |first4=Truett |date=October 1997 |title=Face-Specific Processing in the Human Fusiform Gyrus |journal=Journal of Cognitive Neuroscience |volume=9 |issue=5 |pages=605–610 |doi=10.1162/jocn.1997.9.5.605 |pmid=23965119 |s2cid=23333049 |hdl-access=free |hdl=2022/22741}}{{Cite journal|last1=Baldauf|first1=D.|last2=Desimone|first2=R.|date=25 April 2014|title=Neural Mechanisms of Object-Based Attention|journal=Science|language=en|volume=344|issue=6182|pages=424–7|doi=10.1126/science.1247003|pmid=24763592|bibcode=2014Sci...344..424B|s2cid=34728448|issn=0036-8075|doi-access=free}}{{Cite journal|last1=de Vries|first1=Eelke|last2=Baldauf|first2=Daniel|date=1 October 2019|title=Attentional Weighting in the Face Processing Network: A Magnetic Response Image-guided Magnetoencephalography Study Using Multiple Cyclic Entrainments|url=https://doi.org/10.1162/jocn_a_01428|journal=Journal of Cognitive Neuroscience|volume=31|issue=10|pages=1573–88|doi=10.1162/jocn_a_01428|pmid=31112470|hdl=11572/252722|s2cid=160012572|issn=0898-929X|hdl-access=free}}
The occipital face area is located in the inferior occipital gyrus. Similar to the fusiform face area, it is also active during successful face detection and identification, a finding supported by fMRI and MEG activation. The occipital face area is involved in, and necessary for, the analysis of facial parts but not the spacing or configuration of those parts. This suggests that the occipital face area may be involved in a facial processing step that occurs prior to fusiform face area processing.
The superior temporal sulcus is involved in recognition of facial parts and is not sensitive to the configuration of these parts. It is also thought that this area is involved in gaze perception.{{cite journal|last1=Campbell|first1=R.|last2=Heywood|first2=C.A.|last3=Cowey|first3=A.|last4=Regard|first4=M.|last5=Landis|first5=T.|date=January 1990|title=Sensitivity to eye gaze in prosopagnosic patients and monkeys with superior temporal sulcus ablation|journal=Neuropsychologia|volume=28|issue=11|pages=1123–42|doi=10.1016/0028-3932(90)90050-x|pmid=2290489|s2cid=7723950}} The superior temporal sulcus has demonstrated increased activation when attending to gaze direction.{{cite journal|last1=Marquardt|first1=Kira|last2=Ramezanpour|first2=Hamidreza|last3=Dicke|first3=Peter W.|last4=Thier|first4=Peter|date=March 2017|title=Following Eye Gaze Activates a Patch in the Posterior Temporal Cortex That Is not Part of the Human 'Face Patch' System|journal=eNeuro|volume=4|issue=2|pages=ENEURO.0317–16.2017|doi=10.1523/ENEURO.0317-16.2017|pmc=5362938|pmid=28374010}}
During face perception, major activations occur in the extrastriate areas bilaterally, particularly in the above three areas. Perceiving an inverted human face involves increased activity in the inferior temporal cortex, while perceiving a misaligned face involves increased activity in the occipital cortex. No such effects were found when perceiving a dog's face, suggesting a process specific to human faces.{{cite journal|last1=Tsujii|first1=T.|last2=Watanabe|first2=S.|last3=Hiraga|first3=K.|last4=Akiyama|first4=T.|last5=Ohira|first5=T.|date=March 2005|title=Testing holistic processing hypothesis in human and animal face perception: evidence from a magnetoencephalographic study|journal=International Congress Series|volume=1278|pages=223–6|doi=10.1016/j.ics.2004.11.151}} Bilateral activation is generally shown in all of these specialized facial areas.{{cite journal|last1=Andreasen|first1=N. C.|author2=O'Leary DS|author3=Arndt S|last4=Cizadlo|first4=T|last5=Hurtig|first5=R|last6=Rezai|first6=K|last7=Watkins|first7=GL|last8=Ponto|first8=LB|last9=Hichwa|first9=RD|display-authors=3|name-list-style=vanc|year=1996|title=Neural substrates of facial recognition|journal=The Journal of Neuropsychiatry and Clinical Neurosciences|volume=8|issue=2|pages=139–46|doi=10.1176/jnp.8.2.139|pmid=9081548}}{{cite journal|last1=Haxby|first1=JV|last2=Horwitz|first2=B|last3=Ungerleider|first3=LG|last4=Maisog|first4=JM|last5=Pietrini|first5=P|last6=Grady|first6=CL|date=1 November 1994|title=The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations|journal=The Journal of Neuroscience|volume=14|issue=11|pages=6336–53|doi=10.1523/JNEUROSCI.14-11-06336.1994|pmc=6577268|pmid=7965040}}{{cite journal|last1=Haxby|first1=James V|last2=Ungerleider|first2=Leslie G|last3=Clark|first3=Vincent P|last4=Schouten|first4=Jennifer L|last5=Hoffman|first5=Elizabeth A|last6=Martin|first6=Alex|date=January 1999|title=The Effect of Face Inversion on Activity 
in Human Neural Systems for Face and Object Perception|journal=Neuron|volume=22|issue=1|pages=189–199|doi=10.1016/S0896-6273(00)80690-X|pmid=10027301|s2cid=9525543|doi-access=free}}{{cite journal |last1=Puce |first1=Aina |author-link=Aina Puce |last2=Allison |first2=Truett |last3=Asgari |first3=Maryam |last4=Gore |first4=John C. |last5=McCarthy |first5=Gregory |date=15 August 1996 |title=Differential Sensitivity of Human Visual Cortex to Faces, Letterstrings, and Textures: A Functional Magnetic Resonance Imaging Study |journal=The Journal of Neuroscience |volume=16 |issue=16 |pages=5205–15 |doi=10.1523/JNEUROSCI.16-16-05205.1996 |pmc=6579313 |pmid=8756449}}{{cite journal|last1=Puce|first1=A.|last2=Allison|first2=T.|last3=Gore|first3=J. C.|last4=McCarthy|first4=G.|date=1 September 1995|title=Face-sensitive regions in human extrastriate cortex studied by functional MRI|journal=Journal of Neurophysiology|volume=74|issue=3|pages=1192–9|doi=10.1152/jn.1995.74.3.1192|pmid=7500143}}{{cite journal|last1=Sergent|first1=Justine|last2=Ohta|first2=Shinsuke|last3=Macdonald|first3=Brennan|date=1992|title=Functional neuroanatomy of face and object processing. A positron emission tomography study|journal=Brain|volume=115|issue=1|pages=15–36|doi=10.1093/brain/115.1.15|pmid=1559150}} However, some studies show increased activation in one side over the other: for instance, the right fusiform gyrus is more important for facial processing in complex situations.
= BOLD fMRI mapping and the fusiform face area =
The majority of fMRI studies use blood oxygen level dependent (BOLD) contrast to determine which areas of the brain are activated by various cognitive functions.{{cite journal|last1=Kannurpatti|first1=Sridhar S.|last2=Rypma|first2=Bart|last3=Biswal|first3=Bharat B.|title=Prediction of task-related BOLD fMRI with amplitude signatures of resting-state fMRI|journal=Frontiers in Systems Neuroscience|date=March 2012|volume=6|pages=7|doi=10.3389/fnsys.2012.00007|pmid=22408609|pmc=3294272|doi-access=free}}
One study used BOLD fMRI mapping to identify activation in the brain when subjects viewed both cars and faces. They found that the occipital face area, the fusiform face area, the superior temporal sulcus, the amygdala, and the anterior/inferior cortex of the temporal lobe all played roles in contrasting faces from cars, with initial face perception beginning in the fusiform face area and occipital face areas. This entire region forms a network that acts to distinguish faces. The processing of faces in the brain is known as a "sum of parts" perception.{{cite journal|last=Gold|first=J.M.|author2=Mundy, P.J.|author3=Tjan, B.S.|title=The perception of a face is no more than the sum of its parts|journal=Psychological Science|year=2012|volume=23|issue=4|pages=427–434|doi=10.1177/0956797611427407|pmid=22395131|pmc=3410436}}
However, the individual parts of the face must be processed first in order to put all of the pieces together. In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces.{{cite journal|last=Pitcher|first=D.|author2=Walsh, V.|author3=Duchaine, B.|title=The role of the occipital face area in the cortical face perception network|journal=Experimental Brain Research|year=2011|volume=209|issue=4|pages=481–493|doi=10.1007/s00221-011-2579-1|pmid=21318346|s2cid=6321920}}
Researchers also used BOLD fMRI mapping to determine the patterns of activation in the brain when parts of the face were presented in combination and when they were presented singly.{{cite journal|last=Arcurio|first=L.R.|author2=Gold, J.M.|author3=James, T.W.|year=2012|title=The response of face-selective cortex with single face parts and part combinations|journal=Neuropsychologia|volume=50|issue=10|pages=2454–9|doi=10.1016/j.neuropsychologia.2012.06.016|pmc=3423083|pmid=22750118}} The occipital face area is activated by the visual perception of single features of the face, such as the nose and mouth, and prefers a combination of two eyes over other feature combinations. This suggests that the occipital face area recognizes the parts of the face at the early stages of recognition.
In contrast, the fusiform face area shows no preference for single features, because it is responsible for "holistic/configural" information: it puts all of the processed pieces of the face together in later processing. This is supported by a study which found that, regardless of the orientation of a face, subjects were affected both by the configuration of the individual facial features and by the coding of the relationships between those features. This indicates that processing in the later stages of recognition is a summation of the parts.
= The fusiform gyrus and the amygdala =
The fusiform gyri are preferentially responsive to faces, whereas the parahippocampal/lingual gyri are responsive to buildings.{{cite journal|last1=Gorno-Tempini|first1=M. L.|last2=Price|first2=CJ|title=Identification of famous faces and buildings: A functional neuroimaging study of semantically unique items|journal=Brain|date=1 October 2001|volume=124|issue=10|pages=2087–97|doi=10.1093/brain/124.10.2087|pmid=11571224 |doi-access=free}}
While certain areas respond selectively to faces, facial processing involves many neural networks, including visual and emotional processing systems. When viewing faces displaying emotion (especially fearful expressions), compared with neutral faces, activity increases in the right fusiform gyrus. This increased activity correlates with increased amygdala activity in the same situations.{{cite journal|last1 = Vuilleumier|first1 = P|last2 = Pourtois|first2 = G|year = 2007|title = Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging|journal = Neuropsychologia|volume = 45|issue = 1|pages = 174–194|doi=10.1016/j.neuropsychologia.2006.06.003|pmid = 16854439|citeseerx = 10.1.1.410.2526|s2cid = 5635384 }} The emotional processing effects observed in the fusiform gyrus are decreased in patients with amygdala lesions, demonstrating connections between the amygdala and facial processing areas.
Face familiarity also affects fusiform gyrus and amygdala activation. That multiple regions are activated by similar face components indicates that facial processing is a complex process. Increased activation in the precuneus and cuneus often occurs when it is easy to differentiate two faces (e.g., kin and familiar non-kin faces), and posterior medial substrates play a role in the visual processing of faces with familiar features (e.g., a face averaged with that of a sibling).{{cite journal|last1=Platek|first1=Steven M.|last2=Kemp|first2=Shelly M.|title=Is family special to the brain? An event-related fMRI study of familiar, familial, and self-face recognition|journal=Neuropsychologia|date=February 2009|volume=47|issue=3|pages=849–858|doi=10.1016/j.neuropsychologia.2008.12.027|pmid=19159636|s2cid=12674158 }}
The object form topology hypothesis posits a topological organization of neural substrates for object and facial processing.{{Cite journal|author1=Ishai A|author2=Ungerleider LG|author3=Martin A|author4= Schouten JL|author5=Haxby JV|title=Distributed representation of objects in the human ventral visual pathway|journal=Proc. Natl. Acad. Sci. U.S.A.|volume=96|issue=16|pages=9379–84|date=August 1999|pmid=10430951|pmc=17791|doi=10.1073/pnas.96.16.9379|bibcode=1999PNAS...96.9379I |doi-access=free}} However, there is disagreement: the category-specific and process-map models could accommodate most other proposed models for the neural underpinnings of facial processing.{{cite journal|last1=Gauthier|first1=Isabel|title=What constrains the organization of the ventral temporal cortex?|journal=Trends in Cognitive Sciences|date=January 2000|volume=4|issue=1|pages=1–2|doi=10.1016/s1364-6613(99)01416-3|pmid=10637614|s2cid=17347723 }}
Most neuroanatomical substrates for facial processing are perfused by the middle cerebral artery. Therefore, facial processing has been studied using measurements of mean cerebral blood flow velocity in the middle cerebral arteries bilaterally. During facial recognition tasks, greater changes occur in the right middle cerebral artery than the left.{{cite journal|last1=Droste|first1=D W|last2=Harders|first2=A G|last3=Rastogi|first3=E|title=A transcranial Doppler study of blood flow velocity in the middle cerebral arteries performed at rest and during mental activities.|journal=Stroke|date=August 1989|volume=20|issue=8|pages=1005–11|doi=10.1161/01.str.20.8.1005|pmid=2667197 |doi-access=free}}{{cite journal|last1=Harders|first1=A. G.|last2=Laborde|first2=G.|last3=Droste|first3=D. W.|last4=Rastogi|first4=E.|title=Brain Activity and Blood flow Velocity Changes: A Transcranial Doppler Study|journal=International Journal of Neuroscience|date=January 1989|volume=47|issue=1–2|pages=91–102|doi=10.3109/00207458908987421|pmid=2676884 }} Men are right-lateralized and women left-lateralized during facial processing tasks.{{Cite journal|author=Njemanze PC|title=Asymmetry in cerebral blood flow velocity with processing of facial images during head-down rest|journal=Aviat Space Environ Med|volume=75|issue=9|pages=800–5|date=September 2004|pmid=15460633}}
Just as memory and cognitive function separate the abilities of children and adults to recognize faces, the familiarity of a face may also play a role in the perception of faces. Recording event-related potentials in the brain to determine the timing of facial recognition{{cite journal|last1=Zheng|first1=Xin|last2=Mondloch|first2=Catherine J.|last3=Segalowitz|first3=Sidney J.|title=The timing of individual face recognition in the brain|journal=Neuropsychologia|date=June 2012|volume=50|issue=7|pages=1451–61|doi=10.1016/j.neuropsychologia.2012.02.030|pmid=22410414|s2cid=207237508 }} showed that familiar faces elicit a stronger N250, a specific wavelength response that plays a role in the visual memory of faces.{{cite journal|last=Eimer|first=M.|author2=Gosling, A.|author3=Duchaine, B.|title=Electrophysiological markers of covert face recognition in developmental prosopagnosia|journal=Brain|year=2012|volume=135|issue=2|pages=542–554|doi=10.1093/brain/awr347|pmid=22271660|doi-access=free}} More generally, all faces elicit the N170 response in the brain.{{cite journal|last=Moulson|first=M.C.|author2=Balas, B.|author3=Nelson, C.|author4=Sinha, P.|year=2011|title=EEG correlates of categorical and graded face perception|journal=Neuropsychologia|volume=49|issue=14|pages=3847–53|doi=10.1016/j.neuropsychologia.2011.09.046|pmc=3290448|pmid=22001852}}
In principle, the brain needs only about 50 neurons to encode any human face, with facial features projected onto individual axes (neurons) in a 50-dimensional "face space".{{cite journal|last1=Chang|first1=Le|last2=Tsao|first2=Doris Y.|date=June 2017|title=The Code for Facial Identity in the Primate Brain|journal=Cell|volume=169|issue=6|pages=1013–28.e14|doi=10.1016/j.cell.2017.05.011|pmid=28575666|pmc=8088389|s2cid=32432231}}
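The axis model behind this "face space" account can be illustrated with a minimal linear sketch. The dimensions and axes here are random and purely illustrative, not the measured tuning of real cells:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: a face described as a point in a 50-dimensional "face space".
face = rng.normal(size=50)

# Each of 50 model "neurons" has a preferred axis; its response is simply
# the linear projection of the face onto that axis (the axis model).
axes = rng.normal(size=(50, 50))  # one random, illustrative axis per neuron
responses = axes @ face

# If the axes span the space, identity is fully recoverable from the
# 50 population responses by linear decoding.
decoded = np.linalg.solve(axes, responses)
print(np.allclose(decoded, face))
```

The design point is that identity is carried by a small population code: each neuron contributes one coordinate, and the face is read out linearly from the whole population rather than by any single "grandmother cell".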
Cognitive neuroscience
Cognitive neuroscientists Isabel Gauthier and Michael Tarr are two of the major proponents of the view that face recognition involves expert discrimination of similar objects.[https://web.archive.org/web/20060509092446/http://www.psy.vanderbilt.edu/faculty/gauthier/PEN/ Perceptual Expertise Network] Other scientists, in particular Nancy Kanwisher and her colleagues, argue that face recognition involves processes that are face-specific and that are not recruited by expert discriminations in other object classes.{{cite web |title=Evidence against the expertise hypothesis |url=http://web.mit.edu/bcs/nklab/expertise.shtml |website=Kanwisher Lab |access-date=5 February 2024 |archive-url=https://web.archive.org/web/20070820073617/http://web.mit.edu/bcs/nklab/expertise.shtml |archive-date=20 August 2007 |date=20 August 2007}}
Studies by Gauthier have shown that an area of the brain known as the fusiform gyrus (sometimes called the fusiform face area because it is active during face recognition) is also active when study participants are asked to discriminate between different types of birds and cars,{{cite journal|last1=Gauthier|first1=Isabel|last2=Skudlarski|first2=Pawel|last3=Gore|first3=John C.|last4=Anderson|first4=Adam W.|title=Expertise for cars and birds recruits brain areas involved in face recognition|journal=Nature Neuroscience|date=February 2000|volume=3|issue=2|pages=191–7|doi=10.1038/72140|pmid=10649576|s2cid=15752722 }} and even when participants become expert at distinguishing computer-generated nonsense shapes known as greebles.{{cite journal|last1=Gauthier|first1=Isabel|last2=Tarr|first2=Michael J.|last3=Anderson|first3=Adam W.|last4=Skudlarski|first4=Pawel|last5=Gore|first5=John C.|title=Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects|journal=Nature Neuroscience|date=June 1999|volume=2|issue=6|pages=568–573|doi=10.1038/9224|pmid=10448223|s2cid=9504895 }} This suggests that the fusiform gyrus has a general role in the recognition of similar visual objects.
The activity found by Gauthier when participants viewed non-face objects was not as strong as when they viewed faces; however, this could be because people have much more expertise with faces than with most other objects. Furthermore, not all findings of this research have been successfully replicated: for example, other research groups using different study designs have found that the fusiform gyrus is specific to faces while other nearby regions deal with non-face objects.{{cite journal|last1=Grill-Spector|first1=Kalanit|last2=Knouf|first2=Nicholas|last3=Kanwisher|first3=Nancy|title=The fusiform face area subserves face perception, not generic within-category identification|journal=Nature Neuroscience|date=May 2004|volume=7|issue=5|pages=555–562|doi=10.1038/nn1224|pmid=15077112|s2cid=2204107 }}
However, these findings are difficult to interpret: failures to replicate are null effects and can occur for many different reasons, whereas each successful replication adds considerable weight to a particular argument. There are now multiple replications with greebles, with birds and cars,{{Cite journal|author=Xu Y|title=Revisiting the role of the fusiform face area in visual expertise|journal=Cereb. Cortex|volume=15|issue=8|pages=1234–42|date=August 2005|pmid=15677350|doi=10.1093/cercor/bhi006 |doi-access=free}} and two unpublished studies with chess experts.{{Cite journal|author=Righi G, Tarr MJ|title=Are chess experts any different from face, bird, or greeble experts?|journal=Journal of Vision|volume=4|issue=8|pages=504|year=2004|doi=10.1167/4.8.504|last2=Tarr |doi-access=free}}[https://www.youtube.com/watch?v=WLQbTd6RFhY] {{Webarchive|url=https://web.archive.org/web/20160428225908/https://www.youtube.com/watch?v=WLQbTd6RFhY|date=28 April 2016}} The documentary My Brilliant Brain, partly about grandmaster Susan Polgar, shows brain scans of the fusiform gyrus while Polgar viewed chess diagrams.
Although expertise sometimes recruits the fusiform face area, a more common finding is that expertise leads to focal category-selectivity in the fusiform gyrus—a pattern similar in terms of antecedent factors and neural specificity to that seen for faces. As such, it remains an open question as to whether face recognition and expert-level object recognition recruit similar neural mechanisms across different subregions of the fusiform or whether the two domains literally share the same neural substrates. At least one study argues that the issue is nonsensical, as multiple measurements of the fusiform face area within an individual often overlap no more with each other than measurements of fusiform face area and expertise-predicated regions.{{Cite journal|author1=Kung CC|author2=Peissig JJ|author3=Tarr MJ|title=Is region-of-interest overlap comparison a reliable measure of category specificity?|journal=J Cogn Neurosci|volume=19|issue=12|pages=2019–34|date=December 2007|pmid=17892386|doi=10.1162/jocn.2007.19.12.2019|s2cid=7864360|url=https://figshare.com/articles/journal_contribution/6616853|access-date=31 March 2021|archive-date=2 June 2021|archive-url=https://web.archive.org/web/20210602121634/https://figshare.com/articles/journal_contribution/Is_region-of-interest_overlap_comparison_a_reliable_measure_of_category_specificity_/6616853|url-status=live}}
fMRI studies have asked whether expertise has any specific connection to the fusiform face area in particular, by testing for expertise effects in both the fusiform face area and a nearby but not face-selective region called LOC (Rhodes et al., JOCN 2004; Op de Beeck et al., JN 2006; Moore et al., JN 2006; Yue et al. VR 2006). In all studies, expertise effects are significantly stronger in the LOC than in the fusiform face area, and indeed expertise effects were only borderline significant in the fusiform face area in two of the studies, while the effects were robust and significant in the LOC in all studies.{{Citation needed|date=December 2010}}
Therefore, it is still not clear in exactly which situations the fusiform gyrus becomes active, although it is certain that face recognition relies heavily on this area and damage to it can lead to severe face recognition impairment.{{cn|date=January 2025}}
Face advantage in memory recall
During face perception, neural networks in the brain make connections with memory systems to recall associated memories.{{cite encyclopedia|last=Mansour|first=Jamal|author2=Lindsay, Roderick|title=Facial Recognition|encyclopedia=Corsini Encyclopedia of Psychology|date=30 January 2010|volume=1–2|pages=1–2 |doi=10.1002/9780470479216.corpsy0342|isbn=978-0-470-47921-6}}
According to a seminal model of face perception, there are three stages of face processing:
- recognition of the face
- recall of memories and information linked with that face
- name recall
There are exceptions to this order. For example, names are recalled faster than semantic information in cases of highly familiar stimuli.{{cite journal|last=Calderwood|first=L|author2=Burton, A.M.|title=Children and adults recall the names of highly familiar faces faster than semantic information|journal=British Journal of Psychology|date=November 2006|volume=96|issue=4|pages=441–454|doi=10.1348/000712605X84124|pmid=17018182}} While the face is a powerful identifier, the voice also helps in recognition.{{cite journal|last=Ellis|first=Hadyn|author2=Jones, Dylan|title=Intra- and Inter-modal repetition priming of familiar faces and voices|journal=British Journal of Psychology|date=February 1997|volume=88|issue=1|pages=143–156|doi=10.1111/j.2044-8295.1997.tb02625.x|pmid=9061895|last3=Mosdell|first3=Nick}}{{cite encyclopedia|last=Nadal|first=Lynn|title=Speaker Recognition|encyclopedia=Encyclopedia of Cognitive Science|year=2005|volume=4|pages=142–5}}
Research has tested whether faces or voices make it easier to identify individuals and to recall semantic memory and episodic memory. These experiments examined all three stages of face processing. In a between-group design, two groups were presented with celebrity and familiar faces or voices and asked to recall information about them.{{cite journal|last=Bredart|first=S.|author2=Barsics, C.|title=Recalling Semantic and Episodic Information From Faces and Voices: A Face Advantage|journal=Current Directions in Psychological Science|date=3 December 2012|volume=21|issue=6|pages=378–381|doi=10.1177/0963721412454876|hdl=2268/135794 |s2cid=145337404|hdl-access=free}} The participants were first asked whether the stimulus was familiar. If they answered yes, they were asked for information (semantic memory) and memories (episodic memory) that fit the face or voice presented. These experiments demonstrated the face advantage, which persisted through follow-up studies.
=Recognition-performance issue=
After the first experiments on the advantage of faces over voices in memory recall, errors and gaps were found in the methods used.
For one, there was not a clear face advantage for the recognition stage of face processing. Participants showed a familiarity-only response to voices more often than to faces.{{cite journal|last=Hanley|first=J. Richard|author2=Damjanovic, Ljubica|title=It is more difficult to retrieve a familiar person's name and occupation from their voice than from their blurred face|journal=Memory|date=November 2009|volume=17|issue=8|pages=830–9|doi=10.1080/09658210903264175|pmid=19882434|s2cid=27070912}} In other words, although voices were readily recognized (about 60–70% of the time), it was much harder to recall biographical information from them. The results were analyzed as remember-versus-know judgements: far more know (familiarity-only) responses occurred with voices, while more remember (recollection) responses occurred with faces. This phenomenon persists in experiments dealing with criminal line-ups in prisons: witnesses are more likely to say that a suspect's voice sounded familiar than their face, even though they cannot remember anything about the suspect.{{cite journal|last1=Yarmey|first1=Daniel A.|title=Face and Voice Identifications in showups and lineups|journal=Applied Cognitive Psychology|date=1 January 1994|volume=8|issue=5|pages=453–464|doi=10.1002/acp.2350080504|last2=Yarmey|first2=A. Linda|last3=Yarmey|first3=Meagan J.}} This discrepancy is due to a larger amount of guesswork and false alarms with voices.
To give faces an ambiguity similar to that of voices, the face stimuli were blurred in a follow-up experiment. This experiment followed the same procedures as the first, presenting two groups with sets of stimuli made up of half celebrity faces and half unfamiliar faces; the only difference was that the face stimuli were blurred so that detailed features could not be seen. Participants were then asked to say whether they recognized the person, whether they could recall specific biographical information about them, and finally whether they knew the person's name. The results were completely different from those of the original experiment, supporting the view that the first experiment's methods were problematic: according to the follow-up, the same amount of information and memory could be recalled through voices as through faces, dismantling the face advantage. However, these results too were premature, because other methodological issues in the experiment still needed to be addressed.
=Content of speech=
Controlling the content of speech extracts has proven more difficult than eliminating non-facial cues in photographs.
Thus, experiments that did not control this factor led to misleading conclusions regarding voice recognition relative to face recognition. For example, one experiment found that 40% of the time participants could pair a celebrity's voice with their occupation just by guessing. To eliminate such errors, experimenters removed parts of the voice samples that could give clues to the identity of the target, such as catchphrases.{{cite journal|last=Van Lancker|first=Diana|author2=Kreiman, Jody|title=Voice discrimination and recognition are separate abilities|journal=Neuropsychologia|date=January 1987|volume=25|issue=5|pages=829–834|doi=10.1016/0028-3932(87)90120-5|pmid=3431677|s2cid=15240833}} Even after controlling the voice samples as well as the face samples (using blurred faces), studies have shown that semantic information is more accessible when individuals are recognizing faces than voices.{{cite journal|last=Barsics|first=Catherine|author2=Brédart, Serge|title=Recalling episodic information about personally known faces and voices|journal=Consciousness and Cognition|date=June 2011|volume=20|issue=2|pages=303–8|doi=10.1016/j.concog.2010.03.008|pmid=20381380|s2cid=40812033}}
Another technique for controlling the content of the speech extracts is to present the faces and voices of personally familiar individuals, such as the participants' teachers or neighbors, instead of those of celebrities. In this way, the same words can be used in the speech extracts; for example, the familiar targets are asked to read exactly the same scripted speech for their voice extracts. The results again showed that semantic information is easier to retrieve when individuals are recognizing faces than voices.
=Frequency-of-exposure issue=
Another factor that must be controlled for the results to be reliable is the frequency of exposure.
Taking celebrities as an example, people are exposed to celebrities' faces more often than to their voices because of the mass media. Through magazines, newspapers, and the Internet, individuals are routinely exposed to celebrities' faces without their voices, rather than the reverse. Thus, it could be argued that the findings of the experiments described above reflect the greater frequency of exposure to celebrities' faces than to their voices.{{cite book|editor-last=Ethofer|editor2=Belin Pascal|editor3=Salvatore Campanella|editor-first=Thomas|title=Integrating face and voice in person perception|publisher=Springer|location=New York|isbn=978-1-4614-3584-6|date=21 August 2012}}
To overcome this problem, researchers used personally familiar individuals as stimuli instead of celebrities. Personally familiar individuals, such as a participant's teachers, are for the most part heard as well as seen.{{cite journal|last1=Brédart|first1=Serge|last2=Barsics|first2=Catherine|last3=Hanley|first3=Rick|title=Recalling semantic information about personally known faces and voices|journal=European Journal of Cognitive Psychology|date=November 2009|volume=21|issue=7|pages=1013–21|doi=10.1080/09541440802591821|hdl=2268/27809|s2cid=1042153|url=http://orbi.ulg.ac.be/handle/2268/27809|access-date=5 February 2019|archive-date=2 June 2021|archive-url=https://web.archive.org/web/20210602121625/https://orbi.uliege.be/handle/2268/27809|url-status=live|hdl-access=free}} Studies using this type of control also demonstrated the face advantage: students were able to retrieve semantic information more readily when recognizing their teachers' faces (both normal and blurred) than their voices.
However, researchers have since found an even more effective way to control not only the frequency of exposure but also the content of the speech extracts: the associative learning paradigm. Participants are asked to link semantic information and names with pre-experimentally unknown voices and faces.{{cite journal|last1=Barsics|first1=Catherine|last2=Brédart|first2=Serge|title=Recalling semantic information about newly learned faces and voices|journal=Memory|date=July 2012|volume=20|issue=5|pages=527–534|doi=10.1080/09658211.2012.683012|pmid=22646520|s2cid=23728924 }}{{cite encyclopedia|title=Learning.|url=http://www.credoreference.com/entry/estinsects/learning|encyclopedia=Encyclopedia of Insects.|publisher=Oxford: Elsevier Science & Technology|access-date=6 December 2013|date=|archive-date=2 June 2021|archive-url=https://web.archive.org/web/20210602121638/https://search.credoreference.com/content/entry/estinsects/learning/0|url-status=live}} In one experiment using this paradigm, a name and a profession were given together with, accordingly, a voice, a face, or both to three participant groups. The associations were repeated four times.
The next step was a cued recall task, in which every stimulus learned in the previous phase was presented and participants were asked to give the profession and the name for each stimulus.{{cite encyclopedia|title=Memory, Explicit and Implicit.|url=http://www.credoreference.com/entry/esthumanbrain/memory_explicit_and_implicit|encyclopedia=Encyclopedia of the Human Brain.|publisher=Oxford: Elsevier Science & Technology|access-date=6 December 2013|date=|archive-date=2 June 2021|archive-url=https://web.archive.org/web/20210602121639/https://search.credoreference.com/content/entry/esthumanbrain/memory_explicit_and_implicit/0|url-status=live}} Again, the results showed that semantic information is more accessible when individuals are recognizing faces than voices, even when the frequency of exposure was controlled.
=Extension to episodic memory and explanation for existence=
Episodic memory is our ability to remember specific, previously experienced events.{{cite encyclopedia|year=2005|title=Episodic Memory, Computational Models of|encyclopedia=Encyclopedia of Cognitive Science|publisher=Wiley |doi=10.1002/0470018860.s00444 |first=Kenneth A. |last=Norman}}
Face recognition, as it pertains to episodic memory, has been shown to activate the left lateral prefrontal cortex, the parietal lobe, and the left medial frontal/anterior cingulate cortex.{{cite journal|last=Leube|first=Dirk T.|author2=Erb, Michael|author3=Grodd, Wolfgang|author4=Bartels, Mathias|author5= Kircher, Tilo T.J.|title=Successful episodic memory retrieval of newly learned faces activates a left fronto-parietal network|journal=Cognitive Brain Research|date=December 2003|volume=18|issue=1|pages=97–101|doi=10.1016/j.cogbrainres.2003.09.008|pmid=14659501}}{{cite journal|last=Hofer|first=Alex|author2=Siedentopf, Christian M.|author3=Ischebeck, Anja|author4=Rettenbacher, Maria A.|author5=Verius, Michael|author6=Golaszewski, Stefan M.|author7=Felber, Stephan|author8= Fleischhacker, W. Wolfgang|title=Neural substrates for episodic encoding and recognition of unfamiliar faces|journal=Brain and Cognition|date=March 2007|volume=63|issue=2|pages=174–181|doi=10.1016/j.bandc.2006.11.005|pmid=17207899|s2cid=42077795}} Left lateralization in the parietal cortex during episodic memory retrieval was also found to correlate strongly with retrieval success. This may be because the link between face recognition and episodic memory is stronger than that between voice and episodic memory. This hypothesis is also supported by the existence of specialized face recognition mechanisms thought to be located in the temporal lobes.{{cite encyclopedia|year=2005|title= Face Perception, Neural Basis of|encyclopedia=Encyclopedia of Cognitive Science|publisher=Wiley |doi=10.1002/0470018860.s00330 |first=Shlomo |last=Bentin }}
There is also evidence of the existence of two separate neural systems for face recognition: one for familiar faces and another for newly learned faces. One explanation for this link between face recognition and episodic memory is that since face recognition is a major part of human existence, the brain creates a link between the two in order to be better able to communicate with others.{{cite encyclopedia|year=2005|title=Face Perception, Psychology of|encyclopedia=Encyclopedia of Cognitive Science|publisher=Wiley |first1=Alice J. |last=O'Toole |doi=10.1002/0470018860.s00535}}
Self-face perception
Though many animals have face-perception capabilities, self-face recognition has been observed in only a few species. Self-face perception is of particular research interest because of its relation to the perceptual integration process.
One study found that the perception/recognition of one's own face was unaffected by changing contexts, while the perception/recognition of familiar and unfamiliar faces was adversely affected. Another study, focused on older adults, found that they had a self-face advantage in configural processing but not in featural processing.{{cite journal|last1=Lawrence|first1=Kate|last2=Kuntsi|first2=Joanna|last3=Coleman|first3=Michael|last4=Campbell|first4=Ruth|last5=Skuse|first5=David|date=2003|title=Face and emotion recognition deficits in Turner syndrome: A possible role for X-linked genes in amygdala development.|journal=Neuropsychology|volume=17|issue=1|pages=39–49|doi=10.1037/0894-4105.17.1.39|pmid=12597072}}
In 2014, Motoaki Sugiura developed a conceptual model for self-recognition by breaking it into three categories: the physical, interpersonal, and social selves.{{Cite journal|last=Sugiura|first=Motoaki|date=2011|title=The multi-layered model of self: a social neuroscience perspective|journal=New Frontiers in Social Cognitive Neuroscience|publisher=Tohoku University Press|pages=111–135|url=https://www.scopus.com/record/display.uri?eid=2-s2.0-84884682818&txGid=427018910caf1a247b2c953c516c7baa}}{{cite journal|last=Sugiura|first=Motoaki|date=2014|title=Three Faces of Self-Face Recognition: Potential for a Multi-Dimensional Diagnostic Tool|url=https://www.researchgate.net/figure/The-three-layer-model-of-self-related-cognition-Three-categories-of-self-share-the_fig2_266976125|journal=Neuroscience Research|volume=90|pages=56–64|doi=10.1016/j.neures.2014.10.002|pmid=25450313|s2cid=13292035|via=Research Gate|access-date=22 April 2021|archive-date=22 April 2021|archive-url=https://web.archive.org/web/20210422163001/https://www.researchgate.net/figure/The-three-layer-model-of-self-related-cognition-Three-categories-of-self-share-the_fig2_266976125|url-status=live|doi-access=free}}
=Mirror test=
Gordon Gallup Jr. developed a technique in 1970 to measure self-awareness, commonly referred to as the mirror test.
The method involves placing a marker on the subject in a place it cannot see without a mirror (e.g., on the forehead). The marker must be placed inconspicuously enough that the subject does not become aware of it. Once the marker is placed, the subject is given access to a mirror. If the subject investigates the mark (e.g., tries to wipe it off), this indicates that the subject understands it is looking at a reflection of itself, rather than perceiving the mirror as an extension of its environment (e.g., thinking the reflection is another person or animal behind a window).{{cite journal|author = Gallup, GG Jr.|title = Chimpanzees: Self recognition|journal = Science|volume = 167|pages = 86–87|year = 1970|doi = 10.1126/science.167.3914.86|pmid = 4982211|issue = 3914|bibcode = 1970Sci...167...86G|s2cid = 145295899}}
Though this method is regarded as one of the more effective techniques for measuring self-awareness, it is not perfect, and many factors can affect the outcome. For example, a biologically blind animal, such as a mole, cannot be assumed to inherently lack self-awareness simply because it fails the test. Visual self-recognition is likely only one of many ways in which a living being can be considered cognitively "self-aware".
Gender
Studies using electrophysiological techniques have demonstrated gender-related differences during a face recognition memory task and a facial affect identification task.{{Cite journal|author1=Everhart DE|author2=Shucard JL|author3=Quatrin T|author4=Shucard DW|title=Sex-related differences in event-related potentials, face recognition, and facial affect processing in prepubertal children|journal=Neuropsychology|volume=15|issue=3|pages=329–41|date=July 2001|pmid=11499988|doi=10.1037/0894-4105.15.3.329 }}
Face recognition showed no association with estimated intelligence, suggesting that women's face-recognition performance is unrelated to basic cognitive processes.{{Cite journal|author=Herlitz A, Yonker JE|title=Sex differences in episodic memory: the influence of intelligence|journal=J Clin Exp Neuropsychol|volume=24|issue=1|pages=107–14|date=February 2002|pmid=11935429|doi=10.1076/jcen.24.1.107.970|last2=Yonker|s2cid=26683095 }} Gender differences may suggest a role for sex hormones.{{Cite journal|author=Smith WM|date=July 2000|title=Hemispheric and facial asymmetry: gender differences|journal=Laterality|volume=5|issue=3|pages=251–8|doi=10.1080/713754376|pmid=15513145|s2cid=25349709}} In females, psychological functions may vary with hormone levels across different phases of the menstrual cycle.{{Cite journal|author1=Voyer D|author2=Voyer S|author3=Bryden MP|date=March 1995|title=Magnitude of sex differences in spatial abilities: a meta-analysis and consideration of critical variables|journal=Psychol Bull|volume=117|issue=2|pages=250–70|doi=10.1037/0033-2909.117.2.250|pmid=7724690}}{{Cite journal|author=Hausmann M|title=Hemispheric asymmetry in spatial attention across the menstrual cycle|journal=Neuropsychologia|volume=43|issue=11|pages=1559–67|year=2005|pmid=16009238|doi= 10.1016/j.neuropsychologia.2005.01.017|s2cid=17133930 }}
Data obtained in norm and in pathology support asymmetric face processing.{{Cite journal|author=De Renzi E|title=Prosopagnosia in two patients with CT scan evidence of damage confined to the right hemisphere|journal=Neuropsychologia|volume=24|issue=3|pages=385–9|year=1986|pmid= 3736820|doi=10.1016/0028-3932(86)90023-0|s2cid=53181659 }}{{Cite journal|author1=De Renzi E|author2=Perani D|author3=Carlesimo GA|author4=Silveri MC|author5=Fazio F|title=Prosopagnosia can be associated with damage confined to the right hemisphere--an MRI and PET study and a review of the literature|journal=Neuropsychologia|volume=32|issue=8|pages=893–902|date=August 1994|pmid=7969865|doi=10.1016/0028-3932(94)90041-8|s2cid=45526094 }}{{Cite journal|author1=Mattson AJ|author2=Levin HS|author3=Grafman J|title=A case of prosopagnosia following moderate closed head injury with left hemisphere focal lesion|journal=Cortex|volume=36|issue=1|pages=125–37|date=February 2000|pmid=10728902|doi=10.1016/S0010-9452(08)70841-4|s2cid=4480823 }}
The left inferior frontal cortex and the bilateral occipitotemporal junction may respond equally to all face conditions. Some contend that both the left inferior frontal cortex and the occipitotemporal junction are implicated in facial memory.{{Cite journal|author=Barton JJ, Cherkasova M|title=Face imagery and its relation to perception and covert recognition in prosopagnosia|journal=Neurology|volume=61|issue=2|pages=220–5|date=July 2003|pmid=12874402|doi=10.1212/01.WNL.0000071229.11658.F8|last2=Cherkasova|s2cid=42156497 }}{{cite journal|last1=Sprengelmeyer|first1=R.|last2=Rausch|first2=M.|last3=Eysel|first3=U. T.|last4=Przuntek|first4=H.|title=Neural structures associated with recognition of facial expressions of basic emotions|journal=Proceedings of the Royal Society of London. Series B: Biological Sciences|date=22 October 1998|volume=265|issue=1409|pages=1927–31|doi=10.1098/rspb.1998.0522|pmid=9821359|pmc=1689486 }}{{cite journal|last1=Verstichel|first1=Patrick|title=Troubles de la reconnaissance des visages : reconnaissance implicite, sentiment de familiarité, rôle de chaque hémisphère|trans-title=Impaired recognition of faces: implicit recognition, feeling of familiarity, role of each hemisphere|language=fr|journal=Bulletin de l'Académie Nationale de Médecine|date=March 2001|volume=185|issue=3|pages=537–553|doi=10.1016/S0001-4079(19)34538-8|pmid=11501262 |doi-access=free}} The right inferior temporal/fusiform gyrus responds selectively to faces but not to non-faces. 
The right temporal pole is activated during the discrimination of familiar faces and scenes from unfamiliar ones.{{cite journal|last1=Nakamura|first1=K.|last2=Kawashima|first2=R|last3=Sato|first3=N|last4=Nakamura|first4=A|last5=Sugiura|first5=M|last6=Kato|first6=T|last7=Hatano|first7=K|last8=Ito|first8=K|last9=Fukuda|first9=H|last10=Schormann|first10=T|last11=Zilles|first11=K|title=Functional delineation of the human occipito-temporal areas related to face and scene processing: A PET study|journal=Brain|date=1 September 2000|volume=123|issue=9|pages=1903–12|doi=10.1093/brain/123.9.1903|pmid=10960054 |doi-access=free}} Right asymmetry in the mid-temporal lobe for faces has also been shown using 133-Xenon measured cerebral blood flow.{{cite journal |last1=Gur |first1=Ruben C. |last2=Jaggi |first2=Jurg L. |last3=Ragland |first3=J. Daniel |last4=Resnick |first4=Susan M. |last5=Shtasel |first5=Derri |last6=Muenz |first6=Larry |last7=Gur |first7=Raquel E. |author-link7=Raquel Gur |date=January 1993 |title=Effects of Memory Processing on Regional Brain Activation: Cerebral Blood Flow in Normal Subjects |journal=International Journal of Neuroscience |volume=72 |issue=1–2 |pages=31–44 |doi=10.3109/00207459308991621 |pmid=8225798}} Other investigators have observed right lateralization for facial recognition in previous electrophysiological and imaging studies.{{cite journal|last1=Ojemann|first1=Jeffrey G.|last2=Ojemann|first2=George A.|last3=Lettich|first3=Ettore|title=Neuronal activity related to faces and matching in human right nondominant temporal cortex|journal=Brain|date=1992|volume=115|issue=1|pages=1–13|doi=10.1093/brain/115.1.1|pmid=1559147 }}
Asymmetric facial perception implies implementing different hemispheric strategies. The right hemisphere would employ a holistic strategy, and the left an analytic strategy.{{Cite journal|author=Bogen JE|title=The other side of the brain. I. Dysgraphia and dyscopia following cerebral commissurotomy|journal=Bull Los Angeles Neurol Soc|volume=34|issue=2|pages=73–105|date=April 1969|pmid=5792283}}{{Cite journal|author=Bogen JE|title=Some educational aspects of hemispheric specialization|journal=UCLA Educator|volume= 17|pages=24–32|year=1975}}{{Cite journal|author=Bradshaw JL, Nettleton NC|title=The nature of hemispheric specialization in man|journal=Behavioral and Brain Sciences|volume=4|pages=51–91|year= 1981|doi=10.1017/S0140525X00007548|last2=Nettleton |s2cid=145235366}}{{Cite journal|author=Galin D|title=Implications for psychiatry of left and right cerebral specialization. A neurophysiological context for unconscious processes|journal=Arch. Gen. Psychiatry|volume=31|issue=4|pages=572–83|date=October 1974|pmid=4421063|doi=10.1001/archpsyc.1974.01760160110022 }}{{dead link|date=December 2016|bot=InternetArchiveBot|fix-attempted=yes }}
A 2007 study, using functional transcranial Doppler spectroscopy, demonstrated that men were right-lateralized for object and facial perception, while women were left-lateralized for facial tasks but showed a rightward tendency or no lateralization for object perception.{{Cite journal|author=Njemanze PC|s2cid=2964994|title=Cerebral lateralisation for facial processing: gender-related cognitive styles determined using Fourier analysis of mean cerebral blood flow velocity in the middle cerebral arteries|journal=Laterality|volume=12|issue=1|pages=31–49|date=January 2007|pmid=17090448|doi=10.1080/13576500600886796 }} This could be taken as evidence for topological organization of these cortical areas in men, suggesting that facial perception extends from the area implicated in object perception to a much greater cortical area.
This agrees with the object form topology hypothesis proposed by Ishai. However, the relatedness of object and facial perception was process-based, and appears to be associated with their common holistic processing strategy in the right hemisphere. Moreover, when the same men were presented with a facial paradigm requiring analytic processing, the left hemisphere was activated. This agrees with Gauthier's suggestion in 2000 that the extrastriate cortex contains areas best suited for different computations, described as the process-map model.
Therefore, the proposed models are not mutually exclusive: facial processing imposes no new constraints on the brain besides those used for other stimuli.
Each stimulus may have been mapped by category into face or non-face, and by process into holistic or analytic. Therefore, a unified category-specific process-mapping system was implemented for either right or left cognitive styles. For facial perception, men likely use a category-specific process-mapping system for right cognitive style, and women use the same for the left.
Ethnicity
{{main|Cross-race effect}}
File:Cross-race effect study samples.jpg
Differences in own- versus other-race face recognition and perceptual discrimination were first researched in 1914.{{cite journal|author=Feingold, C.A.|year=1914|title=The influence of environment on identification of persons and things|url=https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=1279&context=jclc|journal=Journal of Criminal Law and Police Science|volume=5|issue=1|pages=39–51|doi=10.2307/1133283|jstor=1133283|access-date=5 February 2019|archive-date=19 July 2018|archive-url=https://web.archive.org/web/20180719212902/https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=1279&context=jclc|url-status=live}} Humans tend to perceive people of races other than their own as all looking alike:{{blockquote|Other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as whole. Thus, to the uninitiated American all Asiatics look alike, while to the Asiatics, all White men look alike.}}
This phenomenon, known as the cross-race effect, is also called the own-race effect, other-race effect, own race bias, or interracial face-recognition deficit.
It is difficult to measure the true influence of the cross-race effect.
A 1990 study found that the other-race effect is larger among White subjects than among African-American subjects, whereas a 1979 study found the opposite.{{cite journal|last1=Lindsay|first1=D. Stephen|last2=Jack|first2=Philip C. Jr. |last3=Christian|first3=Christian A.|date=13 February 1991|title=Other-race face perception|url=http://web.uvic.ca/~slindsay/publications/1991LindJackChristian.pdf|journal=Journal of Applied Psychology|volume=76|issue=4|access-date=30 September 2016|doi=10.1037/0021-9010.76.4.587|pmid=1917773|pages=587–9|archive-date=3 March 2016|archive-url=https://web.archive.org/web/20160303183033/http://web.uvic.ca/~slindsay/publications/1991LindJackChristian.pdf|url-status=live}} D. Stephen Lindsay and colleagues note that results in these studies could be due to intrinsic difficulty in recognizing the faces presented, an actual difference in the size of the cross-race effect between the two test groups, or some combination of these two factors. Shepherd reviewed studies that found better performance on African-American faces (Brigham & Karkowitz, 1978; Brigham & Williamson, 1979; cited in Shepherd, 1981), studies that found better performance on White faces (Chance, Goldstein, & McBride, 1975; Feinman & Entwistle, 1976; cited in Shepherd, 1981), and studies where no difference was found (Malpass & Kravitz, 1969; Cross, Cross, & Daly, 1971; Shepherd, Deregowski, & Ellis, 1974; all cited in Shepherd, 1981).
Overall, Shepherd reported a reliable positive correlation between the size of the effect and the amount of interaction subjects had with members of the other race. This correlation reflects the fact that African-American subjects, who performed equally well on faces of both races in Shepherd's study, almost always responded with the highest possible self-rating of amount of interaction with white people, whereas white counterparts displayed a larger other-race effect and reported less other-race interaction. This difference in rating was statistically reliable.
The cross-race effect seems to appear in humans at around six months of age.{{cite journal|last1=Kelly|first1=David J.|last2=Quinn|first2=Paul C.|last3=Slater|first3=Alan M.|last4=Lee|first4=Kang|last5=Ge|first5=Liezhong|last6=Pascalis|first6=Olivier|date=1 December 2007|title=The other-race effect develops during infancy: Evidence of perceptual narrowing|journal=Psychological Science|volume=18|issue=12|pages=1084–9|doi=10.1111/j.1467-9280.2007.02029.x|pmid=18031416|pmc=2566514 }}
= Challenging the cross-race effect =
Cross-race effects can be changed through interaction with people of other races.{{cite journal|last1=Sangrigoli|first1=S.|last2=Pallier|first2=C.|last3=Argenti|first3=A.-M.|last4=Ventureyra|first4=V. a. G.|last5=de Schonen|first5=S.|date=1 June 2005|title=Reversibility of the other-race effect in face recognition during childhood|journal=Psychological Science|volume=16|issue=6|pages=440–4|doi=10.1111/j.0956-7976.2005.01554.x|pmid=15943669|s2cid=5572690 |url=https://hal.archives-ouvertes.fr/hal-02336677/file/SangrigoliPallier_finaldraft.pdf }} Other-race experience is a major influence on the cross-race effect.{{Cite journal|last1=Walker|first1=Pamela M|last2=Tanaka|first2=James W|date=1 September 2003|title=An Encoding Advantage for Own-Race versus Other-Race Faces|url=https://doi.org/10.1068/p5098|journal=Perception|language=en|volume=32|issue=9|pages=1117–25|doi=10.1068/p5098|pmid=14651324|s2cid=22723263|issn=0301-0066}} A series of studies revealed that participants with greater other-race experience were consistently more accurate at discriminating other-race faces than participants with less experience.{{Cite journal|last1=Walker|first1=Pamela M.|last2=Hewstone|first2=Miles|date=2006|title=A developmental investigation of other-race contact and the own-race face effect|url=https://bpspsychub.onlinelibrary.wiley.com/doi/abs/10.1348/026151005X51239|journal=British Journal of Developmental Psychology|language=en|volume=24|issue=3|pages=451–463|doi=10.1348/026151005X51239|issn=2044-835X}} Many current models of the effect assume that holistic face processing mechanisms are more fully engaged when viewing own-race faces.{{cite journal|last1=de Gutis|first1=Joseph|last2=Mercado|first2=Rogelio J.|last3=Wilmer|first3=Jeremy|last4=Rosenblatt|first4=Andrew|date=10 April 2013|title=Individual differences in holistic processing predict the own-race advantage in recognition memory|journal=PLOS 
ONE|volume=8|issue=4|page=e58253|doi=10.1371/journal.pone.0058253|pmid=23593119|pmc=3622684|bibcode=2013PLoSO...858253D |doi-access=free}}
The own-race effect appears related to an increased ability to extract information about the spatial relationships between different facial features (Diamond & Carey, 1986; Rhodes et al., 1989).
A deficit occurs when viewing people of another race because visual information specifying race takes up mental attention at the expense of individuating information.{{cite journal|author=Levin, Daniel T.|title=Race as a visual feature: Using visual search and perceptual discrimination tasks to understand face categories and the cross-race recognition deficit|journal=J Exp Psychol Gen|volume=129|issue=4|pages=559–574|date=December 2000|pmid=11142869|doi=10.1037/0096-3445.129.4.559 }}
{{cite journal |vauthors=Senholzi KB, Ito TA |title=Structural face encoding: How task affects the N170's sensitivity to race |journal=Soc Cogn Affect Neurosci |volume=8 |issue=8 |pages=937–42 |date=December 2013 |pmid=22956666 |pmc=3831558 |doi=10.1093/scan/nss091 }} Further research using perceptual tasks could shed light on the specific cognitive processes involved in the other-race effect. The own-race effect likely extends beyond racial membership into in-group favoritism. Categorizing somebody by the university they attend yields similar results to the own-race effect.{{cite journal|last1=Bernstein|first1=Michael J.|last2=Young|first2=Steven G.|last3=Hugenberg|first3=Kurt|title=The Cross-Category Effect: Mere Social Categorization Is Sufficient to Elicit an Own-Group Bias in Face Recognition|journal=Psychological Science|date=August 2007|volume=18|issue=8|pages=706–712|doi=10.1111/j.1467-9280.2007.01964.x|pmid=17680942|s2cid=747276 }}
Similarly, men tend to recognize fewer female faces than women do, whereas there are no sex differences for male faces.{{cite journal|author1=Rehnman, J.|author2=Herlitz, A.|date=April 2006|title=Higher face recognition ability in girls: Magnified by own-sex and own-ethnicity bias|journal=Memory|volume=14|issue=3|pages=289–296|doi=10.1080/09658210500233581|pmid=16574585|s2cid=46188393}}
If made aware of the own-race effect prior to the experiment, test subjects show significantly less, if any, of the own-race effect.{{cite journal|last1=Hugenberg|first1=Kurt|last2=Miller|first2=Jennifer|last3=Claypool|first3=Heather M.|date=1 March 2007|title=Categorization and individuation in the cross-race recognition deficit: Toward a solution to an insidious problem|journal=Journal of Experimental Social Psychology|volume=43|issue=2|pages=334–340|doi=10.1016/j.jesp.2006.02.010}}
Autism
File:Autism-stacking-cans edit.jpg
Autism spectrum disorder is a comprehensive neural developmental disorder that produces social, communicative,{{cite book|last=Tanaka|first=J.W.|title=The development of face processing|year=2003|publisher=Hogrefe & Huber Publishers|location=Ohio|isbn=9780889372641|pages=101–119|author2=Lincoln, S.|author3=Hegg, L.|editor=Schwarzer, G.|editor2=Leder, H.|chapter=A framework for the study and treatment of face processing deficits in autism}} and perceptual deficits.{{cite journal|last1=Behrmann|first1=Marlene|last2=Avidan|first2=Galia|last3=Leonard|first3=Grace Lee|last4=Kimchi|first4=Rutie|last5=Luna|first5=Beatriz|last6=Humphreys|first6=Kate|last7=Minshew|first7=Nancy|title=Configural processing in autism and its relationship to face processing|journal=Neuropsychologia|date=January 2006|volume=44|issue=1|pages=110–129|doi=10.1016/j.neuropsychologia.2005.04.002|pmid=15907952|citeseerx=10.1.1.360.7141|s2cid=6407530 }} Individuals with autism exhibit difficulties with facial identity recognition and recognizing emotional expressions.{{cite book|last=Schreibman|first=Laura|title=Autism|year=1988|publisher=Sage Publications|location=Newbury Park|isbn=978-0803928091|pages=14–47}}{{cite journal|last=Weigelt|first=Sarah|author2=Koldewyn, Kami|author3=Kanwisher, Nancy|title=Face identity recognition in autism spectrum disorders: A review of behavioral studies|journal=Neuroscience & Biobehavioral Reviews|year=2012|volume=36|issue=3|pages=1060–84|doi=10.1016/j.neubiorev.2011.12.008|pmid=22212588|s2cid=13909935}} These deficits are suspected to spring from abnormalities in early and late stages of facial processing.{{cite journal|last=Dawson|first=Geraldine|author2=Webb, Sara Jane|author3=McPartland, James|title=Understanding the nature of face processing impairment in autism: Insights from behavioral and electrophysiological studies|journal=Developmental 
Neuropsychology|year=2005|volume=27|pages=403–424|pmid=15843104|doi=10.1207/s15326942dn2703_6|issue=3|citeseerx=10.1.1.519.8390|s2cid=2566676}}
=Speed and methods=
People with autism process face and non-face stimuli with the same speed.{{cite journal|last=Kita|first=Yosuke|author2=Inagaki, Masumi|title=Face recognition in patients with Autism Spectrum Disorder|journal=Brain and Nerve|year=2012|volume=64|pages=821–831|pmid=22764354|issue=7}}
In non-autistic individuals, a preference for face processing results in a faster processing speed in comparison to non-face stimuli. These individuals use holistic processing when perceiving faces. In contrast, individuals with autism employ part-based processing or bottom-up processing, focusing on individual features rather than the face as a whole.{{cite journal|last1=Grelotti|first1=David J.|last2=Gauthier|first2=Isabel|last3=Schultz|first3=Robert T.|title=Social interest and the development of cortical face specialization: What autism teaches us about face processing|journal=Developmental Psychobiology|date=April 2002|volume=40|issue=3|pages=213–225|doi=10.1002/dev.10028|pmid=11891634|citeseerx=10.1.1.20.4786 }}{{cite journal|last=Riby|first=Deborah|author2=Doherty-Sneddon Gwyneth|title=The eyes or the mouth? Feature salience and unfamiliar face processing in Williams syndrome and autism|journal=The Quarterly Journal of Experimental Psychology|year=2009|volume=62|issue=1|pages=189–203|doi=10.1080/17470210701855629|pmid=18609381|last3=Bruce|first3=Vicki|hdl=1893/394|s2cid=7505424|hdl-access=free}} People with autism direct their gaze primarily to the lower half of the face, specifically the mouth, varying from the eye-trained gaze of non autistic people.{{cite journal|last=Joseph|first=Robert|author2=Tanaka, James|title=Holistic and part-based face recognition in children with autism|journal=Journal of Child Psychology and Psychiatry|year=2003|volume=44|issue=4|pages=529–542|doi=10.1111/1469-7610.00142|pmid=12751845|citeseerx=10.1.1.558.7877}}{{cite journal|last1=Langdell|first1=Tim|title=Recognition of Faces: An approach to the study of autism|journal=Journal of Child Psychology and Psychiatry|date=July 1978|volume=19|issue=3|pages=255–268|doi=10.1111/j.1469-7610.1978.tb00468.x|pmid=681468 }}{{cite journal|last=Spezio|first=Michael|author2=Adolphs, Ralph|author3=Hurley, Robert|author4= Piven, Joseph|title=Abnormal use of facial information in high functioning 
autism|journal=Journal of Autism and Developmental Disorders|date=28 September 2006|volume=37|issue=5|pages=929–939|doi=10.1007/s10803-006-0232-9|pmid=17006775|s2cid=13972633}} This deviation does not employ facial prototypes, which are templates stored in memory that make for easy retrieval.{{cite book|last=Revlin|first=Russell|title=Cognition: Theory and Practice|year=2013|publisher=Worth Publishers|isbn=9780716756675|pages=98–101}}
Additionally, individuals with autism display difficulty with recognition memory, specifically memory that aids in identifying faces. The memory deficit is selective for faces and does not extend to other visual input. These face-memory deficits are possibly products of interference between face-processing regions.
=Associated difficulties=
Autism often manifests in weakened social ability, due to decreased eye contact, joint attention, interpretation of emotional expression, and communicative skills.{{cite journal|last1=Triesch|first1=Jochen|last2=Teuscher|first2=Christof|last3=Deak|first3=Gedeon O.|last4=Carlson|first4=Eric|title=Gaze following: Why (not) learn it?|journal=Developmental Science|year=2006|volume=9|issue=2|pages=125–157|doi=10.1111/j.1467-7687.2006.00470.x|pmid=16472311|url=http://www.escholarship.org/uc/item/8fm3k5xc|access-date=5 February 2019|archive-date=9 October 2017|archive-url=https://web.archive.org/web/20171009001822/http://escholarship.org/uc/item/8fm3k5xc|url-status=live}}
These deficiencies can be seen in infants as young as nine months. Some experts use 'face avoidance' to describe how infants who are later diagnosed with autism preferentially attend to non-face objects. Furthermore, some have proposed that autistic children's difficulty in grasping the emotional content of faces is the result of a general inattentiveness to facial expression, and not an incapacity to process emotional information in general.
These constraints are thought to cause impaired social engagement.{{cite journal|last1=Volkmar|first1=Fred|last2=Chawarska|first2=Kasia|last3=Klin|first3=Ami|title=Autism in infancy and early childhood|journal=Annual Review of Psychology|year=2005|volume=56|pages=315–6|doi=10.1146/annurev.psych.56.091103.070159|pmid=15709938}} Furthermore, research suggests a link between decreased face-processing abilities in individuals with autism and later deficits in theory of mind. While typically developing individuals can relate others' emotional expressions to their actions, individuals with autism do not demonstrate this skill to the same extent.{{cite book|last1=Nader-Grosbois|first1=N.|last2=Day|first2=J.M.|editor1=Matson, J.L.|editor2=Sturmey, R.|title=International handbook of autism and pervasive developmental disorders|year=2011|publisher=Springer Science & Business Media|location=New York|isbn=9781441980649|pages=127–157|chapter=Emotional cognition: theory of mind and face recognition}}
This causation, however, is a chicken-or-egg question. Some theorize that social impairment leads to perceptual problems: a biological lack of social interest inhibits the development of facial recognition through under-use.
= Neurology =
Many of the obstacles that individuals with autism face in terms of facial processing may be derived from abnormalities in the fusiform face area and amygdala.
Typically, the fusiform face area in individuals with autism has reduced volume.{{cite journal|last1=Pierce|first1=K.|last2=Müller|first2=RA|last3=Ambrose|first3=J|last4=Allen|first4=G|last5=Courchesne|first5=E|title=Face processing occurs outside the fusiform 'face area' in autism: evidence from functional MRI|journal=Brain|date=1 October 2001|volume=124|issue=10|pages=2059–73|doi=10.1093/brain/124.10.2059|pmid=11571222 |doi-access=free}} This volume reduction has been attributed to deviant amygdala activity that does not flag faces as emotionally salient, and thus decreases activation levels.
Studies are not conclusive as to which brain areas people with autism use instead. One found that, when looking at faces, people with autism exhibit activity in brain regions normally active when non-autistic individuals perceive objects. Another found that during facial perception, people with autism use different neural systems, each using their own unique neural circuitry.
=Compensation mechanisms=
As autistic individuals age, scores on behavioral tests assessing ability to perform face-emotion recognition increase to levels similar to controls.
The recognition mechanisms of these individuals are still atypical, though often effective.{{cite journal|last1=Harms|first1=Madeline B.|last2=Martin|first2=Alex|last3=Wallace|first3=Gregory L.|title=Facial Emotion Recognition in Autism Spectrum Disorders: A Review of Behavioral and Neuroimaging Studies|journal=Neuropsychology Review|date=September 2010|volume=20|issue=3|pages=290–322|doi=10.1007/s11065-010-9138-6|pmid=20809200|s2cid=24696402 }} In terms of face identity-recognition, compensation can include a more pattern-based strategy, first seen in face inversion tasks. Alternatively, older individuals compensate by using mimicry of others' facial expressions and rely on their motor feedback of facial muscles for face emotion-recognition.{{cite journal|last1=Wright|first1=Barry|last2=Clarke|first2=Natalie|last3=Jordan|first3=Jo|last4=Young|first4=Andrew W.|last5=Clarke|first5=Paula|last6=Miles|first6=Jeremy|last7=Nation|first7=Kate|last8=Clarke|first8=Leesa|last9=Williams|first9=Christine|title=Emotion recognition in faces and the use of visual context in young people with high-functioning autism spectrum disorders|journal=Autism|date=November 2008|volume=12|issue=6|pages=607–626|doi=10.1177/1362361308097118|pmid=19005031|s2cid=206714766 }}
Schizophrenia
Schizophrenia is known to affect attention, perception, memory, learning, processing, reasoning, and problem solving.{{Cite journal|doi=10.1080/13546805.2015.1133407|pmid=26816133|title=Face perception in schizophrenia: A specific deficit|journal=Cognitive Neuropsychiatry|volume=21|issue=1|pages=60–72|year=2016|last1=Megreya|first1=Ahmed M.|s2cid=26125559}}
Schizophrenia has been linked to impaired face and emotion perception.{{cite journal|last1=Onitsuka|first1=Toshiaki|last2=Niznikiewicz|first2=Margaret A.|last3=Spencer|first3=Kevin M.|last4=Frumin|first4=Melissa|last5=Kuroki|first5=Noriomi|last6=Lucia|first6=Lisa C.|last7=Shenton|first7=Martha E.|last8=McCarley|first8=Robert W.|title=Functional and Structural Deficits in Brain Regions Subserving Face Perception in Schizophrenia|journal=American Journal of Psychiatry|date=March 2006|volume=163|issue=3|pages=455–462|doi=10.1176/appi.ajp.163.3.455|pmid=16513867|pmc=2773688 }}{{cite journal|pmid=26778631|year=2016|last1=Tang|first1=D. Y.|title=Facial emotion perception impairments in schizophrenia patients with comorbid antisocial personality disorder|journal=Psychiatry Research|volume=236|pages=22–7|last2=Liu|first2=A. C.|last3=Lui|first3=S. S.|last4=Lam|first4=B. Y.|last5=Siu|first5=B. W.|last6=Lee|first6=T. M.|last7=Cheung|first7=E. F.|doi=10.1016/j.psychres.2016.01.005|s2cid=6029349}}{{cite journal|pmid=21803427|year=2012|last1=Soria Bauser|first1=D|title=Face and body perception in schizophrenia: A configural processing deficit?|journal=Psychiatry Research|volume=195|issue=1–2|pages=9–17|last2=Thoma|first2=P|last3=Aizenberg|first3=V|last4=Brüne|first4=M|last5=Juckel|first5=G|last6=Daum|first6=I|doi=10.1016/j.psychres.2011.07.017|s2cid=6137252}} People with schizophrenia demonstrate worse accuracy and slower response time in face perception tasks in which they are asked to match faces, remember faces, and recognize which emotions are present in a face. People with schizophrenia have more difficulty matching upright faces than they do with inverted faces. A reduction in configural processing, using the distance between features of an item for recognition or identification (e.g. features on a face such as eyes or nose), has also been linked to schizophrenia.
Schizophrenia patients are able to easily identify a "happy" affect but struggle to identify faces as "sad" or "fearful". Impairments in face and emotion perception are linked to impairments in social skills, due to the individual's inability to distinguish facial emotions. People with schizophrenia tend to demonstrate a reduced N170 response, atypical face scanning patterns, and a configural processing dysfunction. The severity of schizophrenia symptoms has been found to correlate with the severity of impairment in face perception.
Individuals with diagnosed schizophrenia and antisocial personality disorder have been found to have even more impairment in face and emotion perception than individuals with just schizophrenia. These individuals struggle to identify anger, surprise, and disgust. There is a link between aggression and emotion perception difficulties for people with this dual diagnosis.
Data from magnetic resonance imaging and functional magnetic resonance imaging has shown that a smaller volume of the fusiform gyrus is linked to greater impairments in face perception.
There is a positive correlation between self-face recognition and other-face recognition difficulties in individuals with schizophrenia. The degree of schizotypy has also been shown to correlate with self-face difficulties, unusual perception difficulties, and other-face recognition difficulties.{{cite journal|last1=Larøi|first1=Frank|last2=D'Argembeau|first2=Arnaud|last3=Brédart|first3=Serge|last4=van der Linden|first4=Martial|title=Face recognition failures in schizotypy|journal=Cognitive Neuropsychiatry|date=November 2007|volume=12|issue=6|pages=554–571|doi=10.1080/13546800701707223|pmid=17978939|hdl=2268/1432 |s2cid=42925862 |hdl-access=free}} Schizophrenia patients report more feelings of strangeness when looking in a mirror than do normal controls. Hallucinations, somatic concerns, and depression have all been found to be associated with self-face perception difficulties.{{cite journal|last1=Bortolon|first1=Catherine|last2=Capdevielle|first2=Delphine|last3=Altman|first3=Rosalie|last4=Macgregor|first4=Alexandra|last5=Attal|first5=Jérôme|last6=Raffard|first6=Stéphane|title=Mirror self-face perception in individuals with schizophrenia: Feelings of strangeness associated with one's own image|journal=Psychiatry Research|date=July 2017|volume=253|pages=205–210|doi=10.1016/j.psychres.2017.03.055|pmid=28390296|s2cid=207453912 }}
Other animals
Neurobiologist Jenny Morton and her team have been able to teach sheep to choose a familiar face over an unfamiliar one when presented with two photographs, which has led to the discovery that sheep can recognize human faces.{{cite web|title=Sheep are able to recognise human faces from photographs|url=https://www.cam.ac.uk/research/news/sheep-are-able-to-recognise-human-faces-from-photographs|website=University of Cambridge|access-date=8 November 2017|date=8 November 2017|archive-date=30 August 2019|archive-url=https://web.archive.org/web/20190830183136/https://www.cam.ac.uk/research/news/sheep-are-able-to-recognise-human-faces-from-photographs|url-status=live}}{{cite web|last1=Rincon|first1=Paul|title=Sheep 'can recognise human faces'|url=https://www.bbc.co.uk/news/science-environment-41905652|website=BBC News|access-date=8 November 2017|date=8 November 2017|archive-date=3 June 2019|archive-url=https://web.archive.org/web/20190603153717/https://www.bbc.co.uk/news/science-environment-41905652|url-status=live}} Archerfish (distant relatives of humans) were able to differentiate between forty-four different human faces, which supports the theory that neither a neocortex nor a history of discerning human faces is needed to do so.{{Cite journal|title=Face facts: Even nonhuman animals discriminate human faces|journal=Learning & Behavior|date=December 2016|volume=44|issue=4|first=Edward A|last=Wasserman|pages = 307–8|doi=10.3758/s13420-016-0239-9|pmid = 27421848|s2cid=8331130|doi-access=free}} Pigeons were found to use the same parts of the brain as humans do to distinguish between happy and neutral faces or male and female faces.
Artificial intelligence
{{main|Facial recognition system}}
Much effort has gone into developing software that can recognize human faces.
This work has occurred in a branch of artificial intelligence known as computer vision, which uses the psychology of face perception to inform software design. One line of work uses noninvasive functional transcranial Doppler spectroscopy to locate specific responses to facial stimuli. The system uses cortical responses, called cortical long-term potentiation, to trigger a search for the target face in a computerized face database.Njemanze, P.C. Transcranial doppler spectroscopy for assessment of brain cognitive functions. United States Patent Application No. 20040158155, 12 August 2004; Njemanze, P.C. Noninvasive transcranial doppler ultrasound face and object recognition testing system. United States Patent No. 6,773,400, 10 August 2004 Such a system provides a brain–machine interface for facial recognition, an approach referred to as cognitive biometrics.
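Whatever the front end, most automated face-recognition pipelines end with the same step: comparing a probe face's feature vector against a database of known faces. A minimal sketch of that matching step, with invented identities and toy feature vectors (real systems derive such embeddings from trained models):

```python
import math

# Hypothetical gallery: identity -> precomputed face feature vector.
# These toy three-dimensional vectors are invented purely for illustration.
GALLERY = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
    "carol": [0.4, 0.4, 0.9],
}

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=1.0):
    """Return the closest gallery identity, or None if no face is near enough."""
    name, dist = min(((n, euclidean(probe, v)) for n, v in gallery.items()),
                     key=lambda t: t[1])
    return name if dist <= threshold else None

print(identify([0.85, 0.15, 0.35], GALLERY))  # closest to "alice"
```

The rejection threshold is what separates identification ("who is this?") from verification ("is this anyone we know?"); tightening it trades false matches for false rejections.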
Another application is estimating age from images of faces. Compared with other face-perception tasks, age estimation from facial images is challenging, mainly because the aging process is influenced by many external factors such as physical condition and lifestyle. The aging process is also slow, making sufficient data difficult to collect.{{Cite journal|url=http://pages.cs.wisc.edu/~huangyz/caip09_Long.pdf|title=Human age estimation by metric learning for regression problems|author=YangJing Long|journal=Proc. International Conference on Computer Analysis of Images and Patterns|year=2009|pages=74–82|url-status=dead|archive-url=https://web.archive.org/web/20100108055346/http://pages.cs.wisc.edu/~huangyz/caip09_Long.pdf|archive-date=8 January 2010 }}
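Age estimation is usually framed as a regression problem: mapping image-derived features to a continuous age. A minimal sketch with a single hypothetical feature (a made-up "wrinkle-density" score) fitted by ordinary least squares; the data points are invented for illustration:

```python
# Toy training data: (hypothetical wrinkle-density feature, known age).
data = [(0.10, 21), (0.25, 30), (0.40, 42), (0.55, 50), (0.70, 61)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Ordinary least-squares slope and intercept for a one-feature model.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

def estimate_age(feature):
    """Predict age from the single toy feature via the fitted line."""
    return slope * feature + intercept

print(round(estimate_age(0.50)))  # predicted age for an unseen face
```

Real systems use many features and nonlinear models (the cited work uses metric learning), but the shape of the problem is the same: supervised regression against known ages.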
= Nemrodov =
In 2016, Dan Nemrodov conducted multivariate analyses of EEG signals thought to carry identity-related information, applying pattern classification to event-related potential signals in both time and space. The main targets of the study were:
- evaluating whether previously known event-related potential components, such as the N170, are involved in individual face recognition
- locating temporal landmarks of individual-level recognition in event-related potential signals
- determining the spatial profile of individual face recognition
For the experiment, conventional event-related potential analyses and pattern classification of event-related potential signals were conducted on preprocessed EEG signals.{{Cite journal|date=1 January 2019|title=Multimodal evidence on shape and surface information in individual face processing|url=https://www.sciencedirect.com/science/article/abs/pii/S1053811918319591|journal=NeuroImage|language=en|volume=184|pages=813–825|doi=10.1016/j.neuroimage.2018.09.083|issn=1053-8119|access-date=2 June 2021|archive-date=15 May 2021|archive-url=https://web.archive.org/web/20210515104335/https://www.sciencedirect.com/science/article/abs/pii/S1053811918319591|url-status=live|last1=Nemrodov|first1=Dan|last2=Behrmann|first2=Marlene|last3=Niemeier|first3=Matthias|last4=Drobotenko|first4=Natalia|last5=Nestor|first5=Adrian|pmid=30291975|s2cid=207211751}}
This and a follow-up study demonstrated a spatio-temporal profile of the individual face recognition process, and showed that individual face images could be reconstructed from that profile and from the informative features that contribute to the encoding of identity-related information.
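Pattern classification of ERP signals can be reduced to a minimal sketch: average the training epochs recorded for each face identity into a template, then assign an unseen epoch to the identity whose template is nearest. The voltage time-series below are invented toy data, not values from the study:

```python
import math

# Toy ERP epochs (short voltage time-series) recorded while viewing
# two different face identities; two training trials per identity.
epochs = {
    "face_A": [[1.0, 2.0, 1.5], [0.8, 2.2, 1.4]],
    "face_B": [[-1.0, 0.5, 2.5], [-1.2, 0.4, 2.6]],
}

def centroid(trials):
    """Point-wise mean across trials: the per-identity ERP template."""
    return [sum(col) / len(col) for col in zip(*trials)]

templates = {label: centroid(trials) for label, trials in epochs.items()}

def classify(epoch):
    """Assign an unseen epoch to the identity with the nearest template."""
    return min(templates, key=lambda lab: math.dist(epoch, templates[lab]))

print(classify([0.9, 2.1, 1.45]))  # falls nearest the face_A template
```

Studies of this kind typically use stronger classifiers (e.g. linear SVMs) and run the analysis separately at each time point to obtain the temporal profile of when identity information becomes decodable.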
Genetic basis
While many cognitive abilities, such as general intelligence, have a clear genetic basis, evidence for the genetic basis of facial recognition is fairly recent. Current evidence suggests that facial recognition abilities are highly linked to genetic, rather than environmental, bases.
Early research focused on genetic disorders which impair facial recognition abilities, such as Turner syndrome, which results in impaired amygdala functioning. A 2003 study found significantly poorer facial recognition abilities in individuals with Turner syndrome, suggesting that the amygdala impacts face perception.
Evidence for a genetic basis in the general population, however, comes from twin studies in which the facial recognition scores on the Cambridge Face Memory test were twice as similar for monozygotic twins in comparison to dizygotic twins.{{cite journal|last1=Wilmer|first1=J. B.|last2=Germine|first2=L.|last3=Chabris|first3=C. F.|last4=Chatterjee|first4=G.|last5=Williams|first5=M.|last6=Loken|first6=E.|last7=Nakayama|first7=K.|last8=Duchaine|first8=B.|date=16 March 2010|title=Human face recognition ability is specific and highly heritable|journal=Proceedings of the National Academy of Sciences|volume=107|issue=11|pages=5238–41|bibcode=2010PNAS..107.5238W|doi=10.1073/pnas.0913053107|pmc=2841913|pmid=20176944|doi-access=free}} This finding was supported by studies which found a similar difference in facial recognition scores{{cite journal|last1=Zhu|first1=Qi|last2=Song|first2=Yiying|last3=Hu|first3=Siyuan|last4=Li|first4=Xiaobai|last5=Tian|first5=Moqian|last6=Zhen|first6=Zonglei|last7=Dong|first7=Qi|last8=Kanwisher|first8=Nancy|last9=Liu|first9=Jia|title=Heritability of the Specific Cognitive Ability of Face Perception|journal=Current Biology|date=January 2010|volume=20|issue=2|pages=137–142|doi=10.1016/j.cub.2009.11.067|pmid=20060296|hdl=1721.1/72376|s2cid=8390495 |doi-access=free|bibcode=2010CBio...20..137Z |hdl-access=free}}{{cite journal|last1=Shakeshaft|first1=Nicholas G.|last2=Plomin|first2=Robert|title=Genetic specificity of face recognition|journal=Proceedings of the National Academy of Sciences|date=13 October 2015|volume=112|issue=41|pages=12887–92|doi=10.1073/pnas.1421881112|pmid=26417086|pmc=4611634|bibcode=2015PNAS..11212887S |doi-access=free}} and those which determined the heritability of facial recognition to be approximately 61%.
These studies found no significant relationship between facial recognition scores and other cognitive abilities, most notably general object recognition. This suggests that facial recognition abilities are heritable and have a genetic basis independent of other cognitive abilities. Research suggests that more extreme variations in facial recognition ability, specifically hereditary prosopagnosia, are also strongly genetically influenced.{{cite journal|last1=Cattaneo|first1=Zaira|last2=Daini|first2=Roberta|last3=Malaspina|first3=Manuela|last4=Manai|first4=Federico|last5=Lillo|first5=Mariarita|last6=Fermi|first6=Valentina|last7=Schiavi|first7=Susanna|last8=Suchan|first8=Boris|last9=Comincini|first9=Sergio|date=December 2016|title=Congenital prosopagnosia is associated with a genetic variation in the oxytocin receptor (OXTR) gene: An exploratory study|journal=Neuroscience|volume=339|pages=162–173|doi=10.1016/j.neuroscience.2016.09.040|pmid=27693815|s2cid=37038809}}
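Twin-study heritability estimates like those above are commonly obtained with Falconer's formula, h² = 2(r_MZ − r_DZ), which doubles the gap between the monozygotic and dizygotic twin correlations. A worked example with illustrative correlations (not the cited studies' actual figures) chosen to land near the reported 61%:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's formula: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Illustrative twin correlations, invented for this example: monozygotic
# twins' scores correlate much more strongly than dizygotic twins'.
h2 = falconer_heritability(r_mz=0.70, r_dz=0.395)
print(f"{h2:.2f}")  # prints 0.61
```

The intuition: MZ twins share all their segregating genes while DZ twins share about half, so the excess MZ similarity, doubled, estimates the share of variance attributable to additive genetic effects.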
For hereditary prosopagnosics, an autosomal dominant model of inheritance has been proposed.{{cite journal|last1=Kennerknecht|first1=Ingo|last2=Grueter|first2=Thomas|last3=Welling|first3=Brigitte|last4=Wentzek|first4=Sebastian|last5=Horst|first5=Jürgen|last6=Edwards|first6=Steve|last7=Grueter|first7=Martina|title=First report of prevalence of non-syndromic hereditary prosopagnosia (HPA)|journal=American Journal of Medical Genetics Part A|date=1 August 2006|volume=140A|issue=15|pages=1617–22|doi=10.1002/ajmg.a.31343|pmid=16817175|s2cid=2401 }} Research also correlated the probability of hereditary prosopagnosia with the single nucleotide polymorphisms along the oxytocin receptor gene (OXTR), suggesting that these alleles serve a critical role in normal face perception. Mutation from the wild type allele at these loci has also been found to result in other disorders in which social and facial recognition deficits are common, such as autism spectrum disorder, which may imply that the genetic bases for general facial recognition are complex and polygenic.
This relationship between OXTR and facial recognition is also supported by studies of individuals who do not have hereditary prosopagnosia.{{cite journal|last1=Melchers|first1=Martin|last2=Montag|first2=Christian|last3=Markett|first3=Sebastian|last4=Reuter|first4=Martin|title=Relationship between oxytocin receptor genotype and recognition of facial emotion.|journal=Behavioral Neuroscience|date=2013|volume=127|issue=5|pages=780–7|doi=10.1037/a0033748|pmid=24128365 }}{{cite journal|last1=Westberg|first1=Lars|last2=Henningsson|first2=Susanne|last3=Zettergren|first3=Anna|last4=Svärd|first4=Joakim|last5=Hovey|first5=Daniel|last6=Lin|first6=Tian|last7=Ebner|first7=Natalie C.|last8=Fischer|first8=Håkan|title=Variation in the Oxytocin Receptor Gene Is Associated with Face Recognition and its Neural Correlates|journal=Frontiers in Behavioral Neuroscience|date=22 September 2016|volume=10|page=178|doi=10.3389/fnbeh.2016.00178|pmid=27713694|pmc=5031602 |doi-access=free}}
Social perceptions of faces
People make rapid judgements about others based on facial appearance. Some judgements are formed very quickly and accurately, with adults correctly categorising the sex of adult faces with only a 75 ms exposure{{Cite journal |last1=O'Toole |first1=Alice J |last2=Peterson |first2=Jennifer |last3=Deffenbacher |first3=Kenneth A |date=June 1996 |title=An 'other-Race Effect' for Categorizing Faces by Sex |url=http://journals.sagepub.com/doi/10.1068/p250669 |journal=Perception |language=en |volume=25 |issue=6 |pages=669–676 |doi=10.1068/p250669 |pmid=8888300 |s2cid=7191979 |issn=0301-0066}} and with near 100% accuracy.{{Cite journal |last1=Wild |first1=Heather A. |last2=Barrett |first2=Susan E. |last3=Spence |first3=Melanie J. |last4=O'Toole |first4=Alice J. |last5=Cheng |first5=Yi D. |last6=Brooke |first6=Jessica |date=December 2000 |title=Recognition and Sex Categorization of Adults' and Children's Faces: Examining Performance in the Absence of Sex-Stereotyped Cues |url=https://linkinghub.elsevier.com/retrieve/pii/S0022096599925547 |journal=Journal of Experimental Child Psychology |language=en |volume=77 |issue=4 |pages=269–291 |doi=10.1006/jecp.1999.2554|pmid=11063629 }} The accuracy of some other judgements is less easily confirmed, though there is evidence that perceptions of health made from faces are at least partly accurate, with health judgements reflecting fruit and vegetable intake,{{Cite journal |last1=Stephen |first1=Ian D. |last2=Coetzee |first2=Vinet |last3=Perrett |first3=David I. |date=May 2011 |title=Carotenoid and melanin pigment coloration affect perceived human health |url=https://linkinghub.elsevier.com/retrieve/pii/S1090513810001169 |journal=Evolution and Human Behavior |language=en |volume=32 |issue=3 |pages=216–227 |doi=10.1016/j.evolhumbehav.2010.09.003|bibcode=2011EHumB..32..216S }} body fat, and BMI.
People also form judgements about others' personalities from their faces, and there is evidence of at least partial accuracy in this domain too.{{Cite journal |last1=Antar |first1=Joseph C. |last2=Stephen |first2=Ian D. |date=2021-07-01 |title=Facial shape provides a valid cue to sociosexuality in men but not women |url=https://www.sciencedirect.com/science/article/pii/S109051382100012X |journal=Evolution and Human Behavior |language=en |volume=42 |issue=4 |pages=361–370 |doi=10.1016/j.evolhumbehav.2021.02.001 |bibcode=2021EHumB..42..361A |s2cid=233919468 |issn=1090-5138}}
= Valence-dominance model =
The valence-dominance model of face recognition is a widely cited model that suggests that the social judgements made of faces can be summarised into two dimensions: valence (positive-negative) and dominance (dominant-submissive).{{Cite journal |last1=Oosterhof |first1=Nikolaas N. |last2=Todorov |first2=Alexander |date=2008-08-12 |title=The functional basis of face evaluation |journal=Proceedings of the National Academy of Sciences |language=en |volume=105 |issue=32 |pages=11087–92 |doi=10.1073/pnas.0805664105 |issn=0027-8424 |pmc=2516255 |pmid=18685089|bibcode=2008PNAS..10511087O |doi-access=free }} A recent large-scale multi-country replication project largely supported this model across different world regions, though found that a potential third dimension may also be important in some regions{{Cite journal |last1=Jones |first1=Benedict C. |last2=DeBruine |first2=Lisa M. |last3=Flake |first3=Jessica K. |last4=Liuzza |first4=Marco Tullio |last5=Antfolk |first5=Jan |last6=Arinze |first6=Nwadiogo C. |last7=Ndukaihe |first7=Izuchukwu L. G. |last8=Bloxsom |first8=Nicholas G. |last9=Lewis |first9=Savannah C. |last10=Foroni |first10=Francesco |last11=Willis |first11=Megan L. |last12=Cubillas |first12=Carmelo P. |last13=Vadillo |first13=Miguel A. |last14=Turiegano |first14=Enrique |last15=Gilead |first15=Michael |date=January 2021 |title=To which world regions does the valence–dominance model of social perception apply? |url=https://www.nature.com/articles/s41562-020-01007-2 |journal=Nature Human Behaviour |language=en |volume=5 |issue=1 |pages=159–169 |doi=10.1038/s41562-020-01007-2 |pmid=33398150 |hdl=10037/23933 |s2cid=229298679 |issn=2397-3374|hdl-access=free }} and other research suggests that the valence-dominance model also applies to social perceptions of bodies.{{Cite journal |last1=Tzschaschel |first1=Eva |last2=Brooks |first2=Kevin R. |last3=Stephen |first3=Ian D. 
|title=The valence-dominance model applies to body perception |journal=Royal Society Open Science |year=2022 |volume=9 |issue=9 |pages=220594 |doi=10.1098/rsos.220594 |pmc=9449465 |pmid=36133152|bibcode=2022RSOS....920594T }}
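The model's two dimensions were originally derived by principal component analysis of many trait judgements. As a simplified sketch of the idea, with invented ratings and a crude two-trait grouping standing in for the PCA-derived dimensions, each face can be placed in the valence-dominance plane:

```python
# Hypothetical mean trait ratings (1-9 scale) for two faces; the trait
# groupings loosely follow the model's two dimensions and are invented
# for illustration, not taken from the cited studies.
VALENCE_TRAITS = ("trustworthy", "attractive")
DOMINANCE_TRAITS = ("dominant", "aggressive")

ratings = {
    "face_1": {"trustworthy": 7.1, "attractive": 6.8,
               "dominant": 3.2, "aggressive": 2.9},
    "face_2": {"trustworthy": 3.0, "attractive": 3.4,
               "dominant": 7.5, "aggressive": 7.1},
}

def position(face):
    """Place a face in the valence-dominance plane by averaging trait groups."""
    r = ratings[face]
    valence = sum(r[t] for t in VALENCE_TRAITS) / len(VALENCE_TRAITS)
    dominance = sum(r[t] for t in DOMINANCE_TRAITS) / len(DOMINANCE_TRAITS)
    return valence, dominance

v, d = position("face_1")
print(f"valence={v:.2f}, dominance={d:.2f}")  # high valence, low dominance
```

The claim of the model is that two such coordinates capture most of the variance in the many snap judgements people make from faces; the replication project cited above tested how well that compression holds across world regions.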
See also
{{div col}}
- Apophenia, seeing meaningful patterns in random data
- Autism
- Capgras delusion
- Cognitive neuropsychology
- Cross-race effect
- Delusional misidentification syndrome
- Emotion perception
- Facial expression
- Facial recognition system
- Fregoli delusion
- Gestalt psychology
- Hollow-Face illusion
- Nonverbal learning disorder
- Pareidolia
- Prosopagnosia
- Social cognition
- Social intelligence
- Super recogniser
{{div col end}}
References
{{reflist}}
Further reading
{{refbegin}}
- {{cite book |last=Bruce |first=V. |last2=Young |first2=A. |title=In the Eye of the Beholder: The Science of Face Perception |publisher=Oxford University Press |date=2000 |isbn=0-19-852439-0 |oclc=42406634 }}
{{refend}}
External links
- [https://www.face-rec.org/ Face Recognition Homepage]
- [https://web.archive.org/web/20160927124648/http://scienceaid.co.uk/psychology/cognition/face.html Science Aid: Face Recognition]
- [https://web.archive.org/web/20150409062831/http://www.faceresearch.org/ FaceResearch] – Scientific research and online studies on face perception
- [https://www.faceblind.org/ Face Blind] Prosopagnosia Research Centers at Harvard and University College London
- [https://web.archive.org/web/20060620080614/http://www.icn.ucl.ac.uk/facetests/ Face Recognition Tests] — online tests for self-assessment of face recognition abilities.
- [https://web.archive.org/web/20060509092446/http://www.psy.vanderbilt.edu/faculty/gauthier/PEN/ Perceptual Expertise Network (PEN)] Collaborative group of cognitive neuroscientists studying perceptual expertise, including face recognition.
- [https://www.uwa.edu.au/Research/Perception Perception Lab] at the University of Western Australia
- [https://perception-lab.st-andrews.ac.uk/ Perception Lab] at the University of St Andrews, Scotland
- [http://hdl.handle.net/1893/112 The effect of facial expression and identity information on the processing of own and other race faces] by Yoriko Hirose, PhD thesis from the University of Stirling
- [https://web.archive.org/web/20080708184525/http://www.globalemotion.com/ Global Emotion] Online-Training to overcome Caucasian-Asian other-race effect
- [https://www.npr.org/2019/05/20/725048026/some-people-are-great-at-recognizing-faces-others-not-so-much Some People Are Great At Recognizing Faces. Others...Not So Much] — Hidden Brain podcast
{{Human group differences}}