Article
Faces and Voices Processing in Human and Primate Brains: Rhythmic and Multimodal Mechanisms Underlying the Evolution and Development of Speech
Date
2022
Registered in:
Michon M, Zamorano-Abramson J and Aboitiz F (2022) Faces and Voices Processing in Human and Primate Brains: Rhythmic and Multimodal Mechanisms Underlying the Evolution and Development of Speech. Front. Psychol. 13:829083. doi: 10.3389/fpsyg.2022.829083
Authors
Michon, Maëva
Zamorano-Abramson, José
Aboitiz, Francisco
Institution
Abstract
While influential works since the 1970s have widely assumed that imitation is an
innate skill in both human and non-human primate neonates, recent empirical studies
and meta-analyses have challenged this view, indicating other forms of reward-based
learning as relevant factors in the development of social behavior. The translation of visual input into matching motor output that underlies imitation abilities instead seems to develop along with social interaction and sensorimotor experience during infancy
and childhood. Recently, a new visual stream has been identified in both human and
non-human primate brains, updating the dual visual stream model. This third pathway
is thought to be specialized for dynamic aspects of social perception, such as eye gaze and facial expression, and, crucially, for the audio-visual integration of speech. Here, we
review empirical studies addressing an understudied but crucial aspect of speech and
communication, namely the processing of visual orofacial cues (i.e., the perception of
a speaker’s lips and tongue movements) and its integration with vocal auditory cues.
Throughout this review, we offer new insights from our understanding of speech as the product of the evolution and development of a rhythmic and multimodal organization of sensorimotor brain networks, supporting volitional motor control of the upper vocal tract and audio-visual voice-face integration.