dc.creator: Buiatti, Marco
dc.creator: Pena, Marcela
dc.creator: Dehaene-Lambertz, Ghislaine
dc.date.accessioned: 2024-01-10T12:39:52Z
dc.date.accessioned: 2024-05-02T16:36:33Z
dc.date.available: 2024-01-10T12:39:52Z
dc.date.available: 2024-05-02T16:36:33Z
dc.date.created: 2024-01-10T12:39:52Z
dc.date.issued: 2009
dc.identifier: 10.1016/j.neuroimage.2008.09.015
dc.identifier: 1095-9572
dc.identifier: 1053-8119
dc.identifier: MEDLINE:18929668
dc.identifier: https://doi.org/10.1016/j.neuroimage.2008.09.015
dc.identifier: https://repositorio.uc.cl/handle/11534/77244
dc.identifier: WOS:000262301100021
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/9266642
dc.description.abstract: In order to learn an oral language, humans have to discover words in a continuous signal. Streams of artificial monotonous speech can be readily segmented based on statistical analysis of the syllables' distribution. This parsing is considerably improved when acoustic cues, such as subliminal pauses, are added, suggesting that a different mechanism is involved. Here we used a frequency-tagging approach to explore the neural mechanisms underlying word learning while listening to continuous speech. High-density EEG was recorded in adults listening to a concatenation of either random syllables or tri-syllabic artificial words, with or without subliminal pauses added every three syllables. Peaks in the EEG power spectrum at the one- and three-syllable occurrence frequencies were used to tag the perception of a monosyllabic or trisyllabic structure, respectively. Word streams elicited the suppression of the one-syllable frequency peak, steadily present during random streams, suggesting that syllables are no longer perceived as isolated segments but are bound to adjacent syllables. Crucially, three-syllable frequency peaks were observed only during word streams with pauses, and were positively correlated with the explicit recall of the detected words. This result shows that pauses facilitate a fast, explicit and successful extraction of words from continuous speech, and that the frequency-tagging approach is a powerful tool for tracking brain responses to different hierarchical units of the speech structure. (C) 2008 Elsevier Inc. All rights reserved.
dc.language: en
dc.publisher: ACADEMIC PRESS INC ELSEVIER SCIENCE
dc.rights: restricted access
dc.subject: EEG
dc.subject: Explicit learning
dc.subject: Prosody
dc.subject: Speech segmentation
dc.subject: Steady-state response
dc.subject: AUDITORY-CORTEX
dc.subject: CONSCIOUS PERCEPTION
dc.subject: YOUNG INFANTS
dc.subject: SEGMENTATION
dc.subject: LANGUAGE
dc.subject: UNITS
dc.subject: PATTERNS
dc.subject: SEGREGATION
dc.subject: BOUNDARIES
dc.subject: SYLLABLES
dc.title: Investigating the neural correlates of continuous speech computation with frequency-tagged neuroelectric responses
dc.type: article