dc.creator: Gauder, María Lara
dc.creator: Gravano, Agustin
dc.creator: Ferrer, Luciana
dc.creator: Riera, Pablo Ernesto
dc.creator: Brussino, Silvina Alejandra
dc.date.accessioned: 2022-06-08T05:21:45Z
dc.date.accessioned: 2022-10-15T17:03:17Z
dc.date.available: 2022-06-08T05:21:45Z
dc.date.available: 2022-10-15T17:03:17Z
dc.date.created: 2022-06-08T05:21:45Z
dc.date.issued: 2019
dc.identifier: A protocol for collecting speech data with varying degrees of trust; Speech, Music and Mind 2019: Detecting and Influencing Mental States with Audio; Vienna; Austria; 2019; 6-10
dc.identifier: http://hdl.handle.net/11336/159186
dc.identifier: CONICET Digital
dc.identifier: CONICET
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4412866
dc.description.abstract: This paper describes a novel experimental setup for collecting speech data from subjects induced to have different degrees of trust in the skills of a conversational agent. The protocol consists of an interactive session where the subject is asked to respond to a series of factual questions with the help of a virtual assistant. In order to induce subjects to either trust or distrust the agent's skills, they are first informed that the agent was previously rated by other users as being either good or bad; subsequently, the agent answers the subjects' questions consistently with its alleged abilities. These interactions will be speech-based, with subjects and agents communicating verbally, which will allow for the recording of speech produced under different trust conditions. Ultimately, the resulting dataset will be used to study the feasibility of automatically predicting the degree of trust from speech. This paper describes a preliminary experiment using a text-only version of the protocol in Argentine Spanish. The results show that the protocol effectively succeeds in influencing subjects into the desired mental state of either trusting or distrusting the agent's skills. We are currently beginning the collection of the speech dataset, which will be made publicly available once ready.
dc.language: eng
dc.publisher: International Speech Communication Association
dc.relation: info:eu-repo/semantics/altIdentifier/url/https://www.isca-speech.org/archive_v0/SMM_2019/abstracts/SMM19_paper_4.html
dc.relation: info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.21437/SMM.2019-2
dc.rights: https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.rights: info:eu-repo/semantics/openAccess
dc.source: Proc. Workshop on Speech, Music and Mind (SMM 2019)
dc.subject: TRUST/DISTRUST
dc.subject: SPEECH CORPUS
dc.subject: MENTAL STATE
dc.subject: SPOKEN DIALOGUE SYSTEM
dc.subject: AUTOMATIC DETECTION
dc.title: A protocol for collecting speech data with varying degrees of trust
dc.type: info:eu-repo/semantics/publishedVersion
dc.type: info:eu-repo/semantics/conferenceObject
dc.type: info:ar-repo/semantics/documento de conferencia