dc.contributorhttps://orcid.org/0000-0002-7337-8974
dc.contributorhttps://orcid.org/0000-0002-9498-6602
dc.creatorNematollahi, Mohammad Ali
dc.creatorGamboa Rosales, Hamurabi
dc.creatorMartínez Ruíz, Francisco Javier
dc.creatorDe la Rosa Vargas, José Ismael
dc.creatorAl-Haddad, S.A.R.
dc.creatorEsmaeilpour, Mansour
dc.date.accessioned2020-04-16T18:58:20Z
dc.date.available2020-04-16T18:58:20Z
dc.date.created2020-04-16T18:58:20Z
dc.date.issued2017-03
dc.identifier1380-7501
dc.identifier1573-7721
dc.identifierhttp://ricaxcan.uaz.edu.mx/jspui/handle/20.500.11845/1711
dc.identifierhttps://doi.org/10.48779/nsng-kq12
dc.description.abstractIn this paper, a Multi-Factor Authentication (MFA) method is developed by combining a Personal Identification Number (PIN), a One-Time Password (OTP), and speaker biometrics through speech watermarks. To this end, multipurpose digital speech watermarking is applied to embed semi-fragile and robust watermarks simultaneously in the speech signal, providing tamper detection and proof of ownership, respectively. For the blind semi-fragile speech watermarking technique, the Discrete Wavelet Packet Transform (DWPT) and Quantization Index Modulation (QIM) are used to embed the watermark in an angle of the wavelet sub-bands where more speaker-specific information is available.
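The abstract describes QIM embedding of a semi-fragile watermark in DWPT sub-bands. As a rough illustration only, and not the authors' angle-based scheme, the following minimal parity-QIM sketch in Python assumes the PyWavelets library and hypothetical parameters (delta, wavelet, level, and the choice of sub-band):

    import numpy as np
    import pywt

    def qim_embed(signal, bits, delta=0.05, wavelet='db4', level=3):
        # Decompose the speech signal with a wavelet packet transform (DWPT).
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode='symmetric', maxlevel=level)
        band = wp.get_level(level, order='natural')[0].data  # one sub-band, chosen arbitrarily for this sketch
        for i, bit in enumerate(bits):
            # Parity QIM: snap the coefficient onto the quantization lattice whose offset encodes the bit.
            band[i] = 2 * delta * np.round((band[i] - bit * delta) / (2 * delta)) + bit * delta
        # Rebuild the time-domain signal from the modified packet tree.
        return wp.reconstruct(update=True)

    def qim_extract(signal, n_bits, delta=0.05, wavelet='db4', level=3):
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode='symmetric', maxlevel=level)
        band = wp.get_level(level, order='natural')[0].data
        # The parity of the nearest multiple of delta reveals the embedded bit.
        return [int(np.round(band[i] / delta)) % 2 for i in range(n_bits)]

In the method summarized above, the sub-band would be selected for its speaker-specific content, and this semi-fragile layer would be combined with a robust ownership watermark rather than used alone.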
dc.languageeng
dc.publisherSpringer
dc.relationgeneralPublic
dc.relationhttps://doi.org/10.1007/s11042-016-3350-1
dc.rightshttp://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.rightsAttribution-NonCommercial-NoDerivs 3.0 United States
dc.sourceMultimedia Tools and Applications, Vol. 76, pp. 7251-7281
dc.titleMulti-factor authentication model based on multipurpose speech watermarking and online speaker recognition
dc.typeinfo:eu-repo/semantics/article

