dc.contributorUniversidade Estadual Paulista (Unesp)
dc.date.accessioned2020-12-10T19:50:10Z
dc.date.accessioned2022-12-19T20:18:56Z
dc.date.available2020-12-10T19:50:10Z
dc.date.available2022-12-19T20:18:56Z
dc.date.created2020-12-10T19:50:10Z
dc.date.issued2019-12-01
dc.identifierSN Applied Sciences. Cham: Springer International Publishing AG, v. 1, n. 12, 17 p., 2019.
dc.identifier2523-3963
dc.identifierhttp://hdl.handle.net/11449/196603
dc.identifier10.1007/s42452-019-1689-4
dc.identifierWOS:000515158800026
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/5377240
dc.description.abstractTraditional word-embedding approaches, such as bag-of-words models, tackle the problem of text data representation by linking words in a document to a binary vector that marks whether or not they occur. Additionally, a term frequency-inverse document frequency encoding provides a numerical statistic reflecting how important a particular word is in a document. Nevertheless, the major weakness of such models is the loss of contextual meaning, which inhibits them from learning proper pieces of information. A newer neural-based embedding approach, known as Word2Vec, mitigates that issue by minimizing the loss of predicting a vector for a particular word from its surrounding words. Furthermore, as these embedding-based methods produce high-dimensional data, it is impossible to visualize them accurately. With that in mind, dimensionality reduction techniques, such as t-SNE, provide a method to generate two-dimensional data, allowing its visualization. One common problem of such reductions is the setting of their hyperparameters, such as the perplexity parameter. Therefore, this paper addresses the problem of selecting a suitable perplexity through a meta-heuristic optimization process. Meta-heuristic-driven techniques, such as Artificial Bee Colony, Bat Algorithm, Genetic Programming, and Particle Swarm Optimization, are employed to find proper values for the perplexity parameter. The results revealed that optimizing t-SNE's perplexity is suitable for improving data visualization and is thus an exciting field to be fostered.
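The pipeline the abstract describes (a TF-IDF text representation reduced to two dimensions by t-SNE, with perplexity as the tunable hyperparameter) can be sketched as follows. This is a minimal illustration using scikit-learn on a toy corpus, not the paper's method: the documents, the perplexity values tried, and the use of TF-IDF instead of Word2Vec vectors are all assumptions made for brevity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

# Toy corpus standing in for the text data (assumption: any
# high-dimensional text representation behaves the same way here).
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
    "stocks fell on the market today",
    "the market rallied after the news",
    "investors sold stocks and bonds",
]

# TF-IDF encoding: each document becomes a high-dimensional vector
# weighting words by in-document frequency and corpus-wide rarity.
X = TfidfVectorizer().fit_transform(docs).toarray()

# t-SNE reduces the vectors to 2-D for visualization; perplexity
# (roughly, the effective number of neighbors each point considers)
# must be smaller than the number of samples.
for perplexity in (2.0, 5.0):
    emb = TSNE(n_components=2, perplexity=perplexity,
               init="random", random_state=0).fit_transform(X)
    print(perplexity, emb.shape)  # each run yields a (6, 2) embedding
```

A meta-heuristic optimizer, as in the paper, would wrap the t-SNE call above and search over the perplexity value, scoring each candidate embedding instead of picking perplexities by hand.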
dc.languageeng
dc.publisherSpringer
dc.relationSN Applied Sciences
dc.sourceWeb of Science
dc.subjectWord embeddings
dc.subjectDimensionality reduction
dc.subjectMeta-heuristic optimization
dc.titleHow optimizing perplexity can affect the dimensionality reduction on word embeddings visualization?
dc.typeJournal articles