Showing items 1-10 of 75
Neural information extraction pipeline for cyber forensics with pre-trained language models
(2022-08-03)
Digital investigation is a challenging task composed of several steps and procedures that are often slow and error-prone. From information gathering to the analysis and reporting of results, many subtasks ...
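As a side note on the technique named in the title, the sketch below shows the kind of pre-trained named-entity extraction step such a forensics pipeline could build on. The checkpoint (dslim/bert-base-NER) and the example sentence are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: entity extraction with a pre-trained language model.
# The checkpoint and input text are assumed examples, not the paper's setup.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
for entity in ner("The suspect emailed John Doe from Lisbon on 3 August 2022."):
    # Each result carries the entity type, matched span, and confidence.
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```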
Neural networks for extracting relevant information from legal judgments
(2023)
In recent years, Natural Language Processing (NLP) has used Machine Learning techniques to represent text fragments. The introduction of the Transformer architecture (Vaswani ...
Evaluation Benchmarks for Spanish Sentence Representations
(European Language Resources Association (ELRA), 2022)
© European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0. Due to the success of pre-trained language models, versions for languages other than English have been released in recent years. This fact implies ...
PetroBERT: A Domain Adaptation Language Model for Oil and Gas Applications in Portuguese
(2022-01-01)
This work proposes PetroBERT, a BERT-based model adapted to the oil and gas exploration domain in Portuguese. PetroBERT was pre-trained using the Petrolês corpus and a private daily drilling report corpus over ...
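For context on the approach this abstract describes, here is a minimal sketch of domain-adaptive pre-training: continuing a BERT checkpoint's masked-LM objective on in-domain text. The base checkpoint, corpus path, and hyperparameters are illustrative assumptions; the Petrolês and drilling-report corpora are private and not reproduced here.

```python
# Sketch of domain-adaptive masked-LM pre-training with Hugging Face.
# Checkpoint, corpus path, and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "neuralmind/bert-base-portuguese-cased"  # assumed Portuguese BERT base
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Hypothetical local text file standing in for the domain corpus.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, the standard BERT masked-LM recipe.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-bert-sketch",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()  # continues pre-training on the in-domain text
```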
Comparing contextual embeddings on the semantic similarity search problem in Portuguese
(Universidade Federal do Rio Grande do Norte, Brasil, UFRN, Engenharia de Computação, 2021-04-27)
Semantic textual similarity (STS) is a natural language processing problem that aims to measure how semantically similar two sentences are. This problem has been gaining great attention both in industry, through ...
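To make the STS task concrete, the sketch below scores two sentences with mean-pooled contextual embeddings and cosine similarity. The checkpoint, pooling choice, and example sentences are assumptions for illustration, not the thesis's exact setup.

```python
# Sketch: semantic similarity from mean-pooled contextual embeddings.
# Checkpoint and pooling are assumed choices, not the thesis's exact setup.
import torch
from transformers import AutoModel, AutoTokenizer

name = "neuralmind/bert-base-portuguese-cased"  # assumed Portuguese encoder
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(sentence: str) -> torch.Tensor:
    # Encode the sentence and mean-pool token vectors over non-padding
    # positions into a single sentence embedding.
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("O gato dorme no sofá.")
b = embed("Um felino está dormindo no sofá.")
print(f"STS score: {torch.cosine_similarity(a, b).item():.3f}")  # near 1.0 = similar
```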
Improving Asynchronous Interview Interaction with Follow-up Question Generation
The user experience of an asynchronous video interview system is conventionally neither reciprocal nor conversational. Interview applicants expect it to feel natural and coherent, like a typical face-to-face interview. We ...
Masked language modeling and fine-tuning with the BERT and XLM-RoBERTa models for evaluating complex-word prediction in English
(Universidad de Guayaquil. Facultad de Ciencias Matemáticas y Físicas. Carrera de Ingeniería en Sistemas Computacionales., 2023-03)
This project sets out to evaluate the Transformer-based models BERT and XLM-RoBERTa using the Masked language modeling and fine-tuning techniques applied to the English language, with the objective ...
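To illustrate the first of the two techniques this abstract names, the sketch below runs masked language modeling with a pre-trained BERT checkpoint through Hugging Face's fill-mask pipeline; the checkpoint and test sentence are assumptions for illustration, and fine-tuning would be a separate training step.

```python
# Sketch: masked-word prediction with a pre-trained BERT checkpoint.
# The checkpoint and test sentence are assumed examples.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
# The pipeline ranks vocabulary items for the [MASK] position.
for pred in fill("The committee reached a [MASK] decision."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```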
Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations
(Association for Computational Linguistics (ACL), 2021)
Current language models are usually trained using a self-supervised scheme, where the main focus is learning representations at the word or sentence level. However, there has been limited progress in generating useful ...