Computación y Sistemas

Online version ISSN 2007-9737 · Print version ISSN 1405-5546

Abstract

PHAM, Thuong-Hai; MACHÁČEK, Dominik and BOJAR, Ondřej. Promoting the Knowledge of Source Syntax in Transformer NMT Is Not Needed. Comp. y Sist. [online]. 2019, vol.23, n.3, pp.923-934. Epub 09-Aug-2021. ISSN 2007-9737. https://doi.org/10.13053/cys-23-3-3265.

The utility of linguistic annotation in neural machine translation seemed to have been established in past papers. The experiments were, however, limited to recurrent sequence-to-sequence architectures and relatively small data settings. We focus on the state-of-the-art Transformer model and use comparably larger corpora. Specifically, we try to promote the knowledge of source-side syntax using multi-task learning, either through simple data manipulation techniques or through a dedicated model component. In particular, we train one of the Transformer attention heads to produce the source-side dependency tree. Overall, our results cast some doubt on the utility of multi-task setups with linguistic information. The data manipulation techniques recommended in previous works prove ineffective in large data settings. The treatment of self-attention as dependencies seems much more promising: it helps in translation and reveals that the Transformer model can very easily grasp the syntactic structure. An important but curious result is, however, that identical gains are obtained by using trivial "linear trees" instead of true dependencies. The gain thus may not come from the added linguistic knowledge but from some simpler regularizing effect that we induced on the self-attention matrices.
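The core mechanism described in the abstract, supervising a single self-attention head so that its attention distribution predicts each source token's dependency head, together with the trivial "linear tree" baseline in which every token attaches to its left neighbour, can be sketched in a few lines. The sketch below is only an illustration of that idea under assumed conventions (PyTorch, the root attaching to itself, and names such as dependency_head_loss and linear_tree_heads invented here); it is not the authors' implementation.

    # Illustrative sketch (not the authors' code): an auxiliary loss that pushes one
    # encoder self-attention head towards the source-side dependency tree.
    import torch
    import torch.nn.functional as F

    def dependency_head_loss(attn_weights: torch.Tensor, head_ids: torch.Tensor) -> torch.Tensor:
        """Cross-entropy between one attention head's distribution and gold heads.

        attn_weights: (batch, src_len, src_len); row i is the chosen head's
                      attention distribution for token i over all source positions.
        head_ids:     (batch, src_len); index of the dependency head of each token
                      (the root points to itself, an assumed convention here).
        """
        log_probs = torch.log(attn_weights.clamp_min(1e-9))
        # nll_loss expects (batch, num_classes, positions), so swap the last two dims.
        return F.nll_loss(log_probs.transpose(1, 2), head_ids)

    def linear_tree_heads(batch: int, src_len: int) -> torch.Tensor:
        """Trivial 'linear tree' baseline: every token attaches to the previous one."""
        heads = torch.arange(src_len).unsqueeze(0).repeat(batch, 1) - 1
        return heads.clamp_min(0)  # the first token attaches to itself

    if __name__ == "__main__":
        batch, src_len = 2, 5
        # Stand-in for the attention matrix of one encoder head (rows sum to 1).
        attn = torch.softmax(torch.randn(batch, src_len, src_len), dim=-1)
        gold = linear_tree_heads(batch, src_len)  # or real dependency heads from a parser
        aux = dependency_head_loss(attn, gold)
        print(aux.item())

In a multi-task setup this auxiliary term would simply be added, with some weight, to the usual translation cross-entropy; swapping the parser-produced heads for linear_tree_heads corresponds to the trivial baseline that the abstract reports as yielding identical gains.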

Keywords: Syntax; Transformer NMT; Multi-Task NMT.

        · full text in English     · English (pdf)