Scielo RSS <![CDATA[Polibits]]> http://www.scielo.org.mx/rss.php?pid=1870-904420110001&lang=es vol. 43 http://www.scielo.org.mx <![CDATA[<b>Detecting Derivatives using Specific and Invariant Descriptors</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100001&lng=es&nrm=iso&tlng=es This paper explores the detection of derivation links between texts (otherwise called plagiarism, near-duplication, revision, etc.) at the document level. We evaluate the use of textual elements implementing the ideas of specificity and invariance, as well as their combination, to characterize derivatives. We built a French press corpus based on Wikinews revisions to run this evaluation. We obtain performance similar to the state-of-the-art method (n-gram overlap) while reducing the signature size and, consequently, the processing costs. In order to ensure the verifiability and reproducibility of our results, we make our code as well as our corpus available to the community. <![CDATA[<b>Assessing the Feature-Driven Nature of Similarity-based Sorting of Verbs</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100002&lng=es&nrm=iso&tlng=es The paper presents a computational analysis of the results from a sorting task with motion verbs in Norwegian. The sorting behavior of humans rests on the features they use when they compare two or more words. We investigate what these features are and how discriminative each feature may be in sorting. The key rationale for our method of analysis is the assumption that a sorting task rests on a similarity assessment process. The main idea is that a set of features underlies this similarity judgment, and the similarity between two verbs amounts to the sum of the weighted similarities over the given set of features. The computational methodology used to investigate the features is as follows. Based on the frequency of co-occurrence of verbs in the human-generated clusters, the weights of a given set of features are computed using linear regression. The weights are used, in turn, to compute a similarity matrix between the verbs. This matrix is used as input for agglomerative hierarchical clustering. If the selected/projected set of features aligns with the features the participants used when sorting verbs into groups, then the clusters we obtain using this computational method should align with the clusters generated by humans. Otherwise, the method proceeds by modifying the feature set and repeating the process. Features promoting clusters that align with human-generated clusters are evaluated by a set of human experts, and the results show that the method manages to identify the appropriate feature sets. This method can be applied in analyzing a variety of data, ranging from experimental free-production data to linguistic data from controlled experiments, in the assessment of semantic relations and hierarchies within and across languages. <![CDATA[<b>Semantic Textual Entailment Recognition using UNL</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100003&lng=es&nrm=iso&tlng=es This paper describes a two-way textual entailment (TE) recognition system that uses semantic features. We have used the Universal Networking Language (UNL) to identify the semantic features. UNL has all the components of a natural language.
We report the development of a UNL-based textual entailment system that compares the UNL relations in both the text and the hypothesis. The semantic TE system has been developed using the RTE-3 annotated test set as a development set (which includes 800 text-hypothesis pairs). Evaluation scores obtained on the RTE-4 test set (which includes 1000 text-hypothesis pairs) show 55.89% precision and 65.40% recall for YES decisions, 66.50% precision and 55.20% recall for NO decisions, and 60.3% precision and 60.3% recall overall. <![CDATA[<b>Examining the Validity of Cross-Lingual Word Sense Disambiguation</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100004&lng=es&nrm=iso&tlng=es This paper describes a set of experiments in which the viability of a classification-based Word Sense Disambiguation system that uses evidence from multiple languages is investigated. Instead of using a predefined monolingual sense inventory such as WordNet, we use a language-independent framework and start from a manually constructed gold standard in which the word senses are made up of the translations that result from word alignments on a parallel corpus. To train and test the classifier, we used English as the input language and incorporated the translations of our target words in five languages (viz. Spanish, Italian, French, Dutch and German) as features in the feature vectors. Our results show that the multilingual approach outperforms the classification experiments where no additional evidence from other languages is used. These results confirm our initial hypothesis that each language adds evidence to further refine the senses of a given word. This allows us to develop a proof of concept for a multilingual approach to Word Sense Disambiguation. <![CDATA[<b>Knowledge Expansion of a Statistical Machine Translation System using Morphological Resources</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100005&lng=es&nrm=iso&tlng=es The translation capability of a Phrase-Based Statistical Machine Translation (PBSMT) system mostly depends on parallel data, and phrases that are not present in the training data are not correctly translated. This paper describes a method that efficiently expands the existing knowledge of a PBSMT system without adding more parallel data but using external morphological resources. A set of new phrase associations is added to the translation and reordering models; each of them corresponds to a morphological variation of the source phrase, the target phrase, or both phrases of an existing association. New associations are generated using a string similarity score based on morphosyntactic information. We tested our approach on En-Fr and Fr-En translations, and the results showed improved performance in terms of automatic scores (BLEU and Meteor) and a reduction of out-of-vocabulary (OOV) words. We believe that our knowledge expansion framework is generic and could be used to add different types of information to the model. <![CDATA[<b>Low Cost Construction of a Multilingual Lexicon from Bilingual Lists</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100006&lng=es&nrm=iso&tlng=es Manually constructing multilingual translation lexicons can be very costly, both in terms of time and human effort.
Although there have been many efforts at (semi-)automatically merging bilingual machine-readable dictionaries to produce a multilingual lexicon, most of these approaches place quite specific requirements on the input bilingual resources. Unfortunately, not all bilingual dictionaries fulfil these criteria, especially in the case of under-resourced language pairs. We describe a low-cost method for constructing a multilingual lexicon using only simple lists of bilingual translation mappings. The method is especially suitable for under-resourced language pairs, as such bilingual resources are often freely available and easily obtainable from the Internet, or digitised from simple, conventional paper-based dictionaries. The precision of random samples of the resultant multilingual lexicon is around 0.70-0.82, while coverage for each language, as well as precision and recall, can be controlled by varying threshold values. Given the very simple input resources, our results are encouraging, especially in incorporating under-resourced languages into multilingual lexical resources. <![CDATA[<b>A Cross-Lingual Pattern Retrieval Framework</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100007&lng=es&nrm=iso&tlng=es We introduce a method for learning to grammatically categorize and organize the contexts of a given query. In our approach, grammatical descriptions, from general word groups to specific lexical phrases, are imposed on the query's contexts, aimed at accelerating lexicographers' and language learners' navigation through, and GRASP upon, the word usages. The method involves lemmatizing, part-of-speech tagging and shallowly parsing a general corpus and constructing its inverted files for monolingual queries, and word-aligning parallel texts and extracting and pruning translation equivalents for cross-lingual ones. At run-time, grammar-like patterns are generated, organized to form a thesaurus index structure on query words' contexts, and presented to users along with their instantiations. Experimental results show that the extracted predominant patterns resemble phrases in grammar books and that the abstract-to-concrete context hierarchy of query words effectively assists the process of language learning, especially in sentence translation or composition. <![CDATA[<b>Clause Boundary Identification using Classifier and Clause Markers in Urdu Language</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100008&lng=es&nrm=iso&tlng=es This paper presents clause boundary identification for the Urdu language. We have used Conditional Random Fields as the classification method, together with clause markers. The clause markers serve to detect the type of subordinate clause, which occurs either alongside or within the main clause. If any misclassification occurs after testing with different sentences, more rules are identified to obtain high recall and precision. The results obtained show that this approach efficiently determines the type of subordinate clause and its boundary. <![CDATA[<b>External Sandhi and its Relevance to Syntactic Treebanking</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100009&lng=es&nrm=iso&tlng=es External sandhi is a linguistic phenomenon which refers to a set of sound changes that occur at word boundaries. These changes are similar to phonological processes such as assimilation and fusion when they apply at the level of prosody, such as in connected speech.
External sandhi formation can be orthographically reflected in some languages. In such languages, external sandhi formation causes the occurrence of forms which are morphologically unanalyzable, thus posing a problem for all kinds of NLP applications. In this paper, we discuss the implications that this phenomenon has for the syntactic annotation of sentences in Telugu, an Indian language with agglutinative morphology. We describe in detail how external sandhi formation in Telugu, if not handled prior to dependency annotation, leads either to loss or to misrepresentation of syntactic information in the treebank. This phenomenon, we argue, necessitates the introduction of a sandhi splitting stage in the generic annotation pipeline currently being followed for the treebanking of Indian languages. We identify one type of external sandhi widely occurring in the previous version of the Telugu treebank (version 0.2) and manually split all its instances, leading to the development of a new version 0.5. We also conduct an experiment with a statistical parser to empirically verify the usefulness of the changes made to the treebank. Comparing the parsing accuracies obtained on versions 0.2 and 0.5 of the treebank, we observe that splitting even just one type of external sandhi leads to an increase in overall parsing accuracy. <![CDATA[<b>Keywords Identification within Greek URLs</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100010&lng=es&nrm=iso&tlng=es In this paper we propose a method that identifies and extracts keywords within URLs, focusing on the Greek Web and especially on URLs containing Greek terms. Although there are previous works on how to process Greek online content, none of them focuses on keyword identification within URLs of the Greek web domain. In addition, there are many known techniques for web page categorization based on URLs, but none addresses the case of URLs containing transliterated Greek terms. The proposed method integrates two components: a URL tokenizer that segments URL tokens into meaningful words, and a Latin-to-Greek script transliteration engine that relies on a dictionary and a set of orthographic and syntactic rules for converting Latin verbalized word tokens into Greek terms. The experimental evaluation of our method against a sample of 1,000 Greek URLs reveals that it can be fruitfully exploited for automatic keyword identification within Greek URLs. <![CDATA[<b>Contextual Analysis of Mathematical Expressions for Advanced Mathematical Search</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100011&lng=es&nrm=iso&tlng=es We found a way to use mathematical search to provide better navigation for reading papers on computers. Since the superficial information of mathematical expressions is ambiguous, considering not only the mathematical expressions but also the text around them is necessary. We present how to extract natural language descriptions, such as variable names or function definitions, that refer to mathematical expressions, together with various experimental results. We first define the extraction task and construct a reference dataset of 100 Japanese scientific papers by hand. We then propose two methods for the extraction task, one based on pattern matching and one on machine learning. The effectiveness of the proposed methods is shown through experiments using the reference dataset.
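The pattern-matching route described in the mathematical-expression abstract above can be pictured with a minimal sketch. It assumes expressions have already been replaced by placeholders such as MATH_0 in the surrounding text; the placeholder convention, cue verbs, and regular expressions below are illustrative assumptions, not the authors' actual patterns or dataset (which targets Japanese papers).

import re

# Sketch only: rule-based extraction of candidate natural-language descriptions for
# mathematical expressions, assuming expressions were replaced by MATH_n placeholders.
PATTERNS = [
    # copular/verbal cue: "MATH_0 denotes the learning rate of the model"
    re.compile(r"(?P<exp>MATH_\d+)\s+(?:is|are|denotes|denote|stands for)\s+(?P<desc>[^,.;]+)"),
    # apposition cue: "the damping factor MATH_1"
    re.compile(r"(?P<desc>(?:the|a|an)\s+[\w\s-]+?)\s+(?P<exp>MATH_\d+)"),
]

def extract_descriptions(sentence):
    """Return (placeholder, candidate description) pairs found in one sentence."""
    pairs = []
    for pattern in PATTERNS:
        for match in pattern.finditer(sentence):
            pairs.append((match.group("exp"), match.group("desc").strip()))
    return pairs

print(extract_descriptions("Here MATH_0 denotes the learning rate of the model."))
# -> [('MATH_0', 'the learning rate of the model')]

A machine-learning variant would instead treat each candidate phrase near a placeholder as an instance to classify, which is the second route the abstract mentions.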
<![CDATA[<b>Semantic Aspect Retrieval for Encyclopedia</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100012&lng=es&nrm=iso&tlng=es With the development of Web 2.0, more and more people contribute their knowledge to the Internet. Many general and domain-specific online encyclopedia resources have become available, and they are valuable for many Natural Language Processing (NLP) applications, such as summarization and question answering. We propose a novel encyclopedia-specific method to retrieve passages which are semantically related to a short query (usually comprising only one word or phrase) from a given article in the encyclopedia. The method captures expression word features and categorical word features in the snippets surrounding the aspect words by setting up massive hybrid language models. These local models outperform global models such as LSA and ESA on our task. <![CDATA[<b>Are my Children Old Enough to Read these Books? Age Suitability Analysis</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100013&lng=es&nrm=iso&tlng=es In general, books are not appropriate for all ages, so the aim of this work was to find an effective method of representing the age suitability of textual documents, making use of automatic analysis and visualization. Interviews with experts identified possible aspects of a text (such as 'is it hard to read?'), and a set of features was devised (such as linguistic complexity, story complexity, and genre) which combine to characterize these age-related aspects. In order to measure these properties, we map a set of text features onto each one. An evaluation of the measures, using Amazon Mechanical Turk, showed promising results. Finally, the set of features is visualized in our age-suitability tool, which gives the user the possibility to explore the results, supporting transparency and traceability as well as the opportunity to deal with the limitations of automatic methods and computability issues. <![CDATA[<b>Linguistically Motivated Negation Processing</b>: <b>An Application for the Detection of Risk Indicators in Unstructured Discharge Summaries</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100014&lng=es&nrm=iso&tlng=es The paper proposes a linguistically motivated approach to deal with negation in the context of information extraction. This approach is used in a practical application: the automatic detection of cases of hospital-acquired infections (HAI) by processing unstructured medical discharge summaries. One of the important processing steps is the extraction of specific terms expressing risk indicators that can lead to the conclusion of HAI cases. This term extraction has to be very accurate, and negation has to be taken into account in order to determine whether a string corresponding to a potential risk indicator is attested positively or negatively in the document. We propose a linguistically motivated approach for dealing with negation using both syntactic and semantic information. This approach is first described and then evaluated in the context of our application in the medical domain. The results of the evaluation are also compared with other related approaches dealing with negation in medical texts.
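To make the role of negation in the risk-indicator extraction above concrete, here is a deliberately naive trigger-and-window sketch of the kind often used as a clinical-text baseline (e.g., NegEx). It is not the linguistically motivated syntactic/semantic method the paper proposes, and the trigger list, scope window, and example terms are assumptions.

import re

# Illustrative negation triggers and scope size; a real system would use a curated
# trigger lexicon and syntactic/semantic scope rather than a fixed token window.
NEGATION_TRIGGERS = {"no", "not", "without", "denies", "absence of", "negative for"}
SCOPE_WINDOW = 3  # how many tokens before a candidate term are checked for a trigger

def is_negated(tokens, term_index):
    """True if a negation trigger occurs in the window just before the candidate term."""
    window = " ".join(tokens[max(0, term_index - SCOPE_WINDOW):term_index])
    return any(f" {trigger} " in f" {window} " for trigger in NEGATION_TRIGGERS)

def detect_indicators(text, risk_terms):
    """Return risk-indicator terms that are attested positively (i.e., not negated)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    positives = []
    for term in risk_terms:
        if term in tokens and not is_negated(tokens, tokens.index(term)):
            positives.append(term)
    return positives

note = "Patient afebrile, no fever and no purulent discharge; persistent cough noted."
print(detect_indicators(note, ["fever", "discharge", "cough"]))
# -> ['cough']  ("fever" and "discharge" fall inside a negation scope)

A fixed window like this can easily over- or under-extend the negation scope, which is the kind of limitation a syntax- and semantics-aware approach is meant to address.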
<![CDATA[<b>A Micro Artificial Immune System</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100015&lng=es&nrm=iso&tlng=es In this paper, we present a new algorithm, namely, a micro artificial immune system (Micro-AIS) based on the Clonal Selection Theory, for solving numerical optimization problems. For our study, we consider the algorithm CLONALG, a widely used artificial immune system. During the process of cloning, CLONALG greatly increases the size of its population. We propose a version with a reduced population. Our hypothesis is that reducing the number of individuals in a population will decrease the number of evaluations of the objective function, increasing the speed of convergence and reducing the use of data memory. Our proposal uses a population of 5 individuals (antibodies), from which only 15 clones are obtained. In the maturation stage of the clones, two simple and fast mutation operators are used in a nominal convergence scheme that works together with a reinitialization process to preserve diversity. To validate our algorithm, we use a set of test functions taken from the specialized literature to compare our approach with the standard version of CLONALG. The same method can be applied to many other problems, for example, in text processing. <![CDATA[<b>A Graph-based Approach to Cross-language Multi-document Summarization</b>]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1870-90442011000100016&lng=es&nrm=iso&tlng=es Cross-language summarization is the task of generating a summary in a language different from the language of the source documents. In this paper, we propose a graph-based approach to multi-document summarization that integrates machine translation quality scores into the sentence extraction process. We evaluate our method on a manually translated subset of the DUC 2004 evaluation campaign. Results indicate that our approach improves the readability of the generated summaries without degrading their informativity.
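As a rough illustration of how a graph-based sentence ranker can integrate machine translation quality scores, the sketch below biases a PageRank-style scoring of translated sentences with per-sentence quality estimates. The bag-of-words cosine similarity, damping factor, and biased-jump formulation are illustrative assumptions, not the exact model evaluated in the paper.

from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words sentence vectors (Counters)."""
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank_sentences(sentences, mt_scores, damping=0.85, iterations=50):
    """Rank translated sentences with a PageRank-like iteration whose 'random jump'
    distribution is biased toward sentences with higher MT quality estimates."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    n = len(sentences)
    sim = [[cosine(vecs[i], vecs[j]) if i != j else 0.0 for j in range(n)] for i in range(n)]
    row_sums = [sum(row) or 1.0 for row in sim]
    bias_total = sum(mt_scores) or 1.0
    bias = [s / bias_total for s in mt_scores]  # MT-quality-based jump distribution
    scores = [1.0 / n] * n
    for _ in range(iterations):
        scores = [
            (1 - damping) * bias[i]
            + damping * sum(scores[j] * sim[j][i] / row_sums[j] for j in range(n))
            for i in range(n)
        ]
    return sorted(range(n), key=lambda i: scores[i], reverse=True)

# Hypothetical usage: pick the best-ranked translated sentence for the summary.
sents = ["the economy grew last quarter", "growth was reported in the economy", "the match ended in a draw"]
quality = [0.9, 0.6, 0.8]  # assumed MT quality estimates
print(sents[rank_sentences(sents, quality)[0]])

Sentences that are both central in the similarity graph and well translated then rise to the top, which matches the intuition of extracting summaries that stay readable without losing informativity.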