Person:
Carrillo de Albornoz Cuadrado, Jorge Amando

Last name
Carrillo de Albornoz Cuadrado
First name
Jorge Amando

Search results

Showing 1 - 4 of 4
  • Publication
    Automatic Generation of Entity-Oriented Summaries for Reputation Management
    (Springer, 2020-04-01) Rodríguez Vidal, Javier; Verdejo, Julia; Carrillo de Albornoz Cuadrado, Jorge Amando; Amigo Cabrera, Enrique; Plaza Morales, Laura; Gonzalo Arroyo, Julio Antonio
    Producing online reputation summaries for an entity (company, brand, etc.) is a focused summarization task with a distinctive feature: issues that may affect the reputation of the entity take priority in the summary. In this paper we (i) present a new test collection of manually created (abstractive and extractive) reputation reports which summarize tweet streams for 31 companies in the banking and automobile domains; (ii) propose a novel methodology to evaluate summaries in the context of online reputation monitoring, which profits from an analogy between reputation reports and the problem of diversity in search; and (iii) provide empirical evidence that producing reputation reports is different from a standard summarization problem, and incorporating priority signals is essential to address the task effectively.
  • Publication
    EvALL: Open Access Evaluation for Information Access Systems
    (Association for Computing Machinery (ACM), 2017) Almagro Cádiz, Mario; Rodríguez Vidal, Javier; Verdejo, M. Felisa; Amigo Cabrera, Enrique; Carrillo de Albornoz Cuadrado, Jorge Amando; Gonzalo Arroyo, Julio Antonio
    The EvALL online evaluation service aims to provide a unified evaluation framework for Information Access systems that makes results completely comparable and publicly available for the whole research community. For researchers working on a given test collection, the framework allows them to: (i) evaluate results in a way compliant with measurement theory and with state-of-the-art evaluation practices in the field; (ii) quantitatively and qualitatively compare their results with the state of the art; (iii) provide their results as reusable data to the scientific community; (iv) automatically generate evaluation figures and (low-level) interpretation of the results, both as a PDF report and as LaTeX source. For researchers running a challenge (a comparative evaluation campaign on shared data), the framework helps them to manage, store and evaluate submissions, and to preserve ground truth and system output data for future use by the research community. EvALL can be tested at http://evall.uned.es.
  • Publication
    An Effectiveness Metric for Ordinal Classification: Formal Properties and Experimental Results
    (Association for Computational Linguistics, 2020-07-01) Amigo Cabrera, Enrique; Gonzalo Arroyo, Julio Antonio; Mizzaro, Stefano; Carrillo de Albornoz Cuadrado, Jorge Amando
    In Ordinal Classification tasks, items have to be assigned to classes that have a relative ordering, such as positive, neutral, negative in sentiment analysis. Remarkably, the most popular evaluation metrics for ordinal classification tasks either ignore relevant information (for instance, precision/recall on each of the classes ignores their relative ordering) or assume additional information (for instance, Mean Average Error assumes absolute distances between classes). In this paper we propose a new metric for Ordinal Classification, Closeness Evaluation Measure, that is rooted in Measurement Theory and Information Theory. Our theoretical analysis and experimental results over both synthetic data and data from NLP shared tasks indicate that the proposed metric captures quality aspects from different traditional tasks simultaneously. In addition, it generalizes some popular classification (nominal scale) and error minimization (interval scale) metrics, depending on the measurement scale in which it is instantiated.
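    As a toy illustration of the contrast this abstract draws (plain Python, not the paper's Closeness Evaluation Measure): accuracy cannot distinguish near misses from far misses on an ordinal scale, while mean absolute error can, but only by assuming unit distances between classes.

```python
# Toy ordinal-classification example. Classes encoded as ordered integers:
# 0 = negative, 1 = neutral, 2 = positive.
gold  = [2, 2, 1, 0, 0]
sys_a = [1, 2, 1, 0, 1]  # both errors are one class away
sys_b = [0, 2, 1, 0, 2]  # both errors jump across the whole scale

def accuracy(gold, pred):
    # Fraction of exact matches; ignores the ordering of classes entirely.
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def mae(gold, pred):
    # Mean absolute error; sensitive to ordering, but assumes the
    # distance between adjacent classes is always exactly 1.
    return sum(abs(g - p) for g, p in zip(gold, pred)) / len(gold)

print(accuracy(gold, sys_a), accuracy(gold, sys_b))  # 0.6 0.6 -- indistinguishable
print(mae(gold, sys_a), mae(gold, sys_b))            # 0.4 0.8 -- sys_a is closer
```

    Accuracy scores both systems identically even though sys_b's errors are much farther from the gold labels; MAE separates them, but only under the (often unwarranted) assumption of fixed inter-class distances.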
  • Publication
    Evaluating Sequence Labeling on the basis of Information Theory
    (Association for Computational Linguistics, 2025-07-01) Amigo Cabrera, Enrique; Álvarez Mellado, Elena; Carrillo de Albornoz Cuadrado, Jorge Amando; European Commission; Agencia Estatal de Investigación (España)
    Various metrics exist for evaluating sequence labeling problems (strict span matching, token-oriented metrics, token concurrence in sequences, etc.), each of them focusing on certain aspects of the task. In this paper, we define a comprehensive set of formal properties that captures the strengths and weaknesses of the existing metric families and prove that none of them is able to satisfy all properties simultaneously. We argue that it is necessary to measure how much information (correct or noisy) each token in the sequence contributes depending on different aspects such as sequence length, number of tokens annotated by the system, token specificity, etc. On this basis, we introduce the Sequence Labelling Information Contrast Model (SL-ICM), a novel metric based on information theory for evaluating sequence labeling tasks. Our formal analysis and experimentation show that the proposed metric satisfies all properties simultaneously.
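    A toy sketch of why the existing metric families disagree (plain Python, not the paper's SL-ICM metric): on a partially matched entity span, strict span matching and token-level scoring give very different verdicts.

```python
# Hypothetical example: one gold entity span vs. one system span that
# overlaps it partially. Spans are half-open (start, end) token ranges.
gold_spans = {(0, 3)}  # gold entity covers tokens 0, 1, 2
sys_spans  = {(0, 2)}  # system labeled only tokens 0, 1

def f1(tp, n_pred, n_gold):
    # Standard F1 from true positives and the two set sizes.
    if n_pred == 0 or n_gold == 0:
        return 0.0
    p, r = tp / n_pred, tp / n_gold
    return 2 * p * r / (p + r) if p + r else 0.0

def strict_f1(gold, pred):
    # A span counts only if it matches a gold span exactly.
    return f1(len(gold & pred), len(pred), len(gold))

def token_f1(gold, pred):
    # Each correctly labeled token counts, regardless of span boundaries.
    gold_toks = {t for s, e in gold for t in range(s, e)}
    pred_toks = {t for s, e in pred for t in range(s, e)}
    return f1(len(gold_toks & pred_toks), len(pred_toks), len(gold_toks))

print(strict_f1(gold_spans, sys_spans))  # 0.0 -- partial match counts as a total miss
print(token_f1(gold_spans, sys_spans))   # ~0.8 -- two of three gold tokens recovered
```

    Strict matching throws away the partial credit entirely, while token-level scoring ignores span boundaries; neither view alone tells the whole story, which is the gap the abstract says an information-based metric is meant to close.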