Date
2025-09
Access rights
info:eu-repo/semantics/openAccess
Abstract
This master’s thesis focuses on the interpretation of classification models in the context of neuropsychological tests, using different explainability techniques. Specifically, global explainability methods are applied, such as permutation feature importance (PFI), partial dependence plots (PDP), accumulated local effects (ALE) plots, and global surrogate models, while the local methods applied include individual conditional expectation (ICE) curves, LIME, local conditional rules, and SHAP.
The methodology involves building predictive models with AutoGluon and then applying XAI techniques to understand their behavior. Analyses are presented of how the variables influence model predictions, and each method is used to provide clear and coherent interpretations. The study starts from a set of normalized variables, on which classification criteria for ED1, ED2, and ED3 are established according to thresholds defined in the performance tests. It also examines how the quality of the trained models influences the interpretability of the generated explanations, showing that lower-performing models produce less reliable, and therefore less useful, explanations.
The results show that, although all methods contribute to a better understanding of the model, each has its own advantages and limitations. In particular, global explainability methods provide a comprehensive view of the model, while local methods allow a more detailed instance-level analysis. Finally, it is concluded that combining different explainability techniques is the best strategy for obtaining clear and useful interpretations in the context of complex models.
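As an illustration of the workflow summarized above, the sketch below shows how a classifier could be trained with AutoGluon and then inspected with one global method (permutation feature importance) and one local method (SHAP). It is only a minimal sketch under assumed names: the files train.csv and test.csv, the label column ED_level, and the background-sample size are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of the workflow described in the abstract; file names,
# the label column "ED_level", and sample sizes are assumptions.
import pandas as pd
import shap
from autogluon.tabular import TabularPredictor

# Hypothetical tables of normalized neuropsychological test scores with a
# class label (ED1/ED2/ED3) derived from performance-test thresholds.
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Train a multiclass classifier with AutoGluon's default model ensemble.
predictor = TabularPredictor(label="ED_level").fit(train)

# Global explainability: AutoGluon's feature_importance() estimates
# permutation feature importance (PFI) on held-out data.
pfi = predictor.feature_importance(test)
print(pfi)

# Local explainability: SHAP values for a single instance, using a
# model-agnostic KernelExplainer around the predictor's class probabilities.
features = [c for c in test.columns if c != "ED_level"]
background = test[features].sample(50, random_state=0)

def predict_proba_fn(X):
    # KernelExplainer passes NumPy arrays; rebuild a DataFrame for AutoGluon.
    return predictor.predict_proba(pd.DataFrame(X, columns=features)).to_numpy()

explainer = shap.KernelExplainer(predict_proba_fn, background)
shap_values = explainer.shap_values(test[features].iloc[:1])
print(shap_values)
```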
Citation
Gómez García, Juan Manuel. Trabajo Fin de Máster: "Análisis del impacto de la calidad del modelo en la explicabilidad mediante técnicas XAI en datos clínicos de Identia". Universidad Nacional de Educación a Distancia (UNED), 2025.
School
E.T.S. de Ingeniería Informática