Authors: Adrián Ghajari Espinosa; Alejandro Benito Santos; Salvador Ros Muñoz; Víctor Diego Fresno Fernández; Elena González Blanco
Dates: 2025-05-13; 2025-06-15
Citation: Adrián Ghajari, Alejandro Benito-Santos, Salvador Ros, Víctor Fresno, Elena González-Blanco, "Test-driving information theory-based compositional distributional semantics: A case study on Spanish song lyrics", Knowledge-Based Systems, Volume 319, 2025, 113549, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2025.113549
ISSN: 0950-7051
DOI: https://doi.org/10.1016/j.knosys.2025.113549
Handle: https://hdl.handle.net/20.500.14468/26536
Publisher note: The registered version of this article, first published in "Knowledge-Based Systems, vol. 319, 2025", is available online at the publisher's website: Elsevier, https://doi.org/10.1016/j.knosys.2025.113549

Abstract: Song lyrics pose unique challenges for semantic similarity assessment due to their metaphorical language, structural patterns, and cultural nuances, characteristics that often defeat standard natural language processing (NLP) approaches. These challenges stem from a tension between compositional and distributional semantics: while lyrics follow compositional structures, their meaning depends heavily on context and interpretation. The Information Theory-based Compositional Distributional Semantics framework offers a principled approach by integrating information theory with compositional rules and distributional representations. We evaluate eight embedding models on Spanish song lyrics, including multilingual, monolingual contextual, and static embeddings. Results show that multilingual models consistently outperform monolingual alternatives, with the domain-adapted ALBERTI achieving the highest macro F1 scores (78.92 ± 10.86).
Our analysis reveals that monolingual models generate highly anisotropic embedding spaces, significantly degrading performance under traditional metrics. The Information Contrast Model metric proves particularly effective, yielding improvements of up to 18.04 percentage points over cosine similarity. Additionally, composition functions that maintain longer accumulated vector norms consistently outperform standard averaging approaches. Our findings have important implications for NLP applications and challenge standard practices in similarity calculation, showing that effectiveness varies with both task nature and model characteristics.

Access: open access (info:eu-repo/semantics/openAccess)
Subject: 33 Ciencias Tecnológicas (Technological Sciences)
Title: Test-driving information theory-based compositional distributional semantics: A case study on Spanish song lyrics
Type: article
Keywords: compositional distributional semantics; semantic textual similarity; word embeddings; song lyrics
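The abstract's point about composition functions that keep a longer accumulated vector norm can be made concrete with a minimal NumPy sketch. This is not the paper's pipeline: the word vectors are random placeholders standing in for embeddings from a model such as ALBERTI. It shows why averaging and summing are indistinguishable under cosine similarity (which is scale-invariant), so the norm difference between the two compositions can only matter to a norm-sensitive metric such as an information-based contrast measure.

```python
import numpy as np

# Random placeholder "word" vectors for two lyrics (hypothetical data).
rng = np.random.default_rng(0)
lyric_a = rng.normal(size=(5, 8))  # 5 word vectors of dimension 8
lyric_b = rng.normal(size=(7, 8))  # 7 word vectors of dimension 8

def cosine(u, v):
    """Standard cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two additive composition functions: averaging vs. summing word vectors.
mean_a, mean_b = lyric_a.mean(axis=0), lyric_b.mean(axis=0)
sum_a, sum_b = lyric_a.sum(axis=0), lyric_b.sum(axis=0)

# Cosine is invariant to rescaling either argument, so both compositions
# yield the same cosine score; only the accumulated norm differs.
print(cosine(mean_a, mean_b))
print(cosine(sum_a, sum_b))
print(np.linalg.norm(mean_a), np.linalg.norm(sum_a))  # sum norm is 5x larger
```

Because the mean vector is just the sum vector divided by the word count, the two cosine scores above are identical; only the vector norms diverge, which is where a norm-sensitive similarity metric can separate the two composition strategies.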