Person:
Heradio Gil, Rubén

ORCID
0000-0002-7131-0482
Surname
Heradio Gil
First name
Rubén

Search results

Showing 1 - 10 of 16
  • Publication
    Supporting the Statistical Analysis of Variability Models
    (Institute of Electrical and Electronics Engineers (IEEE), 2019-08-26) Mayr Dorn, Christoph; Egyed, Alexander; Heradio Gil, Rubén; Fernández Amoros, David José
    Variability models are broadly used to specify the configurable features of highly customizable software. In practice, they can be large, defining thousands of features with their dependencies and conflicts. In such cases, visualization techniques and automated analysis support are crucial for understanding the models. This paper contributes to this line of research by presenting a novel, probabilistic foundation for statistical reasoning about variability models. Our approach not only provides a new way to visualize, describe and interpret variability models, but it also supports the improvement of additional state-of-the-art methods for software product lines; for instance, providing exact computations where only approximations were available before, and increasing the sensitivity of existing analysis operations for variability models. We demonstrate the benefits of our approach using real case studies with up to 17,365 features, written in two different languages (Kconfig and feature models).
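    As a toy illustration of the feature-probability idea at the heart of this line of work (a brute-force sketch over an invented four-feature model, not the paper's method), consider:

      from itertools import product

      # A tiny, invented feature model: four features plus a validity
      # predicate encoding the constraints.
      features = ['base', 'gui', 'net', 'ssl']

      def valid(c):
          # base is mandatory, ssl requires net, and gui or net must be on
          return c['base'] and (not c['ssl'] or c['net']) and (c['gui'] or c['net'])

      configs = [c for bits in product([False, True], repeat=len(features))
                 if valid(c := dict(zip(features, bits)))]

      # probability that a uniformly chosen valid configuration includes f
      for f in features:
          print(f'P({f}) = {sum(c[f] for c in configs) / len(configs):.2f}')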
  • Publication
    Exemplar driven development of software product lines
    (Elsevier, 2012-12-01) Heradio Gil, Rubén; Fernández Amoros, David José; Torre Cubillo, Luis de la; Abad Cardiel, Ismael
    The benefits of following a product line approach to develop similar software systems are well documented. Nevertheless, some case studies have revealed significant barriers to adopting such an approach. In order to minimize the paradigm shift between conventional software engineering and software product line engineering, this paper presents a new development process where the products of a domain are made by analogy to an existing product. Furthermore, this paper discusses the capabilities and limitations of different techniques to implement the analogy relation and proposes a new language to overcome those limitations.
  • Publication
    A scalable approach to exact model and commonality counting for extended feature models
    (Institute of Electrical and Electronics Engineers (IEEE), 2014-05-29) Fernández Amoros, David José; Heradio Gil, Rubén; Cerrada Somolinos, José Antonio; Cerrada Somolinos, Carlos
    A software product line is an engineering approach to efficient development of software product portfolios. Key to the success of the approach is to identify the common and variable features of the products and the interdependencies between them, which are usually modeled using feature models. Implicitly, such models also include valuable information that can be used by economic models to estimate the payoffs of a product line. Unfortunately, as product lines grow, analyzing large feature models manually becomes impracticable. This paper proposes an algorithm to compute the total number of products that a feature model represents and, for each feature, the number of products that implement it. The inference of both parameters is helpful to describe the standardization/parameterization balance of a product line, detect scope flaws, assess the incremental development of the product line, and improve the accuracy of economic models. The paper reports experimental evidence that our algorithm has better runtime performance than existing alternative approaches.
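    The authors' algorithm is not reproduced here, but the two quantities it computes (total product count and per-feature commonality) can be sketched with a binary decision diagram; the sketch below assumes the third-party Python package dd and the same invented toy model as above:

      from dd.autoref import BDD   # third-party package: pip install dd

      bdd = BDD()
      features = ['base', 'gui', 'net', 'ssl']
      bdd.declare(*features)

      # hypothetical feature model as a Boolean formula
      fm = bdd.add_expr('base & (~ssl | net) & (gui | net)')

      total = bdd.count(fm, nvars=len(features))   # number of products
      print('products:', total)
      for f in features:
          n_f = bdd.count(fm & bdd.var(f), nvars=len(features))
          print(f'{f}: commonality = {n_f / total:.2f}')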
  • Publication
    A Monte Carlo tree search conceptual framework for feature model analyses
    (Elsevier, 2023-01) Horcas Aguilera, Jose Miguel; Galindo, José A.; Benavides, David; Heradio Gil, Rubén; Fernández Amoros, David José
    Challenging domains of the future such as Smart Cities, Cloud Computing, or Industry 4.0 expose highly variable systems with colossal configuration spaces. The automated analysis of those systems’ variability has often relied on SAT solving and constraint programming. However, many of the analyses have to deal with the uncertainty introduced by the fact that an exhaustive exploration of the whole configuration space is usually intractable. In addition, not all analyses deal with the configuration space of the feature models: some are performed over different search spaces, such as the structure of the feature models, the constraints, or the implementation artifacts, instead of configurations. This paper proposes a conceptual framework that tackles several of those analyses using Monte Carlo tree search methods, which have proven to succeed in vast search spaces (e.g., game theory, scheduling tasks, security, program synthesis, etc.). Our general framework is formally described, and its flexibility to cope with a diversity of analysis problems is discussed. We provide a Python implementation of the framework that shows the feasibility of our proposal, identifying 11 lessons learned and open challenges about the usage of Monte Carlo methods in the software product line context. With this contribution, we envision that different problems can be addressed using Monte Carlo simulations and that our framework can advance the state of the art one step further.
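    As a loose, much simpler cousin of the framework (flat Monte Carlo rather than full MCTS, with invented function names), the sketch below decides one feature at a time by averaging the scores of random valid completions; callers supply the valid and objective functions:

      import random

      def rollout(partial, features, valid, objective, tries=100):
          # complete `partial` at random; return the objective value of a
          # valid completion, or None if none is found within `tries`
          undecided = [f for f in features if f not in partial]
          for _ in range(tries):
              c = dict(partial)
              for f in undecided:
                  c[f] = random.random() < 0.5
              if valid(c):
                  return objective(c)
          return None

      def monte_carlo_complete(partial, features, valid, objective, sims=200):
          # greedily fix each undecided feature to the value whose random
          # rollouts score best on average (flat Monte Carlo, not full MCTS)
          config = dict(partial)
          for f in [f for f in features if f not in config]:
              best_value, best_score = False, float('-inf')
              for value in (False, True):
                  scores = [s for _ in range(sims)
                            if (s := rollout({**config, f: value}, features,
                                             valid, objective)) is not None]
                  mean = sum(scores) / len(scores) if scores else float('-inf')
                  if mean > best_score:
                      best_value, best_score = value, mean
              config[f] = best_value
          return config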
  • Publication
    Pragmatic random sampling of Kconfig-based systems: A unified approach
    (Elsevier, 2025-07-28) Fernández Amoros, David José; Heradio Gil, Rubén; Horcas Aguilera, Jose Miguel; Galindo, José A.; Benavides, David; Fuentes, Lidia
    The configuration space of some systems is so large that it cannot be computed. This is the case with the Linux Kernel, which provides more than 18,000 configurable options described across almost 1,700 files in the Kconfig language. As a result, many analyses of these systems rely on sampling their configuration space (e.g., debugging compilation errors, predicting configuration performance, or finding the configuration that optimizes specific performance metrics, among others). The Kernel and other Kconfig-based systems can be sampled pragmatically, using their built-in tool conf to obtain, directly from the Kconfig specification, a sample that is approximately random, or idealistically, generating a genuinely random sample by first translating the Kconfig files into logic formulas, then using a logic engine to compute the probability of each option value appearing in a configuration, and finally using these probabilities to generate an authentically random sample. The pro of the idealistic approach is that it ensures the sample is representative of the population; the cons are that it poses many challenging problems that have not been solved yet (fundamentally, how to obtain a valid Boolean translation that covers the whole Kconfig language, and how to compute the option-value probabilities for very large formulas). This paper introduces a new version of conf called randconfig+, which incorporates a series of improvements that increase the randomness and correctness of pragmatic sampling and also help validate the Boolean translation required for the idealistic approach. randconfig+ has been tested on ten versions of the Linux Kernel and twenty additional Kconfig systems. Its compatibility significantly enhances the current landscape, where some systems use a customized conf variant that is maintained independently, while others do not support sampling at all. randconfig+ not only offers universal sampling for all Kconfig systems but also simplifies long-term maintenance: a single tool evolves rather than an unorganized collection of conf variants.
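    randconfig+ itself is the paper's contribution and is not shown here; the baseline pragmatic sampler, however, is available in any kernel tree as conf's randconfig target, which honors the documented KCONFIG_SEED environment variable. A minimal driver (paths and seed scheme are assumptions) might look like:

      import os
      import shutil
      import subprocess

      def pragmatic_samples(kernel_dir, out_dir, n, first_seed=1):
          # draw n approximately random configurations with the built-in
          # pragmatic sampler (`make randconfig`), seeding each run
          os.makedirs(out_dir, exist_ok=True)
          for i in range(n):
              env = dict(os.environ, KCONFIG_SEED=str(first_seed + i))
              subprocess.run(['make', 'randconfig'], cwd=kernel_dir,
                             env=env, check=True)
              shutil.copy(os.path.join(kernel_dir, '.config'),
                          os.path.join(out_dir, f'{i:04d}.config'))

      pragmatic_samples('/path/to/linux', './samples', n=10)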
  • Publication
    Pragmatic Random Sampling of the Linux Kernel: Enhancing the Randomness and Correctness of the conf Tool
    (Association for Computing Machinery, New York, 2024-09-02) Fernández Amoros, David José; Heradio Gil, Rubén; Horcas Aguilera, Jose Miguel; Galindo, José A.; Benavides, David; Fuentes, Lidia; https://orcid.org/0000-0003-3758-0195; https://orcid.org/0000-0002-5677-7156; https://orcid.org/0000-0002-8449-3273; https://orcid.org/0000-0001-9293-9784
    The configuration space of some systems is so large that it cannot be computed. This is the case with the Linux Kernel, which provides almost 19,000 configurable options described across more than 1,600 files in the Kconfig language. As a result, many analyses of the Kernel rely on sampling its configuration space (e.g., debugging compilation errors, predicting configuration performance, finding the configuration that optimizes specific performance metrics, etc.). The Kernel can be sampled pragmatically, with its built-in tool conf, or idealistically, by translating the Kconfig files into logic formulas. The idealistic approach provides statistical guarantees for the sampled configurations, but it poses many challenging problems that have not been solved yet, such as scalability issues. This paper introduces a new version of conf called randconfig+, which incorporates a series of improvements that increase the randomness and correctness of pragmatic sampling and also help validate the Boolean translation required for the idealistic approach. randconfig+ has been tested on 20,000 configurations generated for 10 different Kernel versions from 2003 to the present day. The experimental results show that randconfig+ is compatible with all tested Kernel versions, guarantees the correctness of the generated configurations, and increases conf’s randomness for numeric and string options.
  • Publication
    Scalable Sampling of Highly-Configurable Systems: Generating Random Instances of the Linux Kernel
    (Association for Computing Machinery (ACM), 2023-01-05) Mayr Dorn, Christoph; Egyed, Alexander; Fernández Amoros, David José; Heradio Gil, Rubén
    Software systems are becoming increasingly configurable. A paradigmatic example is the Linux kernel, which can be adjusted for a tremendous variety of hardware devices, from mobile phones to supercomputers, thanks to the thousands of configurable features it supports. In principle, many relevant problems on configurable systems, such as completing a partial configuration to get the system instance that consumes the least energy or optimizes any other quality attribute, could be solved through exhaustive analysis of all configurations. However, configuration spaces are typically colossal and cannot be entirely computed in practice. Alternatively, configuration samples can be analyzed to approximate the answers. Generating those samples is not trivial since features usually have inter-dependencies that constrain the configuration space. Therefore, getting a single valid configuration by chance is extremely unlikely. As a result, advanced samplers are being proposed to generate random samples at a reasonable computational cost. However, to date, no sampler can deal with highly configurable complex systems, such as the Linux kernel. This paper proposes a new sampler that does scale to those systems, based on an original theoretical approach called extensible logic groups. The sampler is compared against five other approaches. Results show our tool to be the fastest and most scalable one.
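    A quick, invented illustration of why a valid configuration is so unlikely by chance: with a mere chain of thirty feature implications, only 31 of the 2^30 possible assignments are valid, so naive random sampling virtually never hits one:

      import random

      def valid(bits):
          # invented model: feature i requires feature i + 1
          return all(not bits[i] or bits[i + 1] for i in range(len(bits) - 1))

      n, trials = 30, 100_000
      hits = sum(valid([random.random() < 0.5 for _ in range(n)])
                 for _ in range(trials))
      # only n + 1 of the 2**n assignments are valid, so hits is
      # almost always 0 here
      print(f'valid by chance: {hits}/{trials}')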
  • Publication
    Circuit Testing Based on Fuzzy Sampling with BDD Bases
    (University of Hawaiʻi at Mānoa, 2023) Pinilla, Elena; Fernández Amoros, David José; Heradio Gil, Rubén
    Fuzzy testing of integrated circuits is an established technique. Current approaches generate an approximately uniform random sample from a translation of the circuit to Boolean logic. These approaches have serious scalability issues, which become more pressing with the ever-increasing size of circuits. We propose using a base of binary decision diagrams to sample the translations as a soft computing approach. Uniformity is guaranteed by design, and scalability is greatly improved. We test our approach against five other state-of-the-art tools and find that it outperforms all of them in both performance and scalability.
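    What "uniformity by design" typically means for BDD-based sampling can be sketched with the classic count-based procedure (using the third-party dd package on a toy circuit; this is an assumption-laden sketch, not the paper's tool): fix variables one at a time with probability proportional to the model count under each branch.

      import random
      from dd.autoref import BDD   # third-party package: pip install dd

      def uniform_sample(bdd, u, variables):
          # each choice is weighted by the number of models consistent
          # with it, so every model of u is drawn with equal probability
          # (assumes u is satisfiable)
          sample = {}
          for i, var in enumerate(variables):
              free = len(variables) - i - 1   # variables still undecided
              u_true = bdd.let({var: True}, u)
              u_false = bdd.let({var: False}, u)
              n_true = bdd.count(u_true, nvars=free)
              n_false = bdd.count(u_false, nvars=free)
              sample[var] = random.random() < n_true / (n_true + n_false)
              u = u_true if sample[var] else u_false
          return sample

      bdd = BDD()
      bdd.declare('a', 'b', 'c')
      circuit = bdd.add_expr('a & (b | c)')   # toy circuit translation
      print(uniform_sample(bdd, circuit, ['a', 'b', 'c']))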
  • Publication
    Supporting commonality-based analysis of software product lines
    (Institution of Engineering and Technology (IET), 2011-03-24) Heradio Gil, Rubén; Fernández Amoros, David José; Cerrada Somolinos, José Antonio; Cerrada Somolinos, Carlos
    Software Product Line (SPL) engineering is a cost-effective approach to developing families of similar products. Key to the success of this approach is to correctly scope the domain of the SPL, identifying the common and variable features of the products and the interdependencies between features. In this paper, we show how the commonality of a feature (i.e., the reuse ratio of the feature among the products) can be used to detect scope flaws in the early stages of development. SPL domains are usually modeled by means of feature diagrams following the FODA notation. We extend classical FODA trees with unrestricted cardinalities, and present an algorithm to compute the number of products modeled by a feature diagram and the commonality of the features. Finally, we compare the performance of our algorithm with two other approaches built on top of Boolean-logic SAT-solver technology, namely cachet and relsat.
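    For plain FODA trees (without the paper's unrestricted cardinalities or cross-tree constraints), the product count can be computed bottom-up; the node encoding below is an illustrative assumption:

      from dataclasses import dataclass, field

      @dataclass
      class Feature:
          name: str
          kind: str = 'and'          # decomposition of children: and/xor/or
          optional: bool = False     # only meaningful under an 'and' parent
          children: list = field(default_factory=list)

      def products(f):
          # number of distinct products rooted at f (basic FODA semantics)
          if not f.children:
              return 1
          counts = [products(c) for c in f.children]
          if f.kind == 'and':        # each child kept (or skipped, if optional)
              total = 1
              for child, n in zip(f.children, counts):
                  total *= n + 1 if child.optional else n
              return total
          if f.kind == 'xor':        # exactly one child is selected
              return sum(counts)
          if f.kind == 'or':         # at least one child is selected
              total = 1
              for n in counts:
                  total *= n + 1
              return total - 1
          raise ValueError(f'unknown decomposition: {f.kind}')

      root = Feature('editor', kind='and', children=[
          Feature('ui', kind='xor', children=[Feature('cli'), Feature('gui')]),
          Feature('spellcheck', optional=True),
      ])
      print(products(root))   # 2 (ui) * 2 (spellcheck or not) = 4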
  • Publication
    A Rule-Learning Approach for Detecting Faults in Highly Configurable Software Systems from Uniform Random Samples
    (2022) Heradio Gil, Rubén; Fernández Amoros, David José; Ruiz Parrado, Victoria; Cobo, Manuel J.; https://orcid.org/0000-0003-2993-7705; https://orcid.org/0000-0001-6575-803X
    Software systems tend to become more and more configurable to satisfy the demands of their increasingly varied customers. Exhaustively testing the correctness of highly configurable software is infeasible in most cases because the space of possible configurations is typically colossal. This paper proposes addressing this challenge by (i) working with a representative sample of the configurations, i.e., a "uniform" random sample, and (ii) processing the results of testing the sample with a rule induction system that extracts the faults that cause the tests to fail. The paper (i) gives a concrete implementation of the approach, (ii) compares the performance of the rule learning algorithms AQ, CN2, LEM2, PART, and RIPPER, and (iii) provides empirical evidence supporting our procedure.
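    AQ, CN2, LEM2, PART, and RIPPER are not part of the standard Python stack; as a rough stand-in for the paper's pipeline, a shallow decision tree can recover a planted fault from a synthetic (entirely invented) sample of configuration test results:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      features = ['ssl', 'cache', 'ipv6', 'debug']   # invented feature names
      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, size=(500, len(features)))   # sampled configs
      # planted fault: tests fail whenever ssl is on and ipv6 is off
      y = ((X[:, 0] == 1) & (X[:, 2] == 0)).astype(int)

      clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(export_text(clf, feature_names=features))   # readable fault rules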