Search for publications in DiVA (liu.se)
1 - 50 of 1097
  • 1.
    Abdulla, Parosh Aziz
    et al.
    Department of Information Technology, Uppsala University, Sweden.
    Atig, Mohamed Faouzi
    Department of Information Technology, Uppsala University, Sweden.
    Chen, Yu-Fang
    Institute of Information Science, Academia Sinica, Taiwan.
    Holik, Lukas
    Faculty of Information Technology, Brno University of Technology, Czech Republic.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Rümmer, Philipp
    Department of Information Technology, Uppsala University, Sweden.
    Stenman, Jari
    Department of Information Technology, Uppsala University, Sweden.
    String Constraints for Verification. 2014. In: 26th International Conference on Computer Aided Verification (CAV 2014), Vienna, Austria, Jul. 9-12, 2014, Berlin: Springer, 2014, p. 150-166. Conference paper (Refereed)
    Abstract [en]

    We present a decision procedure for a logic that combines (i) word equations over string variables denoting words of arbitrary lengths, together with (ii) constraints on the length of words, and on (iii) the regular languages to which words belong. Decidability of this general logic is still open. Our procedure is sound for the general logic, and a decision procedure for a particularly rich fragment that restricts the form in which word equations are written. In contrast to many existing procedures, our method does not make assumptions about the maximum length of words. We have developed a prototypical implementation of our decision procedure, and integrated it into a CEGAR-based model checker for the analysis of programs encoded as Horn clauses. Our tool is able to automatically establish the correctness of several programs that are beyond the reach of existing methods.
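
    A rough illustration of this constraint class (with Z3's built-in string theory standing in for the authors' own procedure): the three ingredients, a word equation, a length constraint, and a regular-membership constraint, written in a few lines of z3py.

        # Sketch only: Z3's string solver stands in for the paper's procedure.
        from z3 import String, Length, InRe, Re, Star, Concat, Solver, sat

        x, y = String('x'), String('y')
        s = Solver()
        s.add(Concat(x, y) == Concat(y, x))   # (i) a word equation: xy = yx
        s.add(Length(x) == 2 * Length(y))     # (ii) a length constraint
        s.add(InRe(x, Star(Re("ab"))))        # (iii) membership: x in (ab)*
        if s.check() == sat:
            print(s.model())                  # one model: x = y = "" (trivial)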

  • 2.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Sweden.
    Atig, Mohamed Faouzi
    Uppsala University, Sweden.
    Chen, Yu-Fang
    Academia Sinica, Taiwan.
    Leonardsson, Carl
    Uppsala University, Sweden.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Automatic fence insertion in integer programs via predicate abstraction. 2012. In: Static Analysis: 19th International Symposium, SAS 2012, Deauville, France, September 11-13, 2012. Proceedings / [ed] Antoine Miné, David Schmidt, Springer Berlin/Heidelberg, 2012, p. 164-180. Conference paper (Refereed)
    Abstract [en]

    We propose an automatic fence insertion and verification framework for concurrent programs running under relaxed memory. Unlike previous approaches to this problem, which allow only variables of finite domain, we target programs with (unbounded) integer variables. The problem is difficult because it has two different sources of infiniteness: unbounded store buffers and unbounded integer variables. Our framework consists of three main components: (1) a finite abstraction technique for the store buffers, (2) a finite abstraction technique for the integer variables, and (3) a counterexample guided abstraction refinement loop of the model obtained from the combination of the two abstraction techniques. We have implemented a prototype based on the framework and run it successfully on all standard benchmarks together with several challenging examples that are beyond the applicability of existing methods.
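
    To see the problem concretely, the toy enumeration below (not the paper's predicate-abstraction framework) explores the classic store-buffer litmus test: each thread's store first sits in a private FIFO buffer and drains to memory later, so without a fence both loads can read 0.

        # s0/s1: a store enters its thread's buffer; f0/f1: the buffer drains
        # to shared memory; l0/l1: a load reads shared memory.
        from itertools import permutations

        def observable(fence):
            outcomes = set()
            for order in permutations(['s0', 'f0', 'l0', 's1', 'f1', 'l1']):
                pos = {e: i for i, e in enumerate(order)}
                ok = (pos['s0'] < pos['f0'] and pos['s0'] < pos['l0'] and
                      pos['s1'] < pos['f1'] and pos['s1'] < pos['l1'])
                if fence:  # a fence after the store forces the drain first
                    ok = ok and pos['f0'] < pos['l0'] and pos['f1'] < pos['l1']
                if not ok:
                    continue
                mem, regs = {'x': 0, 'y': 0}, {}
                for e in order:
                    if e == 'f0': mem['x'] = 1        # thread 0 stored x := 1
                    elif e == 'f1': mem['y'] = 1      # thread 1 stored y := 1
                    elif e == 'l0': regs[0] = mem['y']
                    elif e == 'l1': regs[1] = mem['x']
                outcomes.add((regs[0], regs[1]))
            return outcomes

        print(observable(fence=False))  # includes (0, 0): the TSO anomaly
        print(observable(fence=True))   # (0, 0) is gone once fences are added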

  • 3.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Sweden.
    Atig, Mohamed Faouzi
    Uppsala University, Sweden.
    Chen, Yu-Fang
    Academia Sinica, Taiwan.
    Leonardsson, Carl
    Uppsala University, Sweden.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Memorax, a Precise and Sound Tool for Automatic Fence Insertion under TSO. 2013. In: Tools and Algorithms for the Construction and Analysis of Systems: 19th International Conference, TACAS 2013, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013, Rome, Italy, March 16-24, 2013. Proceedings, Springer Berlin/Heidelberg, 2013, p. 530-536. Conference paper (Refereed)
    Abstract [en]

    We introduce MEMORAX, a tool for the verification of control state reachability (i.e., safety properties) of concurrent programs manipulating finite range and integer variables and running on top of weak memory models. The verification task is non-trivial as it involves exploring state spaces of arbitrary or even infinite sizes. Even for programs that only manipulate finite range variables, the sizes of the store buffers could grow unboundedly, and hence the state spaces that need to be explored could be of infinite size. In addition, MEMORAX incorporates an interpolation based CEGAR loop to make possible the verification of control state reachability for concurrent programs involving integer variables. The reachability procedure is used to automatically compute possible memory fence placements that guarantee the unreachability of bad control states under TSO. In fact, for programs only involving finite range variables and running on TSO, the fence insertion functionality is complete, i.e., it will find all minimal sets of memory fence placements (minimal in the sense that removing any fence would result in the reachability of the bad control states). This makes MEMORAX the first freely available, open source, push-button verification and fence insertion tool for programs running under TSO with integer variables.

  • 4.
    Abdulla, Parosh Aziz
    et al.
    Uppsala Univ, Sweden.
    Atig, Mohamed Faouzi
    Uppsala Univ, Sweden.
    Chen, Yu-Fang
    Acad Sinica, Taiwan.
    Phi Diep, Bui
    Uppsala Univ, Sweden.
    Holik, Lukas
    Brno Univ Technol, Czech Republic.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Rümmer, Philipp
    Uppsala Univ, Sweden.
    TRAU: SMT solver for string constraints. 2018. In: Proceedings of the 2018 18th Conference on Formal Methods in Computer Aided Design (FMCAD), IEEE, 2018, p. 165-169. Conference paper (Refereed)
    Abstract [en]

    We introduce TRAU, an SMT solver for an expressive constraint language, including word equations, length constraints, context-free membership queries, and transducer constraints. The satisfiability problem for such a class of constraints is in general undecidable. The key idea behind TRAU is a technique called flattening, which searches for satisfying assignments that follow simple patterns. TRAU implements a Counter-Example Guided Abstraction Refinement (CEGAR) framework which contains both an under- and an over-approximation module. The approximations are refined in an automatic manner by information flow between the two modules. The technique implemented by TRAU can handle a rich class of string constraints and has better performance than state-of-the-art string solvers.
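
    The flattening idea can be sketched independently of TRAU: restrict every string variable to a simple pattern w^n and search only that space. The brute-force toy below (tiny assumed bounds, nothing like TRAU's symbolic machinery) finds flat solutions of a word equation.

        from itertools import product

        def flat_solutions(lhs, rhs, variables, alphabet='ab',
                           max_base=2, max_rep=3):
            # Candidate "flat" assignments: each variable is w repeated k times.
            bases = [''] + [''.join(p) for n in range(1, max_base + 1)
                            for p in product(alphabet, repeat=n)]
            for words in product(bases, repeat=len(variables)):
                for reps in product(range(max_rep + 1), repeat=len(variables)):
                    env = {v: w * k for v, w, k in zip(variables, words, reps)}
                    expand = lambda side: ''.join(env.get(t, t) for t in side)
                    if expand(lhs) == expand(rhs):
                        yield env

        # The equation x.a = a.x forces x into a*; flattening finds a witness:
        print(next(flat_solutions(('x', 'a'), ('a', 'x'), ['x'])))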

  • 5.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University.
    Atig, Mohammed Faouzi
    Uppsala University.
    Ganjei, Zeinab
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Zhu, Yunyun
    Uppsala University.
    Verification of Cache Coherence Protocols wrt. Trace Filters. 2015. Conference paper (Refereed)
    Abstract [en]

    We address the problem of parameterized verification of cache coherence protocols for hardware accelerated transactional memories. In this setting, transactional memories leverage the versioning capabilities of the underlying cache coherence protocol. The length of the transactions, their number, and the number of manipulated variables (i.e., cache lines) are parameters of the verification problem. Caches in such systems are finite-state automata communicating via broadcasts and shared variables. We augment our system with filters that restrict the set of possible executable traces according to existing conflict resolution policies. We show that the verification of coherence for parameterized cache protocols with filters can be reduced to systems with only a finite number of cache lines. For verification, we show how to account for the effect of the adopted filters in a symbolic backward reachability algorithm based on the framework of constrained monotonic abstraction. We have implemented our method and used it to verify transactional memory coherence protocols with respect to different conflict resolution policies.

  • 6.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Sweden.
    Dwarkadas, Sandhya
    University of Rochester, USA.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Shriraman, Arrvindh
    Simon Fraser University, Canada.
    Zhu, Yunyun
    Uppsala University, Sweden.
    Verifying Safety and Liveness for the FlexTM Hybrid Transactional Memory. 2013. In: Design, Automation & Test in Europe (DATE 2013), Grenoble, France, March 18-22, 2013, IEEE, 2013, p. 785-790. Conference paper (Refereed)
    Abstract [en]

    We consider the verification of safety (strict serializability and abort consistency) and liveness (obstruction and livelock freedom) for the hybrid transactional memory framework FLEXTM. This framework allows for flexible implementations of transactional memories based on an adaptation of the MESI coherence protocol. FLEXTM allows for both eager and lazy conflict resolution strategies. Like in the case of Software Transactional Memories, the verification problem is not trivial as the number of concurrent transactions, their size, and the number of accessed shared variables cannot be a priori bounded. This complexity is exacerbated by aspects that are specific to hardware and hybrid transactional memories. Our work takes into account intricate behaviours such as cache line based conflict detection, false sharing, invisible reads or non-transactional instructions. We carry out the first automatic verification of a hybrid transactional memory and establish, by adopting a small model approach, challenging properties such as strict serializability, abort consistency, and obstruction freedom for both eager and lazy conflict resolution strategies. We also detect an example that refutes livelock freedom. To achieve this, our prototype tool makes use of the latest antichain-based techniques to handle systems with tens of thousands of states.

  • 7.
    Abdulla, Parosh Aziz
    et al.
    Uppsala Univ, Sweden.
    Haziza, Frederic
    Uppsala Univ, Sweden.
    Holik, Lukas
    Uppsala Univ, Sweden; Brno Univ Technol, Czech Republic.
    Jonsson, Bengt
    Uppsala Univ, Sweden.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Correction: An integrated specification and verification technique for highly concurrent data structures (vol. 19, p. 549, 2017). 2021. In: International Journal on Software Tools for Technology Transfer, ISSN 1433-2779, E-ISSN 1433-2787, Vol. 23, article id 825. Article in journal (Other academic)
    Abstract [en]

    n/a

  • 8.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Sweden.
    Haziza, Frédéric
    Uppsala University, Sweden.
    Holik, Lukas
    Brno University of Technology, Czech Republic.
    Jonsson, Bengt
    Uppsala University, Sweden.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    An Integrated Specification and Verification Technique for Highly Concurrent Data Structures. 2013. In: The 19th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2013), Rome, Italy, March 16-24, 2013 / [ed] Piterman, Nir, Smolka, Scott, 2013. Conference paper (Refereed)
    Abstract [en]

    We present a technique for automatically verifying safety properties of concurrent programs, in particular programs which rely on subtle dependencies of local states of different threads, such as lock-free implementations of stacks and queues in an environment without garbage collection. Our technique addresses the joint challenges of infinite-state specifications, an unbounded number of threads, and an unbounded heap managed by explicit memory allocation. Our technique builds on the automata-theoretic approach to model checking, in which a specification is given by an automaton that observes the execution of a program and accepts executions that violate the intended specification. We extend this approach by allowing specifications to be given by a class of infinite-state automata. We show how such automata can be used to specify queues, stacks, and other data structures, by extending a data-independence argument. For verification, we develop a shape analysis, which tracks correlations between pairs of threads, and a novel abstraction to make the analysis practical. We have implemented our method and used it to verify programs, some of which have not been verified by any other automatic method before.
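
    The observer idea is easy to sketch in isolation (a hand-written toy, not the paper's infinite-state automata framework): by the data-independence argument, checking FIFO order reduces to watching two distinguished values and accepting exactly the violating traces.

        class FifoObserver:
            """Flags a bug iff 'b' is dequeued although 'a' was enqueued first."""
            def __init__(self):
                self.state = 'init'            # init -> a_enq -> ab_enq -> bad

            def step(self, op, value):
                if self.state == 'init' and (op, value) == ('enq', 'a'):
                    self.state = 'a_enq'
                elif self.state == 'a_enq' and (op, value) == ('enq', 'b'):
                    self.state = 'ab_enq'
                elif self.state == 'ab_enq' and op == 'deq':
                    self.state = 'bad' if value == 'b' else 'done'
                return self.state == 'bad'

        obs = FifoObserver()
        trace = [('enq', 'a'), ('enq', 'b'), ('deq', 'b')]   # a buggy queue
        print(any(obs.step(op, v) for op, v in trace))       # True: violation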

  • 9.
    Abraham, Michael
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Effektivare fordonsdiagnostik över CAN-bussen genom UDS [More efficient vehicle diagnostics over the CAN bus using UDS]. 2020. Independent thesis, Basic level (degree of Bachelor), 10.5 credits / 16 HE credits. Student thesis
    Abstract [en]

    Cars are getting more technically advanced, and more ECUs are being developed, resulting in increased safety and comfort and a lower environmental impact. This makes it complex to test and verify that all the different ECUs function as intended in various situations. Vehicle diagnostics often requires expensive third-party software. Syntronic AB is currently using software with much larger functionality than needed to perform vehicle diagnostics, and much of the unnecessary functionality leads to unnecessarily long runtimes. By studying CAN and UDS and analyzing how they interact, I was able to create software by systematically developing it with two interfaces connected to each computer, continuously testing the implementation against the theoretical basis, and finally testing the software in a vehicle. The created software was better suited to the needs of the company, and the more functionality-adapted software could perform the same diagnostics faster than the company's current software. The UDS service most used by the company could be implemented, and the created software enabled more UDS services to be added without modifications to the main program or its features.
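
    For illustration, a single UDS ReadDataByIdentifier exchange over CAN might look as follows with the python-can library (hypothetical CAN IDs and a SocketCAN channel assumed; this is a sketch, not the thesis software):

        import can

        bus = can.Bus(interface='socketcan', channel='can0')
        # ISO-TP single frame: PCI length byte, SID 0x22 (ReadDataByIdentifier),
        # DID 0xF190 (VIN), padded to 8 bytes.
        request = can.Message(arbitration_id=0x7E0,
                              data=[0x03, 0x22, 0xF1, 0x90, 0, 0, 0, 0],
                              is_extended_id=False)
        bus.send(request)
        reply = bus.recv(timeout=1.0)
        # A positive response echoes SID + 0x40, i.e. 0x62, then the DID.
        if reply is not None and reply.data[1] == 0x62:
            print('payload:', bytes(reply.data[4:]))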

  • 10.
    Abrahamsson, Linn
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Melin Wenström, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Användning av prototyper som verktyg för kravhantering i agil mjukvaruutveckling: En fallstudie [Using prototypes as a tool for requirements engineering in agile software development: A case study]. 2018. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Requirements Engineering (RE) in Agile Software Development (ASD) is a challenge that many face, and several techniques exist for doing so. One such technique is prototyping, where a model of a product is used to gather important information in software development. To describe how much a prototype resembles the product, the notion of fidelity is used. The aim of this study is to contribute to research regarding prototyping in ASD, and to examine the effect of a prototype's fidelity when using prototypes in discussions during RE. A case study is performed at the company Exsitec where staff are interviewed regarding prototyping in software development. Thereafter, two prototypes of low and high fidelity are developed and used in interviews as a basis for discussion. Based on this study, the use of prototypes in software projects can help customers trust the process, improve communication with customers, and facilitate reaching consensus among different stakeholders. Furthermore, depending on how they are used, prototypes can contribute to understanding the big picture of the requirements and can also serve as documentation. The study also shows some, albeit subtle, differences in the information collected using prototypes with low and high fidelity. The use of a high-fidelity prototype seems to generate more requirements, but makes interviewees less likely to come up with larger, more comprehensive requirement changes.

  • 11.
    Abrahamsson, Robin
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Berntsen, David
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Comparing modifiability of React Native and two native codebases. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Creating native mobile applications on multiple platforms generates a lot of duplicate code. This thesis has evaluated whether the code quality attribute modifiability improves when migrating to React Native. One Android and one iOS codebase existed for an application, and a third codebase was developed with React Native. The measurements of the codebases were based on the SQMMA model. The metrics for the model were collected with static analyzers created specifically for this project. The results consist of graphs that show the modifiability of some specific components over time and graphs that show the stability of the platforms. These graphs show that when measuring code metrics on applications over time, it is better to do so on a large codebase that has been developed for some time. When calculating a modifiability value, the sum of the metrics and the average value of the metrics between files should be used, and the results indicate that the React Native platform is more stable than the native ones.
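
    A toy version of such a metric collector (using Python's ast module rather than the thesis' JavaScript/native analyzers, and with invented weights) might look like this:

        import ast

        def metrics(path):
            source = open(path, encoding='utf-8').read()
            return {
                'loc': source.count('\n') + 1,
                'functions': sum(isinstance(n, ast.FunctionDef)
                                 for n in ast.walk(ast.parse(source))),
            }

        def modifiability(m, weights={'loc': -0.01, 'functions': -0.1}):
            # Larger files and more units cost modifiability; higher is better.
            return sum(weights[k] * v for k, v in m.items())

        print(modifiability(metrics('example.py')))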

  • 12.
    Abualigah, Laith
    et al.
    Al al Bayt Univ, Jordan; Al Ahliyya Amman Univ, Jordan; Lebanese Amer Univ, Lebanon; Middle East Univ, Jordan; Appl Sci Private Univ, Jordan; Univ Sains Malaysia, Malaysia; Sunway Univ Malaysia, Malaysia.
    Oliva, Diego
    Univ Guadalajara, Mexico.
    Jia, Heming
    Sanming Univ, Peoples R China.
    Gul, Faiza
    Air Univ, Pakistan.
    Khodadadi, Nima
    Florida Int Univ, FL USA.
    Hussien, Abdelazim
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Fayoum Univ, Egypt.
    Al Shinwan, Mohammad
    Appl Sci Private Univ, Jordan.
    Ezugwu, Absalom E.
    North West Univ, South Africa.
    Abuhaija, Belal
    Wenzhou Kean Univ, Peoples R China.
    Abu Zitar, Raed
    Sorbonne Univ Abu Dhabi, U Arab Emirates.
    Improved prairie dog optimization algorithm by dwarf mongoose optimization algorithm for optimization problems. 2023. In: Multimedia Tools and Applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 83, no 11, p. 32613-32653. Article in journal (Refereed)
    Abstract [en]

    Recently, optimization problems have been revisited in many domains, and they need powerful search methods to address them. In this paper, a novel hybrid optimization algorithm, called IPDOA, is proposed to solve various benchmark functions. The proposed method is based on enhancing the search process of the Prairie Dog Optimization Algorithm (PDOA) by using the primary updating mechanism of the Dwarf Mongoose Optimization Algorithm (DMOA). The main aim of the proposed IPDOA is to avoid the main weaknesses of the original methods; these weaknesses are poor convergence ability, an imbalance in the search process, and premature convergence. Experiments are conducted on 23 standard benchmark functions, and the results are compared with similar methods from the literature. The results are recorded in terms of the best, worst, and average fitness values, showing that the proposed method is better able to deal with various problems than the other methods.

  • 13.
    Adegboye, Oluwatayomi Rereloluwa
    et al.
    Univ Mediterranean Karpasia, Turkiye.
    Feda, Afi Kekeli
    European Univ Lefke, Turkiye.
    Ojekemi, Opeoluwa Seun
    Univ Mediterranean Karpasia, Turkiye.
    Agyekum, Ephraim Bonah
    Ural Fed Univ, Russia.
    Hussien, Abdelazim
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Fayoum Univ, Egypt; Appl Sci Private Univ, Jordan; Middle East Univ, Jordan.
    Kamel, Salah
    Aswan Univ, Egypt.
    Chaotic opposition learning with mirror reflection and worst individual disturbance grey wolf optimizer for continuous global numerical optimization. 2024. In: Scientific Reports, E-ISSN 2045-2322, Vol. 14, no 1, article id 4660. Article in journal (Refereed)
    Abstract [en]

    The effective meta-heuristic technique known as the grey wolf optimizer (GWO) has shown its proficiency. However, due to its reliance on the alpha wolf for guiding the position updates of search agents, the risk of being trapped in a local optimal solution is notable. Furthermore, during stagnation, the convergence of other search wolves towards this alpha wolf results in a lack of diversity within the population. Hence, this research introduces an enhanced version of the GWO algorithm designed to tackle numerical optimization challenges. The enhanced GWO incorporates innovative approaches such as Chaotic Opposition Learning (COL), Mirror Reflection Strategy (MRS), and Worst Individual Disturbance (WID), and is called CMWGWO. MRS, in particular, empowers certain wolves to extend their exploration range, thus enhancing the global search capability. By employing COL, diversification is intensified, leading to reduced solution stagnation, improved search precision, and an overall boost in accuracy. The integration of WID fosters more effective information exchange between the least and most successful wolves, facilitating a successful exit from local optima and significantly enhancing exploration potential. To validate the superiority of CMWGWO, a comprehensive evaluation is conducted. A wide array of 23 benchmark functions, spanning dimensions from 30 to 500, ten CEC19 functions, and three engineering problems are used for experimentation. The empirical findings vividly demonstrate that CMWGWO surpasses the original GWO in terms of convergence accuracy and robust optimization capabilities.
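
    The two most concrete ingredients here, the standard GWO position update and opposition learning driven by a chaotic logistic-map coefficient, can be reconstructed as a toy (sphere benchmark, invented constants; not the authors' CMWGWO code):

        import numpy as np

        def gwo_step(X, leaders, a, rng):
            # Standard GWO: average of moves toward the alpha/beta/delta wolves.
            parts = []
            for L in leaders:
                A = 2 * a * rng.random(X.shape) - a
                C = 2 * rng.random(X.shape)
                parts.append(L - A * np.abs(C * L - X))
            return sum(parts) / 3.0

        f = lambda X: np.sum(X ** 2, axis=1)         # sphere benchmark
        lb, ub, chaos = -5.0, 5.0, 0.7
        rng = np.random.default_rng(0)
        X = rng.uniform(lb, ub, (20, 10))
        for t in range(200):
            a = 2.0 * (1 - t / 200)                  # a decays linearly 2 -> 0
            alpha, beta, delta = X[np.argsort(f(X))[:3]]
            X = np.clip(gwo_step(X, (alpha, beta, delta), a, rng), lb, ub)
            chaos = 4.0 * chaos * (1.0 - chaos)      # chaotic logistic sequence
            opposed = np.clip(chaos * (lb + ub - X), lb, ub)  # opposition point
            keep = f(opposed) < f(X)
            X[keep] = opposed[keep]                  # keep improving opposites
        print(f(X).min())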

  • 14.
    Adiththan, Arun
    et al.
    CUNY, NY 10019 USA.
    Ramesh, S.
    Gen Motors R&D, MI 48090 USA.
    Samii, Soheil
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Gen Motors R&D, MI 48090 USA.
    Cloud-assisted Control of Ground Vehicles using Adaptive Computation Offloading Techniques. 2018. In: Proceedings of the 2018 Design, Automation and Test in Europe Conference and Exhibition (DATE), IEEE, 2018, p. 589-592. Conference paper (Refereed)
    Abstract [en]

    Existing approaches to designing efficient safety-critical control applications are constrained by limited in-vehicle sensing and computational capabilities. In the context of automated driving, we argue that there is a need to leverage resources "out-of-the-vehicle" to meet the sensing and powerful processing requirements of sophisticated algorithms (e.g., deep neural networks). Realizing this requires a suitable computation offloading technique that meets the vehicle safety and stability requirements even in the presence of an unreliable communication network. In this work, we propose an adaptive offloading technique for control computations into the cloud. The proposed approach considers both current network conditions and control application requirements to determine the feasibility of leveraging remote computation and storage resources. As a case study, we describe a cloud-based path following controller application that leverages crowdsensed data for path planning.
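
    The core admission decision can be sketched in a few lines (hypothetical thresholds and timing parameters; not the authors' controller):

        def choose_controller(rtt_ms, loss_rate, period_ms,
                              cloud_compute_ms, margin_ms=5.0):
            # Offload only if the round trip plus cloud compute time fits the
            # control period with margin, and the link is reliable enough;
            # otherwise fall back to the on-board controller.
            fits = rtt_ms + cloud_compute_ms + margin_ms <= period_ms
            return 'cloud' if fits and loss_rate < 0.05 else 'local'

        print(choose_controller(20.0, 0.01, 50.0, 10.0))  # -> cloud
        print(choose_controller(60.0, 0.01, 50.0, 10.0))  # -> local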

  • 15.
    Adolfsson, Fredrik
    Linköping University, Department of Computer and Information Science, Software and Systems.
    A Model-Based Approach to Hands Overlay for Augmented Reality. 2021. Independent thesis, Basic level (degree of Bachelor), 10.5 credits / 16 HE credits. Student thesis
    Abstract [en]

    Augmented Reality is a technology where the user sees the environment mixed with a virtual reality containing things such as text, animations, pictures, and videos. Remote guidance is a sub-field of Augmented Reality where guidance is given remotely to identify and solve problems without being there in person. Using hands overlay, the guide can use his or her hand to point and show gestures in real-time. To do this, one needs to track the hands and create a video stream that represents them. The video stream of the hands is then overlaid on top of the video from the individual getting help. A solution currently used in the industry is image segmentation, where an image is segmented into foreground and background to decide what to include. To work correctly, this requires distinct differences between the pixels that should be included and those that should be discarded. This thesis instead investigates a model-based approach to hand tracking, where points of interest on the hands are tracked to build a 3D model of them. A model-based solution is based on sensor data, meaning that it would not have the limitations that image segmentation has. A prototype is developed and integrated into the existing solution. The hand modeling is done in a Unity application and then transferred into the existing application. The results show a clear but modest overhead, so the prototype can run on a normal computer. The prototype works as a proof of concept and shows the potential of a model-based approach.
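
    The model-based idea can be approximated with an off-the-shelf landmark tracker such as MediaPipe Hands (a sketch of the general approach, not the thesis prototype, which builds the hand model in Unity):

        import cv2
        import mediapipe as mp

        hands = mp.solutions.hands.Hands(max_num_hands=2)
        drawer = mp.solutions.drawing_utils
        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:        # 21 keypoints per hand
                for lm in result.multi_hand_landmarks:
                    drawer.draw_landmarks(frame, lm,
                                          mp.solutions.hands.HAND_CONNECTIONS)
            cv2.imshow('hands overlay', frame)
            if cv2.waitKey(1) == 27:               # Esc quits
                break
        cap.release()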

  • 16.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Software and Systems.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Adaptive Temperature-Aware SoC Test Scheduling Considering Process Variation. 2011. In: Digital System Design (DSD), 2011 14th Euromicro Conference on, IEEE, 2011, p. 197-204. Conference paper (Refereed)
    Abstract [en]

    High temperature and process variation are undesirable effects for modern systems-on-chip. The high temperature is a prominent issue during test and should be taken care of during the test process. Modern SoCs, affected by large process variation, experience rapid and large temperature deviations and, therefore, a traditional static test schedule which is unaware of these deviations will be suboptimal in terms of speed and/or thermal-safety. This paper presents an adaptive test scheduling method which addresses the temperature deviations and acts accordingly in order to improve the test speed and thermal-safety. The proposed method is divided into a computationally intense offline-phase, and a very simple online-phase. In the offline-phase a schedule tree is constructed, and in the online-phase the appropriate path in the schedule tree is traversed, step by step and based on temperature sensor readings. Experiments have demonstrated the efficiency of the proposed method.

  • 17.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    An Efficient Temperature-Gradient Based Burn-In Technique for 3D Stacked ICs. 2014. In: Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, IEEE conference proceedings, 2014. Conference paper (Refereed)
    Abstract [en]

    Burn-in is usually carried out with high temperature and elevated voltage. Since some of the early-life failures depend not only on high temperature but also on temperature gradients, simply raising up the temperature of an IC is not sufficient to detect them. This is especially true for 3D stacked ICs, since they have usually very large temperature gradients. The efficient detection of these early-life failures requires that specific temperature gradients are enforced as a part of the burn-in process. This paper presents an efficient method to do so by applying high power stimuli to the cores of the IC under burn-in through the test access mechanism. Therefore, no external heating equipment is required. The scheduling of the heating and cooling intervals to achieve the required temperature gradients is based on thermal simulations and is guided by functions derived from a set of thermal equations. Experimental results demonstrate the efficiency of the proposed method.
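
    The scheduling principle can be illustrated with a toy lumped RC thermal model (made-up constants, inter-die coupling ignored; the paper drives this with real thermal simulations):

        def step(T, power, T_amb=25.0, R=2.0, C=0.5, dt=0.01):
            # First-order model: dT/dt = (P - (T - T_amb) / R) / C
            return T + (power - (T - T_amb) / R) / C * dt

        T = [25.0, 25.0]      # bottom die, top die
        target = 15.0         # enforce: bottom hotter than top by 15 C
        for _ in range(2000):
            heat = (T[0] - T[1]) < target              # bang-bang on gradient
            T[0] = step(T[0], 40.0 if heat else 0.0)   # heat via test stimuli
            T[1] = step(T[1], 0.0)
        print(round(T[0] - T[1], 1))                   # ~15.0: gradient held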

  • 18.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    An Integrated Temperature-Cycling Acceleration and Test Technique for 3D Stacked ICs. 2015. In: 20th Asia and South Pacific Design Automation Conference (ASP-DAC 2015), Chiba/Tokyo, Japan, Jan. 19-22, 2015, Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 526-531. Conference paper (Refereed)
    Abstract [en]

    In a modern 3D IC, electrical connections between vertically stacked dies are made using through silicon vias. Through silicon vias are subject to undesirable early-life effects such as protrusion as well as void formation and growth. These effects result in opens, resistive opens, and stress induced carrier mobility reduction, and consequently circuit failures. Operating the ICs under extreme temperature cycling can effectively accelerate such early-life failures and make them detectable at the manufacturing test process. An integrated temperature-cycling acceleration and test technique is introduced in this paper that integrates a temperature-cycling acceleration procedure with pre-, mid-, and post-bond tests for 3D ICs. Moreover, it reduces the need for costly temperature chamber based temperature-cycling acceleration procedures. All these result in a reduction in the overall test costs. The proposed method is a schedule-based solution that creates the required temperature cycling effect along with performing the tests. Experimental results demonstrate its efficiency.

  • 19.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Efficient Test Application for Rapid Multi-Temperature Testing. 2015. In: Proceedings of the 25th edition on Great Lakes Symposium on VLSI, Association for Computing Machinery (ACM), 2015, p. 3-8. Conference paper (Other academic)
    Abstract [en]

    Different defects may manifest themselves at different temperatures. Therefore, the tests that target such temperature-dependent defects must be applied at temperatures appropriate for detecting them. Such a multi-temperature testing scheme applies tests at different required temperatures. It is known that a test's power dissipation depends on the previously applied test. Therefore, the same set of tests, when organized differently, dissipates different amounts of power. The technique proposed in this paper organizes the tests efficiently so that the resulting power levels lead to the required temperatures. Consequently, rapid multi-temperature testing is achieved. Experimental studies demonstrate the efficiency of the proposed technique.

  • 20.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Heuristics for Adaptive Temperature-Aware SoC Test Scheduling Considering Process Variation. 2011. In: The 11th Swedish System-on-Chip Conference, Varberg, Sweden, May 2-3, 2011, 2011. Conference paper (Other academic)
    Abstract [en]

    High working temperature and process variation are undesirable effects for modern systems-on-chip. The high temperature should be taken care of during the test. On the other hand, large process variations induce rapid and large temperature deviations, causing traditional static test schedules to be suboptimal in terms of speed and/or thermal-safety. A remedy to this problem is an adaptive test schedule which addresses the temperature deviations by reacting to them. Our adaptive method is divided into a computationally intense offline-phase and a very simple online-phase. In this paper, heuristics are proposed for the offline phase, in which the optimized schedule tree is found. In the online-phase, the appropriate path in the schedule tree is traversed based on the temperature sensor readings. Experiments are performed to tune the proposed heuristics and to demonstrate their efficiency.

  • 21.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Process-variation and Temperature Aware SoC Test Scheduling Technique. 2013. In: Journal of Electronic Testing, ISSN 0923-8174, E-ISSN 1573-0727, Vol. 29, no 4, p. 499-520. Article in journal (Refereed)
    Abstract [en]

    High temperature and process variation are undesirable phenomena affecting modern Systems-on-Chip (SoC). High temperature is a well-known issue, in particular during test, and should be taken care of in the test process. Modern SoCs are affected by large process variation and therefore experience large and time-variant temperature deviations. A traditional test schedule which ignores these deviations will be suboptimal in terms of speed or thermal-safety. This paper presents an adaptive test scheduling method which acts in response to the temperature deviations in order to improve the test speed and thermal safety. The method consists of an offline phase and an online phase. In the offline phase a schedule tree is constructed and in the online phase the appropriate path in the schedule tree is traversed based on temperature sensor readings. The proposed technique is designed to keep the online phase very simple by shifting the complexity into the offline phase. In order to efficiently produce high-quality schedules, an optimization heuristic which utilizes a dedicated thermal simulation is developed. Experiments are performed on a number of SoCs including the ITC'02 benchmarks, and the experimental results demonstrate that the proposed technique significantly reduces the test cost in comparison with the best existing test scheduling method.
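
    The offline/online split can be sketched as a tiny schedule tree (hand-built branches and an assumed threshold; the paper constructs the tree by optimization against thermal simulations):

        class Node:
            def __init__(self, action, threshold=None, cool=None, hot=None):
                self.action, self.threshold = action, threshold
                self.cool, self.hot = cool, hot    # branches chosen online

        # Offline phase: precompute the tree (here trivially by hand).
        tree = Node('test core A', threshold=70.0,
                    cool=Node('test core B'),      # thermal headroom left
                    hot=Node('cooling interval'))  # pause before continuing

        # Online phase: descend the tree using temperature sensor readings.
        def online(node, read_sensor):
            while node is not None:
                print('apply:', node.action)
                if node.threshold is None:
                    break
                node = node.hot if read_sensor() > node.threshold else node.cool

        online(tree, read_sensor=lambda: 75.0)  # a hot chip takes cooling path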

  • 22.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Process-Variation Aware Multi-temperature Test Scheduling. 2014. In: 27th International Conference on VLSI Design and 13th International Conference on Embedded Systems, IEEE conference proceedings, 2014, p. 32-37. Conference paper (Refereed)
    Abstract [en]

    Chips manufactured with deep sub-micron technologies are prone to large process variation and temperature-dependent defects. In order to provide high test efficiency, the tests for temperature-dependent defects should be applied at appropriate temperature ranges. Existing static scheduling techniques achieve these specified temperatures by scheduling the tests, specially developed heating sequences, and cooling intervals together. Because of the temperature uncertainty induced by process variation, a static test schedule is not capable of applying the tests at the intended temperatures in an efficient manner. As a result, the test cost will be very high. In this paper, an adaptive test scheduling method is introduced that utilizes on-chip temperature sensors in order to adapt the test schedule to the actual temperatures. The proposed method generates a low cost schedule tree based on the variation statistics and thermal simulations in the design phase. During the test, a chip selects an appropriate schedule dynamically based on temperature sensor readings. A 23% decrease in the likelihood that tests are not applied at the intended temperatures is observed in the experimental studies, in addition to a 20% reduction in test application time.

  • 23.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Temperature-Gradient Based Burn-In for 3D Stacked ICs. 2013. In: The 12th Swedish System-on-Chip Conference (SSoCC 2013), Ystad, Sweden, May 6-7, 2013 (not reviewed, not printed), 2013. Conference paper (Other academic)
    Abstract [en]

    3D Stacked IC fabrication, using Through-Silicon-Vias, is a promising technology for future integrated circuits. However, large temperature gradients may exacerbate early-life-failures to the extent that the commercialization of 3D Stacked ICs is challenged. The effective detection of these early-life-failures requires that burn-in is performed when the IC’s temperatures comply with the thermal maps that properly specify the temperature gradients. In this paper, two methods that efficiently generate and maintain the specified thermal maps are proposed. The thermal maps are achieved by applying heating and cooling intervals to the chips under test through test access mechanisms. Therefore, no external heating system is required. The scheduling of the heating and cooling intervals is based on thermal simulations. The schedule generation is guided by functions that are derived from the temperature equations. Experimental results demonstrate the efficiency of the proposed method.

  • 24.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Temperature-Gradient Based Test Scheduling for 3D Stacked ICs. 2013. In: 2013 IEEE International Conference on Electronics, Circuits, and Systems, IEEE conference proceedings, 2013, p. 405-408. Conference paper (Refereed)
    Abstract [en]

    Defects that are dependent on temperature-gradients (e.g., delay-faults) introduce a challenge for achieving an effective test process, in particular for 3D ICs. Testing for such defects must be performed when the proper temperature gradients are enforced on the IC, otherwise these defects may escape the test. In this paper, a technique that efficiently heats up the IC during test so that it complies with the specified temperature gradients is proposed. The specified temperature gradients are achieved by applying heating sequences to the cores of the IC under test through the test access mechanism; thus no external heating mechanism is required. The scheduling of the test and heating sequences is based on thermal simulations. The schedule generation is guided by functions derived from the IC's temperature equation. Experimental results demonstrate that the proposed technique offers considerable test time savings.

  • 25.
    Aghaee, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Temperature-Gradient-Based Burn-In and Test Scheduling for 3-D Stacked ICs. 2015. In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, ISSN 1063-8210, E-ISSN 1557-9999, Vol. 23, no 12, p. 2992-3005. Article in journal (Refereed)
    Abstract [en]

    Large temperature gradients exacerbate various types of defects including early-life failures and delay faults. Efficient detection of these defects requires that burn-in and test for delay faults, respectively, are performed when temperature gradients with proper magnitudes are enforced on an Integrated Circuit (IC). This issue is much more important for 3-D stacked ICs (3-D SICs) compared with 2-D ICs because of the larger temperature gradients in 3-D SICs. In this paper, two methods to efficiently enforce the specified temperature gradients on the IC, for burn-in and delay-fault test, are proposed. The specified temperature gradients are enforced by applying high-power stimuli to the cores of the IC under test through the test access mechanism. Therefore, no external heating mechanism is required. The tests, high power stimuli, and cooling intervals are scheduled together based on temperature simulations so that the desired temperature gradients are rapidly enforced. The schedule generation is guided by functions derived from a set of thermal equations. The experimental results demonstrate the efficiency of the proposed methods.

  • 26.
    Aghighi, Meysam
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Computational Complexity of some Optimization Problems in Planning. 2017. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Automated planning is known to be computationally hard in the general case. Propositional planning is PSPACE-complete and first-order planning is undecidable. One method for analyzing the computational complexity of planning is to study restricted subsets of planning instances, with the aim of differentiating instances with varying complexity. We use this methodology for studying the computational complexity of planning. Finding new tractable (i.e. polynomial-time solvable) problems has been a particularly important goal for researchers in the area. The reason behind this is not only to differentiate between easy and hard planning instances, but also to use polynomial-time solvable instances in order to construct better heuristic functions and improve planners. We identify a new class of tractable cost-optimal planning instances by restricting the causal graph. We study the computational complexity of oversubscription planning (such as the net-benefit problem) under various restrictions and reveal strong connections with classical planning. Inspired by this, we present a method for compiling oversubscription planning problems into the ordinary plan existence problem. We further study the parameterized complexity of cost-optimal and net-benefit planning under the same restrictions and show that the choice of numeric domain for the action costs has a great impact on the parameterized complexity. We finally consider the parameterized complexity of certain problems related to partial-order planning. In some applications, less restricted plans than total-order plans are needed; therefore, partial-order plans are used instead. When dealing with partial-order plans, one important question is how to achieve optimal partial-order plans, i.e. plans with the highest degree of freedom according to some notion of flexibility. We study several optimization problems for partial-order plans, such as finding a minimum deordering or reordering, and finding the minimum parallel execution length.

    List of papers
    1. Oversubscription planning: Complexity and compilability
    2014 (English). In: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AI Access Foundation, 2014, Vol. 3, p. 2221-2227. Conference paper, Published paper (Refereed)
    Abstract [en]

    Many real-world planning problems are oversubscription problems where all goals are not simultaneously achievable and the planner needs to find a feasible subset. We present complexity results for the so-called partial satisfaction and net benefit problems under various restrictions; this extends previous work by van den Briel et al. Our results reveal strong connections between these problems and with classical planning. We also present a method for efficiently compiling oversubscription problems into the ordinary plan existence problem; this can be viewed as a continuation of earlier work by Keyder and Geffner.

    Place, publisher, year, edition, pages
    AI Access Foundation, 2014
    National Category
    Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-116727 (URN); 000485439702031; 2-s2.0-84908192348 (Scopus ID); 9781577356790 (ISBN)
    Conference
    28th AAAI Conference on Artificial Intelligence, AAAI 2014, 26th Innovative Applications of Artificial Intelligence Conference, IAAI 2014 and the 5th Symposium on Educational Advances in Artificial Intelligence, EAAI 2014
    Available from: 2015-04-09 Created: 2015-04-02 Last updated: 2020-06-29
    2. Tractable Cost-Optimal Planning over Restricted Polytree Causal Graphs
    2015 (English). In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI Press, 2015. Conference paper, Published paper (Refereed)
    Abstract [en]

    Causal graphs are widely used to analyze the complexity of planning problems. Many tractable classes have been identified with their aid and state-of-the-art heuristics have been derived by exploiting such classes. In particular, Katz and Keyder have studied causal graphs that are hourglasses (which is a generalization of forks and inverted-forks) and shown that the corresponding cost-optimal planning problem is tractable under certain restrictions. We continue this work by studying polytrees (which is a generalization of hourglasses) under similar restrictions. We prove tractability of cost-optimal planning by providing an algorithm based on a novel notion of variable isomorphism. Our algorithm also sheds light on the k-consistency procedure for identifying unsolvable planning instances. We speculate that this may, at least partially, explain why merge-and-shrink heuristics have been successful for recognizing unsolvable instances.

    Place, publisher, year, edition, pages
    AAAI Press, 2015
    Series
    Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468
    Keywords
    automated planning, causal graph, polynomial-time algorithm, cost-optimal planning, polytree
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-118729 (URN); 000485625503038; 978-1-57735-703-2 (ISBN)
    Conference
    29th AAAI Conference on Artificial Intelligence (AAAI-15), January 25–30, Austin, TX, USA
    Funder
    CUGS (National Graduate School in Computer Science)
    Available from: 2015-06-03 Created: 2015-06-03 Last updated: 2022-02-18
    3. Cost-optimal and Net-benefit Planning: A Parameterised Complexity View
    2015 (English). In: 24th International Joint Conference on Artificial Intelligence (IJCAI-15), IJCAI, 2015, p. 1487-1493. Conference paper, Published paper (Refereed)
    Abstract [en]

    Cost-optimal planning (COP) uses action costs and asks for a minimum-cost plan. It is sometimes assumed that there is no harm in using actions with zero cost or rational cost. Classical complexity analysis does not contradict this assumption; planning is PSPACE-complete regardless of whether action costs are positive or non-negative, integer or rational. We thus apply parameterised complexity analysis to shed more light on this issue. Our main results are the following. COP is W[2]-complete for positive integer costs, i.e. it is no harder than finding a minimum-length plan, but it is para-NP-hard if the costs are non-negative integers or positive rationals. This is a very strong indication that the latter cases are substantially harder. Net-benefit planning (NBP) additionally assigns goal utilities and asks for a plan with maximum difference between its utility and its cost. NBP is para-NP-hard even when action costs and utilities are positive integers, suggesting that it is harder than COP. In addition, we also analyse a large number of subclasses, using both the PUBS restrictions and restricting the number of preconditions and effects.

    Place, publisher, year, edition, pages
    IJCAI - International Joint Conferences on Artificial Intelligence, Institut für Informatik, Albert-Ludwigs-Universität Freiburg, Georges-Köhler-Allee Geb. 052, D-79110 Freiburg, Germany, 2015
    National Category
    Transport Systems and Logistics
    Identifiers
    urn:nbn:se:liu:diva-128181 (URN); 000442637801080; 9781577357384 (ISBN)
    Conference
    24th International Joint Conference on Artificial Intelligence (IJCAI-15), Buenos Aires, Argentina, Jul 25-31, 2015
    Funder
    CUGS (National Graduate School in Computer Science), 1054; Swedish Research Council, 621-2014-4086
    Available from: 2016-05-20 Created: 2016-05-20 Last updated: 2019-07-03. Bibliographically approved
    4. A Multi-parameter Complexity Analysis of Cost-optimal and Net-benefit Planning
    2016 (English). In: Twenty-Sixth International Conference on Automated Planning and Scheduling, King's College London, June 12-17, 2016 / [ed] Amanda Coles, Andrew Coles, Stefan Edelkamp, Daniele Magazzeni, Scott Sanner, AAAI Press, 2016, p. 2-10. Conference paper, Published paper (Refereed)
    Abstract [en]

    Aghighi and Bäckström have previously studied cost-optimal planning (COP) and net-benefit planning (NBP) for three action cost domains: the positive integers (Z_+), the non-negative integers (Z_0) and the positive rationals (Q_+). These were indistinguishable under standard complexity analysis for both problems, but separated for COP using parameterised complexity analysis. With the plan cost, k, as parameter, COP was W[2]-complete for Z_+, but para-NP-hard for both Z_0 and Q_+, i.e. presumably much harder. NBP was para-NP-hard for all three domains, thus remaining inseparable. We continue by considering combinations with several additional parameters and also the non-negative rationals (Q_0). Examples of new parameters are the plan length, l, and the largest denominator of the action costs, d. Our findings include: (1) COP remains W[2]-hard for all domains, even if combining all parameters; (2) COP for Z_0 is in W[2] for the combined parameter {k,l}; (3) COP for Q_+ is in W[2] for {k,d}; and (4) COP for Q_0 is in W[2] for {k,d,l}. For NBP we consider further additional parameters, where the most crucial one for reducing complexity is the sum of variable utilities. Our results help to understand the previous results, e.g. the separation between Z_+ and Q_+ for COP, and to refine the previous connections with empirical findings.
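
    A sketch of why combining parameters helps in case (3) above (the standard argument shape; the paper's proofs may differ in detail): with positive rational costs whose largest denominator is d, every action costs at least 1/d, so any plan of cost at most k satisfies

        \[
        l \le k \cdot d .
        \]

    Bounding k and d together therefore implicitly bounds the plan length l, the quantity a W[2]-style bounded search branches on; for Z_0, zero-cost actions break this bound, which is why l must be added explicitly, as in case (2).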

    Place, publisher, year, edition, pages
    AAAI Press, 2016
    Keywords
    cost-optimal planning, parameterised complexity, numeric domains
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-136278 (URN), 000492982200001 (ISI), 9781577357575 (ISBN)
    Conference
    Twenty-Sixth International Conference on Automated Planning and Scheduling (ICAPS-16), London, UK, June 12–17, 2016
    Available from: 2017-04-05 Created: 2017-04-05 Last updated: 2020-06-29. Bibliographically approved
    5. Plan Reordering and Parallel Execution -- A Parameterized Complexity View
    2017 (English). Conference paper, Published paper (Refereed)
    Abstract [en]

    Bäckström has previously studied a number of optimization problems for partial-order plans, like finding a minimum deordering (MCD) or reordering (MCR), and finding the minimum parallel execution length (PPL), which are all NP-complete. We revisit these problems, but apply parameterized complexity analysis rather than standard complexity analysis. We consider various parameters, including both the original and desired size of the plan order, as well as its width and height. Our findings include that MCD and MCR are W[2]-hard and in W[P] when parameterized with the desired order size, and MCD is fixed-parameter tractable (fpt) when parameterized with the original order size. Problem PPL is fpt if parameterized with the size of the non-concurrency relation, but para-NP-hard in most other cases. We also consider this problem when the number (k) of agents, or processors, is restricted, finding that this number is a crucial parameter; this problem is fixed-parameter tractable with the order size, the parallel execution length and k as parameters, but para-NP-hard without k as a parameter.
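
    As a concrete reading of the PPL notion in its easy direction (a minimal illustration, not the paper's algorithm): with unboundedly many agents and no non-concurrency relation, the minimum parallel execution length is simply the longest chain of the partial order, computable by level-wise topological sorting. The hard cases discussed above arise precisely when a non-concurrency relation or an agent bound k is imposed.

        from collections import defaultdict

        def parallel_execution_length(actions, order):
            # `order` holds pairs (a, b): action a must finish before b starts.
            succ = defaultdict(list)
            indeg = {a: 0 for a in actions}
            for a, b in order:
                succ[a].append(b)
                indeg[b] += 1
            # Kahn's algorithm, level by level; each level is one time step.
            level = [a for a in actions if indeg[a] == 0]
            steps = 0
            while level:
                steps += 1
                nxt = []
                for a in level:
                    for b in succ[a]:
                        indeg[b] -= 1
                        if indeg[b] == 0:
                            nxt.append(b)
                level = nxt
            return steps

        # A diamond-shaped plan over four actions needs three time steps.
        print(parallel_execution_length({"a", "b", "c", "d"},
                                        {("a", "b"), ("a", "c"),
                                         ("b", "d"), ("c", "d")}))  # 3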

    Place, publisher, year, edition, pages
    AAAI Press, 2017
    Keywords
    Partially ordered plan, Parameterized complexity, Complexity of planning, Plan reordering, Parallel plan execution
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-136279 (URN), 000485630703082 (ISI)
    Conference
    Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), San Francisco, CA, USA, February 4–9, 2017
    Available from: 2017-04-05 Created: 2017-04-05 Last updated: 2020-06-29. Bibliographically approved
  • 27.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    A Multi-parameter Complexity Analysis of Cost-optimal and Net-benefit Planning (2016). In: Twenty-Sixth International Conference on Automated Planning and Scheduling, King's College, London, June 12–17, 2016 / [ed] Amanda Coles, Andrew Coles, Stefan Edelkamp, Daniele Magazzeni, Scott Sanner, AAAI Press, 2016, p. 2-10. Conference paper (Refereed)
    Abstract [en]

    Aghighi and Bäckström have previously studied cost-optimal planning (COP) and net-benefit planning (NBP) for three action cost domains: the positive integers (Z_+), the non-negative integers (Z_0) and the positive rationals (Q_+). These were indistinguishable under standard complexity analysis for both problems, but separated for COP using parameterised complexity analysis. With the plan cost, k, as parameter, COP was W[2]-complete for Z_+, but para-NP-hard for both Z_0 and Q_+, i.e. presumably much harder. NBP was para-NP-hard for all three domains, thus remaining inseparable. We continue by considering combinations with several additional parameters and also the non-negative rationals (Q_0). Examples of new parameters are the plan length, l, and the largest denominator of the action costs, d. Our findings include: (1) COP remains W[2]-hard for all domains, even if combining all parameters; (2) COP for Z_0 is in W[2] for the combined parameter {k,l}; (3) COP for Q_+ is in W[2] for {k,d}; and (4) COP for Q_0 is in W[2] for {k,d,l}. For NBP we consider further additional parameters, where the most crucial one for reducing complexity is the sum of variable utilities. Our results help to understand the previous results, e.g. the separation between Z_+ and Q_+ for COP, and to refine the previous connections with empirical findings.

  • 28.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Cost-optimal and Net-benefit Planning: A Parameterised Complexity View (2015). In: 24th International Joint Conference on Artificial Intelligence (IJCAI-15), IJCAI, Freiburg, Germany, 2015, p. 1487-1493. Conference paper (Refereed)
    Abstract [en]

    Cost-optimal planning (COP) uses action costs and asks for a minimum-cost plan. It is sometimes assumed that there is no harm in using actions with zero cost or rational cost. Classical complexity analysis does not contradict this assumption; planning is PSPACE-complete regardless of whether action costs are positive or non-negative, integer or rational. We thus apply parameterised complexity analysis to shed more light on this issue. Our main results are the following. COP is W[2]-complete for positive integer costs, i.e. it is no harder than finding a minimum-length plan, but it is para-NP-hard if the costs are non-negative integers or positive rationals. This is a very strong indication that the latter cases are substantially harder. Net-benefit planning (NBP) additionally assigns goal utilities and asks for a plan with maximum difference between its utility and its cost. NBP is para-NP-hard even when action costs and utilities are positive integers, suggesting that it is harder than COP. In addition, we also analyse a large number of subclasses, using both the PUBS restrictions and restricting the number of preconditions and effects.

  • 29.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Plan Reordering and Parallel Execution -- A Parameterized Complexity View (2017). Conference paper (Refereed)
    Abstract [en]

    Bäckström has previously studied a number of optimization problems for partial-order plans, like finding a minimum deordering (MCD) or reordering (MCR), and finding the minimum parallel execution length (PPL), which are all NP-complete. We revisit these problems, but apply parameterized complexity analysis rather than standard complexity analysis. We consider various parameters, including both the original and desired size of the plan order, as well as its width and height. Our findings include that MCD and MCR are W[2]-hard and in W[P] when parameterized with the desired order size, and MCD is fixed-parameter tractable (fpt) when parameterized with the original order size. Problem PPL is fpt if parameterized with the size of the non-concurrency relation, but para-NP-hard in most other cases. We also consider this problem when the number (k) of agents, or processors, is restricted, finding that this number is a crucial parameter; this problem is fixed-parameter tractable with the order size, the parallel execution length and k as parameters, but para-NP-hard without k as a parameter.

  • 30.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Ståhlberg, Simon
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Analysing Approximability and Heuristics in Planning Using the Exponential-Time Hypothesis (2016). In: ECAI 2016: 22nd European Conference on Artificial Intelligence, IOS Press, 2016, Vol. 285, p. 184-192. Conference paper (Refereed)
    Abstract [en]

    Cost-optimal planning has become a very well-studied topic within planning. Needless to say, cost-optimal planning has proven to be computationally hard both theoretically and in practice. Since cost-optimal planning is an optimisation problem, it is natural to analyse it from an approximation point of view. Even though such studies may be valuable in themselves, additional motivation is provided by the fact that there is a very close link between approximability and the performance of heuristics used in heuristic search. The aim of this paper is to analyse approximability (and indirectly the performance of heuristics) with respect to lower time bounds. That is, we are not content with merely classifying problems into complexity classes; we also study their time complexity. This is achieved by replacing standard complexity-theoretic assumptions (such as P not equal NP) with the exponential time hypothesis (ETH). This enables us to analyse, for instance, the performance of the h+ heuristic and obtain general trade-off results that correlate approximability bounds with bounds on time complexity.
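
    The hypothesis underlying such lower bounds has a compact standard statement (Impagliazzo and Paturi):

        \[
        \textbf{ETH:}\quad \text{3-SAT with } n \text{ variables cannot be decided in time } 2^{o(n)}.
        \]

    Reductions that increase instance size only moderately then carry this exponential lower bound over to the target problem, which is what permits conclusions about running times (rather than mere class membership) for planning and for heuristics such as h+.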

  • 31.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Ståhlberg, Simon
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Refining complexity analyses in planning by exploiting the exponential time hypothesis (2016). In: Annals of Mathematics and Artificial Intelligence, ISSN 1012-2443, E-ISSN 1573-7470, Vol. 78, no 2, p. 157-175. Article in journal (Refereed)
    Abstract [en]

    The use of computational complexity in planning, and in AI in general, has always been a disputed topic. A major problem with ordinary worst-case analyses is that they do not provide any quantitative information: they do not tell us much about the running time of concrete algorithms, nor do they tell us much about the running time of optimal algorithms. We address problems like this by presenting results based on the exponential time hypothesis (ETH), which is a widely accepted hypothesis concerning the time complexity of 3-SAT. By using this approach, we provide, for instance, almost matching upper and lower bounds on the time complexity of propositional planning.

  • 32.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Oversubscription planning: Complexity and compilability (2014). In: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AI Access Foundation, 2014, Vol. 3, p. 2221-2227. Conference paper (Refereed)
    Abstract [en]

    Many real-world planning problems are oversubscription problems where all goals are not simultaneously achievable and the planner needs to find a feasible subset. We present complexity results for the so-called partial satisfaction and net benefit problems under various restrictions; this extends previous work by van den Briel et al. Our results reveal strong connections between these problems and classical planning. We also present a method for efficiently compiling oversubscription problems into the ordinary plan existence problem; this can be viewed as a continuation of earlier work by Keyder and Geffner.
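
    The flavour of such a compilation can be sketched in a few lines (a hypothetical reconstruction in the spirit of Keyder and Geffner's soft-goal compilation; the names and the exact construction are illustrative, not the paper's): each soft goal g with utility u(g) becomes a hard goal done(g), achievable either by collecting g or by explicitly forgoing it at cost u(g), so that minimising plan cost trades off exactly the forgone utility.

        def compile_soft_goals(actions, hard_goals, soft_goals):
            # actions: name -> (preconditions, add effects, cost);
            # soft_goals: fluent -> utility.
            new_actions = dict(actions)
            new_goals = set(hard_goals)
            for g, utility in soft_goals.items():
                done = f"done({g})"
                new_goals.add(done)
                new_actions[f"collect({g})"] = ({g}, {done}, 0)
                new_actions[f"forgo({g})"] = (set(), {done}, utility)
            return new_actions, new_goals

    A faithful compilation additionally confines the collect/forgo actions to an end phase of the plan so they cannot interleave with ordinary actions; that bookkeeping is omitted here.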

  • 33.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Ståhlberg, Simon
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Tractable Cost-Optimal Planning over Restricted Polytree Causal Graphs (2015). In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI Press, 2015. Conference paper (Refereed)
    Abstract [en]

    Causal graphs are widely used to analyze the complexity of planning problems. Many tractable classes have been identified with their aid and state-of-the-art heuristics have been derived by exploiting such classes. In particular, Katz and Keyder have studied causal graphs that are hourglasses (a generalization of forks and inverted forks) and shown that the corresponding cost-optimal planning problem is tractable under certain restrictions. We continue this work by studying polytrees (a generalization of hourglasses) under similar restrictions. We prove tractability of cost-optimal planning by providing an algorithm based on a novel notion of variable isomorphism. Our algorithm also sheds light on the k-consistency procedure for identifying unsolvable planning instances. We speculate that this may, at least partially, explain why merge-and-shrink heuristics have been successful for recognizing unsolvable instances.
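
    Structurally, a polytree is a directed graph whose underlying undirected graph is acyclic, so the precondition on the causal graph is cheap to test, e.g. with networkx (a generic structural check, not the paper's planning algorithm):

        import networkx as nx

        def is_polytree(causal_graph: nx.DiGraph) -> bool:
            # Polytree: an orientation of a tree/forest,
            # i.e. no cycles even when edge directions are ignored.
            return nx.is_forest(causal_graph.to_undirected())

        # An inverted fork feeding a chain: v1 -> v3 <- v2, v3 -> v4.
        g = nx.DiGraph([("v1", "v3"), ("v2", "v3"), ("v3", "v4")])
        print(is_polytree(g))  # True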

  • 34.
    Agyekum, Ephraim Bonah
    et al.
    Ural Fed Univ, Russia.
    Ampah, Jeffrey Dankwa
    Tianjin Univ, Peoples R China.
    Khan, Tahir
    Zhejiang Univ, Peoples R China.
    Giri, Nimay Chandra
    Centurion Univ Technol & Management, India.
    Hussien, Abdelazim
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Fayoum Univ, Egypt; Appl Sci Private Univ, Jordan; Middle East Univ, Jordan.
    Velkin, Vladimir Ivanovich
    Ural Fed Univ, Russia.
    Mehmood, Usman
    Bahcesehir Cyprus Univ, Turkiye; Univ Punjab, Pakistan.
    Kamel, Salah
    Aswan Univ, Egypt.
    Towards a reduction of emissions and cost-savings in homes: Techno-economic and environmental impact of two different solar water heaters (2024). In: Energy Reports, E-ISSN 2352-4847, Vol. 11, p. 963-981. Article in journal (Refereed)
    Abstract [en]

    South Africa currently has the highest carbon emission intensity per kilowatt of electricity generation globally, and its government intends to reduce it. Some of the measures taken by the government include a reduction of emissions in the building sector using solar water heating (SWH) systems. However, there is currently no study in the country that comprehensively assesses the technical, economic, and environmental impact of SWH systems across the country. This study therefore used the System Advisor Model (SAM) to model two different technologies of SWH systems (i.e., flat plate (FPC) and evacuated tube (EPC) SWH) at five different locations (i.e., Pretoria, Upington, Kimberley, Durban, and Cape Town) strategically selected across the country. According to the study, the optimum azimuth for both the evacuated tube and flat plate SWH system in South Africa is 0 degrees. Installing FPC and EPC at the different locations would yield payback periods of 3.2 to 4.4 years and 3.5 to 4.3 years, respectively. Comparably, the levelized cost of energy (LCOE) for the FPC and EPC will range from 7.47 to 9.62 cents/kWh and 7.66 to 9.24 cents/kWh, respectively, based on where the SWH system is located. Depending on where the facility is located, the annual cost savings for the FPC system would be between $486 and $625, while the EPC system would save between $529 and $638. Using SWHs can reduce CO2 emissions by 75-77% for the evacuated tube system and 69-76% for the flat plate system annually, depending on the location.
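
    For orientation, the two economic quantities quoted above reduce to short formulas; the sketch below uses invented inputs purely to illustrate the arithmetic (the study derives its figures from SAM simulations, not from this calculation):

        def simple_payback_years(install_cost, annual_savings):
            # First-order payback period, ignoring discounting.
            return install_cost / annual_savings

        def lcoe_cents_per_kwh(capex, annual_opex, annual_kwh, years, rate):
            # Levelised cost of energy: discounted lifetime cost per
            # discounted lifetime generation, expressed in cents/kWh.
            cost = capex + sum(annual_opex / (1 + rate) ** t
                               for t in range(1, years + 1))
            energy = sum(annual_kwh / (1 + rate) ** t
                         for t in range(1, years + 1))
            return 100.0 * cost / energy

        print(round(simple_payback_years(2000.0, 550.0), 1))            # ~3.6 years
        print(round(lcoe_cents_per_kwh(2000.0, 20.0, 3000.0, 20, 0.06), 2))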

  • 35.
    Ahlgren, Simon
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Aini, Daniel
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Conversion and Analysis of Telemetric Data from the CCSDS Standard (2017). Independent thesis, Basic level (university diploma), 10,5 credits / 16 HE credits. Student thesis
    Abstract [en]

    When communicating with spacecraft, the international standard is to use the protocols defined by CCSDS. In this study, the Space Packet Protocol from CCSDS is converted to the Digital Recording Standard used in aviation. The goal of the study is to find out in what way such a conversion can be made, as well as analyzing the efficiency of different packing methods for the Digital Recording Standard. An application is developed in order to perform the conversion, and the performance of said application is profiled using different packet sizes. Finally, the results are evaluated and an optimal packet size is found in terms of runtime and memory usage. We conclude that a packet size of 2^16 bytes is best when prioritizing speed, and a packet size of 2^19 bytes is best when prioritizing memory.
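
    For context, the Space Packet Protocol referred to above prefixes every packet with a fixed 6-octet primary header (CCSDS 133.0-B). A minimal decoder sketch, independent of the thesis' actual application code:

        import struct

        def parse_space_packet_header(raw: bytes) -> dict:
            # Three big-endian 16-bit words: version (3 bits), type (1),
            # secondary-header flag (1), APID (11); sequence flags (2),
            # sequence count (14); data length (16, stored as length - 1).
            first, second, length = struct.unpack(">HHH", raw[:6])
            return {
                "version": first >> 13,
                "type": (first >> 12) & 0x1,
                "sec_hdr_flag": (first >> 11) & 0x1,
                "apid": first & 0x7FF,
                "seq_flags": second >> 14,
                "seq_count": second & 0x3FFF,
                "data_length": length + 1,  # octets in the data field
            }

        hdr = parse_space_packet_header(bytes.fromhex("0801C0000005"))
        print(hdr["apid"], hdr["seq_flags"], hdr["data_length"])  # 1 3 6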

  • 36.
    Ahmad, Azeem
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    An Evaluation of Machine Learning Methods for Predicting Flaky Tests (2020). In: Proceedings of the 8th International Workshop on Quantitative Approaches to Software Quality (QuASoQ 2020) / [ed] Horst Lichter, Selin Aydin, Thanwadee Sunetnanta, Toni Anwar, CEUR-WS, 2020, Vol. 2767, p. 37-46. Conference paper (Other academic)
  • 37.
    Ahmad, Azeem
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Contributions to Improving Feedback and Trust in Automated Testing and Continuous Integration and Delivery (2022). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    An integrated release version (also known as a release candidate in software engineering) is produced by merging, building, and testing code on a regular basis as part of the Continuous Integration and Continuous Delivery (CI/CD) practices. Several benefits, including improved software quality and shorter release cycles, have been claimed for CI/CD. On the other hand, recent research has uncovered a plethora of problems and bad practices related to CI/CD adoption, necessitating some optimization. Some of the problems addressed in this work include the ability to respond to practitioners’ questions and obtain quick and trustworthy feedback in CI/CD. To be more specific, our effort concentrated on: 1) identifying the information needs of software practitioners engaged in CI/CD; 2) adopting test optimization approaches to obtain faster feedback that are realistic for use in CI/CD environments without introducing excessive technical requirements; 3) identifying perceived causes and automated root cause analysis of test flakiness, thereby providing developers with guidance on how to resolve test flakiness; and 4) identifying challenges in addressing information needs, providing faster and more trustworthy feedback. 

    The findings of the research reported in this thesis are based on data from three single-case studies and three multiple-case studies. The research uses quantitative and qualitative data collected via interviews, site visits, and workshops. To perform our analyses, we used data from firms producing embedded software as well as open-source repositories. The following are major research and practical contributions. 

    • Information Needs: The initial contribution to research is a list of information needs in CI/CD. This list contains 27 frequently asked questions on continuous integration and continuous delivery by software practitioners. The identified information needs have been classified as related to testing, code & commit, confidence, bug, and artifacts. We investigated how companies deal with information needs, what tools they use to deal with them, and who is interested in them. We concluded that there is a discrepancy between the identified needs and the techniques employed to meet them. Since some information needs cannot be met by current tools, manual inspections are required, which adds time to the process. Information about code & commit, confidence level, and testing is the most frequently sought-after and most important information.
    • Evaluation of Diversity Based Techniques/Tool: The contribution is a detailed examination of diversity-based techniques using industry test cases, to determine whether there is a difference between diversity functions in selecting integration-level automated tests, and how diversity-based testing compares to other optimization techniques used in industry in terms of fault detection rates, feature coverage, and execution time. This enables us to observe how coverage changes when we run fewer test cases. We concluded that some of the techniques can eliminate up to 85% of test cases (provided by the case company) while still covering all distinct features/requirements. The techniques are developed and made available as an open-source tool for further research and application.
    • Test Flakiness Detection, Prediction & Automated Root Cause Analysis: We identified 19 factors that professionals perceive affect test flakiness. These perceived factors are divided into four categories: test code, system under test, CI/test infrastructure, and organizational. We concluded that some of the perceived factors of test flakiness in closed-source development are directly related to non-determinism, whereas other perceived factors concern different aspects, e.g., lack of good properties of a test case (i.e., small, simple and robust), deviations from the established processes, etc. To see if the developers' perceptions were in line with what they had labelled as flaky or not, we examined the test artifacts that were readily available. We verified that two of the identified perceived factors (i.e., test case size and simplicity) are indeed indicative of test flakiness. Furthermore, we proposed a lightweight technique named trace-back coverage to detect flaky tests. Trace-back coverage was combined with other factors such as test smells indicating test flakiness, flakiness frequency and test case size to investigate the effect on revealing test flakiness. When all factors are taken into consideration, the precision of flaky test detection is increased from 57% (using a single factor) to 86% (a combination of different factors).
    List of papers
    1. Data visualisation in continuous integration and delivery: Information needs, challenges, and recommendations
    2022 (English). In: IET Software, ISSN 1751-8806, E-ISSN 1751-8814, Vol. 16, no 3, p. 331-349. Article in journal (Refereed). Published
    Abstract [en]

    Several operations, ranging from regular code updates to compiling, building, testing, and distribution to customers, are consolidated in continuous integration and delivery. Professionals seek additional information to complete the mission at hand during these tasks. Developers who devote a large amount of time and effort to finding such information may become distracted from their work. We will better understand the processes, procedures, and resources used to deliver a quality product on time by defining the types of information that software professionals seek. A deeper understanding of software practitioners' information needs has many advantages, including remaining competitive, growing knowledge of issues that can stymie a timely update, and creating a visualisation tool to assist practitioners in addressing their information needs. This is an extension of previous work by the authors. The authors conducted a multiple-case holistic study with six different companies (38 unique participants) to identify information needs in continuous integration and delivery. This study attempts to capture the importance, frequency, required effort (e.g. sequence of actions required to collect information), current approach to handling, and associated stakeholders with respect to identified needs. 27 information needs associated with different stakeholders (i.e. developers, testers, project managers, release team, and compliance authority) were identified. The identified needs were categorised as testing, code & commit, confidence, bug, and artefacts. Apart from identifying information needs, practitioners face several challenges in developing visualisation tools. Thus, 8 challenges faced by practitioners in developing/maintaining visualisation tools for the software team were identified. The recommendations from practitioners who are experts in developing, maintaining, and providing visualisation services to the software team were listed.

    Place, publisher, year, edition, pages
    Wiley, 2022
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-176847 (URN), 10.1049/sfw2.12030 (DOI), 000660517400001 (ISI)
    Note

    Funding agencies: Linköping University

    Available from: 2021-06-22 Created: 2021-06-22 Last updated: 2022-10-20
    2. Improving continuous integration with similarity-based test case selection
    2018 (English). In: Proceedings of the 13th International Workshop on Automation of Software Test, New York: ACM Digital Library, 2018, p. 39-45. Conference paper, Published paper (Refereed)
    Abstract [en]

    Automated testing is an essential component of Continuous Integration (CI) and Delivery (CD), such as scheduling automated test sessions on overnight builds. That allows stakeholders to execute entire test suites and achieve exhaustive test coverage, since running all tests is often infeasible during work hours, i.e., in parallel to development activities. On the other hand, developers also need test feedback from CI servers when pushing changes, even if not all test cases are executed. In this paper we evaluate similarity-based test case selection (SBTCS) on integration-level tests executed on continuous integration pipelines of two companies. We select test cases that maximise diversity of test coverage and reduce feedback time to developers. Our results confirm existing evidence that SBTCS is a strong candidate for test optimisation, by reducing feedback time (up to 92% faster in our case studies) while achieving full test coverage using only information from test artefacts themselves.
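
    The core of similarity-based selection is compact enough to sketch (a generic SBTCS rendering over coverage sets with Jaccard distance; the paper's exact similarity function and test artefacts may differ): greedily keep the test that is, on average, farthest from everything already selected.

        def jaccard_distance(a: set, b: set) -> float:
            union = a | b
            if not union:
                return 0.0
            return 1.0 - len(a & b) / len(union)

        def select_diverse(tests: dict, budget: int) -> list:
            # tests: name -> set of covered items (features, lines, ...).
            names = list(tests)
            selected = [max(names, key=lambda t: len(tests[t]))]
            while len(selected) < min(budget, len(names)):
                remaining = [t for t in names if t not in selected]
                selected.append(max(
                    remaining,
                    key=lambda t: sum(jaccard_distance(tests[t], tests[s])
                                      for s in selected) / len(selected)))
            return selected

        suite = {"t1": {"f1", "f2"}, "t2": {"f1"}, "t3": {"f3"}}
        print(select_diverse(suite, 2))  # ['t1', 't3']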

    Place, publisher, year, edition, pages
    New York: ACM Digital Library, 2018
    Series
    International Workshop on Automation of Software Test, ISSN 2377-8628
    Keywords
    Similarity based test case selection, Continuous integration, Automated testing
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-152002 (URN), 10.1145/3194733.3194744 (DOI), 000458922700009 (ISI), 978-1-4503-5743-2 (ISBN)
    Conference
    AST'18: 2018 ACM/IEEE 13th International Workshop on Automation of Software Test
    Note

    Funding agencies: Chalmers Software Center

    Available from: 2018-10-14 Created: 2018-10-14 Last updated: 2022-08-23
    3. Empirical analysis of practitioners' perceptions of test flakiness factors
    2021 (English). In: Software Testing, Verification & Reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 31, no 8, article id e1791. Article in journal (Refereed). Published
    Abstract [en]

    Identifying the root causes of test flakiness is one of the challenges faced by practitioners during software testing. In other words, the testing of the software is hampered by test flakiness. Since research about test flakiness in large-scale software engineering is scarce, an empirical case study is needed to build a common and grounded understanding of the problem, as well as of relevant remedies that can later be evaluated in a large-scale context. This study reports the findings from a multiple-case study. The authors conducted an online survey to investigate and catalogue the root causes of test flakiness and mitigation strategies. We attempted to understand how practitioners perceive test flakiness in closed-source development, such as how they define test flakiness and what practitioners perceive can affect test flakiness. The perceptions of practitioners were compared with the available literature. We investigated whether practitioners' perceptions are reflected in the test artefacts, e.g., what is the relationship between the perceived factors and properties of test artefacts. This study reported 19 factors that are perceived by professionals to affect test flakiness. These perceived factors are categorized as test code, system under test, CI/test infrastructure, and organization-related. The authors concluded that some of the perceived factors in test flakiness in closed-source development are directly related to non-determinism, whereas other perceived factors concern different aspects, for example, lack of good properties of a test case, deviations from the established processes, and ad hoc decisions. Given a data set from investigated cases, the authors concluded that two of the perceived factors (i.e., test case size and test case simplicity) have a strong effect on test flakiness.

    Place, publisher, year, edition, pages
    Wiley-Blackwell, 2021
    Keywords
    flaky tests; non-deterministic tests; practitioners perceptions; software testing; test smells
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-178938 (URN), 10.1002/stvr.1791 (DOI), 000687875100001 (ISI)
    Note

    Funding agencies: Chalmers Tekniska Högskola; Linköpings Universitet

    Available from: 2021-09-06 Created: 2021-09-06 Last updated: 2022-08-23
    4. An Evaluation of Machine Learning Methods for Predicting Flaky Tests
    2020 (English). In: Proceedings of the 8th International Workshop on Quantitative Approaches to Software Quality (QuASoQ 2020) / [ed] Horst Lichter, Selin Aydin, Thanwadee Sunetnanta, Toni Anwar, CEUR-WS, 2020, Vol. 2767, p. 37-46. Conference paper, Published paper (Other academic)
    Place, publisher, year, edition, pages
    CEUR-WS, 2020
    Series
    CEUR Workshop Proceedings, ISSN 1613-0073
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-174179 (URN), 2-s2.0-85097906339 (Scopus ID)
    Conference
    27th Asia-Pacific Software Engineering Conference (APSEC 2020), Singapore (virtual), December 1, 2020.
    Available from: 2021-03-15 Created: 2021-03-15 Last updated: 2022-10-14. Bibliographically approved
    5. A Multi-factor Approach for Flaky Test Detection and Automated Root Cause Analysis
    2021 (English). In: 2021 28th Asia-Pacific Software Engineering Conference (APSEC), IEEE, 2021, p. 338-348. Conference paper, Published paper (Refereed)
    Abstract [en]

    Developers often spend time determining whether test case failures are real failures or flaky ones. Flaky tests, also known as non-deterministic tests, switch their outcomes without any modification of the codebase, hence reducing the confidence of developers during maintenance as well as in the quality of a product. Re-running test cases to reveal flakiness is resource-consuming, unreliable and does not reveal the root causes of test flakiness. Our paper evaluates a multi-factor approach to identify flaky test executions implemented in a tool named MDFlaker. The four factors are: trace-back coverage, flaky frequency, number of test smells, and test size. Based on the extracted factors, MDFlaker uses k-Nearest Neighbor (KNN) to determine whether failed test executions are flaky. We investigate MDFlaker in a case study with 2166 test executions from different open-source repositories. We evaluate the effectiveness of our flaky detection tool. We illustrate how the multi-factor approach can be used to reveal root causes for flakiness, and we conduct a qualitative comparison between MDFlaker and other tools proposed in literature. Our results show that the combination of different factors can be used to identify flaky tests. Each factor has its own trade-off, e.g., trace-back leads to many true positives, while flaky frequency yields more true negatives. Therefore, specific combinations of factors enable classification for testers with limited information (e.g., not enough test history information).
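
    A minimal sketch of the classification step, assuming the four factors have already been extracted per failed execution (the feature values and training labels below are invented; MDFlaker's real feature extraction and data are described in the paper):

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        # Feature order: trace-back coverage, flaky frequency,
        # number of test smells, test size (all values invented).
        X_train = np.array([[0.9, 0.30, 4, 120],
                            [0.1, 0.00, 0, 15],
                            [0.8, 0.25, 3, 200],
                            [0.2, 0.05, 1, 30]], dtype=float)
        y_train = np.array([1, 0, 1, 0])  # 1 = flaky, 0 = real failure

        clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
        print(clf.predict([[0.7, 0.20, 2, 150]]))  # [1] -> likely flaky

    In practice the features would be scaled before distance-based classification; the point here is only the shape of the multi-factor input.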

    Place, publisher, year, edition, pages
    IEEE, 2021
    Series
    Asia-Pacific Software Engineering Conference, ISSN 1530-1362, E-ISSN 2640-0715
    Keywords
    flaky tests; non-deterministic tests; flaky test detection; automated root-cause analysis; trace-back
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-186181 (URN), 10.1109/APSEC53868.2021.00041 (DOI), 000802192700034 (ISI), 2-s2.0-85126250720 (Scopus ID), 9781665437844 (ISBN), 9781665437851 (ISBN)
    Conference
    28th Asia-Pacific Software Engineering Conference (APSEC), Virtual event, December 06-09, 2021
    Available from: 2022-06-23 Created: 2022-06-23 Last updated: 2024-09-10
  • 38.
    Ahmad, Azeem
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    de Oliveira Neto, Francisco Gomes
    Chalmers & Univ Gothenburg, Sweden.
    Shi, Zhixiang
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    A Multi-factor Approach for Flaky Test Detection and Automated Root Cause Analysis (2021). In: 2021 28th Asia-Pacific Software Engineering Conference (APSEC), IEEE, 2021, p. 338-348. Conference paper (Refereed)
    Abstract [en]

    Developers often spend time determining whether test case failures are real failures or flaky ones. Flaky tests, also known as non-deterministic tests, switch their outcomes without any modification of the codebase, hence reducing the confidence of developers during maintenance as well as in the quality of a product. Re-running test cases to reveal flakiness is resource-consuming, unreliable and does not reveal the root causes of test flakiness. Our paper evaluates a multi-factor approach to identify flaky test executions implemented in a tool named MDFlaker. The four factors are: trace-back coverage, flaky frequency, number of test smells, and test size. Based on the extracted factors, MDFlaker uses k-Nearest Neighbor (KNN) to determine whether failed test executions are flaky. We investigate MDFlaker in a case study with 2166 test executions from different open-source repositories. We evaluate the effectiveness of our flaky detection tool. We illustrate how the multi-factor approach can be used to reveal root causes for flakiness, and we conduct a qualitative comparison between MDFlaker and other tools proposed in literature. Our results show that the combination of different factors can be used to identify flaky tests. Each factor has its own trade-off, e.g., trace-back leads to many true positives, while flaky frequency yields more true negatives. Therefore, specific combinations of factors enable classification for testers with limited information (e.g., not enough test history information).

  • 39.
    Ahmad, Azeem
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Ericsson AB, Linköping, Sweden.
    Gomes de Oliveira Neto, Francisco
    Gothenburg University, Gothenburg, Sweden.
    Enoiu, Eduard Paul
    Mälardalens University, Mälardalens, Sweden.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    An Industrial Study on the Challenges and Effects of Diversity-Based Testing in Continuous Integration (2023). In: 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security (QRS), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 337-347. Conference paper (Refereed)
    Abstract [en]

    Many test prioritisation techniques have been proposed in order to improve test effectiveness of Continuous Integration (CI) pipelines. Particularly, diversity-based testing (DBT) has shown promising and competitive results to improve test effectiveness. However, the technical and practical challenges of introducing test prioritisation in CI pipelines are rarely discussed, thus hindering the applicability and adoption of those proposed techniques. This research builds on our prior work in which we evaluated diversity-based techniques in an industrial setting. This work investigates the factors that influence the adoption of DBT both in connection to improvements in test cost-effectiveness, as well as the process and human related challenges to transfer and use DBT prioritisation in CI pipelines. We report on a case study considering the CI pipeline of Axis Communications in Sweden. We performed a thematic analysis of a focus group interview with senior practitioners at the company to identify the challenges and perceived benefits of using test prioritisation in their test process. Our thematic analysis reveals a list of ten challenges and seven perceived effects of introducing test prioritisation in CI cycles. For instance, our participants emphasized the importance of introducing comprehensible and transparent techniques that instill trust in their users. Moreover, practitioners prefer techniques compatible with their current test infrastructure (e.g., test framework and environments) in order to reduce instrumentation efforts and avoid disrupting their current setup. In conclusion, we have identified trade-offs between different test prioritisation techniques pertaining to the technical, process and human aspects of regression testing in CI. We summarize those findings in a list of seven advantages that refer to specific stakeholder interests and describe the effects of adopting DBT in CI pipelines.

  • 40.
    Ahmad, Azeem
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Ericsson AB.
    Gomes de Oliveira Neto, Francisco
    Gothenburg University, Gothenburg, Sweden.
    Enoiu, Eduard Paul
    Mälardalens University, Mälardalens, Sweden.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    The Comparative Evaluation of Test Prioritization Approaches in an Industrial Study (2023). In: 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security Companion (QRS-C), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 35-44. Conference paper (Refereed)
  • 41.
    Ahmad, Azeem
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Held, Erik Norrestam
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Identifying Randomness related Flaky Tests through Divergence and Execution Tracing (2022). In: 2022 IEEE 15th International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2022), IEEE Computer Society, 2022, p. 293-300. Conference paper (Refereed)
    Abstract [en]

    Developers often spend time determining whether test case failures are real failures or flaky. Flaky tests, known as non-deterministic tests, change their outcomes without any changes in the codebase, thus reducing the trust of developers during a software release as well as in the quality of a product. While rerunning test cases is a common approach, it is resource intensive, unreliable, and does not uncover the actual cause of test flakiness. Our paper evaluates an approach to identify randomness-related flaky tests. This paper used a divergence algorithm and execution tracing techniques to identify flaky tests, which resulted in the FLAKYPY prototype. In addition, this paper discusses the cases where FLAKYPY successfully identified the flaky test as well as those cases where FLAKYPY failed. The paper discusses how the reporting mechanism of FLAKYPY can help developers in identifying the root cause of randomness-related test flakiness. Thirty-two open-source projects were used in this study. We concluded that FLAKYPY can detect most of the randomness-related test flakiness. In addition, the reporting mechanism of FLAKYPY reveals sufficient information about possible root causes of test flakiness.
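
    The idea of divergence detection via execution tracing can be reconstructed in miniature (a hypothetical sketch, not FLAKYPY itself): record the sequence of executed lines for two runs of the same test and report the first point at which the traces differ.

        import random
        import sys

        def record_trace(fn):
            # Record (filename, line) for every line executed inside fn().
            trace = []
            def tracer(frame, event, arg):
                if event == "line":
                    trace.append((frame.f_code.co_filename, frame.f_lineno))
                return tracer
            sys.settrace(tracer)
            try:
                fn()
            finally:
                sys.settrace(None)
            return trace

        def first_divergence(t1, t2):
            # Index of the first differing step; None if the traces agree.
            for i, (a, b) in enumerate(zip(t1, t2)):
                if a != b:
                    return i
            return None if len(t1) == len(t2) else min(len(t1), len(t2))

        def test_candidate():
            # Nondeterministic branch: a classic source of flakiness.
            if random.random() < 0.5:
                x = 1
            else:
                x = 2

        random.seed(0)
        t1 = record_trace(test_candidate)
        random.seed(1)
        t2 = record_trace(test_candidate)
        print("traces diverge at step", first_divergence(t1, t2))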

  • 42.
    Ahmad, Azeem
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Data visualisation in continuous integration and delivery: Information needs, challenges, and recommendations (2022). In: IET Software, ISSN 1751-8806, E-ISSN 1751-8814, Vol. 16, no 3, p. 331-349. Article in journal (Refereed)
    Abstract [en]

    Several operations, ranging from regular code updates to compiling, building, testing, and distribution to customers, are consolidated in continuous integration and delivery. Professionals seek additional information to complete the mission at hand during these tasks. Developers who devote a large amount of time and effort to finding such information may become distracted from their work. We will better understand the processes, procedures, and resources used to deliver a quality product on time by defining the types of information that software professionals seek. A deeper understanding of software practitioners' information needs has many advantages, including remaining competitive, growing knowledge of issues that can stymie a timely update, and creating a visualisation tool to assist practitioners in addressing their information needs. This is an extension of previous work by the authors. The authors conducted a multiple-case holistic study with six different companies (38 unique participants) to identify information needs in continuous integration and delivery. This study attempts to capture the importance, frequency, required effort (e.g. sequence of actions required to collect information), current approach to handling, and associated stakeholders with respect to identified needs. 27 information needs associated with different stakeholders (i.e. developers, testers, project managers, release team, and compliance authority) were identified. The identified needs were categorised as testing, code & commit, confidence, bug, and artefacts. Apart from identifying information needs, practitioners face several challenges in developing visualisation tools. Thus, 8 challenges faced by practitioners in developing/maintaining visualisation tools for the software team were identified. The recommendations from practitioners who are experts in developing, maintaining, and providing visualisation services to the software team were listed.

  • 43.
    Ahmad, Azeem
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Empirical analysis of practitioners' perceptions of test flakiness factors (2021). In: Software Testing, Verification & Reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 31, no 8, article id e1791. Article in journal (Refereed)
    Abstract [en]

    Identifying the root causes of test flakiness is one of the challenges faced by practitioners during software testing. In other words, the testing of the software is hampered by test flakiness. Since research about test flakiness in large-scale software engineering is scarce, an empirical case study is needed to build a common and grounded understanding of the problem, as well as of relevant remedies that can later be evaluated in a large-scale context. This study reports the findings from a multiple-case study. The authors conducted an online survey to investigate and catalogue the root causes of test flakiness and mitigation strategies. We attempted to understand how practitioners perceive test flakiness in closed-source development, such as how they define test flakiness and what practitioners perceive can affect test flakiness. The perceptions of practitioners were compared with the available literature. We investigated whether practitioners' perceptions are reflected in the test artefacts, e.g., what is the relationship between the perceived factors and properties of test artefacts. This study reported 19 factors that are perceived by professionals to affect test flakiness. These perceived factors are categorized as test code, system under test, CI/test infrastructure, and organization-related. The authors concluded that some of the perceived factors in test flakiness in closed-source development are directly related to non-determinism, whereas other perceived factors concern different aspects, for example, lack of good properties of a test case, deviations from the established processes, and ad hoc decisions. Given a data set from investigated cases, the authors concluded that two of the perceived factors (i.e., test case size and test case simplicity) have a strong effect on test flakiness.

  • 44.
    Ahmad, Azeem
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Software professionals' information needs in continuous integration and delivery (2021). In: SAC '21: Proceedings of the 36th Annual ACM Symposium on Applied Computing, March 2021, New York, NY, USA: ACM Digital Library, 2021, p. 1513-1520. Conference paper (Other academic)
    Abstract [en]

    Continuous integration and delivery consolidate several activities, ranging from frequent code changes to compiling, building, testing, and deployment to customers. During these activities, software professionals seek additional information to perform the task at hand. Developers that spend a considerable amount of time and effort to identify such information can be distracted from doing productive work. By identifying the types of information that software professionals seek, we can better understand the processes, practices, and tools that are required to develop a quality product on time. A better understanding of the information needs of software practitioners has several benefits, such as staying competitive, increasing awareness of the issues that can hinder a timely release, and building a visualization tool that can help practitioners to address their information needs. We conducted a multiple-case holistic study with 5 different companies (34 unique participants) to identify information needs in continuous integration and delivery. This study attempts to capture the importance, frequency, required effort (e.g., sequence of actions required to collect information), current approach to handling, and associated stakeholders with respect to identified needs. We identified 27 information needs associated with different stakeholders (i.e., developers, testers, project managers, release team, and compliance authority). The identified needs were categorized as testing, code & commit, confidence, bug, and artifacts. We discussed whether the information needs were aligned with the tools used to address them.

  • 45.
    Ahmad, Azeem
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Berglund, Aseel
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    The Perceived Effects of Introducing Coaching on the Development of Student's Soft Skills Managing Software Quality (2021). In: Proceedings of the 4th Software Engineering Education Workshop (SEED 2021) co-located with APSEC 2021, December 6, 2021, Taipei, Taiwan, CEUR-WS, 2021, Vol. 3062, p. 22-29. Conference paper (Refereed)
    Abstract [en]

    Technical abilities (also known as hard skills) are just as crucial as soft skills (such as communication, cooperation, teamwork, etc.) in attaining professional success. Therefore it is important to pay much attention to soft skills when developing the curricula of engineering educations. Many elements can have a direct or indirect impact on students' soft skills, including course topic, course module (i.e., laboratories, seminars, etc.), the medium of instruction, and learning activities. Many academics have investigated the development of soft skills in a variety of disciplines, including engineering, science, and business. The purpose of this study is to assess the perceived impact of coaching on the development of soft skills in MS and BS engineering students. During four planned sessions over a six-month period, MS students acted as coaches, while BS students received coaching from MS students. After each coaching session, all students were asked to complete a survey to evaluate their perception of how their soft skills had developed. The results of the perceived effects of introducing coaching activities are presented in this article. This article is a first step, in the series of our investigation, in identifying the students' perceptions about the development of soft skills. According to the survey, the MS engineering students who acted as coaches perceived an improvement in most of their soft skills. However, in their own perception, the BS students' soft skills did not improve as much as the MS students', prompting us to conduct additional research in the future to discover what hampered the growth of BS students' soft skills as well as how MS students' soft skills were enhanced.

  • 46.
    Ahmad, Azeem
    et al.
    Ericsson AB, Linkoping, Sweden.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Hasselqvist, Daniel
    Ericsson AB, Linkoping, Sweden.
    Sandberg, Pontus
    Ericsson AB, Linkoping, Sweden.
    Information Needs in Continuous Integration and Delivery in Large Scale Organizations: An Observational Study (2024). In: Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing (SAC '24), ACM Digital Library, 2024, p. 1262-1271. Conference paper (Refereed)
    Abstract [en]

    Continuous integration and delivery encompass a variety of activities, including regular code changes, compilation, building, testing, and distribution to clients. In order to accomplish the assigned tasks, practitioners tend to pursue additional information. Software practitioners who allocate a significant portion of their time and energy towards seeking out required information may experience a diversion from their primary responsibilities. Identifying the specific types of information sought by software practitioners can enhance our comprehension of the processes, protocols, and resources utilized to ensure timely delivery of a high-quality product. Gaining a comprehensive understanding of the information needs of software practitioners can yield numerous benefits such as maintaining competitiveness, enhancing awareness of challenges that may impede a faster delivery, and developing a visual tool to facilitate practitioners in fulfilling their information needs. This study extends prior research by Ahmad et al. [1, 2] by using the observation and think-aloud technique to broaden our understanding of information needs. A multiple-case holistic study was carried out to identify information needs in continuous integration and delivery. The study involved four companies and a total of 34 unique participants. The present investigation attempts to capture the importance, frequency, and necessary effort (such as the series of steps needed to gather information) pertaining to identified needs. A total of 39 different information needs were identified across various stakeholders, including release managers, testers, project owners, release team members, and compliance authorities. The needs that were identified have been classified into five distinct categories, namely code and commit, test suite and test case, release and deployment, trends and statistics, and testing infrastructure. The present study has revealed an additional 31% of information needs compared to previous work, with 77% of these needs being unique. The overlap between the current research, the prior work of Ahmad et al., and work from other researchers is 23%. The impact of utilizing diverse methodologies on the identification of information needs was also a topic of discussion.

  • 47.
    Ahmad, Fozail
    et al.
    McGill Univ, Canada.
    Rangappa, Maruthi
    McGill Univ, Canada.
    Katiyar, Neeraj
    McGill Univ, Canada.
    Staniszewski, Martin
    Siemens Energy, Canada.
    Varró, Dániel
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. McGill Univ, Canada.
    Hybrid Cloudification of Legacy Software for Efficient Simulation of Gas Turbine Designs2023In: 2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: SOFTWARE ENGINEERING IN PRACTICE, ICSE-SEIP, IEEE COMPUTER SOC , 2023, p. 384-395Conference paper (Refereed)
    Abstract [en]

    When developing aeroderivative gas turbines at Siemens Energy, engine models are subject to complex simulation campaigns for finite element analysis carried out by a legacy simulation tool. This paper presents the results of a multi-year software modernization project that provides a software-as-a-service (SaaS) framework enabling the distributed and automated execution of simulation jobs over a hybrid cloud platform containing both private and public cloud nodes. Our framework significantly reduces the net time required to complete complex simulation campaigns, thus increasing the effectiveness of engineers. The performance of the framework is evaluated in various cloud configurations with complex simulation campaigns performed in the context of a real simulation task.
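
    The hybrid dispatching the abstract describes can be illustrated with a short sketch. The Python below routes simulation jobs to a private pool first and bursts unrestricted jobs to a public pool; all names (NodePool, SimulationJob, dispatch) and the capacity-and-restriction policy are hypothetical assumptions for illustration, not the paper's actual framework.

        # Hypothetical sketch of hybrid-cloud job routing; the policy and all
        # names are illustrative assumptions, not Siemens Energy's framework.
        from dataclasses import dataclass
        from queue import Queue


        @dataclass
        class NodePool:
            name: str
            capacity: int      # max concurrent simulation jobs
            running: int = 0

            def has_slot(self) -> bool:
                return self.running < self.capacity


        @dataclass
        class SimulationJob:
            job_id: str
            export_restricted: bool = False  # assumed: some data must stay on-premises


        def dispatch(job: SimulationJob, private: NodePool, public: NodePool,
                     backlog: Queue) -> str:
            """Prefer the private pool; burst unrestricted jobs to the public
            pool; queue everything else until a slot frees up."""
            if private.has_slot():
                private.running += 1
                return private.name
            if not job.export_restricted and public.has_slot():
                public.running += 1
                return public.name
            backlog.put(job)
            return "queued"


        if __name__ == "__main__":
            private, public = NodePool("private", 2), NodePool("public", 4)
            backlog: Queue = Queue()
            for i in range(5):
                job = SimulationJob(f"fe-case-{i}", export_restricted=(i % 2 == 0))
                print(job.job_id, "->", dispatch(job, private, public, backlog))

    One plausible reading of "hybrid" is exactly this split: restricted jobs never leave the private nodes, while the public nodes absorb overflow, which is where the reduction in net campaign time would come from.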

  • 48.
    Ahmed, Deyaa
    et al.
    Holding Co Water & Wastewater HCWW, Egypt.
    Ebeed, Mohamed
    Sohag Univ, Egypt; Univ Jaen, Spain.
    Kamel, Salah
    Aswan Univ, Egypt.
    Nasrat, Loai
    Aswan Univ, Egypt.
    Ali, Abdelfatah
    Amer Univ Sharjah, U Arab Emirates; South Valley Univ, Egypt.
    Shaaban, Mostafa F.
    Amer Univ Sharjah, U Arab Emirates.
    Hussien, Abdelazim
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Fayoum Univ, Egypt; Middle East Univ, Jordan.
    An enhanced jellyfish search optimizer for stochastic energy management of multi-microgrids with wind turbines, biomass and PV generation systems considering uncertainty2024In: Scientific Reports, E-ISSN 2045-2322, Vol. 14, no 1, article id 15558Article in journal (Refereed)
    Abstract [en]

    The energy management (EM) of multi-microgrids (MMGs) is a crucial task that provides flexibility, reliability, and economic benefits. However, EM of MMGs becomes complex and strenuous under high penetration of renewable energy resources, owing to the stochastic nature of these resources and to load fluctuations. This paper therefore addresses the EM problem of MMGs with the optimal inclusion of photovoltaic (PV) systems, wind turbines (WTs), and biomass systems. It proposes an enhanced Jellyfish Search Optimizer (EJSO), applied to an 85-bus MMG system, to minimize the total cost while simultaneously improving system performance. The proposed algorithm combines the Weibull Flight Motion (WFM) and Fitness Distance Balance (FDB) mechanisms to tackle the stagnation problem of the conventional JSO. The performance of EJSO is tested on standard and CEC 2019 benchmark functions, and the results are compared with those of other optimization techniques. The results show that EJSO is a powerful method for solving the EM problem compared with methods such as Sand Cat Swarm Optimization (SCSO), the Dandelion Optimizer (DO), the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the standard Jellyfish Search Optimizer (JSO). The EM solution obtained by the suggested EJSO reduces the cost by 44.75%, while the system voltage profile and stability are enhanced by 40.8% and 10.56%, respectively.
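
    The two enhancements the abstract names can be made concrete. The Python sketch below follows the standard jellyfish-search structure (a time-control function switching between ocean-current and swarm motion) and adds a Weibull-distributed flight step plus fitness-distance-balance (FDB) guide selection; the exact EJSO update rules, the Weibull shape parameter, and all function names are assumptions rather than the paper's formulation.

        # Minimal jellyfish-search-style optimizer with two assumed enhancements:
        # Weibull flight steps and FDB-based guide selection (minimization).
        import numpy as np


        def fdb_index(pop, fit):
            """Fitness-distance balance: favor individuals that combine good
            fitness with a large distance from the current best solution."""
            best = pop[np.argmin(fit)]
            dist = np.linalg.norm(pop - best, axis=1)
            norm_fit = (fit.max() - fit) / (np.ptp(fit) + 1e-12)   # higher = fitter
            norm_dist = dist / (dist.max() + 1e-12)
            return int(np.argmax(norm_fit + norm_dist))


        def ejso(obj, lb, ub, dim=10, pop_size=30, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(lb, ub, (pop_size, dim))
            fit = np.apply_along_axis(obj, 1, pop)
            for t in range(iters):
                c = abs((1 - t / iters) * (2 * rng.random() - 1))  # time control
                for i in range(pop_size):
                    if c >= 0.5:  # ocean-current move toward an FDB-chosen guide
                        guide = pop[fdb_index(pop, fit)]
                        step = rng.weibull(1.5, dim)               # assumed shape 1.5
                        trial = pop[i] + step * (guide - 3 * rng.random() * pop.mean(axis=0))
                    elif rng.random() > 1 - c:                     # passive swarm motion
                        trial = pop[i] + 0.1 * rng.random(dim) * (ub - lb)
                    else:                                          # active motion toward a fitter peer
                        j = rng.integers(pop_size)
                        d = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
                        trial = pop[i] + rng.random(dim) * d
                    trial = np.clip(trial, lb, ub)
                    f = obj(trial)
                    if f < fit[i]:                                 # greedy replacement
                        pop[i], fit[i] = trial, f
            return pop[np.argmin(fit)], fit.min()


        if __name__ == "__main__":
            sphere = lambda x: float(np.sum(x ** 2))
            best_x, best_f = ejso(sphere, lb=-5.0, ub=5.0)
            print("best fitness:", best_f)

    In an EM setting, obj would map a candidate dispatch vector (PV, WT, and biomass set-points) to total cost plus penalty terms, but that mapping is problem-specific and omitted here.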

  • 49.
    Alaküla, Anton Risberg
    et al.
    Lund Univ, Sweden.
    Hedin, Görel
    Lund Univ, Sweden.
    Fors, Niklas
    Lund Univ, Sweden.
    Pop, Adrian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Property probes: Live exploration of program analysis results2024In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 211, article id 111980Article in journal (Refereed)
    Abstract [en]

    We present property probes, a mechanism for helping a developer interactively explore partial program analysis results in terms of the source program while the program is edited. A node locator data structure is introduced that maps between source code spans and program representation nodes, and that helps identify probed nodes in a robust way after modifications to the source code. We have developed CodeProber, a client-server-based tool supporting property probes, and argue that it is very helpful in debugging and understanding program analyses. We have evaluated our tool on several languages and analyses, including a full Java compiler and a tool for intraprocedural dataflow analysis. Our performance results show that the probe overhead is negligible even when analyzing large projects.
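
    The node locator idea can be sketched in a few lines. The Python below, using the standard ast module, records a probed node's type and source position and re-resolves it after the file is re-parsed by picking the nearest node of the same type; CodeProber's actual locators are more robust, so the details here are simplified assumptions.

        # Toy node locator: survives edits by re-resolving to the nearest
        # node of the recorded type in the re-parsed tree. Simplified sketch.
        import ast


        def make_locator(node):
            return {"type": type(node).__name__,
                    "pos": (node.lineno, node.col_offset)}


        def resolve(locator, source):
            """Return the node of the recorded type whose position is closest
            to the recorded one in the freshly parsed tree."""
            line, col = locator["pos"]
            best, best_dist = None, None
            for node in ast.walk(ast.parse(source)):
                if type(node).__name__ != locator["type"] or not hasattr(node, "lineno"):
                    continue
                dist = abs(node.lineno - line) * 1000 + abs(node.col_offset - col)
                if best_dist is None or dist < best_dist:
                    best, best_dist = node, dist
            return best


        if __name__ == "__main__":
            v1 = "x = 1\ny = x + 2\n"
            target = next(n for n in ast.walk(ast.parse(v1)) if isinstance(n, ast.BinOp))
            loc = make_locator(target)
            v2 = "x = 1\nz = 5\ny = x + 2\n"   # an edit shifts the probed node down
            node = resolve(loc, v2)
            print(type(node).__name__, node.lineno)  # BinOp 3

    A probe attached to the x + 2 expression still finds it after a line is inserted above, which is the robustness property the abstract refers to; the real tool would additionally re-evaluate the probed property (e.g., a type or dataflow attribute) on the resolved node.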

  • 50.
    Alégroth, Emil
    et al.
    Blekinge Inst Technol, Sweden.
    Borch Petersen, Eline
    Linköping University, Department of Clinical and Experimental Medicine, Division of Speech language pathology, Audiology and Otorhinolaryngology. Linköping University, Faculty of Medicine and Health Sciences.
    Tinnerholm, John
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    A Failed attempt at creating Guidelines for Visual GUI Testing: An industrial case study2021In: 2021 14TH IEEE CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION (ICST 2021), IEEE COMPUTER SOC , 2021, p. 340-350Conference paper (Refereed)
    Abstract [en]

    Software development is governed by guidelines that aim to improve the code’s qualities, such as maintainability. However, whilst coding guidelines are commonplace for software, guidelines for testware are much less common. In particular, explicit coding guidelines are missing for GUI-based tests driven by image recognition, also referred to as Visual GUI Testing (VGT). In this industrial case study, performed at the Swedish defence contractor Saab AB, we propose a set of coding guidelines for VGT and evaluate their impact on test scripts for an industrial, safety-critical system. To study the guidelines’ effect on maintenance costs, five representative manual test cases were each translated with and without the proposed guidelines in the two VGT tools SikuliX and EyeAutomate. As such, 20 test scripts were developed, with a combined development cost of more than 100 man-hours. Three of the tests were then maintained by one researcher and two practitioners for another version of the system, and the costs were measured to evaluate return on investment. This analysis is complemented with observations and interviews to elicit practitioners’ perceptions of and experiences with VGT. Results show that scripts developed with the guidelines had higher maintenance costs than scripts developed without guidelines. This is supported by qualitative results indicating that many of the guidelines were considered inappropriate, superfluous, or unnecessary due to the inherent properties of the scripts, e.g., their naturally small size, linear flows, natural separation of concerns, and more. We conclude that there are differences between VGT scripts and software that prohibit direct translation of guidelines between the two. As such, we consider our study a failure, but argue that several lessons can be drawn from our results to guide future research into guidelines for VGT and GUI-based test automation.
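
    For readers unfamiliar with what such scripts look like, here is a minimal SikuliX-style (Jython) sketch. click, wait, type, and exists are genuine SikuliX script functions; the image file names and the settings dialog under test are hypothetical. Its naturally small, linear shape illustrates why several structuring guidelines were judged superfluous in the study.

        # Hypothetical VGT script in SikuliX's Jython dialect. Image files
        # (menu_settings.png, ...) are assumed captures of the GUI under test.

        def open_settings_dialog():
            click("menu_settings.png")       # find the menu entry by image match
            wait("settings_dialog.png", 10)  # block until the dialog renders

        def set_user_name(name):
            click("name_field.png")
            type(name)                       # SikuliX's type() sends keystrokes

        def save_and_verify():
            click("save_button.png")
            assert exists("saved_banner.png", 5), "save confirmation never appeared"

        open_settings_dialog()
        set_user_name("test-user")
        save_and_verify()

    Guidelines borrowed from software development (e.g., extracting the three helper functions above) add indirection that, per the study's results, can cost more in maintenance than the flat, linear alternative.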
