Search for publications in DiVA (liu.se)
1–50 of 3864
  • 1.
    Aaro, Gustav
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Roos, Daniel
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Toolset for Run-time Dataset Collection of Deep-scene Information (2020). In: Symposium on Modelling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), Springer, 2020, p. 224-236. Conference paper (Refereed)
    Abstract [en]

    Virtual reality (VR) provides many exciting new application opportunities, but also presents new challenges. In contrast to 360° videos, which only allow a user to select their viewing direction, in fully immersive VR users can also move around and interact with objects in the virtual world. To deliver such services most effectively, it is therefore important to understand how users move around in relation to such objects. In this paper, we present a methodology and software tool for generating run-time datasets capturing a user’s interactions with such 3D environments, evaluate and compare different object identification methods that we implement within the tool, and use datasets collected with the tool to demonstrate example uses. The tool was developed in Unity and integrates easily with existing Unity applications through periodic calls that extract information about the environment using different ray-casting methods. The software tool and example datasets are made available with this paper.
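    To make the sampling idea concrete, here is a minimal sketch (in Python rather than the tool's actual Unity/C# code) of the periodic logging loop the abstract describes: at a fixed interval, record a timestamp, the user's pose, and the object hit by a forward ray cast. The helper names (get_head_pose, raycast_forward) and the CSV layout are hypothetical placeholders, not the published tool's API.

    ```python
    import csv
    import time

    SAMPLE_INTERVAL = 0.1  # seconds between samples (hypothetical setting)

    def collect_deep_scene_log(get_head_pose, raycast_forward, duration, path="scene_log.csv"):
        """Periodically sample the user's pose and the object hit by a forward ray cast.

        `get_head_pose` and `raycast_forward` are stand-ins for engine callbacks
        (in the paper's tool these would be Unity ray-casting calls).
        """
        start = time.time()
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["t", "pos_x", "pos_y", "pos_z", "yaw", "pitch", "hit_object"])
            while time.time() - start < duration:
                t = time.time() - start
                pos, yaw, pitch = get_head_pose()       # user position and viewing direction
                hit = raycast_forward(pos, yaw, pitch)  # identifier of the object in view, or None
                writer.writerow([round(t, 3), *pos, yaw, pitch, hit])
                time.sleep(SAMPLE_INTERVAL)
    ```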

  • 2.
    Abbas, Qaisar
    et al.
    Uppsala universitet, Avdelningen för teknisk databehandling.
    Nordström, Jan
    Uppsala universitet, Avdelningen för teknisk databehandling.
    Weak versus strong no-slip boundary conditions for the Navier-Stokes equations (2010). In: Engineering Applications of Computational Fluid Mechanics, ISSN 1994-2060, Vol. 4, p. 29-38. Article in journal (Refereed)
  • 3.
    Abbas, Qaisar
    et al.
    Uppsala universitet, Avdelningen för teknisk databehandling.
    Nordström, Jan
    Uppsala universitet, Avdelningen för teknisk databehandling.
    Weak versus Strong No-Slip Boundary Conditions for the Navier-Stokes Equations (2008). In: Proc. 6th South African Conference on Computational and Applied Mechanics, South African Association for Theoretical and Applied Mechanics, 2008, p. 52-62. Conference paper (Other academic)
  • 4.
    Abbas, Qaisar
    et al.
    Uppsala universitet, Avdelningen för teknisk databehandling.
    van der Weide, Edwin
    Nordström, Jan
    Uppsala universitet, Avdelningen för teknisk databehandling.
    Accurate and stable calculations involving shocks using a new hybrid scheme (2009). In: Proc. 19th AIAA CFD Conference, AIAA, 2009. Conference paper (Refereed)
  • 5.
    Abd Nikooie Pour, Mina
    et al.
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Algergawy, Alsayed
    Friedrich Schiller University Jena, Germany.
    Amardeilh, Florence
    Elzeard.co, Paris, France.
    Amini, Reihaneh
    Data Semantics (DaSe) Laboratory, Kansas State University, USA.
    Fallatah, Omaima
    Information School, The University of Sheffield, Sheffield, UK.
    Faria, Daniel
    LASIGE, Faculdade de Ciencias, Universidade de Lisboa, Portugal.
    Fundulaki, Irini
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Harrow, Ian
    Pistoia Alliance Inc., USA.
    Hertling, Sven
    University of Mannheim, Germany.
    Hitzler, Pascal
    Data Semantics (DaSe) Laboratory, Kansas State University, USA.
    Huschka, Martin
    Fraunhofer Institute for High-Speed Dynamics, Ernst-Mach-Institut, EMI, Germany.
    Ibanescu, Liliana
    AgroParisTech, UMR MIA-Paris/INRAE, France.
    Jimenez-Ruiz, Ernesto
    City, University of London, UK and Department of Informatics, University of Oslo, Norway.
    Karam, Naouel
    Fraunhofer FOKUS, Berlin, Germany and Institute for Applied Informatics (InfAI), University of Leipzig, Germany.
    Laadhar, Amir
    Department of Computer Science, Aalborg University, Denmark.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology. University of Gävle, Sweden.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Ying
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Michel, Franck
    University Cote d’Azur, CNRS, Inria, France.
    Nasr, Engy
    Freiburg Galaxy Team, University of Freiburg, Germany.
    Paulheim, Heiko
    University of Mannheim, Germany.
    Pesquita, Catia
    LASIGE, Faculdade de Ciencias, Universidade de Lisboa, Portugal.
    Portisch, Jan
    University of Mannheim, Germany.
    Roussey, Catherine
    INRAE Centre Clermont-ARA, laboratoire TSCF, France.
    Saveta, Tzanina
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Shvaiko, Pavel
    Trentino Digitale SpA, Trento, Italy.
    Splendiani, Andrea
    Pistoia Alliance Inc., USA.
    Trojahn, Cassia
    IRIT & Universite Toulouse II, Toulouse, France.
    Vatascinova, Jana
    Prague University of Economics and Business, Czech Republic.
    Yaman, Beyza
    ADAPT Centre, Dublin City University, Ireland.
    Zamazal, Ondrej
    Prague University of Economics and Business, Czech Republic.
    Zhou, Lu
    Data Semantics (DaSe) Laboratory, Kansas State University, USA.
    Results of the Ontology Alignment Evaluation Initiative 2021 (2021). In: Proceedings of the 16th International Workshop on Ontology Matching: co-located with the 20th International Semantic Web Conference (ISWC 2021) / [ed] Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn, CEUR Workshop Proceedings, 2021, p. 62-108. Conference paper (Refereed)
    Abstract [en]

    The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2021 campaign offered 13 tracks and was attended by 21 participants. This paper is an overall presentation of that campaign.

  • 6.
    Abd Nikooie Pour, Mina
    et al.
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Algergawy, Alsayed
    Friedrich Schiller University Jena, Germany.
    Amini, Reihaneh
    Kansas State University, USA.
    Faria, Daniel
    BioData.pt, INESC-ID, Lisbon, Portugal.
    Fundulaki, Irini
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Harrow, Ian
    Pistoia Alliance Inc., USA.
    Hertling, Sven
    University of Mannheim, Germany.
    Jimenez-Ruiz, Ernesto
    City, University of London, UK, and University of Oslo, Norway.
    Jonquet, Clement
    LIRMM, University of Montpellier & CNRS, France.
    Karam, Naouel
    Fraunhofer FOKUS, Berlin, Germany.
    Khiat, Abderrahmane
    Fraunhofer IAIS, Sankt Augustin, Germany.
    Laadhar, Amir
    LIRMM, University of Montpellier & CNRS, France.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Ying
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Hitzler, Pascal
    Kansas State University, USA.
    Paulheim, Heiko
    University of Mannheim, Germany.
    Pesquita, Catia
    Universidade de Lisboa, Portugal.
    Saveta, Tzanina
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Shvaiko, Pavel
    TasLab, Trentino Digitale SpA, Trento, Italy.
    Splendiani, Andrea
    Pistoia Alliance Inc., USA.
    Thieblin, Elodie
    Logilab, France.
    Trojahn, Cassia
    IRIT & Universite Toulouse II, Toulouse, France.
    Vatascinova, Jana
    University of Economics, Prague, Czech Republic.
    Yaman, Beyza
    Dublin City University, Ireland.
    Zamazal, Ondrej
    University of Economics, Prague, Czech Republic.
    Zhou, Lu
    Kansas State University, USA.
    Results of the Ontology Alignment Evaluation Initiative 2020 (2020). In: Proceedings of the 15th International Workshop on Ontology Matching: co-located with the 19th International Semantic Web Conference (ISWC 2020) / [ed] Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn, Aachen, Germany: CEUR Workshop Proceedings, 2020, p. 92-138. Conference paper (Refereed)
    Abstract [en]

    The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2020 campaign offered 12 tracks with 36 test cases and was attended by 19 participants. This paper is an overall presentation of that campaign.

  • 7.
    Abd Nikooie Pour, Mina
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Algergawy, Alsayed
    Heinz Nixdorf Chair for Distributed Information Systems, Friedrich Schiller University Jena, Germany; Chair of Data and Knowledge Engineering, University of Passau, Germany.
    Buche, Patrice
    UMR IATE, INRAE, University of Montpellier, France.
    Castro, Leyla J.
    ZB MED Information Centre for Life Sciences, Germany.
    Chen, Jiaoyan
    Department of Computer Science, The University of Manchester, UK.
    Coulet, Adrien
    Inria Paris, France; Centre de Recherche des Cordeliers, Inserm, Université Paris Cité, Sorbonne Université, France.
    Cufi, Julien
    UMR IATE, INRAE, University of Montpellier, France.
    Dong, Hang
    Department of Computer Science, University of Oxford, UK.
    Fallatah, Omaima
    Department of Data Science, Umm Al-Qura University, Saudi Arabia.
    Faria, Daniel
    INESC-ID / IST, University of Lisbon, Portugal.
    Fundulaki, Irini
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Hertling, Sven
    Data and Web Science Group, University of Mannheim, Germany.
    He, Yuan
    Department of Computer Science, University of Oxford, UK.
    Horrocks, Ian
    Department of Computer Science, University of Oxford, UK.
    Huschka, Martin
    Fraunhofer Institute for High-Speed Dynamics, Ernst-Mach-Institut, EMI, Germany.
    Ibanescu, Liliana
    Université Paris-Saclay, INRAE, AgroParisTech, UMR MIA Paris-Saclay, France.
    Jain, Sarika
    National Institute of Technology Kurukshetra, India.
    Jiménez-Ruiz, Ernesto
    City, University of London, UK; SIRIUS, University of Oslo, Norway.
    Karam, Naouel
    Institute for Applied Informatics, University of Leipzig, Germany.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Ying
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Monnin, Pierre
    University Côte d’Azur, Inria, CNRS, I3S, France.
    Nasr, Engy
    Albert Ludwig University of Freiburg, Germany.
    Paulheim, Heiko
    Data and Web Science Group, University of Mannheim, Germany.
    Pesquita, Catia
    LASIGE, Faculdade de Ciências, Universidade de Lisboa, Portugal.
    Saveta, Tzanina
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Shvaiko, Pavel
    Trentino Digitale SpA, Trento, Italy.
    Sousa, Guilherme
    Institut de Recherche en Informatique de Toulouse, France.
    Trojahn, Cássia
    Institut de Recherche en Informatique de Toulouse, France.
    Vatascinova, Jana
    Prague University of Economics and Business, Czech Republic.
    Wu, Mingfang
    Australian Research Data Commons.
    Yaman, Beyza
    ADAPT Centre, Trinity College Dublin.
    Zamazal, Ondřej
    Prague University of Economics and Business, Czech Republic.
    Zhou, Lu
    Flatfee Corp, USA.
    Results of the Ontology Alignment Evaluation Initiative 2023 (2023). In: Proceedings of the 18th International Workshop on Ontology Matching co-located with the 22nd International Semantic Web Conference (ISWC 2023), Athens, Greece, November 7, 2023 / [ed] Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, Cássia Trojahn, CEUR Workshop Proceedings, 2023, Vol. 3591, p. 97-139. Conference paper (Refereed)
  • 8.
    Abd Nikooie Pour, Mina
    et al.
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Algergawy, Alsayed
    Heinz Nixdorf Chair for Distributed Information Systems, Friedrich Schiller University Jena, Germany.
    Buche, Patrice
    UMR IATE, INRAE, University of Montpellier, France.
    Castro, Leyla J.
    ZB MED Information Centre for Life Sciences, Germany.
    Chen, Jiaoyan
    Department of Computer Science, The University of Manchester, UK.
    Dong, Hang
    Department of Computer Science, University of Oxford, UK.
    Fallatah, Omaima
    Information School, The University of Sheffield, Sheffield, UK.
    Faria, Daniel
    University of Lisbon, Portugal.
    Fundulaki, Irini
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Hertling, Sven
    Data and Web Science Group, University of Mannheim, Germany.
    He, Yuan
    Department of Computer Science, University of Oxford, UK.
    Horrocks, Ian
    Department of Computer Science, University of Oxford, UK.
    Huschka, Martin
    Fraunhofer Institute for High-Speed Dynamics, Ernst-Mach-Institut, EMI, Germany.
    Ibanescu, Liliana
    Universite Paris-Saclay, INRAE, AgroParisTech, UMR MIA Paris-Saclay, France.
    Jimenez-Ruiz, Ernesto
    City, University of London, UK & SIRIUS, University of Oslo, Norway.
    Karam, Naouel
    Fraunhofer FOKUS & Institute for Applied Informatics, University of Leipzig, Germany.
    Laadhar, Amir
    University of Stuttgart, Germany.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology. Högskolan i Gävle.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Ying
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Michel, Franck
    University Cote d’Azur, CNRS, Inria.
    Nasr, Engy
    Albert Ludwig University of Freiburg, Germany.
    Paulheim, Heiko
    Data and Web Science Group, University of Mannheim, Germany.
    Pesquita, Catia
    LASIGE, Faculdade de Ciencias, Universidade de Lisboa, Portugal.
    Saveta, Tzanina
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Shvaiko, Pavel
    Trentino Digitale SpA, Trento, Italy.
    Trojahn, Cassia
    Institut de Recherche en Informatique de Toulouse, France.
    Verhey, Chantelle
    World Data System, International Technology Office, USA.
    Wu, Mingfang
    Australian Research Data Commons.
    Yaman, Beyza
    ADAPT Centre, Trinity College Dublin.
    Zamazal, Ondrej
    Prague University of Economics and Business, Czech Republic.
    Zhou, Lu
    TigerGraph, Inc. USA.
    Results of the Ontology Alignment Evaluation Initiative 2022 (2022). In: Proceedings of the 17th International Workshop on Ontology Matching (OM 2022): co-located with the 21st International Semantic Web Conference (ISWC 2022) / [ed] Pavel Shvaiko, Jerome Euzenat, Ernesto Jimenez-Ruiz, Oktie Hassanzadeh, Cassia Trojahn, CEUR Workshop Proceedings, 2022, p. 84-128. Conference paper (Refereed)
    Abstract [en]

    The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities. The OAEI 2022 campaign offered 14 tracks and was attended by 18 participants. This paper is an overall presentation of that campaign.

  • 9.
    Abd Nikooie Pour, Mina
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Armiento, Rickard
    Linköping University, Department of Physics, Chemistry and Biology, Theoretical Physics. Linköping University, Faculty of Science & Engineering.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering. Högskolan i Gävle, Gävle, Sweden.
    A First Step towards a Tool for Extending Ontologies (2021). In: Proceedings of the Sixth International Workshop on the Visualization and Interaction for Ontologies and Linked Data: co-located with the 20th International Semantic Web Conference (ISWC 2021) / [ed] Patrick Lambrix, Catia Pesquita, Vitalis Wiens, CEUR Workshop Proceedings, 2021, p. 1-12. Conference paper (Refereed)
    Abstract [en]

    Ontologies have been proposed as a means towards making data FAIR (Findable, Accessible, Interoperable, Reusable). This has attracted much interest in several communities, and ontologies are being developed. However, to obtain good results when using ontologies in semantically-enabled applications, the ontologies need to be of high quality. One quality aspect is that the ontologies should be as complete as possible. In this paper we propose a first version of a tool that supports users in extending ontologies using a phrase-based approach. To demonstrate the usefulness of the proposed tool, we exemplify its use by extending the Materials Design Ontology.

  • 10.
    Abd Nikooie Pour, Mina
    et al.
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Armiento, Rickard
    Linköping University, Department of Physics, Chemistry and Biology, Theoretical Physics. Linköping University, Faculty of Science & Engineering.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    A First Step towards Extending the Materials Design Ontology (2021). In: Workshop on Domain Ontologies for Research Data Management in Industry Commons of Materials and Manufacturing - DORIC-MM 2021 / [ed] S Chiacchiera, MT Horsch, J Francisco Morgado, G Goldbeck, 2021, p. 1-11. Conference paper (Refereed)
    Abstract [en]

    Ontologies have been proposed as a means towards making data FAIR (Findable, Accessible, Interoperable, Reusable) and have recently attracted much interest in the materials science community. Ontologies for this domain are being developed, and one such effort is the Materials Design Ontology. However, to obtain good results when using ontologies in semantically-enabled applications, the ontologies need to be of high quality. One quality aspect is that the ontologies should be as complete as possible. In this paper we show preliminary results regarding extending the Materials Design Ontology using a phrase-based topic model.

  • 11.
    Abd Nikooie Pour, Mina
    et al.
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, Database and information techniques. Swedish e-Science Research Centre, Sweden.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering. Swedish e-Science Research Centre, Sweden.
    Armiento, Rickard
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Physics, Chemistry and Biology, Theoretical Physics. Swedish e-Science Research Centre, Sweden.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering. Swedish e-Science Research Centre, Sweden; University of Gävle, Sweden.
    Phrase2Onto: A Tool to Support Ontology Extension (2023). In: 27th International Conference on Knowledge Based and Intelligent Information and Engineering Systems (KES 2023) / [ed] Robert Howlett, Elsevier, 2023, p. 1415-1424. Conference paper (Refereed)
    Abstract [en]

    Due to the importance of data FAIRness (Findable, Accessible, Interoperable, Reusable), ontologies as a means to make data FAIR have attracted more and more attention in different communities and are being used in semantically-enabled applications. However, to obtain good results when using ontologies in these applications, high-quality ontologies are needed, of which completeness is one of the important aspects. An ontology lacking information can lead to missing results. In this paper we present a tool, Phrase2Onto, that supports users in extending ontologies to make them more complete. It is particularly suited for ontology extension using a phrase-based topic model approach, but the tool can support any extension approach where a user needs to make decisions regarding the appropriateness of using phrases to define new concepts. We describe the functionality of the tool and a user study using the Pizza Ontology. The user study showed good usability of the system and high task completion. Further, we report on a real application where we extend the Materials Design Ontology.
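    As a rough illustration of what phrase-based extension support can involve (Phrase2Onto's own algorithm is not reproduced here), the sketch below mines frequent two-word phrases from a domain corpus and keeps those not already present among the ontology's concept labels, leaving candidates for a user to accept or reject. The thresholds and helper names are hypothetical simplifications.

    ```python
    from collections import Counter
    import re

    def candidate_phrases(corpus_texts, existing_labels, min_count=3):
        """Return frequent bigrams from the corpus that are not yet ontology labels.

        corpus_texts: iterable of strings from the domain corpus.
        existing_labels: collection of concept labels already in the ontology.
        """
        counts = Counter()
        for text in corpus_texts:
            tokens = re.findall(r"[a-z]+", text.lower())
            counts.update(zip(tokens, tokens[1:]))  # adjacent word pairs
        labels = {label.lower() for label in existing_labels}
        return [
            (" ".join(bigram), n)
            for bigram, n in counts.most_common()
            if n >= min_count and " ".join(bigram) not in labels
        ]

    # A user would then decide, phrase by phrase, whether each candidate should
    # become a new concept (e.g. a subclass of an existing class).
    ```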

  • 12.
    Abdolmajid Ahmad, Bookan
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Programmering av generativ konst i C# .Net (2014). Independent thesis Basic level (degree of Bachelor), 180 HE credits. Student thesis
    Abstract [sv]

    This thesis project was carried out at IDA (the Department of Computer and Information Science) at Linköping University.

    The aim of the thesis was to develop a program that creates the conditions for generative art using MyPaint, a digital drawing/painting tool. The method was to record the components created by the user, i.e. mouse interactions and keyboard shortcuts, and then use them algorithmically.

    The thesis resulted in a program (SharpArt) that captures mouse interactions and simulates keystrokes (keyboard shortcuts) to and from MyPaint, which in turn creates components that are used algorithmically. The program can also position objects on the canvas according to the desired coordinate values.

  • 13.
    Abdulahad, Bassam
    et al.
    Linköping University, Department of Computer and Information Science.
    Lounis, Georgios
    Linköping University, Department of Computer and Information Science.
    A user interface for the ontology merging tool SAMBO (2004). Independent thesis Basic level (professional degree). Student thesis
    Abstract [en]

    Ontologies have become an important tool for representing data in a structured manner. Merging ontologies allows for the creation of ontologies that can later be composed into larger ontologies, as well as for recognizing patterns and similarities between ontologies. Ontologies are nowadays being used in many areas, including bioinformatics. In this thesis, we present a desktop version of SAMBO, a system for merging ontologies that are represented in the languages OWL and DAML+OIL. The system has been developed in the programming language Java with JDK (Java Development Kit) 1.4.2. The user can open a file locally or from the network and can merge ontologies using suggestions generated by the SAMBO algorithm. SAMBO provides a user-friendly graphical interface, which guides the user through the merging process.
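    The merge suggestions mentioned above come from SAMBO's own matchers, which are not shown here; as a minimal sketch of the general idea, the snippet below proposes concept pairs from two ontologies whose names are lexically similar, using Python's difflib purely for illustration.

    ```python
    from difflib import SequenceMatcher

    def suggest_alignments(concepts_a, concepts_b, threshold=0.8):
        """Suggest concept pairs (one from each ontology) whose names are similar.

        concepts_a, concepts_b: lists of concept names from the two ontologies.
        Returns (name_a, name_b, score) tuples above the threshold, best first.
        """
        suggestions = []
        for a in concepts_a:
            for b in concepts_b:
                score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
                if score >= threshold:
                    suggestions.append((a, b, round(score, 2)))
        return sorted(suggestions, key=lambda s: -s[2])

    # Example: suggest_alignments(["Blood Cell", "Neuron"], ["blood_cell", "nerve cell"])
    # proposes ("Blood Cell", "blood_cell", ...) for the user to accept or reject.
    ```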

  • 14.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Sweden.
    Atig, Mohamed Faouzi
    Uppsala University, Sweden.
    Chen, Yu-Fang
    Academia Sinica, Taiwan.
    Leonardsson, Carl
    Uppsala University, Sweden.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Automatic fence insertion in integer programs via predicate abstraction (2012). In: Static Analysis: 19th International Symposium, SAS 2012, Deauville, France, September 11-13, 2012, Proceedings / [ed] Antoine Miné, David Schmidt, Springer Berlin/Heidelberg, 2012, p. 164-180. Conference paper (Refereed)
    Abstract [en]

    We propose an automatic fence insertion and verification framework for concurrent programs running under relaxed memory. Unlike previous approaches to this problem, which allow only variables of finite domain, we target programs with (unbounded) integer variables. The problem is difficult because it has two different sources of infiniteness: unbounded store buffers and unbounded integer variables. Our framework consists of three main components: (1) a finite abstraction technique for the store buffers, (2) a finite abstraction technique for the integer variables, and (3) a counterexample guided abstraction refinement loop of the model obtained from the combination of the two abstraction techniques. We have implemented a prototype based on the framework and run it successfully on all standard benchmarks together with several challenging examples that are beyond the applicability of existing methods.
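    The three components described above follow the classic counterexample-guided abstraction refinement (CEGAR) pattern. The skeleton below is a generic sketch of such a loop, not the authors' implementation; build_abstraction, model_check, is_feasible and refine are hypothetical placeholders for the store-buffer and integer abstractions and the refinement step.

    ```python
    def cegar(program, initial_predicates, build_abstraction, model_check, is_feasible, refine,
              max_iterations=100):
        """Generic counterexample-guided abstraction refinement loop.

        Each helper is a placeholder for a component of a real verifier:
        build_abstraction -> finite abstract model of `program` w.r.t. the predicates
        model_check       -> None if the abstract model is safe, else an abstract counterexample
        is_feasible       -> does the counterexample correspond to a real execution?
        refine            -> set of new predicates ruling out the spurious counterexample
        """
        predicates = set(initial_predicates)
        for _ in range(max_iterations):
            abstract_model = build_abstraction(program, predicates)
            counterexample = model_check(abstract_model)
            if counterexample is None:
                return "safe"
            if is_feasible(program, counterexample):
                return ("unsafe", counterexample)
            predicates |= refine(program, counterexample)
        return "unknown"
    ```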

  • 15.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Sweden.
    Atig, Mohamed Faouzi
    Uppsala University, Sweden.
    Chen, Yu-Fang
    Academia Sinica, Taiwan.
    Leonardsson, Carl
    Uppsala University, Sweden.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Memorax, a Precise and Sound Tool for Automatic Fence Insertion under TSO (2013). In: Tools and Algorithms for the Construction and Analysis of Systems: 19th International Conference, TACAS 2013, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013, Rome, Italy, March 16-24, 2013, Proceedings, Springer Berlin/Heidelberg, 2013, p. 530-536. Conference paper (Refereed)
    Abstract [en]

    We introduce MEMORAX, a tool for the verification of control state reachability (i.e., safety properties) of concurrent programs manipulating finite range and integer variables and running on top of weak memory models. The verification task is non-trivial as it involves exploring state spaces of arbitrary or even infinite sizes. Even for programs that only manipulate finite range variables, the sizes of the store buffers could grow unboundedly, and hence the state spaces that need to be explored could be of infinite size. In addition, MEMORAX incorporates an interpolation based CEGAR loop to make possible the verification of control state reachability for concurrent programs involving integer variables. The reachability procedure is used to automatically compute possible memory fence placements that guarantee the unreachability of bad control states under TSO. In fact, for programs only involving finite range variables and running on TSO, the fence insertion functionality is complete, i.e., it will find all minimal sets of memory fence placements (minimal in the sense that removing any fence would result in the reachability of the bad control states). This makes MEMORAX the first freely available, open source, push-button verification and fence insertion tool for programs running under TSO with integer variables.

  • 16.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University.
    Atig, Mohammed Faouzi
    Uppsala University.
    Ganjei, Zeinab
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Zhu, Yunyun
    Uppsala University.
    Verification of Cache Coherence Protocols wrt. Trace Filters (2015). Conference paper (Refereed)
    Abstract [en]

    We address the problem of parameterized verification of cache coherence protocols for hardware accelerated transactional memories. In this setting, transactional memories leverage the versioning capabilities of the underlying cache coherence protocol. The length of the transactions, their number, and the number of manipulated variables (i.e., cache lines) are parameters of the verification problem. Caches in such systems are finite-state automata communicating via broadcasts and shared variables. We augment our system with filters that restrict the set of possible executable traces according to existing conflict resolution policies. We show that the verification of coherence for parameterized cache protocols with filters can be reduced to systems with only a finite number of cache lines. For verification, we show how to account for the effect of the adopted filters in a symbolic backward reachability algorithm based on the framework of constrained monotonic abstraction. We have implemented our method and used it to verify transactional memory coherence protocols with respect to different conflict resolution policies.

  • 17.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Sweden.
    Dwarkadas, Sandhya
    University of Rochester, U.S.A..
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Shriraman, Arrvindh
    Simon Fraser University, Canada.
    Zhu, Yunyun
    Uppsala University, Sweden.
    Verifying Safety and Liveness for the FlexTM Hybrid Transactional Memory (2013). In: Design, Automation & Test in Europe (DATE 2013), Grenoble, France, March 18-22, 2013, IEEE, 2013, p. 785-790. Conference paper (Refereed)
    Abstract [en]

    We consider the verification of safety (strict serializability and abort consistency) and liveness (obstruction and livelock freedom) for the hybrid transactional memory framework FLEXTM. This framework allows for flexible implementations of transactional memories based on an adaptation of the MESI coherence protocol. FLEXTM allows for both eager and lazy conflict resolution strategies. As in the case of Software Transactional Memories, the verification problem is not trivial, as the number of concurrent transactions, their size, and the number of accessed shared variables cannot be a priori bounded. This complexity is exacerbated by aspects that are specific to hardware and hybrid transactional memories. Our work takes into account intricate behaviours such as cache line based conflict detection, false sharing, invisible reads and non-transactional instructions. We carry out the first automatic verification of a hybrid transactional memory and establish, by adopting a small model approach, challenging properties such as strict serializability, abort consistency, and obstruction freedom for both eager and lazy conflict resolution strategies. We also detect an example that refutes livelock freedom. To achieve this, our prototype tool makes use of the latest antichain-based techniques to handle systems with tens of thousands of states.

  • 18.
    Abdulla, Parosh Aziz
    et al.
    Uppsala University, Sweden.
    Haziza, Frédéric
    Uppsala University, Sweden.
    Holik, Lukas
    Brno University of Technology, Czech Republic.
    Jonsson, Bengt
    Uppsala University, Sweden.
    Rezine, Ahmed
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    An Integrated Specification and Verification Technique for Highly Concurrent Data Structures (2013). In: The 19th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2013), Rome, Italy, March 16-24, 2013 / [ed] Piterman, Nir, Smolka, Scott, 2013. Conference paper (Refereed)
    Abstract [en]

    We present a technique for automatically verifying safety properties of concurrent programs, in particular programs which rely on subtle dependencies of local states of different threads, such as lock-free implementations of stacks and queues in an environment without garbage collection. Our technique addresses the joint challenges of infinite-state specifications, an unbounded number of threads, and an unbounded heap managed by explicit memory allocation. Our technique builds on the automata-theoretic approach to model checking, in which a specification is given by an automaton that observes the execution of a program and accepts executions that violate the intended specification. We extend this approach by allowing specifications to be given by a class of infinite-state automata. We show how such automata can be used to specify queues, stacks, and other data structures, by extending a data-independence argument. For verification, we develop a shape analysis, which tracks correlations between pairs of threads, and a novel abstraction to make the analysis practical. We have implemented our method and used it to verify programs, some of which have not been verified by any other automatic method before.

  • 19.
    Abraham, Michael
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Effektivare fordonsdiagnostik över CAN-bussen genom UDS (2020). Independent thesis Basic level (degree of Bachelor), 10.5 credits / 16 HE credits. Student thesis
    Abstract [en]

    Cars are becoming more technically advanced and more ECUs are being developed, which results in increased safety and comfort and a lower environmental impact. This leads to complex work to test and verify that all the different ECUs function as intended in various situations. Vehicle diagnostics often requires software from third parties, which is often expensive. Syntronic AB is currently using software with much more functionality than needed to perform vehicle diagnostics, and much of the unnecessary functionality leads to unnecessarily long runtimes for the program. By studying CAN and UDS and analyzing how they interact, I was able to create software by systematically developing it with two interfaces connected to each computer, continuously testing the implementation against the theoretical basis, and finally testing the software in a vehicle. The created software was better suited to the needs of the company, and the more functionality-adapted software could perform the same diagnostics faster than the company's current software. The UDS service most used by the company could be implemented, and the created software made it possible to add more UDS services without modifications to the main program or its features.
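    For readers unfamiliar with how a UDS request travels over CAN, the sketch below sends a ReadDataByIdentifier (service 0x22) request as a single ISO-TP frame with the python-can library and waits for the ECU's reply. The arbitration IDs 0x7E0/0x7E8 and the SocketCAN channel name are common defaults used only as examples; multi-frame responses are not handled, and this is not the thesis's software.

    ```python
    import can

    REQUEST_ID = 0x7E0   # typical diagnostic request CAN ID (example value)
    RESPONSE_ID = 0x7E8  # typical diagnostic response CAN ID (example value)

    def read_data_by_identifier(bus, did):
        """Send a UDS ReadDataByIdentifier (0x22) request as a single ISO-TP frame.

        Byte 0 is the ISO-TP single-frame PCI (payload length 3), followed by the
        service id and the 16-bit data identifier; the frame is padded to 8 bytes.
        """
        payload = [0x03, 0x22, (did >> 8) & 0xFF, did & 0xFF, 0x00, 0x00, 0x00, 0x00]
        bus.send(can.Message(arbitration_id=REQUEST_ID, data=payload, is_extended_id=False))
        while True:
            msg = bus.recv(timeout=1.0)
            if msg is None:
                return None   # no answer within the timeout
            if msg.arbitration_id == RESPONSE_ID:
                # Raw response frame: byte 0 is the ISO-TP PCI, then 0x62, DID, data.
                # Multi-frame responses (flow control) are not handled in this sketch.
                return msg.data

    if __name__ == "__main__":
        # Assumes a SocketCAN interface named "can0"; adjust for the actual hardware.
        with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
            print(read_data_by_identifier(bus, 0xF186))  # 0xF186 = active diagnostic session DID
    ```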

  • 20.
    Abrahamsson, Olle
    et al.
    Linköping University, Department of Electrical Engineering, Communication Systems. Linköping University, Faculty of Science & Engineering.
    Danev, Danyo
    Linköping University, Department of Electrical Engineering, Communication Systems. Linköping University, Faculty of Science & Engineering.
    Larsson, Erik G
    Linköping University, Department of Electrical Engineering, Communication Systems. Linköping University, Faculty of Science & Engineering.
    Opinion Dynamics with Random Actions and a Stubborn Agent (2019). In: Conference Record of the 2019 Fifty-Third Asilomar Conference on Signals, Systems & Computers, IEEE, 2019, p. 1486-1490. Conference paper (Refereed)
    Abstract [en]

    We study opinion dynamics in a social network with stubborn agents who influence their neighbors but who themselves always stick to their initial opinion. We consider first the well-known DeGroot model. While it is known in the literature that this model can lead to consensus even in the presence of a stubborn agent, we show that the same result holds under weaker assumptions than has been previously reported. We then consider a recent extension of the DeGroot model in which the opinion of each agent is a random Bernoulli distributed variable, and by leveraging the first result we establish that this model also leads to consensus, in the sense of convergence in probability, in the presence of a stubborn agent. Moreover, all agents' opinions converge to that of the stubborn agent.
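    A small simulation illustrates the setting described above for the plain DeGroot model (not the paper's extension with random Bernoulli opinions): each ordinary agent repeatedly averages its neighbours' opinions, the stubborn agent's row of the weight matrix is a unit vector, and all opinions drift towards the stubborn agent's value. The weights below are made-up examples.

    ```python
    import numpy as np

    def degroot_with_stubborn(W, x0, stubborn=0, steps=200):
        """Iterate x(t+1) = W x(t), where row `stubborn` of W is a unit vector,
        so that agent never changes its opinion."""
        W = np.array(W, dtype=float)
        W[stubborn] = 0.0
        W[stubborn, stubborn] = 1.0           # stubborn agent only listens to itself
        W = W / W.sum(axis=1, keepdims=True)  # keep the matrix row-stochastic
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            x = W @ x
        return x

    # Three agents on a line; agent 0 is stubborn with opinion 1.0.
    W = [[1.0, 0.0, 0.0],
         [0.5, 0.3, 0.2],
         [0.0, 0.5, 0.5]]
    print(degroot_with_stubborn(W, x0=[1.0, 0.0, 0.0]))  # -> approximately [1.0, 1.0, 1.0]
    ```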

  • 21.
    Abrahamsson, Sara
    et al.
    Linköping University, Department of Computer and Information Science.
    Andersson, Frida
    Linköping University, Department of Computer and Information Science.
    Jaldevik, Albin
    Linköping University, Department of Computer and Information Science.
    Nyrfors, Frans
    Linköping University, Department of Computer and Information Science.
    Jareman, Erik
    Linköping University, Department of Computer and Information Science.
    Kröger, Oscar
    Linköping University, Department of Computer and Information Science.
    Tjern, Martin
    Linköping University, Department of Computer and Information Science.
    TopQ - a web-based queuing application: A case study in developing a queuing application for students and tutors with focus on navigability and design (2021). Independent thesis Basic level (degree of Bachelor), 12 credits / 18 HE credits. Student thesis
    Abstract [en]

    Students’ learning processes can be affected negatively by long waiting times to get assistance during lesson and lab sessions. Studies show that digital queuing systems decrease the waiting time. Thus, the purpose of this report is to investigate how to design a web-based queuing application that achieves high perceived usability for students and tutors, especially with respect to navigability and design, which according to research in the area have a direct impact on usability. To achieve high perceived usability, the application was developed iteratively. In the first version, the implemented functionality was built upon the results of the feasibility study combined with research in the area. After a set of user evaluations, changes from the first version were implemented to further improve the perceived usability. Lastly, another set of evaluations was performed to confirm the improvement in the final version. The results showed that the first version of the system was perceived as 84 out of 100 on the System Usability Scale (SUS) and the final version as 88 out of 100, an improvement of four units. Uniform design, no irrelevant functionality, placing buttons in conspicuous positions, and having double checks for “dangerous actions” all seem to be factors contributing to navigability and desirability, and thus to the usability of a queuing application.

  • 22.
    Abu Baker, Mohamed
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Agile Prototyping: A combination of different approaches into one main process (2009). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Software prototyping is considered to be one of the most important tools that software engineers use nowadays to understand the customer’s requirements and to develop software products that are efficient, reliable, and economically acceptable. Software engineers can choose any of the available prototyping approaches, based on the software they intend to develop and how fast they would like to go during development. Generally speaking, all prototyping approaches aim to help the engineers understand the customer’s true needs, examine different software solutions, quality aspects, verification activities, etc., that might affect the quality of the software under development, as well as to avoid potential development risks. A combination of several prototyping approaches and brainstorming techniques, which fulfilled the aim of the knowledge extraction approach, resulted in a prototyping approach in which the engineers develop one and only one throwaway prototype to extract more knowledge than expected, in order to improve the quality of the software under development by spending more time studying it from different points of view. The knowledge extraction approach was then applied to the developed prototyping approach, in which the developed model was treated as a software prototype, in order to gain more knowledge out of it. This activity resulted in several points of view and improvements that were implemented in the developed model, and as a result Agile Prototyping (AP) was developed. AP integrated more development approaches into the first developed prototyping model, such as agile, documentation, software configuration management, and fractional factorial design. The main aim of developing one and only one prototype, to help the engineers gain more knowledge and reduce the effort, time, and cost of development, was accomplished, but developing software products of satisfying quality is still done by developing an evolutionary prototype and building throwaway prototypes on top of it.

  • 23.
    Abugessaisa, Imad
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    Analytical tools and information-sharing methods supporting road safety organizations (2008). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Reliable and consistent sources of information about traffic and accidents are a prerequisite for improving road safety; they help assess the prevailing situation and give a good indication of its severity. In many countries there is under-reporting of road accidents, deaths and injuries, no collection of data at all, or low quality of information. Potential knowledge is hidden due to the large accumulation of traffic and accident data. This limits the investigative tasks of road safety experts and thus decreases the utilization of databases. All these factors can have serious effects on the analysis of the road safety situation, as well as on the results of the analyses.

    This dissertation presents a three-tiered conceptual model to support the sharing of road safety–related information and a set of applications and analysis tools. The overall aim of the research is to build and maintain an information-sharing platform, and to construct mechanisms that can support road safety professionals and researchers in their efforts to prevent road accidents. GLOBESAFE, a platform for information sharing among road safety organizations in different countries, was developed during this research.

    Several approaches were used. First, requirement elicitation methods were used to identify the exact requirements of the platform. This helped in developing a conceptual model, a common vocabulary, a set of applications, and various access modes to the system. The implementation of the requirements was based on iterative prototyping. Usability methods were introduced to evaluate the users’ interaction satisfaction with the system and the various tools. Second, a system-thinking approach and a technology acceptance model were used in the study of the Swedish traffic data acquisition system. Finally, visual data mining methods were introduced as a novel approach to discovering hidden knowledge and relationships in road traffic and accident databases. The results from these studies have been reported in several scientific articles.

    List of papers
    1. Ontological Approach to Modeling Information Systems
    2004 (English). In: Proceedings of the Fourth International Conference on Computer and Information Technology (CIT'04), 14–16 September, Wuhan, China: IEEE Computer Society, Washington, DC, 2004, p. 1122-1127. Conference paper, Published paper (Other academic)
    Abstract [en]

    In recent years, the use of formal tools in information system modeling and development represents a potential area of research in computer science. In 1967, the term ontology appeared for the first time in computer science literature as S. H. Mealy introduced it as a basic foundation in data modeling. The main objective of this paper is to discuss the concept of ontology (from a philosophical perspective) as it was used to bridge the gap between philosophy and information systems science, and to investigate ontology types that can be found during ontological investigation and the methods used in the investigation process. The secondary objective of this paper is to study different design and engineering approaches of ontology as well as development environments that are used to create and edit ontologies.

    Keywords
    Ontology, Conceptual Model
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-13184 (URN), 10.1109/CIT.2004.1357345 (DOI)
    Available from: 2008-04-28 Created: 2008-04-28 Last updated: 2009-04-21
    2. Benchmarking Road Safety Situations Using OGC Model of Portrayal Workflow
    2005 (English). In: Proceedings of the 13th International Conference on Geoinformatics (GeoInformatics’5), 17-19 August, Toronto, Canada: Ryerson University, 2005. Conference paper, Published paper (Other academic)
    Keywords
    road safety, benchmarking, OGC model
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-13185 (URN)
    Available from: 2008-04-28 Created: 2008-04-28 Last updated: 2009-04-21
    3. Map as Interface for Shared Information: A Study of Design Principles and User Interaction Satisfaction
    2006 (English). In: IADIS International Conference WWW/Internet 2006: Murcia, Spain, 2006, p. 377-384. Conference paper, Published paper (Refereed)
    Keywords
    Maps, shared information, design principles, user satisfaction
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-13186 (URN), 972-8924-19-4 (ISBN)
    Available from: 2008-04-28 Created: 2008-04-28 Last updated: 2009-02-05. Bibliographically approved
    4. GLOBESAFE: A Platform for Information-Sharing Among Road Safety Organizations
    2007 (English). In: IFIP-W.G. 9th International Conference on Social Implications of Computers in Developing Countries: May 2007, São Paulo, Brazil, 2007, p. 1-10. Conference paper, Published paper (Refereed)
    Keywords
    information sharing, road safety
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-13187 (URN)
    Available from: 2008-04-28 Created: 2008-04-28 Last updated: 2009-04-23. Bibliographically approved
    5. A Systemic View on Swedish Traffic Accident Data Acquisition System
    2007 (English). In: Proceedings of the 14th International Conference on Road Safety on Four Continents (RS4C), 14-16 November, Bangkok, Thailand, Sweden: VTI, 2007, p. 1-12. Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper presents work in progress on studying information sharing among road safety organizations, with a focus on the accident data acquisition system. In 2002, the Swedish Road Transport authority (SRT) accepted STRADA as the accident reporting system to be used by the police all over Sweden. Such a system is vital for coordinating, maintaining and auditing road safety in the country. Normally, road accidents are reported by the police or by the emergency unit at the hospital. However, more than 50% of the hospitals in Sweden did not use the system, which decreases its utilization and reduces the quality of the information demanded. By using a system-thinking approach in this study, we try to see why this situation occurred and how changes can be introduced and handled to overcome the problem. Interviews were conducted with a focus group and with different users of the system. To investigate issues related to the acceptance of the system, we use the Technology Acceptance Model (TAM). We recommend getting the users involved in the life cycle of STRADA, and the developers could use enabling systems to overcome problems related to system usability and complexity. We also suggest the use of iterative development to govern the life cycle.

    Place, publisher, year, edition, pages
    Sweden: VTI, 2007
    Keywords
    STRADA, information sharing, road accident recording system
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-13188 (URN)
    Available from: 2008-04-28 Created: 2008-04-28 Last updated: 2009-04-23. Bibliographically approved
    6. Knowledge Discovery in Road Accidents Database Integration of Visual and Automatic Data Mining Methods
    2008 (English). In: International Journal of Public Information Systems, ISSN 1653-4360, Vol. 1, p. 59-85. Article in journal (Refereed), Published
    Abstract [en]

    Road accident statistics are collected and used by a large number of users, and this can result in a huge volume of data which needs to be explored in order to uncover hidden knowledge. Potential knowledge may be hidden because of the accumulation of data, which limits the exploration task for the road safety expert and, hence, reduces the utilization of the database. In order to assist in solving these problems, this paper explores Automatic and Visual Data Mining (VDM) methods. The main purpose is to study VDM methods and their applicability to knowledge discovery in road accident databases. The basic feature of VDM is to involve the user in the exploration process. VDM uses direct interactive methods to allow the user to obtain insight into and recognize different patterns in the dataset. In this paper, I apply a range of methods and techniques, including a paradigm for VDM, exploratory data analysis, and clustering methods, such as K-means algorithms, hierarchical agglomerative clustering (HAC), classification trees, and self-organizing maps (SOM). These methods assist in integrating VDM with automatic data mining algorithms. Open source VDM tools offering visualization techniques were used. The first contribution of this paper lies in the area of discovering clusters and different relationships (such as the relationship between socioeconomic indicators and fatalities, traffic risk and population, personal risk and cars per capita, etc.) in the road safety database. The methods used were very useful and valuable for detecting clusters of countries that share similar traffic situations. The second contribution was the exploratory data analysis, where the user can explore the contents and the structure of the data set at an early stage of the analysis. This is supported by the filtering components of VDM. This helps expert users with a strong background in traffic safety analysis to intimate assumptions and hypotheses concerning future situations. The third contribution involved interactive explorations based on brushing and linking methods; this novel approach assists both experienced and inexperienced users in detecting and recognizing interesting patterns in the available database. The results obtained showed that this approach offers a better understanding of the contents of road safety databases compared with current statistical techniques and approaches used for analyzing road safety situations.
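    As a purely illustrative example of the automatic clustering step mentioned above, the snippet below groups countries by two indicators (fatalities per 100 000 inhabitants and cars per capita) with scikit-learn's K-means; the figures are made-up placeholders, not IRTAD data.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical (fatalities per 100k inhabitants, cars per capita) rows per country.
    countries = ["A", "B", "C", "D", "E", "F"]
    X = np.array([[5.2, 0.55],
                  [4.8, 0.60],
                  [12.1, 0.30],
                  [13.5, 0.25],
                  [6.0, 0.52],
                  [11.8, 0.28]])

    X_scaled = StandardScaler().fit_transform(X)     # put both indicators on the same scale
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)

    for country, label in zip(countries, labels):
        print(country, "-> cluster", label)          # countries with similar traffic situations group together
    ```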

    Keywords
    Visual data mining, K-Means, HAC, SOM, InfoVis, IRTAD, GLOBESAFE
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-13189 (URN)
    Available from: 2008-04-28 Created: 2008-04-28 Last updated: 2009-01-26
  • 24.
    Achichi, Manel
    et al.
    Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), France; University of Montpellier, France.
    Cheatham, Michelle
    Wright State University, USA.
    Dragisic, Zlatan
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Euzenat, Jerome
    INRIA, France; University Grenoble Alpes, Grenoble, France.
    Faria, Daniel
    Instituto Gulbenkian de Ciencia, Lisbon, Portugal.
    Ferrara, Alfio
    Universita degli studi di Milano, Italy.
    Flouris, Giorgos
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Fundulaki, Irini
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Harrow, Ian
    Pistoia Alliance Inc., USA.
    Ivanova, Valentina
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Jimenez-Ruiz, Ernesto
    University of Oslo, Norway.
    Kolthoff, Kristian
    University of Mannheim, Germany.
    Kuss, Elena
    University of Mannheim, Germany.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Leopold, Henrik
    Vrije Universiteit Amsterdam, Netherlands.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Meilicke, Christian
    University of Mannheim, Germany.
    Mohammadi, Majid
    Technical University of Delft, Netherlands.
    Montanelli, Stefano
    Universita degli studi di Milano, Italy.
    Pesquita, Catia
    Universidade de Lisboa, Portugal.
    Saveta, Tzanina
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Shvaiko, Pavel
    Informatica Trentina, Trento, Italy.
    Splendiani, Andrea
    Pistoia Alliance Inc., USA.
    Stuckenschmidt, Heiner
    University of Mannheim, Germany.
    Thieblin, Elodie
    Institut de Recherche en Informatique de Toulouse (IRIT), France; Universite Toulouse II, Toulouse, France.
    Todorov, Konstantin
    Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), France; University of Montpellier, France.
    Trojahn, Cassia
    Institut de Recherche en Informatique de Toulouse (IRIT); Universite Toulouse II, Toulouse, France.
    Zamazal, Ondrej
    University of Economics, Prague, Czech Republic.
    Results of the Ontology Alignment Evaluation Initiative 20172017In: Proceedings of the 12th International Workshop on Ontology Matching co-located with the 16th International Semantic Web Conference (ISWC 2017) / [ed] Pavel Shvaiko, Jerome Euzenat, Ernesto Jimenez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh, Aachen, Germany: CEUR Workshop Proceedings , 2017, p. 61-113Conference paper (Refereed)
  • 25.
    Achichi, Manel
    et al.
    LIRMM, University of Montpellier, France.
    Cheatham, Michelle
    Wright State University, USA.
    Dragisic, Zlatan
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Euzenat, Jerome
    INRIA, France; Univ. Grenoble Alpes, Grenoble, France.
    Faria, Daniel
    Instituto Gulbenkian de Ciencia, Lisbon, Portugal.
    Ferrara, Alfio
    Universita degli studi di Milano, Italy.
    Flouris, Giorgos
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Fundulaki, Irini
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Harrow, Ian
    Pistoia Alliance Inc., USA.
    Ivanova, Valentina
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Jiménez-Ruiz, Ernesto
    University of Oslo, Norway; University of Oxford, UK.
    Kuss, Elena
    University of Mannheim, Germany.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Leopold, Henrik
    Vrije Universiteit Amsterdam, The Netherlands.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Meilicke, Christian
    University of Mannheim, Germany.
    Montanelli, Stefano
    Universita degli studi di Milano, Italy.
    Pesquita, Catia
    Universidade de Lisboa, Portugal.
    Saveta, Tzanina
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Shvaiko, Pavel
    TasLab, Informatica Trentina, Trento, Italy.
    Splendiani, Andrea
    Novartis Institutes for Biomedical Research, Basel, Switzerland.
    Stuckenschmidt, Heiner
    University of Mannheim, Germany.
    Todorov, Konstantin
    LIRMM, University of Montpellier, France.
    Trojahn, Cassia
    IRIT, Toulouse, France; Université Toulouse II, Toulouse, France.
    Zamazal, Ondřej
    University of Economics, Prague, Czech Republic.
    Results of the Ontology Alignment Evaluation Initiative 20162016In: Proceedings of the 11th International Workshop on Ontology Matching, Aachen, Germany: CEUR Workshop Proceedings , 2016, p. 73-129Conference paper (Refereed)
  • 26.
    Acosta, Maribel
    et al.
    Karlsruhe Institute of Technology.
    Hartig, Olaf
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Sequeda, Juan
    Capsenta.
    Federated RDF query processing2019In: Encyclopedia of big data technologies / [ed] Sherif Sakr, Albert Zomaya, Cham: Springer, 2019Chapter in book (Refereed)
    Abstract [en]

    Federated RDF query processing is concerned with querying a federation of RDF data sources where the queries are expressed using a declarative query language (typically, the RDF query language SPARQL), and the data sources are autonomous and heterogeneous. The current literature in this context assumes that the data and the data sources are semantically homogeneous, while heterogeneity occurs at the level of data formats and access protocols.

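    As a rough illustration of the setting the chapter covers, the sketch below issues a federated SPARQL query over the standard SPARQL protocol: the local endpoint evaluates part of the pattern and ships the rest to a remote, autonomous source via a SERVICE clause. The endpoint URLs, the query, and the use of the requests library are illustrative assumptions, not material from the chapter.

    # Minimal sketch of a federated SPARQL query, not code from the chapter.
    # The endpoint URLs and the query itself are illustrative assumptions.
    import requests

    LOCAL_ENDPOINT = "http://localhost:3030/ds/sparql"   # hypothetical local store
    REMOTE_ENDPOINT = "https://dbpedia.org/sparql"       # public endpoint, for illustration

    query = f"""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?person ?name WHERE {{
      ?person a foaf:Person .                  # evaluated at the local endpoint
      SERVICE <{REMOTE_ENDPOINT}> {{           # sub-pattern shipped to the remote source
        ?person foaf:name ?name .
      }}
    }}
    LIMIT 10
    """

    response = requests.post(
        LOCAL_ENDPOINT,
        data={"query": query},
        headers={"Accept": "application/sparql-results+json"},
    )
    for binding in response.json()["results"]["bindings"]:
        print(binding["person"]["value"], binding.get("name", {}).get("value"))
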
  • 27.
    Adiththan, Arun
    et al.
    CUNY, NY 10019 USA.
    Ramesh, S.
    Gen Motors RandD, MI 48090 USA.
    Samii, Soheil
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Gen Motors RandD, MI 48090 USA.
    Cloud-assisted Control of Ground Vehicles using Adaptive Computation Offloading Techniques2018In: PROCEEDINGS OF THE 2018 DESIGN, AUTOMATION and TEST IN EUROPE CONFERENCE and EXHIBITION (DATE), IEEE , 2018, p. 589-592Conference paper (Refereed)
    Abstract [en]

    Existing approaches to designing efficient safety-critical control applications are constrained by limited in-vehicle sensing and computational capabilities. In the context of automated driving, we argue that there is a need to leverage resources "out-of-the-vehicle" to meet the sensing and processing requirements of sophisticated algorithms (e.g., deep neural networks). To meet this need, a suitable computation offloading technique that satisfies the vehicle safety and stability requirements, even in the presence of an unreliable communication network, has to be identified. In this work, we propose an adaptive technique for offloading control computations to the cloud. The proposed approach considers both current network conditions and control application requirements to determine the feasibility of leveraging remote computation and storage resources. As a case study, we describe a cloud-based path-following controller application that leverages crowdsensed data for path planning.

  • 28.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    An Efficient Temperature-Gradient Based Burn-In Technique for 3D Stacked ICs2014In: Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, IEEE conference proceedings, 2014Conference paper (Refereed)
    Abstract [en]

    Burn-in is usually carried out with high temperature and elevated voltage. Since some early-life failures depend not only on high temperature but also on temperature gradients, simply raising the temperature of an IC is not sufficient to detect them. This is especially true for 3D stacked ICs, since they usually have very large temperature gradients. The efficient detection of these early-life failures requires that specific temperature gradients are enforced as part of the burn-in process. This paper presents an efficient method to do so by applying high-power stimuli to the cores of the IC under burn-in through the test access mechanism. Therefore, no external heating equipment is required. The scheduling of the heating and cooling intervals to achieve the required temperature gradients is based on thermal simulations and is guided by functions derived from a set of thermal equations. Experimental results demonstrate the efficiency of the proposed method.

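    To make the idea of scheduling heating and cooling intervals concrete, here is a toy sketch (not the paper's method or thermal model): a crude first-order model of two stacked dies and a bang-bang rule that applies a power stimulus to the bottom die until a specified temperature gradient is reached, then inserts cooling intervals. All constants and the model itself are invented for the illustration.

    # Toy illustration only: hold a specified temperature gradient between two
    # stacked dies by toggling a power stimulus on the bottom die.
    AMBIENT = 25.0            # ambient temperature, deg C
    DT = 0.1                  # simulation time step (arbitrary unit)
    TARGET_GRADIENT = 15.0    # desired T_bottom - T_top, deg C

    def step(temps, powers, coupling=0.02, loss=0.02, heat=1.2):
        """Advance a crude lumped model: self-heating, die-to-die coupling, leakage to ambient."""
        t_bot, t_top = temps
        p_bot, p_top = powers
        d_bot = heat * p_bot + coupling * (t_top - t_bot) - loss * (t_bot - AMBIENT)
        d_top = heat * p_top + coupling * (t_bot - t_top) - loss * (t_top - AMBIENT)
        return (t_bot + DT * d_bot, t_top + DT * d_top)

    temps = (AMBIENT, AMBIENT)
    schedule = []             # per-interval power stimuli applied through the test access mechanism
    for _ in range(2000):
        gradient = temps[0] - temps[1]
        # Bang-bang decision: heat the bottom die while the gradient is too small,
        # otherwise insert a cooling interval (no stimulus).
        powers = (1.0, 0.0) if gradient < TARGET_GRADIENT else (0.0, 0.0)
        schedule.append(powers)
        temps = step(temps, powers)

    print(f"bottom={temps[0]:.1f} C, top={temps[1]:.1f} C, gradient={temps[0] - temps[1]:.1f} C")
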
  • 29.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Efficient Test Application for Rapid Multi-Temperature Testing2015In: Proceedings of the 25th edition on Great Lakes Symposium on VLSI, Association for Computing Machinery (ACM), 2015, p. 3-8Conference paper (Other academic)
    Abstract [en]

    Different defects may manifest themselves at different temperatures. Therefore, the tests that target such temperature-dependent defects must be applied at the temperatures appropriate for detecting them. Such a multi-temperature testing scheme applies tests at the different required temperatures. It is known that a test's power dissipation depends on the previously applied test; therefore, the same set of tests, when organized differently, dissipates different amounts of power. The technique proposed in this paper organizes the tests efficiently so that the resulting power levels lead to the required temperatures. Consequently, rapid multi-temperature testing is achieved. Experimental studies demonstrate the efficiency of the proposed technique.

  • 30.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Process-variation and Temperature Aware SoC Test Scheduling Technique2013In: Journal of electronic testing, ISSN 0923-8174, E-ISSN 1573-0727, Vol. 29, no 4, p. 499-520Article in journal (Refereed)
    Abstract [en]

    High temperature and process variation are undesirable phenomena affecting modern Systems-on-Chip (SoC). High temperature is a well-known issue, in particular during test, and should be taken care of in the test process. Modern SoCs are affected by large process variation and therefore experience large and time-variant temperature deviations. A traditional test schedule which ignores these deviations will be suboptimal in terms of speed or thermal safety. This paper presents an adaptive test scheduling method which acts in response to the temperature deviations in order to improve the test speed and thermal safety. The method consists of an offline phase and an online phase. In the offline phase a schedule tree is constructed, and in the online phase the appropriate path in the schedule tree is traversed based on temperature sensor readings. The proposed technique is designed to keep the online phase very simple by shifting the complexity into the offline phase. In order to efficiently produce high-quality schedules, an optimization heuristic which utilizes a dedicated thermal simulation is developed. Experiments are performed on a number of SoCs, including the ITC'02 benchmarks, and the experimental results demonstrate that the proposed technique significantly reduces the test cost in comparison with the best existing test scheduling method.

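    The offline/online split described above can be sketched with a toy schedule tree: each node holds a test sub-schedule and branches on a temperature-sensor reading. The tree structure, thresholds, and test names below are invented assumptions, not the schedules produced by the paper's optimization heuristic.

    # Minimal sketch of an adaptive schedule tree (illustrative only).
    # Offline phase (would be produced by heuristic optimization + thermal simulation).
    SCHEDULE_TREE = {
        "schedule": ["core0:test_a", "core1:test_b"],
        "threshold": 70.0,                       # deg C, decides which branch to follow
        "below": {   # chip is running cool: keep testing aggressively
            "schedule": ["core0:test_c", "core1:test_d"],
            "threshold": None, "below": None, "above": None,
        },
        "above": {   # chip is running hot: interleave cooling slack
            "schedule": ["cooling_interval", "core0:test_c"],
            "threshold": None, "below": None, "above": None,
        },
    }

    def run_adaptive_schedule(tree, read_sensor):
        """Online phase: apply each node's schedule, then branch on the sensor reading."""
        node = tree
        applied = []
        while node is not None:
            applied.extend(node["schedule"])
            if node["threshold"] is None:
                break
            temperature = read_sensor()
            node = node["below"] if temperature < node["threshold"] else node["above"]
        return applied

    # Hypothetical sensor that reports a hot chip on the first reading.
    readings = iter([78.3])
    print(run_adaptive_schedule(SCHEDULE_TREE, lambda: next(readings)))
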
    Download full text (pdf)
    fulltext
  • 31.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Process-Variation and Temperature Aware SoC Test Scheduling Using Particle Swarm Optimization2011In: The 6th IEEE International Design and Test Workshop (IDT'11), Beirut, Lebanon, December 11–14, 2011., IEEE , 2011Conference paper (Refereed)
    Abstract [en]

    High working temperature and process variation are undesirable effects for modern systems-on-chip. It is well recognized that the high temperature should be taken care of during the test process. Since large process variations induce rapid and large temperature deviations, traditional static test schedules are suboptimal in terms of speed and/or thermal safety. A solution to this problem is to use an adaptive test schedule which addresses the temperature deviations by reacting to them. We propose an adaptive method that consists of a computationally intense offline phase and a very simple online phase. In the offline phase, a near-optimal schedule tree is constructed and in the online phase, based on the temperature sensor readings, an appropriate path in the schedule tree is traversed. In this paper, particle swarm optimization is introduced into the offline phase and the implications are studied. Experimental results demonstrate the advantage of the proposed method.

  • 32.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Process-Variation Aware Multi-temperature Test Scheduling2014In: 27th International Conference on VLSI Design and 13th International Conference on Embedded Systems, IEEE conference proceedings, 2014, p. 32-37Conference paper (Refereed)
    Abstract [en]

    Chips manufactured with deep-submicron technologies are prone to large process variation and temperature-dependent defects. In order to provide high test efficiency, the tests for temperature-dependent defects should be applied at appropriate temperature ranges. Existing static scheduling techniques achieve these specified temperatures by scheduling the tests, specially developed heating sequences, and cooling intervals together. Because of the temperature uncertainty induced by process variation, a static test schedule is not capable of applying the tests at the intended temperatures in an efficient manner. As a result, the test cost will be very high. In this paper, an adaptive test scheduling method is introduced that utilizes on-chip temperature sensors in order to adapt the test schedule to the actual temperatures. The proposed method generates a low-cost schedule tree based on the variation statistics and thermal simulations in the design phase. During the test, a chip selects an appropriate schedule dynamically based on temperature sensor readings. A 23% decrease in the likelihood that tests are not applied at the intended temperatures is observed in the experimental studies, in addition to a 20% reduction in test application time.

  • 33.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Temperature-Gradient Based Burn-In for 3D Stacked ICs2013In: The 12th Swedish System-on-Chip Conference (SSoCC 2013), Ystad, Sweden, May 6-7, 2013 (not reviewed, not printed)., 2013Conference paper (Other academic)
    Abstract [en]

    3D Stacked IC fabrication, using Through-Silicon-Vias, is a promising technology for future integrated circuits. However, large temperature gradients may exacerbate early-life-failures to the extent that the commercialization of 3D Stacked ICs is challenged. The effective detection of these early-life-failures requires that burn-in is performed when the IC’s temperatures comply with the thermal maps that properly specify the temperature gradients. In this paper, two methods that efficiently generate and maintain the specified thermal maps are proposed. The thermal maps are achieved by applying heating and cooling intervals to the chips under test through test access mechanisms. Therefore, no external heating system is required. The scheduling of the heating and cooling intervals is based on thermal simulations. The schedule generation is guided by functions that are derived from the temperature equations. Experimental results demonstrate the efficiency of the proposed method.

  • 34.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Temperature-Gradient Based Test Scheduling for 3D Stacked ICs2013In: 2013 IEEE International Conference on Electronics, Circuits, and Systems, IEEE conference proceedings, 2013, p. 405-408Conference paper (Refereed)
    Abstract [en]

    Defects that are dependent on temperature gradients (e.g., delay faults) introduce a challenge for achieving an effective test process, in particular for 3D ICs. Testing for such defects must be performed when the proper temperature gradients are enforced on the IC; otherwise these defects may escape the test. In this paper, a technique that efficiently heats up the IC during test so that it complies with the specified temperature gradients is proposed. The specified temperature gradients are achieved by applying heating sequences to the cores of the IC under test through the test access mechanism; thus no external heating mechanism is required. The scheduling of the test and heating sequences is based on thermal simulations. The schedule generation is guided by functions derived from the IC's temperature equation. Experimental results demonstrate that the proposed technique offers considerable test time savings.

  • 35.
    Aghaee, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Faculty of Science & Engineering.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Faculty of Science & Engineering.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Faculty of Science & Engineering.
    A Test-Ordering Based Temperature-Cycling Acceleration Technique for 3D Stacked ICs2015In: Journal of electronic testing, ISSN 0923-8174, E-ISSN 1573-0727, Vol. 31, no 5, p. 503-523Article in journal (Refereed)
    Abstract [en]

    In a modern three-dimensional integrated circuit (3D IC), vertically stacked dies are interconnected using through-silicon vias. 3D ICs are subject to undesirable temperature-cycling phenomena such as through-silicon-via protrusion as well as void formation and growth. These cycling effects, which occur during early life, result in opens, resistive opens, and stress-induced carrier mobility reduction. Consequently, these early-life failures lead to products that fail shortly after the start of their use. Artificially accelerated temperature cycling, before the manufacturing test, helps to detect such early-life failures that are otherwise undetectable. A test-ordering based temperature-cycling acceleration technique is introduced in this paper that integrates a temperature-cycling acceleration procedure with pre-, mid-, and post-bond tests for 3D ICs. Moreover, it reduces the need for costly temperature-chamber based temperature-cycling acceleration methods. All of this results in a reduction in the overall test costs. The proposed method is a test-ordering and schedule based solution that enforces the required temperature-cycling effect and simultaneously performs the tests whenever appropriate. Experimental results demonstrate the efficiency of the proposed technique.

    Download full text (pdf)
    fulltext
  • 36.
    Aghaee, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Temperature-Gradient-Based Burn-In and Test Scheduling for 3-D Stacked ICs2015In: IEEE Transactions on Very Large Scale Integration (vlsi) Systems, ISSN 1063-8210, E-ISSN 1557-9999, Vol. 23, no 12, p. 2992-3005Article in journal (Refereed)
    Abstract [en]

    Large temperature gradients exacerbate various types of defects including early-life failures and delay faults. Efficient detection of these defects requires that burn-in and test for delay faults, respectively, are performed when temperature gradients with proper magnitudes are enforced on an Integrated Circuit (IC). This issue is much more important for 3-D stacked ICs (3-D SICs) compared with 2-D ICs because of the larger temperature gradients in 3-D SICs. In this paper, two methods to efficiently enforce the specified temperature gradients on the IC, for burn-in and delay-fault test, are proposed. The specified temperature gradients are enforced by applying high-power stimuli to the cores of the IC under test through the test access mechanism. Therefore, no external heating mechanism is required. The tests, high power stimuli, and cooling intervals are scheduled together based on temperature simulations so that the desired temperature gradients are rapidly enforced. The schedule generation is guided by functions derived from a set of thermal equations. The experimental results demonstrate the efficiency of the proposed methods.

    Download full text (pdf)
    fulltext
  • 37.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Ståhlberg, Simon
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Analysing Approximability and Heuristics in Planning Using the Exponential-Time Hypothesis2016In: ECAI 2016: 22ND EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, IOS Press, 2016, Vol. 285, p. 184-192Conference paper (Refereed)
    Abstract [en]

    Cost-optimal planning has become a very well-studied topic within planning. Needless to say, cost-optimal planning has proven to be computationally hard both theoretically and in practice. Since cost-optimal planning is an optimisation problem, it is natural to analyse it from an approximation point of view. Even though such studies may be valuable in themselves, additional motivation is provided by the fact that there is a very close link between approximability and the performance of heuristics used in heuristic search. The aim of this paper is to analyse approximability (and indirectly the performance of heuristics) with respect to lower time bounds. That is, we are not content with merely classifying problems into complexity classes - we also study their time complexity. This is achieved by replacing standard complexity-theoretic assumptions (such as P ≠ NP) with the exponential time hypothesis (ETH). This enables us to analyse, for instance, the performance of the h+ heuristic and obtain general trade-off results that correlate approximability bounds with bounds on time complexity.

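    For reference, the exponential time hypothesis used in the paper is commonly stated as follows (a generic textbook formulation, not a quotation from the paper), here as a short LaTeX fragment:

    % Standard statement of the exponential time hypothesis (ETH), commonly
    % attributed to Impagliazzo and Paturi; generic formulation, not taken from the paper.
    \textbf{ETH.} There is a constant $\delta > 0$ such that 3-SAT on $n$ variables
    cannot be solved in time $O(2^{\delta n})$; in particular, 3-SAT admits no
    $2^{o(n)}$-time algorithm.
    % Typical use: a reduction from 3-SAT that maps $n$ variables to planning
    % instances of size $m = f(n)$ transfers this bound, ruling out algorithms
    % (or approximation schemes) running in time $2^{o(f^{-1}(m))}$.
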
  • 38.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Ståhlberg, Simon
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Refining complexity analyses in planning by exploiting the exponential time hypothesis2016In: Annals of Mathematics and Artificial Intelligence, ISSN 1012-2443, E-ISSN 1573-7470, Vol. 78, no 2, p. 157-175Article in journal (Refereed)
    Abstract [en]

    The use of computational complexity in planning, and in AI in general, has always been a disputed topic. A major problem with ordinary worst-case analyses is that they do not provide any quantitative information: they do not tell us much about the running time of concrete algorithms, nor do they tell us much about the running time of optimal algorithms. We address problems like this by presenting results based on the exponential time hypothesis (ETH), which is a widely accepted hypothesis concerning the time complexity of 3-SAT. By using this approach, we provide, for instance, almost matching upper and lower bounds on the time complexity of propositional planning.

    Download full text (pdf)
    fulltext
  • 39.
    Aghili, Mohammed
    Linköping University, Department of Computer and Information Science.
    Jämförelse av aggregeringswebbdelar i MOSS 20072010Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A typical feature on the start page of many web portals is the web part that presents, for example, the latest blog posts, news items, or events that have been added to the site. These features are known as aggregation web parts. Since the start page is the most visited page in the portal, this feature is used very frequently. This work aims to identify a number of methods that can be used to implement this feature and to determine how well these web parts perform. This report presents both the methods that were found and the results of systematically testing them. The test results are presented in a clear, accessible way. Finally, conclusions are drawn from the results. The results do not favour one specific method; the method best suited to each individual context is largely determined by other factors, such as the frequency of visitors or of changes to the content that the method searches through.

    Download full text (pdf)
    exjobb_rapport_mohammed_2010-09-20
  • 40.
    Ahl, Linda
    Linköping University, Department of Computer and Information Science.
    Hur fungerar datorer?: En fallstudie av att utveckla pedagogisk multimedia för ett datorhistoriskt museum.2004Independent thesis Basic level (professional degree)Student thesis
    Abstract [en]

    Few people know how computers work, which components they are built from, and how these components interact. In this thesis project, a prototype of a multimedia presentation has been developed. The presentation will be placed in a computer history museum, where its purpose will be to help people understand how computers work. The prototype is based on images and simple animations that explain the interplay and function of the various computer components, for instance by showing scenarios that many people are likely to recognize from their everyday lives.

    The goal of the work has been to gain knowledge about how multimedia can be used to illustrate technical processes, as well as about how multimedia presentations should be developed. For this purpose, a systems development method adapted to this type of system was devised and used in the development of the prototype.

    The systems development method follows an iterative model, since an iterative way of working has proven preferable to a linear one in multimedia development. The reason is that in this kind of work, where the requirements and wishes for the final product are usually unclear at the outset, it is difficult to move through the development process in a single direction, i.e., to finish one step completely before starting the next.

    Regarding multimedia, one conclusion is that it can be used to advantage to show and explain technical processes, and that it appears to be a useful aid in education and museum activities.

    Download full text (pdf)
    FULLTEXT01
  • 41.
    Ahlberg, Gustav
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Generating web applications containing XSS and CSRF vulnerabilities2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Most of the people in the industrial world are using several web applications every day. Many of those web applications contain vulnerabilities that can allow attackers to steal sensitive data from the web application's users. One way to detect these vulnerabilities is to have a penetration tester examine the web application. A common way to train penetration testers to find vulnerabilities is to challenge them with realistic web applications that contain vulnerabilities. The penetration tester's assignment is to try to locate and exploit the vulnerabilities in the web application. Training on the same web application twice will not provide any new challenges to the penetration tester, because the penetration tester already knows how to exploit all the vulnerabilities in the web application. Therefore, a vast number of web applications and variants of web applications are needed to train on.

    This thesis describes a tool designed and developed to automatically generate vulnerable web applications. First, a web application is prepared so that the tool can generate a vulnerable version of it. The tool injects Cross Site Scripting (XSS) and Cross Site Request Forgery (CSRF) vulnerabilities into prepared web applications. Different variations of the same vulnerability can also be injected, so that different methods are needed to exploit the vulnerability depending on the variation. The tool is intended to generate web applications for training penetration testers, and some of the vulnerabilities it can inject cannot be detected by current free web application vulnerability scanners and would thus need to be found by a penetration tester.

    To inject the vulnerabilities, the tool uses abstract syntax trees and taint analysis to detect where vulnerabilities can be injected in the prepared web applications.

    Tests confirm that web application vulnerability scanners cannot find all the vulnerabilities on the web applications which have been generated by the tool.

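    The thesis does not spell out its implementation here, but the general idea of using abstract syntax trees and taint analysis to locate injection points can be sketched as follows. This is only an analogy in Python: the toy handler, the source (request.args) and the sink rule are assumptions made for illustration, not the tool described in the thesis.

    # Illustrative sketch only (not the thesis tool): use an abstract syntax tree
    # to flag spots where a request parameter flows directly into rendered output
    # without escaping -- the kind of location where an XSS flaw could be injected.
    import ast
    import textwrap

    HANDLER_SOURCE = '''
    def greet(request):
        name = request.args.get("name")          # tainted source
        return "<h1>Hello " + name + "</h1>"     # sink: unescaped response
    '''

    def tainted_names(tree):
        """Collect variables assigned from request.args.* calls (crude taint seeding)."""
        tainted = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
                src = ast.unparse(node.value.func)
                if src.startswith("request.args"):
                    tainted.update(t.id for t in node.targets if isinstance(t, ast.Name))
        return tainted

    def find_unescaped_sinks(source):
        tree = ast.parse(textwrap.dedent(source))
        tainted = tainted_names(tree)
        findings = []
        for node in ast.walk(tree):
            # Crude sink rule: a tainted name used inside a returned expression.
            if isinstance(node, ast.Return):
                used = {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}
                if used & tainted:
                    findings.append(f"line {node.lineno}: tainted value reaches response")
        return findings

    print(find_unescaped_sinks(HANDLER_SOURCE))
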
    Download full text (pdf)
    Generating web applications containing XSS and CSRF vulnerabilities
  • 42.
    Ahlstedt, Kristoffer
    et al.
    Linköping University, Department of Computer and Information Science.
    Annerwall, Lovisa
    Linköping University, Department of Computer and Information Science.
    Axelsson, Daniel
    Linköping University, Department of Computer and Information Science.
    Björklund, Samuel
    Linköping University, Department of Computer and Information Science.
    Cedenheim, Oliver
    Linköping University, Department of Computer and Information Science.
    Eriksson, Josefin
    Linköping University, Department of Computer and Information Science.
    Lehtonen, Jesper
    Linköping University, Department of Computer and Information Science.
    Lorentzon, Linn
    Linköping University, Department of Computer and Information Science.
    Olofsson, Gustaf
    Linköping University, Department of Computer and Information Science.
    BryggaHem – Development of an E-commerce Web Application with a Usability Focus2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis is part of a bachelor’s project conducted at Linköping University and addresses the development of an e-commerce web application with a usability focus. A market survey was conducted as part of the project to establish the orientation of the web application. Furthermore, the Scrum methodology is described and analyzed, and the team’s experiences of the project are documented. Research relevant to designing an application with high usability is detailed. Additionally the thesis addresses the tools and frameworks used during the development of the application, as well as ethical aspects of handling user information and selling products related to home-brewing of alcoholic beverages. The conclusion drawn from the project regarding the methodology is that Scrum is a viable methodology for this type of development work, although it requires small teams as well as previous experience of Scrum to yield high efficiency. The conclusion drawn from the project regarding usability is that it is achieved through a combination of variables that to a large extent is based on users’ distinct perceptions of the given application. 

    Download full text (pdf)
    fulltext
  • 43.
    Ahlström, Martin
    Linköping University, Department of Computer and Information Science.
    User-centred redesign of a business system using the Star Life Cycle method2008Independent thesis Advanced level (degree of Master), 20 points / 30 hpStudent thesis
    Abstract [en]

    The purpose of this thesis was to study user activities in a business system, MediusFlow. The overall objective was to identify user-related problems and to analyse which usability data-gathering methods to use in the future development process of the company Medius.

    The outcome of this study indicated that a cognition-related user problem was the most important problem to solve, and the Star Life Cycle method was preferred. Two low-fidelity prototypes were developed to exemplify an alternative design solution to the identified cognitive user problem. Furthermore, the two best methods for gathering user-related requirements were heuristic evaluation and expert review.

    In addition, a company-specific Style Guide with generic guidelines was created as a foundation for the development of future applications within Medius.

    Download full text (pdf)
    FULLTEXT01
  • 44.
    Ahlström, Petter
    Linköping University, Department of Computer and Information Science, EISLAB - Economic Information Systems. Linköping University, The Institute of Technology.
    Affärsstrategier för seniorbostadsmarknaden2005Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Demographic developments in Sweden are moving towards a population composition with an ever higher average age. According to Swedish population forecasts, almost one in four Swedes will be over 65 by the year 2025. The older share of the population constitutes a well-off group with relatively large real economic assets. Attitude surveys of tomorrow's pensioners suggest that this group places higher demands on housing design, ancillary services, health care, and elderly care than previous generations. Several studies point to an increased willingness and ability to pay for alternative forms of service and housing. At the same time, various market actors are trying to position a portfolio of products and services within one of the housing market's niches, here defined as the senior housing market. Within the senior housing market, a particular segment has been identified in which, in addition to senior housing, service-, health care-, and elderly care-related ancillary services are offered. Against this background, the research problem of the thesis has been formulated as follows: what creates a strong market position for an actor in the senior housing market with integrated service, health care, and elderly care?

    The point of departure has been a likely scenario in which private initiatives will, to an increasing extent, contribute to future housing solutions aimed at society's senior and elderly population groups. The purpose of the thesis has been partly to identify the success factors that can be assumed to underlie a strong market position, and partly to create a typology of different business strategies. Through an industry analysis, the thesis shows that the senior housing market is a niche market of marginal scope. The empirical investigation of the thesis was designed as a field study, which in turn was carried out in the form of, among other things, a pre-study and an interview study. The interview study took place in the autumn of 2004, with site visits and interviews with representatives of eleven selected case study organizations. Based on a number of criteria established in advance, the market actors' success factors were identified. The processing and analysis model constructed for this purpose, and used to analyse the empirical material from the field study, is based on studies in the strategy field. The model has been inspired by researchers such as Miles & Snow (1978), Porter (1980), and Gupta & Govindarajan (1984). It further builds on assumptions about the importance of resources and competencies for strategy formulation. Service management, and in particular the composition of services, is another area taken into account. The analysis model is built around five dimensions: environment, strategy, resources, service concept, and competition. The identified success factors are based on the two most successful actors in the interview study. The result has been formulated as a number of strategic choices, which can be summarized in the concepts of differentiation, focus, integration, collaboration, control, business development, core competence, and resources. The thesis shows that actors who run successful operations in the senior housing market largely follow what Porter (1980) defined as a differentiation strategy with focus. The thesis has also resulted in a business strategy typology for the senior housing market. These tentative conclusions have been formulated as four strategic ideal types: administrators, concept builders, entrepreneurs, and idealists.

  • 45.
    Ahlström, Petter
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, EISLAB - Economic Information Systems.
    Nilsson, Fredrik
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, EISLAB - Economic Information Systems.
    Affärsstrategier och seniorbostadsmarknaden2005In: Fastighetsnytt, ISSN 1104-8913, Vol. 12 nr 5, p. 36-37Article in journal (Other (popular science, discussion, etc.))
    Abstract [en]

    The article describes some of the results from the licentiate thesis "Affärsstrategier för seniorbostadsmarknaden" by Petter Ahlström.

  • 46.
    Ahlström, Vincent
    Linköping University, Department of Computer and Information Science, Database and information techniques.
    Improvement of simulation software for test equipment used in radio design and development2023Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    The global engineering design house Syntronic has requested further development of the open-source Python framework PyVISA-sim to enable dynamic simulation of signals and measuring instruments, which would streamline the development of their internal radio equipment testing tool. This tool is used by a world-leading telecommunications company when developing its next generation of radio equipment. PyVISA-sim is used in lab environments to test applications without access to real connected instruments. The project detailed in this thesis strives to pinpoint Syntronic's needs, develop the requested functionality within the framework, and have the changes implemented as part of the official GitHub repository, thereby making them available to anyone who wants to use them. To achieve this, agile software development methods are combined with an open-source mindset. The resulting additions to PyVISA-sim can reduce the workload for all users in need of a more complex simulation method.

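    For readers unfamiliar with the framework, this is roughly how PyVISA-sim is typically used: the '@sim' backend lets pyvisa talk to instruments defined in a YAML device file instead of real hardware. The sketch below is a generic usage example, not the thesis's additions; the chosen resource and the '?IDN' command are assumptions tied to the default device file shipped with pyvisa-sim.

    # Minimal, hedged usage sketch of PyVISA-sim (not code from the thesis).
    import pyvisa

    rm = pyvisa.ResourceManager("@sim")          # simulated backend, default device file
    print(rm.list_resources())                   # e.g. ('ASRL1::INSTR', ...)

    resource_name = rm.list_resources()[0]
    inst = rm.open_resource(resource_name,
                            read_termination="\n",
                            write_termination="\n")
    print(inst.query("?IDN"))                    # the simulated device answers per its YAML definition
    inst.close()
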
    Download full text (pdf)
    fulltext
  • 47.
    Ahmad, Azeem
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Contributions to Improving Feedback and Trust in Automated Testing and Continuous Integration and Delivery2022Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    An integrated release version (also known as a release candidate in software engineering) is produced by merging, building, and testing code on a regular basis as part of the Continuous Integration and Continuous Delivery (CI/CD) practices. Several benefits, including improved software quality and shorter release cycles, have been claimed for CI/CD. On the other hand, recent research has uncovered a plethora of problems and bad practices related to CI/CD adoption, necessitating some optimization. The problems addressed in this work include responding to practitioners' questions and obtaining quick and trustworthy feedback in CI/CD. To be more specific, our effort concentrated on: 1) identifying the information needs of software practitioners engaged in CI/CD; 2) adopting test optimization approaches that are realistic for use in CI/CD environments and provide faster feedback without introducing excessive technical requirements; 3) identifying perceived causes of test flakiness and automating root cause analysis, thereby providing developers with guidance on how to resolve test flakiness; and 4) identifying challenges in addressing information needs and in providing faster and more trustworthy feedback.

    The findings of the research reported in this thesis are based on data from three single-case studies and three multiple-case studies. The research uses quantitative and qualitative data collected via interviews, site visits, and workshops. To perform our analyses, we used data from firms producing embedded software as well as open-source repositories. The following are major research and practical contributions. 

    • Information Needs: The initial contribution to research is a list of information needs in CI/CD. This list contains 27 frequently asked questions on continuous integration and continuous delivery by software practitioners. The identified information needs have been classified as related to testing, code & commit, confidence, bug, and artifacts. We investigated how companies deal with information needs, what tools they use to deal with them, and who is interested in them. We concluded that there is a discrepancy between the identified needs and the techniques employed to meet them. Since some information needs cannot be met by current tools, manual inspections are required, which adds time to the process. Information about code & commit, confidence level, and testing is the most frequently sought for and most important information. 
    • Evaluation of Diversity-Based Techniques/Tool: This contribution is a detailed examination of diversity-based techniques using industry test cases, to determine whether there is a difference between diversity functions when selecting integration-level automated tests, and to compare diversity-based testing with other optimization techniques used in industry in terms of fault detection rates, feature coverage, and execution time. This enables us to observe how coverage changes when we run fewer test cases. We concluded that some of the techniques can eliminate up to 85% of test cases (provided by the case company) while still covering all distinct features/requirements. The techniques are developed and made available as an open-source tool for further research and application.
    • Test Flakiness Detection, Prediction & Automated Root Cause Analysis: We identified 19 factors that professionals perceive affect test flakiness. These perceived factors are divided into four categories: test code, system under test, CI/test infrastructure, and organizational. We concluded that some of the perceived factors of test flakiness in closed-source development are directly related to non-determinism, whereas other perceived factors concern different aspects, e.g., a lack of good properties of a test case (i.e., small, simple, and robust), deviations from the established processes, etc. To see whether the developers' perceptions were in line with what they had labelled as flaky or not, we examined the test artifacts that were readily available. We verified that two of the identified perceived factors (i.e., test case size and simplicity) are indeed indicative of test flakiness. Furthermore, we proposed a lightweight technique named trace-back coverage to detect flaky tests. Trace-back coverage was combined with other factors, such as test smells indicating test flakiness, flakiness frequency, and test case size, to investigate the effect on revealing test flakiness. When all factors are taken into consideration, the precision of flaky test detection increases from 57% (using a single factor) to 86% (combining different factors).
    List of papers
    1. Data visualisation in continuous integration and delivery: Information needs, challenges, and recommendations
    2022 (English)In: IET Software, ISSN 1751-8806, E-ISSN 1751-8814, Vol. 16, no 3, p. 331-349Article in journal (Refereed) Published
    Abstract [en]

    Several operations, ranging from regular code updates to compiling, building, testing, and distribution to customers, are consolidated in continuous integration and delivery. During these tasks, professionals seek additional information to complete the task at hand. Developers who devote a large amount of time and effort to finding such information may become distracted from their work. By defining the types of information that software professionals seek, we will better understand the processes, procedures, and resources used to deliver a quality product on time. A deeper understanding of software practitioners' information needs has many advantages, including remaining competitive, growing knowledge of issues that can stymie a timely update, and creating a visualisation tool to assist practitioners in addressing their information needs. This is an extension of previous work by the authors. The authors conducted a multiple-case holistic study with six different companies (38 unique participants) to identify information needs in continuous integration and delivery. This study attempts to capture the importance, frequency, required effort (e.g. sequence of actions required to collect information), current approach to handling, and associated stakeholders with respect to the identified needs. 27 information needs associated with different stakeholders (i.e. developers, testers, project managers, release team, and compliance authority) were identified. The identified needs were categorised as testing, code & commit, confidence, bug, and artefacts. Apart from identifying information needs, practitioners face several challenges in developing visualisation tools. Thus, 8 challenges faced by practitioners when developing and maintaining visualisation tools for the software team were identified. Recommendations from practitioners who are experts in developing, maintaining, and providing visualisation services to the software team are listed.

    Place, publisher, year, edition, pages
    WILEY, 2022
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-176847 (URN)10.1049/sfw2.12030 (DOI)000660517400001 ()
    Note

    Funding Agencies|Linkoping University

    Available from: 2021-06-22 Created: 2021-06-22 Last updated: 2022-10-20
    2. Improving continuous integration with similarity-based test case selection
    2018 (English)In: Proceedings of the 13th International Workshop on Automation of Software Test, New York: ACM Digital Library, 2018, p. 39-45Conference paper, Published paper (Refereed)
    Abstract [en]

    Automated testing is an essential component of Continuous Integration (CI) and Delivery (CD), such as scheduling automated test sessions on overnight builds. That allows stakeholders to execute entire test suites and achieve exhaustive test coverage, since running all tests is often infeasible during work hours, i.e., in parallel to development activities. On the other hand, developers also need test feedback from CI servers when pushing changes, even if not all test cases are executed. In this paper we evaluate similarity-based test case selection (SBTCS) on integration-level tests executed on continuous integration pipelines of two companies. We select test cases that maximise diversity of test coverage and reduce feedback time to developers. Our results confirm existing evidence that SBTCS is a strong candidate for test optimisation, by reducing feedback time (up to 92% faster in our case studies) while achieving full test coverage using only information from test artefacts themselves.

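    One common way to realise similarity-based selection, sketched here with invented coverage data, is to greedily pick the test whose coverage set is most dissimilar (by Jaccard distance) from everything already selected. This is an illustrative sketch of the general technique, not the paper's implementation.

    # Illustrative sketch of similarity/diversity-based test case selection.
    def jaccard_distance(a, b):
        union = a | b
        return 1.0 - (len(a & b) / len(union)) if union else 0.0

    def select_diverse(tests, budget):
        """tests: dict test_name -> set of covered items (requirements, files, ...)."""
        selected = [max(tests, key=lambda t: len(tests[t]))]   # seed with the broadest test
        while len(selected) < budget:
            remaining = [t for t in tests if t not in selected]
            if not remaining:
                break
            # Pick the candidate farthest (in the worst case) from the selected set.
            best = max(remaining,
                       key=lambda t: min(jaccard_distance(tests[t], tests[s]) for s in selected))
            selected.append(best)
        return selected

    # Invented coverage sets, only to show the mechanics.
    coverage = {
        "test_boot":    {"req1", "req2", "req3"},
        "test_radio":   {"req3", "req4"},
        "test_ui":      {"req5"},
        "test_logging": {"req2", "req3"},
    }
    print(select_diverse(coverage, budget=2))   # a broad test plus the most dissimilar one
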
    Place, publisher, year, edition, pages
    New York: ACM Digital Library, 2018
    Series
    International Workshop on Automation of Software Test, ISSN 2377-8628
    Keywords
    Similarity based test case selection, Continuous integration, Automated testing
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-152002 (URN)10.1145/3194733.3194744 (DOI)000458922700009 ()978-1-4503-5743-2 (ISBN)
    Conference
    AST'18 2018 ACM/IEEE 13th International Workshop on Automation of Software Test
    Note

    Funding agencies: Chalmers Software Center

    Available from: 2018-10-14 Created: 2018-10-14 Last updated: 2022-08-23
    3. Empirical analysis of practitioners perceptions of test flakiness factors
    2021 (English)In: Software testing, verification & reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 31, no 8, article id e1791Article in journal (Refereed) Published
    Abstract [en]

    Identifying the root causes of test flakiness is one of the challenges faced by practitioners during software testing; in other words, software testing is hampered by test flakiness. Since research about test flakiness in large-scale software engineering is scarce, there is a need for an empirical case study in which we can build a common and grounded understanding of the problem, as well as of relevant remedies that can later be evaluated in a large-scale context. This study reports the findings from a multiple-case study. The authors conducted an online survey to investigate and catalogue the root causes of test flakiness and mitigation strategies. We attempted to understand how practitioners perceive test flakiness in closed-source development, such as how they define test flakiness and what they perceive can affect it. The perceptions of practitioners were compared with the available literature. We also investigated whether practitioners' perceptions are reflected in the test artefacts, for example the relationship between the perceived factors and properties of the test artefacts. This study reports 19 factors that professionals perceive to affect test flakiness. These perceived factors are categorized as test code, system under test, CI/test infrastructure, and organization-related. The authors concluded that some of the perceived factors of test flakiness in closed-source development are directly related to non-determinism, whereas other perceived factors concern different aspects, for example, a lack of good properties of a test case, deviations from the established processes, and ad hoc decisions. Given a data set from the investigated cases, the authors concluded that two of the perceived factors (i.e., test case size and test case simplicity) have a strong effect on test flakiness.

    Place, publisher, year, edition, pages
    Wiley-Blackwell, 2021
    Keywords
    flaky tests; non-deterministic tests; practitioners perceptions; software testing; test smells
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-178938 (URN)10.1002/stvr.1791 (DOI)000687875100001 ()
    Note

    Funding agencies: Chalmers Tekniska Högskola; Linköpings Universitet

    Available from: 2021-09-06 Created: 2021-09-06 Last updated: 2022-08-23
    4. An Evaluation of Machine Learning Methods for Predicting Flaky Tests
    2020 (English)In: Proceedings of the 8th International Workshop on Quantitative Approaches to Software Quality (QuASoQ 2020) / [ed] Horst Lichter, Selin Aydin, Thanwadee Sunetnanta, Toni Anwar, CEUR-WS , 2020, Vol. 2767, p. 37-46Conference paper, Published paper (Other academic)
    Place, publisher, year, edition, pages
    CEUR-WS, 2020
    Series
    CEUR Workshop Proceedings, ISSN 1613-0073
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-174179 (URN)2-s2.0-85097906339 (Scopus ID)
    Conference
    27th Asia-Pacific Software Engineering Conference (APSEC 2020) Singapore (virtual), December 1, 2020.
    Available from: 2021-03-15 Created: 2021-03-15 Last updated: 2022-10-14 Bibliographically approved
    5. A Multi-factor Approach for Flaky Test Detection and Automated Root Cause Analysis
    2021 (English)In: 2021 28TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE (APSEC 2021), IEEE COMPUTER SOC , 2021, p. 338-348Conference paper, Published paper (Refereed)
    Abstract [en]

    Developers often spend time determining whether test case failures are real failures or flaky ones. Flaky tests, also known as non-deterministic tests, switch their outcomes without any modification of the codebase, reducing developers' confidence during maintenance as well as in the quality of the product. Re-running test cases to reveal flakiness is resource-consuming and unreliable, and it does not reveal the root causes of test flakiness. Our paper evaluates a multi-factor approach to identifying flaky test executions, implemented in a tool named MDFlaker. The four factors are: trace-back coverage, flaky frequency, number of test smells, and test size. Based on the extracted factors, MDFlaker uses k-Nearest Neighbor (KNN) to determine whether failed test executions are flaky. We investigate MDFlaker in a case study with 2166 test executions from different open-source repositories. We evaluate the effectiveness of our flaky-test detection tool, illustrate how the multi-factor approach can be used to reveal root causes of flakiness, and conduct a qualitative comparison between MDFlaker and other tools proposed in the literature. Our results show that the combination of different factors can be used to identify flaky tests. Each factor has its own trade-off, e.g., trace-back leads to many true positives, while flaky frequency yields more true negatives. Therefore, specific combinations of factors enable classification for testers with limited information (e.g., insufficient test history).
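
    The classification step described above can be sketched as a k-nearest-neighbour model over the four factors. The toy feature values, the scaling step, and k = 3 below are illustrative assumptions, not MDFlaker's actual configuration.

    ```python
    # Sketch of classifying failed test executions as flaky vs. real failures
    # with k-nearest neighbours over four factors: trace-back coverage,
    # flaky frequency, number of test smells and test size.
    # The toy data, scaling and k=3 are assumptions for illustration only.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.preprocessing import StandardScaler

    # Rows: [trace_back_coverage, flaky_frequency, num_test_smells, test_size]
    X_train = np.array([
        [0.9, 0.40, 3, 120],   # flaky
        [0.8, 0.35, 2,  80],   # flaky
        [0.1, 0.00, 0,  30],   # real failure
        [0.2, 0.05, 1,  45],   # real failure
    ])
    y_train = np.array([1, 1, 0, 0])  # 1 = flaky, 0 = real failure

    scaler = StandardScaler().fit(X_train)
    knn = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(X_train), y_train)

    failed_execution = np.array([[0.7, 0.30, 2, 100]])
    print("flaky" if knn.predict(scaler.transform(failed_execution))[0] else "real failure")
    ```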

    Place, publisher, year, edition, pages
    IEEE COMPUTER SOC, 2021
    Series
    Asia-Pacific Software Engineering Conference, ISSN 1530-1362
    Keywords
    flaky tests; non-deterministic tests; flaky test detection; automated root-cause analysis; trace-back
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-186181 (URN)10.1109/APSEC53868.2021.00041 (DOI)000802192700034 ()9781665437844 (ISBN)9781665437851 (ISBN)
    Conference
    28th Asia-Pacific Software Engineering Conference (APSEC), virtual, December 6-9, 2021
    Available from: 2022-06-23 Created: 2022-06-23 Last updated: 2022-09-22
  • 48.
    Ahmad, Azeem
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    de Oliveira Neto, Francisco Gomes
    Chalmers & Univ Gothenburg, Sweden.
    Shi, Zhixiang
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    A Multi-factor Approach for Flaky Test Detection and Automated Root Cause Analysis2021In: 2021 28TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE (APSEC 2021), IEEE COMPUTER SOC , 2021, p. 338-348Conference paper (Refereed)
    Abstract [en]

    Developers often spend time determining whether test case failures are real failures or flaky ones. Flaky tests, also known as non-deterministic tests, switch their outcomes without any modification of the codebase, reducing developers' confidence during maintenance as well as in the quality of the product. Re-running test cases to reveal flakiness is resource-consuming and unreliable, and it does not reveal the root causes of test flakiness. Our paper evaluates a multi-factor approach to identifying flaky test executions, implemented in a tool named MDFlaker. The four factors are: trace-back coverage, flaky frequency, number of test smells, and test size. Based on the extracted factors, MDFlaker uses k-Nearest Neighbor (KNN) to determine whether failed test executions are flaky. We investigate MDFlaker in a case study with 2166 test executions from different open-source repositories. We evaluate the effectiveness of our flaky-test detection tool, illustrate how the multi-factor approach can be used to reveal root causes of flakiness, and conduct a qualitative comparison between MDFlaker and other tools proposed in the literature. Our results show that the combination of different factors can be used to identify flaky tests. Each factor has its own trade-off, e.g., trace-back leads to many true positives, while flaky frequency yields more true negatives. Therefore, specific combinations of factors enable classification for testers with limited information (e.g., insufficient test history).

  • 49.
    Ahmad, Shakeel
    et al.
    Linköping University, Department of Electrical Engineering. Linköping University, Faculty of Science & Engineering. Univ Management and Technol, Pakistan.
    Dabrowski, Jerzy
    Linköping University, Department of Electrical Engineering, Communication Systems. Linköping University, Faculty of Science & Engineering.
    Design of Two-Tone RF Generator for On-Chip IP3/IP2 Test2019In: Journal of electronic testing, ISSN 0923-8174, E-ISSN 1573-0727, Vol. 35, no 1, p. 77-85Article in journal (Refereed)
    Abstract [en]

    In this paper a built-in self-test (BiST) aimed at third- and second-order intercept point (IP3/IP2) characterization of an RF receiver is discussed, with a focus on the stimulus generator. The generator is based on a specialized phase-locked loop (PLL) architecture with two voltage-controlled oscillators (VCOs) operating in the GHz frequency range. The objective of the PLL is to keep the frequency spacing of the VCOs under control. According to the test requirements, the phase noise and nonlinear distortion of the two-tone generator are used as figures of merit for the design of the VCOs and the analog adder. The PLL reference spurs, critical for the IP3 measurement, are avoided by means of a frequency-doubling technique. The circuit is designed in 65 nm CMOS. A highly linear analog adder with OIP3 > +15 dBm and ring VCOs with phase noise < -104 dBc/Hz at 1 MHz offset are used to generate an RF stimulus with a total power greater than -22 dBm. In simulations, performance sufficient for IP3/IP2 testing of a typical RF CMOS receiver is demonstrated.
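
    For context, the intercept points targeted by such a two-tone test are commonly estimated from the measured fundamental and intermodulation tone powers using the standard textbook relations below; the numeric values are assumed for illustration and are not taken from the paper.

    ```python
    # Standard two-tone estimates of the output-referred intercept points
    # (textbook relations, not taken from the paper). Powers are per tone, in dBm.
    def oip3_dbm(p_fund_dbm: float, p_im3_dbm: float) -> float:
        """Third-order output intercept point from fundamental and IM3 tone powers."""
        return p_fund_dbm + (p_fund_dbm - p_im3_dbm) / 2.0

    def oip2_dbm(p_fund_dbm: float, p_im2_dbm: float) -> float:
        """Second-order output intercept point from fundamental and IM2 tone powers."""
        return p_fund_dbm + (p_fund_dbm - p_im2_dbm)

    # Example values (assumed, for illustration only):
    print(oip3_dbm(p_fund_dbm=-25.0, p_im3_dbm=-95.0))  # -> 10.0 dBm
    print(oip2_dbm(p_fund_dbm=-25.0, p_im2_dbm=-85.0))  # -> 35.0 dBm
    ```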

  • 50.
    Ahmadian, Amirhossein
    et al.
    Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning. Linköping University, Faculty of Science & Engineering.
    Lindsten, Fredrik
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning. Linköping University, Faculty of Science & Engineering.
    Enhancing Representation Learning with Deep Classifiers in Presence of Shortcut2023In: Proceedings of IEEE ICASSP 2023, 2023Conference paper (Refereed)
    Abstract [en]

    A deep neural classifier trained on an upstream task can be leveraged to boost the performance of another classifier in a related downstream task through the representations learned in its hidden layers. However, the presence of shortcuts (easy-to-learn features) in the upstream task can considerably impair the versatility of intermediate representations and, in turn, the downstream performance. In this paper, we propose a method to improve the representations learned by deep neural image classifiers despite a shortcut in the upstream data. In our method, the upstream classification objective is augmented with a type of adversarial training in which an auxiliary network, the so-called lens, fools the classifier by exploiting the shortcut when reconstructing images. Empirical comparisons in self-supervised and transfer learning problems with three shortcut-biased datasets suggest the advantages of our method in terms of downstream performance and/or training time.
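
    As a rough illustration of the kind of augmented objective described above, the sketch below alternates between a classifier step on original and lens-reconstructed images and a lens step that tries to fool the classifier. The architectures, the loss weighting, and the alternating update scheme are assumptions made for this sketch and are not the authors' method.

    ```python
    # Highly simplified sketch of an adversarially augmented upstream objective:
    # the classifier is trained on both the original images and images produced
    # by an auxiliary "lens" network, while the lens is updated to make its
    # reconstructions fool the classifier. Architectures, weighting, and the
    # alternating update are illustrative assumptions only.
    import torch
    import torch.nn as nn

    classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10))
    lens = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1))

    opt_cls = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    opt_lens = torch.optim.Adam(lens.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()

    def training_step(images: torch.Tensor, labels: torch.Tensor, lam: float = 1.0) -> None:
        # 1) Classifier step: fit both original and lens-reconstructed images.
        recon = lens(images).detach()
        loss_cls = ce(classifier(images), labels) + lam * ce(classifier(recon), labels)
        opt_cls.zero_grad()
        loss_cls.backward()
        opt_cls.step()

        # 2) Lens step: reconstruct images so that the classifier is fooled.
        #    (Classifier gradients from this backward pass are discarded at the
        #    start of the next classifier step.)
        recon = lens(images)
        loss_lens = -ce(classifier(recon), labels)  # maximise classifier loss
        opt_lens.zero_grad()
        loss_lens.backward()
        opt_lens.step()

    if __name__ == "__main__":
        x = torch.randn(8, 3, 32, 32)
        y = torch.randint(0, 10, (8,))
        training_step(x, y)
    ```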
