liu.se — Search for publications in DiVA
1 - 50 of 83
  • 1.
    Eriksson, Henrik
    et al.
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Raciti, Massimiliano
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Basile, Maurizio
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Cunsolo, Alessandro
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Fröberg, Anders
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Ekberg, Joakim
    Linköping University, Department of Medical and Health Sciences, Social Medicine and Public Health Science. Linköping University, Faculty of Health Sciences.
    Timpka, Toomas
    Linköping University, Department of Medical and Health Sciences, Social Medicine and Public Health Science. Linköping University, Faculty of Health Sciences.
    A Cloud-Based Simulation Architecture for Pandemic Influenza Simulation (2011). In: AMIA Annual Symposium Proceedings 2011, Curran, 2011, p. 364-373. Conference paper (Refereed)
    Abstract [en]

    High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Compute Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 H-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface.
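
    The Condor/EC2 fan-out described above is infrastructure-level; as a hedged, purely local stand-in for the same pattern (not the paper's actual code), the sketch below runs independent simulation jobs concurrently over a worker pool. The job function, its toy growth model, and all names are invented for illustration.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    import random

    def simulate_outbreak(seed, days=30, r=1.3):
        # Toy stand-in for one resource-consuming simulation job:
        # noisy exponential growth of an infected count.
        rng = random.Random(seed)
        infected = 10.0
        for _ in range(days):
            infected *= r * rng.uniform(0.9, 1.1)
        return seed, infected

    # Fan out eight independent runs, as a Condor master would fan out jobs
    # to local and rented nodes; map preserves submission order.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(simulate_outbreak, range(8)))

    print(len(results))
    ```

    In the real architecture each job would be a full simulation executable submitted through Condor; the pool here only mirrors the scheduling idea.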

  • 2.
    Hellström, Jesper
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Moberg, Anton
    Linköping University, Department of Computer and Information Science, Software and Systems.
    A Lightweight Secure Development Process for Developers (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Following a secure development process when developing software can greatly increase the security of the software. Several secure development processes have been developed and are available for companies and organizations to adopt. However, the processes can be expensive and complex to adopt in terms of expertise, education, time, and other resources. In this thesis, a software service, developed by a small IT-consulting company, was tested with security tools and manual code review to find security vulnerabilities. These vulnerabilities showed that there was room for security improvement in the software development life cycle. Therefore, a lightweight secure development process that can be used by developers is proposed. The secure development process, called Lightweight Developer-Oriented Security Process (LDOSP), is based on activities from other secure development processes, and the choice of these activities was based on interviews with representatives of the IT-consulting company. The interviews showed that the process would need to be lightweight, time- and cost-efficient, and possible to perform by a developer without extensive security experience. LDOSP contains 11 activities spread across different phases of the software development life cycle, and an exemplification of the process was made to simplify the adoption of LDOSP.

  • 3.
    Leifler, Ola
    et al.
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Eriksson, Henrik
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    A Model for Document Processing in Semantic Desktop Systems (2008). In: Proceedings of I-KNOW '08, the International Conference on Knowledge Management, Berlin, Heidelberg: Springer Verlag, 2008. Conference paper (Refereed)
    Abstract [en]

    There is a significant gap between the services provided by dedicated information systems and general desktop systems for document communication and preparation. This situation is a serious knowledge-management problem, which often results in information loss, poor communication, and confusion among users. Semantic desktops promise to bring knowledge-based services to common desktop applications and, ultimately, to support knowledge management by adding advanced functionality to familiar computing environments. By custom tailoring these systems to different application domains, it is possible to provide dedicated services that assist users in combining document handling and communication with structured workflow processes and the services provided by dedicated systems. This paper presents a model for developing custom-tailored document processing for semantic-desktop systems. Our approach has been applied to the domain of military command and control, which is based on highly-structured document-driven processes.
    Keywords: semantic desktop, document-driven processes, semantic documents, planning
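
    The extraction step the abstract alludes to can be sketched in miniature: pulling structured, domain-specific concepts out of free-text plan documents so that a semantic desktop could manage them as typed objects. The concept types, patterns, and example sentence below are invented; the paper's ontology-driven extraction is far richer.

    ```python
    import re

    # Hypothetical concept patterns for a command-and-control domain.
    PATTERNS = {
        "grid_ref": re.compile(r"\bgrid [A-Z]{2}\d{4}\b"),
        "unit": re.compile(r"\b\d+(?:st|nd|rd|th) (?:Battalion|Company)\b"),
    }

    def extract_entities(document):
        # Map each concept type to every occurrence found in the text.
        return {kind: pat.findall(document) for kind, pat in PATTERNS.items()}

    doc = "3rd Company moves to grid AB1234 at dawn; 1st Battalion holds."
    print(extract_entities(doc))
    ```

    The extracted objects would then be linked into the desktop ontology so the same concept is visible across documents and applications.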

  • 4.
    Leifler, Ola
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces.
    Eriksson, Henrik
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces.
    A Research Agenda for Critiquing in Military Decision-Making (2004). In: Swedish-American Workshop on Simulation and Modeling, 2004, p. 11-20. Conference paper (Refereed)
  • 5.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Affordances and Constraints of Intelligent Decision Support for Military Command and Control: Three Case Studies of Support Systems (2011). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Researchers in military command and control (C2) have for several decades sought to help commanders by introducing automated, intelligent decision support systems. These systems are still not widely used, however, and some researchers argue that this may be due to problems inherent in the relationship between the affordances of technology and the requirements of the specific contexts of work in military C2. In this thesis, we study some specific properties of three support techniques for analyzing and automating aspects of C2 scenarios that are relevant for the contexts of work in which they can be used.

    The research questions we address concern (1) which affordances and constraints of these technologies are of most relevance to C2, and (2) how these affordances and limitations can be managed to improve the utility of intelligent decision support systems in C2. The thesis comprises three case studies of C2 scenarios where intelligent support systems have been devised for each scenario.

    The first study considered two military planning scenarios: planning for medical evacuations and similar tactical operations. In the study, we argue that the plan production capabilities of automated planners may be of less use than their constraint management facilities. ComPlan, which was the main technical system studied in the first case study, consisted of a highly configurable, collaborative, constraint-management framework for planning in which constraints could be used either to enforce relationships or notify users of their validity during planning. As a partial result of the first study, we proposed three tentative design criteria for intelligent decision support: transparency, graceful regulation and event-based feedback.

    The second study was of information management during planning at the operational level, where we used a C2 training scenario from the Swedish Armed Forces and the documents produced during the scenario as a basis for studying properties of Semantic Desktops as intelligent decision support. In the study, we argue that (1) due to the simultaneous use of both documents and specialized systems, it is imperative that commanders can manage information from heterogeneous sources consistently, and (2) in the context of a structurally rich domain such as C2, documents can contain enough information about domain-specific concepts that occur in several applications to allow them to be automatically extracted from documents and managed in a unified manner. As a result of our second study, we present a model for extending a general semantic desktop ontology with domain-specific concepts and mechanisms for extracting and managing semantic objects from plan documents. Our model adheres to the design criteria from the first case study.

    The third study investigated machine learning techniques in general and text clustering in particular, to support researchers who study team behavior and performance in C2. In this study, we used material from several C2 scenarios which had been studied previously. We interviewed the participating researchers about their work profiles, evaluated machine learning approaches for the purpose of supporting their work and devised a support system based on the results of our evaluations. In the study, we report on empirical results regarding the precision possible to achieve when automatically classifying messages in C2 workflows and present some ramifications of these results on the design of support tools for communication analysis. Finally, we report how the prototype support system for clustering messages in C2 communications was conceived by the users, the utility of the design criteria from case study 1 when applied to communication analysis, and the possibilities for using text clustering as a concrete support tool in communication analysis.

    In conclusion, we discuss how the affordances and constraints of intelligent decision support systems for C2 relate to our design criteria, and how the characteristics of each work situation demand new adaptations of the way in which intelligent support systems are used.

    List of papers
    1. Combining Technical and Human-Centered Strategies for Decision Support in Command and Control - The ComPlan Approach
    2008 (English). In: ISCRAM2008 Proceedings of the 5th International ISCRAM Conference / [ed] F. Fiedrich and B. Van de Walle, 2008, p. 504-515. Conference paper, Published paper (Refereed)
    Abstract [en]

    ComPlan (A Combined, Collaborative Command and Control Planning tool) is an approach to providing knowledge-based decision support in the context of command and control. It combines technical research on automated planning tools with human-centered research on mission planning. At its core, ComPlan uses interconnected views of a planning situation to present and manipulate aspects of a scenario. By using domain knowledge flexibly, it presents immediate and directly visible feedback on constraint violations of a plan, facilitates mental simulation of events, and provides support for synchronization of concurrently working mission planners. The conceptual framework of ComPlan is grounded on three main principles from human-centered research on command and control: transparency, graceful regulation, and event-based feedback. As a result, ComPlan provides a model for applying a human-centered perspective on plan authoring tools for command and control, and a demonstration for how to apply that model in an integrated plan-authoring environment.

    Keywords
    Decision support, mixed-initiative planning, critiquing, cognitive systems engineering
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-42584 (URN); 66339 (Local ID); 66339 (Archive number); 66339 (OAI)
    Conference
    5th International ISCRAM Conference, May 4-7, Washington, DC, USA
    Available from: 2009-10-10. Created: 2009-10-10. Last updated: 2018-01-12. Bibliographically approved
    2. A Model for Document Processing in Semantic Desktop Systems
    2008 (English). In: Proceedings of I-KNOW '08, the International Conference on Knowledge Management, Berlin, Heidelberg: Springer Verlag, 2008. Conference paper, Published paper (Refereed)
    Abstract [en]

    There is a significant gap between the services provided by dedicated information systems and general desktop systems for document communication and preparation. This situation is a serious knowledge-management problem, which often results in information loss, poor communication, and confusion among users. Semantic desktops promise to bring knowledge-based services to common desktop applications and, ultimately, to support knowledge management by adding advanced functionality to familiar computing environments. By custom tailoring these systems to different application domains, it is possible to provide dedicated services that assist users in combining document handling and communication with structured workflow processes and the services provided by dedicated systems. This paper presents a model for developing custom-tailored document processing for semantic-desktop systems. Our approach has been applied to the domain of military command and control, which is based on highly-structured document-driven processes.
    Keywords: semantic desktop, document-driven processes, semantic documents, planning

    Place, publisher, year, edition, pages
    Berlin, Heidelberg: Springer Verlag, 2008
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-42583 (URN); 66319 (Local ID); 66319 (Archive number); 66319 (OAI)
    Conference
    I-KNOW '08, the International Conference on Knowledge Management, 3-5 September, Graz, Austria
    Available from: 2009-10-10. Created: 2009-10-10. Last updated: 2018-01-12. Bibliographically approved
    3. Domain-specific knowledge management in a Semantic Desktop
    2009 (English). In: Proceedings of I-KNOW '09, 9th International Conference on Knowledge Management and Knowledge Technologies / [ed] Klaus Tochtermann, 2009, p. 360-365. Conference paper, Published paper (Refereed)
    Abstract [en]

    Semantic Desktops hold promise to provide intelligent information-management environments that can respond to users’ needs. A critical requirement for creating such environments is that the underlying ontology reflects the context of work properly. For specialized work domains where people deal with rich information sources in a context-specific manner, there may be a significant amount of domain-specific information available in text documents, emails and other domain-dependent data sources. We propose that this can be exploited to great effect in a Semantic Desktop to provide much more effective knowledge management. In this paper, we present extensions to an existing semantic desktop through content- and structure-based information extraction, domain-specific ontological extensions as well as visualization of semantic entities. Our extensions are justified by needs in strategic decision making, where domain-specific, well-structured knowledge is available in documents and communications but scattered across the desktop. The consistent and efficient use of these resources by a group of co-workers is critical to success. With a domain-aware semantic desktop, we argue that decision makers will have a much better chance of successful sense making in strategic decision making.

    Keywords
    semantic desktop, knowledge management, domain-specific ontology
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-56634 (URN)
    Conference
    9th International Conference on Knowledge Management and Knowledge Technologies 2-4 September, Graz, Austria
    Available from: 2010-05-27. Created: 2010-05-27. Last updated: 2018-01-12. Bibliographically approved
    4. Message classification as a basis for studying command and control communication: an evaluation of machine learning approaches
    2012 (English). In: Journal of Intelligent Information Systems, ISSN 0925-9902, E-ISSN 1573-7675, Vol. 38, no 2, p. 299-320. Article in journal (Refereed). Published
    Abstract [en]

    In military command and control, success relies on being able to perform key functions such as communicating intent. Most staff functions are carried out using standard means of text communication. Exactly how members of staff perform their duties, who they communicate with and how, and how they could perform better, is an area of active research. In command and control research, there is not yet a single model which explains all actions undertaken by members of staff well enough to prescribe a set of procedures for how to perform functions in command and control. In this context, we have studied whether automated classification approaches can be applied to textual communication to assist researchers who study command teams and analyze their actions. Specifically, we report the results from evaluating machine learning with respect to two metrics of classification performance: (1) the precision of finding a known transition between two activities in a work process, and (2) the precision of classifying messages similarly to human researchers that search for critical episodes in a workflow. The results indicate that classification based on text only provides higher precision results with respect to both metrics when compared to other machine learning approaches, and that the precision of classifying messages using text-based classification in already classified datasets was approximately 50%. We present the implications that these results have for the design of support systems based on machine learning, and outline how to practically use text classification for analyzing team communications by demonstrating a specific prototype support tool for workflow analysis.

    Place, publisher, year, edition, pages
    Berlin: Springer, 2012
    Keywords
    Command and control – Classification, Exploratory sequential data analysis, Workflow mining, Random indexing, Text clustering
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-67227 (URN); 10.1007/s10844-011-0156-5 (DOI); 000302240800001
    Note

    Funding agency: Swedish National Defense College

    Available from: 2011-04-06. Created: 2011-04-04. Last updated: 2018-01-12. Bibliographically approved
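    As a hedged illustration of the text-based message classification the article above evaluates (not the authors' actual pipeline or data), the sketch below trains a TF-IDF bag-of-words classifier on a few invented command-and-control-style messages:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented toy corpus; real C2 workflows would supply labeled messages.
    messages = [
        "request medical evacuation at grid 42",
        "evacuation helicopter en route",
        "enemy movement reported north of bridge",
        "hostile units sighted near the river",
    ]
    labels = ["logistics", "logistics", "intel", "intel"]

    # Text-only features: TF-IDF weights feeding a linear classifier.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(messages, labels)
    print(clf.predict(["medical evacuation requested"])[0])
    ```

    The same shape of pipeline, trained on already classified message sets, is the kind of text-only classifier whose ~50% precision the article reports on at realistic scale.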
    5. Analysis tools in the study of distributed decision-making: a meta-study of command and control research
    2012 (English). In: Cognition, Technology & Work, ISSN 1435-5558, E-ISSN 1435-5566, Vol. 14, no 2, p. 157-168. Article in journal (Refereed). Published
    Abstract [en]

    Our understanding of distributed decision making in professional teams and their performance comes in part from studies in which researchers gather and process information about the communications and actions of teams. In many cases, the data sets available for analysis are large, unwieldy and require methods for exploratory and dynamic management of data. In this paper, we report the results of interviewing eight researchers on their work process when conducting such analyses and their use of support tools in this process. Our aim with the study was to gain an understanding of their workflow when studying distributed decision making in teams, and specifically how automated pattern extraction tools could be of use in their work. Based on an analysis of the interviews, we elicited three issues of concern related to the use of support tools in analysis: focusing on a subset of data to study, drawing conclusions from data and understanding tool limitations. Together, these three issues point to two observations regarding tool use that are of specific relevance to the design of intelligent support tools based on pattern extraction: open-endedness and transparency.

    Place, publisher, year, edition, pages
    Springer London, 2012
    Keywords
    Command and control, Text analysis, Interview study, Exploratory sequential data analysis
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-67228 (URN); 10.1007/s10111-011-0177-4 (DOI); 000310239500005
    Note

    Funding agency: Swedish National Defense College

    Available from: 2011-04-04. Created: 2011-04-04. Last updated: 2018-01-12. Bibliographically approved
    6. Automated text-based analysis for decision-making research
    2012 (English). In: Cognition, Technology & Work, ISSN 1435-5558, E-ISSN 1435-5566, Vol. 14, no 2, p. 129-142. Article in journal (Refereed). Published
    Abstract [en]

    We present results from a study on constructing and evaluating a support tool for the extraction of patterns in distributed decision-making processes, based on design criteria elicited from a study on the work process involved in studying such decision-making. Specifically, we devised and evaluated an analysis tool for C2 researchers who study simulated decision-making scenarios for command teams. The analysis tool used text clustering as an underlying pattern extraction technique and was evaluated together with C2 researchers in a workshop to establish whether the design criteria were valid and the approach taken with the analysis tool was sound. Design criteria elicited from an earlier study with researchers (open-endedness and transparency) were highly consistent with the results from the workshop. Specifically, evaluation results indicate that successful deployment of advanced analysis tools requires that tools can treat multiple data sources and offer rich opportunities for manipulation and interaction (open-endedness) and careful design of visual presentations and explanations of the techniques used (transparency). Finally, the results point to the high relevance and promise of using text clustering as a support for analysis of C2 data.

    Place, publisher, year, edition, pages
    Springer London, 2012
    Keywords
    Command and control, Text analysis, Exploratory sequential data analysis, Text clustering
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-67229 (URN); 10.1007/s10111-010-0170-3 (DOI); 000310239500003
    Note

    The original publication is available at www.springerlink.com: Ola Leifler and Henrik Eriksson, Text-based Analysis for Command and Control Researchers: The Workflow Visualizer Approach, 2011, Cognition, Technology & Work. http://dx.doi.org/10.1007/s10111-010-0170-3 Copyright: Springer Science Business Media http://www.springerlink.com/

    Available from: 2011-04-06. Created: 2011-04-04. Last updated: 2018-01-12. Bibliographically approved
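    The text clustering underlying the tool described above can be sketched minimally: group messages by textual similarity so that related workflow episodes surface together. The messages and the scikit-learn pipeline are illustrative assumptions, not the paper's Workflow Visualizer implementation.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Invented toy messages; a real analysis would use team communications.
    messages = [
        "send supplies to checkpoint alpha",
        "supplies delivered to checkpoint",
        "weather report: heavy rain expected",
        "rain delays expected this evening",
    ]
    # Vectorize, then cluster into two groups by lexical similarity.
    X = TfidfVectorizer().fit_transform(messages)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)
    ```

    An analyst would then inspect each cluster rather than the raw message stream, which is the open-ended, transparent interaction style the paper's design criteria call for.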
  • 6.
    Amlinger, Anton
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    An Evaluation of Clustering and Classification Algorithms in Life-Logging Devices (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Using life-logging devices and wearables is a growing trend in today’s society. These yield vast amounts of information, data that is not directly overseeable or graspable at a glance due to its size. Gathering a qualitative, comprehensible overview of this quantitative information is essential for life-logging services to serve their purpose.

    This thesis provides an overview comparison of CLARANS, DBSCAN and SLINK, representing different branches of clustering algorithm types, as tools for activity detection in geo-spatial data sets. These activities are then classified using a simple model with model parameters learned via Bayesian inference, as a demonstration of a different branch of clustering.

    Results are provided using Silhouettes as evaluation for geo-spatial clustering and a user study for the end classification. The results are promising as an outline for a framework of classification and activity detection, and shed light on various pitfalls that might be encountered during implementation of such a service.
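
    The evaluation pipeline the thesis describes can be sketched on toy data: density-based clustering (DBSCAN, one of the three algorithm branches compared) over geo-coordinates, scored with Silhouettes. The coordinates and parameters below are invented; real life-log traces would drive the choice of `eps`.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.metrics import silhouette_score

    # Invented lat/lon points forming two visits ("activities").
    points = np.array([
        [58.41, 15.62], [58.41, 15.63], [58.42, 15.62],   # first location
        [58.59, 16.18], [58.59, 16.19], [58.60, 16.18],   # second location
    ])
    # Density clustering: points within eps of each other form one activity.
    labels = DBSCAN(eps=0.05, min_samples=2).fit_predict(points)
    print(labels, silhouette_score(points, labels))
    ```

    A high Silhouette score here reflects well-separated visit clusters, which is the property the thesis uses to compare CLARANS, DBSCAN and SLINK.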

  • 7.
    Johnsson, Arvid
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Analysis of GPU accelerated OpenCL applications on the Intel HD 4600 GPU (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    GPU acceleration is the concept of accelerating the execution speed of an application by running it on the GPU. Researchers and developers have always wanted to achieve greater speed for their applications, and GPU acceleration is a very common way of doing so. This has long been done for highly graphical applications using powerful dedicated GPUs. However, researchers have become more and more interested in using GPU acceleration on everyday applications. Moreover, nowadays more or less every computer has some sort of integrated GPU, which is often underutilized. Integrated GPUs are not as powerful as dedicated ones, but they have other benefits such as lower power consumption and faster data transfer. Therefore, the purpose of this thesis was to examine whether the integrated Intel HD 4600 GPU can be used to accelerate the two applications Image Convolution and sparse matrix vector multiplication (SpMV). This was done by analysing the code from a previous thesis, which had produced some unexpected results, as well as a benchmark from the OpenDwarf’s benchmark suite. The Intel HD 4600 was able to speed up both Image Convolution and SpMV by about two times compared to running them on the Intel i7-4790. However, the SpMV implementation was not well suited for the GPU, meaning that the speedup was only observed on ideal input configurations.
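
    The thesis's kernels are OpenCL; as a plain-CPU illustration of the same Image Convolution workload (not the thesis code), here is a direct 2-D filtering pass in NumPy, using a 3x3 box blur over a toy grayscale image. (Like typical GPU image filters, it slides the kernel without flipping it, which for a symmetric kernel equals true convolution.)

    ```python
    import numpy as np

    def convolve2d(image, kernel):
        # Valid-mode sliding-window filtering: each output pixel is the
        # weighted sum of the kernel-sized neighbourhood under it.
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    img = np.arange(25, dtype=float).reshape(5, 5)
    blur = np.ones((3, 3)) / 9.0   # 3x3 box blur (averaging) kernel
    out = convolve2d(img, blur)
    print(out)
    ```

    On a GPU each output pixel becomes one work-item, which is why this embarrassingly parallel workload speeds up well even on an integrated GPU.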

  • 8.
    Leifler, Ola
    et al.
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Eriksson, Henrik
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Analysis tools in the study of distributed decision-making: a meta-study of command and control research2012In: Cognition, Technology & Work, ISSN 1435-5558, E-ISSN 1435-5566, Vol. 14, no 2, p. 157-168Article in journal (Refereed)
    Abstract [en]

    Our understanding of distributed decision making in professional teams and their performance comes in part from studies in which researchers gather and process information about the communications and actions of teams. In many cases, the data sets available for analysis are large, unwieldy and require methods for exploratory and dynamic management of data. In this paper, we report the results of interviewing eight researchers on their work process when conducting such analyses and their use of support tools in this process. Our aim with the study was to gain an understanding of their workflow when studying distributed decision making in teams, and specifically how automated pattern extraction tools could be of use in their work. Based on an analysis of the interviews, we elicited three issues of concern related to the use of support tools in analysis: focusing on a subset of data to study, drawing conclusionsfrom data and understanding tool limitations. Together, these three issues point to two observations regarding tool use that are of specific relevance to the design of intelligent support tools based on pattern extraction: open-endedness and transparency.

  • 9.
    Abrahamsson, Linn
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Melin Wenström, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Användning av prototyper som verktyg för kravhantering i agil mjukvaruutveckling: En fallstudie [Use of prototypes as a tool for requirements engineering in agile software development: A case study] (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Requirements Engineering (RE) in Agile Software Development (ASD) is a challenge that many face and several techniques exist when doing so. One such technique is prototyping, when a model of a product is used to gather important information in software development. To describe how much a prototype resembles the product the notion of fidelity is used. The aim of this study is to contribute to research regarding prototyping in ASD, and to examine the effect of a prototype’s fidelity when using prototypes in discussions during RE. A case study is performed at the company Exsitec where staff are interviewed regarding prototyping in software development. Thereafter, two prototypes of low and high fidelity are developed and used in interviews as a basis for discussion. Based on this study, the use of prototypes in software projects can help customers trust the process, improve communication with customers, and facilitate when trying to reach consensus among different stakeholders. Furthermore, depending on how they are used, prototypes can contribute to understanding the big picture of the requirements and can also serve as documentation. The study also shows some, albeit subtle, differences in the information collected using prototypes with low and high fidelity. The use of a high fidelity prototype seems to generate more requirements, but makes interviewees less likely to come up with larger, more comprehensive requirement changes.

  • 10.
    Askling, Kim
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Application of Topic Models for Test Case Selection: A comparison of similarity-based selection techniques2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Regression testing is as important for the quality assurance of a system as it is time consuming. Several techniques exist for lowering the execution times of test suites and providing faster feedback to developers, for example techniques based on transition models or string distances. These are called test case selection (TCS) techniques, and focus on selecting subsets of the test suite deemed relevant for the modifications made to the system under test.

    This thesis project focused on evaluating the use of a topic model, latent Dirichlet allocation (LDA), as a means to create a diverse selection of test cases for coverage of certain test characteristics. The model was tested on authentic data sets from two different companies, and the results were compared against prior work where TCS was performed using similarity-based techniques. The model was also tuned and evaluated, using an algorithm based on differential evolution, to increase the model's stability in terms of inferred topics and topic diversity.

    The results indicate that the use of the model for test case selection purposes was not as efficient as the other similarity-based selection techniques studied in work prior to this thesis. In fact, the results show that the selection generated using the model performs similarly, in terms of coverage, to a randomly selected subset of the test suite. Tuning the model does not improve these results; in fact, the tuned model performs worse than the other methods in most cases. However, the tuning process results in the model being more stable in terms of inferred latent topics and topic diversity. The performance of the model is believed to depend strongly on the characteristics of the underlying data used to train it, putting emphasis on word frequencies and the overall sizes of the training documents, as these affect how well the relevance of words is scored.
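
    The selection step described above can be illustrated with a small sketch. The thesis does not publish its algorithm, so the following is only an assumed approach: given per-test-case topic distributions (which an LDA model would infer), pick a subset whose distributions are maximally spread out via farthest-first traversal with Jensen-Shannon divergence. All names and the toy distributions are made up.

    ```python
    import math

    def js_divergence(p, q):
        """Jensen-Shannon divergence between two topic distributions."""
        def kl(a, b):
            return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
        m = [(x + y) / 2 for x, y in zip(p, q)]
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    def select_diverse(topic_dists, k):
        """Greedily pick k test cases whose topic distributions are
        maximally spread out (farthest-first traversal)."""
        ids = list(topic_dists)
        selected = [ids[0]]  # seed with an arbitrary test case
        while len(selected) < k:
            best = max((i for i in ids if i not in selected),
                       key=lambda i: min(js_divergence(topic_dists[i],
                                                       topic_dists[s])
                                         for s in selected))
            selected.append(best)
        return selected

    # Toy topic distributions; in the thesis these would come from an
    # LDA model trained on test case descriptions.
    dists = {
        "tc1": [0.9, 0.05, 0.05],
        "tc2": [0.88, 0.07, 0.05],
        "tc3": [0.05, 0.9, 0.05],
        "tc4": [0.05, 0.05, 0.9],
    }
    print(select_diverse(dists, 3))
    ```

    Note that tc2, being nearly identical to tc1 in topic space, is the one left out, which is the intended "diverse selection" behavior.
    
    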

  • 11.
    Lindskog Hedström, David
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Assessing Needs for Visualizations in Continuous Integration: A Multiple Case Study2017Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    Many organizations are moving towards agile software development and practices such as continuous integration. Being significantly different from traditional development, agile development brings unique new challenges to be dealt with. This report explores challenges, related to the integration process, that large-scale organizations adopting continuous integration are experiencing. The focus is on challenges related to understanding information about what the continuous integration system does. Two types of challenges were found: those that call for a need to understand information and those that hinder information from being used. The report also suggests how visualizations can be used to help solve the former of the two.

  • 12.
    Olsson, Oskar
    et al.
    Linköping University, Department of Computer and Information Science.
    Eriksson, Moa
    Linköping University, Department of Computer and Information Science.
    Automated system tests with image recognition: focused on text detection and recognition2019Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Today’s airplanes and modern cars are equipped with displays to communicate important information to the pilot or driver. These displays need to be tested for safety reasons; displays that fail can be a huge safety risk and lead to catastrophic events. Today, displays are tested by checking the output signals or with the help of a person who validates the physical display manually. However, this technique is very inefficient and can lead to important errors going unnoticed. MindRoad AB is searching for a solution where validation of the display is made from a camera pointed at it; text and numbers are then recognized using a computer vision algorithm and validated in a time-efficient and accurate way. This thesis compares three different text detection algorithms, EAST, SWT and Tesseract, to determine the most suitable for continued work. The chosen algorithm is then optimized, and the possibility of developing a program which meets MindRoad AB's expectations is investigated. As a result, several algorithms were combined into a fully working program to detect and recognize text in industrial displays.

  • 13.
    Leifler, Ola
    et al.
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Eriksson, Henrik
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Automated text-based analysis for decision-making research2012In: Cognition, Technology & Work, ISSN 1435-5558, E-ISSN 1435-5566, Vol. 14, no 2, p. 129-142Article in journal (Refereed)
    Abstract [en]

    We present results from a study on constructing and evaluating a support tool for the extraction of patterns in distributed decision-making processes, based on design criteria elicited from a study on the work process involved in studying such decision-making. Specifically, we devised and evaluated an analysis tool for C2 researchers who study simulated decision-making scenarios for command teams. The analysis tool used text clustering as an underlying pattern extraction technique and was evaluated together with C2 researchers in a workshop to establish whether the design criteria were valid and the approach taken with the analysis tool was sound. Design criteria elicited from an earlier study with researchers (open-endedness and transparency) were highly consistent with the results from the workshop. Specifically, the evaluation results indicate that successful deployment of advanced analysis tools requires that tools can treat multiple data sources and offer rich opportunities for manipulation and interaction (open-endedness), along with careful design of visual presentations and explanations of the techniques used (transparency). Finally, the results point to the high relevance and promise of using text clustering as a support for the analysis of C2 data.

  • 14.
    Pettersson, Christopher
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Automatic fault detection and localization in IP networks: Active probing from a single node perspective2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Fault management is a continuously demanded function in any kind of network management. Commonly it is carried out by a centralized entity on the network, which correlates collected information into likely diagnoses of the current system states. We survey the use of active on-demand measurements, often called active probes, together with passive readings, from the perspective of a single node. The solution is confined to the node and is isolated from the surrounding environment. The utility of this approach to fault diagnosis was found to depend on the environment in which the specific node is located. In conclusion, the less knowledge of the environment that is available, the more useful this solution becomes. Consequently, this approach to fault diagnosis offers limited opportunities in the test environment. However, greater prospects were found for this approach in a heterogeneous customer environment.

  • 15.
    Boyer de la Giroday, Anna
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems.
    Automatic fine tuning of cavity filters2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Cavity filters are a necessary component in base stations used for telecommunication. Without these filters it would not be possible for base stations to send and receive signals at the same time. Today these cavity filters require fine tuning by humans before they can be deployed. In this thesis, a neural network that can tune cavity filters has been designed and implemented. Different types of design parameters have been evaluated, such as neural network architecture, data presentation and data preprocessing. While the results were not comparable to human fine tuning, it was shown that there was a relationship between the error and the number of weights in the neural network. The thesis also presents some rules of thumb for future designs of neural networks used for filter tuning.

  • 16.
    Lundberg, Martin
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Automatic parameter tuning in localization algorithms2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Many algorithms today require a number of parameters to be set in order to perform well in a given application. Tuning these parameters is often difficult and tedious to do manually, especially when the number of parameters is large. It is also unlikely that a human can find the best possible solution for difficult problems. Being able to automatically find good sets of parameters could both provide better results and save a lot of time.

    The prominent methods Bayesian optimization and Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are evaluated for automatic parameter tuning of localization algorithms in this work. Both methods are evaluated using a localization algorithm on different datasets and compared in terms of computational time and the precision and recall of the final solutions. This study shows that it is feasible to automatically tune the parameters of localization algorithms using the evaluated methods. In all experiments performed in this work, Bayesian optimization made the biggest improvements early in the optimization, but CMA-ES always surpassed it and proceeded to reach the best final solutions after some time. This study also shows that automatic parameter tuning is feasible even when using noisy real-world data collected from 3D cameras.
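
    The optimization loop such tuners run can be sketched with a much simpler relative of CMA-ES, a (1+1) evolution strategy with a 1/5th-success step-size rule. This is not the thesis's setup: the objective below is a made-up quadratic standing in for the real "precision/recall of a localization algorithm", and all parameter values are illustrative.

    ```python
    import random

    def one_plus_one_es(objective, x0, sigma=0.5, iters=200, seed=1):
        """Minimal (1+1)-ES with a 1/5th-success step-size rule; a
        simplified relative of CMA-ES shown only to illustrate the loop."""
        rng = random.Random(seed)
        x, fx = list(x0), objective(x0)
        for _ in range(iters):
            cand = [xi + rng.gauss(0, sigma) for xi in x]
            fc = objective(cand)
            if fc < fx:          # offspring better: accept, widen search
                x, fx = cand, fc
                sigma *= 1.22
            else:                # offspring worse: shrink step size
                sigma *= 0.82
        return x, fx

    # Stand-in objective with optimum at (1.0, -2.0); in the thesis this
    # would be an expensive run of the localization algorithm itself.
    def loss(params):
        return (params[0] - 1.0) ** 2 + (params[1] + 2.0) ** 2

    best, best_loss = one_plus_one_es(loss, [5.0, 5.0])
    print(best, best_loss)
    ```

    The same loop structure applies when the objective is expensive and noisy; that is precisely the regime where Bayesian optimization's sample efficiency pays off early, as the abstract observes.
    
    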

  • 17.
    Helén, Ludvig
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Automating Text Categorization with Machine Learning: Error Responsibility in a multi-layer hierarchy2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The company Ericsson is taking steps towards embracing automation techniques and applying them to its product development cycle. Ericsson wants to apply machine learning techniques to automate the evaluation of a text categorization problem for error reports, or trouble reports (TRs). In excess of 100,000 TRs are handled annually.

    This thesis presents two possible solutions to the routing problem. One technique uses traditional classifiers (Multinomial Naive Bayes and Support Vector Machines) for deciding the route through the company hierarchy to which a specific TR belongs. The other solution utilizes a Convolutional Neural Network for translating the TRs into low-dimensional word vectors, or word embeddings, in order to classify which group within the company should be responsible for handling the TR. The traditional classifiers achieve up to 83% accuracy and the Convolutional Neural Network achieves up to 71% accuracy in the task of predicting the correct class for a specific TR.
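
    The first of the two approaches, Multinomial Naive Bayes over bag-of-words counts, is compact enough to sketch from scratch. The class names, routing labels, and report snippets below are invented for illustration; a real deployment would use a library implementation and proper tokenization.

    ```python
    import math
    from collections import Counter, defaultdict

    class MultinomialNB:
        """Bag-of-words multinomial Naive Bayes with Laplace smoothing."""
        def fit(self, docs, labels):
            self.classes = set(labels)
            self.prior = Counter(labels)           # class frequencies
            self.word_counts = defaultdict(Counter)
            self.vocab = set()
            for doc, label in zip(docs, labels):
                words = doc.lower().split()
                self.word_counts[label].update(words)
                self.vocab.update(words)
            return self

        def predict(self, doc):
            n_docs = sum(self.prior.values())
            def log_posterior(c):
                # add-one smoothed word likelihoods plus log prior
                total = sum(self.word_counts[c].values()) + len(self.vocab)
                score = math.log(self.prior[c] / n_docs)
                for w in doc.lower().split():
                    score += math.log((self.word_counts[c][w] + 1) / total)
                return score
            return max(self.classes, key=log_posterior)

    # Hypothetical trouble-report snippets routed to two teams.
    docs = ["radio link failure on node", "radio signal drop",
            "billing database timeout", "database replication lag"]
    labels = ["radio", "radio", "platform", "platform"]

    clf = MultinomialNB().fit(docs, labels)
    print(clf.predict("radio link drop"))  # → radio
    ```

    With ~100,000 TRs per year, even this simple model gives a strong baseline before reaching for word embeddings and convolutional networks.
    
    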

  • 18.
    Lindén, Simon
    Linköping University, Department of Computer and Information Science, Human-Centered systems.
    Cloud Computing – A review of Confidentiality and Privacy2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With the introduction of cloud computing, computation became distributed, virtualized and scalable. This also meant that customers of cloud computing gave away some control of their systems, which heightened the importance of how security is handled in the cloud, for both provider and customer. Since security is such a big subject, the focus of this thesis is on confidentiality and privacy, both closely related to how personal data is handled. With the help of a systematic literature review, current challenges and possible mitigations are presented in several areas, concerning both the cloud provider and the cloud customer. The conclusion of the thesis is that cloud computing in itself has matured a lot since the early 2000s and that all of the challenges presented have possible mitigations. However, the exact implementation of a given mitigation will differ depending on the cloud customer and the exact application developed, as well as the exact service provided by the cloud provider. In the end it all boils down to a process that involves technology, employees and policies; with that, any user can secure their cloud application.

  • 19.
    Alansari, Hayder
    Linköping University, Department of Computer and Information Science.
    Clustered Data Management in Virtual Docker Networks Spanning Geo-Redundant Data Centers: A Performance Evaluation Study of Docker Networking2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Software containers in general, and Docker in particular, are becoming more popular both in software development and deployment. Software containers are intended to be a lightweight virtualization that provides the isolation of virtual machines with a performance that is close to native. Docker provides not only virtual isolation but also virtual networking to connect the isolated containers in the desired way. Many alternatives exist for the virtual networking provided by Docker, such as Host, Macvlan, Bridge, and Overlay networks. Each of these networking solutions has its own advantages and disadvantages.

    One application that can be developed and deployed in software containers is a data grid system. The purpose of this thesis is to measure the impact of various Docker networks on the performance of the Oracle Coherence data grid system. Therefore, the performance metrics are measured and compared between native deployment and Docker's built-in networking solutions. A scaled-down model of a data grid system is used along with benchmarking tools to measure the performance metrics.

    The obtained results show that changing the Docker networking has an impact on performance. In fact, some results suggested that some Docker networks can outperform native deployment. The conclusion of the thesis is that if performance is the only consideration, then the Docker networks that showed high performance can be used. However, real applications have more requirements than performance, such as security, availability, and simplicity. Therefore, the Docker network should be carefully selected based on the requirements of the application.

  • 20.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Combining Technical and Human-Centered Strategies for Decision Support in Command and Control - The ComPlan Approach2008In: ISCRAM2008 Proceedings of the 5th International ISCRAM Conference / [ed] F. Fiedrich and B. Van de Walle, 2008, p. 504-515Conference paper (Refereed)
    Abstract [en]

    ComPlan (A Combined, Collaborative Command and Control Planning tool) is an approach to providing knowledge-based decision support in the context of command and control. It combines technical research on automated planning tools with human-centered research on mission planning. At its core, ComPlan uses interconnected views of a planning situation to present and manipulate aspects of a scenario. By using domain knowledge flexibly, it presents immediate and directly visible feedback on constraint violations of a plan, facilitates mental simulation of events, and provides support for the synchronization of concurrently working mission planners. The conceptual framework of ComPlan is grounded in three main principles from human-centered research on command and control: transparency, graceful regulation, and event-based feedback. As a result, ComPlan provides a model for applying a human-centered perspective to plan-authoring tools for command and control, and a demonstration of how to apply that model in an integrated plan-authoring environment.

  • 21.
    Abrahamsson, Robin
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Berntsen, David
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Comparing modifiability of React Native and two native codebases2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Creating native mobile applications for multiple platforms generates a lot of duplicate code. This thesis has evaluated whether the code quality attribute modifiability improves when migrating to React Native. One Android and one iOS codebase existed for an application, and a third codebase was developed with React Native. The measurements of the codebases were based on the SQMMA model. The metrics for the model were collected with static analyzers created specifically for this project. The results consist of graphs that show the modifiability of some specific components over time and graphs that show the stability of the platforms. These graphs show that when measuring code metrics on applications over time, it is better to do so on a large codebase that has been developed for some time. When calculating a modifiability value, the sum of the metrics and the average value of the metrics between files should be used, and it is shown that the React Native platform seems to be more stable than the native ones.

  • 22.
    Sohl, Michael
    Linköping University, Department of Computer and Information Science.
    Comparing two heuristic evaluation methods and validating with usability test methods: Applying usability evaluation on a simple website2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis, an IT company asked for a tool for improving some aspects of the daily work of employees working with customer support. A website was constructed for this purpose, and development was steered by applying usability evaluation methods in an iterative manner. These methods were combined with following the guidelines of user-centered design. The aim was to see whether an increase in user satisfaction with the user interface could be measured between iterations. Another question central to the study was the comparison between the industry-leading Nielsen's heuristics and Gerhardt-Powals' principles. Only one previous study was found making this comparison, which made it interesting to see whether the same result would be reached in this study.

  • 23.
    Grönberg, David
    et al.
    Linköping University, Department of Computer and Information Science.
    Denesfay, Otto
    Linköping University, Department of Computer and Information Science.
    Comparison and improvement of time aware collaborative filtering techniques: Recommender systems2019Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    Recommender systems emerged in the mid '90s with the objective of helping users select the items or products most suited for them. Whether it is Facebook recommending people you might know, Spotify recommending songs you might like or YouTube recommending videos you might want to watch, recommender systems can now be found in every corner of the internet. In order to handle the immense increase of data online, the development of sophisticated recommender systems is crucial for filtering out information, enhancing web services by tailoring them according to the preferences of the user. This thesis aims to improve the accuracy of recommendations produced by a classical collaborative filtering recommender system by utilizing temporal properties, more precisely the date on which an item was rated by a user. Three different time-weighted implementations are presented and evaluated: a time-weighted prediction approach, a time-weighted similarity approach, and our proposed approach, weighting a user's mean rating by time. The different approaches are evaluated using the well-known MovieLens 100k dataset. Results show that it is possible to slightly increase the accuracy of recommendations by utilizing temporal properties.
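
    The core idea of the proposed approach, weighting a user's mean rating by time, can be sketched with exponential recency decay. The half-life, the timestamps, and the weighting scheme below are assumptions for illustration, not the thesis's actual formulation.

    ```python
    import math

    def decayed_mean_rating(ratings, now, half_life_days=90.0):
        """A user's mean rating where each rating is weighted by recency:
        weight = 0.5 ** (age_in_days / half_life). The half-life value is
        a made-up choice; it would be tuned on held-out data."""
        lam = math.log(2) / half_life_days
        num = den = 0.0
        for rating, t_days in ratings:
            w = math.exp(-lam * (now - t_days))  # newer => closer to 1
            num += w * rating
            den += w
        return num / den

    # (rating, timestamp in days); recent high ratings dominate the mean,
    # so the baseline used in prediction tracks the user's current taste.
    history = [(2.0, 0), (3.0, 300), (5.0, 360)]
    print(round(decayed_mean_rating(history, now=365), 2))
    ```

    Plugging such a recency-aware mean into a classical user-based collaborative filtering predictor is one way a time-weighted baseline could "slightly increase the accuracy of recommendations", as the abstract reports.
    
    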

  • 24.
    Walander, Tomas
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Larsson, David
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Critical success factors in Agile software development projects2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The demand for combining Agile methodologies with large organizations is growing as IT plays a larger role in modern business, even in traditional manufacturing companies. In such organizations, management feel they are losing the ability to plan and control as the developers increasingly utilize Agile methodologies. This mismatch leads to frustration and creates barriers to fully Agile software development. Therefore, this report aims to evaluate what factors affect Agile software development projects in an organizational context, and in particular how these factors can be monitored by the effective use of measures.

    This master thesis project has conducted a case study at Scania IT, a subsidiary of truck manufacturer Scania, as well as an extensive literature review, which together help identify several critical success factors for combining Agile methodologies with an organization.

    The report concludes that several aspects are important when agility is introduced into a functional organization, and also when combined with a project stage-gate model. Moreover, it was found that measures, in particular software metrics, can greatly aid the organization in overcoming several organizational barriers. However, to succeed, corrective actions must be defined that help the organization prevent the measures from becoming yet more unused statistics, and instead learn and improve its way of working.

  • 25.
    Leifler, Ola
    et al.
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Jenvald, Johan
    VSL Research Labs, Linköpings, Sweden.
    Critique and Visualization as decision support for mass-casualty emergency management2005In: Proceedings of the Second International ISCRAM Conference, Brussels, Belgium: Royal Flemish Academy of Belgium , 2005, p. 155-Conference paper (Refereed)
    Abstract [en]

    Emergency management in highly dynamic situations consists of exploring options to solve a planning problem. This task can be supported through the use of visual cues that are based on knowledge of the current domain. We present an approach that uses visualization of critical constraints in timelines and hierarchical views as decision support in mass-casualty emergency situations.

  • 26.
    Krhan, Jasmin
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Design och utvärdering av anpassningsbara webbplatser: en fallstudie2015Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [sv]

    As the use of smartphones and handheld devices increases, the demands on the design and performance of websites today are entirely different from those of just 5-6 years ago. A website is now viewed on many different devices and screen sizes. This means that a website primarily developed for a PC screen is probably not very usable when viewed on a mobile or handheld device.

    This report, in the form of a case study, attempts to answer which parts of mobile usability testing can be automated. The report also describes various technology choices and their impact on a website's performance.

    Based on this case study, it turned out that some parts of mobile usability can be evaluated without users. There are parts that cannot be automated and must be evaluated manually. When it comes to technology choices and their implementation on a website, the report shows that not all technology choices are suitable in all situations.

  • 27.
    Hedlin, Johan
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Kahlström, Joakim
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Detecting access to sensitive data in software extensions through static analysis2019Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    Static analysis is a technique for automatically auditing code without having to execute or manually read through it. It is highly effective and can scan large amounts of code or text very quickly. This thesis uses static analysis to find potential threats within a software's extension modules. These extensions are developed by third parties and should not be allowed to access information belonging to other extensions. However, due to the structure of the software, there is no easy way to restrict this and still keep the software's functionality intact. A static analysis tool could detect such threats by analyzing the code of an extension before it is published online, and therefore keep all current functionality intact. As the software is based on a lesser-known language and the specific threat is information disclosure, a new static analysis tool has to be developed. To achieve this, language-specific functionality and features available in C++ are combined to create an extendable tool which has the capability to detect cross-extension data access.
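
    The kind of check such a tool performs can be illustrated with a toy pattern-based scanner. The `storage.get` API, the namespacing rule, and the extension names below are all invented stand-ins; the thesis's tool targets a proprietary language and a richer analysis than a single regular expression.

    ```python
    import re

    # Hypothetical rule: an extension may only read storage keys under its
    # own namespace, e.g. extension "myext" may access "myext.*" only.
    ACCESS = re.compile(r'storage\.get\(\s*"([^"]+)"\s*\)')

    def find_cross_extension_access(extension_name, source):
        """Return storage keys read by `source` that belong to another
        extension; a toy stand-in for the thesis's analysis tool."""
        violations = []
        for match in ACCESS.finditer(source):
            key = match.group(1)
            if not key.startswith(extension_name + "."):
                violations.append(key)
        return violations

    code = '''
    value = storage.get("myext.settings")
    other = storage.get("payments.secret_token")
    '''
    print(find_cross_extension_access("myext", code))
    ```

    Running the scanner before an extension is published, as the abstract proposes, keeps the software's runtime permissive while still catching information-disclosure patterns ahead of deployment.
    
    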

  • 28.
    Leifler, Ola
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces.
    Johansson, Björn
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, CSELAB - Cognitive Systems Engineering Laboratory.
    Rigas, Georgios
    KVI Försvarshögskolan.
    Persson, Mats
    KVI Försvarshögskolan.
    Developing critiquing systems for network organizations.2004In: IFIP 13.5 Working Conference on Human Error, Safety and Systems Development,2004, Heidelberg, Germany: Springer Verlag , 2004, p. 10-Conference paper (Refereed)
  • 29.
    Leifler, Ola
    et al.
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Johansson, Björn
    Linköping University, Department of Computer and Information Science, CSELAB - Cognitive Systems Engineering Laboratory. Linköping University, The Institute of Technology.
    Persson, Mats
    National Defence College, Sweden.
    Rigas, Georgios
    National Defence College, Sweden.
    Development of Critiquing Systems in Networked Organizations2004In: Human Error, Safety and Systems Development, Springer US , 2004, p. 31-43Conference paper (Refereed)
    Abstract [en]

    Recently, network organizations have been suggested as a solution for future crisis management and warfare. This will, however, have consequences for the development of decision support and critiquing systems. This paper suggests that there are special conditions that need to be taken into account when providing the means for decision-making in networked organizations. Hence, three research problems are suggested that need to be investigated in order to develop useful critiquing systems for future command and control systems.

  • 30.
    Leifler, Ola
    et al.
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Eriksson, Henrik
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Domain-specific knowledge management in a Semantic Desktop2009In: Proceedings of I-KNOW '09 9th International Conference on Knowledge Management and Knowledge Technologies / [ed] Klaus Tochtermann, 2009, p. 360-365Conference paper (Refereed)
    Abstract [en]

    Semantic Desktops hold promise to provide intelligent information-management environments that can respond to users’ needs. A critical requirement for creating such environments is that the underlying ontology reflects the context of work properly. For specialized work domains where people deal with rich information sources in a context-specific manner, there may be a significant amount of domain-specific information available in text documents, emails and other domain-dependent data sources. We propose that this can be exploited to great effect in a Semantic Desktop to provide much more effective knowledge management. In this paper, we present extensions to an existing semantic desktop through content- and structure-based information extraction, domain-specific ontological extensions as well as visualization of semantic entities. Our extensions are justified by needs in strategic decision making, where domain-specific, well-structured knowledge is available in documents and communications but scattered across the desktop. The consistent and efficient use of these resources by a group of co-workers is critical to success. With a domain-aware semantic desktop, we argue that decision makers will have a much better chance of successful sense making in strategic decision making.

  • 31.
    Spetz-Nyström, Simon
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Dynamic updates of mobile apps using JavaScript2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Updates are a natural part of the life cycle of an application. The traditional way of updating an application, by stopping it, replacing it with the new version and restarting it, is lacking in many ways. There has been previous research in the field of dynamic software updates (DSU) that attempts to solve this problem by updating the application while it is running. Most of the previous research has focused on static languages like C and Java; research on dynamic languages has been lacking.

    This thesis takes advantage of the dynamic features of JavaScript to allow for dynamic updates of applications for mobile devices. The solution is implemented and used to answer questions about how correctness can be ensured and what state transfer needs to be manually written by a programmer. The conclusion is that most failures that occur as the result of an update and require a manually written state transfer fall into one of three categories. To verify the correctness of an update, tests for these types of failures should be performed.
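The state transfer the abstract mentions can be sketched in a few lines. The thesis targets JavaScript; this is a hedged Python illustration of the same idea (the app, its state layout and the handler names are invented, not taken from the thesis): when a new code version changes the shape of the running state, a manually written transfer function migrates the old state before the new handler takes over.

```python
# Version 1 of a hypothetical app keeps a single click counter.
state_v1 = {"clicks": 3}

def handle_click_v1(state):
    state["clicks"] += 1
    return state["clicks"]

# Version 2 counts clicks per button, so the old state no longer fits
# the new code: this is where a manually written state transfer is needed.
def transfer_v1_to_v2(old_state):
    return {"clicks": {"default": old_state["clicks"]}}

def handle_click_v2(state, button="default"):
    state["clicks"][button] += 1
    return state["clicks"][button]

# The dynamic update: migrate state, then swap in the new handler
# without restarting the "app".
state_v2 = transfer_v1_to_v2(state_v1)
count = handle_click_v2(state_v2)
assert count == 4          # history from v1 survives the update
```

A test for exactly this kind of failure (new code reading old-shaped state) is what the thesis suggests writing for each update.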

  • 32.
    Ögren, Mikael
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Wikblad, Ludwig
    Linköping University, Department of Computer and Information Science, Software and Systems.
    En testprocess för webbutvecklingsprojekt med små team [A test process for web development projects with small teams], 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Finding a suitable approach for testing in small development teams is a challenge. Many small companies view traditional test processes and test process improvement models as too resource-intensive for their needs. Minimal Test Practice Framework (MTPF) is a testing framework whose purpose is to provide a minimalistic approach to test improvement. The goal of this study was to examine how MTPF can be adapted to a small development team without incurring a time cost that the team would experience as too high. The study was performed in the department Web & Mobile of the company Exsitec, where teams of 2-6 people develop web applications for business customers. During the study a test process was developed in close cooperation with the developers of the department, with the aim of adapting it as well as possible to the department's needs. The study was performed as action research in three phases, according to the method Cooperative Method Development, in a project with two developers. During the first phase all developers in the department were interviewed to establish an understanding of the environment for the study. During the second phase a set of possible improvements was developed together with the developers. During the third phase some of these improvements were implemented and evaluated. By focusing on unit testing of central business logic in the application, the developed test process improved the developers' confidence in the code quality without being perceived as too resource-intensive.
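"Unit testing central business logic" is the core of the adapted process. As a hedged sketch (the discount rule below is invented, not from Exsitec's applications), the idea is to test the pure business rules directly, without UI or database involvement:

```python
# Hypothetical piece of central business logic: an order total with a
# percentage discount. Testing this directly is cheap and gives the
# confidence the study describes, without exercising the whole web stack.
def order_total(prices, customer_discount):
    """Sum item prices and apply a percentage discount in [0, 100]."""
    if not 0 <= customer_discount <= 100:
        raise ValueError("discount must be a percentage")
    return sum(prices) * (1 - customer_discount / 100)

# Plain assertions; in practice these would live in a unittest/pytest suite.
assert order_total([100, 50], 0) == 150
assert order_total([100, 50], 50) == 75
try:
    order_total([100], 150)
    raised = False
except ValueError:
    raised = True
assert raised          # invalid input is rejected, not silently accepted
```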

  • 33.
    French, Kimberley
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Energy Consumption of In-Vehicle Communication in Electric Vehicles: A comparison between CAN, Ethernet and EEE, 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    As a step towards decreasing the greenhouse gas emissions caused by the transport sector, electric vehicles (EVs) have become more and more popular. Two major problem areas the EV industry currently faces are range limitations, i.e. being restricted by the capacity of the battery, and a demand for higher bandwidth as in-vehicle communication increases. In this thesis, an attempt is made to address these problem areas by examining the energy consumption required by Controller Area Network (CAN) and Ethernet. In addition, the effects of Energy-Efficient Ethernet (EEE) are reviewed. The protocols are examined by performing a theoretical analysis of CAN, Ethernet and EEE, physical tests of CAN and Ethernet, as well as simulations of EEE. The results show that Ethernet requires 2.5 to 4 times more energy than CAN in theory, and 4.5 to 6 times more based on physical measurements. The energy consumption of EEE depends on usage, ranging from 40% less than CAN when idle up to the same level as regular Ethernet at high utilisation. By taking full advantage of the traits of Time-Sensitive Networking, EEE has the potential to significantly decrease the amount of energy consumed compared to standard Ethernet while still providing a much higher bandwidth than CAN, at the cost of introducing short delays. This thesis provides insight into the behaviour of a transmitter for each of the three protocols, discusses the energy implications of replacing CAN with Ethernet and highlights the importance of understanding how to use Ethernet and EEE efficiently.
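The qualitative relationships reported above can be captured in a toy energy model. All power figures below are invented for illustration (not the thesis's measured values); they are only chosen to sit inside the reported ranges: Ethernet several times CAN's draw, EEE 40% below CAN when idle and approaching regular Ethernet when saturated.

```python
# Assumed transceiver/PHY power draws in watts -- illustrative only.
P_CAN = 0.2          # CAN transceiver
P_ETH = 0.8          # Ethernet PHY (4x CAN, inside the reported 2.5-6x range)
P_EEE_SLEEP = 0.12   # EEE low-power idle (40% below CAN, as reported)

def eee_power(utilisation):
    """Average EEE power as a mix of active time and low-power idle."""
    return utilisation * P_ETH + (1 - utilisation) * P_EEE_SLEEP

assert P_ETH / P_CAN == 4.0
assert eee_power(0.0) == P_EEE_SLEEP             # idle: well below CAN
assert eee_power(1.0) == P_ETH                   # saturated: like Ethernet
assert eee_power(0.0) < P_CAN < eee_power(1.0)   # EEE crosses CAN's level
```

The crossover point is why the thesis stresses understanding utilisation: whether EEE beats CAN on energy depends entirely on how often the link can sleep.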

  • 34.
    Magnusson, Filip
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Evaluating Deep Learning Algorithms for Steering an Autonomous Vehicle, 2018. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    With self-driving cars on the horizon, vehicle autonomy and its problems are a hot topic. In this study we use convolutional neural networks to make a robot car avoid obstacles. The robot car has a monocular camera, and our approach is to use the images taken by the camera as input and then output a steering command. Using this method, the car is to avoid any object in front of it.

    In order to lower the amount of training data needed, we use models that are pretrained on ImageNet, a large image database containing millions of images. The models are then trained on our own dataset, which consists of images taken directly by the robot car while driving around. The images are labeled with the steering command used while taking the image. While training, we experiment with different numbers of frozen layers. A frozen layer is a layer that has been pretrained on ImageNet but is not trained on our dataset.

    The Xception, MobileNet and VGG16 architectures are tested and compared to each other.

    We find that a lower number of frozen layers produces better results, and our best model, which used the Xception architecture, achieved 81.19% accuracy on our test set. During a qualitative test the car avoided collisions 78.57% of the time.
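What "freezing the first k layers" does can be shown without the real architectures. This is a hedged toy sketch, not the thesis's Keras code: a model is just a list of layers with a parameter count and a trainable flag, which is enough to see how freezing shrinks the set of parameters that training can touch (the layer sizes are made up).

```python
def build_model(layer_sizes):
    """A toy 'model': one dict per layer, all trainable by default."""
    return [{"params": n, "trainable": True} for n in layer_sizes]

def freeze_first(model, k):
    """Mark the first k (pretrained) layers as not trainable."""
    for layer in model[:k]:
        layer["trainable"] = False
    return model

def trainable_params(model):
    return sum(layer["params"] for layer in model if layer["trainable"])

model = build_model([1000, 2000, 4000, 500])   # invented layer sizes
freeze_first(model, 2)                          # keep ImageNet weights fixed
assert trainable_params(model) == 4500          # only the last two layers train
```

Fewer frozen layers means more of these parameters adapt to the robot-car dataset, which is consistent with the result that less freezing performed better here.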

  • 35.
    Wennberg, Per
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Danielson, Viktor
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Evaluation of a Testing Process to Plan and Implement an Improved Test System: A Case Study, Evaluation and Implementation in LabVIEW/TestStand, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In order to ensure the quality of a product, the provider of the product must perform complete testing of the product. This fact increases the demands on the test systems used to conduct the testing: the system needs to be reliable. When developing new software for a company, a requirements specification created at the beginning of the project is sometimes not enough; details of the desired implementation may get lost when working with a general requirements specification. This thesis presents a case study of how a certain company works with its test systems. The aim of the case study was to find where the largest improvements could be made in a new test system, which was to be implemented during this thesis work. The implementation of this new system was done in LabVIEW in conjunction with TestStand, and this process is covered in this thesis. The case study revealed that the employees at the company found robustness and usability to be the key factors in a new test system. During and after the implementation of the new system, it was evaluated with regard to these two metrics; this process is also covered in this thesis.

  • 36.
    Birksjö, Marcus
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Event-based diagnostics in heavy-duty vehicles, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The integration of small computer units in vehicles has made new and more complex functionality possible within the vehicle industry. Verifying that the functionality is working, and troubleshooting it when a fault is detected, requires a set of diagnostic services. Due to the increasing complexity of the functionality, the diagnostic services need to extract more data to be able to diagnose the functionality. This causes an increased network load which soon threatens to become too high for some of the current networks. New ways to diagnose functionality in vehicles are therefore needed.

    The aim of this thesis was to investigate the need for an event-based service within the domain of vehicle diagnostics, and to present a recommendation for how such a service should be designed. The thesis also aimed at eliciting obstacles and pitfalls connected with the implementation of the service in the current software architecture of heavy-duty vehicles.

    An industrial case study was performed at the Swedish company Scania to elicit the potential need, problems and limitations associated with an event-based service for vehicle diagnostics. First, a set of experts representing different domains within vehicle diagnostics were interviewed to investigate the need for and potential of the service for different use cases. Requirements were elicited and compared with the service ResponseOnEvent defined in the ISO standard 14229-1:2013. A decision was then made to diverge from the standard in order to increase the number of fulfilled requirements and the flexibility of the service. A new proprietary service was therefore created and evaluated through a proof of concept where a prototype of the service was implemented in one client and one server control unit. A final recommendation was then given, suggesting how to implement an event-based service and how to solve the problems found.

    The elicitation of the need for an event-based service resulted in a confirmed need in three different domains and 23 different requirements against which the service ResponseOnEvent was compared. The service failed to meet all the requirements, and therefore a proprietary service was designed. The prototype implementation of the proprietary service revealed multiple difficulties connected to the realization of an event-based service in the current architecture. One of the biggest was the fact that diagnostic services were assumed to always have a one-to-one relation between request and response, which an event-based service would not have. Different workarounds were discovered and assessed. Another problem was the linking between an event-triggered response message and its trigger condition. It was concluded that some restrictions would have to be made to facilitate the process of linking a response to its trigger condition. Non-determinism was another problem, since there were no guarantees that an event would not trigger too often, causing a bus overload. The final recommendation includes suggestions for how to solve these problems and some suggested areas for further research.

    The thesis confirms the need for a new way to diagnose vehicle functionality, due to its increased complexity and the limited bandwidth of some of today’s in-vehicle networks. The event-based service ResponseOnEvent offers a good alternative but might lack some key functionality; therefore it might be valuable to consider a proprietary service instead. Due to its nature, an event-based service might require a restructuring of the system architecture, and limitations in the hardware might limit the usability and flexibility of the service.

    Keywords: event-based service, Response on Event, ECU, vehicle diagnostics, UDS, KWP.
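The one-to-many relation that broke the old assumptions can be sketched abstractly. This is a hedged Python illustration in the spirit of a ResponseOnEvent-style service, not Scania's implementation; the message formats, signal names and threshold are invented. One registration request later produces a response every time the trigger condition fires, and each response is tagged with its trigger id so it can be linked back to its condition (one of the problems the thesis discusses).

```python
class DiagnosticServer:
    def __init__(self):
        self.triggers = []       # (trigger_id, predicate, data_id)
        self.responses = []      # event-triggered responses sent so far

    def register_event(self, trigger_id, predicate, data_id):
        """One request sets up the trigger; no polling after this."""
        self.triggers.append((trigger_id, predicate, data_id))

    def on_signal(self, signals):
        """Called when internal signals change; emits a response on match."""
        for trigger_id, predicate, data_id in self.triggers:
            if predicate(signals):
                # Tag with the trigger id so the client can link the
                # response back to its trigger condition.
                self.responses.append((trigger_id, signals[data_id]))

server = DiagnosticServer()
server.register_event("T1", lambda s: s["coolant_temp"] > 100, "coolant_temp")

server.on_signal({"coolant_temp": 90})    # below threshold: nothing sent
server.on_signal({"coolant_temp": 105})   # fires
server.on_signal({"coolant_temp": 110})   # fires again: one request, many responses

assert server.responses == [("T1", 105), ("T1", 110)]
```

The non-determinism problem is visible here too: nothing bounds how often `on_signal` may fire, which is why the thesis recommends restrictions against bus overload.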

  • 37.
    Lindgren, Erik
    et al.
    Linköping University, Department of Computer and Information Science.
    Allard, Niklas
    Linköping University, Department of Computer and Information Science.
    Exploring unsupervised anomaly detection in Bill of Materials structures, 2019. Independent thesis, Basic level (degree of Bachelor), 10.5 credits / 16 HE credits. Student thesis.
    Abstract [en]

    Siemens produces a variety of products that provide innovative solutions within areas such as electrification, automation and digitalization, some of which are turbine machines. During the process of creating or modifying a machine, it is vital that the documentation used as reference is trustworthy and complete. If the documentation is incomplete during the process, the risk of delivering faulty machines to customers drastically increases, causing potential harm to Siemens. This thesis aims to explore the possibility of finding anomalies in Bill of Materials structures, in order to determine the completeness of a given machine structure. A prototype that determines the completeness of a given machine structure by utilizing anomaly detection was created. Three different anomaly detection algorithms were tested in the prototype: DBSCAN, LOF and Isolation Forest. From the tests, we could see indications of DBSCAN generally performing the best, making it the algorithm of choice for the prototype. In order to achieve more accurate results, more tests need to be performed.
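DBSCAN, the algorithm the prototype favoured, flags points that fall in no dense region as noise — which is exactly what "anomaly" means here. A minimal pure-Python sketch follows (a real prototype would use a library such as scikit-learn; the one-dimensional data below is invented and stands in for some numeric feature of a machine structure):

```python
def region_query(points, i, eps):
    """Indices of all points within eps of point i (1-D for brevity)."""
    return [j for j, p in enumerate(points) if abs(p - points[i]) <= eps]

def dbscan(points, eps, min_pts):
    """Return a label per point: cluster id (0, 1, ...) or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbours = region_query(points, i, eps)
        if len(neighbours) < min_pts:
            labels[i] = -1                   # noise: an anomaly candidate
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neighbours)
        while seeds:                          # grow the cluster outwards
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster           # noise reached from a core: border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbours = region_query(points, j, eps)
            if len(j_neighbours) >= min_pts:
                seeds.extend(j_neighbours)    # j is a core point: keep expanding
    return labels

# Two tight groups plus one outlier "incomplete" structure.
data = [1.0, 1.1, 1.2, 5.0, 5.1, 5.2, 9.7]
labels = dbscan(data, eps=0.5, min_pts=2)
assert labels[-1] == -1                      # the outlier is flagged as noise
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```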

  • 38.
    Andersson, Filip
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Fault Diagnosis in Distributed Simulation Systems over Wide Area Networks using Active Probing, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The domain of distributed simulation is growing rapidly. This growth leads to larger and more complex supporting network architectures with high requirements on availability and reliability. For this purpose, efficient fault monitoring is required. This work is an attempt to evaluate the viability of an active-probing approach in a distributed simulation system in a wide area network setting. In addition, some effort was directed towards building the probing software with future extensions in mind. The active-probing approach was implemented and tested against certain performance requirements in a simulated environment. It was concluded that the approach is viable for detecting the health of the network components. However, additional research is required to draw a conclusion about the viability in more complicated scenarios that depend on more than the responsiveness of the nodes. The implemented software was also evaluated with the QMOOD metric and was not deemed particularly extensible.
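The core of active probing is simple: send probes and infer health from whether answers arrive within a deadline. A hedged sketch (the network here is a simulated dict of round-trip times; node names and the timeout are invented, not the thesis's configuration):

```python
TIMEOUT = 0.5  # seconds a probe waits before declaring the node down

def probe(network, node):
    """Return True if the node answers before the timeout.

    `network` maps node name -> simulated round-trip time in seconds;
    a missing entry models a lost probe.
    """
    rtt = network.get(node)
    return rtt is not None and rtt <= TIMEOUT

def probe_all(network, nodes):
    """Actively probe every node and report its inferred health."""
    return {node: ("up" if probe(network, node) else "down") for node in nodes}

# Simulated RTTs for a small distributed-simulation network.
network = {"gateway": 0.02, "sim-host-1": 0.31, "sim-host-2": 0.9}
status = probe_all(network, ["gateway", "sim-host-1", "sim-host-2", "sim-host-3"])

assert status == {"gateway": "up", "sim-host-1": "up",
                  "sim-host-2": "down", "sim-host-3": "down"}
```

This only captures responsiveness — which is precisely the limitation the thesis notes for scenarios that depend on more than whether nodes answer.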

  • 39.
    Laurentz, Henrik
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Feasibility of using network support data to predict risk level of trouble tickets, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Internet Service Providers gather vast amounts of data in the form of trouble tickets created from connectivity-related issues. This data is often stored and seldom used for proactive purposes. This thesis explores the feasibility of finding correlations in network support data through the use of data mining activities. Correlations such as these could be used for improving troubleshooting or staffing-related activities. The approach uses the data mining methodology CRISP-DM to investigate typical data mining operations from the perspective of a Network Operation Center. The results show that correlations between the solving time and other ticket-related attributes do exist and that support data could be used for the activities mentioned. The results also show that there is a lot of room for improvement when it comes to data mining activities on network support data.
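The kind of correlation the thesis looks for — between solving time and another ticket attribute — can be illustrated with Pearson's r computed by hand. The ticket attributes and values below are invented example data, not the ISP's:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

priority = [1, 1, 2, 3, 3, 4]        # hypothetical ticket priority field
solve_hours = [2, 3, 5, 8, 9, 12]    # hypothetical time to resolution

r = pearson(priority, solve_hours)
assert r > 0.9                       # strongly correlated in this toy data
```

A correlation like this is what would let a Network Operation Center estimate staffing needs from attributes known at ticket creation time.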

  • 40.
    Golubovic, Denis
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Nieminen, Niklas
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Hardware test equipment utilization measurement, 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Today’s software developers are familiar and often faced with the challenge of strict deadlines, which can be worsened further by a lack of resources for testing purposes. In order to measure the true utilization and provide relevant information to address this problem, the RCI-lab Resource Utilization tool was created. The tool was created with information from interviews conducted with developers from different teams, who all agreed that the main reason for over-booking resources is to make sure that they have access when they really need it. A model for resource utilization was defined and used as a basis for the thesis. The developed tool was later used to measure and visualize the real utilization of hardware resources, where the results confirmed the information provided in the interviews. The interview participants estimated the true utilization to be about 20-30% of twenty-four hours. The data collected from the RCI-lab Resource Utilization tool showed an overall average utilization of about 33%, which corresponds well with the developers' estimate. It was also shown that for the majority of the resources, the maximum utilization level reached about 60% of the booked time. This overbooking is believed to be due to the need to always have a functioning resource, and could possibly be a result of the agile environment, where resources are a necessity in order to be able to finish the short sprints in time. Even though Ericsson invests in new resources to meet the need, the developers still find it difficult to get access to the resources when they really need them. The developers at the studied department at Ericsson work with Scrum, where the sprints are 1.5 weeks long. The need for hardware resources varies depending on the tasks in the given sprint, which makes it very difficult to predict when a resource is needed.
    The created tool is also intended to help the stakeholders at the studied department at Ericsson in making investment decisions for new resources, and to work as a basis for future implementation on additional resource types. Resource utilization is important in many organizations, and this thesis provides different perspectives on approaching this matter.
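The ~33%-of-a-day and ~60%-of-booked-time figures above are simple ratios. A hedged sketch of how such a tool might compute them (the booking numbers are invented example data, not Ericsson's measurements):

```python
HOURS_PER_DAY = 24

def utilisation(used_hours, booked_hours):
    """Fraction of the booked time that was actually used."""
    return used_hours / booked_hours

def utilisation_of_day(used_hours):
    """Fraction of the full 24-hour day the resource was in use."""
    return used_hours / HOURS_PER_DAY

# A test rig booked for 14 hours of a day but actually used for 8:
assert utilisation_of_day(8) == 1 / 3          # ~33% of 24h, as in the study
assert round(utilisation(8, 14), 2) == 0.57    # just under the ~60% ceiling
```

The gap between the two ratios is the overbooking the interviews describe: booked-but-idle time held "just in case".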

  • 41.
    Hero-Ek, Pontus
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Improving AR visualization with Kalman filtering and horizon-based orientation: to prevent boats from running aground at sea, 2018. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis researched the possibility of improving the compass of smartphones, as the earth’s magnetic field is not strong and is easily disturbed, either by the environment or by technology. The compass is used in Augmented Reality (AR) when the AR visualization should correspond to a position on earth. The issue lies in oscillating input values to the compass that reduce the AR experience. To improve the AR experience without the use of external equipment, this work tried both to filter the incoming values with a Kalman filter and to determine the direction by capturing an image with a horizon that was image processed. The Kalman filter achieved a reduction in incoming disturbances, and the horizon was matched against a panorama image that was generated from 3D data. The thesis starts off with requirements and contents of AR and goes through the different approaches, which begin with a LAS point cloud and end in matching horizons with normalized cross-correlation. This thesis furthermore measures performance and battery drainage of the built application on three different smartphones that are nearly a year apart each. Drift was also measured, as it is a common issue if there is no earthly orientation to correct against, for instance the magnetometer. This showed that these methods can be used on the OnePlus 2, Samsung Galaxy S7 and Samsung Galaxy S8, that there is a steady performance and efficiency increase in each generation, and that ARCore causes less drift. Furthermore, this thesis shows the difference between a compass and a local orientation with an offset. The application that was made was focused on working at sea, but it was also tested on buildings with good results. The application also underwent usability tests that showed that the applied functionalities improved the AR experience. The conclusion shows that it is possible to improve the orientation of smartphones, albeit it can go wrong sometimes, which is why this thesis also presents two ways to indicate that the heading is off.
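The Kalman-filter part can be shown with a minimal one-dimensional filter: each noisy magnetometer reading nudges a smoothed heading estimate by an amount set by the Kalman gain. This is a hedged sketch of the technique, not the thesis's tuned implementation; the noise parameters and readings are illustrative.

```python
def kalman_smooth(readings, q=0.01, r=4.0):
    """1-D Kalman filter over scalar heading readings.

    q: assumed process noise (how fast the true heading can change),
    r: assumed measurement noise (how jittery the magnetometer is).
    Returns the list of smoothed estimates.
    """
    x, p = readings[0], 1.0           # initial state estimate and variance
    out = [x]
    for z in readings[1:]:
        p += q                        # predict: uncertainty grows a little
        k = p / (p + r)               # Kalman gain: trust in the new reading
        x += k * (z - x)              # update toward the measurement
        p *= (1 - k)
        out.append(x)
    return out

# A steady 90-degree heading with oscillating disturbance:
noisy = [90, 94, 86, 93, 87, 92, 88, 91, 89]
smooth = kalman_smooth(noisy)

assert max(smooth) - min(smooth) < max(noisy) - min(noisy)  # oscillation damped
assert abs(smooth[-1] - 90) < 2     # estimate stays near the true heading
```

With a small `q` relative to `r`, the filter distrusts individual readings, which is what suppresses the oscillating compass input the abstract describes.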

  • 42.
    Gomes de Oliveira Neto, Francisco
    et al.
    Chalmers/University of Gothenburg, Sweden.
    Ahmad, Azeem
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Enoiu, Eduard
    Mälardalen University, Sweden.
    Improving continuous integration with similarity-based test case selection, 2018. In: Proceedings of the 13th International Workshop on Automation of Software Test, New York: ACM Digital Library, 2018, p. 39-45. Conference paper (Refereed).
    Abstract [en]

    Automated testing is an essential component of Continuous Integration (CI) and Delivery (CD), such as scheduling automated test sessions on overnight builds. That allows stakeholders to execute entire test suites and achieve exhaustive test coverage, since running all tests is often infeasible during work hours, i.e., in parallel to development activities. On the other hand, developers also need test feedback from CI servers when pushing changes, even if not all test cases are executed. In this paper we evaluate similarity-based test case selection (SBTCS) on integration-level tests executed on continuous integration pipelines of two companies. We select test cases that maximise diversity of test coverage and reduce feedback time to developers. Our results confirm existing evidence that SBTCS is a strong candidate for test optimisation, by reducing feedback time (up to 92% faster in our case studies) while achieving full test coverage using only information from test artefacts themselves.
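The core idea of similarity-based selection — "maximise diversity" — can be sketched as a greedy selection that repeatedly picks the test most dissimilar (1 − Jaccard similarity of covered items) from everything chosen so far. This is a hedged illustration of the general technique, not the paper's exact algorithm; test names and covered items are invented.

```python
def jaccard(a, b):
    """Jaccard similarity of two coverage sets (1.0 if both are empty)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def select_diverse(suite, budget):
    """Greedily pick `budget` tests maximising distance to those chosen."""
    names = list(suite)
    chosen = [names[0]]                      # seed with the first test
    while len(chosen) < budget:
        best = max(
            (n for n in names if n not in chosen),
            key=lambda n: min(1 - jaccard(suite[n], suite[c]) for c in chosen),
        )
        chosen.append(best)
    return chosen

suite = {
    "t_login":   {"auth", "session"},
    "t_logout":  {"auth", "session", "cleanup"},   # similar to t_login
    "t_payment": {"billing", "invoice"},           # covers different code
}
assert select_diverse(suite, 2) == ["t_login", "t_payment"]
```

Running such a diverse prefix first is what shortens feedback time on a CI server: near-duplicate tests like `t_logout` wait until the overnight run.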

  • 43.
    Hellenberg, Rickard
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Improving the Performance of the Eiffel Event Persistence Solution, 2019. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Deciding which database management system (DBMS) to use has perhaps never been harder. In recent years there has been an explosive growth of new types of database management systems that address different issues and perform well for different scenarios. This thesis is an improving case study of an Event Persistence Solution for the Eiffel Framework, which is a framework used for achieving traceability in very-large-scale systems development. The purpose of this thesis is to investigate whether it is possible to improve the performance of the Eiffel Event Persistence Solution by changing from MongoDB to Elasticsearch or ArangoDB. Experiments were conducted to measure the request throughput for four types of requests. As a prerequisite to measuring the performance, support for the different DBMSs, and the possibility to change between them, was implemented. The results showed that Elasticsearch performed better than MongoDB in terms of nested-document search as well as for graph-traversal operations. ArangoDB had even better performance for graph-traversal operations but had inadequate performance for nested-document search.

  • 44.
    Jensen, Henrik
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Industry Foundation Classes: A study of its requested use in Configura, 2015. Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Configura Sverige AB develops the software solutions Configura and CET Designer for companies dealing with highly configurable and complex products that also require space planning. The aim is to simplify the selling process. Configura Sverige AB has received requests from its customers to be able to read and write files according to an ISO standard called Industry Foundation Classes (IFC). IFC is an open international standard within Building Information Modeling (BIM) for exchanging data between the different software applications used for projects in the building industry and facility management. To assist Configura Sverige AB in a decision on how to proceed, this thesis considers why users request IFC, how they need to work with IFC, and possible workflows with IFC. To answer these questions, an interpretive case study method was used to view them from different perspectives. A qualitative approach was used to collect and analyze data, involving for example a survey among users requesting IFC and input from two different contractors requesting IFC files from these users. The results show that users have been asked by architects and contractors to supply IFC files, and a conclusion is that demands on the use of BIM and IFC within the public sector in certain countries are a major reason for these requests. The results focus largely on import and export of IFC files and on possible workflows using IFC files. With IFC files, users may be part of a collaboration between several different disciplines within the building industry. Users need to base their work on other disciplines' models, which in many cases will be the architect's IFC file. An IFC export shall only include the user's products; it will be up to another application to integrate these products in a coordination BIM.
    The IFC export will be used for interdisciplinary coordination, visualization and collision detection, and it is important to use a simple graphical representation of the products.

  • 45.
    Minder, Patrik
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Introducing modified TypeScript in an existing framework to improve error handling, 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Error messages in compilers are a topic that is often overlooked. The quality of the messages can have a big impact on development time and ease of learning. Another method used to speed up development is to build a domain-specific language (DSL). This thesis migrates an existing framework to TypeScript in order to speed up development with compile-time error handling. Alternative methods for implementing a DSL are evaluated based on how they affect the ability to generate good error messages. This is done using a proposed list of six usability heuristics for error messages. They are also used to perform a heuristic evaluation of the error messages in the TypeScript compiler. This showed that the compiler struggled with syntax errors but had semantic error messages with few usability problems. Finally, a method for implementing a DSL and presenting its error messages is suggested. The evaluation of said method showed promise despite the existence of usability problems.

  • 46.
    Leifler, Ola
    Linköping University, Department of Computer and Information Science.
    Jämförande studie av LEM2 och Dynamiska Redukter [Comparative study of LEM2 and dynamic reducts], 2002. Independent thesis, Basic level (professional degree). Student thesis.
    Abstract [en]

    This thesis presents the results of the implementation and evaluation of two machine learning algorithms [Baz98, GB97] based on notions from Rough Set theory [Paw82]. Both algorithms were implemented and tested using the Weka [WF00] software framework. The main purpose for doing this was to investigate whether the experimental results obtained in [Baz98] could be reproduced by implementing both algorithms in a framework that provided common functionalities needed by both. As a result of this thesis, a Rough Set framework accompanying the Weka system was designed and implemented, as well as three methods for discretization and three classification methods.

    The results of the evaluation did not match those obtained by the original authors. On two standard benchmarking datasets also used previously in [Baz98] (Breast Cancer and Lymphography), significant results indicating that one of the algorithms performed better than the other could not be established, using Student's t-test and a confidence limit of 95%. However, on two other datasets (Balance Scale and Zoo), differences could be established with more than 95% significance. The Dynamic Reduct approach scored better on the Balance Scale dataset whilst the LEM2 approach scored better on the Zoo dataset.

  • 47.
    Leifler, Ola
    et al.
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Eriksson, Henrik
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Message classification as a basis for studying command and control communication: an evaluation of machine learning approaches, 2012. In: Journal of Intelligent Information Systems, ISSN 0925-9902, E-ISSN 1573-7675, Vol. 38, no. 2, p. 299-320. Article in journal (Refereed).
    Abstract [en]

    In military command and control, success relies on being able to perform key functions such as communicating intent. Most staff functions are carried out using standard means of text communication. Exactly how members of staff perform their duties, who they communicate with and how, and how they could perform better, is an area of active research. In command and control research, there is not yet a single model which explains all actions undertaken by members of staff well enough to prescribe a set of procedures for how to perform functions in command and control. In this context, we have studied whether automated classification approaches can be applied to textual communication to assist researchers who study command teams and analyze their actions. Specifically, we report the results from evaluating machine learning with respect to two metrics of classification performance: (1) the precision of finding a known transition between two activities in a work process, and (2) the precision of classifying messages similarly to human researchers that search for critical episodes in a workflow. The results indicate that text-only classification provides higher precision results with respect to both metrics when compared to other machine learning approaches, and that the precision of classifying messages using text-based classification in already classified datasets was approximately 50%. We present the implications that these results have for the design of support systems based on machine learning, and outline how to practically use text classification for analyzing team communications by demonstrating a specific prototype support tool for workflow analysis.

  • 48.
    Norman, Niclas
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Mutation testing as quality assurance in base station software, 2014. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Telecom base stations are a critical part of society's information infrastructure. To ensure high-quality base station software, automated testing is an important part of development. Ericsson measures the quality of automated tests with statement coverage, counting the number of statements executed by a test suite. Alone, however, statement coverage does not guarantee test quality. Mutation testing is a technique to improve test quality by injecting faults and verifying that test suites detect them. This thesis investigates whether mutation testing is a viable way to increase the reliability of test suites for base station software at Ericsson. Using the open-source mutation testing tool MiLu, we describe a practical method of using mutation testing that is viable for daily development. We also describe how mutation testing reveals a number of potential errors in the production code that current test suites miss even though they have very good statement coverage.

  • 49.
    Moral López, Elena
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems.
    Muting pattern strategy for positioning in cellular networks, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Location Based Services (LBS) calculate the position of the user for different purposes such as advertising and navigation. Most importantly, these services are also used to help emergency services by calculating the position of the person who places the emergency phone call. This has introduced a number of requirements on the accuracy of the position measurements. Observed Time Difference of Arrival (OTDOA) is the method used to estimate the position of the user due to its high accuracy. Nevertheless, this method relies on the correct reception of so-called positioning signals, and therefore the calculations can suffer from errors due to interference between the signals. To lower the probability of interference, muting patterns can be used. These patterns selectively mute certain signals to increase the signal-to-interference-and-noise ratio (SINR) of others and thereby the number of signals detected. In this thesis, a simulation environment for the comparison of different muting patterns has been developed. The already existing muting patterns have been simulated and compared in terms of the number of detected nodes and the SINR values achieved. A new muting pattern has been proposed and compared to the others. The results obtained are presented, and an initial conclusion on which of the muting patterns offers the best performance is drawn.
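The effect muting has on SINR can be sketched numerically. The received powers below are made-up values, not taken from the thesis's simulation environment: muting the strongest cell removes it from the interference sum, raising the SINR of a weak cell the receiver could otherwise not detect.

```python
import math

# Hypothetical received powers (mW) at one receiver from four cells,
# plus receiver noise power
powers = [8.0, 2.0, 0.5, 0.1]
noise = 0.05

def sinr(powers, i, muted=()):
    """SINR of cell i: its power over the sum of unmuted interferers + noise."""
    interference = sum(p for j, p in enumerate(powers)
                       if j != i and j not in muted)
    return powers[i] / (interference + noise)

# SINR (dB) of the weakest cell, before and after muting the strongest cell
before = 10 * math.log10(sinr(powers, 3))
after = 10 * math.log10(sinr(powers, 3, muted=(0,)))
print(f"no muting: {before:.1f} dB, strongest cell muted: {after:.1f} dB")
```

A muting pattern schedules such silences across cells and time so that, over a full cycle, as many cells as possible rise above the detection threshold.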

  • 50.
    Henrik, Thoreson
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Robin, Wesslund
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Naive Bayes-klassificering av förarbeteende [Naive Bayes classification of driver behaviour], 2017. Independent thesis Basic level (degree of Bachelor), 10.5 credits / 16 HE credits. Student thesis
    Abstract [en]

    Classifying a driving style means classifying driving behaviour, which is the foundation of safety- and eco-driving classification.

    In this thesis we let two drivers drive a car in an attempt to classify, with a desired accuracy of around 90%, which one of us drove the car. This was done exclusively using speed and rpm values read from the car's OBD-II port via the CAN bus. We approached this as a text classification problem, using two common Naive Bayes models, Multinomial and Gaussian Naive Bayes, together with N-grams and discretization.

    We found that Multinomial Naive Bayes with 4-grams, non-discretized speed values, and discretized rpm values resulted in an average accuracy of 91.48% in predicting the driver.
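Treating a sensor trace as text, as the thesis does, can be sketched like this. The bin width, rpm traces, and driver labels are invented for illustration; the thesis' actual data came from the car's OBD-II port, and its best configuration also mixed in non-discretized speed values.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def to_symbols(rpm_values, bin_width=500):
    """Discretize raw rpm readings into bin labels, joined as a 'sentence'."""
    return " ".join(f"b{int(v // bin_width)}" for v in rpm_values)

# Hypothetical rpm traces: driver A revs high, driver B keeps the rpm low
traces = [
    to_symbols([1500, 1600, 1800, 2100, 2400, 2600]),  # driver A
    to_symbols([1400, 1700, 2000, 2300, 2500, 2700]),  # driver A
    to_symbols([1100, 1200, 1300, 1200, 1100, 1200]),  # driver B
    to_symbols([1000, 1100, 1200, 1300, 1200, 1100]),  # driver B
]
labels = ["A", "A", "B", "B"]

# 4-grams over the bin symbols, classified with Multinomial Naive Bayes
clf = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(4, 4)),
    MultinomialNB(),
)
clf.fit(traces, labels)
print(clf.predict([to_symbols([1450, 1650, 1900, 2200, 2450, 2650])]))
```

The 4-grams capture short characteristic sequences of rpm bins, so the classifier learns patterns of how a driver changes engine speed rather than individual readings.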
