liu.se: Search for publications in DiVA
1-50 of 210 hits
  • 1.
    Agirre, Jon
    et al.
    Univ York, England.
    Atanasova, Mihaela
    Univ York, England.
    Bagdonas, Haroldas
    Univ York, England.
    Ballard, Charles B.
    Rutherford Appleton Lab, England; Rutherford Appleton Lab, England.
    Basle, Arnaud
    Newcastle Univ, England.
    Beilsten-Edmands, James
    Diamond Light Source, England.
    Borges, Rafael J.
    Univ Campinas UNICAMP, Brazil.
    Brown, David G.
    Lab Servier SAS Inst Rech, France.
    Burgos-Marmol, J. Javier
    Univ Liverpool, England.
    Berrisford, John M.
    European Mol Biol Lab, England.
    Bond, Paul S.
    Univ York, England.
    Caballero, Iracema
    CSIC, Spain.
    Catapano, Lucrezia
    MRC Lab Mol Biol, England; Kings Coll London, England.
    Chojnowski, Grzegorz
    European Mol Biol Lab, Germany.
    Cook, Atlanta G.
    Univ Edinburgh, Scotland.
    Cowtan, Kevin D.
    Univ York, England.
    Croll, Tristan I.
    Univ Cambridge, England; Altos Labs, England.
    Debreczeni, Judit E.
    AstraZeneca, England.
    Devenish, Nicholas E.
    Diamond Light Source, England.
    Dodson, Eleanor J.
    Univ York, England.
    Drevon, Tarik R.
    Rutherford Appleton Lab, England; Rutherford Appleton Lab, England.
    Emsley, Paul
    MRC Lab Mol Biol, England.
    Evans, Gwyndaf
    Diamond Light Source, England; Rosalind Franklin Inst, England.
    Evans, Phil R.
    MRC Lab Mol Biol, England.
    Fando, Maria
    Rutherford Appleton Lab, England; Rutherford Appleton Lab, England.
    Foadi, James
    Univ Bath, England.
    Fuentes-Montero, Luis
    Diamond Light Source, England.
    Garman, Elspeth F.
    Univ Oxford, England.
    Gerstel, Markus
    Diamond Light Source, England.
    Gildea, Richard J.
    Diamond Light Source, England.
    Hatti, Kaushik
    Univ Cambridge, England.
    Hekkelman, Maarten L.
    Netherlands Canc Inst, Netherlands; Netherlands Canc Inst, Netherlands.
    Heuser, Philipp
    DESY, Germany.
    Hoh, Soon Wen
    Univ York, England.
    Hough, Michael A.
    Diamond Light Source, England; Univ Essex, England.
    Jenkins, Huw T.
    Univ York, England.
    Jimenez, Elisabet
    CSIC, Spain.
    Joosten, Robbie P.
    Netherlands Canc Inst, Netherlands; Netherlands Canc Inst, Netherlands.
    Keegan, Ronan M.
    Rutherford Appleton Lab, England; Rutherford Appleton Lab, England; Univ Liverpool, England.
    Keep, Nicholas
    Birkbeck Coll, England.
    Krissinel, Eugene B.
    Rutherford Appleton Lab, England; Rutherford Appleton Lab, England.
    Kolenko, Petr
    Czech Tech Univ, Czech Republic; Czech Acad Sci, Czech Republic.
    Kovalevskiy, Oleg
    Rutherford Appleton Lab, England; Rutherford Appleton Lab, England.
    Lamzin, Victor S.
    European Mol Biol Lab, Germany.
    Lawson, David M.
    John Innes Ctr, England.
    Lebedev, Andrey A.
    Rutherford Appleton Lab, England; Rutherford Appleton Lab, England.
    Leslie, Andrew G. W.
    MRC Lab Mol Biol, England.
    Lohkamp, Bernhard
    Karolinska Inst, Sweden.
    Long, Fei
    MRC Lab Mol Biol, England.
    Maly, Martin
    Czech Tech Univ, Czech Republic; Czech Acad Sci, Czech Republic; Univ Southampton, England.
    McCoy, Airlie J.
    Univ Cambridge, England.
    McNicholas, Stuart J.
    Univ York, England.
    Medina, Ana
    CSIC, Spain.
    Millan, Claudia
    Univ Cambridge, England.
    Murray, James W.
    Imperial Coll London, England.
    Murshudov, Garib N.
    MRC Lab Mol Biol, England.
    Nicholls, Robert A.
    MRC Lab Mol Biol, England.
    Noble, Martin E. M.
    Newcastle Univ, England.
    Oeffner, Robert
    Univ Cambridge, England.
    Pannu, Navraj S.
    Leiden Univ, Netherlands.
    Parkhurst, James M.
    Diamond Light Source, England; Rosalind Franklin Inst, England.
    Pearce, Nicholas
    Linköping University, Department of Physics, Chemistry and Biology, Bioinformatics. Linköping University, Faculty of Science & Engineering.
    Pereira, Joana
    Univ Basel, Switzerland; Univ Basel, Switzerland.
    Perrakis, Anastassis
    Netherlands Canc Inst, Netherlands; Netherlands Canc Inst, Netherlands.
    Powell, Harold R.
    Imperial Coll London, England.
    Read, Randy J.
    Univ Cambridge, England.
    Rigden, Daniel J.
    Univ Liverpool, England.
    Rochira, William
    Univ York, England.
    Sammito, Massimo
    Univ Cambridge, England; AstraZeneca, England.
    Rodriguez, Filomeno Sanchez
    Univ York, England; Diamond Light Source, England; Univ Liverpool, England.
    Sheldrick, George M.
    Georg August Univ Gottingen, Germany.
    Shelley, Kathryn L.
    Univ Washington, WA 98195 USA.
    Simkovic, Felix
    Univ Liverpool, England.
    Simpkin, Adam J.
    Lab Servier SAS Inst Rech, France.
    Skubak, Pavol
    Leiden Univ, Netherlands.
    Sobolev, Egor
    DESY, Germany.
    Steiner, Roberto A.
    European Mol Biol Lab, England; Univ Padua, Italy.
    Stevenson, Kyle
    Rutherford Appleton Lab, England.
    Tews, Ivo
    Univ Southampton, England.
    Thomas, Jens M. H.
    Univ Liverpool, England.
    Thorn, Andrea
    Univ Hamburg, Germany.
    Trivino Valls, Josep
    CSIC, Spain.
    Uski, Ville
    Rutherford Appleton Lab, England; Rutherford Appleton Lab, England.
    Uson, Isabel
    CSIC, Spain; ICREA, Spain.
    Vagin, Alexei
    Univ York, England.
    Velankar, Sameer
    European Mol Biol Lab, England.
    Vollmar, Melanie
    European Mol Biol Lab, England.
    Walden, Helen
    Univ Glasgow, Scotland.
    Waterman, David
    Rutherford Appleton Lab, England; Rutherford Appleton Lab, England.
    Wilson, Keith S.
    Univ York, England.
    Winn, Martyn D.
    Sci & Technol Facil Council, England.
    Winter, Graeme
    Diamond Light Source, England.
    Wojdyr, Marcin
    Global Phasing Ltd, England.
    Yamashita, Keitaro
    MRC Lab Mol Biol, England.
    The CCP4 suite: integrative software for macromolecular crystallography (2023). In: Acta Crystallographica Section D: Structural Biology, E-ISSN 2059-7983, Vol. 79, p. 449-461. Article in journal (Refereed)
    Abstract [en]

    The Collaborative Computational Project No. 4 (CCP4) is a UK-led international collective with a mission to develop, test, distribute and promote software for macromolecular crystallography. The CCP4 suite is a multiplatform collection of programs brought together by familiar execution routines, a set of common libraries and graphical interfaces. The CCP4 suite has experienced several considerable changes since its last reference article, involving new infrastructure, original programs and graphical interfaces. This article, which is intended as a general literature citation for the use of the CCP4 software suite in structure determination, will guide the reader through such transformations, offering a general overview of the new features and outlining future developments. As such, it aims to highlight the individual programs that comprise the suite and to provide the latest references to them for perusal by crystallographers around the world.

    Download full text (pdf)
    fulltext
  • 2.
    Ahokas, Jakob
    et al.
    Linköping University, Department of Computer and Information Science.
    Persson, Jonathan
    Linköping University, Department of Computer and Information Science.
    Formal security verification of the Drone Remote Identification Protocol using Tamarin (2022). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The current standard for remote identification of unmanned aircraft does not contain any form of security considerations, opening up possibilities for impersonation attacks. The newly proposed Drone Remote Identification Protocol aims to change this. To fully ensure that the protocol is secure before real-world implementation, we conduct a formal verification using the Tamarin Prover tool, with the goal of detecting possible vulnerabilities. The underlying technologies of the protocol are studied and important aspects are identified. The main contribution of this thesis is the formal verification of session key secrecy and message authenticity within the proposed protocol. Certain aspects of protocol security are still missing from the scripts, but the protocol is deemed secure to the extent of the model. Many features of both the protocol and Tamarin Prover are presented in detail, serving as a potential base for the continued work toward a complete formal verification of the protocol in the future.

    Download full text (pdf)
    fulltext
  • 3.
    Albertsson, Marcus
    et al.
    Linköping University, Department of Computer and Information Science.
    Öberg Bustad, Adrian
    Linköping University, Department of Computer and Information Science.
    Sundmark, Mattias
    Linköping University, Department of Computer and Information Science.
    Gerde, Elof
    Linköping University, Department of Computer and Information Science.
    Boberg, Jessika
    Linköping University, Department of Computer and Information Science.
    Abdulla, Ariyan
    Linköping University, Department of Computer and Information Science.
    Danielsson, Oscar
    Linköping University, Department of Computer and Information Science.
    Johnsson Bittmann, Felicia
    Linköping University, Department of Computer and Information Science.
    Moberg, Anton
    Linköping University, Department of Computer and Information Science.
    Hur en webbapplikation kan utvecklas för att leverera säkerhet, handlingsbarhet och navigerbarhet: PimpaOvven – Utveckling av en e-butik för märken och accessoarer till studentoveraller [How a web application can be developed to deliver security, actability and navigability: PimpaOvven – development of an e-shop for patches and accessories for student overalls] (2017). Independent thesis, Basic level (degree of Bachelor), 12 credits / 18 HE credits. Student thesis
    Abstract [en]

    Among students in many of Sweden’s Universities the student overall is an established possession. Many students like to decorate their overalls with embroidered patches and other types of accessories, the supply of these is however limited. This report presents the development process and result of the web application “PimpaOvven” – an e-shop with the purpose of increasing the accessibility of patches and overall accessories. The development has been iterative and focused on building a secure web application that generates a useable environment regarding actability and navigability that also provides an impression of security to the user. The methods used which generated the resulting web application together with the reference framework form the basis of the report’s discussion. During the project plenty of usability tests and security tests were conducted, from these tests together with the report’s discussion the conclusion was drawn that the produced web application was secure and useable.

    Download full text (pdf)
    Kandidatarbete TDDD83 PimpaOvven
  • 4.
    Alqaysi, Hiba
    et al.
    Department of Electronics Design, Mid Sweden University, Sweden.
    Fedorov, Igor
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Qureshi, Faisal Z
    Faculty of Science, University of Ontario Institute of Technology, Oshawa, Canada.
    O'Nils, Mattias
    Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden.
    A Temporal Boosted YOLO-Based Model for Birds Detection around Wind Farms (2021). In: Journal of Imaging, ISSN 2313-433X, Vol. 7, no. 11, article id 277. Article in journal (Refereed)
    Abstract [en]

    Object detection for sky surveillance is a challenging problem: small objects in a large volume against a constantly changing background require high-resolution frames. An example is detecting flying birds in wind farms to prevent their collision with the wind turbines. This paper proposes a YOLOv4-based ensemble model for bird detection in grayscale videos captured around wind turbines in wind farms. To tackle this problem, we introduce two datasets, (1) Klim and (2) Skagen, collected at two locations in Denmark. We use the Klim training set to train three increasingly capable YOLOv4-based models. Model 1 uses YOLOv4 trained on the Klim dataset, Model 2 introduces tiling to improve small-bird detection, and the last model uses tiling and temporal stacking and achieves the best mAP values on both the Klim and Skagen datasets. We used this model to set up an ensemble detector, which further improves mAP values on both datasets. The three models achieve testing mAP values of 82%, 88%, and 90% on the Klim dataset; mAP values for Model 1 and Model 3 on the Skagen dataset are 60% and 92%. Improved detection accuracy could help reduce bird mortality by informing the siting of wind farms and of individual turbines, and could also improve the collision avoidance systems used in wind energy facilities.

    Download full text (pdf)
    fulltext
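The tiling idea mentioned in the abstract above (running the detector on full-resolution crops so small, distant birds are not lost to downscaling) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the tile size and overlap values are hypothetical, and the sketch assumes the tile fits within the frame.

```python
def tile_frame(frame_h, frame_w, tile, overlap):
    """Return top-left (y, x) offsets of overlapping square tiles that
    cover a frame. Assumes tile <= frame_h and tile <= frame_w.

    Each tile is later passed to the detector at full resolution, so
    small objects keep enough pixels to be detectable.
    """
    step = tile - overlap
    ys = list(range(0, frame_h - tile + 1, step))
    xs = list(range(0, frame_w - tile + 1, step))
    # Make sure the final row/column of tiles touches the frame edge.
    if ys[-1] != frame_h - tile:
        ys.append(frame_h - tile)
    if xs[-1] != frame_w - tile:
        xs.append(frame_w - tile)
    return [(y, x) for y in ys for x in xs]
```

For temporal stacking, the same idea applies per tile: consecutive grayscale crops at the same offset are stacked along the channel axis before being fed to the network.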
  • 5.
    Altarriba Bertran, Ferran
    et al.
    University of California, Santa Cruz, Santa Cruz, CA, USA.
    Börütecene, Ahmet
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Turan Buruk, Oguz
    Tampere University, Tampere, Finland.
    Thibault, Mattia
    Tampere University, Tampere, Finland.
    Isbister, Katherine
    University of California, Santa Cruz, Santa Cruz, CA, USA.
    MESMER: Towards a Playful Tangible Tool for Non-Verbal Multi-Stakeholder Conversations (2020). Conference paper (Refereed)
    Abstract [en]

    In this paper we present MESMER, a work-in-progress tangible conversation tool for playful design. Our work extends the Otherworld Framework (OF) [7] for tangible tools by centering specifically on play as a conversation topic. Here we unpack how early experiments with OF motivated our work and describe the current iteration of the MESMER tool, which comprises persona cards, various boards, and a shared physical token. MESMER is inspired by our findings from early trials with OF: performative playful interaction promoted playful and divergent thinking; embodied non-verbal communication led to shared insights; the board's contents and structure helped scaffold conversations; a diversity of personas and narratives seemed desirable; and role-playing personas encouraged multi-stakeholder empathy. Our ongoing research aims to help designers and researchers facilitate engaging, fruitful and inspiring conversations where diverse stakeholders can contribute to playful technology design.

    Download full text (pdf)
    fulltext
  • 6.
    Andersson, Peter
    Linköping University, Department of Thematic Studies, Technology and Social Change. Linköping University, Faculty of Arts and Sciences.
    Informationsteknologi i organisationer: bestämningsfaktorer och mönster [Information technology in organizations: determinants and patterns] (1989). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Researchers in this field have placed different emphasis on the structural constraints vis-à-vis the freedom of the actors. An awareness that IT is a social construction does not necessarily mean that some individual actor or actor entity can perform freely. An inflexible, tightly structured social situation can considerably limit the action space. Actors are hemmed in by "objective" circumstances, i.e., a rather closely controlled situation established by other actors, which is apparently unyielding in the face of technological decisions.

    By creating a perspective that addresses both structural and actor aspects, this study attempts a holistic understanding which will lay bare the probable dialectic process between the changeable and the nonchangeable. This aspiration to comprehend the whole, when viewed against the complex character of the subject, calls for an understanding oriented approach.

    The study at hand deals with the choice of information technology in organizations, with special focus on automatic data processing (ADP) for administrative purposes. Its main aim is to improve an understanding of factors that determine the choice of ADP technology in organizations.

    The empirical section of the work at hand consists of two case studies and an overview study. The case studies, which concern two extensive ADP projects, are emphasized. The purpose of these two projects was to raise the degree of computerization and to choose both a configuration and a degree of uniformity. In both cases, however, the configuration turned out to be the most critical issue. One concerned the administration of social insurance in Sweden, Rationalisering av den allmänna försäkringens administration (Rationalization of the Swedish social insurance administration), hereafter called the RAFA project. The other case study, referred to here as the FFV study, deals with an administrative system for the manufacturing sector of the FFV Group. The overview study, called the Norrköping study, deals mainly with the technological level and the ADP configuration in a wide spectrum of organizations. The level and the configuration are viewed against the overarching organizational structure, the worksite placement of qualified ADP staff, the line of business and the size of the firm. The study consists of an opinion poll and three delimited secondary studies.

    In the initial stage of each project, rational motives dominated. These were founded on cost and effect assessments and on developments in the field of computer science. From a structural viewpoint, investments in computers seemed self-evident; efficiency goals were paramount. However, an ADP undertaking entails not only rationalization in the conventional sense; it also brings to the fore the ideational aspects inherent in the organization. While ADP technology was believed necessary, it became, in the preplanning and argumentation phase, a means of projecting socially determined concepts and goals. An ADP solution was sought which would combine the latest innovations in computer science with the dominant actors' organizational ideas.

    The dominant actors at FFV were for the most part newly appointed managers, imprinted with other organizational ideals and relationships than those characterizing FFV. The choice stood between a departure from company tradition by selecting a solution based on local minicomputers, or expanding the existing centralized mainframe facility. The critics were specialists who had taken part in the design of the existing configuration. At FFV, the structural determinants had to be toned down in favor of the deliberate performance of the dominant actors. In the RAFA case, the opposite was true. The critics wanted a certain change of existing circumstances, while the dominant actors sought to preserve the status quo and its underlying ideas. In the RAFA case, ADP thus became a cementing force rather than a catalyst.

    The Norrköping study clearly indicates that the direction and size of an enterprise are of primary importance for how much computers are used. The appearance of the ADP configuration varies mainly with the organizational relationships. This is true for the placement of ADP staff and the overall structure of the organization. The main tendency is that the configuration reflects the relationships in an organization. This supports the view in the case studies that proximity to and control of ADP has major organizational value.

  • 7.
    Andersson, Tim
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, The Institute of Technology.
    Bluetooth Low Energy and Smartphones for Proximity-Based Automatic Door Locks (2014). Independent thesis, Basic level (degree of Bachelor), 10.5 credits / 16 HE credits. Student thesis
    Abstract [en]

    Bluetooth Low Energy is becoming increasingly popular in mobile applications due to the possibility of using it for proximity data. Proximity can be estimated by measuring the strength of the Bluetooth signal, and actions can then be performed based on a user's proximity to a certain location or object. One of the most interesting applications of proximity information is automating common tasks; this paper evaluates Bluetooth Low Energy in the context of using smartphones to automatically unlock a door when a user approaches the door. Measurements were performed to determine signal strength reliability, energy consumption and connection latency. The results show that Bluetooth Low Energy is a suitable technology for proximity-based door locks despite the large variance in signal strength.

    Download full text (pdf)
    Bluetooth Low Energy and Smartphones for Proximity-Based Automatic Door Locks
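Estimating proximity from signal strength, as the abstract above describes, is commonly done with a log-distance path-loss model plus smoothing to damp the large RSSI variance the thesis reports. The sketch below is a generic illustration of that idea, not the thesis's implementation; the calibration constants are hypothetical.

```python
def estimate_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: rough distance in metres from an
    RSSI reading. tx_power_dbm is the calibrated RSSI at 1 m; both
    defaults are illustrative, not values measured in the thesis.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def smooth(samples, alpha=0.25):
    """Exponential moving average over RSSI samples, so a single noisy
    reading does not trigger an unlock."""
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est
    return est
```

A door-lock application would then compare the smoothed distance estimate against an unlock threshold rather than reacting to raw readings.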
  • 8.
    Anwer, Rao Muhammad
    et al.
    Aalto Univ, Finland.
    Khan, Fahad
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    van de Weijer, Joost
    Univ Autonoma Barcelona, Spain.
    Molinier, Matthieu
    VTT Tech Res Ctr Finland Ltd, Finland.
    Laaksonen, Jorma
    Aalto Univ, Finland.
    Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification (2018). In: ISPRS Journal of Photogrammetry and Remote Sensing (Print), ISSN 0924-2716, E-ISSN 1872-8235, Vol. 138, p. 74-85. Article in journal (Refereed)
    Abstract [en]

    Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification. 
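The Local Binary Patterns that the TEX-Nets above encode can be illustrated with a minimal NumPy sketch. This shows only the basic 8-neighbour LBP; the paper's mapped/coded LBP variants used as CNN input are more involved.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Patterns for a 2-D grayscale array.

    Each interior pixel gets an 8-bit code whose bits record whether each
    neighbour is >= the centre pixel. The resulting texture map can be fed
    to a CNN alongside (or instead of) the RGB input.
    """
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]  # interior pixels (centres)
    # Neighbour offsets, clockwise from the top-left corner.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

On a perfectly flat patch every neighbour ties with the centre, so the code is 255 everywhere; a pixel brighter than all its neighbours gets code 0.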

  • 9.
    Arding, Petter
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Hedelin, Hugo
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Computer virus: design and detection (2014). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Computer viruses use a few different techniques, with various intentions, to infect files. What most of them have in common, however, is that they want to avoid detection by anti-malware software. To stay unnoticed, virus creators have developed several methods, and anti-malware software is constantly trying to counter these infection methods with its own detection techniques. In this paper we have analyzed the different types of viruses and their infection techniques, and tried to determine which works best to avoid detection. In our experiments we have simulated executing the viruses while an anti-malware program was running. Our conclusion is that metamorphic viruses use the best methods for staying unnoticed by anti-malware software's detection techniques.

    Download full text (pdf)
    Computer virus design and detection
  • 10.
    Arvidsson, Martin
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Paulsson, Eric
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Utveckling av beslutsstöd för kreditvärdighet [Development of decision support for creditworthiness] (2013). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The aim is to develop a new decision-making model for credit loans. The model will be specific to credit applicants of the OKQ8 bank, because it is based on data about earlier credit applicants from the client (the bank). The final model is, in effect, functional enough to use information about a new applicant as input and predict the outcome as either the good risk group or the bad risk group based on the applicant's properties. The prediction may then lay the foundation for the decision to grant or deny a credit loan.

    Because of the skewed distribution in the response variable, different sampling techniques are evaluated. These include oversampling with SMOTE, random undersampling and pure oversampling in the form of scalar weighting of the minority class. It is shown that the predictive quality of a classifier is affected by the distribution of the response, and that the oversampled information is not too redundant.

    Three classification techniques are evaluated. Our results suggest that a multi-layer neural network with 18 neurons in a hidden layer, equipped with an ensemble technique called boosting, gives the best predictive power. The most successful model is based on a feed-forward structure and trained with a variant of back-propagation using conjugate-gradient optimization.

    Two other models with good predictive quality are developed using logistic regression and a decision tree classifier, but they do not reach the level of the network. However, the results of these models are used to answer the question of which customer properties are important when determining credit risk. Two examples of important customer properties are income and the number of earlier credit reports on the applicant.

    Finally, we use the best classification model to predict the outcome for a set of applicants declined by the existing filter. The results show that the network model accepts over 60% of the applicants who had previously been denied credit. This may indicate that the client's suspicion that the existing model is too restrictive is in fact true.

    Download full text (pdf)
    Utveckling av beslutsstöd för kreditvärdighet
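The SMOTE-style oversampling evaluated in the thesis above can be sketched as follows: synthetic minority samples are created by interpolating between a minority point and one of its k nearest minority neighbours. This is a naive illustration of the interpolation idea, not the implementation the thesis used.

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """Naive SMOTE-style oversampling for a list of numeric tuples.

    For each synthetic sample: pick a random minority point, find its k
    nearest minority neighbours, pick one, and interpolate a new point at
    a random fraction of the way between them.
    """
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    out = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(p, base))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation fraction in [0, 1)
        out.append(tuple(b + t * (n - b) for b, n in zip(base, nb)))
    return out
```

Because each synthetic point lies on a segment between two real minority points, the oversampled class stays inside the convex hull of the original minority samples.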
  • 11.
    Banissi, Ebad
    et al.
    Bertschi, Stefan; Burkhard, Remo; Cvek, Urska; Eppler, Martin; Forsell, Camilla (Linköping University, Department of Science and Technology, Media and Information Technology); Grinstein, Georges; Johansson, Jimmy (Linköping University, Department of Science and Technology, Media and Information Technology); Kenderdine, Sarah; Marchese, Francis T.; Maple, Carsten; Trutschl, Marjam; Sarfraz, Muhammad; Stuart, Liz; Ursyn, Anna; Wyeld, Theodor G.
    Information Visualization (2011). Conference proceedings (editor) (Refereed)
  • 12.
    Barakat, Arian
    Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning.
    What makes an (audio)book popular? (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Audiobook reading has traditionally been used for educational purposes but has in recent times grown into a popular alternative to the more traditional means of consuming literature. In order to differentiate themselves from other players in the market, but also to provide their users with enjoyable literature, several audiobook companies have lately directed their efforts toward producing their own content. Creating highly rated content is, however, no easy task, and one recurring challenge is how to make a bestselling story. In an attempt to identify latent features shared by successful audiobooks and evaluate proposed methods for literary quantification, this thesis employs an array of frameworks from the fields of Statistics, Machine Learning and Natural Language Processing on data and literature provided by Storytel - Sweden's largest audiobook company.

    We analyze and identify important features from a collection of 3077 Swedish books concerning their promotional and literary success. By considering features from the aspects Metadata, Theme, Plot, Style and Readability, we found that popular books are typically published as a book series, cover 1-3 central topics, and write about, e.g., daughter-mother relationships and human closeness, but also that they hold, on average, a higher proportion of verbs and a lower degree of short words. Despite successfully identifying these and other factors, we recognized that none of our models predicted "bestseller" adequately and that future work may wish to study additional factors, employ other models or even use different metrics to define and measure popularity.

    From our evaluation of the literary quantification methods, namely topic modeling and narrative approximation, we found that these methods are, in general, suitable for Swedish texts but that they require further improvement and experimentation to be successfully deployed for Swedish literature. For topic modeling, we recognized that the sole use of nouns provided more interpretable topics and that the inclusion of character names tended to pollute the topics. We also identified and discussed the possible problem of word inflections when modeling topics for morphologically complex languages, and noted that additional preprocessing treatments such as word lemmatization or post-training text normalization may improve the quality and interpretability of topics. For the narrative approximation, we discovered that the method currently suffers from three shortcomings: (1) unreliable sentence segmentation, (2) unsatisfactory dictionary-based sentiment analysis and (3) the possible loss of sentiment information induced by translations. Despite only examining a handful of literary works, we further found that books written originally in Swedish had narratives that were more cross-language consistent compared to books written in English and then translated into Swedish.

    Download full text (pdf)
    what_makes_an_audiobook_popular
  • 13.
    Belka, Kamila
    Linköping University, Department of Computer and Information Science.
    Multicriteria analysis and GIS application in the selection of sustainable motorway corridor (2005). Independent thesis, Advanced level (degree of Magister), 20 points / 30 hp. Student thesis
    Abstract [en]

    The effects of transportation infrastructure are today receiving increasing environmental and social concern. Nevertheless, preliminary corridor plans are usually developed on the basis of technical and economic criteria exclusively. By the time of the environmental impact assessment (EIA) that follows, relocation is practically impossible and only preventative measures can be applied.

    This paper proposes a GIS-based method of delimiting a motorway corridor that integrates social, environmental and economic factors into the early stages of planning. Multiple criteria decision making (MCDM) techniques are used to assess all possible alternatives, and a weighted shortest path algorithm held in the GIS locates the corridor. The evaluation criteria are exemplary; they include nature conservation, buildings, forests and agricultural resources, and soils. The resulting evaluation surface is divided into a grid of cells, each assigned a suitability score derived from all evaluation criteria. Subsequently, a set of adjacent cells connecting two pre-specified points is traced by the least-cost path algorithm. The best alternative has the lowest total value of suitability scores.

    As a result, the proposed motorway corridor is routed from origin to destination. It is afterwards compared with an alternative derived by traditional planning procedures. The concluding remarks are that the location criteria need to be adjusted to meet construction requirements and that the analysis process should be automated. Nevertheless, the geographic information system and the embedded shortest path algorithm proved to be well suited for preliminary corridor location analysis. Future research directions are sketched.
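    The least-cost path step described above can be sketched with Dijkstra's algorithm over a grid of suitability scores (lower is better). The 4-neighbour connectivity and the tiny example grid are illustrative assumptions, not details from the thesis:

```python
# A minimal sketch of the least-cost path step: each grid cell carries a
# suitability score (lower = more suitable), and Dijkstra's algorithm traces
# the set of adjacent cells connecting origin and destination with the
# lowest total score.
import heapq

def least_cost_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}  # total score includes start cell
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbour grid
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
path, cost = least_cost_path(grid, (0, 0), (0, 2))
print(cost)  # 7: the route detours around the high-score (unsuitable) cells
```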

    Download full text (pdf)
    FULLTEXT01
  • 14.
    Bergdahl, Filip
    Linköping University, Department of Science and Technology.
    Analys av lämplighet för användning av RFID-teknik inom Schenkers verksamhet (2007). Independent thesis, Advanced level (degree of Magister), 20 points / 30 hp. Student thesis
    Abstract [en]

    Radio Frequency Identification (RFID) is a relatively old technology (dating back to the Second World War) that has experienced a renaissance. Then as now, RFID is used to identify objects, though with certain technical differences. Increasing demands on industry and on society at large have led to the "information age" we now live in, and as a step in this development, ever higher demands are placed on the collection of the information on which many processes and decisions are based. Simply put, the communication can be described as a radio signal sent from an RFID reader to an RFID tag. The tag is attached to the object to be identified, and information about the object is stored in the tag. The radio signal wakes the tag, which sends the information stored in its microchip back to the reader. RFID can be used in a large number of areas and applications, two of which are transport networks and supply chains. This project was initiated because the logistics company Schenker AB saw a need to investigate how the technology could be used within its operations. For Schenker, it was important to examine the potential opportunities of the technology, and its costs, before customers arrived with demands or requests for its use. Three proposals for how RFID could be used in the business were developed in order to give a good picture of how such use might work and what it would require. The study shows that a great deal would be demanded of Schenker in terms of equipment and IT systems. The conclusions that can be drawn from the project are that there is potential for improvement in using RFID technology within Schenker's operations, but that these improvements are associated with relatively high initial costs. There are also some technical limitations, which means that a system must be planned and designed carefully to be fully functional. Further studies are needed to better assess how well RFID can be used within Schenker. Tests and trials in smaller flows at Schenker would be a good way to gain experience and knowledge of the technology's functionality, possibilities, and limitations.

    Download full text (pdf)
    FULLTEXT01
  • 15.
    Berggren, Magnus
    et al.
    Linköping University, Department of Science and Technology, Physics and Electronics. Linköping University, Faculty of Science & Engineering.
    Simon, Daniel
    Linköping University, Department of Science and Technology, Physics and Electronics. Linköping University, Faculty of Science & Engineering.
    Nilsson, D
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Dyreklev, P
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Norberg, P
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Nordlinder, S
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Ersman, PA
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Gustafsson, G
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Wikner, Jacob
    Linköping University, Department of Electrical Engineering, Integrated Circuits and Systems. Linköping University, Faculty of Science & Engineering.
    Hederén, J
    DU Radio, Ericsson AB, SE-583 30, Linköping, Sweden..
    Hentzell, H
    Swedish ICT Research, Box 1151, SE-164 26, Kista, Sweden..
    Browsing the Real World using Organic Electronics, Si-Chips, and a Human Touch (2016). In: Advanced Materials, ISSN 0935-9648, E-ISSN 1521-4095, Vol. 28, no 10, p. 1911-1916. Article in journal (Refereed)
    Abstract [en]

    Organic electronics have been developed according to an orthodox doctrine advocating "all-printed", "all-organic" and "ultra-low-cost" devices, primarily targeting various e-paper applications. In order to harvest the great opportunities afforded by organic electronics potentially operating as communication and sensor outposts within existing and future complex communication infrastructures, high-quality computing and communication protocols must be integrated with the organic electronics. Here, we debate and scrutinize the twinning of the signal-processing capability of traditional integrated silicon chips with organic electronics and sensors, and the use of our body as a natural local network, with our bare hand as the browser of the physical world. The resulting platform provides a body network, i.e., a personalized web, composed of e-label sensors, bioelectronics, and mobile devices that together make it possible to monitor and record both our ambience and health-status parameters, supported by the ubiquitous mobile network and the resources of the "cloud".

    Download full text (pdf)
    fulltext
  • 16.
    Bissessar, Daniel
    et al.
    Linköping University, Department of Computer and Information Science.
    Bois, Alexander
    Linköping University, Department of Computer and Information Science.
    Evaluation of methods for question answering data generation: Using large language models (2022). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    One of the largest challenges in the field of artificial intelligence and machine learning is the acquisition of a large quantity of quality data to train models on. This thesis investigates and evaluates approaches to data generation in a telecom domain for the task of extractive QA. To do this, a pipeline was built using a combination of BERT-like models and T5 models for data generation. We then evaluated our generated data using the downstream task of QA on a telecom domain data set. We measured the performance using EM and F1-scores. We achieved results that are state of the art on the telecom domain data set. We found that synthetic data generation is a viable approach to obtaining synthetic telecom QA data, with the potential of improving model performance when used in addition to human-annotated data. We also found that using models from the general domain provided results that are on par with or better than domain-specific models for the generation, which provides possibilities to use a single generation pipeline for many different domains. Furthermore, we found that increasing the amount of synthetic data provided little benefit for our models on the downstream task, with diminishing returns setting in quickly. We were unable to pinpoint the reason for this. In short, our approach works but much more work remains to understand and optimize it for greater results.
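    The EM and F1 scores mentioned above are standard extractive-QA metrics: exact match of the normalized answer string, and token-overlap F1 between the predicted and gold spans. A minimal sketch follows; the normalization here is simpler than the usual SQuAD evaluation script, and the example strings are invented:

```python
# Sketch of the EM and F1 metrics as commonly computed for extractive QA:
# exact match after normalization, and token-level F1 over the answer span.
from collections import Counter

def normalize(text):
    return " ".join(text.lower().split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("5G base station", "5g base station"))          # 1.0
print(round(f1_score("the 5G base station", "5G station"), 2))    # 0.67
```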

    Download full text (pdf)
    fulltext
  • 17.
    Bivall, Petter
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Touching the Essence of Life: Haptic Virtual Proteins for Learning (2010). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This dissertation presents research in the development and use of a multi-modal visual and haptic virtual model in higher education. The model, named Chemical Force Feedback (CFF), represents molecular recognition through the example of protein-ligand docking, and enables students to simultaneously see and feel representations of the protein and ligand molecules and their force interactions. The research efforts have been divided between educational research aspects and development of haptic feedback techniques.

    The CFF model was evaluated in situ through multiple data collections in a university course on molecular interactions. To isolate possible influences of haptics on learning, half of the students ran CFF with haptics, and the others used the equipment with force feedback disabled. Pre- and post-tests showed a significant learning gain for all students. A particular influence of haptics was found on students' reasoning, discovered through an open-ended written probe in which students' responses contained elaborate descriptions of the molecular recognition process.

    Students' interactions with the system were analyzed using customized information visualization tools. The analysis revealed differences between the groups, for example in their use of the visual representations on offer and in how they moved the ligand molecule. Differences in representational and interactive behaviours showed relationships with aspects of the learning outcomes.

    The CFF model was improved in an iterative evaluation and development process. A focus was placed on force model design, where one significant challenge was in conveying information from data with large force differences, ranging from very weak interactions to extreme forces generated when atoms collide. Therefore, a History Dependent Transfer Function (HDTF) was designed which adapts the translation of forces derived from the data to output forces according to the properties of the recently derived forces. Evaluation revealed that the HDTF improves the ability to haptically detect features in volumetric data with large force ranges.
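    The History Dependent Transfer Function idea, adapting the force mapping to the properties of recently derived forces, can be sketched roughly as follows. The window length, the normalization by the recent maximum, and the output range are invented for illustration; the actual HDTF design in the thesis is more elaborate:

```python
# A hedged sketch of the HDTF principle: the mapping from data forces to
# output forces adapts to the magnitude of recently derived forces, so both
# very weak interactions and extreme collision forces stay perceptible.
from collections import deque

class HistoryDependentTransferFunction:
    def __init__(self, window=10, max_output=1.0):
        self.history = deque(maxlen=window)  # recently derived force magnitudes
        self.max_output = max_output         # device output range

    def __call__(self, force):
        self.history.append(abs(force))
        # Scale relative to the recent force range: weak regions are
        # amplified, extreme collision forces are compressed.
        scale = max(self.history) or 1.0
        return self.max_output * force / scale

hdtf = HistoryDependentTransferFunction()
weak = hdtf(0.01)    # alone in the window -> mapped to the full output range
strong = hdtf(50.0)  # extreme force -> compressed into the same range
print(weak, strong)  # 1.0 1.0
```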

    To further enable force models with high fidelity, an investigation was conducted to determine the perceptual Just Noticeable Difference (JND) in force for detection of interfaces between features in volumetric data. Results showed that JNDs vary depending on the magnitude of the forces in the volume and depending on where in the workspace the data is presented.

    List of papers
    1. Designing and Evaluating a Haptic System for Biomolecular Education
    2007 (English). In: IEEE Virtual Reality Conference, 2007. VR '07 / [ed] Sherman, W; Lin, M; Steed, A. Piscataway, NJ, USA: IEEE, 2007, p. 171-178. Conference paper, Published paper (Refereed)
    Abstract [en]

    In this paper we present an in situ evaluation of a haptic system with a representative test population, aiming to determine what benefit, if any, haptics can have in a biomolecular education context. We have developed a haptic application for conveying concepts of molecular interactions, specifically protein-ligand docking. Utilizing a semi-immersive environment with stereo graphics, users are able to manipulate the ligand and feel its interactions in the docking process. The evaluation used cognitive knowledge tests and interviews focused on learning gains; compared with using time efficiency as the single quality measure, this gives a better indication of a system's applicability in an educational environment. Surveys were used to gather opinions and suggestions for improvements. Students do gain from using the application in the learning process, but the learning appears to be independent of the addition of haptic feedback. However, the addition of force feedback did decrease time requirements and improved the students' understanding of the docking process in terms of the forces involved, as is apparent from the students' descriptions of the experience. The students also indicated a number of features which could be improved in future development.

    Place, publisher, year, edition, pages
    Piscataway, NJ, USA: IEEE, 2007
    Keywords
    Haptic Interaction, Haptics, Virtual Reality, Computer-assisted instruction, Life Science Education, Protein Interactions, Visualization, Protein-ligand docking
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-39934 (URN)10.1109/VR.2007.352478 (DOI)000245919300022 ()51733 (Local ID)1-4244-0906-3 (ISBN)51733 (Archive number)51733 (OAI)
    Conference
    IEEE Virtual Reality Conference, Charlotte, NC, USA, 10-14 March 2007
    Note

    ©2009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Petter Bivall Persson, Matthew Cooper, Lena Tibell, Shaaron Ainsworth, Anders Ynnerman and Bengt-Harald Jonsson, Designing and Evaluating a Haptic System for Biomolecular Education, 2007, IEEE Virtual Reality Conference 2007, 171-178. http://dx.doi.org/10.1109/VR.2007.352478

    Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2016-05-04. Bibliographically approved
    2. Improved Feature Detection over Large Force Ranges Using History Dependent Transfer Functions
    2009 (English). In: Third Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, World Haptics 2009, IEEE, 2009, p. 476-481. Conference paper, Published paper (Refereed)
    Abstract [en]

    In this paper we present a history dependent transfer function (HDTF) as a possible approach to enable improved haptic feature detection in high dynamic range (HDR) volume data. The HDTF is a multi-dimensional transfer function that uses the recent force history as a selection criterion to switch between transfer functions, thereby adapting to the explored force range. The HDTF has been evaluated using artificial test data and in a realistic application example, with the HDTF applied to haptic protein-ligand docking. Biochemistry experts performed docking tests, and expressed that the HDTF delivers the expected feedback across a large force magnitude range, conveying both weak attractive and strong repulsive protein-ligand interaction forces. Feature detection tests have been performed with positive results, indicating that the HDTF improves the ability of feature detection in HDR volume data as compared to a static transfer function covering the same range.

    Place, publisher, year, edition, pages
    IEEE, 2009
    Keywords
    Haptics, Virtual Reality, Scientific Visualization
    National Category
    Interaction Technologies
    Identifiers
    urn:nbn:se:liu:diva-45355 (URN)10.1109/WHC.2009.4810843 (DOI)81912 (Local ID)978-1-4244-3858-7 (ISBN)81912 (Archive number)81912 (OAI)
    Conference
    Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, World Haptics 2009, Salt Lake City, UT, USA, 18-20 March 2009
    Projects
    VisMolLS
    Note

    ©2009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Petter Bivall Persson, Gunnar E. Höst, Matthew D. Cooper, Lena A. E. Tibell and Anders Ynnerman, Improved Feature Detection over Large Force Ranges Using History Dependent Transfer Functions, 2009, Third Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, WorldHaptics 2009, 476-481. http://dx.doi.org/10.1109/WHC.2009.4810843

    Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2016-05-04. Bibliographically approved
    3. Do Haptic Representations Help Complex Molecular Learning?
    2011 (English). In: Science Education, ISSN 0036-8326, E-ISSN 1098-237X, Vol. 95, no 4, p. 700-719. Article in journal (Refereed). Published
    Abstract [en]

    This study explored whether adding a haptic interface (which provides users with somatosensory information about virtual objects through force and tactile feedback) to a three-dimensional (3D) chemical model enhanced students' understanding of complex molecular interactions. Two modes of the model were compared in a between-groups pre- and posttest design. In both modes, users could move and rotate virtual 3D representations of the chemical structures of two molecules, a protein and a small ligand molecule. In addition, in the haptic mode users could feel the interactions (repulsive and attractive) between the molecules as forces with a haptic device. Twenty postgraduate students (10 in each condition) took pretests about the process of protein–ligand recognition before exploring the model in ways suggested by structured worksheets and then completing a posttest. Analysis addressed quantitative learning outcomes and, more qualitatively, students' reasoning during the learning phase. Results showed that the haptic system helped students learn more about the process of protein–ligand recognition and changed the way they reasoned about molecules to include more force-based explanations. It may also have protected students from drawing the erroneous conclusions about the process of protein–ligand recognition observed when students interacted with only the visual model.

    Keywords
    Haptic learning, multimodality, molecular interactions, protein-ligand docking
    National Category
    Didactics Biochemistry and Molecular Biology Media and Communication Technology
    Identifiers
    urn:nbn:se:liu:diva-60354 (URN)10.1002/sce.20439 (DOI)
    Projects
    VisMolLS
    Available from: 2010-10-12 Created: 2010-10-12 Last updated: 2018-01-12
    4. Using logging data to visualize and explore students’ interaction and learning with a haptic virtual model of protein-ligand docking
    (English). Manuscript (preprint) (Other academic)
    Abstract [en]

    This study explores students' interaction and learning with a haptic virtual model of biomolecular recognition. Twenty students assigned to a haptics or no-haptics condition performed a protein-ligand docking task in which interaction was captured in log files. Any improvement in understanding of recognition was measured by comparing written responses to a conceptual question before and after interaction. A log-profiling tool visualized students' traversal of the ligand, while multivariate parallel coordinate analyses uncovered trends in the data. Students who experienced force feedback (haptics) displayed docked positions that were more clustered than those of no-haptics students, coupled to docking profiles that depicted a more focused traversal of the ligand. Students in the no-haptics condition employed twice as many behaviours concerned with switching between the multiple visual representations offered by the system. In the no-haptics group, this visually intense processing was associated with 'fitting' the ligand at closer distances to the surface of the protein. A negative relationship between high representational switching activity and learning gain, as well as spatial aptitude, was also revealed. From an information-processing perspective, visual and haptic coordination could permit engagement of each perceptual channel simultaneously, in effect offloading the visual pathway by placing less strain on visual working memory.

    Keywords
    Interactive learning environments; multimedia systems; pedagogical issues; postsecondary education; virtual reality
    National Category
    Natural Sciences
    Identifiers
    urn:nbn:se:liu:diva-60355 (URN)
    Available from: 2010-10-12 Created: 2010-10-12 Last updated: 2016-05-04
    5. Haptic Just Noticeable Difference in Continuous Probing of Volume Data
    2010 (English). Report (Other academic)
    Abstract [en]

    Just noticeable difference (JND) describes how much two perceptual sensory inputs must differ in order to be distinguishable from each other. Knowledge of the JND is vital when two features in a dataset are to be separably represented. JND has received a lot of attention in haptic research and this study makes a contribution to the field by determining JNDs during users' probing of volumetric data at two force levels. We also investigated whether these JNDs were affected by where in the haptic workspace the probing occurred. Reference force magnitudes were 0.1 N and 0.8 N, and the volume data was presented in rectangular blocks positioned at the eight corners of a cube 10 cm³ in size. Results showed that the JNDs varied significantly for the two force levels, with mean values of 38.5% and 8.8% obtained for the 0.1 N and 0.8 N levels, respectively, and that the JND was influenced by where the data was positioned.
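    Read as Weber fractions, the reported mean JNDs translate directly into the smallest detectable force increments at each reference level; a quick illustration using the values above:

```python
# The reported JNDs read as Weber fractions: the smallest detectable change
# is a fixed percentage of the reference force. Mean values from the study:
jnd = {0.1: 0.385, 0.8: 0.088}  # reference force (N) -> mean JND fraction

for ref, fraction in jnd.items():
    delta = ref * fraction  # smallest distinguishable force increment (N)
    print(f"At {ref} N, forces must differ by ~{delta:.4f} N to be distinguishable")
```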

    Place, publisher, year, edition, pages
    Linköping: Linköping University Electronic Press, 2010. p. 19
    Series
    Technical reports in Computer and Information Science, ISSN 1654-7233 ; 6
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-58011 (URN)
    Available from: 2010-07-16 Created: 2010-07-16 Last updated: 2010-10-12. Bibliographically approved
    Download full text (pdf)
    Touching the Essence of Life : Haptic Virtual Proteins for Learning
    Download (pdf)
    Cover
  • 18.
    Bladin, Kalle
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Axelsson, Emil
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Broberg, Erik
    Linköping University, Faculty of Science & Engineering.
    Emmart, Carter
    Amer Museum Nat Hist, NY 10024 USA.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Bock, Alexander
    Linköping University, Department of Science and Technology. Linköping University, Faculty of Science & Engineering. NYU, NY 10003 USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization (2018). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 24, no 1, p. 802-811. Article in journal (Refereed)
    Abstract [en]

    Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty of grasping the context and the intricate acquisition process. We present work on tailoring and integrating multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. As our approach handles dynamic data sources streamed from online repositories, we significantly shorten the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach that enables interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case shows dynamic processes, such as concurrent weather conditions on Earth, that require temporal datasets. As a final example we use data from the New Horizons spacecraft, which acquired images during a single flyby of Pluto; we visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.
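    The chunked level-of-detail idea can be sketched with a common textbook refinement rule: split a quadtree chunk while its projected screen-space error exceeds a pixel tolerance. The error model below (geometric error halving per level, scaled by an invented screen factor and divided by camera distance) is a generic simplification, not OpenSpace's actual implementation:

```python
# Simplified sketch of chunked level-of-detail selection: the globe is
# divided into quadtree chunks, and a chunk is refined only while its
# projected screen-space error exceeds a pixel tolerance.

def select_lod_level(distance, root_geometric_error, pixel_tolerance,
                     screen_factor=1000.0, max_level=20):
    """Return the quadtree depth needed at a given camera distance."""
    level = 0
    error = root_geometric_error
    # Screen-space error ~ geometric error scaled inversely with distance.
    while level < max_level and screen_factor * error / distance > pixel_tolerance:
        error /= 2.0  # each quadtree split halves the geometric error
        level += 1
    return level

# Closer cameras demand deeper refinement:
print(select_lod_level(distance=100000.0, root_geometric_error=1000.0,
                       pixel_tolerance=2.0))  # 3
print(select_lod_level(distance=100.0, root_geometric_error=1000.0,
                       pixel_tolerance=2.0))  # 13
```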

    Download full text (pdf)
    fulltext
  • 19.
    Bleser, Gabriele
    et al.
    Department Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Germany; Department of Computer Science, Technical University of Kaiserslautern, Kaiserslautern, Germany.
    Damen, Dima
    Department of Computer Science, University of Bristol, Bristol, UK.
    Behera, Ardhendu
    School of Computing, University of Leeds, Leeds, UK; Department of Computing, Edge Hill University, Ormskirk, UK.
    Hendeby, Gustaf
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Mura, Katharina
    SmartFactory KL e.V., Kaiserslautern, Germany.
    Miezal, Markus
    Department of Computer Science, Technical University of Kaiserslautern, Kaiserslautern, Germany.
    Gee, Andrew
    Department of Computer Science, University of Bristol, Bristol, UK.
    Petersen, Nils
    Department Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Maçães, Gustavo
    Department Computer Vision, Interaction and Graphics, Center for Computer Graphics, Guimarães, Portugal.
    Domingues, Hugo
    Department Computer Vision, Interaction and Graphics, Center for Computer Graphics, Guimarães, Portugal.
    Gorecky, Dominic
    SmartFactory KL e.V., Kaiserslautern, Germany.
    Almeida, Luis
    Department Computer Vision, Interaction and Graphics, Center for Computer Graphics, Guimarães, Portugal.
    Mayol-Cuevas, Walterio
    Department of Computer Science, University of Bristol, Bristol, UK.
    Calway, Andrew
    Department of Computer Science, University of Bristol, Bristol, UK.
    Cohn, Anthony G.
    School of Computing, University of Leeds, Leeds, UK.
    Hogg, David C.
    School of Computing, University of Leeds, Leeds, UK.
    Stricker, Didier
    Department Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Cognitive Learning, Monitoring and Assistance of Industrial Workflows Using Egocentric Sensor Networks (2015). In: PLOS ONE, E-ISSN 1932-6203, Vol. 10, no 6, article id e0127769. Article in journal (Refereed)
    Abstract [en]

    Today, the workflows that are involved in industrial assembly and production activities are becoming increasingly complex. To efficiently and safely perform these workflows is demanding on the workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user’s pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited size datasets indicate and highlight the potential of the chosen technology as a combined entity as well as point out limitations of the system.
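    The inertial/visual fusion in point 1) can be illustrated, in a deliberately simplified 1-D form, by a complementary filter: fast gyroscope rates are integrated and their drift is corrected by slower absolute fixes from vision. The gains, rates, and bias values below are invented; the paper's tracker is far more sophisticated:

```python
# Toy complementary filter: integrate a fast (biased) gyroscope rate and
# correct the accumulating drift with unbiased absolute angle fixes from a
# visual tracker. Shows only the principle of inertial/visual fusion.

def complementary_filter(gyro_rates, vision_angles, dt=0.01, alpha=0.98):
    """Fuse 1-D angular rates (rad/s) with absolute angle fixes (rad)."""
    angle = vision_angles[0]
    fused = []
    for rate, vis in zip(gyro_rates, vision_angles):
        predicted = angle + rate * dt                  # fast inertial prediction
        angle = alpha * predicted + (1 - alpha) * vis  # slow visual correction
        fused.append(angle)
    return fused

# Constant true angle of 1.0 rad; the gyro has a pure bias of 0.5 rad/s.
est = complementary_filter(gyro_rates=[0.5] * 200, vision_angles=[1.0] * 200)
print(round(est[-1], 2))  # the visual fixes keep the gyro-bias drift bounded
```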

    Download full text (pdf)
    fulltext
  • 20.
    Bock, Alexander
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Science and Technology, Media and Information Technology.
    Tailoring visualization applications for tasks and users (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Exponential increases in available computational resources over the recent decades have fueled an information explosion in almost every scientific field. This has led to a societal change shifting from an information-poor research environment to an over-abundance of information. As many of these cases involve too much information to directly comprehend, visualization proves to be an effective tool to gain insight into these large datasets. While visualization has been used since the beginning of mankind, its importance is only increasing as the exponential information growth widens the difference between the amount of gathered data and the relatively constant human ability to ingest information. Visualization, as a methodology and tool of transforming complex data into an intuitive visual representation can leverage the combined computational resources and the human cognitive capabilities in order to mitigate this growing discrepancy.

    A large portion of visualization research, directly or indirectly, targets users in an application domain, such as medicine, biology, physics, or others. Applied research is aimed at the creation of visualization applications or systems that solve a specific problem within the domain. Combining prior research and applying it to a concrete problem makes it possible to compare existing visualization techniques and determine their usability and usefulness. These applications can only be effective when the domain experts are closely involved in the design process, leading to an iterative workflow that informs the application's form and function. These visualization solutions can be separated into three categories: Exploration, in which users perform an initial study of data; Analysis, in which an established technique is repeatedly applied to a large number of datasets; and Communication, in which findings are published to a wider public audience.

    This thesis presents five examples of application development in finite element modeling, medicine, urban search & rescue, and astronomy and astrophysics. For the finite element modeling, an exploration tool for simulations of stress tensors in a human heart uses a compression method to achieve interactive frame rates. In the medical domain, an analysis system aimed at guiding surgeons during Deep Brain Stimulation interventions fuses multiple modalities in order to improve their outcome. A second analysis application is targeted at the Urban Search & Rescue community, supporting the extraction of injured victims and enabling a more sophisticated decision-making strategy. For the astronomical domain, first, an exploration application enables the analysis of time-varying volumetric plasma simulations to improve these simulations and thus better predict space weather. A final system focuses on combining all three categories into a single application that enables the same tools to be used for Exploration, Analysis, and Communication, thus requiring the handling of large coordinate systems and high-fidelity rendering of planetary surfaces and spacecraft operations.

    List of papers
    1. Coherency-Based Curve Compression for High-Order Finite Element Model Visualization
    Show others...
    2012 (English)In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 12, p. 2315-2324Article in journal (Refereed) Published
    Abstract [en]

    Finite element (FE) models are frequently used in engineering and life sciences within time-consuming simulations. In contrast with the regular grid structure facilitated by volumetric data sets, as used in medicine or geosciences, FE models are defined over a non-uniform grid. Elements can have curved faces and their interior can be defined through high-order basis functions, which pose additional challenges when visualizing these models. During ray-casting, the uniformly distributed sample points along each viewing ray must be transformed into the material space defined within each element. The computational complexity of this transformation makes a straightforward approach inadequate for interactive data exploration. In this paper, we introduce a novel coherency-based method which supports the interactive exploration of FE models by decoupling the expensive world-to-material space transformation from the rendering stage, thereby allowing it to be performed within a precomputation stage. Therefore, our approach computes view-independent proxy rays in material space, which are clustered to facilitate data reduction. During rendering, these proxy rays are accessed, and it becomes possible to visually analyze high-order FE models at interactive frame rates, even when they are time-varying or consist of multiple modalities. Within this paper, we provide the necessary background about the FE data, describe our decoupling method, and introduce our interactive rendering algorithm. Furthermore, we provide visual results and analyze the error introduced by the presented approach.
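The decoupling idea above, precomputing view-independent proxy rays in material space and clustering them for data reduction, can be sketched with a toy k-means, assuming each proxy ray is stored as a fixed-length vector of material-space samples. This is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np

def cluster_proxy_rays(rays, k, iters=20, seed=0):
    """rays: (n, s) array, each row one proxy ray's material-space samples.
    Returns (centroids, labels); the k centroids act as the reduced ray
    set that rendering would access instead of the full precomputation."""
    rays = np.asarray(rays, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = rays[rng.choice(len(rays), size=k, replace=False)].copy()
    labels = np.zeros(len(rays), dtype=int)
    for _ in range(iters):
        # Assign each ray to the nearest centroid.
        d = np.linalg.norm(rays[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned rays.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = rays[labels == j].mean(axis=0)
    return centroids, labels
```

In the paper's terms, the clustering happens once in the precomputation stage, so the expensive world-to-material transformation never has to run during interactive rendering.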

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2012
    Keywords
    Finite element visualization, GPU-based ray-casting
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-86633 (URN)10.1109/TVCG.2012.206 (DOI)000310143100035 ()
    Note

    Funding Agencies|Swedish Research Council (VR)|2011-4113|Excellence Center at Linkoping and Lund in Information Technology (ELLIIT)||Swedish e-Science Research Centre (SeRC)||

    Available from: 2012-12-20 Created: 2012-12-20 Last updated: 2018-05-21
    2. Guiding Deep Brain Stimulation Interventions by Fusing Multimodal Uncertainty Regions
    Show others...
    2013 (English)Conference paper, Published paper (Other academic)
    Abstract [en]

    Deep Brain Stimulation (DBS) is a surgical intervention that is known to reduce or eliminate the symptoms of common movement disorders, such as Parkinson's disease, dystonia, or tremor. During the intervention the surgeon places electrodes inside of the patient's brain to stimulate specific regions. Since these regions span only a couple of millimeters, and electrode misplacement has severe consequences, reliable and accurate navigation is of great importance. Usually the surgeon relies on fused CT and MRI data sets, as well as direct feedback from the patient. More recently Microelectrode Recordings (MER), which support navigation by measuring the electric field of the patient's brain, are also used. We propose a visualization system that fuses the different modalities: imaging data, MER and patient checks, as well as the related uncertainties, in an intuitive way to present placement-related information in a consistent view with the goal of supporting the surgeon in the final placement of the stimulating electrode. We will describe the design considerations for our system, the technical realization, present the outcome of the proposed system, and provide an evaluation.

    Place, publisher, year, edition, pages
    IEEE conference proceedings, 2013
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-92857 (URN)10.1109/PacificVis.2013.6596133 (DOI)000333746600013 ()9781467347976 (ISBN)
    Conference
    IEEE Pacific Visualization, 26 February - 1 March 2013, Sydney, Australia
    Funder
    ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish e‐Science Research Center; Swedish Research Council, 2011-4113
    Available from: 2013-05-27 Created: 2013-05-27 Last updated: 2018-05-21
    3. Supporting Urban Search & Rescue Mission Planning through Visualization-Based Analysis
    2014 (English)In: Proceedings of the Vision, Modeling, and Visualization Conference 2014, Eurographics - European Association for Computer Graphics, 2014Conference paper, Published paper (Refereed)
    Abstract [en]

    We propose a visualization system for incident commanders in urban search & rescue scenarios that supports access path planning for post-disaster structures. Utilizing point cloud data acquired from unmanned robots, we provide methods for assessment of automatically generated paths. As data uncertainty and a priori unknown information make fully automated systems impractical, we present a set of viable access paths, based on varying risk factors, in a 3D environment combined with the visual analysis tools enabling informed decisions and trade-offs. Based on these decisions, a responder is guided along the path by the incident commander, who can interactively annotate and reevaluate the acquired point cloud to react to the dynamics of the situation. We describe design considerations for our system, technical realizations, and discuss the results of an expert evaluation.
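The notion of alternative access paths based on varying risk factors can be illustrated with a minimal risk-weighted Dijkstra search over a grid. The grid representation, cost model, and parameter names are hypothetical simplifications, not the paper's actual path generator:

```python
import heapq

def safest_path(grid_risk, start, goal, risk_weight=1.0):
    """Dijkstra over a 2-D grid; edge cost = 1 step + risk_weight * cell
    risk. Sweeping risk_weight yields the set of alternative paths an
    incident commander could be shown."""
    rows, cols = len(grid_risk), len(grid_risk[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + risk_weight * grid_risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking the predecessor chain.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

With risk_weight = 0 the search returns the shortest route even through damaged cells; increasing the weight makes it detour around them, mimicking the trade-off the system presents.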

    Place, publisher, year, edition, pages
    Eurographics - European Association for Computer Graphics, 2014
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-117772 (URN)10.2312/vmv.20141275 (DOI)978-3-905674-74-3 (ISBN)
    Conference
    Vision, Modeling, and Visualization
    Projects
    ELLIIT; VR; SeRC
    Funder
    ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish e‐Science Research Center; Swedish Research Council, 2011-4113
    Available from: 2015-05-08 Created: 2015-05-08 Last updated: 2018-05-21Bibliographically approved
    4. An interactive visualization system for urban search & rescue mission planning
    2014 (English)In: 12th IEEE International Symposium on Safety, Security and Rescue Robotics, SSRR 2014 - Symposium Proceedings, Institute of Electrical and Electronics Engineers Inc. , 2014, no 7017652Conference paper, Published paper (Refereed)
    Abstract [en]

    We present a visualization system for incident commanders in urban search and rescue scenarios that supports the inspection and access path planning in post-disaster structures. Utilizing point cloud data acquired from unmanned robots, the system allows for assessment of automatically generated paths, whose computation is based on varying risk factors, in an interactive 3D environment increasing immersion. The incident commander interactively annotates and reevaluates the acquired point cloud based on live feedback. We describe design considerations, technical realization, and discuss the results of an expert evaluation that we conducted to assess our system.

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers Inc., 2014
    Series
    12th IEEE International Symposium on Safety, Security and Rescue Robotics, SSRR 2014 - Symposium Proceedings
    National Category
    Electrical Engineering, Electronic Engineering, Information Engineering
    Identifiers
    urn:nbn:se:liu:diva-116761 (URN)10.1109/SSRR.2014.7017652 (DOI)2-s2.0-84923174457 (Scopus ID)9781479941995 (ISBN)
    Conference
    12th IEEE International Symposium on Safety, Security and Rescue Robotics, SSRR 2014
    Available from: 2015-04-02 Created: 2015-04-02 Last updated: 2018-05-21
    5. A Visualization-Based Analysis System for Urban Search & Rescue Mission Planning Support
    Show others...
    2017 (English)In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 6, p. 148-159Article in journal (Refereed) Published
    Abstract [en]

    We propose a visualization system for incident commanders (ICs) in urban search and rescue scenarios that supports path planning in post-disaster structures. Utilizing point cloud data acquired from unmanned robots, we provide methods for the assessment of automatically generated paths. As data uncertainty and a priori unknown information make fully automated systems impractical, we present the IC with a set of viable access paths, based on varying risk factors, in a 3D environment combined with visual analysis tools enabling informed decision making and trade-offs. Based on these decisions, a responder is guided along the path by the IC, who can interactively annotate and reevaluate the acquired point cloud and generated paths to react to the dynamics of the situation. We describe visualization design considerations for our system and decision support systems in general, technical realizations of the visualization components, and discuss the results of two qualitative expert evaluations: one online study with nine search and rescue experts and an eye-tracking study in which four experts used the system on an application case.

    Place, publisher, year, edition, pages
    WILEY, 2017
    Keywords
    urban search and rescue decision support application
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-140952 (URN)10.1111/cgf.12869 (DOI)000408634200009 ()
    Note

    Funding Agencies|Excellence Center at Linkoping and Lund in Information Technology; Swedish e-Science Research Centre; VR grant [2011-4113]

    Available from: 2017-09-19 Created: 2017-09-19 Last updated: 2020-12-22
    6. Visual Verification of Space Weather Ensemble Simulations
    Show others...
    2015 (English)In: 2015 IEEE Scientific Visualization Conference (SciVis), IEEE, 2015, p. 17-24Conference paper, Published paper (Refereed)
    Abstract [en]

    We propose a system to analyze and contextualize simulations of coronal mass ejections. As current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline, leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system providing visualizations to: 1. compare ensemble members against ground truth measurements, 2. inspect time-dependent information derived from optical flow analysis of satellite images, and 3. combine satellite images with a volumetric rendering of the simulations. This three-tier workflow provides experts with tools to discover correlations between errors in predictions and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.
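Comparing ensemble members against ground truth measurements (point 1 above) amounts, in its simplest form, to scoring and ranking members by prediction error. A minimal sketch, assuming scalar time-series predictions and RMSE as the error metric (both assumptions of this example, not the paper's exact workflow):

```python
def rank_ensemble(members, truth):
    """members: {name: [predicted values]}; truth: observed values.
    Returns member names ordered best (lowest RMSE) first."""
    def rmse(pred):
        return (sum((p - t) ** 2 for p, t in zip(pred, truth))
                / len(truth)) ** 0.5
    return sorted(members, key=lambda name: rmse(members[name]))
```

Such a ranking is the starting point for correlating prediction errors with the simulation parameters of each ensemble member.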

    Place, publisher, year, edition, pages
    IEEE, 2015
    National Category
    Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-128037 (URN)10.1109/SciVis.2015.7429487 (DOI)000380564400003 ()978-1-4673-9785-8 (ISBN)
    Conference
    2015 IEEE Scientific Visualization Conference
    Available from: 2016-05-16 Created: 2016-05-16 Last updated: 2018-07-19
    7. Dynamic Scene Graph: Enabling Scaling, Positioning, and Navigation in the Universe
    Show others...
    2017 (English)In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 3, p. 459-468Article in journal (Refereed) Published
    Abstract [en]

    In this work, we address the challenge of seamlessly visualizing astronomical data exhibiting huge scale differences in distance, size, and resolution. One of the difficulties is accurate, fast, and dynamic positioning and navigation to enable scaling over orders of magnitude, far beyond the precision of floating point arithmetic. To this end we propose a method that utilizes a dynamically assigned frame of reference to provide the highest possible numerical precision for all salient objects in a scene graph. This makes it possible to smoothly navigate and interactively render, for example, surface structures on Mars and the Milky Way simultaneously. Our work is based on an analysis of tracking and quantification of the propagation of precision errors through the computer graphics pipeline using interval arithmetic. Furthermore, we identify sources of precision degradation, leading to incorrect object positions in screen-space and z-fighting. Our proposed method operates without near and far planes while maintaining high depth precision through the use of floating point depth buffers. By providing interoperability with order-independent transparency algorithms, direct volume rendering, and stereoscopy, our approach is well suited for scientific visualization. We provide the mathematical background, a thorough description of the method, and a reference implementation.
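The core numerical problem, and the benefit of a dynamically assigned frame of reference, can be demonstrated in a few lines: subtracting the reference (here, camera) position in double precision before casting to single precision preserves offsets that a naive global-coordinate pipeline destroys. The distances below are illustrative, not from the paper:

```python
import numpy as np

AU = 1.496e11  # metres; scene-graph nodes sit at astronomical distances

def to_camera_space(world_pos, camera_pos):
    """Subtract in float64 *before* casting to float32: the GPU then only
    sees camera-relative coordinates, where precision is highest."""
    return np.float32(np.float64(world_pos) - np.float64(camera_pos))

camera = AU
rover = AU + 10.0  # an object 10 m from the camera

# Naive pipeline: cast each global position to float32, then subtract.
# At ~1.5e11 m the float32 spacing is ~16 km, so the 10 m offset vanishes.
naive = np.float32(rover) - np.float32(camera)

# Dynamic frame of reference: subtract relative to the camera first.
relative = to_camera_space(rover, camera)
```

The naive difference collapses to zero (the source of z-fighting and incorrect screen-space positions the paper analyzes), while the camera-relative difference keeps the full 10 m.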

    Place, publisher, year, edition, pages
    WILEY, 2017
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-139628 (URN)10.1111/cgf.13202 (DOI)000404881200042 ()
    Conference
    19th Eurographics/IEEE VGTC Conference on Visualization (EuroVis)
    Note

    Funding Agencies|Swedish e-Science Research Center (SeRC); NASA [NNX16AB93A]; Moore-Sloan Data Science Environment at NYU; NSF [CNS-1229185, CCF-1533564, CNS-1544753]

    Available from: 2017-08-16 Created: 2017-08-16 Last updated: 2018-05-21
    8. Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization
    Show others...
    2018 (English)In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 24, no 1, p. 802-811Article in journal (Refereed) Published
    Abstract [en]

    Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty in grasping the context and the intricate acquisition process. We present work on tailoring and integration of multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. As our approach handles dynamic data sources, streamed from online repositories, we are significantly shortening the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach to enable interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case shows dynamic processes, such as concurrent weather conditions on Earth, that require temporal datasets. As a final example we use data from the New Horizons spacecraft, which acquired images during a single flyby of Pluto. We visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2018
    Keywords
    Astronomical visualization; globe rendering; public dissemination; science communication; space mission visualization
    National Category
    Other Computer and Information Science
    Identifiers
    urn:nbn:se:liu:diva-144142 (URN)10.1109/TVCG.2017.2743958 (DOI)000418038400079 ()28866505 (PubMedID)2-s2.0-85028711409 (Scopus ID)
    Conference
    IEEE VIS Conference
    Note

    Funding Agencies|Knut and Alice Wallenberg Foundation; Swedish e-Science Research Center (SeRC); ELLIIT; Vetenskapsradet [VR-2015-05462]; NASA [NNX16AB93A]; Moore-Sloan Data Science Environment at New York University; NSF [CNS-1229185, CCF-1533564, CNS-1544753, CNS-1730396]

    Available from: 2018-01-10 Created: 2018-01-10 Last updated: 2018-05-21Bibliographically approved
    Download full text (pdf)
    Tailoring visualization applications for tasks and users
    Download (pdf)
    cover
    Download (png)
    presentation image
  • 21.
    Bodemar, Gustaf
    Linköping University, Department of Computer and Information Science.
    Data mining historical insights for a software keyword from GitHub and Libraries.io; GraphQL2022Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    This paper explores an approach to extracting historical insights into a software keyword by data mining GitHub and Libraries.io. We test our method using the keyword GraphQL to see what insights we can gain. We managed to plot several timelines of how repositories and software libraries related to our keyword were created over time. We could also do a rudimentary analysis of how active said items were. We also extracted programming language data associated with each repository and library from GitHub and Libraries.io. With this data, we could, at worst, correlate which programming languages were associated with each item or, at best, predict which implementations of GraphQL they used. Through our attempt we found many problems and caveats that needed to be dealt with, but we still concluded that extracting historical insights by data mining GitHub and Libraries.io is worthwhile.
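The timeline plots described above come down to bucketing repository creation timestamps by year. A minimal sketch of that aggregation step, assuming ISO 8601 strings as returned in the GitHub API's `created_at` field (fetching the data itself is omitted):

```python
from collections import Counter

def creation_timeline(created_at_dates):
    """created_at_dates: ISO 8601 timestamps, e.g. '2019-03-14T12:00:00Z',
    as found in the GitHub API's `created_at` field.
    Returns {year: number of repositories created that year}."""
    return dict(Counter(date[:4] for date in created_at_dates))
```

The same bucketing applies to Libraries.io library records, after which the two timelines can be plotted side by side.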

    Download full text (pdf)
    fulltext
  • 22.
    Boström, Axel
    et al.
    Linköping University, Department of Computer and Information Science.
    Börjesson, Oliver
    Linköping University, Department of Computer and Information Science.
    Simulating ADS-B vulnerabilities by imitating aircrafts: Using an air traffic management simulator2022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Air traffic communication is one of the most vital systems for air traffic management controllers. It is used every day to allow millions of people to travel safely and efficiently across the globe. However, many of the systems considered industry standard are used without any form of encryption or authentication, meaning that they are vulnerable to various wireless attacks.

    In this thesis vulnerabilities within an air traffic management system called ADS-B will be investigated. The structure and theory behind this system will be described, as well as the reasons why ADS-B is unencrypted. Two attacks will then be implemented and performed in an open-source air traffic management simulator called openScope. ADS-B data from these attacks will be gathered and combined with actual ADS-B data from genuine aircraft. The collected data will be cleaned and used for machine learning purposes, where three different algorithms will be applied to detect attacks.

    Based on our findings, where two out of the three machine learning algorithms used were able to detect 99.99% of the attacks, we propose that machine learning algorithms should be used to improve ADS-B security. We also think that educating air traffic controllers on how to detect and handle attacks is an important part of the future of air traffic management.
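The thesis detects attacks with machine learning; as a simplified illustration of the kind of feature such a detector can exploit, the sketch below flags consecutive ADS-B position reports whose implied ground speed is physically implausible. The flat-earth distance approximation, field names, and speed threshold are assumptions of this example, not the thesis's actual models:

```python
import math

def implied_speed(msg_a, msg_b):
    """Flat-earth approximation: distance between two ADS-B position
    reports divided by the time between them, in m/s."""
    dlat = (msg_b["lat"] - msg_a["lat"]) * 111_000  # ~metres per degree
    dlon = (msg_b["lon"] - msg_a["lon"]) * 111_000 * math.cos(
        math.radians(msg_a["lat"]))
    dt = msg_b["t"] - msg_a["t"]
    return math.hypot(dlat, dlon) / dt

def flag_spoofed(track, max_speed=350.0):
    """Flag consecutive report pairs whose implied speed exceeds what is
    physically plausible for an aircraft (default ~350 m/s)."""
    return [implied_speed(a, b) > max_speed
            for a, b in zip(track, track[1:])]
```

A learned classifier can consume features like this alongside the raw message fields; a spoofed aircraft that "teleports" produces an implied speed orders of magnitude above the threshold.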

    Download full text (pdf)
    fulltext
  • 23.
    Bowers, Shawn
    et al.
    Computer Science and Engineering Department, Oregon Graduate Institute, USA.
    Delcambre, Lois
    Computer Science and Engineering Department, Oregon Graduate Institute, USA.
    A Generic Representation for Exploiting Model-Based Information2001Report (Other academic)
    Abstract [en]

    There are a variety of ways to represent information and each representation scheme typically has associated tools to manipulate it. In this paper, we present a single, generic representation that can accommodate a broad range of information representation schemes (i.e., structural models), such as XML, RDF, Topic Maps, and various database models. We focus here on model-based information, where the information representation scheme prescribes structural modeling constructs (analogous to a data model in a database). For example, the XML model includes elements, attributes, and permits elements to be nested. Similarly, RDF models information through resources and properties.

    Having a generic representation for a broad range of structural models provides an opportunity to build generic technology to manage and store information. Additionally, we can use the generic representation to exploit a formally defined mapping language to transform information, e.g., from one scheme to another. In this paper, we present the generic representation and the associated mapping formalism to transform information and discuss some of the opportunities and challenges presented by this work.
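One way to picture such a generic representation is to flatten each scheme's constructs into uniform triples. The sketch below is a hypothetical illustration, not the paper's formalism: XML-like nested elements and RDF-like statements both land in the same (subject, construct, value) store, over which generic tooling could then operate:

```python
def rdf_to_generic(triples):
    """RDF-like (subject, property, object) statements map directly."""
    return [(s, "property:" + p, o) for s, p, o in triples]

def xml_to_generic(elem, parent=None, out=None, counter=None):
    """elem: (tag, attrs, children) tuple. Assigns each element a fresh
    node id and emits tag, attribute, and nesting triples."""
    if out is None:
        out, counter = [], [0]
    counter[0] += 1
    node = f"elem{counter[0]}"
    tag, attrs, children = elem
    out.append((node, "element:tag", tag))
    if parent:
        out.append((parent, "element:child", node))
    for key, value in attrs.items():
        out.append((node, "attribute:" + key, value))
    for child in children:
        xml_to_generic(child, node, out, counter)
    return out
```

Once both schemes share this shape, a mapping language can transform information from one scheme to another by rewriting construct labels, which is the opportunity the paper discusses.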

    Download full text (pdf)
    fulltext
  • 24.
    Bruhner, Carl Magnus
    Linköping University, Department of Computer and Information Science.
    Bridging the Privacy Gap: a proposal for enhanced technical mechanisms to strengthen users' privacy control online in the age of GDPR and CCPA2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In the age of the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), privacy and consent control have become even more apparent for everyday users of the internet. Privacy banners in all shapes and sizes ask for your permission through more or less challenging designs and make privacy control more of a struggle than an actual help to users' privacy.

    This thesis presents a novel solution expanding on the Advanced Data Protection Control (ADPC) mechanism in order to bridge current gaps in user data and privacy control. It moves the consent control to the browser interface to give a seamless and hassle-free experience for users, while at the same time offering content providers a way to be legally compliant with legislation including the GDPR.

    Motivated by an extensive academic review to evaluate previous work and identify current gaps in user data control, the aim of this thesis is to present a blueprint for future implementation of suggested features to support privacy control online for users globally.

    Download full text (pdf)
    fulltext
  • 25.
    Bäckman, Love
    et al.
    Linköping University, Department of Computer and Information Science.
    Vedin, Albin
    Linköping University, Department of Computer and Information Science.
    Evaluation of the Protobuf plugin protoc-gen-validate: A performance study2019Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    Data validation is one of several approaches that can be used to increase the stability of a system. Code for validating data can either be written manually or generated from some structure. In this paper we evaluate the performance of protoc-gen-validate, a Google Protocol Buffers compiler plugin which generates code for data validation. With use-case structures from Ericsson and manually constructed structures that test the performance of isolated data type and rule combinations, we produce results that can be used as indicators of the overhead introduced by protoc-gen-validate's validation features. The results show that the CPU time required to validate a message is lower than that of deserializing a message in both Go and C++. It is also shown that the CPU time required to validate a message is lower than that of serializing a message in Go, while validation takes longer than serialization in C++.
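The benchmark methodology, measuring the CPU time of validation against that of (de)serialization, can be illustrated generically. The sketch below uses a hand-written Python validator and JSON serialization as stand-ins for generated protoc-gen-validate code and Protocol Buffers; the rule shown is an assumption of this example, and absolute timings are machine-dependent:

```python
import json
import timeit

def validate(msg):
    """Stand-in for generated validation code: one required-field check
    and one numeric range rule, in the spirit of protoc-gen-validate."""
    return isinstance(msg.get("id"), int) and 0 <= msg["id"] < 1_000_000

msg = {"id": 42, "name": "x" * 100}

# Time both operations over many iterations, as the thesis does per
# data-type/rule combination (here with JSON in place of protobuf).
t_validate = timeit.timeit(lambda: validate(msg), number=10_000)
t_serialize = timeit.timeit(lambda: json.dumps(msg), number=10_000)
```

Comparing t_validate to t_serialize (and to a deserialization timing) per message shape is the shape of the overhead comparison reported in the thesis.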

  • 26.
    Bångerius, Sebastian
    Linköping University, Department of Computer and Information Science.
    LSTM Feature Engineering Through Time Series Similarity Embedding2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Time series prediction has many applications. In cases with simultaneous series (like measurements of weather from multiple stations, or multiple stocks on the stock market) it is not unlikely that these series from different measurement origins behave similarly, or respond to the same contextual signals. Training input to a prediction model could be constructed from all simultaneous measurements to try and capture the relations between the measurement origins. A generalized approach is to train a prediction model on samples from any individual measurement origin. The data mass is the same in both cases, but in the first case, fewer samples of a larger width are used, while the second option uses a higher number of smaller samples. The first, high-width option, risks over-fitting as a result of fewer training samples per input variable. The second, general option, would have no way to learn relations between the measurement origins. Amending the general model with contextual information would allow for keeping a high samples-per-variable ratio without losing the ability to take the origin of the measurements into account. This thesis presents a vector embedding method for measurement origins in an environment with shared response to contextual signals. The embeddings are based on multi-variate time series from the origins. The embedding method is inspired by co-occurrence matrices commonly used in Natural Language Processing. The similarity measures used between the series are Dynamic Time Warping (DTW), Step-wise Euclidean Distance, and Pearson Correlation. The dimensionality of the resulting embeddings is reduced by Principal Component Analysis (PCA) to increase information density, and effectively preserve variance in the similarity space.
The created embedding system allows contextualization of samples, akin to the human intuition that comes from knowing where measurements were taken from, like knowing what sort of company a stock ticker represents, or what environment a weather station is located in. In the embedded space, embeddings of series from fundamentally similar measurement origins are closely located, so that information regarding the behavior of one can be generalized to its neighbors. The resulting embeddings from this work resonate well with existing clustering methods in a weather dataset, and partially in a financial dataset, and do provide performance improvement for an LSTM network acting on said financial dataset. The similarity embeddings also outperform an embedding layer trained together with the LSTM.
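The embedding pipeline described above, pairwise series similarity followed by PCA, can be sketched as follows, using step-wise Euclidean distance (one of the three measures the thesis names) and an SVD-based PCA. The data shapes and names are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def similarity_embeddings(series, dim=2):
    """series: (n_origins, length) array, one time series per origin.
    Each origin's row in the 'co-occurrence-like' matrix is its step-wise
    Euclidean distance to every other origin; PCA then compresses the
    rows to `dim` coordinates per origin."""
    X = np.asarray(series, dtype=float)
    # Pairwise step-wise Euclidean distance matrix, shape (n, n).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Center and project onto the top principal components (PCA via SVD).
    centered = d - d.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dim].T  # (n, dim) embedding
```

Origins with fundamentally similar series end up close in the embedded space, which is the property the thesis relies on to generalize behavior between neighbors.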

    Download full text (pdf)
    fulltext
  • 27.
    Campbell, Walter S.
    et al.
    Univ Nebraska Med Ctr, NE 68198 USA.
    Karlsson, Daniel
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    Vreeman, Daniel J.
    Indiana Univ Sch Med, IN 46202 USA.
    Lazenby, Audrey J.
    Univ Nebraska Med Ctr, NE 68198 USA.
    Talmon, Geoffrey A.
    Univ Nebraska Med Ctr, NE 68198 USA.
    Campbell, James R.
    Univ Nebraska Med Ctr, NE USA.
    A computable pathology report for precision medicine: extending an observables ontology unifying SNOMED CT and LOINC2018In: JAMIA Journal of the American Medical Informatics Association, ISSN 1067-5027, E-ISSN 1527-974X, Vol. 25, no 3, p. 259-266Article in journal (Refereed)
    Abstract [en]

    The College of American Pathologists (CAP) introduced the first cancer synoptic reporting protocols in 1998. However, the objective of a fully computable and machine-readable cancer synoptic report remains elusive due to insufficient definitional content in Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) and Logical Observation Identifiers Names and Codes (LOINC). To address this terminology gap, investigators at the University of Nebraska Medical Center (UNMC) are developing, authoring, and testing a SNOMED CT observable ontology to represent the data elements identified by the synoptic worksheets of CAP. Investigators along with collaborators from the US National Library of Medicine, CAP, the International Health Terminology Standards Development Organization, and the UK Health and Social Care Information Centre analyzed and assessed required data elements for colorectal cancer and invasive breast cancer synoptic reporting. SNOMED CT concept expressions were developed at UNMC in the Nebraska Lexicon© SNOMED CT namespace. LOINC codes for each SNOMED CT expression were issued by the Regenstrief Institute. SNOMED CT concepts represented observation answer value sets. UNMC investigators created a total of 194 SNOMED CT observable entity concept definitions to represent required data elements for CAP colorectal and breast cancer synoptic worksheets, including biomarkers. Concepts were bound to colorectal and invasive breast cancer reports in the UNMC pathology system and successfully used to populate a UNMC biobank. The absence of a robust observables ontology represents a barrier to data capture and reuse in clinical areas founded upon observational information. Terminology developed in this project establishes the model to characterize pathology data for information exchange, public health, and research analytics.

  • 28.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Department of Computer Science and Media Technology, ISOVIS Research Group, Sweden.
    Park, Vilhelm
    Linnaeus University, Department of Computer Science and Media Technology, ISOVIS Research Group, Sweden.
    Kerren, Andreas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linnaeus University, Sweden.
    Evaluating StackGenVis with a Comparative User Study2022In: Proceedings of the 15th IEEE Pacific Visualization Symposium (PacificVis '22), IEEE , 2022, p. 161-165Conference paper (Refereed)
    Abstract [en]

    Stacked generalization (also called stacking) is an ensemble method in machine learning that deploys a metamodel to summarize the predictive results of heterogeneous base models organized into one or more layers. Despite being capable of producing high-performance results, building a stack of models can be a trial-and-error procedure. Thus, our previously developed visual analytics system, entitled StackGenVis, was designed to monitor and control the entire stacking process visually. In this work, we present the results of a comparative user study we performed for evaluating the StackGenVis system. We divided the study participants into two groups to test the usability and effectiveness of StackGenVis compared to Orange Visual Stacking (OVS) in an exploratory usage scenario using healthcare data. The results indicate that StackGenVis is significantly more powerful than OVS based on the qualitative feedback provided by the participants. However, the average completion time for all tasks was comparable between both tools.

  • 29.
    Chen, Boqi
    et al.
    McGill Univ, Canada.
    Chen, Kua
    McGill Univ, Canada.
    Hassani, Shabnam
    Univ Ottawa, Canada.
    Yang, Yujing
    McGill Univ, Canada.
    Amyot, Daniel
    Univ Ottawa, Canada.
    Lessard, Lysanne
    Univ Ottawa, Canada.
    Mussbacher, Gunter
    McGill Univ, Canada.
    Sabetzadeh, Mehrdad
    Univ Ottawa, Canada.
    Varro, Daniel
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. McGill Univ, Canada.
    On the Use of GPT-4 for Creating Goal Models: An Exploratory Study2023In: 2023 IEEE 31ST INTERNATIONAL REQUIREMENTS ENGINEERING CONFERENCE WORKSHOPS, REW, IEEE COMPUTER SOC , 2023, p. 262-271Conference paper (Refereed)
    Abstract [en]

    The emergence of large language models and conversational front-ends such as ChatGPT is revolutionizing many software engineering activities. The extent to which such technologies can help with requirements engineering activities, especially the ones surrounding modeling, however, remains to be seen. This paper reports on early experimental results on the potential use of GPT-4 in the latter context, with a focus on the development of goal-oriented models. We first explore GPT-4's current knowledge and mastery of a specific modeling language, namely the Goal-oriented Requirement Language (GRL). We then use four combinations of prompts, with and without a proposed textual syntax, and with and without contextual domain knowledge, to guide the creation of GRL models for two case studies. The first case study focuses on a well-documented topic in the goal modeling community (Kids Help Phone), whereas the second one explores a context for which, to our knowledge, no public goal models currently exist (Social Housing). We explore the interactive construction of a goal model through specific follow-up prompts aimed at fixing model issues and expanding the model content. Our results suggest that GPT-4 possesses considerable knowledge of goal modeling, and although many elements generated by GPT-4 are generic, reflect what is already in the prompt, or are even incorrect, there is value in getting exposed to the generated concepts, many of which are non-obvious to stakeholders outside the domain. Furthermore, aggregating results from multiple runs yields a far better outcome than any individual run.

  • 30.
    Christiansen, Cecilia
    et al.
    Linköping University, Department of Science and Technology.
    Sandin Värn, Veronica
    Linköping University, Department of Science and Technology.
    Webbaserat system för effektiv registrering och hantering av reklamationer2006Independent thesis Basic level (degree of Bachelor), 10 points / 15 hpStudent thesis
    Abstract [sv]

    Vitamex AB is a Nordic self-care group, headquartered in Norrköping, that manufactures and sells herbal medicines and dietary supplements. Within Vitamex Production AB there is a complaints system for internal and external complaints that is handled on paper. To simplify this handling, a web-based computer system with access control was requested. This thesis describes the development of a web-based system in which the focus has been on usability. The design methodology Usability Engineering was used for the development, and through this process the interface was tested and evaluated. The report describes how we got to know the users and the task through interviews, and how the design process worked by letting the users participate and exert influence throughout the entire design phase. Three prototypes were produced, and these were presented and evaluated in focus groups. The result is a prototype with some interactivity and a focus on usability.

    Download full text (pdf)
    FULLTEXT01
  • 31.
    Costa, Jonathas
    et al.
    NYU, NY 10003 USA.
    Bock, Alexander
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Univ Utah, UT 84112 USA.
    Emmart, Carter
    Amer Museum Nat Hist, NY 10024 USA.
    Hansen, Charles
    Univ Utah, UT 84112 USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). Univ Utah, UT 84112 USA.
    Silva, Claudio
    NYU, NY 10003 USA.
    Interactive Visualization of Atmospheric Effects for Celestial Bodies2021In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 27, no 2, p. 785-795Article in journal (Refereed)
    Abstract [en]

    We present an atmospheric model tailored for the interactive visualization of planetary surfaces. As the exploration of the solar system is progressing with increasingly accurate missions and instruments, the faithful visualization of planetary environments is gaining increasing interest in space research, mission planning, and science communication and education. Atmospheric effects are crucial in data analysis and provide contextual information for planetary data. Our model correctly accounts for the non-linear path of the light inside the atmosphere (in Earth's case), the light absorption effects by molecules and dust particles, such as the ozone layer and the Martian dust, and a wavelength-dependent phase function for Mie scattering. The model focuses on interactivity, versatility, and customization, and a comprehensive set of interactive controls makes it possible to adapt its appearance dynamically. We demonstrate our results using Earth and Mars as examples. However, it can be readily adapted for the exploration of other atmospheres found on, for example, exoplanets. For Earth's atmosphere, we visually compare our results with pictures taken from the International Space Station and against the CIE clear sky model. The Martian atmosphere is reproduced based on available scientific data and feedback from domain experts, and is compared to images taken by the Curiosity rover. The work presented here has been implemented in the OpenSpace system, which enables interactive parameter setting and real-time feedback visualization, targeting presentations in a wide range of environments, from immersive dome theaters to virtual reality headsets.
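    The wavelength dependence that makes atmospheric models like this one render a blue sky comes from the classic Rayleigh relation, in which molecular scattering strength scales as 1/λ⁴. The following is a generic Python illustration of that relation only, not code from the paper's OpenSpace implementation:

```python
def relative_rayleigh(lambda_nm, ref_nm=550.0):
    """Rayleigh scattering strength relative to a reference wavelength.

    Molecular scattering scales as 1/lambda^4, so shorter (bluer)
    wavelengths are scattered far more strongly than longer (redder) ones.
    """
    return (ref_nm / lambda_nm) ** 4

for name, nm in [("red", 650.0), ("green", 550.0), ("blue", 450.0)]:
    print(f"{name}: {relative_rayleigh(nm):.2f}")
# prints red: 0.51, green: 1.00, blue: 2.23
```

    Mie scattering by dust particles, which the paper also models, has a much weaker wavelength dependence, which is one reason dusty Martian skies look so different from Earth's.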

    Download full text (pdf)
    fulltext
  • 32.
    Dahlgren, Eric
    Linköping University, Department of Computer and Information Science, Human-Centered systems.
    Enhancement of an Ad Reviewal Process through Interpretable Anomaly Detecting Machine Learning Models2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Technological advancements made in recent decades in the fields of artificial intelligence (AI) and machine learning (ML) have led to further automation of tasks previously performed by humans. Manually reviewing and assessing content uploaded to social media and marketplace platforms is one such task that is both tedious and expensive to perform, and could possibly be automated through ML-based systems. When introducing ML model predictions to a human decision-making process, interpretability and explainability of models have been shown to be important factors for humans to trust individual sample predictions.

    This thesis project aims to explore the performance of interpretable ML models used together with humans in an ad review process for a rental marketplace platform. Utilizing the XGBoost framework and SHAP for interpretable ML, a system was built with the ability to score an individual ad and explain the prediction with human-readable sentences based on feature importance. The model reached an ROC AUC score of 0.90 and an Average Precision score of 0.64 on a held-out test set. An end-user survey was conducted which indicated some trust in the model and an appreciation for the local prediction explanations, but low general impact and helpfulness. While most related work focuses on model performance, this thesis contributes a smaller model usability study which can provide grounds for utilizing interpretable ML software in any manual decision-making process.
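    The idea of turning per-feature importance values into human-readable explanation sentences can be sketched as follows. This is a minimal illustration with hypothetical feature names and contribution values, not the thesis's actual SHAP pipeline:

```python
def explain(contributions, top_k=2):
    """Convert per-feature contribution scores into plain sentences.

    Positive values raise the anomaly score, negative values lower it;
    features are reported in order of absolute impact.
    """
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    sentences = []
    for feature, value in ranked[:top_k]:
        direction = "increased" if value > 0 else "decreased"
        sentences.append(
            f"The feature '{feature}' {direction} the anomaly score by {abs(value):.2f}."
        )
    return sentences

# Hypothetical contributions for a single ad
scores = {"price_deviation": 0.41, "description_length": -0.05, "image_count": 0.12}
for sentence in explain(scores):
    print(sentence)
```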

    Download full text (pdf)
    fulltext
  • 33.
    Danielsson, Bengt
    et al.
    Linköping University, Department of Science and Technology, Physics, Electronics and Mathematics. Linköping University, Faculty of Science & Engineering.
    Santini, Marina
    RISE Res Inst Sweden, Sweden.
    Lundberg, Peter
    Linköping University, Department of Health, Medicine and Caring Sciences, Division of Diagnostics and Specialist Medicine. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Diagnostics, Medical radiation physics. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Al-Abasse, Yosef
    Linköping University, Department of Health, Medicine and Caring Sciences, Division of Diagnostics and Specialist Medicine. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Diagnostics, Medical radiation physics.
    Jönsson, Arne
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Eneling, Emma
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    Stridsman, Magnus
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    Classifying Implant-Bearing Patients via their Medical Histories: a Pre-Study on Swedish EMRs with Semi-Supervised GAN-BERT2022In: LREC 2022: THIRTEEN INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, EUROPEAN LANGUAGE RESOURCES ASSOC-ELRA , 2022, p. 5428-5435Conference paper (Refereed)
    Abstract [en]

    In this paper, we compare the performance of two BERT-based text classifiers whose task is to classify patients (more precisely, their medical histories) as having or not having implant(s) in their body. One classifier is a fully-supervised BERT classifier. The other one is a semi-supervised GAN-BERT classifier. Both models are compared against a fully-supervised SVM classifier. Since fully-supervised classification is expensive in terms of data annotation, with the experiments presented in this paper, we investigate whether we can achieve a competitive performance with a semi-supervised classifier based only on a small amount of annotated data. Results are promising and show that the semi-supervised classifier has a competitive performance when compared with the fully-supervised classifier.

  • 34.
    de Roos, Hans
    University of Amsterdam, Amsterdam, the Netherlands; Free University, Berlin, Germany.
    The Digital Sculpture Project. Applying 3D Scanning Techniques for the Morphological Comparison of Sculptures2004Report (Other academic)
    Abstract [en]

    Over the last decade, highly accurate mobile 3D measurement technologies have become available and are now widely used in industry and entertainment. In the cultural heritage field, various 3D pilot projects for conservation and restoration purposes have already been initiated or completed. My Digital Sculpture Project, started in November 2001, focuses on establishing an efficient workflow for creating high-resolution geometric models of both small and large plaster, terra-cotta and bronze sculptures, with a limited budget and a small support team, in improvised, non-laboratory situations and within narrow time windows, as encountered in the course of a significant series of museum visits all over Europe. Specific requirements and scanning strategies for scanning complex sculptures are discussed, along with a series of scanner tests. The article explains possible applications of 3D documentation methods, especially their relevance for comparative morphological analysis and issues of originality and authenticity. To demonstrate the use of 3D difference maps, this paper presents an exact and objective comparison of two monumental plasters of Auguste Rodin's 'Thinker', located in France and Poland respectively, conducted in December 2003.

    Download full text (pdf)
    fulltext
  • 35.
    Dee, Laura E.
    et al.
    University of Minnesota Twin Cities, MN 55108 USA; University of Minnesota Twin Cities, MN 55108 USA.
    Allesina, Stefano
    University of Chicago, IL 60637 USA; University of Chicago, IL 60637 USA.
    Bonn, Aletta
    UFZ Helmholtz Centre Environm Research, Germany; Friedrich Schiller University of Jena, Germany; German Centre Integrat Biodivers Research iDiv, Germany.
    Eklöf, Anna
    Linköping University, Department of Physics, Chemistry and Biology, Theoretical Biology. Linköping University, Faculty of Science & Engineering.
    Gaines, Steven D.
    University of Calif Santa Barbara, CA 93117 USA.
    Hines, Jes
    German Centre Integrat Biodivers Research iDiv, Germany; University of Leipzig, Germany.
    Jacob, Ute
    German Centre Integrat Biodivers Research iDiv, Germany; University of Goettingen, Germany.
    McDonald-Madden, Eve
    University of Queensland, Australia.
    Possingham, Hugh
    University of Queensland, Australia.
    Schroeter, Matthias
    UFZ Helmholtz Centre Environm Research, Germany; German Centre Integrat Biodivers Research iDiv, Germany.
    Thompson, Ross M.
    University of Canberra, Australia.
    Operationalizing Network Theory for Ecosystem Service Assessments2017In: Trends in Ecology & Evolution, ISSN 0169-5347, E-ISSN 1872-8383, Vol. 32, no 2, p. 118-130Article, review/survey (Refereed)
    Abstract [en]

    Managing ecosystems to provide ecosystem services in the face of global change is a pressing challenge for policy and science. Predicting how alternative management actions and changing future conditions will alter services is complicated by interactions among components in ecological and socioeconomic systems. Failure to understand those interactions can lead to detrimental outcomes from management decisions. Network theory that integrates ecological and socioeconomic systems may provide a path to meeting this challenge. While network theory offers promising approaches to examine ecosystem services, few studies have identified how to operationalize networks for managing and assessing diverse ecosystem services. We propose a framework for how to use networks to assess how drivers and management actions will directly and indirectly alter ecosystem services.

  • 36.
    Delavennat, Julien
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Physically-based Real-time Glare2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The theme of this master's thesis is the real-time rendering of glare as seen through human eyes, as a post-processing effect applied to a first-person view in a 3D application. Several techniques already exist, and the basis for this project is a paper from 2009 titled Temporal Glare: Real-Time Dynamic Simulation of the Scattering in the Human Eye, by Ritschel et al. The goal of my project was initially to implement that paper as part of a larger project, but it turned out that there were opportunities to build upon aspects of the techniques described in Temporal Glare; consequently, these opportunities have been explored and constitute the main substance of this project.

    Download full text (pdf)
    fulltext
  • 37.
    Dimitriadis, Spyridon
    Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning.
    Multi-task regression QSAR/QSPR prediction utilizing text-based Transformer Neural Network and single-task using feature-based models2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With the recent advances of machine learning in cheminformatics, the drug discovery process has been accelerated, providing a high impact in the field of medicine and public health. Molecular property and activity prediction are key elements in the early stages of drug discovery, helping prioritize experiments and reduce experimental work. In this thesis, a novel approach for multi-task regression using a text-based Transformer model is introduced and thoroughly explored for training on a number of properties or activities simultaneously. This multi-task regression with a Transformer-based model is inspired by the field of Natural Language Processing (NLP) and uses prefix tokens to distinguish between tasks. In order to investigate our architecture, two data categories are used: 133 biological activities from the ExCAPE database and three physical chemistry properties from the MoleculeNet benchmark datasets.

    The Transformer model consists of the embedding layer with positional encoding, a number of encoder layers, and a Feedforward Neural Network (FNN) to turn it into a regression problem. The molecules are represented as strings of characters using the Simplified Molecular-Input Line-Entry System (SMILES), which is a 'chemistry language' with its own syntax. In addition, the effect of Transfer Learning is explored by experimenting with two pretrained Transformer models, pretrained on 1.5 million and on 100 million molecules. The text-based Transformer models are compared with a feature-based Support Vector Regression (SVR) with the Tanimoto kernel, where the input molecules are encoded as Extended Connectivity Fingerprints (ECFP), which are calculated features.

    The results have shown that Transfer Learning is crucial for improving performance on both property and activity predictions. On bioactivity tasks, the larger Transformer pretrained on 100 million molecules achieved comparable performance to the feature-based SVR model; however, overall SVR performed better on the majority of the bioactivity tasks. On the other hand, on physicochemistry property tasks, the larger pretrained Transformer outperformed SVR on all three tasks. In conclusion, the multi-task regression architecture with the prefix token had performance comparable to the traditional feature-based approach for predicting different molecular properties or activities. Lastly, using larger models pretrained on a wide chemical space can play a key role in improving the performance of Transformer models on these tasks.
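    The prefix-token scheme described above can be sketched in a few lines of Python. The task names, token format, and SMILES string below are hypothetical illustrations, not the thesis's actual vocabulary:

```python
def make_multitask_input(task_token, smiles):
    """Prepend a task-identifying token so one model can be trained on
    several regression targets over the same SMILES input."""
    return f"<{task_token}> {smiles}"

# One molecule (ethanol) paired with three hypothetical tasks
smiles = "CCO"
tasks = ["logP", "solubility", "target_A_activity"]
inputs = [make_multitask_input(t, smiles) for t in tasks]
print(inputs[0])  # <logP> CCO
```

    At inference time the same molecule can then be scored for any task simply by switching the prefix, which is what makes the single shared model multi-task.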

    Download full text (pdf)
    fulltext
  • 38.
    Do Ruibin, Kevin
    et al.
    Linköping University, Department of Management and Engineering.
    Vintilescu Borglöv, Tobias
    Linköping University, Department of Management and Engineering.
    Predicting Customer Lifetime Value: Understanding its accuracy and drivers from a frequent flyer program perspective2018Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Each individual customer relationship represents a valuable asset to the firm. Loyalty programs serve as one of the key activities in managing these relationships and the well-developed frequent flyer programs in the airline industry is a prime example of this. Both marketing scholars and practitioners, though, have shown that the linkage between loyalty and profit is not always clear. In marketing literature, customer lifetime value is proposed as a suitable forward-looking metric that can be used to quantify the monetary value that customers bring back to the firm and can thus serve as a performance metric for loyalty programs. To consider the usefulness of these academic findings, this study has evaluated the predicted airline customer lifetime value as a loyalty program performance metric and evaluated the drivers of customer lifetime value from a frequent flyer program perspective.

    In this study, the accuracy of the Pareto/NBD Gamma-Gamma customer lifetime value model has been evaluated on a large dataset supplied by a full-service carrier belonging to a major airline alliance. By comparing the accuracy to a managerial heuristic used by the studied airline, the suitability as a managerial tool was determined. Furthermore, based on existing literature, the drivers of customer lifetime value from a frequent flyer perspective were identified and analyzed through a regression analysis of behavioral data supplied by the studied airline.

    The analysis of the results of this study shows that the Pareto/NBD customer lifetime value model outperforms the managerial heuristic in predicting customer lifetime value in regard to almost all error metrics that have been calculated. At an aggregate-level, the errors are considered small in relation to average customer lifetime value, whereas at an individual-level, the errors are large. When evaluating the drivers of customer lifetime value, points-pressure, rewarded-behavior, and cross-buying have a positive association with customer lifetime value.

    This study concludes that the Pareto/NBD customer lifetime value predictions are only suitable as a managerial tool on an aggregate-level. Furthermore, the loyalty program mechanisms studied have a positive effect on the airline customer lifetime value. The implications of these conclusions are that customer lifetime value can be used as a key performance indicator of behavioral loyalty, but the individual-level predictions should not be used to allocate marketing resources for individual customers. To leverage the drivers of customer lifetime value in frequent flyer programs, cross-buying and the exchange of points for free flights should be facilitated and encouraged.
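    The aggregate-level versus individual-level distinction that the conclusion rests on can be illustrated generically: individual prediction errors can be large yet cancel out in the aggregate. The numbers below are hypothetical, not the airline's data:

```python
def individual_mae(pred, actual):
    """Mean absolute error over individual customers."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def aggregate_error(pred, actual):
    """Absolute error of the aggregate (mean) prediction."""
    return abs(sum(pred) / len(pred) - sum(actual) / len(actual))

# Hypothetical CLV predictions: large per-customer errors, perfect aggregate
pred = [120.0, 40.0, 260.0, 80.0]
actual = [60.0, 110.0, 200.0, 130.0]
print(individual_mae(pred, actual))   # 60.0
print(aggregate_error(pred, actual))  # 0.0
```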

    Download full text (pdf)
    fulltext
  • 39.
    Doyle, Scott
    et al.
    Rutgers University, Dept. of Biomedical Engineering, Piscataway, NJ, USA.
    Monaco, James
    Rutgers University, Dept. of Biomedical Engineering, Piscataway, NJ, USA.
    Madabhushi, Anant
    Rutgers University, Dept. of Biomedical Engineering, Piscataway, NJ, USA.
    Lindholm, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Siemens Corporate Research, Princeton, NJ, USA.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Siemens Corporate Research, Princeton, NJ, USA.
    Ladic, Lance
    Siemens Corporate Research, Princeton, NJ, USA.
    Tomaszewski, John
    University of Pennsylvania, Dept. of Surgical Pathology, Philadelphia, PA, USA.
    Feldman, Michael
    University of Pennsylvania, Dept. of Surgical Pathology, Philadelphia, PA, USA.
    Evaluation of effects of JPEG2000 compression on a computer-aided detection system for prostate cancer on digitized histopathology2010In: Biomedical Imaging: From Nano to Macro, 2010 IEEE International Symposium on, 2010, p. 1313-1316Conference paper (Refereed)
    Abstract [en]

    A single digital pathology image can occupy over 10 gigabytes of hard disk space, rendering it difficult to store, analyze, and transmit. Though image compression provides a means of reducing the storage requirement, its effects on computer-aided diagnosis (CAD) and pathologist performance are not yet clear. In this work we assess the impact of compression on the ability of a CAD system to detect carcinoma of the prostate (CaP) on histological sections. The CAD algorithm proceeds as follows: Glands in the tissue are segmented using a region-growing algorithm, and the size of each gland is extracted. A Markov prior (specifically, a probabilistic pairwise Markov model) is employed to encourage nearby glands to share the same class (i.e. cancerous or non-cancerous). Finally, cancerous glands are aggregated into continuous regions using a distance-hull algorithm. We trained the CAD system on 28 images of whole-mount histology (WMH) and evaluated performance on 12 images compressed at 14 different compression ratios (a total of 168 experiments) using JPEG2000. Algorithm performance (measured using the area under the receiver operating characteristic curve) remains relatively constant for compression ratios up to 1:256, beyond which performance degrades precipitously. For completeness we also had an expert pathologist view a randomly selected set of compressed images from one of the whole-mount studies and assign a confidence measure as to their diagnostic fidelity. Pathologist confidence declined with increasing compression ratio as the information necessary to diagnose the sample was lost, dropping from 100% confidence at ratio 1:64 to 0% at ratio 1:8192.
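    The performance metric used here, the area under the ROC curve, can be computed directly as a rank statistic: the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. The following is a generic Python sketch with toy labels and scores, not the paper's data:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Counts, over all positive/negative pairs, how often the positive
    sample is scored higher (ties count as 1/2).
    """
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy detector scores for cancerous (1) and benign (0) glands
labels = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.7, 0.2]
print(roc_auc(labels, scores))  # 0.75
```

    An AUC near 1.0 means cancerous glands are almost always scored above benign ones; an AUC of 0.5 is chance level, which is why the metric is robust to the score scale and suits degradation studies like this one.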

  • 40.
    Eilert, Rickard
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Development of a framework for creating cross-platform TV HTML5 applications2015Independent thesis Basic level (professional degree), 10.5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    When developing HTML5 applications for TV platforms, the TV platforms provide, in addition to standard HTML5 functionality, extra APIs for TV-specific features. These extra APIs differ between TV platforms, and that is a problem when developing an application targeting several platforms. This thesis has examined if it is possible to design a framework which provides the developer with one API that works for many platforms by wrapping their platform-specific code. The answer is yes. With success, platform-specific features including TV remote control input, video, volume, Internet connection status, TV channel streams and EPG data have been harmonised under an API in a JavaScript library. Furthermore, a build system packages the code in the way the platforms expect. The framework eases the development of TV platform HTML5 applications. At the moment, the framework supports the Pace, PC and Samsung Smart TV platforms, but it can be extended with more TV platform back-ends.

    Download full text (pdf)
    fulltext
  • 41.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kottravel, Sathish
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Visual Computing Research Group, Ulm University.
    A Crowdsourcing System for Integrated and Reproducible Evaluation in Scientific Visualization2016In: 2016 IEEE Pacific Visualization Symposium (PacificVis), IEEE Computer Society, 2016, p. 40-47Conference paper (Refereed)
    Abstract [en]

    User evaluations have gained increasing importance in visualization research over the past years, as in many cases these evaluations are the only way to support the claims made by visualization researchers. Unfortunately, recent literature reviews show that in comparison to algorithmic performance evaluations, the number of user evaluations is still very low. Reasons for this are the amount of time required to conduct such studies together with the difficulties involved in participant recruitment and result reporting. While it has been shown that the quality of evaluation results and the simplified participant recruitment of crowdsourcing platforms make this technology a viable alternative to lab experiments when evaluating visualizations, the time for conducting and reporting such evaluations is still very high. In this paper, we propose a software system which integrates the conducting, analysis, and reporting of crowdsourced user evaluations directly into the scientific visualization development process. With the proposed system, researchers can conduct and analyze quantitative evaluations on a large scale through an evaluation-centric user interface with only a few mouse clicks. Thus, it becomes possible to perform iterative evaluations during algorithm design, which potentially leads to better results compared to the time-consuming user evaluations traditionally conducted at the end of the design process. Furthermore, the system is built around a centralized database, which supports easy reuse of old evaluation designs and the reproduction of old evaluations with new or additional stimuli, both of which are driving challenges in scientific visualization research. We describe the system's design and the considerations made during the design process, and demonstrate the system by conducting three user evaluations, all of which have been published before in the visualization literature.

    Download full text (pdf)
    fulltext
    Download (pdf)
    Appendix
  • 42.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Visual Computing Research Group, Ulm University.
    Evaluating the perception of semi-transparent structures in direct volume rendering techniques2016In: Proceeding SA '16 SIGGRAPH ASIA 2016 Symposium on Visualization, Association for Computing Machinery (ACM), 2016Conference paper (Refereed)
    Abstract [en]

    Direct volume rendering (DVR) provides the possibility to visualize volumetric data sets as they occur in many scientific disciplines. A key benefit of DVR is that semi-transparency can be used to convey the complexity of the visualized data. Unfortunately, semi-transparency introduces new challenges in spatial comprehension of the visualized data, as the ambiguities inherent to semi-transparent representations affect spatial comprehension. Accordingly, many visualization techniques have been introduced to enhance the spatial comprehension of DVR images. In this paper, we conduct a user evaluation in which we compare standard DVR with five visualization techniques which have been proposed to enhance the spatial comprehension of DVR images. In our study, we investigate the perceptual performance of these techniques and compare them against each other to find out which technique is most suitable for different types of data and purposes. In order to do this, a large-scale user study was conducted with 300 participants who completed a number of micro-tasks designed such that the aggregated feedback gives us insight on how well these techniques aid the end user to perceive depth and shape of objects. Within this paper we discuss the tested techniques, present the conducted study and analyze the retrieved results.

    Download full text (pdf)
    fulltext
  • 43.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ultrasound Surface Extraction Using Radial Basis Functions2014In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part II / [ed] George Bebis, Springer Publishing Company, 2014, Vol. 8888, p. 163-172Conference paper (Refereed)
    Abstract [en]

    Data acquired from ultrasound examinations is of interest not only for the physician, but also for the patient. While the physician uses the ultrasound data for diagnostic purposes, the patient might be more interested in beautiful images in the case of prenatal imaging. Ultrasound data is noisy by nature, and visually compelling 3D renderings are not always trivial to produce. This paper presents a technique which enables extraction of a smooth surface mesh from the ultrasound data by combining previous research in ultrasound processing with research in point cloud surface reconstruction. After filtering the ultrasound data using Variational Classification, we extract a set of surface points. This set of points is then used to train an Adaptive Compactly Supported Radial Basis Functions system, a technique for surface reconstruction of noisy laser scan data. The resulting technique can be used to extract surfaces with adjustable smoothness and resolution, and has been tested on various ultrasound datasets.

    Download full text (pdf)
    fulltext
  • 44.
    Envall, David
    et al.
    Linköping University, Department of Computer and Information Science.
    Blåberg Kristoffersson, Paul
    Linköping University, Department of Computer and Information Science.
    The buzz behind the stock market: Analysis and characterization of the social media activity around the time of big stock valuation changes2022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    As the discussion of stocks on social media increases, its effect on the financial market becomes distinct. This has led to new opportunities to influence private investors into making uninformed decisions affecting the value of stocks. This thesis aims to enable readers to distinguish patterns in social media discussion regarding stocks and thus provide an understanding of the effect it has on public opinion. By identifying significant events of big stock valuation changes and collecting corresponding stock-related data from the social media platforms Reddit and Twitter, analyses of post frequency and sentiment were performed. The results display trends of an increase in discussion on social media leading up to the occurrence of significant events and an overall increase of interest online for specific stocks after significant events have occurred. Furthermore, the overall sentiment in the discussion for both increasing and decreasing events is positive in almost every case, although the sentiment score of increasing events is higher than that of decreasing ones. The day-to-day sentiment score during events indicates a much higher fluctuation in sentiment for Reddit compared to Twitter. However, a significant increase in score the day before an event occurs is prevalent for both. These findings imply the possibility of predicting stock valuation changes using data gathered from social media platforms.
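    The per-day sentiment scoring described above can be sketched as follows. This is a toy lexicon-based scorer for illustration only; the thesis does not specify its sentiment model, and the word lists here are invented:

```python
# Hypothetical mini-lexicons; a real study would use a proper sentiment model.
POSITIVE = {"moon", "buy", "bullish", "gains", "up"}
NEGATIVE = {"crash", "sell", "bearish", "losses", "down"}

def sentiment_score(post: str) -> float:
    """Score one post in [-1, 1]: (positives - negatives) / matched words."""
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

def daily_mean_sentiment(posts_by_day):
    """Map each day to the mean sentiment of that day's posts."""
    return {day: sum(map(sentiment_score, posts)) / len(posts)
            for day, posts in posts_by_day.items() if posts}
```

    Plotting the daily means around a detected valuation-change event is what reveals the pre-event increase in score the thesis reports.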

    Download full text (pdf)
    fulltext
  • 45.
    Eriksson, Björn
    et al.
    Linköping University, Department of Management and Engineering, Fluid and Mechanical Engineering Systems. Linköping University, The Institute of Technology.
    Nordin, Peter
    Linköping University, Department of Management and Engineering, Fluid and Mechanical Engineering Systems. Linköping University, The Institute of Technology.
    Krus, Petter
    Linköping University, Department of Management and Engineering, Fluid and Mechanical Engineering Systems. Linköping University, The Institute of Technology.
    Hopsan NG, A C++ Implementation using the TLM Simulation Technique2010In: SIMS 2010 Proceedings, The 51st Conference on Simulation and Modelling, 14-15 October 2010 Oulu, Finland / [ed] Esko Juuso, Oulu, Finland, 2010Conference paper (Refereed)
    Abstract [en]

    The Hopsan simulation package, used primarily for hydro-mechanical simulation, was first released in 1977. Modeling in Hopsan is based on a method using transmission line modeling, TLM. In TLM, component models are decoupled from each other through time delays. As components are decoupled and use distributed solvers, the simulation environment is suitable for distributed simulations. No numerical errors are introduced at simulation time when using TLM; all errors are related to modeling errors. This yields robust and fast simulations where the size of the time step does not have to be adjusted to achieve a numerically stable simulation. The distributed nature of TLM makes it convenient for multi-core approaches and high-speed simulations. The latest version of Hopsan was released in August 2002, but now the next generation of this simulation package is being developed. This paper presents the development version of Hopsan NG and discusses some of its features and possible uses.
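    The decoupling through time delays can be sketched with the standard TLM wave-variable relations (p = c + Zc·q at each end, with each end's incoming wave c computed only from the other end's delayed state). This is a deliberately simplified unit-delay line with idealized boundary conditions, not Hopsan's actual implementation:

```python
def simulate_blocked_line(Zc, q_in, steps):
    """Minimal TLM line element with a one-step time delay.

    End 1 is fed a constant flow q_in; end 2 is blocked (q2 = 0).
    Each step, every end updates using only *delayed* information from
    the other end, so the two ends never have to be solved together --
    this is the decoupling that makes TLM suited to distributed solvers.
    Returns the pressure history at the blocked end.
    """
    c1 = c2 = 0.0  # wave variables arriving at each end
    p2_history = []
    for _ in range(steps):
        c1_next = c2 + 2.0 * Zc * 0.0    # wave from end 2 (blocked, q2 = 0)
        c2_next = c1 + 2.0 * Zc * q_in   # wave from end 1 (constant inflow)
        c1, c2 = c1_next, c2_next
        p2_history.append(c2)            # p2 = c2 + Zc * q2, and q2 = 0
    return p2_history
```

    Pumping a constant flow into a blocked line makes the pressure at the far end ramp up step by step, which the sketch reproduces; no global equation system is ever assembled.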

  • 46. Ernvik, Aron
    et al.
    Bergström, Staffan
    Lundström, Claes
    Linköping University, Department of Science and Technology, Media and Information Technology.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology.
    Ynnerman, Anders
    Linköping University.
    Image data set compression based on viewing parameters for storing medical image data from multidimensional data sets, related systems, methods and computer products2012Patent (Other (popular science, discussion, etc.))
  • 47.
    Fahl, Gustav
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Object views of relational data in multidatabase systems1994Licentiate thesis, monograph (Other academic)
    Abstract [en]

    In a multidatabase system it is possible to access and update data residing in multiple databases. The databases may be distributed, heterogeneous, and autonomous. The first part of the thesis provides an overview of different kinds of multidatabase system architectures and discusses their relative merits. In particular, it presents the AMOS multidatabase system architecture which we have designed with the purpose of combining the advantages and minimizing the disadvantages of the different kinds of proposed architectures.

    A central problem in multidatabase systems is that of data model heterogeneity: the fact that the participating databases may use different conceptual data models. A common way of dealing with this is to use a canonical data model (CDM). Object-oriented data models, such as the AMOS data model, have all the essential properties which make a data model suitable as the CDM. When a CDM is used, the schemas of the participating databases are mapped to equivalent schemas in the CDM. This means that the data model heterogeneity problem in AMOS is equivalent to the problem of defining an object-oriented view (or object view for short) over each participating database.

    We have developed such a view mechanism for relational databases. This is the topic of the second part of the thesis. We discuss the relationship between the relational data model and the AMOS data model and show, in detail, how queries to the object view are processed.

    We discuss the key issues when an object view of a relational database is created, namely: how to provide the concept of object identity in the view; how to represent relational database access in query plans; how to handle the fact that the extension of types in the view depends on the state of the relational database; and how to map relational structures to subtype/supertype hierarchies in the view.

    A special focus is on query optimization.

  • 48.
    Fjellborg, Björn
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    An approach to extraction of pipeline structures for VLSI high-level synthesis1990Licentiate thesis, monograph (Other academic)
    Abstract [en]

    One of the concerns in high-level synthesis is how to efficiently exploit the potential concurrency in a design. Pipelining achieves a high degree of concurrency, and a certain structural regularity, through exploitation of locality in communication. However, pipelining cannot be applied to all designs. Pipeline extraction localizes parts of the design that can benefit from pipelining. Such extraction is a first step in pipeline synthesis. While current pipeline synthesis systems are restricted to exploitation of loops, this thesis addresses the problem of extracting pipeline structures from arbitrary designs without apparent pipelining properties. Therefore, an approach that is based on pipelining of individual computations is explored. Still, loops constitute an important special case and can be encompassed within the approach in an efficient way. The general formulation of the approach cannot be applied directly for extraction purposes because of a combinatorial explosion of the design space. An iterative search strategy to handle this problem is presented. A specific polynomial-time algorithm based on this strategy, using several additional heuristics to reduce complexity, has been implemented in the PiX system, which operates as a preprocessor to the CAMAD VLSI design system. The input to PiX is an algorithmic description in a Pascal-like language, which is translated into the Extended Timed Petri Net (ETPN) representation. The extraction is realized as analysis of and transformations on the ETPN. Preliminary results from PiX show that the approach is feasible and useful for realistic designs.

  • 49.
    Flodmark, Axel
    et al.
    Linköping University, Department of Computer and Information Science.
    Jakum, Markus
    Linköping University, Department of Computer and Information Science.
    Characterizing Bitcoin Spam Emails: An analysis of what makes certain Bitcoin spams generate millions of dollars2022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Bitcoin scams cause billions of dollars worth of damage every year, targeting both large corporations as well as individuals. A commonly used method for these scams is spam emails. These emails all share the same intention of trying to trick people into sending Bitcoin to the address attached in the emails, which can be done using various methods like threats and social engineering. This thesis investigates Bitcoin spam emails and tries to distinguish the characteristics of the successful ones. The study was conducted by collecting data on 250,000+ Bitcoin addresses from emails that have all been reported as spam to the Bitcoinabuse website. These addresses were analyzed using their number of transactions and final balance, which were extracted with a Python script using Blockchain's public API. It was found that the successful Bitcoin spam emails only made up a tiny percentage of the entire data set. Looking at the most successful subset of spams, a few key aspects were found that distinguished them from the rest. The most successful spam emails were using blackmail techniques such as sextortion and ransomware to fool their victims. This method is believed to work so well because of the emotional response it invokes from the victims, which in many cases is enough for them to fold. In addition, luck seemed to play a rather big role for the scams to work. The emails had to reach the perfect target: a person that would fall for the trick, have money to send, as well as the knowledge to buy and transfer Bitcoin. To increase the odds of finding these types of people, the scammers send emails in large volumes.
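    The per-address extraction step can be sketched with Blockchain.com's public `rawaddr` endpoint, which returns JSON including a transaction count and a final balance in satoshi. This is an assumed reconstruction of the thesis's Python script, not its actual code; the field names `n_tx` and `final_balance` are those documented for that endpoint:

```python
import json
from urllib.request import urlopen

RAWADDR_URL = "https://blockchain.info/rawaddr/{}"  # Blockchain.com public API

def parse_address_stats(record):
    """Extract transaction count and final balance (in BTC) from one
    rawaddr JSON record; `final_balance` is reported in satoshi."""
    return {
        "n_tx": record["n_tx"],
        "final_balance_btc": record["final_balance"] / 1e8,  # satoshi -> BTC
    }

def fetch_address_stats(address):
    """Fetch live stats for one address (requires network access)."""
    with urlopen(RAWADDR_URL.format(address)) as resp:
        return parse_address_stats(json.load(resp))
```

    Running this over the 250,000+ reported addresses and ranking by `final_balance_btc` is what separates the tiny successful subset from the rest.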

    Download full text (pdf)
    fulltext
  • 50.
    Fontan, Angela
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science & Engineering.
    Altafini, Claudio
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science & Engineering.
    A signed network perspective on the government formation process in parliamentary democracies2021In: Scientific Reports, E-ISSN 2045-2322, Vol. 11, no 1, article id 5134Article in journal (Refereed)
    Abstract [en]

    In parliamentary democracies, government negotiation talks following a general election can sometimes be a long and laborious process. To explain this phenomenon, in this paper we use structural balance theory to represent a multiparty parliament as a signed network, with edge signs representing alliances and rivalries among parties. We show that the notion of frustration, which quantifies the amount of "disorder" encoded in the signed graph, correlates very well with the duration of the government negotiation talks. For the 29 European countries considered in this study, the average correlation between frustration and the duration of government negotiation talks ranges between 0.42 and 0.69, depending on what information is included in the edges of the signed network. Dynamical models of collective decision-making over signed networks with varying frustration are proposed to explain this correlation.
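    The frustration of a signed graph can be computed exactly for small networks: it is the minimum, over all ways of splitting the parties into two camps, of the number of edges whose sign disagrees with the split. A brute-force sketch (illustrative only; the paper works with much larger parliaments and optimization-based methods):

```python
from itertools import product

def frustration_index(n, signed_edges):
    """Exact frustration index by brute force over all +/-1 node labellings.

    `signed_edges` is a list of (i, j, sign) with sign in {+1, -1} encoding
    alliance (+1) or rivalry (-1).  An edge is frustrated under labelling s
    when sign * s[i] * s[j] < 0.  Exponential in n, so only for small graphs.
    """
    best = len(signed_edges)
    for s in product((1, -1), repeat=n):
        frustrated = sum(1 for i, j, sign in signed_edges
                         if sign * s[i] * s[j] < 0)
        best = min(best, frustrated)
    return best
```

    A structurally balanced parliament (cleanly splittable into two mutually hostile but internally allied camps) has frustration 0; the paper's finding is that higher frustration goes with longer negotiation talks.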

    Download full text (pdf)
    fulltext