liu.se: Search for publications in DiVA
1 - 50 of 130
  • 1.
    Albertsson, Marcus
    et al.
    Linköping University, Department of Computer and Information Science.
    Öberg Bustad, Adrian
    Linköping University, Department of Computer and Information Science.
    Sundmark, Mattias
    Linköping University, Department of Computer and Information Science.
    Gerde, Elof
    Linköping University, Department of Computer and Information Science.
    Boberg, Jessika
    Linköping University, Department of Computer and Information Science.
    Abdulla, Ariyan
    Linköping University, Department of Computer and Information Science.
    Danielsson, Oscar
    Linköping University, Department of Computer and Information Science.
    Johnsson Bittmann, Felicia
    Linköping University, Department of Computer and Information Science.
    Moberg, Anton
    Linköping University, Department of Computer and Information Science.
    Hur en webbapplikation kan utvecklas för att leverera säkerhet, handlingsbarhet och navigerbarhet: PimpaOvven – Utveckling av en e-butik för märken och accessoarer till studentoveraller [How a web application can be developed to deliver security, actability and navigability: PimpaOvven – development of an e-shop for patches and accessories for student overalls] (2017). Independent thesis Basic level (degree of Bachelor), 12 credits / 18 HE credits. Student thesis.
    Abstract [en]

    Among students at many of Sweden's universities, the student overall is an established possession. Many students like to decorate their overalls with embroidered patches and other types of accessories; the supply of these is, however, limited. This report presents the development process and result of the web application "PimpaOvven", an e-shop with the purpose of increasing the accessibility of patches and overall accessories. The development has been iterative and focused on building a secure web application that provides a usable environment in terms of actability and navigability, and that also gives the user an impression of security. The methods that produced the resulting web application, together with the frame of reference, form the basis of the report's discussion. During the project, numerous usability and security tests were conducted; from these tests, together with the report's discussion, the conclusion was drawn that the produced web application was secure and usable.

  • 2.
    Andersson, Peter
    Linköping University, Department of Thematic Studies, Technology and Social Change. Linköping University, Faculty of Arts and Sciences.
    Informationsteknologi i organisationer: bestämningsfaktorer och mönster [Information technology in organizations: determinants and patterns] (1989). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    Researchers in this field have placed different emphasis on the structural constraints vis-à-vis the freedom of the actors. An awareness that IT is a social construction does not necessarily mean that some individual actor or actor entity can perform freely. An inflexible, tightly structured social situation can considerably limit the action space. Actors are hemmed in by "objective" circumstances, i.e., a rather closely controlled situation established by other actors, which is apparently unyielding in the face of technological decisions.

    By creating a perspective that addresses both structural and actor aspects, this study attempts a holistic understanding that will lay bare the probable dialectic process between the changeable and the nonchangeable. This aspiration to comprehend the whole, when viewed against the complex character of the subject, calls for an understanding-oriented approach.

    The study at hand deals with the choice of information technology in organizations, with special focus on automatic data processing (ADP) for administrative purposes. Its main aim is to improve an understanding of factors that determine the choice of ADP technology in organizations.

    The empirical section of the work at hand consists of two case studies and an overview study. The case studies, which concern two extensive ADP projects, are emphasized. The purpose of these two projects was to raise the degree of computerization and to choose both a configuration and a degree of uniformity. In both cases, however, the configuration turned out to be the most critical issue. One concerned the administration of social insurance in Sweden, Rationalisering av den allmänna försäkringens administration (Rationalization of the Swedish social insurance administration), hereafter called the RAFA project. The other case study, referred to here as the FFV study, deals with an administrative system for the manufacturing sector of the FFV Group. The overview study, called the Norrköping study, deals mainly with the technological level and the ADP configuration in a wide spectrum of organizations. The level and the configuration are viewed against an overarching organizational structure, the worksite placement of qualified ADP staff, the line of business and the size of the firm. The study consists of an opinion poll and three delimited secondary studies.

    In the initial stage of each project, rational motives dominated. These were founded on cost and effect assessments and on developments in the field of computer science. From a structural viewpoint, investments in computers seemed self-evident; efficiency goals were paramount. However, an ADP undertaking entails not only rationalization in the conventional sense; it also brings to the fore the ideational aspects inherent in the organization. While ADP technology was believed necessary, it became, in the preplanning and argumentation phase, a means of projecting socially determined concepts and goals. An ADP solution was sought which would combine the latest innovations in computer science with the dominant actors' organizational ideas.

    The dominant actors at FFV were for the most part newly appointed managers, imprinted with other organizational ideals and relationships than those characterizing FFV. The choice stood between a departure from company tradition by selecting a solution based on local minicomputers, or expanding the existing centralized mainframe facility. The critics were specialists who had taken part in the design of the existing configuration. At FFV, the structural determinants had to be toned down in favor of the deliberate performance of the dominant actors. In the RAFA case, the opposite was true. The critics wanted a certain change of existing circumstances, while the dominant actors sought to preserve the status quo and its underlying ideas. In the RAFA case, ADP thus became a cementing force rather than a catalyst.

    The Norrköping study clearly indicates that the direction and size of an enterprise are of primary importance for how much computers are used. The appearance of the ADP configuration varies mainly with the organizational relationships. This is true for the placement of ADP staff and the overall structure of the organization. The main tendency is that the configuration reflects the relationships in an organization. This supports the view in the case studies that proximity to and control of the ADP has a major organizational value.

  • 3.
    Andersson, Tim
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, The Institute of Technology.
    Bluetooth Low Energy and Smartphones for Proximity-Based Automatic Door Locks (2014). Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE credits. Student thesis.
    Abstract [en]

    Bluetooth Low Energy is becoming increasingly popular in mobile applications due to the possibility of using it for proximity data. Proximity can be estimated by measuring the strength of the Bluetooth signal, and actions can then be performed based on a user's proximity to a certain location or object. One of the most interesting applications of proximity information is automating common tasks; this paper evaluates Bluetooth Low Energy in the context of using smartphones to automatically unlock a door as the user approaches it. Measurements were performed to determine signal strength reliability, energy consumption and connection latency. The results show that Bluetooth Low Energy is a suitable technology for proximity-based door locks despite the large variance in signal strength.
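
    A minimal sketch (Python) of the kind of proximity estimate evaluated above: RSSI samples are smoothed and converted to a distance with a log-distance path-loss model, and a threshold decides whether to unlock. The calibration constants (tx_power_dbm, the path-loss exponent n) and the 2 m unlock radius are illustrative assumptions, not values from the thesis.

        # Sketch: estimate proximity from BLE RSSI with a log-distance
        # path-loss model, then decide whether to unlock a door.
        # tx_power_dbm (expected RSSI at 1 m) and n (path-loss exponent)
        # are assumed calibration values, not figures from the thesis.

        def estimate_distance_m(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
            """Estimated distance in meters from a smoothed RSSI reading."""
            return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

        def smooth(samples, alpha=0.25):
            """Exponential moving average to damp the large RSSI variance."""
            value = samples[0]
            for s in samples[1:]:
                value = alpha * s + (1 - alpha) * value
            return value

        readings = [-68, -71, -65, -70, -66]   # raw RSSI samples (dBm)
        distance = estimate_distance_m(smooth(readings))
        action = "unlock door" if distance < 2.0 else "keep locked"
        print(f"~{distance:.1f} m: {action}")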

  • 4.
    Anwer, Rao Muhammad
    et al.
    Aalto Univ, Finland.
    Khan, Fahad
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    van de Weijer, Joost
    Univ Autonoma Barcelona, Spain.
    Molinier, Matthieu
    VTT Tech Res Ctr Finland Ltd, Finland.
    Laaksonen, Jorma
    Aalto Univ, Finland.
    Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification (2018). In: ISPRS journal of photogrammetry and remote sensing (Print), ISSN 0924-2716, E-ISSN 1872-8235, Vol. 138, p. 74-85. Article in journal (Refereed).
    Abstract [en]

    Designing discriminative, powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to the standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
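
    As a rough illustration of the texture coding the paper builds on, the sketch below (Python/NumPy) computes plain 3x3 local binary pattern codes for a grayscale image; the coded map is the kind of auxiliary input the TEX-Nets idea feeds to a CNN alongside the RGB stream. The actual mapped coded images used in the paper involve a further mapping step not shown here.

        import numpy as np

        def lbp_codes(gray):
            """3x3 local binary pattern codes for a 2-D grayscale array.
            Each pixel's 8 neighbours are thresholded against the centre
            pixel and packed into an 8-bit code."""
            g = np.asarray(gray, dtype=np.float32)
            c = g[1:-1, 1:-1]                    # centre pixels
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            code = np.zeros(c.shape, dtype=np.uint8)
            for bit, (dy, dx) in enumerate(offsets):
                nb = g[1 + dy : g.shape[0] - 1 + dy,
                       1 + dx : g.shape[1] - 1 + dx]
                code |= (nb >= c).astype(np.uint8) << bit
            return code

        img = np.random.randint(0, 256, (64, 64))
        print(lbp_codes(img).shape)              # (62, 62) map of texture codes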

  • 5.
    Arding, Petter
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Hedelin, Hugo
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Computer virus: design and detection (2014). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Computer viruses use a few different techniques, with various intentions, to infect files. What most of them have in common, however, is that they want to avoid detection by anti-malware software. To stay unnoticed, virus creators have developed several methods for this. Anti-malware software is constantly trying to counter these methods of virus infection with its own detection techniques. In this paper we have analyzed the different types of viruses and their infection techniques, and tried to determine which works best to avoid detection. In our experiments we simulated executing the viruses while anti-malware software was running. Our conclusion is that metamorphic viruses use the best methods for staying unnoticed by the detection techniques of anti-malware software.

  • 6.
    Arvidsson, Martin
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Paulsson, Eric
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Utveckling av beslutsstöd för kreditvärdighet [Development of decision support for creditworthiness] (2013). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The aim is to develop a new decision-making model for credit loans. The model will be specific to credit applicants of the OKQ8 bank, because it is based on data about earlier credit applicants from the client (the bank). The final model is, in effect, functional enough to take information about a new applicant as input and predict membership of either the good-risk group or the bad-risk group based on the applicant's properties. The prediction may then lay the foundation for the decision to grant or deny the credit loan.

    Because of the skewed distribution of the response variable, different sampling techniques are evaluated. These include oversampling with SMOTE, random undersampling, and pure oversampling in the form of scalar weighting of the minority class. It is shown that the predictive quality of a classifier is affected by the distribution of the response, and that the oversampled information is not too redundant.

    Three classification techniques are evaluated. Our results suggest that a multi-layer neural network with 18 neurons in a hidden layer, equipped with an ensemble technique called boosting, gives the best predictive power. The most successful model is based on a feed-forward structure and trained with a variant of back-propagation using conjugate-gradient optimization.

    Two other models with good prediction quality are developed using logistic regression and a decision tree classifier, but they do not reach the level of the network. However, the results of these models are used to answer the question of which customer properties are important when determining credit risk. Two examples of important customer properties are income and the number of earlier credit reports on the applicant.

    Finally, we use the best classification model to predict the outcome for a set of applicants declined by the existing filter. The results show that the network model accepts over 60 % of the applicants who had previously been denied credit. This may indicate that the client's suspicion that the existing model is too restrictive is in fact true.
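
    A minimal sketch (Python, scikit-learn) of one of the rebalancing ideas mentioned above: scalar weighting of the minority (bad-risk) class so that the classifier is not dominated by the majority. The synthetic data, model choice and weights are illustrative; the thesis's best model was a boosted feed-forward network, not a logistic regression.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for the applicant data: 95 % good risk (0),
        # 5 % bad risk (1), mimicking the skewed response distribution.
        X, y = make_classification(n_samples=5000, n_features=12,
                                   weights=[0.95, 0.05], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                                  random_state=0)

        # class_weight scales each class's contribution to the loss,
        # which acts like oversampling the minority class.
        clf = LogisticRegression(class_weight={0: 1.0, 1: 10.0},
                                 max_iter=1000).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        recall = ((pred == 1) & (y_te == 1)).sum() / (y_te == 1).sum()
        print(f"bad-risk recall: {recall:.2f}")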

  • 7. Banissi, Ebad
    et al.
    Bertschi, Stefan; Burkhard, Remo; Cvek, Urska; Eppler, Martin; Forsell, Camilla (Linköping University, Department of Science and Technology, Media and Information Technology); Grinstein, Georges; Johansson, Jimmy (Linköping University, Department of Science and Technology, Media and Information Technology); Kenderdine, Sarah; Marchese, Francis T.; Maple, Carsten; Trutschl, Marjam; Sarfraz, Muhammad; Stuart, Liz; Ursyn, Anna; Wyeld, Theodor G.
    Information Visualization (2011). Conference proceedings (editor) (Refereed).
  • 8.
    Barakat, Arian
    Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning.
    What makes an (audio)book popular? (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Audiobook reading has traditionally been used for educational purposes but has in recent times grown into a popular alternative to more traditional means of consuming literature. In order to differentiate themselves from other players in the market, but also to provide their users with enjoyable literature, several audiobook companies have lately directed their efforts toward producing their own content. Creating highly rated content is, however, no easy task, and one recurring challenge is how to make a bestselling story. In an attempt to identify latent features shared by successful audiobooks and to evaluate proposed methods for literary quantification, this thesis employs an array of frameworks from the fields of statistics, machine learning and natural language processing on data and literature provided by Storytel, Sweden's largest audiobook company.

    We analyze and identify important features of a collection of 3077 Swedish books with respect to their promotional and literary success. By considering features from the aspects Metadata, Theme, Plot, Style and Readability, we found that popular books are typically published as part of a book series, cover 1-3 central topics, and write about, e.g., daughter-mother relationships and human closeness, but also that they contain, on average, a higher proportion of verbs and a lower proportion of short words. Despite successfully identifying these and other factors, we recognized that none of our models predicted "bestseller" adequately, and that future work may wish to study additional factors, employ other models, or even use different metrics to define and measure popularity.

    From our evaluation of the literary quantification methods, namely topic modeling and narrative approximation, we found that these methods are in general suitable for Swedish texts but that they require further improvement and experimentation before they can be successfully deployed for Swedish literature. For topic modeling, we found that using nouns alone provided more interpretable topics and that the inclusion of character names tended to pollute the topics. We also identified and discussed the possible problem of word inflections when modeling topics for morphologically richer languages, and noted that additional preprocessing treatments such as word lemmatization or post-training text normalization may improve the quality and interpretability of topics. For the narrative approximation, we discovered that the method currently suffers from three shortcomings: (1) unreliable sentence segmentation, (2) unsatisfactory dictionary-based sentiment analysis and (3) the possible loss of sentiment information induced by translations. Despite only examining a handful of literary works, we further found that books originally written in Swedish had narratives that were more consistent across languages than books written in English and then translated to Swedish.
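
    A minimal sketch (Python, scikit-learn) of the nouns-only topic modeling that the thesis found gave more interpretable topics. The part-of-speech step is stubbed out with hand-picked noun lists; in practice a Swedish POS tagger would select the nouns, and the corpus would be full books rather than these toy strings.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs_nouns = [                 # pretend noun-only book texts
            "mother daughter family house winter",
            "detective murder city night police",
            "mother family love closeness daughter",
        ]
        vec = CountVectorizer()
        X = vec.fit_transform(docs_nouns)
        lda = LatentDirichletAllocation(n_components=2,
                                        random_state=0).fit(X)

        terms = vec.get_feature_names_out()
        for k, topic in enumerate(lda.components_):
            top = [terms[i] for i in topic.argsort()[-3:][::-1]]
            print(f"topic {k}: {top}")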

  • 9.
    Belka, Kamila
    Linköping University, Department of Computer and Information Science.
    Multicriteria analysis and GIS application in the selection of sustainable motorway corridor (2005). Independent thesis Advanced level (degree of Magister), 20 points / 30 hp. Student thesis.
    Abstract [en]

    The effects of functioning transportation infrastructure are nowadays receiving more and more environmental and social concern. Nevertheless, preliminary corridor plans are usually developed on the basis of technical and economic criteria exclusively. By the time of the environmental impact assessment (EIA) that follows, relocation is practically impossible and only preventative measures can be applied.

    This paper proposes a GIS-based method of delimiting a motorway corridor that integrates social, environmental and economic factors into the early stages of planning. Multiple criteria decision making (MCDM) techniques are used to assess all possible alternatives, and a weighted shortest-path algorithm held in the GIS locates the corridor. The evaluation criteria are exemplary; they include nature conservation, buildings, forests and agricultural resources, and soils. The resulting evaluation surface is divided into a grid of cells, which are assigned suitability scores derived from all evaluation criteria. Subsequently, a set of adjacent cells connecting two pre-specified points is traced by the least-cost path algorithm. The best alternative has the lowest total value of suitability scores.

    As a result, the proposed motorway corridor is routed from origin to destination. It is afterwards compared with an alternative derived by traditional planning procedures. Concluding remarks are that the location criteria need to be adjusted to meet construction requirements, and that the analysis process should be automated. Nevertheless, the geographic information system and the embedded shortest-path algorithm proved to be well suited for preliminary corridor location analysis. Future research directions are sketched.
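
    A minimal sketch (Python) of the least-cost path step described above: given a grid of cells with suitability scores (lower = better), Dijkstra's algorithm traces the cheapest 4-connected path between two pre-specified cells. Real corridor studies run this inside GIS tooling over large rasters; the grid and endpoints here are toy values.

        import heapq

        def least_cost_path(grid, start, goal):
            """Cheapest 4-connected path over a grid of cell costs."""
            rows, cols = len(grid), len(grid[0])
            dist = {start: grid[start[0]][start[1]]}
            prev, pq = {}, [(dist[start], start)]
            while pq:
                d, (r, c) = heapq.heappop(pq)
                if (r, c) == goal:
                    break
                if d > dist[(r, c)]:
                    continue                    # stale queue entry
                for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                    if 0 <= nr < rows and 0 <= nc < cols:
                        nd = d + grid[nr][nc]
                        if nd < dist.get((nr, nc), float("inf")):
                            dist[(nr, nc)] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(pq, (nd, (nr, nc)))
            path, node = [], goal               # walk back from the goal
            while node != start:
                path.append(node)
                node = prev[node]
            return [start] + path[::-1]

        suitability = [[1, 4, 1],
                       [1, 9, 1],
                       [1, 1, 1]]
        print(least_cost_path(suitability, (0, 0), (0, 2)))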

  • 10.
    Bergdahl, Filip
    Linköping University, Department of Science and Technology.
    Analys av lämplighet för användning av RFID-teknik inom Schenkers verksamhet [Analysis of the suitability of using RFID technology within Schenker's operations] (2007). Independent thesis Advanced level (degree of Magister), 20 points / 30 hp. Student thesis.
    Abstract [sv]

    Radio Frequency Identification is a relatively old technology (dating back to the Second World War) that has experienced a renaissance. Then as now, RFID was used to identify objects, though with certain technical differences. Increasing demands on industry and society at large have led to the "information age" we now live in. As part of this development, ever higher demands are placed on the collection of the information on which many processes and decisions are based. Simplified, the communication can be described as a radio signal sent from an RFID reader to an RFID tag. The tag is attached to the object to be identified, and information about the object is stored in the tag. The radio signal wakes the tag, which sends the information stored in its microchip back to the reader. RFID can be used in a large number of areas and applications, two of which are transport networks and supply chains. This project was initiated because the logistics company Schenker AB saw a need to investigate how the technology could be used in its operations. For Schenker it was important to examine the potential of the technology, and its costs, before customers arrived with requirements or requests for its use. Three proposals for how RFID could be used in the business were developed to give a clear picture of how its deployment might work and what would be required. The study shows that a good deal would be demanded of Schenker in terms of equipment and IT systems. The conclusions that can be drawn from the project are that there is potential for improvements from using RFID technology within Schenker's operations. However, these are associated with relatively high initial costs. There are also some technical limitations, meaning that the system must be planned and constructed carefully for full functionality. Further investigations are needed to better establish how well RFID can be used within Schenker. Tests and trials in smaller flows at Schenker would be a good way to gain experience and knowledge of the technology's functionality, possibilities and limitations.

  • 11.
    Berggren, Magnus
    et al.
    Linköping University, Department of Science and Technology, Physics and Electronics. Linköping University, Faculty of Science & Engineering.
    Simon, Daniel
    Linköping University, Department of Science and Technology, Physics and Electronics. Linköping University, Faculty of Science & Engineering.
    Nilsson, D
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Dyreklev, P
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Norberg, P
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Nordlinder, S
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Ersman, PA
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Gustafsson, G
    Acreo Swedish ICT, Box 787, SE-601 17, Norrköping, Sweden..
    Wikner, Jacob
    Linköping University, Department of Electrical Engineering, Integrated Circuits and Systems. Linköping University, Faculty of Science & Engineering.
    Hederén, J
    DU Radio, Ericsson AB, SE-583 30, Linköping, Sweden..
    Hentzell, H
    Swedish ICT Research, Box 1151, SE-164 26, Kista, Sweden..
    Browsing the Real World using Organic Electronics, Si-Chips, and a Human Touch (2016). In: Advanced Materials, ISSN 0935-9648, E-ISSN 1521-4095, Vol. 28, no 10, p. 1911-1916. Article in journal (Refereed).
    Abstract [en]

    Organic electronics have been developed according to an orthodox doctrine advocating "all-printed", "all-organic" and "ultra-low-cost" devices, primarily targeting various e-paper applications. In order to harvest the great opportunities afforded by organic electronics potentially operating as communication and sensor outposts within existing and future complex communication infrastructures, high-quality computing and communication protocols must be integrated with the organic electronics. Here, we debate and scrutinize the twinning of the signal-processing capability of traditional integrated silicon chips with organic electronics and sensors, and the use of our body as a natural local network with our bare hand as the browser of the physical world. The resulting platform provides a body network, i.e., a personalized web, composed of e-label sensors, bioelectronics, and mobile devices that together make it possible to monitor and record both our ambience and health-status parameters, supported by the ubiquitous mobile network and the resources of the "cloud".

  • 12.
    Bivall, Petter
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Touching the Essence of Life: Haptic Virtual Proteins for Learning (2010). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    This dissertation presents research in the development and use of a multi-modal visual and haptic virtual model in higher education. The model, named Chemical Force Feedback (CFF), represents molecular recognition through the example of protein-ligand docking, and enables students to simultaneously see and feel representations of the protein and ligand molecules and their force interactions. The research efforts have been divided between educational research aspects and development of haptic feedback techniques.

    The CFF model was evaluated in situ through multiple data collections in a university course on molecular interactions. To isolate possible influences of haptics on learning, half of the students ran CFF with haptics, and the others used the equipment with force feedback disabled. Pre- and post-tests showed a significant learning gain for all students. A particular influence of haptics was found on students' reasoning, discovered through an open-ended written probe in which students' responses contained elaborate descriptions of the molecular recognition process.

    Students' interactions with the system were analyzed using customized information visualization tools. Analysis revealed differences between the groups, for example, in their use of visual representations on offer, and in how they moved the ligand molecule. Differences in representational and interactive behaviours showed relationships with aspects of the learning outcomes.

    The CFF model was improved in an iterative evaluation and development process. A focus was placed on force model design, where one significant challenge was in conveying information from data with large force differences, ranging from very weak interactions to extreme forces generated when atoms collide. Therefore, a History Dependent Transfer Function (HDTF) was designed which adapts the translation of forces derived from the data to output forces according to the properties of the recently derived forces. Evaluation revealed that the HDTF improves the ability to haptically detect features in volumetric data with large force ranges.

    To further enable force models with high fidelity, an investigation was conducted to determine the perceptual Just Noticeable Difference (JND) in force for detection of interfaces between features in volumetric data. Results showed that JNDs vary depending on the magnitude of the forces in the volume and depending on where in the workspace the data is presented.

    List of papers
    1. Designing and Evaluating a Haptic System for Biomolecular Education
    2007 (English). In: IEEE Virtual Reality Conference, 2007. VR '07. / [ed] Sherman, W; Lin, M; Steed, A. Piscataway, NJ, USA: IEEE, 2007, p. 171-178. Conference paper, Published paper (Refereed).
    Abstract [en]

    In this paper we present an in situ evaluation of a haptic system with a representative test population; we aim to determine what benefit, if any, haptics can have in a biomolecular education context. We have developed a haptic application for conveying concepts of molecular interactions, specifically in protein-ligand docking. Utilizing a semi-immersive environment with stereo graphics, users are able to manipulate the ligand and feel its interactions in the docking process. The evaluation used cognitive knowledge tests and interviews focused on learning gains. Compared with using time efficiency as the single quality measure, this gives a better indication of a system's applicability in an educational environment. Surveys were used to gather opinions and suggestions for improvements. Students do gain from using the application in the learning process, but the learning appears to be independent of the addition of haptic feedback. However, the addition of force feedback did decrease time requirements and improved the students' understanding of the docking process in terms of the forces involved, as is apparent from the students' descriptions of the experience. The students also indicated a number of features which could be improved in future development.

    Place, publisher, year, edition, pages
    Piscataway, NJ, USA: IEEE, 2007
    Keywords
    Haptic Interaction, Haptics, Virtual Reality, Computer-assisted instruction, Life Science Education, Protein Interactions, Visualization, Protein-ligand docking
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-39934 (URN); 10.1109/VR.2007.352478 (DOI); 000245919300022; 51733 (Local ID); 1-4244-0906-3 (ISBN); 51733 (Archive number); 51733 (OAI)
    Conference
    IEEE Virtual Reality Conference, Charlotte, NC, USA, 10-14 March 2007
    Note

    ©2009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Petter Bivall Persson, Matthew Cooper, Lena Tibell, Shaaron Ainsworth, Anders Ynnerman and Bengt-Harald Jonsson, Designing and Evaluating a Haptic System for Biomolecular Education, 2007, IEEE Virtual Reality Conference 2007, 171-178. http://dx.doi.org/10.1109/VR.2007.352478

    Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2016-05-04. Bibliographically approved.
    2. Improved Feature Detection over Large Force Ranges Using History Dependent Transfer Functions
    2009 (English). In: Third Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, WorldHaptics 2009. IEEE, 2009, p. 476-481. Conference paper, Published paper (Refereed).
    Abstract [en]

    In this paper we present a history dependent transfer function (HDTF) as a possible approach to enable improved haptic feature detection in high dynamic range (HDR) volume data. The HDTF is a multi-dimensional transfer function that uses the recent force history as a selection criterion to switch between transfer functions, thereby adapting to the explored force range. The HDTF has been evaluated using artificial test data and in a realistic application example, with the HDTF applied to haptic protein-ligand docking. Biochemistry experts performed docking tests, and expressed that the HDTF delivers the expected feedback across a large force magnitude range, conveying both weak attractive and strong repulsive protein-ligand interaction forces. Feature detection tests have been performed with positive results, indicating that the HDTF improves the ability of feature detection in HDR volume data as compared to a static transfer function covering the same range.
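
    A minimal sketch (Python) of the history-dependent idea: keep a short window of recent force magnitudes and switch between a "weak" and a "strong" force-to-output mapping depending on the range currently being explored. The two mappings, window size and switching threshold are illustrative assumptions, not the paper's actual transfer functions.

        from collections import deque

        class HistoryDependentTransferFunction:
            """Switches output mapping based on recently seen forces."""

            def __init__(self, window=30, threshold_n=0.5):
                self.history = deque(maxlen=window)  # recent magnitudes (N)
                self.threshold_n = threshold_n

            def __call__(self, force_n):
                self.history.append(force_n)
                if max(self.history) < self.threshold_n:
                    # weak-interaction regime: amplify faint features
                    return min(force_n * 8.0, 3.0)
                # strong-interaction regime: clamp extreme collision forces
                return min(force_n, 3.0)

        hdtf = HistoryDependentTransferFunction()
        for f in (0.02, 0.05, 0.04, 1.8):
            print(f"{f:5.2f} N in -> {hdtf(f):.2f} N out")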

    Place, publisher, year, edition, pages
    IEEE, 2009
    Keywords
    Haptics, Virtual Reality, Scientific Visualization
    National Category
    Interaction Technologies
    Identifiers
    urn:nbn:se:liu:diva-45355 (URN); 10.1109/WHC.2009.4810843 (DOI); 81912 (Local ID); 978-1-4244-3858-7 (ISBN); 81912 (Archive number); 81912 (OAI)
    Conference
    Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, World Haptics 2009, Salt Lake City, UT, USA, 18-20 March 2009
    Projects
    VisMolLS
    Note

    ©2009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Petter Bivall Persson, Gunnar E. Höst, Matthew D. Cooper, Lena A. E. Tibell and Anders Ynnerman, Improved Feature Detection over Large Force Ranges Using History Dependent Transfer Functions, 2009, Third Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, WorldHaptics 2009, 476-481. http://dx.doi.org/10.1109/WHC.2009.4810843

    Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2016-05-04. Bibliographically approved.
    3. Do Haptic Representations Help Complex Molecular Learning?
    2011 (English). In: Science Education, ISSN 0036-8326, E-ISSN 1098-237X, Vol. 95, no 4, p. 700-719. Article in journal (Refereed). Published.
    Abstract [en]

    This study explored whether adding a haptic interface (which provides users with somatosensory information about virtual objects through force and tactile feedback) to a three-dimensional (3D) chemical model enhanced students' understanding of complex molecular interactions. Two modes of the model were compared in a between-groups pre- and posttest design. In both modes, users could move and rotate virtual 3D representations of the chemical structures of the two molecules, a protein and a small ligand molecule. In addition, in the haptic mode users could feel the interactions (repulsive and attractive) between molecules as forces with a haptic device. Twenty postgraduate students (10 in each condition) took pretests about the process of protein–ligand recognition before exploring the model in ways suggested by structured worksheets and then completing a posttest. Analysis addressed quantitative learning outcomes and, more qualitatively, students' reasoning during the learning phase. Results showed that the haptic system helped students learn more about the process of protein–ligand recognition and changed the way they reasoned about molecules to include more force-based explanations. It may also have protected students from drawing erroneous conclusions about the process of protein–ligand recognition observed when students interacted with only the visual model.

    Keywords
    Haptic learning, multimodality, molecular interactions, protein-ligand docking
    National Category
    Didactics Biochemistry and Molecular Biology Media and Communication Technology
    Identifiers
    urn:nbn:se:liu:diva-60354 (URN)10.1002/sce.20439 (DOI)
    Projects
    VisMolLS
    Available from: 2010-10-12 Created: 2010-10-12 Last updated: 2018-01-12
    4. Using logging data to visualize and explore students’ interaction and learning with a haptic virtual model of protein-ligand docking
    (English). Manuscript (preprint) (Other academic).
    Abstract [en]

    This study explores students' interaction and learning with a haptic virtual model of biomolecular recognition. Twenty students assigned to a haptics or no-haptics condition performed a protein-ligand docking task in which interaction was captured in log files. Any improvement in understanding of recognition was measured by comparing written responses to a conceptual question before and after interaction. A log-profiling tool visualized students' traversal of the ligand, while multivariate parallel coordinate analyses uncovered trends in the data. Students who experienced force feedback (haptics) displayed docked positions that were more clustered than those of no-haptics students, coupled with docking profiles that depicted a more focused traversal of the ligand. Students in the no-haptics condition employed double the amount of behaviours concerned with switching between the multiple visual representations offered by the system. In the no-haptics group, this visually intense processing was associated with 'fitting' the ligand at closer distances to the surface of the protein. A negative relationship between high representational switching activity and learning gain, as well as spatial aptitude, was also revealed. From an information-processing perspective, visual and haptic coordination could permit engagement of each perceptual channel simultaneously, in effect offloading the visual pathway by placing less strain on visual working memory.

    Keywords
    Interactive learning environments; multimedia systems; pedagogical issues; postsecondary education; virtual reality
    National Category
    Natural Sciences
    Identifiers
    urn:nbn:se:liu:diva-60355 (URN)
    Available from: 2010-10-12 Created: 2010-10-12 Last updated: 2016-05-04
    5. Haptic Just Noticeable Difference in Continuous Probing of Volume Data
    2010 (English). Report (Other academic).
    Abstract [en]

    Just noticeable difference (JND) describes how much two perceptual sensory inputs must differ in order to be distinguishable from each other. Knowledge of the JND is vital when two features in a dataset are to be separably represented. JND has received a lot of attention in haptic research and this study makes a contribution to the field by determining JNDs during users' probing of volumetric data at two force levels. We also investigated whether these JNDs were affected by where in the haptic workspace the probing occurred. Reference force magnitudes were 0.1 N and 0.8 N, and the volume data was presented in rectangular blocks positioned at the eight corners of a cube 10 cm3 in size. Results showed that the JNDs varied significantly for the two force levels, with mean values of 38.5% and 8.8% obtained for the 0.1 N and 0.8 N levels, respectively, and that the JND was influenced by where the data was positioned.
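
    To make the reported numbers concrete, the sketch below (Python) treats the two mean JNDs as Weber fractions: two forces are distinguishable only if they differ by more than the fraction times the reference force. Interpolating between the two measured levels would go beyond the report, so the function simply picks the nearer reported mean.

        def min_detectable_increment(reference_n):
            """Smallest force difference (N) likely to be felt, using the
            report's mean JNDs: 38.5 % at 0.1 N and 8.8 % at 0.8 N."""
            jnd_fraction = 0.385 if reference_n <= 0.1 else 0.088
            return jnd_fraction * reference_n

        for ref in (0.1, 0.8):
            print(f"at {ref} N, features must differ by >= "
                  f"{min_detectable_increment(ref):.3f} N")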

    Place, publisher, year, edition, pages
    Linköping: Linköping University Electronic Press, 2010. p. 19
    Series
    Technical reports in Computer and Information Science, ISSN 1654-7233 ; 6
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-58011 (URN)
    Available from: 2010-07-16 Created: 2010-07-16 Last updated: 2010-10-12. Bibliographically approved.
  • 13.
    Bladin, Kalle
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Axelsson, Emil
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Broberg, Erik
    Linköping University, Faculty of Science & Engineering.
    Emmart, Carter
    Amer Museum Nat Hist, NY 10024 USA.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Bock, Alexander
    Linköping University, Department of Science and Technology. Linköping University, Faculty of Science & Engineering. NYU, NY 10003 USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization (2018). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 24, no 1, p. 802-811. Article in journal (Refereed).
    Abstract [en]

    Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty of grasping the context and the intricate acquisition process. We present work on tailoring and integrating multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. As our approach handles dynamic data sources, streamed from online repositories, we significantly shorten the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach to enable interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case shows dynamic processes, such as concurrent weather conditions on Earth, that require temporal datasets. As a final example we use data from the New Horizons spacecraft, which acquired images during a single flyby of Pluto. We visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.
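
    A minimal sketch (Python) of a chunked level-of-detail selection of the kind described above: a globe chunk is split into four children while its projected screen-space error exceeds a pixel tolerance. The error model, scale constant and toy camera are illustrative assumptions, not OpenSpace code.

        def select_chunks(chunk, camera_dist, tolerance_px=500.0, max_level=8):
            """chunk = (level, x, y); geometric error halves per level."""
            level, x, y = chunk
            geometric_error_m = 156543.0 / (2 ** level)  # per-texel error
            screen_error_px = geometric_error_m / camera_dist(chunk) * 1000.0
            if screen_error_px <= tolerance_px or level >= max_level:
                return [chunk]                    # render this chunk as-is
            children = [(level + 1, 2 * x + dx, 2 * y + dy)
                        for dx in (0, 1) for dy in (0, 1)]
            return [leaf for ch in children
                    for leaf in select_chunks(ch, camera_dist,
                                              tolerance_px, max_level)]

        # Toy camera: every chunk is 50 km away, so refinement stops once
        # the projected error falls under the tolerance (level 3 here).
        print(len(select_chunks((0, 0, 0), lambda ch: 50_000.0)))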

  • 14.
    Bleser, Gabriele
    et al.
    Department Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Germany; Department of Computer Science, Technical University of Kaiserslautern, Kaiserslautern, Germany.
    Damen, Dima
    Department of Computer Science, University of Bristol, Bristol, UK.
    Behera, Ardhendu
    School of Computing, University of Leeds, Leeds, UK; Department of Computing, Edge Hill University, Ormskirk, UK.
    Hendeby, Gustaf
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Mura, Katharina
    SmartFactory KL e.V., Kaiserslautern, Germany.
    Miezal, Markus
    Department of Computer Science, Technical University of Kaiserslautern, Kaiserslautern, Germany.
    Gee, Andrew
    Department of Computer Science, University of Bristol, Bristol, UK.
    Petersen, Nils
    Department Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Maçães, Gustavo
    Department Computer Vision, Interaction and Graphics, Center for Computer Graphics, Guimarães, Portugal.
    Domingues, Hugo
    Department Computer Vision, Interaction and Graphics, Center for Computer Graphics, Guimarães, Portugal.
    Gorecky, Dominic
    SmartFactory KL e.V., Kaiserslautern, Germany.
    Almeida, Luis
    Department Computer Vision, Interaction and Graphics, Center for Computer Graphics, Guimarães, Portugal.
    Mayol-Cuevas, Walterio
    Department of Computer Science, University of Bristol, Bristol, UK.
    Calway, Andrew
    Department of Computer Science, University of Bristol, Bristol, UK.
    Cohn, Anthony G.
    School of Computing, University of Leeds, Leeds, UK.
    Hogg, David C.
    School of Computing, University of Leeds, Leeds, UK.
    Stricker, Didier
    Department Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Cognitive Learning, Monitoring and Assistance of Industrial Workflows Using Egocentric Sensor Networks (2015). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, no 6, article id e0127769. Article in journal (Refereed).
    Abstract [en]

    Today, the workflows that are involved in industrial assembly and production activities are becoming increasingly complex. To efficiently and safely perform these workflows is demanding on the workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user’s pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited size datasets indicate and highlight the potential of the chosen technology as a combined entity as well as point out limitations of the system.
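
    A minimal sketch (Python) of the "spatiotemporal pairwise relations" ingredient: object and hand trajectories are reduced, per frame, to qualitative pairwise relations (here only near/far), and the resulting sequence is the kind of representation workflow models can be learnt from and matched against. The threshold and trajectories are toy values, not the paper's representation.

        import math

        def pairwise_relations(tracks, near_m=0.15):
            """tracks: {name: [(x, y, z), ...]} sampled at a common rate;
            returns one set of qualitative relations per frame."""
            names = sorted(tracks)
            n_frames = len(next(iter(tracks.values())))
            relations = []
            for t in range(n_frames):
                frame = set()
                for i, a in enumerate(names):
                    for b in names[i + 1:]:
                        d = math.dist(tracks[a][t], tracks[b][t])
                        frame.add((a, b, "near" if d < near_m else "far"))
                relations.append(frame)
            return relations

        tracks = {"hand":   [(0.00, 0, 0), (0.10, 0, 0), (0.30, 0, 0)],
                  "wrench": [(0.30, 0, 0), (0.15, 0, 0), (0.30, 0, 0)]}
        for t, frame in enumerate(pairwise_relations(tracks)):
            print(t, sorted(frame))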

  • 15.
    Bock, Alexander
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Science and Technology, Media and Information Technology.
    Tailoring visualization applications for tasks and users (2018). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Exponential increases in available computational resources over recent decades have fueled an information explosion in almost every scientific field. This has led to a societal change, shifting from an information-poor research environment to an over-abundance of information. As many of these cases involve too much information to comprehend directly, visualization proves to be an effective tool for gaining insight into these large datasets. While visualization has been used since the beginning of mankind, its importance is only increasing as the exponential information growth widens the difference between the amount of gathered data and the relatively constant human ability to ingest information. Visualization, as a methodology and tool for transforming complex data into an intuitive visual representation, can leverage the combined computational resources and human cognitive capabilities in order to mitigate this growing discrepancy.

    A large portion of visualization research, directly or indirectly, targets users in an application domain, such as medicine, biology, or physics. Applied research is aimed at the creation of visualization applications or systems that solve a specific problem within the domain. Combining prior research and applying it to a concrete problem makes it possible to compare existing visualization techniques and determine their usability and usefulness. These applications can only be effective when the domain experts are closely involved in the design process, leading to an iterative workflow that informs the application's form and function. These visualization solutions can be separated into three categories: Exploration, in which users perform an initial study of data; Analysis, in which an established technique is repeatedly applied to a large number of datasets; and Communication, in which findings are published to a wider public audience.

    This thesis presents five examples of application development in finite element modeling, medicine, urban search & rescue, and astronomy and astrophysics. For finite element modeling, an exploration tool for simulations of stress tensors in a human heart uses a compression method to achieve interactive frame rates. In the medical domain, an analysis system aimed at guiding surgeons during Deep Brain Stimulation interventions fuses multiple modalities in order to improve the outcome of the intervention. A second analysis application is targeted at the urban search & rescue community, supporting the extraction of injured victims and enabling a more sophisticated decision-making strategy. For the astronomical domain, an exploration application first enables the analysis of time-varying volumetric plasma simulations in order to improve these simulations and thus better predict space weather. A final system focuses on combining all three categories into a single application that enables the same tools to be used for Exploration, Analysis, and Communication, which requires the handling of large coordinate systems and high-fidelity rendering of planetary surfaces and spacecraft operations.

    List of papers
    1. Coherency-Based Curve Compression for High-Order Finite Element Model Visualization
    2012 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 12, p. 2315-2324. Article in journal (Refereed). Published.
    Abstract [en]

    Finite element (FE) models are frequently used in engineering and life sciences within time-consuming simulations. In contrast with the regular grid structure facilitated by volumetric data sets, as used in medicine or geosciences, FE models are defined over a non-uniform grid. Elements can have curved faces and their interior can be defined through high-order basis functions, which pose additional challenges when visualizing these models. During ray-casting, the uniformly distributed sample points along each viewing ray must be transformed into the material space defined within each element. The computational complexity of this transformation makes a straightforward approach inadequate for interactive data exploration. In this paper, we introduce a novel coherency-based method which supports the interactive exploration of FE models by decoupling the expensive world-to-material space transformation from the rendering stage, thereby allowing it to be performed within a precomputation stage. Therefore, our approach computes view-independent proxy rays in material space, which are clustered to facilitate data reduction. During rendering, these proxy rays are accessed, and it becomes possible to visually analyze high-order FE models at interactive frame rates, even when they are time-varying or consist of multiple modalities. Within this paper, we provide the necessary background about the FE data, describe our decoupling method, and introduce our interactive rendering algorithm. Furthermore, we provide visual results and analyze the error introduced by the presented approach.
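
    A loose sketch (Python, scikit-learn) of the decoupling idea as described: precomputed material-space sample curves ("proxy rays") are flattened to vectors and clustered, so that only cluster representatives need to be kept. The array sizes, the clustering choice (k-means) and the random data are stand-ins; the paper's actual pipeline is more elaborate.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        n_rays, samples_per_ray = 1000, 32
        # one (x, y, z) material-space position per sample along each ray
        proxy_rays = rng.normal(size=(n_rays, samples_per_ray, 3))

        flat = proxy_rays.reshape(n_rays, -1)    # one vector per curve
        km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(flat)
        representatives = km.cluster_centers_.reshape(16, samples_per_ray, 3)
        print("stored curves:", n_rays, "->", len(representatives))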

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2012
    Keywords
    Finite element visualization, GPU-based ray-casting
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-86633 (URN); 10.1109/TVCG.2012.206 (DOI); 000310143100035
    Note

    Funding agencies: Swedish Research Council (VR), 2011-4113; Excellence Center at Linköping and Lund in Information Technology (ELLIIT); Swedish e-Science Research Centre (SeRC)

    Available from: 2012-12-20 Created: 2012-12-20 Last updated: 2018-05-21
    2. Guiding Deep Brain Stimulation Interventions by Fusing Multimodal Uncertainty Regions
    2013 (English). Conference paper, Published paper (Other academic).
    Abstract [en]

    Deep Brain Stimulation (DBS) is a surgical intervention that is known to reduce or eliminate the symptoms of common movement disorders, such as Parkinson's disease, dystonia, or tremor. During the intervention the surgeon places electrodes inside the patient's brain to stimulate specific regions. Since these regions span only a couple of millimeters, and electrode misplacement has severe consequences, reliable and accurate navigation is of great importance. Usually the surgeon relies on fused CT and MRI data sets, as well as direct feedback from the patient. More recently, Microelectrode Recordings (MER), which support navigation by measuring the electric field of the patient's brain, are also used. We propose a visualization system that fuses the different modalities, imaging data, MER and patient checks, as well as the related uncertainties, in an intuitive way to present placement-related information in a consistent view, with the goal of supporting the surgeon in the final placement of the stimulating electrode. We describe the design considerations for our system and the technical realization, present the outcome of the proposed system, and provide an evaluation.

    Place, publisher, year, edition, pages
    IEEE conference proceedings, 2013
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-92857 (URN); 10.1109/PacificVis.2013.6596133 (DOI); 000333746600013; 9781467347976 (ISBN)
    Conference
    IEEE Pacific Visualization, 26 February - 1 March 2013, Sydney, Australia
    Funder
    ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications; Swedish e-Science Research Center; Swedish Research Council, 2011-4113
    Available from: 2013-05-27 Created: 2013-05-27 Last updated: 2018-05-21
    3. Supporting Urban Search & Rescue Mission Planning through Visualization-Based Analysis
    2014 (English)In: Proceedings of the Vision, Modeling, and Visualization Conference 2014, Eurographics - European Association for Computer Graphics, 2014Conference paper, Published paper (Refereed)
    Abstract [en]

    We propose a visualization system for incident commanders in urban search & rescue scenarios that supports access path planning for post-disaster structures. Utilizing point cloud data acquired from unmanned robots, we provide methods for assessment of automatically generated paths. As data uncertainty and a priori unknown information make fully automated systems impractical, we present a set of viable access paths, based on varying risk factors, in a 3D environment combined with visual analysis tools enabling informed decisions and trade-offs. Based on these decisions, a responder is guided along the path by the incident commander, who can interactively annotate and reevaluate the acquired point cloud to react to the dynamics of the situation. We describe design considerations for our system, technical realizations, and discuss the results of an expert evaluation.

    Place, publisher, year, edition, pages
    Eurographics - European Association for Computer Graphics, 2014
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-117772 (URN)10.2312/vmv.20141275 (DOI)978-3-905674-74-3 (ISBN)
    Conference
    Vision, Modeling, and Visualization
    Projects
    ELLIIT; VR; SeRC
    Funder
    ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish e‐Science Research Center; Swedish Research Council, 2011-4113
    Available from: 2015-05-08 Created: 2015-05-08 Last updated: 2018-05-21Bibliographically approved
    4. An interactive visualization system for urban search & rescue mission planning
    2014 (English)In: 12th IEEE International Symposium on Safety, Security and Rescue Robotics, SSRR 2014 - Symposium Proceedings, Institute of Electrical and Electronics Engineers Inc. , 2014, no 7017652Conference paper, Published paper (Refereed)
    Abstract [en]

    We present a visualization system for incident commanders in urban search and rescue scenarios that supports inspection and access path planning in post-disaster structures. Utilizing point cloud data acquired from unmanned robots, the system allows for assessment of automatically generated paths, whose computation is based on varying risk factors, in an interactive 3D environment that increases immersion. The incident commander interactively annotates and reevaluates the acquired point cloud based on live feedback. We describe design considerations, technical realization, and discuss the results of an expert evaluation that we conducted to assess our system.

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers Inc., 2014
    Series
    12th IEEE International Symposium on Safety, Security and Rescue Robotics, SSRR 2014 - Symposium Proceedings
    National Category
    Electrical Engineering, Electronic Engineering, Information Engineering
    Identifiers
    urn:nbn:se:liu:diva-116761 (URN)10.1109/SSRR.2014.7017652 (DOI)2-s2.0-84923174457 (Scopus ID)9781479941995 (ISBN)
    Conference
    12th IEEE International Symposium on Safety, Security and Rescue Robotics, SSRR 2014
    Available from: 2015-04-02 Created: 2015-04-02 Last updated: 2018-05-21
    5. A Visualization-Based Analysis System for Urban Search & Rescue Mission Planning Support
    2017 (English)In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 6, p. 148-159Article in journal (Refereed) Published
    Abstract [en]

    We propose a visualization system for incident commanders (ICs) in urban search and rescue scenarios that supports path planning in post-disaster structures. Utilizing point cloud data acquired from unmanned robots, we provide methods for the assessment of automatically generated paths. As data uncertainty and a priori unknown information make fully automated systems impractical, we present the IC with a set of viable access paths, based on varying risk factors, in a 3D environment combined with visual analysis tools enabling informed decision making and trade-offs. Based on these decisions, a responder is guided along the path by the IC, who can interactively annotate and reevaluate the acquired point cloud and generated paths to react to the dynamics of the situation. We describe visualization design considerations for our system and decision support systems in general, technical realizations of the visualization components, and discuss the results of two qualitative expert evaluations: one online study with nine search and rescue experts and an eye-tracking study in which four experts used the system on an application case.
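
    The path-generation step lends itself to a compact illustration. The sketch below is an assumption-laden stand-in for the system described above, not the authors' implementation: the robot-acquired point cloud is abstracted into a per-cell risk grid, and sweeping the risk weight `alpha` in a Dijkstra search yields the set of alternative access paths an incident commander could compare.

    ```python
    import heapq
    import numpy as np

    def risk_weighted_path(risk, start, goal, alpha=1.0):
        """Dijkstra on a 2D grid; cost = step length + alpha * risk of entered cell."""
        h, w = risk.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[start] = 0.0
        pq = [(0.0, start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == goal:
                break
            if d > dist[r, c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    nd = d + 1.0 + alpha * risk[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
        path, node = [], goal
        while node != start:            # walk predecessors back to the start
            path.append(node)
            node = prev[node]
        return [start] + path[::-1]

    # toy risk grid; varying alpha trades path length against accumulated risk
    risk = np.random.default_rng(0).random((20, 20))
    candidates = {a: risk_weighted_path(risk, (0, 0), (19, 19), a)
                  for a in (0.5, 2.0, 8.0)}
    ```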

    Place, publisher, year, edition, pages
    WILEY, 2017
    Keywords
    urban search and rescue decision support application
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-140952 (URN)10.1111/cgf.12869 (DOI)000408634200009 ()
    Note

    Funding Agencies|Excellence Center at Linkoping and Lund in Information Technology; Swedish e-Science Research Centre; VR grant [2011-4113]

    Available from: 2017-09-19 Created: 2017-09-19 Last updated: 2018-05-21
    6. Visual Verification of Space Weather Ensemble Simulations
    2015 (English)In: 2015 IEEE Scientific Visualization Conference (SciVis), IEEE, 2015, p. 17-24Conference paper, Published paper (Refereed)
    Abstract [en]

    We propose a system to analyze and contextualize simulations of coronal mass ejections. As current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline, leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system providing visualizations to: 1. compare ensemble members against ground truth measurements, 2. inspect time-dependent information derived from optical flow analysis of satellite images, and 3. combine satellite images with a volumetric rendering of the simulations. This three-tier workflow provides experts with tools to discover correlations between errors in predictions and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.

    Place, publisher, year, edition, pages
    IEEE, 2015
    National Category
    Computer Sciences Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-128037 (URN)10.1109/SciVis.2015.7429487 (DOI)000380564400003 ()978-1-4673-9785-8 (ISBN)
    Conference
    2015 IEEE Scientific Visualization Conference
    Available from: 2016-05-16 Created: 2016-05-16 Last updated: 2018-07-19
    7. Dynamic Scene Graph: Enabling Scaling, Positioning, and Navigation in the Universe
    2017 (English)In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 3, p. 459-468Article in journal (Refereed) Published
    Abstract [en]

    In this work, we address the challenge of seamlessly visualizing astronomical data exhibiting huge scale differences in distance, size, and resolution. One of the difficulties is accurate, fast, and dynamic positioning and navigation to enable scaling over orders of magnitude, far beyond the precision of floating point arithmetic. To this end we propose a method that utilizes a dynamically assigned frame of reference to provide the highest possible numerical precision for all salient objects in a scene graph. This makes it possible to smoothly navigate and interactively render, for example, surface structures on Mars and the Milky Way simultaneously. Our work is based on an analysis of tracking and quantification of the propagation of precision errors through the computer graphics pipeline using interval arithmetic. Furthermore, we identify sources of precision degradation, leading to incorrect object positions in screen-space and z-fighting. Our proposed method operates without near and far planes while maintaining high depth precision through the use of floating point depth buffers. By providing interoperability with order-independent transparency algorithms, direct volume rendering, and stereoscopy, our approach is well suited for scientific visualization. We provide the mathematical background, a thorough description of the method, and a reference implementation.
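
    The precision problem the paper addresses can be demonstrated in a few lines. The toy example below (NumPy, not the paper's implementation) shows why re-expressing positions relative to a dynamically chosen reference node preserves meter-scale detail that subtraction against a global float32 origin destroys.

    ```python
    import numpy as np

    au = 1.496e11                                  # one astronomical unit in meters
    camera = np.array([au, au, au])                # observer far from the origin
    feature = camera + np.array([1.0, 2.0, 3.0])   # detail a few meters away

    # Naive pipeline: absolute positions go to the GPU as float32, subtraction
    # happens at full magnitude, and the meter-scale offset is rounded away
    # (float32 spacing at 1.5e11 m is roughly 16 km).
    naive = feature.astype(np.float32) - camera.astype(np.float32)

    # Dynamic frame of reference: subtract in double precision relative to a
    # nearby reference node first, then hand the small numbers to the renderer.
    rebased = (feature - camera).astype(np.float32)

    print(naive)    # [0. 0. 0.] -- the object collapses onto its reference point
    print(rebased)  # [1. 2. 3.] -- full precision for salient nearby objects
    ```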

    Place, publisher, year, edition, pages
    WILEY, 2017
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-139628 (URN)10.1111/cgf.13202 (DOI)000404881200042 ()
    Conference
    19th Eurographics/IEEE VGTC Conference on Visualization (EuroVis)
    Note

    Funding Agencies|Swedish e-Science Research Center (SeRC); NASA [NNX16AB93A]; Moore-Sloan Data Science Environment at NYU; NSF [CNS-1229185, CCF-1533564, CNS-1544753]

    Available from: 2017-08-16 Created: 2017-08-16 Last updated: 2018-05-21
    8. Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization
    2018 (English)In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 24, no 1, p. 802-811Article in journal (Refereed) Published
    Abstract [en]

    Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty in grasping the context and the intricate acquisition process. We present work on tailoring and integration of multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. As our approach handles dynamic data sources, streamed from online repositories, we are significantly shortening the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach to enable interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case shows dynamic processes, such as concurrent weather conditions on Earth, that require temporal datasets. As a final example we use data from the New Horizons spacecraft, which acquired images during a single flyby of Pluto. We visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.
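
    A schematic sketch of chunked level-of-detail selection in the spirit of the approach described above; the error model, constants, and camera-distance function are illustrative assumptions, not values from the paper. A quadtree chunk is split while its geometric error projects to more than a pixel tolerance, so high-resolution chunks are requested only where the camera can see the detail.

    ```python
    def select_chunks(level, x, y, dist_to_cam, root_error_m, fov_px_per_rad,
                      max_level=18, tol_px=2.0):
        """Return (level, x, y) chunk keys to render: split while the chunk's
        geometric error, projected to the screen, exceeds tol_px pixels."""
        geom_error = root_error_m / (2 ** level)        # error halves per level
        screen_err = fov_px_per_rad * geom_error / dist_to_cam(level, x, y)
        if screen_err <= tol_px or level == max_level:
            return [(level, x, y)]
        chunks = []
        for dx in (0, 1):
            for dy in (0, 1):                           # recurse into 4 children
                chunks += select_chunks(level + 1, 2 * x + dx, 2 * y + dy,
                                        dist_to_cam, root_error_m,
                                        fov_px_per_rad, max_level, tol_px)
        return chunks

    # Toy camera 4000 km above the globe; tiles farther from it coarsen sooner.
    dist = lambda level, x, y: 4.0e6 + (x * x + y * y) * 1.0e4
    tiles = select_chunks(0, 0, 0, dist, root_error_m=1.0e5, fov_px_per_rad=1000.0)
    print(len(tiles), max(t[0] for t in tiles))   # 256 chunks, all at level 4
    ```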

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2018
    Keywords
    Astronomical visualization; globe rendering; public dissemination; science communication; space mission visualization
    National Category
    Other Computer and Information Science
    Identifiers
    urn:nbn:se:liu:diva-144142 (URN)10.1109/TVCG.2017.2743958 (DOI)000418038400079 ()28866505 (PubMedID)2-s2.0-85028711409 (Scopus ID)
    Conference
    IEEE VIS Conference
    Note

    Funding Agencies|Knut and Alice Wallenberg Foundation; Swedish e-Science Research Center (SeRC); ELLIIT; Vetenskapsradet [VR-2015-05462]; NASA [NNX16AB93A]; Moore-Sloan Data Science Environment at New York University; NSF [CNS-1229185, CCF-1533564, CNS-1544753, CNS-1730396]

    Available from: 2018-01-10 Created: 2018-01-10 Last updated: 2018-05-21Bibliographically approved
  • 16.
    Bäckman, Love
    et al.
    Linköping University, Department of Computer and Information Science.
    Vedin, Albin
    Linköping University, Department of Computer and Information Science.
    Evaluation of the Protobuf plugin protoc-gen-validate: A performance study2019Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    Data validation is one of several approaches that can be used to increase the stability of a system. Code for validating data can either be written manually or generated from some structure. In this paper we evaluate the performance of protoc-gen-validate, a Google Protocol Buffers compiler plugin which generates code for data validation. With use-case structures from Ericsson and manually constructed structures that test the performance of isolated data type and rule combinations, we produce results that can be used as indicators of the overhead introduced by protoc-gen-validate's validation features. The results show that the CPU time required to validate a message is lower than that of deserializing a message in both Go and C++. It is also shown that the CPU time required to validate a message is lower than that of serializing a message in Go, while validation takes longer than serialization in C++.
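
    The thesis benchmarks Go and C++; the sketch below only illustrates the shape of such a measurement in Python. `order_pb2.Order` and `validate_order` are hypothetical placeholders for a protoc-generated message class and its generated validation routine, not real modules.

    ```python
    import timeit
    from order_pb2 import Order                 # hypothetical generated module
    from order_validate import validate_order   # hypothetical generated validator

    msg = Order(id=42, email="user@example.com", quantity=3)
    wire = msg.SerializeToString()

    n = 100_000
    t_ser = timeit.timeit(msg.SerializeToString, number=n)
    t_des = timeit.timeit(lambda: Order.FromString(wire), number=n)
    t_val = timeit.timeit(lambda: validate_order(msg), number=n)

    # Report per-message CPU time so validation overhead can be compared with
    # (de)serialization cost, mirroring how the thesis frames its results.
    for name, t in (("serialize", t_ser), ("deserialize", t_des), ("validate", t_val)):
        print(f"{name:12s} {1e9 * t / n:8.1f} ns/msg")
    ```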

  • 17.
    Campbell, Walter S.
    et al.
    Univ Nebraska Med Ctr, NE 68198 USA.
    Karlsson, Daniel
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    Vreeman, Daniel J.
    Indiana Univ Sch Med, IN 46202 USA.
    Lazenby, Audrey J.
    Univ Nebraska Med Ctr, NE 68198 USA.
    Talmon, Geoffrey A.
    Univ Nebraska Med Ctr, NE 68198 USA.
    Campbell, James R.
    Univ Nebraska Med Ctr, NE USA.
    A computable pathology report for precision medicine: extending an observables ontology unifying SNOMED CT and LOINC2018In: JAMIA Journal of the American Medical Informatics Association, ISSN 1067-5027, E-ISSN 1527-974X, Vol. 25, no 3, p. 259-266Article in journal (Refereed)
    Abstract [en]

    The College of American Pathologists (CAP) introduced the first cancer synoptic reporting protocols in 1998. However, the objective of a fully computable and machine-readable cancer synoptic report remains elusive due to insufficient definitional content in Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) and Logical Observation Identifiers Names and Codes (LOINC). To address this terminology gap, investigators at the University of Nebraska Medical Center (UNMC) are developing, authoring, and testing a SNOMED CT observable ontology to represent the data elements identified by the synoptic worksheets of CAP. Investigators along with collaborators from the US National Library of Medicine, CAP, the International Health Terminology Standards Development Organization, and the UK Health and Social Care Information Centre analyzed and assessed required data elements for colorectal cancer and invasive breast cancer synoptic reporting. SNOMED CT concept expressions were developed at UNMC in the Nebraska Lexicon© SNOMED CT namespace. LOINC codes for each SNOMED CT expression were issued by the Regenstrief Institute. SNOMED CT concepts represented observation answer value sets. UNMC investigators created a total of 194 SNOMED CT observable entity concept definitions to represent required data elements for CAP colorectal and breast cancer synoptic worksheets, including biomarkers. Concepts were bound to colorectal and invasive breast cancer reports in the UNMC pathology system and successfully used to populate a UNMC biobank. The absence of a robust observables ontology represents a barrier to data capture and reuse in clinical areas founded upon observational information. Terminology developed in this project establishes the model to characterize pathology data for information exchange, public health, and research analytics.

  • 18.
    Christiansen, Cecilia
    et al.
    Linköping University, Department of Science and Technology.
    Sandin Värn, Veronica
    Linköping University, Department of Science and Technology.
    Webbaserat system för effektiv registrering och hantering av reklamationer2006Independent thesis Basic level (degree of Bachelor), 10 points / 15 hpStudent thesis
    Abstract [sv]

    Vitamex AB is a Nordic self-care company headquartered in Norrköping that manufactures and sells herbal medicinal products and dietary supplements. Within Vitamex Production AB, internal and external complaints are handled through a paper-based system. To simplify this handling, a web-based computer system with access control was requested. This thesis describes the development of such a web-based system, with the focus placed on usability. The design methodology Usability Engineering was used for the development, and with the help of this process the interface was tested and evaluated. The report describes how we got to know the users and their tasks through interviews, and how the design process let the users participate and influence the design throughout the entire design phase. Three prototypes were produced, and these were presented and evaluated in focus groups. The result is a prototype with some interactivity and a focus on usability.

  • 19.
    Dee, Laura E.
    et al.
    University of Minnesota Twin Cities, MN 55108 USA.
    Allesina, Stefano
    University of Chicago, IL 60637 USA.
    Bonn, Aletta
    UFZ Helmholtz Centre Environm Research, Germany; Friedrich Schiller University of Jena, Germany; German Centre Integrat Biodivers Research iDiv, Germany.
    Eklöf, Anna
    Linköping University, Department of Physics, Chemistry and Biology, Theoretical Biology. Linköping University, Faculty of Science & Engineering.
    Gaines, Steven D.
    University of Calif Santa Barbara, CA 93117 USA.
    Hines, Jes
    German Centre Integrat Biodivers Research iDiv, Germany; University of Leipzig, Germany.
    Jacob, Ute
    German Centre Integrat Biodivers Research iDiv, Germany; University of Goettingen, Germany.
    McDonald-Madden, Eve
    University of Queensland, Australia.
    Possingham, Hugh
    University of Queensland, Australia.
    Schroeter, Matthias
    UFZ Helmholtz Centre Environm Research, Germany; German Centre Integrat Biodivers Research iDiv, Germany.
    Thompson, Ross M.
    University of Canberra, Australia.
    Operationalizing Network Theory for Ecosystem Service Assessments2017In: Trends in Ecology & Evolution, ISSN 0169-5347, E-ISSN 1872-8383, Vol. 32, no 2, p. 118-130Article, review/survey (Refereed)
    Abstract [en]

    Managing ecosystems to provide ecosystem services in the face of global change is a pressing challenge for policy and science. Predicting how alternative management actions and changing future conditions will alter services is complicated by interactions among components in ecological and socioeconomic systems. Failure to understand those interactions can lead to detrimental outcomes from management decisions. Network theory that integrates ecological and socioeconomic systems may provide a path to meeting this challenge. While network theory offers promising approaches to examine ecosystem services, few studies have identified how to operationalize networks for managing and assessing diverse ecosystem services. We propose a framework for how to use networks to assess how drivers and management actions will directly and indirectly alter ecosystem services.

  • 20.
    Do Ruibin, Kevin
    et al.
    Linköping University, Department of Management and Engineering.
    Vintilescu Borglöv, Tobias
    Linköping University, Department of Management and Engineering.
    Predicting Customer Lifetime Value: Understanding its accuracy and drivers from a frequent flyer program perspective2018Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Each individual customer relationship represents a valuable asset to the firm. Loyalty programs serve as one of the key activities in managing these relationships and the well-developed frequent flyer programs in the airline industry is a prime example of this. Both marketing scholars and practitioners, though, have shown that the linkage between loyalty and profit is not always clear. In marketing literature, customer lifetime value is proposed as a suitable forward-looking metric that can be used to quantify the monetary value that customers bring back to the firm and can thus serve as a performance metric for loyalty programs. To consider the usefulness of these academic findings, this study has evaluated the predicted airline customer lifetime value as a loyalty program performance metric and evaluated the drivers of customer lifetime value from a frequent flyer program perspective.

    In this study, the accuracy of the Pareto/NBD Gamma-Gamma customer lifetime value has been evaluated on a large dataset supplied by a full-service carrier belonging to a major airline alliance. By comparing the accuracy to a managerial heuristic used by the studied airline, the suitability as a managerial tool was determined. Furthermore, based on existing literature, the drivers of customer lifetime value from a frequent flyer perspective were identified and analyzed through a regression analysis of behavioral data supplied by the studied airline.

    The analysis of the results of this study shows that the Pareto/NBD customer lifetime value model outperforms the managerial heuristic in predicting customer lifetime value in regard to almost all error metrics that have been calculated. At an aggregate-level, the errors are considered small in relation to average customer lifetime value, whereas at an individual-level, the errors are large. When evaluating the drivers of customer lifetime value, points-pressure, rewarded-behavior, and cross-buying have a positive association with customer lifetime value.

    This study concludes that the Pareto/NBD customer lifetime value predictions are only suitable as a managerial tool on an aggregate-level. Furthermore, the loyalty program mechanisms studied have a positive effect on the airline customer lifetime value. The implications of these conclusions are that customer lifetime value can be used as a key performance indicator of behavioral loyalty, but the individual-level predictions should not be used to allocate marketing resources for individual customers. To leverage the drivers of customer lifetime value in frequent flyer programs, cross-buying and the exchange of points for free flights should be facilitated and encouraged.
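
    As a hedged illustration of the modeling pipeline evaluated above, the sketch below uses the open-source `lifetimes` package, which implements both the Pareto/NBD and Gamma-Gamma models; the synthetic transaction data and parameter choices are assumptions for demonstration only, and discounting is omitted for brevity (the thesis's exact implementation is not shown here).

    ```python
    import numpy as np
    import pandas as pd
    from lifetimes import ParetoNBDFitter, GammaGammaFitter
    from lifetimes.utils import summary_data_from_transaction_data

    # Synthetic transaction log standing in for the airline's booking history.
    rng = np.random.default_rng(0)
    n = 500
    transactions = pd.DataFrame({
        "customer_id": rng.integers(0, 200, size=n),
        "date": pd.Timestamp("2017-01-01")
                + pd.to_timedelta(rng.integers(0, 365, size=n), unit="D"),
        "monetary_value": np.round(rng.gamma(2.0, 50.0, size=n), 2),
    })

    # Reduce the log to (frequency, recency, T, mean spend) per customer.
    summary = summary_data_from_transaction_data(
        transactions, "customer_id", "date",
        monetary_value_col="monetary_value",
        observation_period_end="2018-01-01")

    # Pareto/NBD models purchase frequency and dropout from (freq, recency, T)...
    pnbd = ParetoNBDFitter(penalizer_coef=0.01)
    pnbd.fit(summary["frequency"], summary["recency"], summary["T"])

    # ...and Gamma-Gamma models spend per transaction for repeat customers.
    repeat = summary[summary["frequency"] > 0]
    ggf = GammaGammaFitter(penalizer_coef=0.01)
    ggf.fit(repeat["frequency"], repeat["monetary_value"])

    # Undiscounted 12-month CLV: expected transactions times expected spend.
    exp_tx = pnbd.conditional_expected_number_of_purchases_up_to_time(
        365, repeat["frequency"], repeat["recency"], repeat["T"])
    exp_spend = ggf.conditional_expected_average_profit(
        repeat["frequency"], repeat["monetary_value"])
    print((exp_tx * exp_spend).describe())
    ```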

  • 21.
    Doyle, Scott
    et al.
    Rutgers University, Dept. of Biomedical Engineering, Piscataway, NJ, USA.
    Monaco, James
    Rutgers University, Dept. of Biomedical Engineering, Piscataway, NJ, USA.
    Madabhushi, Anant
    Rutgers University, Dept. of Biomedical Engineering, Piscataway, NJ, USA.
    Lindholm, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Siemens Corporate Research, Princeton, NJ, USA.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Siemens Corporate Research, Princeton, NJ, USA.
    Ladic, Lance
    Siemens Corporate Research, Princeton, NJ, USA.
    Tomaszewski, John
    University of Pennsylvania, Dept. of Surgical Pathology, Philadelphia, PA, USA.
    Feldman, Michael
    University of Pennsylvania, Dept. of Surgical Pathology, Philadelphia, PA, USA.
    Evaluation of effects of JPEG2000 compression on a computer-aided detection system for prostate cancer on digitized histopathology2010In: Biomedical Imaging: From Nano to Macro, 2010 IEEE International Symposium on, 2010, p. 1313-1316Conference paper (Refereed)
    Abstract [en]

    A single digital pathology image can occupy over 10 gigabytes of hard disk space, rendering it difficult to store, analyze, and transmit. Though image compression provides a means of reducing the storage requirement, its effects on computer-aided diagnosis (CAD) and pathologist performance are not yet clear. In this work we assess the impact of compression on the ability of a CAD system to detect carcinoma of the prostate (CaP) on histological sections. The CAD algorithm proceeds as follows: Glands in the tissue are segmented using a region-growing algorithm, and the size of each gland is extracted. A Markov prior (specifically, a probabilistic pairwise Markov model) is employed to encourage nearby glands to share the same class (i.e. cancerous or non-cancerous). Finally, cancerous glands are aggregated into continuous regions using a distance-hull algorithm. We trained the CAD system on 28 images of whole-mount histology (WMH) and evaluated performance on 12 images compressed at 14 different compression ratios (a total of 168 experiments) using JPEG2000. Algorithm performance (measured using the area under the receiver operating characteristic curve) remains relatively constant for compression ratios up to 1:256, beyond which performance degrades precipitously. For completeness we also had an expert pathologist view a randomly-selected set of compressed images from one of the whole-mount studies and assign a confidence measure as to their diagnostic fidelity. Pathologist confidence declined with increasing compression ratio as the information necessary to diagnose the sample was lost, dropping from 100% confidence at ratio 1:64 to 0% at ratio 1:8192.

  • 22.
    Eilert, Rickard
    Linköping University, Department of Computer and Information Science, Software and Systems.
    Development of a framework for creating cross-platform TV HTML5 applications2015Independent thesis Basic level (professional degree), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    When developing HTML5 applications for TV platforms, the TV platforms provide, in addition to standard HTML5 functionality, extra APIs for TV-specific features. These extra APIs differ between TV platforms, and that is a problem when developing an application targeting several platforms. This thesis has examined whether it is possible to design a framework which provides the developer with one API that works for many platforms by wrapping their platform-specific code. The answer is yes. With success, platform-specific features including TV remote control input, video, volume, Internet connection status, TV channel streams and EPG data have been harmonised under an API in a JavaScript library. Furthermore, a build system packages the code in the way the platforms expect. The framework eases the development of TV platform HTML5 applications. At the moment, the framework supports the Pace, PC and Samsung Smart TV platforms, but it can be extended with more TV platform back-ends.

  • 23.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kottravel, Sathish
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Visual Computing Research Group, Ulm University.
    A Crowdsourcing System for Integrated and Reproducible Evaluation in Scientific Visualization2016In: 2016 IEEE Pacific Visualization Symposium (PacificVis), IEEE Computer Society, 2016, p. 40-47Conference paper (Refereed)
    Abstract [en]

    User evaluations have gained increasing importance in visualization research over the past years, as in many cases these evaluations are the only way to support the claims made by visualization researchers. Unfortunately, recent literature reviews show that in comparison to algorithmic performance evaluations, the number of user evaluations is still very low. Reasons for this are the required amount of time to conduct such studies together with the difficulties involved in participant recruitment and result reporting. While it could be shown that the quality of evaluation results and the simplified participant recruitment of crowdsourcing platforms make this technology a viable alternative to lab experiments when evaluating visualizations, the time for conducting and reporting such evaluations is still very high. In this paper, we propose a software system which integrates the conduct, the analysis and the reporting of crowdsourced user evaluations directly into the scientific visualization development process. With the proposed system, researchers can conduct and analyze quantitative evaluations on a large scale through an evaluation-centric user interface with only a few mouse clicks. Thus, it becomes possible to perform iterative evaluations during algorithm design, which potentially leads to better results, as compared to the time-consuming user evaluations traditionally conducted at the end of the design process. Furthermore, the system is built around a centralized database, which supports an easy reuse of old evaluation designs and the reproduction of old evaluations with new or additional stimuli, both of which are driving challenges in scientific visualization research. We will describe the system's design and the considerations made during the design process, and demonstrate the system by conducting three user evaluations, all of which have been published before in the visualization literature.

  • 24.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Visual Computing Research Group, Ulm University.
    Evaluating the perception of semi-transparent structures in direct volume rendering techniques2016In: Proceeding SA '16 SIGGRAPH ASIA 2016 Symposium on Visualization, ACM Digital Library, 2016Conference paper (Refereed)
    Abstract [en]

    Direct volume rendering (DVR) provides the possibility to visualize volumetric data sets as they occur in many scientific disciplines. A key benefit of DVR is that semi-transparency can be facilitated in order to convey the complexity of the visualized data. Unfortunately, semi-transparency introduces new challenges in spatial comprehension of the visualized data, as the ambiguities inherent to semi-transparent representations affect spatial comprehension. Accordingly, many visualization techniques have been introduced to enhance the spatial comprehension of DVR images. In this paper, we conduct a user evaluation in which we compare standard DVR with five visualization techniques which have been proposed to enhance the spatial comprehension of DVR images. In our study, we investigate the perceptual performance of these techniques and compare them against each other to find out which technique is most suitable for different types of data and purposes. In order to do this, a large-scale user study was conducted with 300 participants who completed a number of micro-tasks designed such that the aggregated feedback gives us insight into how well these techniques aid the end user in perceiving depth and shape of objects. Within this paper we discuss the tested techniques, present the conducted study and analyze the retrieved results.

  • 25.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ultrasound Surface Extraction Using Radial Basis Functions2014In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part II / [ed] George Bebis, Springer Publishing Company, 2014, Vol. 8888, p. 163-172Conference paper (Refereed)
    Abstract [en]

    Data acquired from ultrasound examinations is of interest not only for the physician, but also for the patient. While the physician uses the ultrasound data for diagnostic purposes the patient might be more interested in beautiful images in the case of prenatal imaging. Ultrasound data is noisy by nature and visually compelling 3D renderings are not always trivial to produce. This paper presents a technique which enables extraction of a smooth surface mesh from the ultrasound data by combining previous research in ultrasound processing with research in point cloud surface reconstruction. After filtering the ultrasound data using Variational Classification we extract a set of surface points. This set of points is then used to train an Adaptive Compactly Supported Radial Basis Functions system, a technique for surface reconstruction of noisy laser scan data. The resulting technique can be used to extract surfaces with adjustable smoothness and resolution and has been tested on various ultrasound datasets.
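
    A compact sketch of the reconstruction idea, with plain thin-plate-spline RBFs from SciPy substituted for the paper's Adaptive Compactly Supported RBFs to keep the example short: surface points are constrained to 0 and points offset along the normals to +/- eps, so the interpolant's zero level set approximates the scanned surface. The toy sphere data is an assumption; in the paper's setting the points would come from the filtered ultrasound volume.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def fit_implicit_surface(points, normals, eps=0.05, smoothing=1e-3):
        """Fit f with f = 0 on the surface and f = +/- eps at points offset
        along the outward normals, so {f = 0} approximates the surface."""
        centers = np.vstack([points, points + eps * normals, points - eps * normals])
        values = np.concatenate([np.zeros(len(points)),
                                 np.full(len(points), eps),
                                 np.full(len(points), -eps)])
        return RBFInterpolator(centers, values, kernel="thin_plate_spline",
                               smoothing=smoothing)

    # Toy data: noisy samples of the unit sphere (whose normals equal positions).
    rng = np.random.default_rng(1)
    p = rng.normal(size=(400, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    f = fit_implicit_surface(p + 0.01 * rng.normal(size=p.shape), p)

    # A mesh would follow from marching cubes over a grid of f-evaluations
    # (e.g. skimage.measure.marching_cubes); here we just probe the field:
    print(f(np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.5]])))  # ~0 on, < 0 inside
    ```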

  • 26.
    Eriksson, Björn
    et al.
    Linköping University, Department of Management and Engineering, Fluid and Mechanical Engineering Systems. Linköping University, The Institute of Technology.
    Nordin, Peter
    Linköping University, Department of Management and Engineering, Fluid and Mechanical Engineering Systems. Linköping University, The Institute of Technology.
    Krus, Petter
    Linköping University, Department of Management and Engineering, Fluid and Mechanical Engineering Systems. Linköping University, The Institute of Technology.
    Hopsan NG, A C++ Implementation using the TLM Simulation Technique2010In: SIMS 2010 Proceedings, The 51st Conference on Simulation and Modelling, 14-15 October 2010 Oulu, Finland / [ed] Esko Juuso, Oulu, Finland, 2010Conference paper (Refereed)
    Abstract [en]

    The Hopsan simulation package, used primarily for hydro-mechanical simulation, was first released in 1977. Modeling in Hopsan is based on a method using transmission line modeling, TLM. In TLM, component models are decoupled from each other through time delays. As components are decoupled and use distributed solvers, the simulation environment is suitable for distributed simulations. No numerical errors are introduced at simulation time when using TLM; all errors are related to modeling errors. This yields robust and fast simulations where the size of the time step does not have to be adjusted to achieve a numerically stable simulation. The distributive nature of TLM makes it convenient for use in multi-core approaches and high speed simulations. The latest version of Hopsan was released in August 2002, but now the next generation of this simulation package is being developed. This paper presents the development version of Hopsan NG and discusses some of its features and possible uses.
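
    The TLM principle that Hopsan builds on fits in a short script. The sketch below is not Hopsan NG code and its parameters are arbitrary; it only shows how the line's time delay decouples the two ends so each can be solved locally per time step. A pressure step reflects off a closed end and doubles, with no global equation system and no solver-induced numerical error.

    ```python
    from collections import deque

    Zc, T_steps, n_steps, Ps = 1.0e9, 10, 80, 1.0e6  # impedance, delay, steps, source

    # wave variables travelling in each direction, delayed by the line
    c_to_right = deque([0.0] * T_steps, maxlen=T_steps)
    c_to_left = deque([0.0] * T_steps, maxlen=T_steps)

    for k in range(n_steps):
        c1 = c_to_left.popleft()    # wave arriving at the source end
        c2 = c_to_right.popleft()   # wave arriving at the closed end

        # Source end: ideal pressure source, p1 = Ps  =>  q1 = (Ps - c1) / Zc
        p1 = Ps
        q1 = (Ps - c1) / Zc
        # Closed end: q2 = 0  =>  p2 = c2 (the incident wave reflects fully)
        p2, q2 = c2, 0.0

        # launch outgoing waves; they reach the far end T_steps later
        c_to_right.append(p1 + Zc * q1)
        c_to_left.append(p2 + Zc * q2)

        if k % 10 == 0:
            # pressure at the closed end alternates between 0 and 2*Ps
            print(f"step {k:2d}: p_closed_end = {p2 / 1e6:4.1f} MPa")
    ```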

  • 27. Ernvik, Aron
    et al.
    Bergström, Staffan
    Lundström, Claes
    Linköping University, Department of Science and Technology, Media and Information Technology.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology.
    Ynnerman, Anders
    Linköping University.
    Image data set compression based on viewing parameters for storing medical image data from multidimensional data sets, related systems, methods and computer products2012Patent (Other (popular science, discussion, etc.))
  • 28.
    Fjellborg, Björn
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    An approach to extraction of pipeline structures for VLSI high-level synthesis1990Licentiate thesis, monograph (Other academic)
    Abstract [en]

    One of the concerns in high-level synthesis is how to efficiently exploit the potential concurrency in a design. Pipelining achieves a high degree of concurrency, and a certain structural regularity through exploitation of locality in communication. However, pipelining cannot be applied to all designs. Pipeline extraction localizes parts of the design that can benefit from pipelining. Such extraction is a first step in pipeline synthesis. While current pipeline synthesis systems are restricted to exploitation of loops, this thesis addresses the problem of extracting pipeline structures from arbitrary designs without apparent pipelining properties. Therefore, an approach that is based on pipelining of individual computations is explored. Still, loops constitute an important special case, and can be encompassed within the approach in an efficient way. The general formulation of the approach cannot be applied directly for extraction purposes, because of a combinatorial explosion of the design space. An iterative search strategy to handle this problem is presented. A specific polynomial-time algorithm based on this strategy, using several additional heuristics to reduce complexity, has been implemented in the PiX system, which operates as a preprocessor to the CAMAD VLSI design system. The input to PiX is an algorithmic description in a Pascal-like language, which is translated into the Extended Timed Petri Net (ETPN) representation. The extraction is realized as analysis of and transformations on the ETPN. Preliminary results from PiX show that the approach is feasible and useful for realistic designs.

  • 29.
    Galijasevic, Mirza
    et al.
    Linköping University, Department of Electrical Engineering.
    Liedgren, Carl
    Linköping University, Department of Electrical Engineering.
    Efficient content distribution in IPTV environments2008Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Existing VoD solutions often rely on unicast to distribute content, which leads to a higher load on the VoD server as more nodes become interested in the content. In such cases, P2P is an alternative way of distributing content since it makes better use of available resources in the network. In this report, several P2P structures are evaluated from an operator's point of view. We believe BitTorrent is the most adequate protocol for a P2P solution in IPTV environments. Two BitTorrent clients have been implemented on an IP-STB as proof of concept to find out whether P2P is suited for IPTV environments. Several tests were conducted to evaluate the performance of both clients and to see if they were able to reach a sufficient throughput on the IP-STB. Based upon the tests and the overall impressions, we are convinced that this particular P2P protocol is well suited for IPTV environments. Hopefully, a client developed from scratch for the IP-STB will offer even better performance.

    Further, we have studied how to share recorded content among IP-STBs. Such a design would probably have many similarities to BitTorrent since a central node needs to keep track of content; the IP-STBs take care of the rest.

    The report also brings up whether BitTorrent is suitable for streaming. We believe that the necessary changes required to obtain such functionality will disrupt the strengths of BitTorrent. Some alternative solutions are presented where BitTorrent has been extended with additional modules, such as a server.

  • 30.
    Garcia Braga, Jose Renato
    et al.
    National Institute for Space Research, Sao Jose dos Campos, SP, Brazil.
    Conte, Gianpaolo
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Doherty, Patrick
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Campos Velho, Haroldo Fraga
    National Institute for Space Research, Sao Jose dos Campos, SP, Brazil.
    Shiguemori, Elcio Hideiti
    Institute of Advanced Studies, Sao Jose dos Campos, SP, Brazil.
    Use of Artificial Neural Networks for Automatic Categorical Change Detection in Satellite Imagery2016In: Proceedings of the 4th Conference of Computational Interdisciplinary Sciences (CCIS 2016), Pan American Association of Computational Interdisciplinary Sciences, 2016Conference paper (Other academic)
  • 31.
    Gharehbaghi, Arash
    et al.
    School of Innovation, Design and Technology, Mälardalen University, Västerås, Sweden.
    Babic, Ankica
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    Structural Risk Evaluation of a Deep Neural Network and a Markov Model in Extracting Medical Information from Phonocardiography2018In: Data, Informatics and Technology: An Inspiration for Improved Healthcare / [ed] Arie Hasman, Parisis Gallos, Joseph Liaskos, Mowafa S. Househ, John Mantas, IOS Press, 2018, Vol. 251, p. 157-160Chapter in book (Refereed)
    Abstract [en]

    This paper presents a method for exploring the structural risk of any artificial intelligence-based method in bioinformatics, the A-Test method. This method provides a way not only to quantify the structural risk associated with a classification method, but also to compare the learning capacity of different classification methods through a graphical representation. Two different methods, Deep Time Growing Neural Network (DTGNN) and Hidden Markov Model (HMM), are selected as the classification methods for comparison. Time series of heart sound signals are employed as the case study, where the classifiers are trained to learn the disease-related changes. Results showed that the DTGNN offers a superior performance both in terms of capacity and structural risk. The A-Test method is especially useful for comparing learning methods when the data size is small.

  • 32.
    Gharehbaghi, Arash
    et al.
    Malardalen Univ, Sweden.
    Sepehri, Amir A.
    CAPIS Biomed Res and Dev Ctr, Belgium.
    Linden, Maria
    Malardalen Univ, Sweden.
    Babic, Ankica
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering. Univ Bergen, Norway.
    Intelligent Phonocardiography for Screening Ventricular Septal Defect Using Time Growing Neural Network2017In: INFORMATICS EMPOWERS HEALTHCARE TRANSFORMATION, IOS PRESS , 2017, Vol. 238, p. 108-111Conference paper (Refereed)
    Abstract [en]

    This paper presents results of a study on the applicability of intelligent phonocardiography in discriminating between Ventricular Septal Defect (VSD) and regurgitation of the atrioventricular valves. An original machine learning method, based on the Time Growing Neural Network (TGNN), is employed for classifying phonocardiographic recordings collected from pediatric referrals to a children's hospital. 90 individuals (30 with VSD, 30 with valvular regurgitation, and 30 healthy subjects) participated in the study after informed consent was obtained. The accuracy and sensitivity of the approach are estimated to be 86.7% and 83.3%, respectively, showing a performance good enough for use in a decision support system.

  • 33.
    Granlund, Gösta H.
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology.
    A Nonlinear, Image-content Dependent Measure of Image Quality1977Report (Other academic)
    Abstract [en]

    In recent years, considerable research effort has been devoted to the development of useful descriptors for image quality. The attempts have been hampered by incomplete understanding of the operation of the human visual system. This has made it difficult to relate physical measures and perceptual traits.

    A new model for determination of image quality is proposed. Its main feature is that it tries to take image content into consideration. The model builds upon a theory of image linearization, which means that the information in an image can well enough be represented using linear segments or structures within local spatial regions and frequency ranges. This also implies a suggestion that information in an image has to do with one-dimensional correlations. This gives a possibility to separate image content from noise in images, and to measure them both.

    Also, a hypothesis is proposed that the visual system of humans does in fact perform such a linearization.

  • 34.
    Gundlegård, David
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Transport Analytics Based on Cellular Network Signalling Data2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Cellular networks of today generate a massive amount of signalling data. A large part of this signalling is generated to handle the mobility of subscribers and contains location information that can be used to fundamentally change our understanding of mobility patterns. However, the location data available from standard interfaces in cellular networks is very sparse and an important research question is how this data can be processed in order to efficiently use it for traffic state estimation and traffic planning.

    In this thesis, the potentials and limitations of using this signalling data in the context of estimating the road network traffic state and understanding mobility patterns are analyzed. The thesis describes in detail the location data that is available from signalling messages in GSM, GPRS and UMTS networks, both when terminals are in idle mode and when engaged in a telephone call or a data session. The potential is evaluated empirically using signalling data and measurements generated by standard cellular phones. The data used for analysis of location estimation and route classification accuracy (Papers I-IV in the thesis) is collected using dedicated hardware and software for cellular network analysis as well as tailor-made Android applications. For evaluation of more advanced methods for travel time estimation, data from GPS devices located in taxis is used in combination with data from fixed radar sensors observing point speed and flow on the road network (Paper V). To evaluate the potential in using cellular network signalling data for analysis of mobility patterns and transport planning, real data provided by a cellular network operator is used (Paper VI).

    The signalling data available in all three types of networks is useful to estimate several types of traffic data that can be used for traffic state estimation as well as traffic planning. However, the resolution in time and space largely depends on which type of data is extracted from the network, which type of network is used and how it is processed.

    The thesis proposes new methods based on integrated filtering and classification as well as data assimilation and fusion that allows measurement reports from the cellular network to be used for efficient route classification and estimation of travel times. The thesis also shows that participatory sensing based on GPS equipped smartphones is useful in estimating radio maps for fingerprint-based positioning as well as estimating mobility models for use in filtering of course trajectory data from cellular networks.

    For travel time estimation, it is shown that the CEP-67 location accuracy based on the proposed methods can be improved from 111 meters to 38 meters compared to standard fingerprinting methods. For route classification, it is shown that the problem can be solved efficiently for highway environments using basic classification methods. For urban environments, the link precision and recall are improved from 0.5 and 0.7 for standard fingerprinting to 0.83 and 0.92 for the proposed method based on particle filtering with integrity monitoring and Hidden Markov Models.

    Furthermore, a processing pipeline for data driven network assignment is proposed for billing data to be used when inferring mobility patterns used for traffic planning in terms of OD matrices, route choice and coarse travel times. The results of the large-scale data set highlight the importance of the underlying processing pipeline for this type of analysis. However, they also show very good potential in using large data sets for identifying needs of infrastructure investment by filtering out relevant data over large time periods.

    List of papers
    1. The Smartphone As Enabler for Road Traffic Information Based on Cellular Network Signalling
    2013 (English)In: Intelligent Transportation Systems (ITSC), 2013, IEEE , 2013, p. 2106-2112Conference paper, Published paper (Refereed)
    Abstract [en]

    The higher penetration rate of GPS-enabled smartphones together with their improved processing power and battery life makes them suitable for a number of participatory sensing applications. The purpose of this paper is to analyse how GPS-enabled smartphones can be used in a participatory sensing context to build a radio map for RSS-based positioning, with a special focus on road traffic information based on cellular network signalling. The CEP-67 location accuracy achieved is 75 meters for both GSM and UMTS using Bayesian classification. For this test site, the accuracy is similar for GSM and UMTS, with slightly better results for UMTS in the CEP-95 error metric. The location accuracy achieved is good enough to avoid large errors in travel time estimation for highway environments, especially considering the possibility to filter out estimates with low accuracy using for example the posterior bin probability in Bayesian classification. For urban environments more research is required to determine how the location accuracy will affect the path inference problem in a dense road network. The location accuracy achieved in this paper is also sufficient for other traffic information types, for example origin-destination estimation based on location area updates.
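
    A minimal sketch of RSS fingerprinting with Bayesian classification as described above, assuming a Gaussian RSS model per location bin (the paper's exact likelihood model is not reproduced): positioning picks the bin with the highest posterior, and the posterior probability itself can serve to filter out low-confidence estimates.

    ```python
    import numpy as np
    from scipy.stats import norm

    def fit_radio_map(rss_by_bin):
        """rss_by_bin: {bin_id: array (n_obs, n_cells)} -> per-bin (mean, std)."""
        return {b: (x.mean(axis=0), x.std(axis=0) + 1.0)   # +1 dB floor on sigma
                for b, x in rss_by_bin.items()}

    def posterior(radio_map, rss):
        """Posterior bin probabilities for one RSS vector (uniform prior)."""
        logp = {b: norm.logpdf(rss, mu, sd).sum() for b, (mu, sd) in radio_map.items()}
        m = max(logp.values())
        w = {b: np.exp(v - m) for b, v in logp.items()}    # stable normalization
        z = sum(w.values())
        return {b: v / z for b, v in w.items()}

    # toy radio map with two location bins and three observed cells
    rng = np.random.default_rng(2)
    truth = {0: np.array([-60.0, -80.0, -90.0]), 1: np.array([-85.0, -65.0, -75.0])}
    rmap = fit_radio_map({b: mu + 4.0 * rng.normal(size=(50, 3))
                          for b, mu in truth.items()})
    post = posterior(rmap, np.array([-62.0, -78.0, -88.0]))
    best = max(post, key=post.get)
    print(best, round(post[best], 3))   # bin 0, with a high posterior probability
    ```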

    Place, publisher, year, edition, pages
    IEEE, 2013
    National Category
    Engineering and Technology Transport Systems and Logistics
    Identifiers
    urn:nbn:se:liu:diva-102022 (URN)10.1109/ITSC.2013.6728540 (DOI)978-147992914-6 (ISBN)
    Conference
    16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), 6-9 October 2013, The Hague, Netherlands
    Available from: 2013-11-26 Created: 2013-11-26 Last updated: 2018-11-15
    2. Handover location accuracy for travel time estimation in GSM and UMTS
    2009 (English)In: IET Intelligent Transport Systems, ISSN 1751-956X, E-ISSN 1751-9578, Vol. 3, no 1, p. 87-94Article in journal (Refereed) Published
    Abstract [en]

    Field measurements from the GSM and UMTS networks are analysed in a road traffic information context. The measurements indicate a potentially large improvement using UMTS signalling data compared with GSM regarding handover location accuracy. These improvements can be used to generate real-time traffic information with higher quality and extend the geographic usage area for cellular-based travel time estimation systems. The results confirm previous reports indicating that the technology has a large potential in GSM and also show that the potential might be even larger and more flexible using UMTS. Assuming that non-vehicle terminals can be filtered out, that vehicles are tracked to the correct route and that handovers can be predicted correctly, a conclusion from the experiments is that the handover location accuracy in both GSM and UMTS will be sufficient to estimate useful travel times, also in urban environments. In a real system, these tasks are typically very challenging, especially in an urban environment. Further, it is reasonably established that the location error will be minor for the data obtained from UMTS.

    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-16517 (URN)10.1049/iet-its:20070067 (DOI)
    Available from: 2013-04-05 Created: 2009-01-30 Last updated: 2018-11-15Bibliographically approved
    3. Route Classification in Travel Time Estimation Based on Cellular Network Signaling
    2009 (English)In: Proceedings of 12th International IEEE Conference on Intelligent Transport Systems (ITSC), October 3-7, St. Louis, USA, 2009, p. 474-479Conference paper, Published paper (Refereed)
    Abstract [en]

    Travel time estimation based on cellular network signaling is a promising technology for delivery of wide area travel times in real-time. The technology has received much attention recently, but few academic research reports have so far been published in the area, which together with uncertain location estimates and environment-dependent performance makes it difficult to assess the potential of the technology. This paper aims to investigate the route classification task in a cellular travel time estimation context in detail. In order to estimate the magnitude of the problem, two classification algorithms are developed, one based on nearest neighbor classification and one based on Bayesian classification. These are then evaluated using field measurements from the GSM network. A conclusion from the results is that the route classification problem is not trivial even in a highway environment, due to effects of multipath propagation and a changing radio environment. In a highway environment the classification problem can be solved rather efficiently using, e.g., one of the methods described in this paper, keeping the effect on travel time accuracy low. However, in order to solve the route classification task in urban environments, more research is required.

    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-50949 (URN)10.1109/ITSC.2009.5309692 (DOI)978-1-4244-5520-1 (ISBN)978-1-4244-5519-5 (ISBN)
    Conference
    12th International IEEE Conference on Intelligent Transport Systems (ITSC), October 3-7, St. Louis, USA
    Available from: 2013-04-05 Created: 2009-10-15 Last updated: 2018-11-15Bibliographically approved
    4. Travel Time and Point Speed Fusion Based on a Macroscopic Traffic Model and Non-linear Filtering
    2015 (English)In: 2015 IEEE 18th International Conference on Intelligent Transportation Systems, IEEE conference proceedings, 2015, p. 2121-2128Conference paper, Published paper (Refereed)
    Abstract [en]

    The number and heterogeneity of traffic sensors are steadily increasing. A large part of the emerging sensors are measuring point speeds or travel times and in order to make efficient use of this data, it is important to develop methods and frameworks for fusion of point speed and travel time measurements in real-time. The proposed method combines a macroscopic traffic model and a non-linear filter with a new measurement model for fusion of travel time observations in a system that uses the velocity of cells in the network as state vector. The method aims to improve the fusion efficiency, especially when travel time observations are relatively long compared to the spatial resolution of the estimation framework. The method is implemented using the Cell Transmission Model for velocity (CTM-v) and the Ensemble Kalman Filter (EnKF) and evaluated with promising results in a test site in Stockholm, Sweden, using point speed observations from radar and travel time observations from taxis.
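
    The analysis step of the Ensemble Kalman Filter used in the proposed fusion can be sketched generically; the CTM-v forward model and the real travel-time measurement operator are application-specific and only stubbed here as assumptions. Because the gain is built from ensemble covariances, the observation operator h may be non-linear, which is what allows whole-link travel times to update a state of cell velocities.

    ```python
    import numpy as np

    def enkf_update(X, y, h, R, rng):
        """X: (n_state, n_ens) ensemble; y: (n_obs,) observation; h: state->obs."""
        n_ens = X.shape[1]
        HX = np.column_stack([h(X[:, j]) for j in range(n_ens)])   # (n_obs, n_ens)
        Xm, HXm = X.mean(axis=1, keepdims=True), HX.mean(axis=1, keepdims=True)
        A, HA = X - Xm, HX - HXm
        P_xy = A @ HA.T / (n_ens - 1)                 # state-obs covariance
        P_yy = HA @ HA.T / (n_ens - 1) + R            # innovation covariance
        K = P_xy @ np.linalg.inv(P_yy)                # Kalman gain
        # perturbed observations, one draw per ensemble member
        Y = y[:, None] + np.linalg.cholesky(R) @ rng.normal(size=(len(y), n_ens))
        return X + K @ (Y - HX)                       # updated ensemble

    rng = np.random.default_rng(3)
    X = 20.0 + 3.0 * rng.normal(size=(10, 50))        # 10 cells, 50 members (m/s)
    h = lambda v: np.array([1000.0 * np.sum(1.0 / v[:5])])  # toy 5-cell travel time
    R = np.array([[4.0]])                             # obs noise variance (s^2)
    X = enkf_update(X, np.array([260.0]), h, R, rng)  # fuse one taxi travel time
    ```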

    Place, publisher, year, edition, pages
    IEEE conference proceedings, 2015
    Series
    IEEE International Conference on Intelligent Transportation Systems-ITSC, ISSN 2153-0009
    Keywords
    Cell Transmission Model, Data fusion, Ensemble Kalman Filtering, Traffic state estimation
    National Category
    Other Electrical Engineering, Electronic Engineering, Information Engineering
    Transport Systems and Logistics
    Identifiers
    urn:nbn:se:liu:diva-129376 (URN)10.1109/ITSC.2015.343 (DOI)000376668802033 ()978-1-4673-6595-6 (ISBN)
    Conference
    2015 IEEE 18th International Conference on Intelligent Transportation Systems. 15-18 Sept. 2015, Las Palmas
    Available from: 2016-06-17 Created: 2016-06-17 Last updated: 2018-11-15Bibliographically approved
    5. Travel demand estimation and network assignment based on cellular network data
    2016 (English)In: Computer Communications, ISSN 0140-3664, Vol. 95, p. 29-42Article in journal (Refereed) Published
    Abstract [en]

    Cellular network signaling data provide a means for analyzing the efficiency of an underlying transportation system and assist in formulating models to predict its future use. This paper describes how signaling data can be processed and used to generate input for traditional transportation analysis models. Specifically, we propose a tailored set of mobility metrics and a computational pipeline including trip extraction, travel demand estimation, and route and link travel flow estimation based on Call Detail Records (CDR) from mobile phones. The results are based on the analysis of data from the Data for Development (D4D) challenge and include data from Côte d'Ivoire and Senegal.
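
    A minimal sketch of the trip extraction step might look as follows; the event format, dwell threshold and segmentation rule are assumptions for illustration, not the pipeline actually used in the paper.

    from collections import defaultdict

    DWELL_MINUTES = 60  # silences longer than this end a trip (assumed threshold)

    def extract_trips(cdr_events):
        """cdr_events: iterable of (user_id, timestamp_minutes, cell_id).
        Returns (user_id, [(t, cell), ...]) segments visiting more than one cell."""
        by_user = defaultdict(list)
        for user, t, cell in cdr_events:
            by_user[user].append((t, cell))

        trips = []
        for user, events in by_user.items():
            events.sort()
            segment = [events[0]]
            for prev, ev in zip(events, events[1:]):
                if ev[0] - prev[0] > DWELL_MINUTES:        # long silence: segment ends
                    if len({c for _, c in segment}) > 1:   # moved between cells: a trip
                        trips.append((user, segment))
                    segment = []
                segment.append(ev)
            if len({c for _, c in segment}) > 1:
                trips.append((user, segment))
        return trips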

    Place, publisher, year, edition, pages
    Elsevier Science BV, 2016
    Keywords
    Mobility analytics; Travel demand estimation; Traffic modeling; Mobile phone call data; Cellular network data; Call detail records; Intelligent transport systems
    National Category
    Computer Engineering
    Identifiers
    urn:nbn:se:liu:diva-134086 (URN)10.1016/j.comcom.2016.04.015 (DOI)000390722300004 ()
    Note

    Funding Agencies|Swedish Governmental Agency for Innovation Systems (VINNOVA)

    Available from: 2017-01-26 Created: 2017-01-22 Last updated: 2018-11-19
  • 35.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    2D Shape Rendering by Distance Fields2012In: OpenGL Insights: OpenGL, OpenGL ES, and WebGL community experiences / [ed] Patrick Cozzi and Christophe Riccio, CRC Press, 2012, p. 173-182Chapter in book (Other academic)
    Abstract [en]

    We present a method for real-time rendering of anti-aliased curved contours, combining recent results from research on distance transforms with modern GPU shading using GLSL. The method is capable of rendering glyphs and symbols of very high quality at arbitrary levels of magnification and minification, and it is both versatile and easy to use.
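
    The core idea can be sketched outside the GPU as well: each pixel's alpha is obtained by smoothing the contour's distance value over roughly one pixel. The NumPy sketch below uses an analytic circle distance field and an assumed filter width; the chapter does the equivalent per fragment in GLSL.

    import numpy as np

    def smoothstep(edge0, edge1, x):
        """Hermite interpolation, as in GLSL's smoothstep."""
        t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
        return t * t * (3.0 - 2.0 * t)

    h = w = 256
    ys, xs = np.mgrid[0:h, 0:w]
    # Signed distance (in pixels) to a circle of radius 80 centred in the image.
    sdf = np.hypot(xs - w / 2, ys - h / 2) - 80.0

    aa = 0.7                                  # filter width of about one pixel
    alpha = 1.0 - smoothstep(-aa, aa, sdf)    # 1 inside, 0 outside, smooth edge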

  • 36.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Procedural Textures in GLSL2012In: OpenGL Insights: OpenGL, OpenGL ES and WebGL community experiences / [ed] Patrick Cozzi and Christophe Riccio, CRC Press, 2012, p. 105-119Chapter in book (Other academic)
    Abstract [en]

    Procedural shading has been a versatile and popular tool for off-line rendering for decades. With the ever-increasing speed and computational capabilities of modern GPUs, it is now becoming possible to use procedural shading for real-time rendering as well. This chapter is an introduction to some classic procedural shading techniques, adapted for real-time use.
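
    As a taste of the material, the sketch below evaluates one classic building block, 2D value noise with smooth (Hermite) interpolation, on a pixel grid in NumPy rather than in GLSL; the hash function and lattice size are illustrative choices, not code from the chapter.

    import numpy as np

    def hash01(ix, iy):
        """Deterministic pseudo-random value in [0,1) per lattice point."""
        n = (ix * 1619 + iy * 31337) & 0x7FFFFFFF
        n = (n ^ (n >> 13)) * 1274126177 & 0x7FFFFFFF
        return n / 0x7FFFFFFF

    def value_noise(x, y):
        ix, iy = np.floor(x).astype(int), np.floor(y).astype(int)
        fx, fy = x - ix, y - iy
        # Smooth interpolation weights, as in GLSL's smoothstep.
        ux, uy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
        n00, n10 = hash01(ix, iy), hash01(ix + 1, iy)
        n01, n11 = hash01(ix, iy + 1), hash01(ix + 1, iy + 1)
        nx0 = n00 + ux * (n10 - n00)
        nx1 = n01 + ux * (n11 - n01)
        return nx0 + uy * (nx1 - nx0)

    ys, xs = np.mgrid[0:256, 0:256] / 32.0   # 8x8 lattice cells over the image
    texture = value_noise(xs, ys)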

  • 37.
    Hadwiger, Markus
    et al.
    VRVis Research Center, Vienna, Austria.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Siemens Corporate Research, Princeton, USA.
    Rezk Salama, Christof
    University of Siegen, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. University of Münster, Germany.
    Advanced illumination techniques for GPU volume raycasting2008In: ACM Siggraph Asia 2008 Courses, 2008, p. 1-11Conference paper (Refereed)
    Abstract [en]

    Volume raycasting techniques are important for both visual arts and visualization. They allow an efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the-art technique for interactive volume rendering, and it will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft-body animation and constructive solid geometry.

    The lecture starts with an in-depth introduction to the concepts behind GPU-based raycasting to provide a common base for the following parts. The focus of the course is on advanced illumination techniques that approximate physically based light transport more convincingly. Such techniques include interactive implementations of soft and hard shadows, ambient occlusion, and simple Monte Carlo based approaches to global illumination, including translucency and scattering. With the proposed techniques, users can interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties of participating media are defined using the phase function. Many approximations of physically based light transport applied to rendering natural phenomena such as clouds or smoke assume a rather homogeneous phase function model; for rendering volumetric scans, on the other hand, different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualizations for science magazines can now work on tomographic scans directly, without having to fall back on creating polygonal models of anatomical structures.
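
    The basic loop that such illumination techniques build on can be sketched as a front-to-back ray marcher with emission-absorption compositing and early ray termination. The volume, transfer function and step size below are toy assumptions; a GPU version runs this loop per pixel in a fragment or compute shader.

    import numpy as np

    def raymarch(volume, origin, direction, step=0.5, n_steps=256):
        color, alpha = 0.0, 0.0
        pos = np.array(origin, dtype=float)
        d = np.array(direction, dtype=float)
        d /= np.linalg.norm(d)
        for _ in range(n_steps):
            i, j, k = np.floor(pos).astype(int)
            if all(0 <= v < s for v, s in zip((i, j, k), volume.shape)):
                density = volume[i, j, k]
                # Toy transfer function: density maps directly to opacity/emission.
                a = min(density * step * 0.1, 1.0)
                color += (1 - alpha) * a * density   # front-to-back compositing
                alpha += (1 - alpha) * a
                if alpha > 0.99:                     # early ray termination
                    break
            pos += step * d
        return color, alpha

    vol = np.random.default_rng(1).random((32, 32, 32))
    print(raymarch(vol, origin=(0, 16, 16), direction=(1, 0, 0)))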

  • 38.
    Hadwiger, Markus
    et al.
    VRVis Research Center, Vienna, Austria.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Siemens Corporate Research, Princeton, USA.
    Rezk-Salama, Christof
    University of Siegen, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. University of Münster, Germany.
    Advanced Illumination Techniques for GPU-Based Volume Raycasting2009Other (Other academic)
    Abstract [en]

    Volume raycasting techniques are important for both visual arts and visualization. They allow an efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the-art technique for interactive volume rendering, and it will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft-body animation and constructive solid geometry.

    The lecture starts with an in-depth introduction to the concepts behind GPU-based raycasting to provide a common base for the following parts. The focus of the course is on advanced illumination techniques that approximate physically based light transport more convincingly. Such techniques include interactive implementations of soft and hard shadows, ambient occlusion, and simple Monte Carlo based approaches to global illumination, including translucency and scattering. With the proposed techniques, users can interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties of participating media are defined using the phase function. Many approximations of physically based light transport applied to rendering natural phenomena such as clouds or smoke assume a rather homogeneous phase function model; for rendering volumetric scans, on the other hand, different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualizations for science magazines can now work on tomographic scans directly, without having to fall back on creating polygonal models of anatomical structures.

  • 39.
    Hagström, Åsa
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology.
    Understanding Certificate Revocation2006Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Correct certificate revocation practices are essential to any public-key infrastructure. While a number of protocols exist for achieving revocation in PKI systems, there has been very little work on the theory behind them: Which different types of revocation can be identified? What is the intended effect of a specific revocation type on the knowledge base of each entity?

    As a first step towards a methodology for the development of reliable models, we present a graph-based formalism for specification of, and reasoning about, the distribution and revocation of public keys and certificates. The model is an abstract generalization of existing PKIs and is distributed in nature; each entity can issue certificates for public keys that it has confidence in, and distribute or revoke these to and from other entities.

    Each entity has its own public-key base and can derive new knowledge by combining this knowledge with certificates signed with known keys. Each statement that is deduced or quoted within the system derives its support from original knowledge formed outside the system. When such original knowledge is removed, all statements that depended upon it are removed as well. Cyclic support is avoided through the use of support sets.

    We define different revocation reasons and show how they can be modelled as specific actions. Revocation by removal, by inactivation, and by negation are all included. By policy, negative statements are the strongest, and positive are the weakest. Collisions are avoided by removing the weaker statement and, when necessary, its support.

    Graph transformation rules are the chosen formalism. Rules are either interactive changes that can be applied by entities, or automatically applied deductions that keep the system sound and complete after the application of an interactive rule.

    We show that the proposed model is sound and complete with respect to our definition of a valid state.
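
    The cascading-removal behavior can be illustrated with a toy reduction of the model: every derived statement records its support set, and revoking original knowledge transitively removes everything that depended on it. The data model below is an illustrative simplification, not the thesis's graph-transformation formalism.

    class KnowledgeBase:
        def __init__(self):
            self.statements = {}   # statement -> set of supporting statements

        def add_original(self, stmt):
            self.statements[stmt] = set()   # support comes from outside the system

        def derive(self, stmt, support):
            if all(s in self.statements for s in support):
                self.statements[stmt] = set(support)

        def revoke(self, stmt):
            """Remove stmt and, transitively, everything it supported."""
            if stmt not in self.statements:
                return
            del self.statements[stmt]
            dependents = [s for s, sup in self.statements.items() if stmt in sup]
            for d in dependents:
                self.revoke(d)

    kb = KnowledgeBase()
    kb.add_original("key(A)")                  # original knowledge
    kb.derive("cert(A,B)", {"key(A)"})         # supported by key(A)
    kb.derive("key(B)", {"cert(A,B)"})
    kb.revoke("key(A)")                        # cascades: cert(A,B) and key(B) go too
    print(kb.statements)                       # -> {}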

  • 40.
    Hassan, Waqar Ul
    Linköping University, Department of Computer and Information Science.
    Pixel Based and Object Oriented Multi-spectral Remotely Sensed Data Analysis for Flood Risk Assessment and Vulnerability Mapping.2010Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Geographical information systems (GIS) combined with remotely sensed data can be instrumental in many ways for disaster management and post-disaster rehabilitation. During the last few decades the use of remotely sensed data has increased extensively; although image interpretation tools are not highly accurate, they are still considered a fast, reliable and useful way to extract information from imagery. Disaster assessment, management and rehabilitation constantly challenge experts. Population growth and the expansion of settlements, whether rural or urban, create problems not only for humans but also for the global environment, and such large-scale global changes disturb ecological processes. GIS together with remote sensing data can change this whole scenario in a very short period of time. All departments involved in the strategic disaster-planning process can share their information through a single platform; for this purpose a spatial database can help by providing spatial data in digital format to the departments concerned. Spatial phenomena can be observed using different image analysis techniques, and the resulting thematic maps display the spatial variations and changes that describe a particular phenomenon, whether a disaster or a change in soil or vegetation type. Remotely sensed data such as aerial, satellite and radar images are very useful in formulating disaster management strategies. The integration of GIS and remote sensing has proved particularly effective for land-use and land-cover mapping. For this purpose, pixel-based, sub-pixel-based, per-field and object-oriented classification approaches are in use around the world, but thematic maps created from imagery analyzed with object-oriented classifiers are more accurate than those produced by the other techniques.

  • 41.
    Heintz, Fredrik
    et al.
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab. Linköping University, The Institute of Technology.
    Erlander Klein, Inger
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Civilingenjör i Mjukvaruteknik vid Linköpings universitet: mål, design och erfarenheter2013In: Proceedings of 4:de Utvecklingskonferensen för Sveriges ingenjörsutbildningar (UtvSvIng) / [ed] S. Vikström, R. Andersson, F. Georgsson, S. Gunnarsson, J. Malmqvist, S. Pålsson och D. Raudberget, 2013Conference paper (Refereed)
    Abstract [en]

    In the autumn of 2013, Linköping University started the first civilingenjör (five-year Master of Science in Engineering) program in Software Engineering. The goals of the program include providing a holistic perspective on modern large-scale software development, giving a solid foundation in computer science and computational thinking, and promoting entrepreneurship and innovation. The student response has exceeded expectations, with over 600 applicants for the 30 places, 134 of whom were first-choice applicants. Here we present the program's vision, goals and design principles, as well as the finished program. An important model has been the ACM/IEEE Computer Science Curricula, which has just appeared in a new, updated version. Three pedagogical ideas we have followed are: (1) to use project courses to integrate theory and practice and to give experience of the most common way of working in industry; (2) to teach several different programming languages and several different software development methodologies, giving students a platform for absorbing the latest developments in the field; and (3) to introduce a program-wide course in engineering professionalism in years 1–3 that gives students tools to reflect on their own learning, on working in industry, and on their professional role. The article concludes with a discussion of important aspects such as computational thinking and the ACM/IEEE CS Curricula.

  • 42.
    Heintz, Fredrik
    et al.
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab. Linköping University, The Institute of Technology.
    Erlander Klein, Inger
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    The Design of Sweden's First 5-year Computer Science and Software Engineering Program2014In: Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE 2014), ACM Press, 2014, p. 199-204Conference paper (Refereed)
  • 43.
    Heintz, Fredrik
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, The Institute of Technology.
    Färnqvist, Tommy
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Återkoppling genom automaträttning2013In: Proceedings of 4:de Utvecklingskonferensen för Sveriges ingenjörsutbildningar (UtvSvIng), 2013Conference paper (Refereed)
    Abstract [sv]

    We have investigated different forms of feedback through automated grading in a course on data structures and algorithms. In 2011 we studied the effects of competition-like elements that also use automated grading. In 2012 we introduced automated grading of the lab assignments, and studied how feedback through automated grading affects the students' way of working, their level of performance, and their relation to the examining staff. With automated grading, students receive immediate feedback on whether their program is fast enough and produces the correct answers on test data. Once a program is correct and resource-efficient, the course assistants check that it also meets other requirements, such as being well written and well structured. After the course we surveyed the students' attitudes to, and experience of, automated grading. The results show that the students are positive towards automated grading (80% of all respondents) and that it influenced their way of working in a mainly positive direction; for example, 50% answered that they worked harder thanks to the automated grading. In addition, the grading becomes more objective, since it is performed in exactly the same way for everyone. Our conclusion is that feedback through automated grading has positive effects and is perceived positively by the students.
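
    The kind of check described can be sketched as follows: run a submission on test data with a time limit and compare its output to the expected answer. The file names, limits and pass criteria are assumptions for illustration, not the course's actual grading system.

    import subprocess
    import sys

    def autograde(program, test_input, expected_output, time_limit=2.0):
        """Return (passed, reason) for a single test case."""
        try:
            result = subprocess.run(
                [sys.executable, program], input=test_input, text=True,
                capture_output=True, timeout=time_limit)
        except subprocess.TimeoutExpired:
            return False, "time limit exceeded"
        if result.returncode != 0:
            return False, "runtime error"
        if result.stdout.strip() != expected_output.strip():
            return False, "wrong answer"
        return True, "accepted"

    # Hypothetical usage: immediate feedback on a sorting exercise.
    print(autograde("sort_solution.py", "3 1 2\n", "1 2 3\n"))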

  • 44.
    Hellgren, Marcus
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, The Institute of Technology.
    Enbrant, Ida
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, The Institute of Technology.
    Vilken Open Source SIP-server lämpar sig bäst för Android?2014Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [sv]

    This study tackles one of the problems that modern IP telephony struggles with and that makes it harder to compete with traditional telephony: delays caused by inadequate Session Initiation Protocol (SIP) servers. Through tests of the most important factors, four current open-source SIP servers are compared: OpenSIPS, Kamailio, FreeSWITCH and Yate. The intention is to simplify the choice of SIP server for new IP telephony applications, to increase performance, and to speed up development. The study covers the most relevant factors in choosing a SIP server, such as ease of use, speed, storage of user data, and audio quality. The conclusion is that Kamailio emerged as the clear winner, with superior results on the selected parameters compared with the other servers. The performance differences were relatively small; what really decided the outcome was primarily how demanding the servers were to install, use and configure.

  • 45.
    Hernell, Frida
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Local ambient occlusion in direct volume rendering2010In: Visualization and Computer Graphics, IEEE Transactions on, ISSN 1077-2626, Vol. 16, no 4, p. 548-559Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel technique to efficiently compute illumination for Direct Volume Rendering, using a local approximation of ambient occlusion to integrate the intensity of incident light for each voxel. An advantage of this local approach is that fully shadowed regions are avoided, a desirable feature in many applications of volume rendering such as medical visualization.

    Additional transfer function interactions are also presented, for instance to highlight specific structures with luminous tissue effects and to create improved context for semitransparent tissues with a separate absorption control for the illumination settings. Multiresolution volume management and GPU-based computation are used to accelerate the calculations and support large data sets. The scheme yields interactive frame rates with an adaptive sampling approach for incrementally refined illumination under arbitrary transfer function changes. The illumination effects can give a better understanding of the shape and density of tissues and so have the potential to increase the diagnostic value of medical volume rendering. Since the proposed method is gradient-free, it is especially beneficial at the borders of clip planes, where gradients are undefined, and for noisy data sets.
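
    The gradient-free flavor of the method can be conveyed with a toy sketch in which each voxel's ambient term is approximated from the opacity of its local neighborhood; the box-shaped neighborhood and direct density-to-opacity mapping below are illustrative simplifications of the paper's local spherical integration on the GPU.

    import numpy as np
    from scipy.ndimage import uniform_filter  # SciPy assumed available

    def local_ambient_occlusion(opacity, radius=3):
        """opacity: 3-D array in [0,1]; returns an ambient light factor in [0,1]."""
        # Mean opacity in a (2*radius+1)^3 box around each voxel.
        local_occlusion = uniform_filter(opacity, size=2 * radius + 1)
        return 1.0 - local_occlusion  # dense surroundings -> darker voxel

    volume = np.random.default_rng(2).random((64, 64, 64))  # toy density volume
    ambient = local_ambient_occlusion(volume)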

  • 46.
    Ho, Quan
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Jern, Mikael
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Exploratory 3D Geovisual Analytics2008In: 2008 IEEE International Conference on Research, Innovation and Vision for the Future in Computing Communications Technologies, 2008 / [ed] Tru Cao, Tu-Bao Ho, P. O. Box 1331, 445 Hoes Lane, Piscataway, NJ 08855-1331 USA: IEEE Operations Center, 2008, p. 276-283Conference paper (Refereed)
    Abstract [en]

    In this paper, we extend our generic "GeoAnalytics" visualization (GAV) component toolkit, based on the principles behind Visual Analytics (VA), to also support time-oriented, geographically referenced and multivariate attribute volumetric data. GAV includes components that support a mixture of technologies from three data visualization fields: information visualization (InfoVis), geovisualization (GeoVis) and scientific visualization (SciVis). Our research concentrates on visual user interface (VUI) techniques based on dynamic and direct data manipulation that allow the visual analytical process to become more interactive and focused. This paper encourages synergies between well-known information and volume data visualization methods applied in a multiple-linked and coordinated views interface. We address challenges for improved data interaction techniques with volumetric data and the need for immediate response. A variety of exploratory data analysis (EDA) tasks and the possibility to view the information simultaneously from different perspectives and scenarios are discussed. The effectiveness of our geovisual analytics framework is demonstrated in a tailor-made volume data explorer (VDE) application that integrates InfoVis, GeoVis and SciVis visualization methods assembled from GAV components. VDE facilitates dynamic exploration and correlation of temporal ocean-space temperature and salinity data supplied in NetCDF format from NOAA; this real-world phenomenon corresponds to a huge volumetric data set comprising more than 31 million values for a 12-month period in 1994.

  • 47.
    Ho, Quan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Lundblad, Patrik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Åström, Tobias
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Jern, Mikael
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    A Web-Enabled Visualization Toolkit for Geovisual Analytics2012In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 11, no 1, p. 22-42Article in journal (Refereed)
    Abstract [en]

    A framework and class library (GAV Flash) implemented in Adobe's ActionScript is introduced, designed with the intention to significantly shorten the time and effort needed to develop customized web-enabled applications for geovisual analytics tasks. Through an atomic, layered component architecture, GAV Flash provides a collection of interactive geo- and information-visualization representations for exploring high-dimensional spaces, extended with motion behavior. Versatile interaction methods are drawn from many data visualization research areas and optimized for dynamic web visualization of spatio-temporal and multivariate data. Based on layered component thinking and the use of a programming interface mechanism, the GAV Flash architecture is open and facilitates the creation of new or improved versions of existing components, so that ideas can be tried out or optimized rapidly in a fully functional environment. GAV Flash is not only a tool for interactive visualization but also supports storytelling around visual analytics, in which visual representations serve not only as a discovery tool for individuals but also as a means to share stories among users, fostering a social style of collaborative data analysis. A "snapshot" mechanism for saving the explorative results of a reasoning process is introduced, which aids collaboration and the publication of gained insight and knowledge embedded as dynamic visualizations in blogs or web pages with associative metadata, or "storytelling".
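
    The snapshot idea can be sketched as serializing the current exploration state together with the analyst's narrative so that a story step can be embedded in a web page and restored later. The state fields and JSON layout below are invented for illustration; GAV Flash implements this in ActionScript with its own schema.

    import json

    def take_snapshot(view_state, annotation):
        """Capture the current view plus the analyst's narrative text."""
        return json.dumps({"state": view_state, "annotation": annotation})

    def restore_snapshot(snapshot):
        data = json.loads(snapshot)
        return data["state"], data["annotation"]

    snap = take_snapshot(
        {"indicator": "temperature", "year": 1994, "selected_region": "Baltic"},
        "Salinity and temperature diverge here in late summer.")
    print(restore_snapshot(snap))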

  • 48.
    Ho, Quan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Lundblad, Patrik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Åström, Tobias
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Jern, Mikael
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    A Web-Enabled Visualization Toolkit for Geovisual Analytics2011In: Proceedings of SPIE, the International Society for Optical Engineering: SPIE: Electronic Imaging Science and Technology, Visualization and Data Analysis / [ed] Chung Wong, Pak; Park, Jinah; Hao, Ming C.; Chen, Chaomei; Börner, Katy; Kao, David L.; Roberts, Jonathan C., SPIE, Bellingham WA, USA: SPIE - International Society for Optical Engineering, 2011, p. 78680R-78680R-12Conference paper (Refereed)
    Abstract [en]

    We introduce a framework and class library (GAV Flash) implemented in Adobe's ActionScript, designed with the intention to significantly shorten the time and effort needed to develop customized web-enabled applications for visual analytics or geovisual analytics tasks. Through an atomic, layered component architecture, GAV Flash provides a collection of common geo- and information-visualization representations extended with motion behavior, including a scatter matrix, extended parallel coordinates, a table lens, a choropleth map and a treemap, integrated in a multiple, time-linked layout. Versatile interaction methods are drawn from many data visualization research areas and optimized for dynamic web visualization of spatio-temporal and multivariate data. Based on layered component thinking and the use of a programming interface mechanism, the GAV Flash architecture is open and facilitates the creation of new or improved versions of existing components, so that ideas can be tried out or optimized rapidly in a fully functional environment. Following the Visual Analytics mantra, a "snapshot" mechanism for saving the explorative results of a reasoning process is developed, which aids collaboration and the publication of gained insight and knowledge embedded as dynamic visualizations in blogs or web pages with associative metadata, or "storytelling".

  • 49.
    Ho, Quan
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Åström, Tobias
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Jern, Mikael
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Geovisual Analytics for Self-Organizing Network Data2009In: Proceedings of IEEE Symposium on Visual Analytics Science and Technology, 2009 (VAST 2009) / [ed] John Stasko, Jarke J. van Wijk, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331 USA: IEEE Service Center, 2009, p. 43-50Conference paper (Refereed)
    Abstract [en]

    Cellular radio networks are continually growing in both node count and complexity. It therefore becomes more difficult to manage the networks, and necessary to use time- and cost-effective automatic algorithms to organize the networks' neighbor cell relations. There have been a number of attempts to develop such automatic algorithms. Network operators, however, may not trust them, because they need an understanding of their behavior, reliability and performance, which is not easily gained. This paper presents a novel web-enabled geovisual analytics approach to exploring and understanding self-organizing network data related to cells and neighbor cell relations. A demonstrator and case study are presented, developed in close collaboration with the Swedish telecom company Ericsson and based on large multivariate, time-varying and geospatial data provided by the company. The tool allows operators to follow, interact with and analyze the evolution of a self-organizing network, and to better understand how an automatic algorithm configures locally unique physical cell identities and organizes the network's neighbor cell relations. The geovisual analytics tool is tested with a self-organizing network operated by the Automatic Neighbor Relations (ANR) algorithm. The demonstrator has been tested with positive results by a group of domain experts from Ericsson and will be tested in production.

  • 50.
    Ho, Quan
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Åström, Tobias
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Jern, Mikael
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Moe, Johan
    Wireless Access Networks, Ericsson Research, Ericsson AB, Sweden.
    Gunnarsson, Fredrik
    Wireless Access Networks, Ericsson Research, Ericsson AB, Sweden.
    Kallin, Harald
    Wireless Access Networks, Ericsson Research, Ericsson AB, Sweden.
    Visualization of Self-Organizing Networks Operated by the ANR Algorithm2009In: 2009 IEEE-RIVF International Conference on Computing and Communication Technologies: Research, Innovation and Vision for the Future / [ed] Tru Cao, Ralf-Detlef Kutsche, Akim Demaille, Piscataway, NJ, USA: IEEE, 2009, p. 312-319Conference paper (Refereed)
    Abstract [en]

    Cellular radio networks are continually growing in both node count and complexity. It therefore becomes more and more difficult to manage the networks, and necessary to use time- and cost-effective automatic computer algorithms to organize the network's neighbor cell relations. Ericsson has developed such an algorithm, called Automatic Neighbor Relations (ANR), which solves part of this problem by automatically creating and updating neighbor cell relation (NCR) lists based on measured network data. Network operators need an understanding of the algorithm and of its reliability and performance, which is not easily gained. This paper presents a tool that visualizes the performance of ANR and gives the user the possibility to control it via policies. The tool allows operators to follow the evolution of the network and to find problems occurring over time; in addition, it supports finding potential problems that may occur in the future. The tool was evaluated by a group of relevant domain users, and the results from the evaluation were highly positive.
