liu.se: Search for publications in DiVA
3001 - 3018 of 3018
  • 3001.
    Ågerfalk, Pär
    et al.
    Linköpings universitet, Filosofiska fakulteten. Linköpings universitet, Institutionen för datavetenskap, VITS - Laboratoriet för verksamhetsinriktad systemutveckling.
    Goldkuhl, Göran
    Linköpings universitet, Filosofiska fakulteten. Linköpings universitet, Institutionen för datavetenskap, VITS - Laboratoriet för verksamhetsinriktad systemutveckling.
    Fitzgerald, Brian
    Bannon, Liam
    Reflecting on action in language, organisations and information systems (2006). In: European Journal of Information Systems, ISSN 0960-085X, E-ISSN 1476-9344, Vol. 15, no. 1, pp. 4-8. Journal article (Other academic)
  • 3002.
    Åkesson, Daniel
    Linköpings universitet, Institutionen för datavetenskap, PELAB - Laboratoriet för programmeringsomgivningar. Linköpings universitet, Tekniska högskolan.
    An LLVM Back-end for REPLICA: Code Generation for a Multi-core VLIW Processor with Chaining (2012). Independent thesis, Advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis
    Abstract [en]

    REPLICA is a PRAM-NUMA hybrid architecture, with support for instruction-level parallelism as a VLIW architecture. REPLICA can also chain instructions so that the output from an earlier instruction can be used as input to a later instruction in the same execution step.

    There are plans in the REPLICA project to develop a new C-based programming language, compilers and libraries to speed up development of parallel programs. We have developed an LLVM back-end as a part of the REPLICA project that can be used to generate code for the REPLICA architecture. We have also created a simple optimization algorithm to make better use of REPLICA's support for instruction-level parallelism. Some changes to Clang, LLVM's front-end for C/C++/Objective-C, were also necessary so that we could use assembler inlining in our REPLICA programs.

    Clang is used to compile C code to LLVM's internal representation, and LLVM with our REPLICA back-end transforms that internal representation into MBTAC assembler.

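The instruction-level parallelism discussed in this abstract can be illustrated with a toy scheduler. The sketch below is not the thesis's algorithm: it greedily packs instructions into VLIW-style execution steps, starting a new step whenever an instruction reads a value written in the current step (REPLICA's chaining would let some such pairs share a step). The instruction encoding is an invented simplification.

```python
# Toy VLIW bundler (illustrative only, not the REPLICA back-end):
# each instruction is (dest_register, tuple_of_source_registers).
def bundle(instrs):
    bundles = []            # list of execution steps
    current, written = [], set()
    for dest, srcs in instrs:
        # If an operand was produced in the current step, the
        # instruction must wait for the next step (no chaining here).
        if any(s in written for s in srcs):
            bundles.append(current)
            current, written = [], set()
        current.append((dest, srcs))
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

# Two independent loads can share a step; the add and the use of
# its result each need their own step.
program = [("r1", ()), ("r2", ()), ("r3", ("r1", "r2")), ("r4", ("r3",))]
```

With chaining, the third instruction could in principle join the first step; exploiting such opportunities is what the optimization algorithm mentioned in the abstract targets.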
  • 3003.
    Ålind, Markus
    et al.
    Linköpings universitet.
    Eriksson, Mattias
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för datavetenskap, PELAB - Laboratoriet för programmeringsomgivningar.
    Kessler, Christoph
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för datavetenskap, PELAB - Laboratoriet för programmeringsomgivningar.
    BlockLib: A Skeleton Library for Cell Broadband Engine (2008). In: Proceedings - International Conference on Software Engineering, New York, USA: ACM, 2008, pp. 7-14. Conference paper (Peer-reviewed)
    Abstract [en]

    Cell Broadband Engine is a heterogeneous multicore processor for high-performance computing and gaming. Its architecture allows for an impressive peak performance but, at the same time, makes it very hard to write efficient code. The need to simultaneously exploit SIMD instructions, coordinate parallel execution of the slave processors, overlap DMA memory traffic with computation, keep data properly aligned in memory, and explicitly manage the very small on-chip memory buffers of the slave processors, leads to very complex code. In this work, we adopt the skeleton programming approach to abstract from much of the complexity of Cell programming while maintaining high performance. The abstraction is achieved through a library of parallel generic building blocks, called BlockLib. Macro-based generative programming is used to reduce the overhead of genericity in skeleton functions and control code size expansion. We demonstrate the library usage with a parallel ODE solver application. Our experimental results show that BlockLib code achieves performance close to hand-written code and even outperforms the native IBM BLAS library in cases where several slave processors are used.

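The skeleton approach described above (generic building blocks that own the parallel coordination, with user code supplying only per-element functions) can be sketched in a few lines. This is an illustration of the concept, not BlockLib's C API: the names are invented and a thread pool stands in for the Cell's slave processors.

```python
# Illustrative skeletons (invented names, not BlockLib's interface).
from functools import reduce
from multiprocessing.dummy import Pool  # thread pool stands in for SPEs

def skeleton_map(f, xs, workers=4):
    """Apply f to every element; the skeleton owns the parallelism."""
    with Pool(workers) as pool:
        return pool.map(f, xs)

def skeleton_reduce(op, xs, workers=4):
    """Reduce chunks in parallel, then combine the partial results."""
    chunk = max(1, len(xs) // workers)
    parts = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with Pool(workers) as pool:
        partials = pool.map(lambda p: reduce(op, p), parts)
    return reduce(op, partials)
```

User code never touches the pool, mirroring how BlockLib hides DMA traffic, alignment and slave-processor coordination behind its building blocks.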
  • 3004.
    Åsberg, Mikael
    Linköpings universitet, Institutionen för datavetenskap.
    Jämförelse av Oracle och MySQL med fokus på användning i laborationer för universitetsutbildning [Comparison of Oracle and MySQL with a focus on use in laboratory exercises for university education] (2008). Independent thesis, Basic level (professional degree), 10 points / 15 HE credits. Student thesis
    Abstract [sv]

    The purpose of the work described in this report was to investigate whether the Oracle-based laboratory environment used at ADIT could be migrated to MySQL. Oracle is a complex system that is demanding to administer, something ADIT had handled with its own staff and hardware, which was not ideal. Combined with strong interest from students in using MySQL in ADIT's laboratory exercises, it was decided to investigate whether MySQL was now mature enough to take on the role Oracle had previously held. On this basis, the report goes through what needed to be done with the existing laboratory material. An introduction to the relational model and SQL, as well as explanations of the differences in features between Oracle and MySQL that mattered for the exercises, is also included. The migration turned out to be straightforward, and the report concludes with a summary of our experiences.

  • 3005.
    Åsberg, Mikael
    et al.
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för datavetenskap, Databas och informationsteknik.
    Strömbäck, Lena
    Linköpings universitet, Institutionen för datavetenskap, Databas och informationsteknik. Linköpings universitet, Tekniska högskolan.
    Bioinformatics: From Disparate Web Services to Semantics and Interoperability (2010). In: International Journal of Advances in Software, ISSN 1942-2628, Vol. 3, no. 3-4, pp. 396-406. Journal article (Peer-reviewed)
    Abstract [en]

    In the field of bioinformatics, there exist a large number of web service providers and many competing standards for how data should be represented and interfaced. However, these web services are often hard to use for a non-programmer and it can be especially hard to understand how different services can be used together to create scientific workflows. In this paper we have performed a literature study to identify problems involved in developing interoperable web services for the bioinformatics community and steps taken by other projects to address them. We have also conducted a case study by developing our own bioinformatic web service to further investigate these problems. Based on our case study we have identified a number of design issues important to consider when designing web services. The paper is concluded by discussing current approaches aimed at making web services easier to use and by presenting our own proposal of an easy-to-use solution for integrating information from web services.

  • 3006.
    Åslin, Fredrik
    Linköpings universitet, Institutionen för datavetenskap.
    Evaluation of Hierarchical Temporal Memory in algorithmic trading (2010). Independent thesis, Advanced level (professional degree), 10 points / 15 HE credits. Student thesis
    Abstract [en]

    This thesis looks into how one could use Hierarchical Temporal Memory (HTM) networks to generate models that could be used as trading algorithms. The thesis begins with a brief introduction to algorithmic trading and commonly used concepts in the development of trading algorithms. It then proceeds to explain what an HTM is and how it works. To explore whether an HTM could be used to generate models usable as trading algorithms, the thesis conducts a series of experiments. The goal of the experiments is to iteratively optimize the settings for an HTM and try to generate a model that, when used as a trading algorithm, would have more profitable trades than losing trades. The setup of the experiments is to train an HTM to predict whether it is a good time to buy some shares in a security and hold them for a fixed time before selling them again. A fair number of the models generated during the experiments were profitable on data they had never seen before; the author therefore concludes that it is possible to train an HTM so that it can be used as a profitable trading algorithm.

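The experimental setup described above (buy on a model's signal, hold for a fixed time, sell, then count profitable versus losing trades) can be separated from the HTM itself. Below is a minimal, hypothetical evaluation harness; any predictor that emits buy signals could be plugged in, and all names are invented.

```python
# Illustrative fixed-holding-period evaluation (not the author's code).
def evaluate_signals(prices, signals, hold=3):
    """Count (profitable, losing) trades: each buy signal opens a
    position that is sold exactly `hold` steps later."""
    profitable = losing = 0
    for t, buy in enumerate(signals):
        if buy and t + hold < len(prices):
            pnl = prices[t + hold] - prices[t]
            if pnl > 0:
                profitable += 1
            elif pnl < 0:
                losing += 1
    return profitable, losing
```

A model in the thesis's sense "wins" when the first count exceeds the second over unseen data.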
  • 3007.
    Öberg, Tomas
    Linköpings universitet, Institutionen för datavetenskap.
    Design av databassystem för testresultat från Tor-systemet [Design of a database system for test results from the Tor system] (2004). Independent thesis, Basic level (professional degree). Student thesis
    Abstract [en]

    This master’s thesis was performed at PartnerTech AB in Åtvidaberg. It addresses the problem of managing test results obtained from testing electronics manufactured by PartnerTech. PartnerTech has developed a test system, called Tor, which performs tests on manufactured boards and stores the test results in files. The Tor system consists of both a hardware and a software part, where the software runs on an ordinary PC with MS DOS/Windows 2000. The changes to the existing Tor system implied by this work are minimal.

    This work focuses on a way of storing the produced test files in a database. A data model has been developed, implemented, and evaluated together with a system that imports test files into the database and a graphical user interface that allows a user to easily search and browse the stored test results. It is also possible to print test reports from the Tor system. For the implementation, Microsoft SQL Server 2000 was chosen as the database server and an XML-based data format was chosen for importing and exporting data to and from the database. Two alternative graphical user interface applications were developed and compared: one server-based on Microsoft IIS and one client-based in Microsoft Access. For advanced data manipulation, certain parts of the system were developed in Microsoft Excel.

  • 3008.
    Öberg, Viktor
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska högskolan.
    Middleware med Google Web Toolkit [Middleware with Google Web Toolkit] (2012). Independent thesis, Basic level (university diploma), 10.5 points / 16 HE credits. Student thesis
    Abstract [sv]

    This thesis project was carried out in cooperation with the company Systemagic AB. Systemagic is a technology company whose expertise lies in software development for IPTV technology. This includes, among other things, development of middleware for digital receivers, also called set-top boxes.

    Middleware is the software and hardware infrastructure that connects the different parts of an IPTV system. It is a distributed operating system that resides both on the operator's servers and in the end user's set-top box. As customers today place ever higher demands on functionality and dynamics, developing a modern middleware is very resource-intensive. Systemagic believes that a major contributing cause of the resource-intensive development process can be linked to the use of the scripting language JavaScript, and is therefore interested in alternative approaches.

    Google Web Toolkit (GWT) is a Java framework that can be used to develop interactive web applications quickly and conveniently. This is made possible by doing all development in Java. The end product after compilation is standards-compliant HTML and JavaScript, entirely independent of Java.

    The aim of this thesis project was to investigate the possibility of using Java and GWT to simplify and potentially speed up the development of a middleware. The goal was to determine whether GWT can be used as-is to build the part of a middleware that resides in the end user's box, a so-called portal, or whether the framework must be adapted for the most basic functions of a box to be implemented. The report describes the research questions, the approach, the problems and difficulties that arose, the solutions used, and an analysis and discussion of the results.

  • 3009.
    Öh, Rickard
    Linköpings universitet, Institutionen för datavetenskap.
    Analysis and implementation of remote support for ESAB’s welding systems: using WeldPoint and web services (2009). Independent thesis, Basic level (university diploma), 10 points / 15 HE credits. Student thesis
    Abstract [en]

    This thesis was written on behalf of ESAB’s Research and Development department in Laxå, Sweden. One of ESAB’s product areas is the development of various welding systems.

    Today, if ESAB’s customers experience a problem with one of their welding systems, they call ESAB’s service center. If the problem seems to have been caused by software, or if it requires log files to be analyzed, ESAB needs a way to get this system information from the customer’s welding system to ESAB’s employees.

    One of the goals of this thesis project was to perform an analysis answering how the system information should be sent and stored, and which unit in the customer’s welding system should send it. Another goal was to implement the solution that the analysis presented.

    The analysis shows that WeldPoint in combination with a web service is the best way to send the system information from the customer’s welding system. WeldPoint is PC control and log software connected to the customer’s welding system. A web service provides a service interface enabling clients to interact with a web server. Clients communicate with the web service using HTTP, which means that clients can easily communicate across firewalls and other network obstacles.

    The thesis work resulted in three different applications written in C#.NET. The first application is a simple form called WeldPoint Remote Support (WRS). This form extracts customer information, welding system information and log files from the customer and the customer’s welding system. All this information is called a case. The case is received by ESAB using the second application, WeldPoint Web service (WWS). WWS stores the received case in a database. The third application is called WeldPoint Remote Support Center (WRSC). This application is used by ESAB employees to view the cases sent from the customer’s welding system.

    The above implementation has been tested and supports a robust and secure way to send and view the system information from the customer’s welding system. The conclusions showed that all goals and requirements set by ESAB were met.

  • 3010.
    Öhberg, Tomas
    Linköpings universitet, Institutionen för datavetenskap, Programvara och system.
    Auto-tuning Hybrid CPU-GPU Execution of Algorithmic Skeletons in SkePU (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis
    Abstract [en]

    The trend in computer architectures has for several years been heterogeneous systems consisting of a regular CPU and at least one additional, specialized processing unit, such as a GPU. The different characteristics of the processing units and the requirement of multiple tools and programming languages make programming such systems a challenging task. Although there exist tools for programming each processing unit, utilizing the full potential of a heterogeneous computer still requires specialized implementations involving multiple frameworks and hand-tuning of parameters. To fully exploit the performance of heterogeneous systems for a single computation, hybrid execution is needed, i.e. execution where the workload is distributed between multiple, heterogeneous processing units working simultaneously on the computation.

    This thesis presents the implementation of a new hybrid execution backend in the algorithmic skeleton framework SkePU. The skeleton framework already gives programmers a user-friendly interface to algorithmic templates, executable on different hardware using OpenMP, CUDA and OpenCL. With this extension it is now also possible to divide the computational work of the skeletons between multiple processing units, such as between a CPU and a GPU. The results show an improvement in execution time with the hybrid execution implementation for all skeletons in SkePU. It is also shown that the new implementation results in a lower and more predictable execution time compared to a dynamic scheduling approach based on an earlier implementation of hybrid execution in SkePU.

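The idea of hybrid execution, one computation split across heterogeneous devices by a tuned ratio, can be sketched as follows. This is not SkePU's C++ interface; the function names are invented and both "devices" are simulated sequentially. The balancing formula shows the quantity an auto-tuner searches for: the split at which both devices finish at the same time.

```python
# Illustrative hybrid partitioning (invented names, not SkePU's API).
def hybrid_map(f, xs, cpu_ratio):
    """Give the first cpu_ratio share of xs to the CPU backend and
    the rest to the GPU backend (both simulated here)."""
    split = int(len(xs) * cpu_ratio)
    cpu_part = [f(x) for x in xs[:split]]   # CPU share
    gpu_part = [f(x) for x in xs[split:]]   # GPU share
    return cpu_part + gpu_part

def balanced_cpu_ratio(cpu_cost, gpu_cost):
    """Ratio at which both devices finish together, from
    split * cpu_cost == (n - split) * gpu_cost."""
    return gpu_cost / (cpu_cost + gpu_cost)
```

With per-element costs measured offline, a static split like this is what makes the hybrid backend's execution time more predictable than dynamic scheduling.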
  • 3011.
    Öhgren, Annika
    Linköpings universitet, Institutionen för datavetenskap, MDA - Human Computer Interfaces. Linköpings universitet, Tekniska högskolan.
    Towards an Ontology Development Methodology for Small and Medium-sized Enterprises (2009). Licentiate thesis, monograph (Other academic)
    Abstract [en]

    This thesis contributes to the research field of information logistics. Information logistics aims at improving information flow and at reducing information overload by providing the right information, in the right context, at the right time, at the right place, through the right channel.

    Ontologies are expected to contribute to reducing information overload and to solving information supply problems. An ontology is created to form a shared understanding for the stakeholders involved in the domain at hand. Using this semantic structure, one can build applications that use the ontology and support employees by providing only the information most important to them.

    During the last years, there has been an increasing number of cases in which industrial applications successfully use ontologies. Most of these cases, however, stem from large enterprises or IT-intensive small or medium-sized enterprises (SME). Current ontology development methodologies are not tailored to SME and their specific demands and preferences, such as a preference for mature technologies and for largely standardised solutions. The author proposes a new ontology development methodology that takes the specific characteristics of SME into consideration. This methodology was tested in an application case, which resulted in a number of concrete improvement ideas, but also in the conclusion that further specialisation of the methodology was needed, for example for a specific usage area or domain. In order to find out in which direction to specialise the methodology, a survey was performed among SME in the region of Jönköping.

    The main conclusion from the survey is that ontologies can be expected to be useful for SME mainly in the area of product configuration and variability modelling. Another area of interest is document management for supporting project work. The area of information search and retrieval can also be seen as a possible application field, as many of the respondents of the survey spend much time finding and saving information.

  • 3012.
    Öhlin, Petra
    Linköpings universitet, Institutionen för datavetenskap, Programvara och system.
    Prioritizing Tests with Spotify’s Test & Build Data using History-based, Modification-based & Machine Learning Approaches (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis
    Abstract [en]

    This thesis intends to determine the extent to which machine learning can be used to solve the regression test prioritization (RTP) problem. RTP is used to order tests with respect to probability of failure. This optimizes for a fast failure, which is desirable if a test suite takes a long time to run or uses a significant amount of computational resources. A common machine learning task is to predict probabilities; this makes RTP an interesting application of machine learning. A supervised learning method is investigated to train a model to predict probabilities of failure, given a test case and a code change. The features investigated are chosen based on previous research on history-based and modification-based RTP. The main motivation for looking at these research areas is that they resemble the data provided by Spotify. The result of the report shows that it is possible to improve test ordering with RTP using machine learning. Nevertheless, the best-performing approach is a much simpler history-based one. It looks at the history of test results: the more failures recorded for a test case over time, the higher its priority. Less is sometimes more.

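The winning history-based approach in the abstract above fits in a few lines: rank tests by how many failures they have recorded. The sketch assumes a simple data layout (a map from test name to its past results) that the abstract does not specify.

```python
# Illustrative history-based test prioritization (assumed data layout).
def prioritize(test_history):
    """test_history maps test name -> list of past results,
    where True means the run failed. More failures -> run earlier."""
    return sorted(test_history,
                  key=lambda name: sum(test_history[name]),
                  reverse=True)

history = {
    "test_login": [True, True, False],          # 2 failures
    "test_search": [False, False],              # 0 failures
    "test_payment": [True, False, True, True],  # 3 failures
}
```

A refinement closer to the thesis would weight recent failures more heavily or fall back to modification-based features for tests with no history.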
  • 3013.
    Öhrström, Fredrik
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system.
    Cluster Analysis with Meaning: Detecting Texts that Convey the Same Message (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis
    Abstract [en]

    Textual duplicates can be hard to detect as they differ in words but have similar semantic meaning. At Etteplan, a technical documentation company, they have many writers that accidentally re-write existing instructions explaining procedures. These "duplicates" clutter the database.

    This is not desired because it is duplicate work. The condition of the database will only deteriorate as the company expands. This thesis attempts to map where the problem is worst and to calculate how many duplicates there are.

    The corpus is small, but written in a controlled natural language called Simplified Technical English. The method uses document embeddings from doc2vec, clustering with HDBSCAN*, and validation using the Density-Based Clustering Validation index (DBCV) to chart the problems. A survey was sent out to try to determine a threshold value for when documents stop being duplicates, and using this value, a theoretical duplicate count was calculated.

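The duplicate-counting step can be illustrated without the embedding model: embed each document, then count pairs whose similarity clears the survey-derived threshold. The thesis uses doc2vec vectors and HDBSCAN* clustering; the sketch below substitutes a simple bag-of-words cosine similarity so it stays self-contained, and the threshold value is arbitrary.

```python
# Illustrative near-duplicate counting (bag-of-words stands in for
# the doc2vec embeddings used in the thesis).
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity of two texts under word-count vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def count_duplicate_pairs(docs, threshold=0.8):
    """Count document pairs at or above the similarity threshold."""
    return sum(1
               for i in range(len(docs))
               for j in range(i + 1, len(docs))
               if cosine(docs[i], docs[j]) >= threshold)

docs = [
    "remove the cover plate",
    "remove the cover plate carefully",
    "install the battery",
]
```

The first two instructions differ only by one word and land above the threshold; the third does not pair with either.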
  • 3014.
    Ölvingson, Christina
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska högskolan.
    On development of information systems with GIS functionality in public health informatics: a requirements engineering approach (2003). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Public health informatics has in recent years emerged as a field of its own from medical informatics. Since public health informatics is newly established and also new to public health professionals, previous research in the field is relatively scarce. Even if the overlap with medical informatics is large, there are differences between the two fields. Public health is, for example, more theoretical and more multi-professional than most clinical fields and the focus is on populations rather than individuals. These characteristics result in a complex setting for development of information systems. To our knowledge there exist few systems that support the collaborative process that constitutes the foundation of public health programs. Moreover, most applications that do support public health practitioners are small-scale, developed for a specific purpose and have not gained any wider recognition.

    The main objective of this thesis is to explore a novel approach to identifying the requirements for information system support with geographical information system (GIS) functionality in public health informatics. The work is based on four case studies that are used to provide the foundation for the development of an initial system design. In the first study, problems that public health practitioners experience in their daily work were explored. The outcome of the study was a set of descriptions of critical activities. In the second study, the use case map notation was exploited for modeling the process of public health programs. The study provides a contextual description of the refinement of data to information that could constitute a basis for both political and practical decisions in complex inter-organizational public health programs. In the third study, ethical conflicts that arose when sharing geographically referenced data in public health programs were analyzed to find out how these affect the design of information systems. The results pointed out issues that have to be considered when developing public health information systems. In the fourth study, the use of information systems with GIS functionality in WHO Safe Communities in Sweden and the need for improvements were explored. The study resulted in identification of particular needs concerning information system support among public health practitioners.

    From these studies, general knowledge about the issues public health practitioners experience in daily practice was gained and the requirements identified were used as a starting-point for the design of information systems for Motala WHO Safe Community.

    The main contributions of the thesis involve two areas: public health informatics and requirements engineering. First, a novel approach to system development in public health informatics is presented. Second, the application of use case maps as a tool for requirements engineering in complex settings such as public health programs is presented. Third, the introduction of requirements engineering in public health informatics has been exemplified. The contributions of the thesis should enhance the possibility to perform more adequate requirements engineering in the field of public health informatics. As a result, it should be possible to develop information systems that better meet the needs in the field of public health. Hence, it contributes to making the public health programs more effective, which in the long run will improve public health. 

    List of papers
    1. Using the critical incident technique to define a minimal data set for requirements elicitation in public health
    2002 (English). In: International Journal of Medical Informatics, ISSN 1386-5056, E-ISSN 1872-8243, Vol. 68, no. 1-3, pp. 165-174. Journal article (Peer-reviewed). Published
    Abstract [en]

    The introduction of computer-based information systems (ISs) in public health provides enhanced possibilities for service improvements and hence also for improvement of the population's health. Not least, new communication systems can help in the socialization and integration process needed between the different professions and geographical regions. Therefore, development of ISs that truly support public health practices requires that technical, cognitive, and social issues be taken into consideration. A notable problem is to capture the ‘voices’ of all potential users, i.e., the viewpoints of different public health practitioners. Failing to capture these voices will result in inefficient or even useless systems. The aim of this study is to develop a minimal data set for capturing users' voices on problems experienced by public health professionals in their daily work and opinions about how these problems can be solved. The issues of concern thus captured can be used both as the basis for formulating the requirements of ISs for public health professionals and to create an understanding of the use context. Further, the data can help in directing the design to the features most important for the users.

    Keywords
    Critical incident technique, Information systems design, Public health, Public health informatics, Requirements engineering
    HSV category
    Identifiers
    urn:nbn:se:liu:diva-46775 (URN)10.1016/S1386-5056(02)00074-6 (DOI)
    Available from: 2009-10-11 Created: 2009-10-11 Last updated: 2017-12-13
    2. Requirements Engineering for inter-organizational health information systems with functions for spatial analyses: modeling a WHO safe community applying Use Case Maps
    2002 (English). In: Methods of Information in Medicine, ISSN 0026-1270, Vol. 41, no. 4, pp. 299-304. Journal article (Peer-reviewed). Published
    Abstract [en]

    Objectives: To evaluate Use Case Maps (UCMs) as a technique for Requirements Engineering (RE) in the development of information systems with functions for spatial analyses in inter-organizational public health settings.

    Methods: In this study, Participatory Action Research (PAR) is used to explore the UCM notation for requirements elicitation and to gather the opinions of the users. The Delphi technique is used to reach consensus in the construction of UCMs.

    Results: The results show that UCMs can provide a visualization of the system's functionality and, in combination with PAR, provide a sound basis for gathering requirements in inter-organizational settings. UCMs were found to represent a suitable level for describing the organization and the dynamic flux of information, including spatial resolution, to all stakeholders. Moreover, by using PAR, the voices of the users and their tacit knowledge are captured. Further, UCMs are found useful in generating intuitive requirements through the creation of use cases.

    Conclusions: With UCMs and PAR it is possible to study the effects of design changes in the general information display and the spatial resolution in the same context. Requirements both on the information system in general and on the functions for spatial analyses can be elicited when identifying the different responsibilities and the demands on spatial resolution associated with the actions of each administrative unit. However, the development process of UCMs is not well documented and needs further investigation and formulation of guidelines.

    Keywords
    health informatics, public health, system development, requirements engineering (RE), case study methods
    HSV category
    Identifiers
    urn:nbn:se:liu:diva-48753 (URN)12425241 (PubMedID)
    Available from: 2009-10-11 Created: 2009-10-11 Last updated: 2017-12-12
    3. Ethical issues in public health informatics: implications for system design when sharing geographic information
    2002 (English). In: Journal of Biomedical Informatics, ISSN 1532-0464, E-ISSN 1532-0480, Vol. 35, no. 3, pp. 178-185. Journal article (Peer-reviewed). Published
    Abstract [en]

    Public health programs today constitute a multi-professional inter-organizational environment, where both health service and other organizations are involved. Developing information systems, including the IT security measures needed to suit this complex context, is a challenge. To ensure that all involved organizations work together towards a common goal, i.e., promotion of health, an intuitive strategy would be to share information freely in these programs. However, in practice it is seldom possible to realize this ideal scenario. One reason may be that ethical issues are often ignored in the system development process. This investigation uses case study methods to explore ethical obstacles originating in the shared use of geographic health information in public health programs and how this affects the design of information systems. Concerns involving confidentiality caused by geographically referenced health information and influences of professional and organizational codes are discussed. The experience presented shows that disregard of ethical issues can result in a prolonged development process for public health information systems. Finally, a theoretical model of design issues based on the case study results is presented.

    Keywords
    Confidentiality, Geographical information systems (GIS), Health informatics, IT security, Privacy, Public health, Requirements engineering (RE), Systems development
    HSV category
    Identifiers
    urn:nbn:se:liu:diva-46991 (URN)10.1016/S1532-0464(02)00527-0 (DOI)
    Available from: 2009-10-11 Created: 2009-10-11 Last updated: 2017-12-13
    4. Prerequisites to use information systems as support in Public Health Programs: an initial requirements elicitation and analysis for WHO Safe Communities
    (English) Manuscript (preprint) (Other academic)
    Abstract [en]

    The public health context constitutes a heterogeneous environment and presents a complex task for system developers. In this study, the requirements elicitation and analysis of prerequisites for using information systems (ISs) in public health programs are investigated. Special interest is also paid to geographical information system (GIS) functionality. The specific objective of this study is to explore the need for IS and GIS support that exists in WHO Safe Communities in Sweden. To elicit the requirements, a questionnaire based on the critical incident technique (CIT) was used. Using CIT makes it possible to focus development on the problems experienced by the users, and it covers both technical and social requirements. Thereafter, a voice of the customer table is used to transform the needs into technical requirements. The study results in recommendations for the development of ISs with GIS functionality for public health practitioners.

    HSV category
    Identifiers
    urn:nbn:se:liu:diva-86940 (URN)
    Available from: 2013-01-08 Created: 2013-01-08 Last updated: 2013-09-05
    5. Design of information systems for Public Health Programs: the case of Motala WHO safe community
    (English) Manuscript (preprint) (Other academic)
    Abstract [en]

    Objectives: In public health, only a fraction of the potential that information systems (ISs) and geographical information systems (GISs) provide has been used. Public-health programs are executed in complex environments and are characterized by being multi-professional and inter-organizational. Hence, there is a need for extensive studies of how ISs should be configured to truly support public health practitioners. The objective of this study is to explore how information technology, including GIS functionality, should be configured to support practitioners in community-based public health programs.

    Measurements: The critical incident technique, interviews, the voice of the customer table, and use case maps were used for data collection.

    Results: Communication and a clearinghouse with contact persons were identified as key features, and support is provided for creating both official and unofficial contact networks. The design has a module-based architecture, including an extendable, easy-to-use module with GIS functionality.

    Conclusions: To support both individuals and heterogeneous teams in complex public health programs, a module-based architecture is proposed. Hence, the system can be tailor-made to support individuals in their specific tasks and at their specific skill levels.

    Keywords
    Public health, Requirements engineering, Prototypes, Information systems development, Safe community, GIS
    HSV category
    Identifiers
    urn:nbn:se:liu:diva-86942 (URN)
    Available from: 2013-01-08 Created: 2013-01-08 Last updated: 2013-09-05
  • 3015.
    Ölvingson, Christina
    et al.
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska högskolan.
    Hallberg, Niklas
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska högskolan.
    Timpka, Toomas
    Linköpings universitet, Institutionen för hälsa och samhälle. Linköpings universitet, Hälsouniversitetet.
    Greenes, RA
    Adaptation of the critical incident technique to requirements engineering in public health 2001 In: Studies in Health Technology and Informatics, ISSN 0926-9630, E-ISSN 1879-8365, Vol. 84, no. 2, p. 1180-1184 Article in journal (Refereed)
    Abstract [en]

    The introduction of modern information systems in public health provides new possibilities for improvements in public health services and hence also in the population's health. However, development of information systems that truly support public health practices requires that technical, cognitive, and social issues be taken into consideration. In requirements engineering for public health, a notable problem is that of capturing all aspects of the future users' voices, i.e., the viewpoints of different public health practitioners. Failing to capture these voices will result in inefficient or even useless systems. The aim of this paper is to report a requirements-engineering instrument for describing problems in the daily work of public health professionals. The issues of concern thus captured can be used as the basis for formulating the requirements of information systems for public health professionals.

  • 3016.
    Örnberg, Dennis
    Linköpings universitet, Institutionen för datavetenskap, Databas och informationsteknik.
    Comparison and implementation of graph visualization algorithms using JavaFX 2016 Independent thesis Basic level (university diploma), 10.5 credits / 16 HE credits Student thesis
    Abstract [en]

    Graph drawing is an important area in computer science with many different application areas. For example, graphs can be used to visualize structures like networks and databases. When graphs are very large, however, it becomes difficult to draw them so that the user gets a good overview of the whole graph and all of its data. A number of different algorithms can be used to draw graphs, but they differ considerably. The goal of this report was to find an algorithm that produces graphs of satisfying quality in little time for the purpose of ontology engineering, and to implement it on a platform that visualizes the graph using JavaFX. It is supposed to work on a visualization table with a touch display. A list of criteria for both the algorithm and the application was made to ensure that the final result would be satisfactory. A comparison between four well-known graph visualization algorithms was made, and "GEM" was found to be the algorithm best suited for visualizing big graphs. The two platforms Gephi and Prefux were introduced and compared to each other, and the decision was made to implement the algorithm in Prefux since it has support for JavaFX. The algorithm was implemented and evaluated, and it was found to produce visually pleasing graphs within a reasonable time frame. A modified version of the algorithm, called GEM-2, was also introduced, implemented, and evaluated. With GEM-2, the user can pick a specific number of levels to be expanded at first; additional levels can then be expanded by hand. This greatly improves performance when there is no need to expand the whole graph at once; however, it also increases the number of edge crossings, which makes the graph less visually pleasing.
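    The GEM-2 idea of expanding only the first few levels before laying out the graph can be illustrated with a plain breadth-first traversal. The graph, node names, and function below are hypothetical illustrations, not code from the thesis:

    ```python
    from collections import deque

    def expand_levels(adj, root, max_depth):
        """Collect all nodes within max_depth hops of root (BFS),
        mimicking GEM-2-style partial expansion of a large graph."""
        seen = {root: 0}
        queue = deque([root])
        while queue:
            node = queue.popleft()
            if seen[node] == max_depth:
                continue  # frontier reached: do not expand further
            for nb in adj.get(node, []):
                if nb not in seen:
                    seen[nb] = seen[node] + 1
                    queue.append(nb)
        return set(seen)

    # Hypothetical ontology graph: a root class with two levels below it.
    adj = {"Thing": ["Agent", "Place"],
           "Agent": ["Person", "Organization"],
           "Place": ["City"]}
    print(expand_levels(adj, "Thing", 1))  # only the root and its direct children
    ```

    Only the returned subset would be passed to the layout algorithm; the remaining nodes stay collapsed until the user expands them by hand.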

  • 3017.
    Östergaard, Stefan
    Linköpings universitet, Institutionen för datavetenskap.
    Extending IMS specifications based on the charging needs of IPTV 2006 Independent thesis Basic level (professional degree), 20 credits / 30 HE credits Student thesis
    Abstract [en]

    With the standardization of IP Multimedia Subsystem (IMS), the telecommunications scene becomes more and more converged and in the future we will most likely access our services from all kinds of devices and link them together. One important future access method that has so far been left out of the standardization is television. There is a need for Internet Protocol Television (IPTV) to work together with IMS and this thesis focuses on one aspect of that convergence, namely charging.

    The problem explored in this thesis is whether there is an efficient way of charging for IPTV services while taking advantage of the IMS charging functionality, and this is done for two aspects of the problem. First, the possibility of an efficient Session Initiation Protocol (SIP) signaling schema is investigated, and then a suitable charging Application Programming Interface (API) for use when developing applications is investigated. The findings of these two investigations are then tested and improved during the implementation of a demo application.

    This thesis delivers specifications for a signaling schema that enables a Set-Top Box (STB) to pass charging information to an IMS network via INFO requests inside a special charging session. The schema is small and extensible, ensuring that it can be modified further on if necessary. The thesis also delivers a well-encapsulated and intuitive charging API for developers who want to charge for their services.
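    As a rough illustration of the general mechanism, the sketch below builds a minimal SIP INFO request carrying a charging payload inside an established session. The request URI, the Content-Type, and the body format here are invented for illustration; the actual schema the thesis specifies is not reproduced:

    ```python
    def build_info_request(call_id, cseq, charging_event):
        """Build a minimal, illustrative SIP INFO request whose body
        carries a charging event for the current session. The URI,
        Content-Type, and body syntax are hypothetical."""
        body = f"event={charging_event}"
        return (
            "INFO sip:charging@example.com SIP/2.0\r\n"
            f"Call-ID: {call_id}\r\n"
            f"CSeq: {cseq} INFO\r\n"
            "Content-Type: application/x-iptv-charging\r\n"
            f"Content-Length: {len(body)}\r\n"
            "\r\n"
            f"{body}"
        )

    msg = build_info_request("abc123@stb.example.com", 1, "movie-start")
    print(msg)
    ```

    Because INFO requests travel inside an existing dialog, the STB can reuse the session's Call-ID rather than setting up a new signaling exchange for each charging event.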

  • 3018.
    Östlund, Per
    Linköpings universitet, Institutionen för datavetenskap.
    Simulation of Modelica Models on the CUDA Architecture 2009 Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits Student thesis
    Abstract [en]

    Simulations are very important for many reasons, and finding ways of accelerating simulations is therefore of interest. In this thesis, the feasibility of automatically generating simulation code, for a limited set of Modelica models, that can be executed on NVIDIA's CUDA architecture is studied. The OpenModelica compiler, an open-source Modelica compiler, was for this purpose extended to generate CUDA code.

    This thesis presents an overview of the CUDA architecture, and looks at the problems that need to be solved to generate efficient simulation code for this architecture. Methods of finding parallelism in models that can be used on the highly parallel CUDA architecture are shown, and methods of efficiently using the available memory spaces on the architecture are also presented.

    This thesis shows that it is possible to generate CUDA simulation code for the set of Modelica models that were chosen. It also shows that for models with a large amount of parallelism it is possible to get significant speedups compared with simulation on a normal processor, and a speedup of 4.6 was reached for one of the models used in the thesis. Several suggestions on how the CUDA architecture can be used even more efficiently for Modelica simulations are also given.
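    The kind of model parallelism described above can be illustrated by grouping equations into dependency levels: every equation in a level depends only on earlier levels, so all equations in one level could be evaluated concurrently (e.g., one CUDA thread each). The scheduling function, dependency graph, and equation names below are hypothetical, not the thesis's actual method:

    ```python
    def schedule_levels(deps):
        """Group equations into levels by longest dependency path.
        deps maps each equation to the equations it depends on (acyclic).
        Equations within one level are mutually independent and can be
        evaluated in parallel."""
        memo = {}

        def level(eq):
            if eq not in memo:
                memo[eq] = 1 + max((level(d) for d in deps[eq]), default=0)
            return memo[eq]

        for eq in deps:
            level(eq)
        grouped = {}
        for eq, lv in memo.items():
            grouped.setdefault(lv, set()).add(eq)
        return [grouped[lv] for lv in sorted(grouped)]

    # Hypothetical dependency graph: eq1 and eq2 are independent;
    # eq3 and eq4 both wait on results from level 1.
    deps = {"eq1": [], "eq2": [], "eq3": ["eq1", "eq2"], "eq4": ["eq1"]}
    print(schedule_levels(deps))  # two levels: {eq1, eq2} then {eq3, eq4}
    ```

    The wider each level is, the more threads can run at once, which is why models with a large amount of such parallelism benefit most from a GPU.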
