Search for publications in DiVA (liu.se)
1 - 50 of 2896
  • 1.
    Lundqvist, Tobias
    Linköping University, Department of Electrical Engineering, Computer Vision.
    3D mapping with iPhone (2011). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Today, 3D models of cities are created from aerial images using a camera rig. Images, together with sensor data from the flights, are stored for further processing when building 3D models. However, there is a market demand for a more mobile solution of satisfactory quality. If the camera position can be calculated for each image, there is an existing algorithm available for the creation of 3D models.

    This master thesis project aims to investigate whether the iPhone 4 offers good enough image and sensor data quality from which 3D models can be created. Calculations of movements and rotations from sensor data form the foundation of the image processing and should refine the camera position estimates.

    The 3D models are built from image processing only, since the sensor data cannot be used due to poor accuracy. Because of that, the scaling of the 3D models is unknown, and a measurement of the real objects is needed to make scaling possible. Compared to a test algorithm that calculates 3D models from images only, already available in the SBD's system, the quality of the 3D model in this master thesis project is almost the same or, in some respects, even better when judged by the human eye.

  • 2.
    Schlaug, Frida
    Linköping University, Department of Electrical Engineering, Information Coding.
    3D Modeling in Augmented Reality (2011). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    This project aims to make 3D modeling easy through the use of augmented reality. Black and white markers are used to augment the virtual objects. Detection of these is done with help from ARToolKit, developed at University of Washington.

    The model is represented by voxels, and visualised through the marching cubes algorithm. Two physical tools are available to edit the model; one for adding and one for removing volume. Thus the application is similar to sculpting or drawing in 3D.

    The resulting application is both easy to use and cheap, in that it does not require expensive equipment.
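    As a rough illustration of the add/remove sculpting step described above (not code from the thesis; the grid size, the spherical tool shape and the apply_tool helper are assumptions made here), a voxel edit could look like this:

```python
import numpy as np

# Hypothetical dense boolean voxel grid; True = occupied volume.
GRID = 64
voxels = np.zeros((GRID, GRID, GRID), dtype=bool)

def apply_tool(volume, center, radius, add=True):
    """Add (or carve away) a spherical blob of volume at the tool position."""
    x, y, z = np.indices(volume.shape)
    inside = ((x - center[0]) ** 2 + (y - center[1]) ** 2 + (z - center[2]) ** 2) <= radius ** 2
    volume[inside] = add
    return volume

# One "add" stroke at the centre of the grid, then a smaller "remove" stroke.
apply_tool(voxels, center=(32, 32, 32), radius=10, add=True)
apply_tool(voxels, center=(38, 32, 32), radius=5, add=False)
print("occupied voxels:", int(voxels.sum()))
```

    The thesis renders the voxel model with the marching cubes algorithm; skimage.measure.marching_cubes is one readily available implementation for experimenting with the same kind of surface extraction.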


  • 3.
    Hultman, Martin
    et al.
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    Fredriksson, Ingemar
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering. Perimed AB, Järfälla-Stockholm, Sweden.
    Larsson, Marcus
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    Alvandpour, Atila
    Linköping University, Department of Electrical Engineering, Integrated Circuits and Systems. Linköping University, Faculty of Science & Engineering.
    Strömberg, Tomas
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    A 15.6 frames per second 1 megapixel Multiple Exposure Laser Speckle Contrast Imaging setup (2017). In: Journal of Biophotonics, ISSN 1864-063X, E-ISSN 1864-0648. Article in journal (Refereed).
    Abstract [en]

    A multiple exposure laser speckle contrast imaging (MELSCI) setup for visualizing blood perfusion was developed using a field programmable gate array (FPGA), connected to a 1000 frames per second (fps) 1-megapixel camera sensor. Multiple exposure time images at 1, 2, 4, 8, 16, 32 and 64 milliseconds were calculated by cumulative summation of 64 consecutive snapshot images. The local contrast was calculated for all exposure times using regions of 4 × 4 pixels. Averaging of multiple contrast images from the 64-millisecond acquisition was done to improve the signal-to-noise ratio. The results show that with an effective implementation of the algorithm on an FPGA, contrast images at all exposure times can be calculated in only 28 milliseconds. The algorithm was applied to data recorded during a 5 minutes finger occlusion. Expected contrast changes were found during occlusion and the following hyperemia in the occluded finger, while unprovoked fingers showed constant contrast during the experiment. The developed setup is capable of massive data processing on an FPGA that enables processing of MELSCI data in 15.6 fps (1000/64 milliseconds). It also leads to improved frame rates, enhanced image quality and enables the calculation of improved microcirculatory perfusion estimates compared to single exposure time systems.
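    The exposure-synthesis and contrast steps described above can be illustrated with a small NumPy sketch. This shows only the arithmetic idea, not the authors' FPGA implementation; the array sizes and the melsci_contrast name are assumptions made here.

```python
import numpy as np

def melsci_contrast(stack, exposures=(1, 2, 4, 8, 16, 32, 64), block=4):
    """Synthesise longer exposures by cumulative summation of 1 ms snapshots and
    compute local speckle contrast K = std/mean over block x block regions."""
    cumulative = np.cumsum(stack.astype(np.float64), axis=0)   # frame index i-1 -> i ms exposure
    contrasts = {}
    for t in exposures:
        img = cumulative[t - 1]
        h, w = img.shape
        tiles = img[:h - h % block, :w - w % block].reshape(h // block, block, w // block, block)
        contrasts[t] = tiles.std(axis=(1, 3)) / tiles.mean(axis=(1, 3))
    return contrasts

# 64 snapshot frames at 1 ms each (smaller than the 1-megapixel sensor, for brevity).
frames = np.random.poisson(50, size=(64, 256, 256))
K = melsci_contrast(frames)
print({t: round(float(k.mean()), 3) for t, k in K.items()})
```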

    The full text will be freely available from 2018-08-07 12:43
  • 4.
    Bergman, Niclas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Bayesian Approach to Terrain-Aided Navigation (1996). Report (Other academic).
    Abstract [en]

    The terrain-aided navigation problem is a highly nonlinear estimation problem with application to aircraft navigation and missile guidance. In this work the Bayesian approach is used to estimate the aircraft position. With a quantization of the state space an implementable algorithm is found. Problems with low excitation, rough terrain and parallel position hypotheses are handled in a reliable way. The algorithm is evaluated using simulations on real terrain databases.
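    For readers unfamiliar with the approach, a minimal point-mass (grid) version of such a Bayesian position filter can be sketched as follows; the terrain map, motion model and noise levels are invented for illustration and this is not the report's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # position quantised to an N x N grid
terrain = rng.normal(300.0, 50.0, (N, N))    # stand-in for the terrain database (metres)
prior = np.full((N, N), 1.0 / N**2)          # flat prior over the quantised positions

def predict(p, shift=(1, 0)):
    """Shift the probability mass by the known aircraft motion and blur it
    slightly to account for velocity uncertainty."""
    p = np.roll(p, shift, axis=(0, 1))
    p = (p + np.roll(p, 1, 0) + np.roll(p, -1, 0) + np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 5.0
    return p / p.sum()

def update(p, clearance, altitude, sigma=10.0):
    """Bayes update: likelihood of the measured ground clearance in each candidate cell."""
    lik = np.exp(-0.5 * ((clearance - (altitude - terrain)) / sigma) ** 2)
    post = p * lik
    return post / post.sum()

true_cell, altitude = (40, 60), 1000.0
z = altitude - terrain[true_cell] + rng.normal(0.0, 10.0)   # radar-altimeter style measurement
posterior = update(predict(prior), z, altitude)
print("MAP position estimate:", np.unravel_index(posterior.argmax(), posterior.shape))
```

    In practice the posterior only sharpens after several predict/update cycles, since many grid cells can share similar elevations; accumulating information over a measurement sequence is the essence of terrain-aided navigation.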

  • 5.
    Bergman, Niclas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Bayesian Approach to Terrain-Aided Navigation (1997). In: Proceedings of the 11th IFAC Symposium on System Identification, 1997, pp. 1531-1536. Conference paper (Refereed).
    Abstract [en]

    The terrain-aided navigation problem is a highly nonlinear estimation problem with application to aircraft navigation and missile guidance. In this work the Bayesian approach is used to estimate the aircraft position. With a quantization of the state space an implementable algorithm is found. Problems with low excitation, rough terrain and parallel position hypotheses are handled in a reliable way. The algorithm is evaluated using simulations on real terrain databases.

  • 6.
    Bergman, Niclas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Bayesian Approach to Terrain-Aided Navigation II (1997). Report (Other academic).
    Abstract [en]

    The terrain-aided navigation problem is a highly nonlinear estimation problem with application to aircraft navigation and missile guidance. In this work the Bayesian approach is used to estimate the aircraft position. With a quantization of the state space an implementable algorithm is found. Problems with low excitation, rough terrain and parallel position hypotheses are handled in a reliable way. The algorithm is evaluated using simulations on real terrain databases.

  • 7.
    Eklund, Anders
    et al.
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Science & Engineering. Linköping University, Faculty of Arts and Sciences.
    Lindqvist, Martin A
    Department of Biostatistics, Johns Hopkins University, Baltimore, USA.
    Villani, Mattias
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    A Bayesian Heteroscedastic GLM with Application to fMRI Data with Motion Spikes (2017). In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 155, pp. 354-369. Article in journal (Refereed).
    Abstract [en]

    We propose a voxel-wise general linear model with autoregressive noise and heteroscedastic noise innovations (GLMH) for analyzing functional magnetic resonance imaging (fMRI) data. The model is analyzed from a Bayesian perspective and has the benefit of automatically down-weighting time points close to motion spikes in a data-driven manner. We develop a highly efficient Markov Chain Monte Carlo (MCMC) algorithm that allows for Bayesian variable selection among the regressors to model both the mean (i.e., the design matrix) and variance. This makes it possible to include a broad range of explanatory variables in both the mean and variance (e.g., time trends, activation stimuli, head motion parameters and their temporal derivatives), and to compute the posterior probability of inclusion from the MCMC output. Variable selection is also applied to the lags in the autoregressive noise process, making it possible to infer the lag order from the data simultaneously with all other model parameters. We use both simulated data and real fMRI data from OpenfMRI to illustrate the importance of proper modeling of heteroscedasticity in fMRI data analysis. Our results show that the GLMH tends to detect more brain activity, compared to its homoscedastic counterpart, by allowing the variance to change over time depending on the degree of head motion.

    The full text will be freely available from 2018-05-01 10:46
  • 8. Rantanen, VV
    et al.
    Gyllenberg, M
    Koski, Timo
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Mathematical Statistics .
    Johnson, MS
    A Bayesian molecular interaction library (2003). In: Journal of Computer-Aided Molecular Design, ISSN 0920-654X, Vol. 17, no. 7, pp. 435-461. Article in journal (Refereed).
    Abstract [en]

    We describe a library of molecular fragments designed to model and predict non-bonded interactions between atoms. We apply the Bayesian approach, whereby prior knowledge and uncertainty of the mathematical model are incorporated into the estimated model and its parameters. The molecular interaction data are strengthened by narrowing the atom classification to 14 atom types, focusing on independent molecular contacts that lie within a short cutoff distance, and symmetrizing the interaction data for the molecular fragments. Furthermore, the location of atoms in contact with a molecular fragment are modeled by Gaussian mixture densities whose maximum a posteriori estimates are obtained by applying a version of the expectation-maximization algorithm that incorporates hyperparameters for the components of the Gaussian mixtures. A routine is introduced providing the hyperparameters and the initial values of the parameters of the Gaussian mixture densities. A model selection criterion, based on the concept of a 'minimum message length' is used to automatically select the optimal complexity of a mixture model and the most suitable orientation of a reference frame for a fragment in a coordinate system. The type of atom interacting with a molecular fragment is predicted by values of the posterior probability function and the accuracy of these predictions is evaluated by comparing the predicted atom type with the actual atom type seen in crystal structures. The fact that an atom will simultaneously interact with several molecular fragments forming a cohesive network of interactions is exploited by introducing two strategies that combine the predictions of atom types given by multiple fragments. The accuracy of these combined predictions is compared with those based on an individual fragment. Exhaustive validation analyses and qualitative examples ( e. g., the ligand-binding domain of glutamate receptors) demonstrate that these improvements lead to effective modeling and prediction of molecular interactions.

  • 9.
    Curescu, C.
    et al.
    Ericsson Research, Torshamnsgatan 23, Kista, 164 83 Stockholm, Sweden.
    Nadjm-Tehrani, Simin
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, RTSLAB - Real-Time Systems Laboratory.
    A bidding algorithm for optimized utility-based resource allocation in ad hoc networks (2008). In: IEEE Transactions on Mobile Computing, ISSN 1536-1233, Vol. 7, no. 12, pp. 1397-1414. Article in journal (Refereed).
    Abstract [en]

    This paper proposes a scheme for bandwidth allocation in wireless ad hoc networks. The quality-of-service (QoS) levels for each end-to-end flow are expressed using a resource-utility function, and our algorithms aim to maximize aggregated utility. The shared channel is modeled as a bandwidth resource defined by maximal cliques of mutual interfering links. We propose a novel resource allocation algorithm that employs an auction mechanism in which flows are bidding for resources. The bids depend both on the flow's utility function and the intrinsically derived shadow prices. We then combine the admission control scheme with a utility-aware on-demand shortest path routing algorithm where shadow prices are used as a natural distance metric. As a baseline for evaluation, we show that the problem can be formulated as a linear programming (LP) problem. Thus, we can compare the performance of our distributed scheme to the centralized LP solution, registering results very close to the optimum. Next, we isolate the performance of price-based routing and show its advantages in hotspot scenarios, and also propose an asynchronous version that is more feasible for ad hoc environments. Further experimental evaluation compares our scheme with the state of the art derived from Kelly's utility maximization framework and shows that our approach exhibits superior performance for networks with increased mobility or less frequent allocations. © 2008 IEEE.
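    The centralized LP baseline mentioned in the abstract can be illustrated on a toy instance, assuming linear utilities and two maximal cliques; the numbers are invented and this is not the distributed bidding algorithm itself.

```python
import numpy as np
from scipy.optimize import linprog

utility = np.array([3.0, 2.0, 1.0])          # utility per unit bandwidth for flows A, B, C

# Clique capacity constraints: rows = maximal cliques of interfering links, columns = flows.
# Clique 1 is shared by flows A and B, clique 2 by flows B and C.
A_ub = np.array([[1.0, 1.0, 0.0],
                 [0.0, 1.0, 1.0]])
b_ub = np.array([10.0, 8.0])                 # bandwidth available in each clique

# linprog minimises, so negate the utilities; each flow also has a demand cap of 6 units.
res = linprog(-utility, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 6)] * 3, method="highs")
print("allocation:", res.x, "aggregated utility:", float(utility @ res.x))
```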

  • 10.
    Bredström, David
    et al.
    Linköping University, Department of Mathematics, Optimization . Linköping University, The Institute of Technology.
    Rönnqvist, Mikael
    Norwegian School of Economics and Business Administration (NHH), Bergen, Norway.
    A branch and price algorithm for the combined vehicle routing and scheduling problem with synchronization constraints (2007). In: Social Science Research Network, Vol. 7. Article in journal (Refereed).
    Abstract [en]

    In this paper we present a branch and price algorithm for the combined vehicle routing and scheduling problem with synchronization constraints. The synchronization constraints are used to model situations when two or more customers need simultaneous service. The synchronization constraints impose a temporal dependency between vehicles, and it follows that a classical decomposition of the vehicle routing and scheduling problem is not directly applicable. With our algorithm, we have solved 44 problems to optimality from the 60 problems used for numerical experiments. The algorithm performs time window branching, and the number of subproblem calls is kept low by adjustment of the columns service times.

  • 11.
    Edström, Krister
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Causal Propagation Algorithm for Switched Bond Graphs using Bicausality (1998). Report (Other academic).
  • 12.
    Edström, Krister
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Causal Propagation Algorithm for Switched Bond Graphs using Bicausality (1999). In: Proceedings of the 1999 International Conference on Bond Graph Modeling and Simulation, 1999, pp. 77-82. Conference paper (Refereed).
  • 13.
    Pham, Tuan D
    et al.
    Bioinformatics Applications Research Center; School of Information Technology, James Cook University, Townsville, QLD, Australia.
    Shim, Byung-Sub
    Bioinformatics Applications Research Center.
    A cepstral distortion measure for protein comparison and identification (2005). In: Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, 2005, Vol. 9, pp. 5609-5614. Conference paper (Refereed).
    Abstract [en]

    Protein sequence comparison is the most powerful tool for the identification of novel protein structure and function. This type of inference is commonly based on the similar sequence-similar structure-similar function paradigm, and derived by sequence similarity searching on databases of protein sequences. As entire genomes have been being determined at a rapid rate, computational methods for comparing protein sequences will be more essential for probing the complexity of molecular machines. In this paper we introduce a pattern-comparison algorithm, which is based on the mathematical concept of linear-predictive-coding based cepstral distortion measure, for comparison and identification of protein sequences. Experimental results on a real data set of functionally related and functionally non-related protein sequences have shown the effectiveness of the proposed approach on both accuracy and computational efficiency.

  • 14.
    Ribeiro, Luis
    et al.
    Uninova - CTS, Departamento de Engenharia Electrotécnica, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal.
    Barata, Jose
    Uninova - CTS, Departamento de Engenharia Electrotécnica, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal.
    Ferreira, Joao
    Uninova - CTS, Departamento de Engenharia Electrotécnica, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal.
    A co-evolving diagnostic algorithm for evolvable production systems: A case of learning (2010). In: 10th IFAC Workshop on Intelligent Manufacturing Systems (2010) / [ed] Paulo Leitao, Carlos Eduardo Pereira, José Barata, International Federation of Automatic Control, 2010, Vol. 10, pp. 126-131. Conference paper (Refereed).
    Abstract [en]

    With the systematic implementation and acceptance of IT on the shop floor, a wide range of production paradigms has emerged that, by exploiting these technological novelties, promise to revolutionize the way current plant floors operate and react to emerging opportunities and disturbances. With the increase of distributed and autonomous components that interact in the execution of processes, current diagnostic approaches will soon be insufficient. While current system dynamics are complex and to a certain extent unpredictable, the adoption of the next generation of approaches and technologies comes at the cost of yet increased complexity. The peer-to-peer nature of the interactions and the evolving structure of future systems require a co-evolving regulatory mechanism that, to a great extent, has to be implemented under the scope of monitoring and diagnosis. In this article a diagnostic algorithm that can co-evolve with the rest of the system, through learning and adaptation to the operational conditions, is presented and discussed.

  • 15.
    Björklund, Patrik
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Science and Technology.
    Värbrand, Peter
    Linköping University, The Institute of Technology. Linköping University, Department of Science and Technology.
    Yuan, Di
    Linköping University, The Institute of Technology. Linköping University, Department of Science and Technology, Communications and Transport Systems.
    A Column Generation Method for Spatial TDMA Scheduling in Ad Hoc Networks (2004). In: Ad hoc networks, ISSN 1570-8705, Vol. 2, no. 4, pp. 405-418. Article in journal (Refereed).
    Abstract [en]

    An ad hoc network can be set up by a number of units without the need of any permanent infrastructure. Two units establish a communication link if the channel quality is sufficiently high. As not all pairs of units can establish direct links, traffic between two units may have to be relayed through other units. This is known as the multi-hop functionality. In military command and control systems, ad hoc networks are also referred to as multi-hop radio networks. Spatial TDMA (STDMA) is a scheme for access control in ad hoc networks. STDMA improves TDMA by allowing simultaneous transmission of multiple units. In this paper, we study the optimization problem of STDMA scheduling, where the objective is to find minimum-length schedules. Previous work for this problem has focused on heuristics, whose performance is difficult to analyze when optimal solutions are not known. We develop novel mathematical programming formulations for this problem, and present a column generation solution method. Our numerical experiments show that the method generates a very tight bound to the optimal schedule length, and thereby enables optimal or near-optimal solutions. The column generation method can be used to provide benchmarks when evaluating STDMA scheduling algorithms. In particular, we use the bound obtained in the column generation method to evaluate a simple greedy algorithm that is suitable for distributed implementations.
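    As a reference for the last point, a greedy STDMA scheduler of the kind used as a baseline can be sketched as follows; the links and the conflict graph are invented, and the paper's column generation method instead provides a near-optimal bound on the schedule length.

```python
def greedy_stdma(links, conflicts):
    """Assign each link to the first time slot that contains no conflicting link.
    links: iterable of link ids; conflicts: set of frozensets {link1, link2}."""
    slots = []                                    # each slot is a set of mutually compatible links
    for link in links:
        for slot in slots:
            if all(frozenset((link, other)) not in conflicts for other in slot):
                slot.add(link)
                break
        else:
            slots.append({link})                  # no compatible slot exists: open a new one
    return slots

links = ["AB", "BC", "CD", "DE", "EA"]
conflicts = {frozenset(p) for p in [("AB", "BC"), ("BC", "CD"), ("CD", "DE"),
                                    ("DE", "EA"), ("EA", "AB")]}
print(greedy_stdma(links, conflicts))             # three slots for this five-link interference cycle
```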

  • 16.
    Xie, Yi
    et al.
    Wuhan Engineering Consulting Bureau, Wuhan, China.
    Takala, Josu
    Faculty of Technology, University of Vaasa, Vaasa, Finland.
    Liu, Yang
    Faculty of Technology, University of Vaasa, Vaasa, Finland.
    Chen, Yong
    Old Dominion University, Norfolk, USA.
    A combinatorial optimization model for enterprise patent transfer (2015). In: Journal of Special Topics in Information Technology and Management, ISSN 1385-951X, E-ISSN 1573-7667, Vol. 16, no. 4, pp. 327-337. Article in journal (Refereed).
    Abstract [en]

    Enterprises need patent transfer strategies to improve their technology management. This paper proposes a combinatorial optimization model that is based on intelligent computing to support enterprises’ decision making in developing patent transfer strategy. The model adopts the Black–Scholes Option Pricing Model and Arbitrage Pricing Theory to estimate a patent’s value. Based on the estimation, a hybrid genetic algorithm is applied that combines genetic algorithms and greedy strategy for the optimization purpose. Encode repairing and a single-point crossover are applied as well. To validate this proposed model, a case study is conducted. The results indicate that the proposed model is effective for achieving optimal solutions. The combinatorial optimization model can help enterprise promote their benefits from patent sale and support the decision making process when enterprises develop patent transfer strategies.

  • 17.
    Jung, Daniel
    et al.
    Linköping University, Department of Electrical Engineering, Vehicular Systems. Linköping University, Faculty of Science & Engineering.
    Yew Ng, Kok
    Monash University, Malaysia.
    Frisk, Erik
    Linköping University, Department of Electrical Engineering, Vehicular Systems. Linköping University, Faculty of Science & Engineering.
    Krysander, Mattias
    Linköping University, Department of Electrical Engineering, Computer Engineering. Linköping University, Faculty of Science & Engineering.
    A combined diagnosis system design using model-based and data-driven methods (2016). In: 2016 3rd Conference on Control and Fault-Tolerant Systems (SysTol), IEEE, 2016, pp. 177-182. Conference paper (Refereed).
    Abstract [en]

    A hybrid diagnosis system design is proposed that combines model-based and data-driven diagnosis methods for fault isolation. A set of residuals are used to detect if there is a fault in the system and a consistency-based fault isolation algorithm is used to compute all diagnosis candidates that can explain the triggered residuals. To improve fault isolation, diagnosis candidates are ranked by evaluating the residuals using a set of one-class support vector machines trained using data from different faults. The proposed diagnosis system design is evaluated using simulations of a model describing the air-flow in an internal combustion engine.
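    The ranking step can be illustrated with a small scikit-learn sketch, assuming synthetic residual data for two fault modes; the engine model and residual generators of the paper are not reproduced here.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Synthetic residual vectors recorded under two known fault modes.
training = {
    "fault_1": rng.normal([2.0, 0.0], 0.3, size=(100, 2)),
    "fault_2": rng.normal([0.0, 2.0], 0.3, size=(100, 2)),
}
# One one-class SVM per fault mode, trained only on data from that fault.
models = {name: OneClassSVM(nu=0.05, gamma="scale").fit(X) for name, X in training.items()}

new_residual = np.array([[1.8, 0.2]])             # residual vector from the monitored system
scores = {name: float(m.decision_function(new_residual)[0]) for name, m in models.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print("diagnosis candidates, most plausible first:", ranking)
```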

  • 18.
    Jung, Daniel
    et al.
    Linköping University, Department of Electrical Engineering, Vehicular Systems.
    Ng, Kok Yew
    Linköping University, Department of Electrical Engineering, Vehicular Systems.
    Frisk, Erik
    Linköping University, Department of Electrical Engineering, Vehicular Systems.
    Krysander, Mattias
    Linköping University, Department of Electrical Engineering, Vehicular Systems.
    A combined diagnosis system design using model-based and data-driven techniques (2016). Conference paper (Refereed).
    Abstract [en]

    A hybrid diagnosis system design is proposed that combines model-based and data-driven diagnosis methods for fault isolation. A set of residuals are used to detect if there is a fault in the system and a consistency-based fault isolation algorithm is used to compute all diagnosis candidates that can explain the triggered residuals. To improve fault isolation, diagnosis candidates are ranked by evaluating the residuals using a set of one-class support vector machines trained using data from different faults. The proposed diagnosis system design is evaluated using simulations of a model describing the air-flow in an internal combustion engine.

  • 19.
    McKelvey, Tomas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Combined State-Space Identification Algorithm Applied to Data From a Modal Analysis Experiment on a Separation System (1994). Report (Other academic).
    Abstract [en]

    This paper discusses identification of state-space models from impulse response or initial value experiments. Kung's geometrical realization algorithm is combined with classical nonlinear parametric optimization to improve the quality of the estimated state-space model. These ideas are applied on real data originating from a modal analysis experiment on a separation system. The results indicate that the parametric optimization step increases the model quality significantly compared with the initial model the realization algorithm provides.

  • 20.
    McKelvey, Tomas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Combined State-Space Identification Algorithm Applied to Data From a Modal Analysis Experiment on a Separation System (1994). In: Proceedings of the 33rd IEEE Conference on Decision and Control, 1994, Vol. 3, pp. 2286-2287. Conference paper (Refereed).
    Abstract [en]

    This paper discusses identification of state-space models from impulse response or initial value experiments. Kung's geometrical realization algorithm is combined with classical nonlinear parametric optimization to improve the quality of the estimated state-space model. These ideas are applied on real data originating from a modal analysis experiment on a separation system. The results indicate that the parametric optimization step increases the model quality significantly compared with the initial model the realization algorithm provides.

  • 21.
    Ljung, Lennart
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Sjöberg, Jonas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Comment on Leakage in Adaptive Algorithms (1992). Report (Other academic).
    Abstract [en]

    By "leakage" in adaptive control and adaptive signal processing algorithms is understood that a pull term towards a given parameter value is introduced. Leakage has been introduced both as a trick to be able to prove certain convergence results and as an ad hoc means for obtaining less drifting parameters. Leakage is the same as regularization, and we explain what benefits - from an estimation point of view - this gives.
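    A minimal numerical illustration of leakage, assuming a leaky LMS update and an invented signal model (not taken from the report): the factor (1 - mu*gamma) pulls the estimate towards zero at every step, which is exactly the effect of adding an L2 (regularization) penalty to the estimation criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -0.5])
mu, gamma = 0.01, 0.05                       # step size and leakage factor

w = np.zeros(2)
for _ in range(5000):
    x = rng.normal(size=2)                   # regressor
    y = true_w @ x + 0.1 * rng.normal()      # noisy measurement
    e = y - w @ x                            # prediction error
    w = (1.0 - mu * gamma) * w + mu * e * x  # leaky LMS: a pull towards zero every step

print("estimate with leakage:", w)           # slightly biased towards zero, but less prone to drift
```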

  • 22.
    Ljung, Lennart
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Sjöberg, Jonas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Comment on Leakage in Adaptive Algorithms (1991). Report (Other academic).
    Abstract [en]

    By "leakage" in adaptive control and adaptive signal processing algorithms is understood that a pull term towards a given parameter value is introduced. Leakage has been introduced both as a trick to be able to prove certain convergence results and as an ad hoc means for obtaining less drifting parameters. Leakage is the same as regularization, and we explain what benefits - from an estimation point of view - this gives.

  • 23.
    Ljung, Lennart
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Sjöberg, Jonas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Comment on Leakage in Adaptive Algorithms (1992). In: Proceedings of the 4th IFAC International Symposium on Adaptive Systems in Control and Signal Processing, 1992, pp. 377-382. Conference paper (Refereed).
    Abstract [en]

    By "leakage" in adaptive control and adaptive signal processing algorithms is understood that a pull term towards a given parameter value is introduced. Leakage has been introduced both as a trick to be able to prove certain convergence results and as an ad hoc means for obtaining less drifting parameters. Leakage is the same as regularization, and we explain what benefits - from an estimation point of view - this gives.

  • 24.
    Hildebrand, Cisilia
    et al.
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, The Institute of Technology.
    Hörtin, Stina
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, The Institute of Technology.
    A comparative study between Emme and Visum with respect to public transport assignment (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Macroscopic traffic simulation is widely used around the world to support traffic infrastructure development and strategic traffic planning. When studying a large traffic network, macroscopic traffic simulation can be used to model current and future traffic situations. The two most common software packages used for traffic simulation in Sweden today are Emme and Visum, developed by INRO and PTV, respectively.

    The aim of the thesis is to compare the software Emme and Visum with respect to the assignment of public transport, in other words how passengers choose their routes on the existing public transport lines. A complete software comparison would also cover run-time, analysis capabilities, multi-modality and the capacity to model various behavioural phenomena such as crowding and fares; this is not done in this comparison. It is of interest to study the differences between the two software algorithms, and why they might occur, because the Swedish Transport Administration uses Emme and the Traffic Administration in Stockholm uses Visum when planning public transport. The comparison includes the resulting volumes on transit lines, travel times, flows through specific nodes, numbers of boardings, auxiliary volumes and numbers of transits. The goal of this work is to answer the following question: What are the differences in modelling a public transport network in Emme and in Visum, given that the passengers only have information about the travel times and the line frequency, and why do these differences occur?

    In order to evaluate how the algorithms work in a larger network, Nacka municipality (in Stockholm) and the new metro route between Nacka Forum and Kungsträdgården have been used. This area and case were chosen because it is interesting to see what differences can occur between the programs when there is a major change in the traffic network.

    The network of Nacka, and parts of Stockholm City, has been derived from an existing road network of Sweden by "cutting out" the area of interest and removing all public transport lines outside the selected area. The OD matrix was also limited, and in order not to lose the correct flow of travellers, portal zones were used to collect and retain volumes.

    To find out why the differences occur the headway-based algorithms in each software were studied carefully. An example of a small and simple network (consisting of only a start and end node) has been used to demonstrate and show how the algorithms work and why volumes split differently on the existing transit lines in Emme and Visum. The limited network of Nacka shows how the different software may produce different results in a larger public transport network.

    The results show that there are differences between the program algorithms, but their significance varies depending on which output is studied and on the size of the network. The Visum algorithm results in more total boardings, i.e. more passengers have an optimal strategy that includes a transit. The algorithms are very similar in the two software programs, since they include more or less the same parts of the optimal strategy, but the parameters are weighted differently in Emme and Visum. For example, Visum first of all focuses on the shortest total travel time and then considers the other lines with respect to the maximum waiting time. Emme, however, first focuses on the shortest travel time and then considers the total travel time for other lines with half the waiting time instead of the maximum waiting time. As a result, fewer transit lines are attractive in Emme than in Visum. The thesis concludes that, by varying the public transport parameters in each software algorithm, one can obtain similar results, which implies that it is more important to choose good parameter values than to choose the "best" software when simulating a traffic network.
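    For orientation, the frequency-based (headway-based) split that both programs build on can be sketched as below. This is a textbook approximation with invented line data, assuming passengers board the first attractive line to arrive; it is not an exact reproduction of either Emme's or Visum's algorithm.

```python
# Candidate lines from an origin stop: departures per hour and in-vehicle time (minutes).
lines = {
    "bus 1":   {"freq": 6.0,  "ivt": 20.0},
    "bus 2":   {"freq": 4.0,  "ivt": 24.0},
    "metro A": {"freq": 12.0, "ivt": 15.0},
}

attractive = lines                                   # assume all three belong to the optimal strategy
total_freq = sum(line["freq"] for line in attractive.values())
expected_wait = 60.0 / (2.0 * total_freq)            # half the combined headway, in minutes
shares = {name: line["freq"] / total_freq for name, line in attractive.items()}
expected_ivt = sum(shares[name] * line["ivt"] for name, line in attractive.items())

print(f"expected wait {expected_wait:.1f} min, expected in-vehicle time {expected_ivt:.1f} min")
print("boarding shares:", {name: round(share, 2) for name, share in shares.items()})
```

    The differences discussed in the thesis come from how each program decides which lines are attractive, which is exactly the step assumed away in this sketch.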

  • 25.
    Gustavsson, Johan
    Linköping University, Department of Computer and Information Science, Software and Systems. Zenterio.
    A Comparative Study of Automated Test Explorers (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    With modern computer systems becoming more and more complicated, the importance of rigorous testing to ensure the quality of the product increases. This, however, means that the cost to perform tests also increases. In order to address this problem, a lot of research has been conducted during the last years to find a more automated way of testing software systems. In this thesis, different algorithms to automatically explore and test a system have been implemented and evaluated. In addition to this, a second set of algorithms have been implemented with the objective to isolate which interactions with the system were responsible for a failure. These algorithms were also evaluated and compared against each other. In the first evaluation two explorers, which I called DeBruijn and LStarExplorer, were considered superior to the other. The first used a DeBruijn sequence to brute force a solution while the second used the L*-algorithm to build an FSM over the system under test. This FSM could then be used to provide a more accurate description for when the failure occurred. The result from the second evaluation was two reducers which both tried to recreate a failure by first applying interactions performed just before the failure occurred. If this was not successful, they tried interactions further and further away, until the failure was triggered. In addition to this, the thesis contains descriptions of the framework used to run the different strategies.

  • 26.
    Sundström, Timmy
    Linköping University, Department of Electrical Engineering.
    A comparison of circuit implementations from a security perspective (2005). Independent thesis, Basic level (professional degree), 20 points / 30 hp. Student thesis.
    Abstract [en]

    In the late 90's, research showed that all circuit implementations were susceptible to power analysis and that this analysis could be used to extract secret information. Further research to counteract this new threat, by adding countermeasures or modifying the underlying algorithm, only seemed to slow down the attack.

    There was no objective analysis of how, and by what magnitude, different circuit implementations leak information.

    This thesis will present such an objective comparison on five different logic styles. The comparison results are based on simulations performed on transistor level and show that it is possible to implement circuits in a more secure and easier way than what has been previously suggested.

  • 27.
    Daneva (Mitradjieva), Maria
    et al.
    Linköping University, Department of Mathematics, Optimization . Linköping University, The Institute of Technology.
    Larsson, Torbjörn
    Linköping University, Department of Mathematics, Optimization . Linköping University, The Institute of Technology.
    Patriksson, Michael
    Mathematical Sciences, Chalmers University of Technology and Göteborg University, Gothenburg, Sweden.
    Rydergren, Clas
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    A Comparison of Feasible Direction Methods for the Stochastic Transportation Problem (2010). In: Computational optimization and applications, ISSN 0926-6003, E-ISSN 1573-2894, Vol. 46, no. 3, pp. 451-466. Article in journal (Refereed).
    Abstract [en]

    The feasible direction method of Frank and Wolfe has been claimed to be efficient for solving the stochastic transportation problem. While this is true for very moderate accuracy requirements, the diagonalized Newton and conjugate Frank–Wolfe algorithms, which we describe and evaluate, are otherwise substantially more efficient. Like the Frank–Wolfe algorithm, these two algorithms take advantage of the structure of the stochastic transportation problem. We also introduce a Frank–Wolfe type algorithm with multi-dimensional search; this search procedure exploits the Cartesian product structure of the problem. Numerical results for two classic test problem sets are given. The three new methods that are considered are shown to be superior to the Frank–Wolfe method, and also to an earlier suggested heuristic acceleration of the Frank–Wolfe method.
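    As a reference point, a bare-bones Frank-Wolfe iteration on a toy problem (a convex quadratic over the unit simplex, with the classic 2/(k+2) step) is sketched below; the problem data are invented and this is not the paper's stochastic transportation code.

```python
import numpy as np

# Minimise f(x) = 0.5 x'Qx + b'x over the unit simplex {x >= 0, sum(x) = 1}.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, -0.5])
grad = lambda x: Q @ x + b

x = np.array([0.5, 0.5])                     # feasible starting point
for k in range(200):
    g = grad(x)
    s = np.zeros_like(x)
    s[np.argmin(g)] = 1.0                    # linear subproblem: the best vertex of the simplex
    x = x + 2.0 / (k + 2) * (s - x)          # classic diminishing step towards that vertex

print("approximate minimiser on the simplex:", x)
```

    The slow tail convergence of this basic scheme is the kind of behaviour that motivates the more efficient variants evaluated in the paper.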

  • 28.
    Ledin, Staffan
    Linköping University, Department of Electrical Engineering, Integrated Circuits and Systems.
    A Comparison of Radix-2 Square Root Algorithms Using Digit Recurrence (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    When designing an electronic system, it might be desirable to implement a custom square root calculator unit to ensure quick calculations. The questions that arise regarding square root units are many. What algorithms are there? How are these algorithms implemented? What are the benefits and disadvantages of the different implementations? The goal of this thesis work is to try to answer these questions. In this paper, several different methods of calculating the radix-2 square root by digit recurrence are studied, designed and compared. The three main algorithms that are studied are the restoring square root algorithm, the non-restoring square root algorithm and the SRT (Sweeney, Robertson, Tocher) square root algorithm. They are all designed using the same technology and identical components where applicable. This is done in order to ensure that the comparisons give a fair assessment of the viability of the different algorithms. It is shown that the restoring and non-restoring square root algorithms perform similarly when using 65 nm technology, a 16-bit input, full data rate and a 1.2 V power supply. The restoring square root algorithm has a slight edge when the systems are not pipelined, while the non-restoring algorithm performs slightly better when the systems are fully pipelined. The SRT square root algorithm performs worse than the other two in all cases.
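    The digit-recurrence idea compared in the thesis can be illustrated in software with a restoring radix-2 integer square root: two operand bits are brought down per step and one result bit is decided by a trial subtraction. This sketch shows only the arithmetic, not the hardware implementations the thesis evaluates.

```python
def isqrt_restoring(n, bits=16):
    """Restoring radix-2 digit-recurrence square root of an integer n < 2**bits.
    Returns (floor(sqrt(n)), remainder)."""
    root, remainder = 0, 0
    for i in range(bits // 2 - 1, -1, -1):
        remainder = (remainder << 2) | ((n >> (2 * i)) & 0b11)  # bring down the next two bits
        trial = (root << 2) | 1                                 # trial value 4*root + 1
        if remainder >= trial:
            remainder -= trial
            root = (root << 1) | 1                              # accept result digit 1
        else:
            root = root << 1                                    # result digit 0 ("restore": keep remainder)
    return root, remainder

print(isqrt_restoring(144))   # (12, 0)
print(isqrt_restoring(10))    # (3, 1)
```

    The non-restoring and SRT variants studied in the thesis avoid the explicit restore step by allowing negative partial remainders and, for SRT, a redundant result digit set.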

  • 29.
    Reiss, Attila
    et al.
    German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.
    Hendeby, Gustaf
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Stricker, Didier
    German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.
    A Competitive Approach for Human Activity Recognition on Smartphones (2013). In: ESANN 2013, ESANN, 2013, pp. 455-460. Conference paper (Refereed).
    Abstract [en]

    This paper describes a competitive approach developed for an activity recognition challenge. The competition was defined on a new and publicly available dataset of human activities, recorded with smartphone sensors. This work investigates different feature sets for the activity recognition task of the competition. Moreover, the focus is also on the introduction of a new, confidence-based boosting algorithm called ConfAdaBoost.M1. Results show that the new classification method outperforms commonly used classifiers, such as decision trees or AdaBoost.M1.

  • 30.
    Kroha, Petr
    et al.
    Technical University Prague, FEL-CVUT, Czechoslovakia.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    A Compiler with Scheduling for a Specialized Synchronous Multiprocessor System (1990). In: Compiler Compilers / [ed] Dieter Hammer, Springer Berlin/Heidelberg, 1990, pp. 132-146. Conference paper (Refereed).
    Abstract [en]

    This paper presents an algorithm for scheduling parallel activities in a specialized synchronous multiprocessor system. The algorithm is being implemented as a part of a cross-compiler for an extended parallel Single Instruction Computer (SIC). A SIC machine may contain multiple arithmetic processors, each associated with certain addresses in the address space.

  • 31. Hilding, D.
    et al.
    Torstenfelt, Bo
    Linköping University, The Institute of Technology. Linköping University, Department of Management and Engineering, Solid Mechanics .
    Klarbring, Anders
    Linköping University, The Institute of Technology. Linköping University, Department of Management and Engineering, Mechanics .
    A computational methodology for shape optimization of structures in frictionless contact (2001). In: Computer Methods in Applied Mechanics and Engineering, ISSN 0045-7825, Vol. 190, no. 31, pp. 4043-4060. Article in journal (Refereed).
    Abstract [en]

    This paper presents a computational methodology for shape optimization of structures in frictionless contact, which provides a basis for developing user-friendly and efficient shape optimization software. For evaluation it has been implemented as a subsystem of a general finite element software. The overall design and main principles of operation of this software are outlined. The parts connected to shape optimization are described in more detail. The key building blocks are: analytic sensitivity analysis, an adaptive finite element method, an accurate contact solver, and a sequential convex programming optimization algorithm. Results for three model application examples are presented, in which the contact pressure and the effective stress are optimized. © 2001 Elsevier Science B.V. All rights reserved.

  • 32.
    Hessler, Martin
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Applied Mathematics.
    A computer study of some 1-error correcting perfect binary codes (2005). In: Australasian journal of combinatorics, ISSN 1034-4942, Vol. 33, pp. 217-229. Article in journal (Refereed).
    Abstract [en]

    A general algorithm for classifying 1-error correcting perfect binary codes of length n, rank n - log2(n+1) + 1 and kernel of dimension n - log2(n+1) - 2 is presented. The algorithm gives for n = 31.

  • 33. Sparring Björkstén, Karin
    et al.
    Ekberg, Stefan
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Medicine and Care, Radiation Physics. Östergötlands Läns Landsting, Centre of Surgery and Oncology, Department of Radiation Physics.
    Säfström, Pia
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Medicine and Care, Medical Radiology. Östergötlands Läns Landsting, Centre for Medical Imaging, Department of Radiology UHL.
    Dige, N
    Granerus, Göran
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Medicine and Care, Clinical Physiology. Östergötlands Läns Landsting, Heart Centre, Department of Clinical Physiology.
    A computerized human reference brain for rCBF/SPET technetium-99m exametazime (HMPAO) investigation of elderly (2004). In: Clinical Physiology and Functional Imaging, ISSN 1475-0961, Vol. 24, no. 4, pp. 196-204. Article in journal (Refereed).
    Abstract [en]

    Using the bull's eye approach, a reference brain from the single photon emission tomography (SPET) images of 10 subjects aged 62-81 years with excellent mental and physical health was constructed. SPET images were acquired twice, 1 week apart, using a single detector rotating gamma camera collecting 64 planar images over a 360° orbit. The centre of each transaxial slice was first defined with an automatic edge detecting algorithm applied to an anterior-posterior and a side profile of the brain. Each slice was divided into 40 sectors. Maximum counts/pixel in each sector was picked. The 40 maximum count values from one transaxial slice were allowed to form a horizontal row in a new parametric image on the x-axis and slice number from the vertex to the basal parts of the brain on the y-axis. This new image was scaled to a 64 × 16 pixel matrix by interpolation, which meant a normalization of all studies to the same size. The parametric image in each subject was scaled with regard to intensity by a factor calculated by a normalization procedure using the least squares analysis. Mean and SD for each pixel were calculated, thereby constructing a 'mean parametric image', and a 'SD parametric image'. These two images are meant to be used as the reference brain for evaluation of patient studies. This method can be used for objective measurements of diffuse brain changes and for pattern recognition in larger groups of patients. Statistical multifactorial analysis of parameters used for acquisition and data processing is possible. © 2004 Blackwell Publishing Ltd.

  • 34.
    Gunnarsson, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Blom, Jonas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Gustafsson, Fredrik
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Concept of Power Control in Cellular Radio Systems (1999). Report (Other academic).
    Abstract [en]

    Due to the rapid expansion of the cellular radio systems market, and the need for wireless multimedia services, the available resources have to be utilized efficiently. A common strategy is to control the transmitter powers of the mobiles and base stations. However, when applying power control to real systems, a number of challenges are prevalent. The performance is limited by time delays, nonlinearities and the availability of measurements and adequate quality measures. In this paper we present a Power Regulator concept, which comprises an Unknown Input Observer, a Quality Mapper and a Power Control Algorithm. The applicability of the concept is exemplified using frequency hopping GSM, and simulations indicate benefits of employing the proposed concept.

  • 35.
    Gunnarsson, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Blom, Jonas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Gustafsson, Fredrik
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Concept of Power Control in Cellular Radio Systems (1999). In: Proceedings of the 14th World Congress, 1999. Conference paper (Refereed).
    Abstract [en]

    Due to the rapid expansion of the cellular radio systems market, and the need for wireless multimedia services, the available resources have to be utilized efficiently. A common strategy is to control the transmitter powers of the mobiles and base stations. However, when applying power control to real systems, a number of challenges are prevalent. The performance is limited by time delays, nonlinearities and the availability of measurements and adequate quality measures. In this paper we present a Power Regulator concept, which comprises an Unknown Input Observer, a Quality Mapper and a Power Control Algorithm. The applicability of the concept is exemplified using frequency hopping GSM, and simulations indicate benefits of employing the proposed concept.

  • 36.
    Daneva, Maria
    et al.
    Linköping University, Department of Mathematics, Optimization . Linköping University, The Institute of Technology.
    Lindberg, Per Olov
    Linköping University, Department of Mathematics, Optimization . Linköping University, The Institute of Technology.
    A Conjugate Direction Frank-Wolfe Method for Nonconvex Problems (2003). Report (Refereed).
    Abstract [en]

    In this paper we propose an algorithm for solving problems with nonconvex objective function and linear constraints. We extend the previously suggested Conjugate direction Frank–Wolfe algorithm to nonconvex problems. We apply our method to multi-class user equilibria under social marginal cost pricing. Results of numerical experiments on Sioux Falls and Winnipeg are reported.

  • 37.
    Petersson, Ulla
    et al.
    Linköping University, Department of Medical and Health Sciences, General Practice. Linköping University, Faculty of Health Sciences.
    Östgren, Carl Johan
    Linköping University, Department of Medical and Health Sciences, General Practice. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Local Health Care Services in West Östergötland, Primary Health Care in Motala.
    Brudin, Lars
    Linköping University, Department of Medical and Health Sciences, Clinical Physiology. Linköping University, Faculty of Health Sciences.
    Nilsson, Peter
    Department of Clinical Sciences, Lund University, University Hospital, Malmö , Sweden.
    A consultation-based method is equal to SCORE and an extensive laboratory-based method in predicting risk of future cardiovascular disease (2009). In: European Journal of Cardiovascular Prevention & Rehabilitation, ISSN 1741-8267, E-ISSN 1741-8275, Vol. 16, no. 5, pp. 536-540. Article in journal (Refereed).
    Abstract [en]

    BACKGROUND: As cardiovascular disease (CVD) is one of the most common causes of mortality worldwide, much interest has been focused on reliable methods to predict cardiovascular risk.

    DESIGN: A cross-sectional, population-based screening study with 17-year follow-up in Southern Sweden.

    METHODS: We compared a non-laboratory, consultation-based risk assessment method comprising age, sex, present smoking, prevalent diabetes or hypertension at baseline, blood pressure (systolic ≥140 or diastolic ≥90), waist/height ratio and family history of CVD to Systemic COronary Risk Evaluation (SCORE) and a third model including several laboratory analyses, respectively, in predicting CVD risk. The study included clinical baseline data on 689 participants aged 40-59 years without CVD. Blood samples were analyzed for blood glucose, serum lipids, insulin, insulin-like growth factor-I, insulin-like growth factor binding protein-1, C-reactive protein, asymmetric dimethyl arginine and symmetric dimethyl arginine. During 17 years, the incidence of total CVD (first event) and death was registered.

    RESULTS: A non-laboratory-based risk assessment model, including variables easily obtained during one consultation visit to a general practitioner, predicted cardiovascular events as accurately [hazard ratio (HR): 2.72; 95% confidence interval (CI): 2.18-3.39, P<0.001] as the established SCORE algorithm (HR: 2.73; 95% CI: 2.10-3.55, P<0.001), which requires laboratory testing. Furthermore, adding a combination of sophisticated laboratory measurements covering lipids, inflammation and endothelial dysfunction, did not confer any additional value to the prediction of CVD risk (HR: 2.72; 95% CI: 2.19-3.37, P<0.001). The c-statistics for the consultation model (0.794; 95% CI: 0.762-0.823) was not significantly different from SCORE (0.767; 95% CI: 0.733-0.798, P=0.12) or the extended model (0.806; 95% CI: 0.774-0.835, P=0.55).

    CONCLUSION: A risk algorithm based on non-laboratory data from a single primary care consultation predicted long-term cardiovascular risk as accurately as either SCORE or an elaborate laboratory-based method in a defined middle-aged population.

  • 38.
    Arkad, Jenny
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control.
    Andersson, Tomas
    Linköping University, Department of Electrical Engineering, Automatic Control.
    A Control Algorithm for an Ultrasonic Motor (2011). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This report is the result of a master thesis work where the goal was to develop a control system for a type of ultrasonic motor. Ultrasonic motors use ultrasonic vibrations from a piezoelectric material to produce a rotating motion. They are powered by two sinusoidal voltages, and their control signals generally are the voltages' amplitude, frequency and the phase difference between the two voltages. In this work the focus is on control using only amplitude and frequency. A feedback signal was provided by an encoder, giving an angular position. The behavior of the motors was investigated for various sets of control signals. From the collected data a linearized static model was derived for the motor speed. This derived model was used to create a two-part control system, with an inner control loop to manage the speed of the motors using a PI controller and an outer control loop to manage the position of the motors. A simple algorithm was used for the position control, and the result was a control system able to position the motors with 0.1 degree accuracy. The motors show potential for greater accuracy with position feedback, but the result in this work is limited by the encoder used in the experiments.
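    A sketch of the two-loop structure described above, with an outer position loop feeding a speed reference to an inner PI speed loop; the gains and the first-order motor model are invented for illustration (the thesis additionally maps the controller output to the drive voltages' amplitude and frequency).

```python
dt = 0.001                                   # 1 ms simulation step
kp_pos = 5.0                                 # outer position loop gain (P only)
kp_speed, ki_speed = 0.8, 20.0               # inner speed loop PI gains
tau = 0.05                                   # time constant of a crude first-order motor model

position, speed, integral = 0.0, 0.0, 0.0
target = 90.0                                # desired angle in degrees

for _ in range(5000):                        # 5 s of simulated time
    speed_ref = kp_pos * (target - position) # outer loop: position error -> speed reference
    err = speed_ref - speed
    integral += err * dt
    u = kp_speed * err + ki_speed * integral # inner loop: PI speed controller
    speed += dt * (u - speed) / tau          # motor model: tau * dspeed/dt = u - speed
    position += speed * dt

print(f"final position: {position:.2f} degrees")
```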

  • 39.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kottravel, Sathish
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Visual Computing Research Group, Ulm University.
    A Crowdsourcing System for Integrated and Reproducible Evaluation in Scientific Visualization2016In: 2016 IEEE Pacific Visualization Symposium (PacificVis), IEEE Computer Society, 2016, 40-47 p.Conference paper (Refereed)
    Abstract [en]

    User evaluations have gained increasing importance in visualization research over the past years, as in many cases these evaluations are the only way to support the claims made by visualization researchers. Unfortunately, recent literature reviews show that, in comparison to algorithmic performance evaluations, the number of user evaluations is still very low. Reasons for this are the amount of time required to conduct such studies, together with the difficulties involved in participant recruitment and result reporting. While it has been shown that the quality of evaluation results and the simplified participant recruitment of crowdsourcing platforms make this technology a viable alternative to lab experiments when evaluating visualizations, the time needed to conduct and report such evaluations is still very high. In this paper, we propose a software system which integrates the execution, analysis and reporting of crowdsourced user evaluations directly into the scientific visualization development process. With the proposed system, researchers can conduct and analyze quantitative evaluations on a large scale through an evaluation-centric user interface with only a few mouse clicks. Thus, it becomes possible to perform iterative evaluations during algorithm design, which potentially leads to better results compared to the time-consuming user evaluations traditionally conducted at the end of the design process. Furthermore, the system is built around a centralized database, which supports easy reuse of old evaluation designs and the reproduction of old evaluations with new or additional stimuli, both of which are driving challenges in scientific visualization research. We describe the system's design and the considerations made during the design process, and demonstrate the system by conducting three user evaluations, all of which have been published before in the visualization literature.
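
    The reuse and reproduction features described above presuppose that evaluation designs, stimuli and responses are stored as separate records in the central database. The sketch below is a purely hypothetical illustration of such a schema, not the data model used in the paper.

    # Hypothetical, simplified data model for reusable crowdsourced evaluations.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Stimulus:
        stimulus_id: str
        image_uri: str                 # rendered output of the visualization algorithm

    @dataclass
    class EvaluationDesign:
        design_id: str
        question: str                  # e.g. a two-alternative forced-choice prompt
        stimuli: List[Stimulus] = field(default_factory=list)

    @dataclass
    class Response:
        design_id: str
        worker_id: str
        stimulus_id: str
        answer: str

    # Reproducing an old evaluation with new stimuli then amounts to copying a
    # design record and replacing its stimulus list while keeping the question.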

  • 40.
    Wallin, Ragnar
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Kao, Chung-Yao
    University of Melbourne, Australia.
    Hansson, Anders
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Cutting Plane Method for Solving KYP-SDPs2006Report (Other academic)
    Abstract [en]

    Semidefinite programs originating from the Kalman-Yakubovich-Popov lemma are convex optimization problems, and there exist polynomial-time algorithms that solve them. However, the number of variables is often very large, making the computational time extremely long. Algorithms more efficient than general-purpose solvers are thus needed. To this end, structure-exploiting algorithms based on the dual formulation have been proposed. In this paper a cutting plane algorithm is proposed. In a comparison with a general-purpose solver and a structure-exploiting solver, it is shown that the cutting plane based solver can handle optimization problems of much higher dimension.
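
    As a generic illustration of the cutting-plane idea, and not of the paper's KYP-specific solver, the sketch below minimizes a small convex function by repeatedly adding linear cuts and re-solving a linear program with scipy; the objective and its subgradient are made up for the example.

    # Kelley-style cutting-plane loop on a toy convex problem (illustrative only).
    import numpy as np
    from scipy.optimize import linprog

    def f(x):                       # hypothetical convex objective
        return float(np.sum(x ** 2) + np.abs(x[0] - 1.0))

    def subgrad(x):                 # a subgradient of f at x
        return 2.0 * x + np.array([np.sign(x[0] - 1.0), 0.0])

    n, x = 2, np.array([3.0, -2.0])
    cuts_A, cuts_b = [], []
    for _ in range(30):
        g = subgrad(x)
        cuts_A.append(np.append(g, -1.0))          # cut: g@x - t <= g@x_k - f(x_k)
        cuts_b.append(float(g @ x - f(x)))
        res = linprog(c=np.append(np.zeros(n), 1.0),
                      A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                      bounds=[(-10, 10)] * n + [(-1e6, 1e6)])
        x = res.x[:n]                              # minimizer of the current model
    print("approximate minimizer:", x)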

  • 41.
    Wallin, Ragnar
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Kao, Chung-Yao
    University of Melbourne, Australia.
    Hansson, Anders
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Cutting Plane Method for Solving KYP-SDPs2008In: Automatica, ISSN 0005-1098, Vol. 44, no 2, 418-429 p.Article in journal (Refereed)
    Abstract [en]

    Semidefinite programs originating from the Kalman-Yakubovich-Popov lemma are convex optimization problems, and there exist polynomial-time algorithms that solve them. However, the number of variables is often very large, making the computational time extremely long. Algorithms more efficient than general-purpose solvers are thus needed. To this end, structure-exploiting algorithms based on the dual formulation have been proposed. In this paper a cutting plane algorithm is proposed. In a comparison with a general-purpose solver and a structure-exploiting solver, it is shown that the cutting plane based solver can handle optimization problems of much higher dimension.

  • 42.
    Razavi, Amir Reza
    et al.
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Gill, Hans
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Åhlfeldt, Hans
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Shahsavar, Nosrat
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    A Data Pre-processing Method to Increase Efficiency and Accuracy in Data Mining2005In: 10th Conference on Artificial Intelligence in Medicine, AIME2005 - Aberdeen, UK, 2005, 434-443 p.Conference paper (Other academic)
    Abstract [en]

    In medicine, data mining methods such as Decision Tree Induction (DTI) can be trained to extract rules for predicting the outcomes of new patients. However, the incompleteness and high dimensionality of stored data are a problem. Canonical Correlation Analysis (CCA) can be used prior to DTI as a dimension reduction technique that preserves the character of the original data by omitting non-essential data. In this study, data from 3949 breast cancer patients were analysed. Raw data were cleaned by running a set of logical rules. Missing values were replaced using the Expectation Maximization algorithm. After dimension reduction with CCA, DTI was employed to analyse the resulting dataset. The validity of the predictive model was confirmed by ten-fold cross-validation, and the effect of pre-processing was analysed by applying DTI to the data without pre-processing. Replacing missing values and using CCA for data reduction dramatically reduced the size of the resulting tree and increased the accuracy of the prediction of breast cancer recurrence.
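
    A rough scikit-learn sketch of the pipeline outlined above follows (imputation, CCA-based dimension reduction, decision-tree induction and ten-fold cross-validation). The data are synthetic, sklearn's IterativeImputer stands in for the EM imputation used in the paper, and using the outcome as CCA's second view is only for illustration.

    # Illustrative preprocessing + decision-tree pipeline on synthetic data.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.cross_decomposition import CCA
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))            # stand-in for cleaned patient data
    X[rng.random(X.shape) < 0.05] = np.nan    # simulate missing values
    y = rng.integers(0, 2, size=200)          # stand-in for recurrence outcome

    X_imp = IterativeImputer(max_iter=10).fit_transform(X)     # EM-like imputation
    X_red, _ = CCA(n_components=1).fit_transform(X_imp, y.reshape(-1, 1))

    scores = cross_val_score(DecisionTreeClassifier(max_depth=4), X_red, y, cv=10)
    print("10-fold accuracy: %.2f" % scores.mean())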

  • 43.
    Falkeborn, Rikard
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Hansson, Anders
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Decomposition Algorithm for KYP-SDPs2012In: European Journal of Control, ISSN 0947-3580, E-ISSN 1435-5671, Vol. 18, no 3, 249-256 p.Article in journal (Refereed)
    Abstract [en]

    In this paper, a structure-exploiting algorithm is presented for semidefinite programs derived from the Kalman-Yakubovich-Popov lemma in which some of the constraints appear as complicating constraints. A decomposition algorithm is proposed, in which the structure of the problem can be utilized. In a numerical example, where a controller that minimizes the sum of the H-2-norm and the H-infinity-norm is designed, the algorithm is shown to be faster than SeDuMi and the special purpose solver KYPD.

  • 44.
    Falkeborn, Rikard
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Hansson, Anders
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Decomposition Algorithm for KYP-SDPs2009Report (Other academic)
    Abstract [en]

    In this paper, a structure-exploiting algorithm is presented for semidefinite programs derived from the Kalman-Yakubovich-Popov lemma in which some of the constraints appear as complicating constraints. A decomposition algorithm is proposed, in which the structure of the problem can be utilized. In a numerical example, where a controller that minimizes the sum of the H2-norm and the H-infinity-norm is designed, the algorithm is shown to be faster than SeDuMi and the special purpose solver KYPD.

  • 45.
    Falkeborn, Rikard
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Hansson, Anders
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Decomposition Algorithm for KYP-SDPs2009In: Proceedings of the 10th European Control Conference (ECC), 2009, 3202-3207 p.Conference paper (Refereed)
    Abstract [en]

    In this paper, a structure-exploiting algorithm is presented for semidefinite programs derived from the Kalman-Yakubovich-Popov lemma in which some of the constraints appear as complicating constraints. A decomposition algorithm is proposed, in which the structure of the problem can be utilized. In a numerical example, where a controller that minimizes the sum of the H2-norm and the H-infinity-norm is designed, the algorithm is shown to be faster than SeDuMi and the special purpose solver KYPD.

  • 46.
    Wallin, Ragnar
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Hansson, Anders
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Gillberg, Jonas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Decomposition Approach for Solving KYP-SDPs2004Report (Other academic)
    Abstract [en]

    Semidefinite programs originating from the Kalman-Yakubovich-Popov lemma are convex optimization problems, and there exist polynomial-time algorithms that solve them. However, the number of variables is often very large, making the computational time extremely long. Algorithms more efficient than general-purpose solvers are thus needed. In this paper a generalized Benders decomposition algorithm is applied to the problem to improve efficiency.
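
    The generalized Benders idea can be sketched as a loop in which a master problem over the complicating variables alternates with a subproblem that, for fixed master variables, returns a value and a dual-based cut. The skeleton below only shows that structure; the solve_master and solve_subproblem callables are hypothetical placeholders, not a KYP-SDP implementation.

    # Schematic generalized Benders loop (structure only; solvers supplied by caller).
    def benders(solve_master, solve_subproblem, tol=1e-6, max_iter=50):
        """solve_master(cuts) -> (y, lower bound); solve_subproblem(y) -> (value, cut)."""
        cuts = []                                  # cuts collected so far
        upper = float("inf")
        for _ in range(max_iter):
            y, lower = solve_master(cuts)          # optimize complicating variables
            value, cut = solve_subproblem(y)       # fix y, get dual information
            upper = min(upper, value)
            cuts.append(cut)
            if upper - lower <= tol:               # bounds have met: done
                break
        return y, upper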

  • 47.
    Wallin, Ragnar
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Kao, Chung-Yao
    University of Melbourne, Australia.
    Hansson, Anders
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Decomposition Approach for Solving KYP-SDPs2005In: Proceedings of the 16th IFAC World Congress, 2005, 1021-1021 p.Conference paper (Refereed)
    Abstract [en]

    Semidefinite programs originating from the Kalman-Yakubovich-Popov lemma are convex optimization problems, and there exist polynomial-time algorithms that solve them. However, the number of variables is often very large, making the computational time extremely long. Algorithms more efficient than general-purpose solvers are thus needed. In this paper a generalized Benders decomposition algorithm is applied to the problem to improve efficiency.

  • 48.
    Henriksson, Ola
    Linköping University, Department of Science and Technology.
    A Depth of Field Algorithm for Realtime 3D Graphics in OpenGL2002Independent thesis Basic level (professional degree)Student thesis
    Abstract [en]

    The company where this thesis work was carried out constructs VR applications for the medical environment. The hardware used is ordinary desktops with consumer-level graphics cards and haptic devices. In medicine, some operations require microscopes or cameras. In order to simulate these in a virtual reality environment for educational purposes, the effect of depth of field, or focus, has to be considered.

    A working algorithm that generates this optical phenomenon in real-time, stereo-rendered computer graphics is presented in this thesis. The algorithm is implemented in OpenGL and C++, to later be combined with a VR application simulating eye surgery that is built with OpenGL Optimizer.

    Several different approaches are described in this report. The requirement of real-time stereo rendering (~60 fps) means taking advantage of the graphics hardware to a great extent. In OpenGL this means using the extensions of a specific graphics chip for better performance; in this case the algorithm is implemented for a GeForce3 card.

    To increase the speed of the algorithm, much of the workload is moved from the CPU to the GPU (Graphics Processing Unit). By re-defining parts of the ordinary OpenGL pipeline via vertex programs, a distance-from-focus map can be stored in the alpha channel of the final image with little time loss.

    This can effectively be used to blend a previously blurred version of the scene with a normal render. Different techniques to quickly blur a rendered image are discussed; to keep the speed up, solutions that require moving data from the graphics card are not an option.
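
    The core blend step (a per-pixel distance-from-focus value used to mix a blurred render with the sharp render) can be illustrated in a few lines of NumPy; the Gaussian blur and the linear focus mapping below are assumptions made for the sketch, not the GeForce3 vertex-program implementation described in the thesis.

    # Illustrative CPU version of the blend: coc plays the role of the alpha channel.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def depth_of_field(sharp, depth, focus_depth, focus_range):
        """sharp: HxWx3 image in [0,1]; depth: HxW depth buffer (eye-space distance)."""
        blurred = gaussian_filter(sharp, sigma=(3, 3, 0))       # pre-blurred scene
        # Distance-from-focus map, clamped to [0, 1]
        coc = np.clip(np.abs(depth - focus_depth) / focus_range, 0.0, 1.0)
        return sharp * (1.0 - coc[..., None]) + blurred * coc[..., None]

    # Example with synthetic data
    img = np.random.rand(64, 64, 3)
    z = np.tile(np.linspace(1.0, 10.0, 64), (64, 1))
    out = depth_of_field(img, z, focus_depth=5.0, focus_range=3.0)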

  • 49.
    Johansson, Kenny
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Electronics System.
    Gustafsson, Oscar
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Electronics System.
    Wanhammar, Lars
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Electronics System.
    A detailed complexity model for multiple constant multiplication and an algorithm to minimize the complexity2005In: European Conf. Circuit Theory Design,2005, Cork: IEEE , 2005, III/465- p.Conference paper (Refereed)
    Abstract [en]

    Multiple constant multiplication (MCM) has been an active research area for the last decade. Most work so far has only considered the number of additions needed to realize a number of constant multiplications with the same input. In this work, we consider the number of full and half adder cells required to realize those additions, and a novel complexity measure is proposed. The proposed complexity measure can be utilized for all types of constant operations based on shifts, additions and subtractions. Based on the proposed complexity measure, a novel MCM algorithm is presented. Simulations show that, compared with previous algorithms, the proposed MCM algorithm yields a similar number of additions while the number of full adder cells is significantly reduced.
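
    The kind of cost reasoned about above can be illustrated by counting the adders/subtractors a shift-and-add realization of a single constant needs; a common back-of-the-envelope estimate is the number of non-zero canonic signed-digit (CSD) digits minus one. The sketch below computes that estimate; it is a simplified stand-in for, not a reproduction of, the paper's full and half adder cell model.

    # Rough adder count for one constant multiplier from its CSD representation.
    def csd_digits(c):
        """CSD digits (-1, 0, 1) of a positive integer, least significant first."""
        digits = []
        while c != 0:
            if c % 2 == 0:
                digits.append(0)
            else:
                d = 2 - (c % 4)          # 1 if c % 4 == 1, -1 if c % 4 == 3
                digits.append(d)
                c -= d
            c //= 2
        return digits

    def adder_estimate(c):
        return sum(1 for d in csd_digits(c) if d != 0) - 1

    print(adder_estimate(45))            # 45 = 64 - 16 - 4 + 1, i.e. 3 adders/subtractors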

  • 50.
    Glad, S. T.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Differential Algebra Representation of the RGA2000Report (Other academic)
    Abstract [en]

    Extensions of the RGA (relative gain array) technique to nonlinear systems are considered. The steady-state properties are given by an array of nonlinear functions. It is shown that the corresponding dynamic description can be calculated using a reduction algorithm from differential algebra.
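
    For reference, the classical steady-state RGA of a square, non-singular gain matrix G is the element-wise product of G with the transpose of its inverse; the report's differential-algebraic extension goes beyond this linear case, which is all the sketch below computes.

    # Classical (linear, steady-state) RGA only; illustrative 2x2 gain matrix.
    import numpy as np

    def rga(G):
        return G * np.linalg.inv(G).T    # element-wise product with inverse-transpose

    G = np.array([[0.878, -0.864],
                  [1.082, -1.096]])
    print(rga(G))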
