liu.se: Search for publications in DiVA
1 - 50 of 2755
  • 1.
    Torstensson, Johan
    Linköping University, Department of Mathematics.
    Computation of Mileage Limits for Traveling Salesmen by Means of Optimization Techniques (2008). Independent thesis, Advanced level (degree of Magister), 20 points / 30 hp. Student thesis.
    Abstract [en]

    Many companies have traveling salesmen that market and sell their products. This results in much traveling by car due to the daily customer visits, which causes costs for the company, in the form of travel expense compensation, and environmental effects, in the form of carbon dioxide pollution. As many companies are certified according to environmental management systems, such as ISO 14001, environmental work becomes more and more important as environmental consciousness increases among companies, authorities and the public.

    The main task of this thesis is to compute reasonable limits on the mileage of the salesmen; these limits are based on specific conditions for each salesman's district. The objective is to implement a heuristic algorithm that optimizes the customer tours for an arbitrarily chosen month, which represents a "standard" month. The output of the algorithm, the computed distances, constitutes a mileage limit for the salesman. The algorithm consists of a constructive heuristic that builds an initial solution, which is modified if infeasible. This solution is then improved by a local search algorithm preceding a genetic algorithm, whose task is to improve the tours separately.

    This method for computing mileage limits for traveling salesmen generates good solutions in the form of realistic tours. The mileage limits could be improved if the input data were more accurate and adjusted to each district, but the suggested method does what it is supposed to do.
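    The construct-then-improve pipeline described above can be illustrated with a minimal sketch. Nearest-neighbour construction and 2-opt local search are generic stand-ins chosen for brevity (the abstract does not name the exact heuristics, and the genetic-algorithm stage is omitted); `dist` is assumed to be a full distance matrix.

    ```python
    def tour_length(tour, dist):
        # Total length of a closed tour over the customers.
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def nearest_neighbour(dist, start=0):
        # Constructive heuristic: always visit the closest unvisited customer.
        unvisited = set(range(len(dist))) - {start}
        tour = [start]
        while unvisited:
            nxt = min(unvisited, key=lambda j: dist[tour[-1]][j])
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    def two_opt(tour, dist):
        # Local search: reverse a segment whenever that shortens the tour.
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(cand, dist) < tour_length(tour, dist):
                        tour, improved = cand, True
        return tour
    ```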

  • 2.
    Warth, Benedikt
    Linköping University, The Department of Physics, Chemistry and Biology.
    Design and Application of Software Sensors in Batch and Fed-batch Cultivations during Recombinant Protein Expression in Escherichia coli (2008). Independent thesis, Advanced level (degree of Master), 20 points / 30 hp. Student thesis.
    Abstract [en]

    Software sensors are a potent tool for improving biotechnological real-time process monitoring and control. In the current project, algorithms for six partly novel software sensors were established and tested in a microbial reactor system. Eight batch and two fed-batch runs were carried out with a recombinant Escherichia coli to investigate the suitability of the different software sensor models in diverse cultivation stages. Special attention was paid to effects on the sensors after recombinant protein expression was initiated by the addition of an inducer molecule; one objective was to determine the influence of excessive recombinant protein expression on the software sensor signals.

    Two of the developed algorithms calculated the biomass on-line and furthermore estimated the specific growth rate by integrating the biomass changes over time. The first was based on a near-infrared probe providing on-line readings of the optical density; the other was based on the titration of ammonia as the only available nitrogen source. The other two sensors analyzed the specific consumption of glucose and the specific production of acetate, and were predicated on an in-line HPLC system.

    The results showed that all software sensors worked as expected and are rather powerful for estimating important state parameters in real time. In some stages, restrictions may occur due to different limitation effects in the models or the physiology of the culture. However, the results were very convincing and suggest the development of further and more advanced software sensor models in the future.

  • 3.
    Haraldsson, Erik
    Linköping University, Department of Mathematics.
    Combining unobtainable shortest path graphs for OSPF (2008). Independent thesis, Advanced level (degree of Master), 25 points / 37,5 hp. Student thesis.
    Abstract [en]

    The well-known Dijkstra's algorithm uses weights to determine the shortest path. The focus here is instead on the opposite problem: do there exist weights for a certain set of shortest paths? OSPF (Open Shortest Path First) is one of several possible protocols that determine how routers send data in a network like the Internet. Network operators would, however, like to have some control over how the traffic is routed, and being able to determine weights that lead to the desired shortest paths would help in this task.

    The first part of this thesis is a mathematical explanation of the problem, with many examples to make it easier to understand. The focus is on trying to combine several routing patterns into one, so that the result is fewer, but more fully spanned, routing patterns; it can, for example, be shown that no common set of weights can exist if two routing patterns cannot be combined.

    The second part is a program that can be used to make several tests and changes to a set of routing patterns. It has a polynomial implementation of a function that can combine routing patterns. The examples that I used to combine routing patterns showed that this increases the likelihood of finding, and significantly speeds up the computation of, a "valid cycle".
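    The inverse question above (do given weights realise a desired shortest path?) reduces to a comparison against Dijkstra distances: a path is a shortest path under the weights exactly when its length equals the shortest distance between its endpoints. A minimal sketch, assuming the graph is an adjacency dict of dicts; uniqueness of the shortest path is not checked.

    ```python
    import heapq

    def dijkstra(adj, src):
        # adj: {node: {neighbour: weight}}; shortest distances from src.
        dist = {src: 0}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    def weights_realise_path(adj, path):
        # True iff `path` is a shortest path under the current weights.
        cost = sum(adj[a][b] for a, b in zip(path, path[1:]))
        return cost == dijkstra(adj, path[0])[path[-1]]
    ```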

  • 4.
    Schmiterlöw, Maria
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Autonomous Path Following Using Convolutional Networks (2012). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Autonomous vehicles have many possible applications in fields such as rescue missions, exploration of unknown environments and unmanned vehicles. For such systems to navigate in a safe manner, high requirements on reliability and security must be fulfilled.

    This master's thesis explores the possibility of using a convolutional network on a robotic platform for autonomous path following. The only input used to predict the steering signal is a monochromatic image taken by a camera mounted on the robotic car, pointing in the steering direction. The convolutional network learns from demonstrations in a supervised manner.

    In this thesis three different preprocessing options are evaluated. The evaluation is based on the quadratic error and the number of correctly predicted classes. The results show that the convolutional network has no problem learning a correct behaviour and scores good results when evaluated on data similar to what it was trained on. The results also show that the preprocessing options are not enough to ensure that the system is environment independent.
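    As a rough illustration of the forward pass such a network computes, here is a toy convolution-plus-classifier in plain numpy. The kernels, the classifier weights and the discretisation of the steering signal into classes are hypothetical; in the thesis these would be learned from demonstrations, and the real network is of course deeper.

    ```python
    import numpy as np

    def conv2d(img, kernel):
        # Valid-mode 2D convolution (cross-correlation, as in most CNNs).
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    def predict_steering(img, kernels, W, b):
        # One conv layer + ReLU, global average pooling, then a linear
        # classifier over discretised steering classes.
        feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
        return int(np.argmax(W @ feats + b))
    ```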

  • 5.
    Hanson, Maryam
    Linköping University, Department of Electrical Engineering, Integrated Circuits and Systems.
    Study on Smart Dust Networks (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis work was done for the Department of Electronic Systems at The Institute of Technology at Linköping University (Linköpings tekniska högskola). The study's focus is to design and implement a protocol for smart dust networks that improves the energy consumption algorithm for this kind of network.

    Smart dust networks are a category of distributed sensor networks, and power consumption is one of the key concerns for this type of network. This work shows that by improving the algorithmic behaviour of power consumption in every network element (a so-called mote), a considerable amount of power can be saved for the whole network.

    The suggested algorithm is examined using Erlang for one mote object, and the whole idea has been put to the test for a small network using SystemC.

  • 6.
    Aevan, Nadjib Danial
    Linköping University, Department of Management and Engineering, Fluid and Mechatronic Systems.
    MDO Framework for Design of Human Powered Propellers using Multi-Objective Genetic Algorithm (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis showcases the challenges, downsides and advantages of building a Multi-Disciplinary Optimization (MDO) framework to automate the generation of an efficient propeller design built for lightly loaded operation, more specifically for human powered aircraft. Two years ago, a human powered aircraft project was initiated at Linköping University. With the help of several courses, various students performed conceptual design, calculated and finally manufactured a propeller by means of various materials and manufacturing techniques. The performance of the current propeller is used for benchmarking and comparing results obtained by the MDO process.

    The developed MDO framework is constructed as a modeFRONTIER project where several Computer Aided Engineering (CAE) software tools such as MATLAB, CATIA and XFOIL are connected to perform multiple consecutive optimization subprocesses. The user is presented with several design constraints such as blade quantity, required input power, segment-wise airfoil thickness, desired lift coefficient etc. Also, six global search optimization algorithms are investigated to determine the one that generates the most efficient result according to several set standards. The optimization process is thereafter initialized by identifying the most efficient chord distribution with the help of an initial blade cross-section previously used in other human powered propellers; the findings are thereafter used to determine the flow conditions at different propeller stations. Two different aerodynamically optimized shapes are generated with the help of consecutively performed subprocesses. The optimized propeller requires 7.5 W less input power to generate nearly equivalent thrust to the original propeller, with a total efficiency exceeding the 90 % mark (90.25 %). Moreover, the MDO framework includes an automation process to generate a CAD design of the optimized propeller. The generated CAD file shows an individual blade surface decrease of 12.5 % compared to the original design; the lightweight design and lower input power yield an overall propulsion system which is less tedious to operate.

  • 7.
    Hammarström, Emil
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, The Institute of Technology.
    Network optimisation and topology control of Free Space Optics (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In communication networks today, the number of users and the amount of traffic are constantly increasing. This results in a need to upgrade the networks to handle the demand. Free space optics (FSO) is a technique that is relatively cheap with high capacity compared to most systems today. On the other hand, FSO has some disadvantages in that the system is affected by, for instance, turbulence and weather. The aim of the project is to investigate the use of network optimization for designing an optimal network in terms of capacity and cost. Routing optimization is also covered, in terms of single-path and multipath routing. To mitigate the problem of turbulence affecting the system, network survivability is implemented with both proactive and reactive solutions.

    The method used is to implement the system in Matlab; the system is also tested to verify that it works as intended. The report covers related work as well as the theory behind FSO and the chosen optimization algorithms.

    The system uses a modified Bellman-Ford optimization as well as Kruskal's minimum spanning tree algorithm. K-link-connectivity is also implemented for the network survivability and the multipath algorithm.

    Results of the implementation show that network survivability improves the robustness of the system by changing paths for traffic affected by broken links. Multipath routing increases throughput and also reduces delay for the traffic.
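    Kruskal's algorithm, one of the optimization components named above, admits a compact sketch using a union-find structure; the modified Bellman-Ford and k-link-connectivity parts are not reproduced here.

    ```python
    def kruskal(n, edges):
        # edges: iterable of (weight, u, v) over nodes 0..n-1;
        # returns a minimum spanning tree as a list of (u, v, weight).
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        tree = []
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:              # edge joins two components: keep it
                parent[ru] = rv
                tree.append((u, v, w))
        return tree
    ```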

  • 8.
    Angerborn, Felix
    Linköping University, Department of Computer and Information Science, Human-Centered systems.
    Better text formatting for the mobile web with javascript (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    As people read more and longer texts on the web, the simple formatting options that exist in today's browsers produce worse results than necessary. On behalf of Opera Software in Linköping, a better algorithm has been implemented in Javascript with the purpose of delivering a visually better experience for the reader. The implementation is first and foremost for mobile devices, and therefore a large part of the thesis has been the evaluation and optimization of performance.
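    The improvement such an algorithm targets over the browser's greedy line filling can be sketched with the classic minimum-raggedness dynamic program; this Python version is an illustrative stand-in, not the thesis's Javascript implementation, and it assumes no word is wider than the line.

    ```python
    def break_lines(words, width):
        # Minimise the sum of squared end-of-line gaps, which spreads
        # slack evenly instead of dumping it all on the last lines.
        n = len(words)
        best = [0.0] + [float("inf")] * n   # best[i]: cost of words[:i]
        split = [0] * (n + 1)
        for i in range(1, n + 1):
            line_len = -1
            for j in range(i, 0, -1):       # words[j-1:i] on one line
                line_len += len(words[j - 1]) + 1
                if line_len > width:
                    break
                gap = 0 if i == n else (width - line_len) ** 2
                if best[j - 1] + gap < best[i]:
                    best[i], split[i] = best[j - 1] + gap, j - 1
        lines, i = [], n
        while i > 0:
            lines.append(" ".join(words[split[i]:i]))
            i = split[i]
        return lines[::-1]
    ```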

  • 9.
    Ledin, Staffan
    Linköping University, Department of Electrical Engineering, Integrated Circuits and Systems.
    A Comparison of Radix-2 Square Root Algorithms Using Digit Recurrence (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    When designing an electronic system, it might be desirable to implement a custom square root calculator unit to ensure quick calculations. The questions when it comes to square root units are many. What algorithms are there? How are these algorithms implemented? What are the benefits and disadvantages of the different implementations? The goal of this thesis work is to try to answer these questions. In this paper, several different methods of calculating the radix-2 square root by digit recurrence are studied, designed and compared. The three main algorithms studied are the restoring square root algorithm, the non-restoring square root algorithm and the SRT (Sweeney, Robertson, Tocher) square root algorithm. They are all designed using the same technology and identical components where applicable. This is done in order to ensure that the comparisons give a fair assessment of the viability of the different algorithms. It is shown that the restoring and non-restoring square root algorithms perform similarly when using 65 nm technology, a 16 bit input, full data rate and a 1.2 V power supply. The restoring square root algorithm has a slight edge when the systems are not pipelined, while the non-restoring algorithm performs slightly better when the systems are fully pipelined. The SRT square root algorithm performs worse than the other two in all cases.
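    For reference, the restoring radix-2 digit recurrence determines one result bit per iteration by a trial subtraction, undoing it when the trial fails; the non-restoring variant instead folds the failed subtraction into the next iteration. A bit-level sketch of the restoring algorithm:

    ```python
    def isqrt_restoring(x, bits=16):
        # Radix-2 digit recurrence over a 2*bits-bit radicand.
        root, rem = 0, 0
        for i in range(bits - 1, -1, -1):
            rem = (rem << 2) | ((x >> (2 * i)) & 0b11)  # bring down two bits
            trial = (root << 2) | 1                     # try next bit = 1
            if trial <= rem:
                rem -= trial
                root = (root << 1) | 1
            else:
                root = root << 1                        # restore: bit = 0
        return root, rem

    print(isqrt_restoring(121, bits=4))  # (11, 0): floor(sqrt(121)) = 11
    ```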

  • 10.
    Järemo Lawin, Felix
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Depth Data Processing and 3D Reconstruction Using the Kinect v2 (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The Kinect v2 is an RGB-D sensor manufactured as a gesture interaction tool for the entertainment console XBOX One. In this thesis we use it to perform 3D reconstruction and investigate its ability to measure depth. In order to sense both color and depth, the Kinect v2 has two cameras: one RGB camera and one infrared camera used to produce depth and near-infrared images. These cameras need to be calibrated if we want to use them for 3D reconstruction. We present a calibration procedure for simultaneously calibrating the cameras and extracting their relative pose. This enables us to construct colored meshes of the environment. Once the camera parameters of the infrared camera are known, the depth images can be used in the KinectFusion algorithm, which produces well-formed meshes of the environment by combining many depth frames taken from several camera poses.

    The Kinect v2 uses a time-of-flight technology where phase shifts are extracted from amplitude-modulated infrared light signals produced by an emitter. The extracted phase shifts are then converted to depth values. However, the extraction of phase shifts includes a phase unwrapping procedure, which is sensitive to noise and can result in large depth errors.

    By utilizing the ability to access the raw phase measurements from the device, we managed to modify the phase unwrapping procedure. The new procedure extracts several hypotheses for the unwrapped phase and uses spatial propagation to select amongst them. The proposed method has been compared with the available drivers in the open source library libfreenect2 and the Microsoft Kinect SDK v2. Our experiments show that the depth images of the two available drivers have similar quality, and our proposed method improves over libfreenect2. The calculations in the proposed method are more expensive than those in libfreenect2, but it still runs at 2.5× real time. However, contrary to libfreenect2, the proposed method lacks a filter that removes outliers from the depth images. This turned out to be an important feature when performing KinectFusion, and future work should thus focus on adding an outlier filter.
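    The hypothesis idea behind the modified unwrapping can be sketched for a two-frequency case: each modulation frequency yields a family of distance candidates, one per integer wrap count, and the combination in best agreement is selected. The Kinect v2 actually uses three frequencies, and the thesis selects hypotheses by spatial propagation rather than the per-pixel pick shown here.

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light [m/s]

    def unwrap_depth(phases, freqs, max_wraps=10):
        # Each frequency f with wrapped phase p gives candidates
        #   d = c * (p + 2*pi*n) / (4*pi*f)  for wrap counts n.
        cands = [
            [C * (p + 2 * np.pi * n) / (4 * np.pi * f) for n in range(max_wraps)]
            for p, f in zip(phases, freqs)
        ]
        best, best_err = None, np.inf
        for d0 in cands[0]:
            for d1 in cands[1]:
                if abs(d0 - d1) < best_err:     # hypotheses that agree win
                    best, best_err = (d0 + d1) / 2, abs(d0 - d1)
        return best
    ```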

  • 11.
    Gustavsson, Johan
    Linköping University, Department of Computer and Information Science, Software and Systems. Zenterio.
    A Comparative Study of Automated Test Explorers (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    With modern computer systems becoming more and more complicated, the importance of rigorous testing to ensure the quality of the product increases. This, however, means that the cost of performing tests also increases. In order to address this problem, a lot of research has been conducted during the last years to find more automated ways of testing software systems. In this thesis, different algorithms to automatically explore and test a system have been implemented and evaluated. In addition, a second set of algorithms has been implemented with the objective of isolating which interactions with the system were responsible for a failure. These algorithms were also evaluated and compared against each other. In the first evaluation two explorers, which I called DeBruijn and LStarExplorer, were considered superior to the others. The first used a De Bruijn sequence to brute-force a solution, while the second used the L*-algorithm to build an FSM over the system under test. This FSM could then be used to provide a more accurate description of when the failure occurred. The result of the second evaluation was two reducers, which both tried to recreate a failure by first applying interactions performed just before the failure occurred. If this was not successful, they tried interactions further and further away, until the failure was triggered. In addition, the thesis contains descriptions of the framework used to run the different strategies.
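    A De Bruijn sequence, as used by the DeBruijn explorer, packs every possible length-n interaction string over k inputs into one shortest cyclic input stream. The standard Lyndon-word (FKM) construction:

    ```python
    def de_bruijn(k, n):
        # Concatenating the Lyndon words over k symbols whose length
        # divides n yields a cyclic sequence containing every length-n
        # string exactly once.
        a = [0] * k * n
        seq = []

        def db(t, p):
            if t > n:
                if n % p == 0:
                    seq.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)

        db(1, 1)
        return seq

    print(de_bruijn(2, 3))  # [0, 0, 0, 1, 0, 1, 1, 1]
    ```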

  • 12.
    Johansson, Niklas
    Linköping University, Department of Physics, Chemistry and Biology. Linköping University, Faculty of Science & Engineering.
    Efficient Simulation of the Deutsch-Jozsa Algorithm (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    We provide a framework wherein one can simulate the Deutsch-Jozsa quantum algorithm on a regular computer within polynomial time, and with linear memory consumption. Under certain reasonable assumptions the simulation solves the problem with a bounded probability of error using only one function evaluation, which is comparable with the efficiency of the quantum algorithm. The provided framework lies within a slight extension of the toy model proposed by Robert W. Spekkens, Phys. Rev. A 75 (2007), and consists of transformations that are reminiscent of transformations in quantum mechanics.

  • 13.
    Bergström, Joakim
    et al.
    Linköping University, Department of Science and Technology, Physics and Electronics. Linköping University, The Institute of Technology.
    Nilsson-Sundén, Hampus
    Linköping University, Department of Science and Technology, Physics and Electronics. Linköping University, The Institute of Technology.
    Cost effective optimization of system safety and reliability (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A method able to analyze and optimize subsystems could be useful to reduce project cost, increase subsystem reliability, improve overall aircraft safety and reduce subsystem weight. The earlier in the design phase the optimization can be performed, the greater its yield. This master's thesis was formed in order to construct an automatic analysis method, implemented as a Matlab script, that evaluates the devices forming aircraft subsystems using a genetic algorithm. In addition to aircraft subsystems, the method constructed in this work is compatible with systems from various industries with minor modifications of the script.

  • 14.
    Nielsen, Emil
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time Wind Direction Filtering for Sailboat Race Tracking (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this paper, an algorithm is proposed that estimates the direction of the wind from the headings of the sailboats during fleet races. The algorithm is based on a 1-D spatial convolution and is named Convolution Based Direction Filtering (CBDF). The CBDF-algorithm is used in the TracTrac race client that broadcasts sailboat races in real time. The fact that the proposed algorithm is polynomial makes it suitable for use as a real-time application inside TracTrac, even for large fleets. More concretely, we show that the time complexity of the CBDF-algorithm is O(n²) in the worst case, where n > 0 is the number of boats in competition. It is also shown that in more realistic sailing scenarios, the CBDF-algorithm is in fact a linear algorithm.
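    The core trick in this kind of direction filtering is that headings must be averaged as unit vectors, not as raw angles (averaging 359° and 1° componentwise would give 180°). A minimal sketch of a 1-D convolution over boat headings; the actual CBDF kernel and the spatial ordering of the boats are defined in the paper.

    ```python
    import numpy as np

    def smooth_directions(theta, kernel):
        # Convolve the unit vectors of the angles, then take the angle
        # of the summed vector.
        x = np.convolve(np.cos(theta), kernel, mode="same")
        y = np.convolve(np.sin(theta), kernel, mode="same")
        return np.arctan2(y, x)

    headings = np.deg2rad([350, 355, 2, 8, 5])   # boats sailing upwind
    kernel = np.ones(3) / 3                      # simple box filter
    wind = np.rad2deg(smooth_directions(headings, kernel)) % 360
    ```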

  • 15.
    Gatu, Torsten
    Linköping University, Department of Science and Technology, Physics and Electronics. Linköping University, The Institute of Technology.
    Implementation of spectrum analyzer in Softube Console 1 (2015). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The purpose of this thesis was to implement an audio spectrum analyzer in Console 1, an audio mixing platform developed by Softube AB. The implementation needed to have good performance at low cost and minimal maintenance, while still integrating well with the Console 1 environment.

    The work consisted of finding a suitable FFT library, constructing an algorithm for visualization of the raw FFT data, and collecting and processing sound data while maintaining the real-time performance of the Console 1 environment.

    The result was a well-integrated spectrum analyzer with a minimal codebase that performs well enough for its application.
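    A minimal sketch of such an analyzer core: FFT magnitudes of a windowed buffer, averaged into logarithmically spaced display bands. The band count, the Hann window and the 20 Hz lower edge are illustrative choices, not Softube's.

    ```python
    import numpy as np

    def band_spectrum(samples, rate, bands=31):
        # Magnitude spectrum of one audio buffer, in dB per log band.
        mag = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        freqs = np.fft.rfftfreq(len(samples), 1.0 / rate)
        edges = np.geomspace(20.0, rate / 2.0, bands + 1)
        out = []
        for lo, hi in zip(edges, edges[1:]):
            band = mag[(freqs >= lo) & (freqs < hi)]
            # Empty bands (very narrow at low frequencies) show as silence.
            out.append(20 * np.log10(band.mean() + 1e-12) if band.size else -240.0)
        return out
    ```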

  • 16.
    Henriksson, Johan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Face detection for selective polygon reduction of humanoid meshes (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Automatic mesh optimization algorithms suffer from the problem that humans are not uniformly sensitive to changes on different parts of the body. When a mesh optimization algorithm measures the error caused by a triangle reduction, the error is strictly geometrical; an error of a certain magnitude on the thigh of a 3D model will be perceived by a human as less of an error than one of equal geometrical significance introduced on the face. The partial solution proposed in this paper consists of detecting the faces of the 3D assets to be optimized using conventional, existing 2D face detection algorithms, and then using this information to selectively and automatically preserve the faces of the 3D assets, leading to a smaller perceived error in the optimized model, albeit not necessarily a smaller geometrical error. This is done by generating a set of per-vertex weights that are used to scale the errors measured by the reduction algorithm, hence preserving areas with higher weights. The final optimized meshes produced by this method are found to be subjectively closer to the original 3D assets than their non-weighted counterparts, and if the input meshes conform to certain criteria the method is well suited for inclusion in a fully automatic mesh decimation pipeline.

  • 17.
    Valter, Andreas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Dynamic real-time scene voxelization and an application for large scale scenes (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This report describes a basic implementation of scene voxelization within EA's Frostbite engine. The algorithm supports dynamic scenes by voxelizing in real time on the Graphics Processing Unit. The voxel grid is stored inside a buffer with a binary representation using clip mapping and multiple levels of detail. An ambient occlusion algorithm is implemented to show the benefits of the structure. Results from running the application within the engine are presented, both as figures showing the resulting images and as timings for different parts of the algorithm. Several future improvements to make the algorithm more competitive are presented as well.

  • 18.
    Renström, Klara
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, Faculty of Science & Engineering.
    Automatic age estimation of children based on brain matter composition using quantitative MRI (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The development of a child can be monitored by studying changes in physical appearance or the development of capabilities, e.g. walking and talking. But is it possible to find a quantitative measure of brain development? The aim of this thesis work is to investigate that possibility using quantitative magnetic resonance imaging (qMRI) by answering the following questions:

    • Can brain development be determined using qMRI? If so, what properties of the brain can be used?
    • Can the age of a child be automatically detected with an algorithm? If so, how can this algorithm function? With what accuracy?

    Previous studies have shown that it is possible to detect properties of the brain that change with age, based on MRI images. These properties have, for example, been changes in T1 and T2 relaxation times, i.e. properties of the water signal behaviour that can be measured using multiple MR acquisitions. In the literature this was linked to a rapid myelination process that occurs after birth. Furthermore, the organization and growth of the brain are properties that can be measured and monitored.

    This thesis has investigated several different properties of the brain based on qMRI images, in order to identify those that have a strong correlation with age in the range 0-20 years. The properties that were found to have a high correlation were:

    • Position of the first histogram peak in T1-weighted qMRI images
    • Fraction of white matter in the brain
    • Mean pixel value of PD-weighted qMRI images
    • Volume of white matter in the brain

    Curves of the form f(x) = a·e^(−bx) + c are fitted to the data sets, and confidence intervals are calculated to frame the statistical uncertainty of the curve. The mean error in percent for each property is shown in the table below:

    Property            Mean error [%], 0-20 years    Mean error [%], 0-3 years
    Peak position       53.84                         98.17
    Fraction of WM      118.97                        71.67
    Mean pixel value    200.89                        126.28
    Volume of WM        241.72                        72.58

    The conclusions drawn from the presented results are that there are properties of the brain that correlate well with age, but the error is too large for making a valid prediction of age over the entire range of 0-20 years. When the age range is decreased to 0-3 years the mean error becomes smaller, but it is still too large. More data is needed to evaluate and improve this result.
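    The curve-fitting step can be reproduced with scipy; the data points below are invented stand-ins for a measured property (rising towards an asymptote, hence the negative a).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b, c):
        # The exponential saturation curve fitted in the thesis.
        return a * np.exp(-b * x) + c

    age = np.array([0.2, 0.5, 1, 2, 4, 8, 12, 16, 20])   # years (hypothetical)
    prop = np.array([0.08, 0.12, 0.18, 0.25, 0.30, 0.33, 0.34, 0.35, 0.35])

    params, cov = curve_fit(model, age, prop, p0=(-0.3, 0.5, 0.35))
    perr = np.sqrt(np.diag(cov))   # 1-sigma parameter uncertainties
    ```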

  • 19.
    Magnusson, Karolina
    Linköping University, Department of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    Mechanical heart rate detection using cardiogenic impedance - a morphology approach (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The objective of this thesis is to examine the possibility of determining the mechanical heart rate using intracardiac impedance in the time domain. Deducing the mechanical heart rate from the impedance could help improve the performance of implanted devices that today depend on measurements of the heart's electrical activity. Cardiogenic, also known as intracardiac, impedance is based on the difference in conductivity between heart muscle tissue and blood, making the impedance vary as the heart fills and empties. The data used in this thesis was acquired from three previous studies performed by St Jude Medical, two clinical and one preclinical. Two impedance measurement configurations were chosen from these studies, one bipolar and one quadrupolar. To deduce the heart rate from the intracardiac impedance, six algorithms were evaluated: three using continuous peak detection and three evaluating small frames of the impedance signal. The peak detection algorithms operated on the impedance signal itself, on its derivative and on its integral. The other three were an Auto Correlation Function (ACF), an Average Magnitude Difference Function (AMDF) and an Average Wave Comparison Function (AWCF). In order to assess the heart rates deduced from the intracardiac impedance by the algorithms, these rates were compared to both the IEGM or the ECG (depending on which study was at hand) and the blood pressure.

    Several issues affected the performance of the algorithms. Impedance morphology can vary between patients. Some display so-called "double peaks", making it hard to decide whether a patient has, for example, a pulse of 80 bpm or of 160 bpm. The impedance morphology was also affected by amplitude modulation at the respiration frequency, which in some patients caused difficulties in analyzing the impedance signal. The results show that the two impedance measurement configurations perform equally well and that the ACF method was the overall best performing algorithm. They also show that individual patient impedance morphology has a large influence on the results; for future studies it would therefore be interesting to calibrate the algorithms for each patient, as this should improve performance.
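    The ACF method that performed best can be sketched as follows: the lag of the strongest autocorrelation peak within the physiological range gives the beat period. A minimal version, assuming an impedance signal `z` sampled at `fs` Hz that is long enough to cover the slowest rate:

    ```python
    import numpy as np

    def acf_heart_rate(z, fs, lo_bpm=30, hi_bpm=220):
        # Autocorrelation of the zero-mean signal, positive lags only.
        z = z - z.mean()
        acf = np.correlate(z, z, mode="full")[len(z) - 1:]
        lo = int(fs * 60 / hi_bpm)          # shortest plausible period
        hi = min(int(fs * 60 / lo_bpm), len(acf) - 1)
        lag = lo + np.argmax(acf[lo:hi])    # dominant beat period [samples]
        return 60.0 * fs / lag              # beats per minute
    ```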

  • 20.
    Örtenberg, Alexander
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    Parallelization of DIRA and CTmod Using OpenMP and OpenCL (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Parallelization is the answer to the ever-growing demand for computing power, taking advantage of multi-core processor technology and modern many-core graphics compute units. Multi-core CPUs and many-core GPUs have the potential to substantially reduce the execution time of a program, but it is often a challenging task to ensure that all available hardware is utilized. OpenMP and OpenCL are two parallel programming frameworks that have been developed to allow programmers to focus on high-level parallelism rather than dealing with low-level thread creation and management. This thesis applies these frameworks to the area of computed tomography by parallelizing the image reconstruction algorithm DIRA and the photon transport simulation toolkit CTmod. DIRA is a model-based iterative reconstruction algorithm in dual-energy computed tomography, which has the potential to improve the accuracy of dose planning in radiation therapy. CTmod is a toolkit for simulating primary and scatter projections in computed tomography to optimize scanner design and image reconstruction algorithms. The results presented in this thesis show that parallelization combined with computational optimization substantially decreased the execution times of these codes. For DIRA the execution time was reduced from two minutes to just eight seconds when using four iterations and a 16-core CPU, a speedup of 15. CTmod produced similar results with a speedup of 14 when using a 16-core CPU. The results also showed that for these particular problems GPU computing was not the best solution.

  • 21.
    Ljungqvist, Oskar
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science & Engineering.
    Motion Planning and Stabilization for a Reversing Truck and Trailer System (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis work contains a stabilization and a motion planning strategy for a truck and trailer system. A dynamical model for a general 2-trailer with two rigid free joints and a kingpin hitching has been derived based on previous work. The model holds under the assumption of rolling without slipping of the wheels and has been used for control design and as a steering function in a probabilistic motion planning algorithm.

    A gain-scheduled Linear Quadratic (LQ) controller with a pure pursuit path-following algorithm has been designed to stabilize the system around a given reference path. The LQ controller is only used in backward motion, and the pure pursuit controller is split into two parts, chosen depending on the direction of motion.

    A motion planning algorithm called Closed-Loop Rapidly-exploring Random Tree (CL-RRT) has then been used to plan suitable reference paths for the system, from an initial state configuration to a desired goal configuration under obstacle-imposed constraints. The algorithm solves a non-convex optimal control problem by randomly exploring the input space of the closed-loop system through forward simulations.

    Evaluation of performance is done partly in simulations and partly on a Lego platform consisting of a small-scale system. The controllers have been used on the Lego platform with successful results. When the reference path is a smooth function, the closed-loop system is able to follow the desired path in forward and backward motion with a small control error.

    In this work, it is shown how the CL-RRT algorithm is able to plan non-trivial maneuvers in simulations by combining forward and backward motion. Beyond simulations, the algorithm has also been used for open-loop planning on the Lego platform.
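    The pure pursuit part of the controller has a standard closed form: steer along the circle that passes through the vehicle and a lookahead point on the reference path. A sketch for forward motion with a bicycle model; the two-part split and the LQ controller for reversing are not shown.

    ```python
    import math

    def pure_pursuit(pose, lookahead_point, wheelbase):
        # pose: (x, y, heading). Transform the lookahead point into the
        # vehicle frame, then fit the circle through both.
        x, y, th = pose
        dx, dy = lookahead_point[0] - x, lookahead_point[1] - y
        xv = math.cos(th) * dx + math.sin(th) * dy    # ahead of vehicle
        yv = -math.sin(th) * dx + math.cos(th) * dy   # lateral offset
        curvature = 2.0 * yv / (xv * xv + yv * yv)
        return math.atan(wheelbase * curvature)       # steering angle
    ```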

  • 22.
    Rizothanasis, Georgios
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Identifying User Actions from Network Traffic (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Identification of a user’s actions while browsing the Internet is mostly achieved by instrumentation of the user’s browser or by obtaining server logs. In both cases this requires installation of software on multiple clients and/or servers in order to obtain sufficient data. However, by using network traffic, access to user generated traffic from multiple clients to multiple servers is possible. In this project a proxy server is used for recording network traffic and a user-action identification algorithm is proposed. The proposed algorithm includes various policies of analyzing network traffic in order to identify user actions. This project also presents an evaluation framework for the proposed policies, based on which the tradeoff of the various policies is revealed. Proxy servers are widely deployed by numerous organizations and often used for web mining, so with the work of this project user action recognition can be a new tool when considering web traffic evaluation.

  • 23.
    Lomod Blaya, Lucia
    Linköping University, Department of Behavioural Sciences and Learning, Education, Teaching and Learning. Linköping University, Faculty of Educational Sciences.
    Huvudräkning eller algoritmräkning: En litteraturstudie om vilken räknemetod som kan främja elevernas matematiklärande [Mental arithmetic or algorithm-based calculation: a literature review of which calculation method can promote pupils' mathematics learning] (2015). Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [sv]

    During my teaching placements I have noticed that pupils spend much of their mathematics lessons on calculation in textbooks. I have not seen teachers introduce mental arithmetic strategies to the pupils, only algorithms, i.e. written column procedures. The literature holds that mental arithmetic is a better calculation method in the earlier school years than algorithm-based calculation; in school, however, pupils mostly use column procedures. The aim of my work is to gain deeper knowledge of which of the two methods promotes pupils' mathematics learning in the long term. In my literature search I mainly used systematic/electronic searching, primarily in the ERIC database. My review shows that mental arithmetic increases pupils' understanding of numbers, and that pupils need number sense and high working-memory capacity to calculate in their heads. Discussions should also accompany mental arithmetic. The results further show that algorithm-based calculation is efficient but not flexible, and that the method disadvantages pupils' understanding of numbers. I conclude that mental arithmetic is a good method for developing pupils' understanding of mathematics.

  • 24.
    Norman, Gustaf
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science & Engineering.
    Sensor Validation Using Linear Parametric Models, Artificial Neural Networks and CUSUM (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Siemens gas turbines are monitored and controlled by a large number of sensors and actuators. Process information is stored in a database and used for offline calculations and analyses. Before storing the sensor readings, a compression algorithm checks the signal and skips values that represent no significant change; compression of 90 % is not unusual. Since data from the database is used for analyses, and decisions are made upon the results of these analyses, it is important to have a system for validating the data in the database; decisions made on false information can result in large economic losses. When this project was initiated no sensor validation system was available. In this thesis the uncertainties in measurement chains are revealed, methods for fault detection are investigated, and finally the most promising methods are put to the test. Linear relationships between redundant sensors are derived, and the residuals form an influence structure allowing the faulty sensor to be isolated. Where redundant sensors are not available, a gas turbine model is utilized to state the input-output relationships so that estimates of the sensor outputs can be formed. Linear parametric models and an ANN (Artificial Neural Network) are developed to produce the estimates. Two techniques for the linear parametric models are evaluated: prediction and simulation. The residuals are also evaluated in two ways: direct evaluation against a threshold, and evaluation with the CUSUM (CUmulative SUM) algorithm. The results show that sensor validation using compressed data is feasible. Faults as small as 1 % of the measuring range can be detected in many cases.
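    The two-sided CUSUM test used for residual evaluation accumulates deviations beyond a drift term and alarms when either sum crosses a threshold. A minimal sketch with illustrative tuning values:

    ```python
    def cusum(residuals, drift=0.5, threshold=5.0):
        # Two-sided CUSUM change detector over a residual sequence.
        g_pos = g_neg = 0.0
        for k, r in enumerate(residuals):
            g_pos = max(0.0, g_pos + r - drift)   # upward change
            g_neg = max(0.0, g_neg - r - drift)   # downward change
            if g_pos > threshold or g_neg > threshold:
                return k                          # sample index of alarm
        return None
    ```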

  • 25.
    Dong, Yao
    et al.
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, The Institute of Technology.
    Sadegh Aminian, Mohammad
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, The Institute of Technology.
    Routing in Terrestrial Free Space Optical Ad-Hoc Networks (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Terrestrial free-space optical (FSO) communication uses visible or infrared wavelengths to transmit high speed data wirelessly through the atmospheric channel. The performance of a terrestrial FSO channel mainly depends on the local atmospheric conditions. Ad hoc networks offer cost-effective solutions for communications in areas where infrastructure is unavailable, e.g. intelligent transport systems, disaster recovery and battlefield scenarios. Traditional ad hoc networks operate in the radio frequency (RF) spectrum, where the available bandwidth faces the challenge of rapidly increasing demands. FSO is an attractive alternative to RF in ad-hoc networks because of its high bandwidth and interference-free operation.

    This thesis investigates the influencing factors for routing traffic from a given s-d pair, while satisfying certain Quality of Service requirements, in terrestrial FSO ad hoc mesh networks under the effect of stochastic atmospheric turbulence. It starts with a comprehensive review of FSO technology, including its history, applications, advantages and limitations. Subsequently the principle of operation, the building blocks and the safety of FSO communication systems are discussed. The physics of the atmosphere is taken into account to investigate how the propagation of optical signals is affected in terrestrial FSO links. A propagation model is developed to grade the performance and reliability of the FSO ad hoc links in the network. Based on that model and a K-th shortest path algorithm, the path with the highest reliability, the path with the second highest reliability and an independent path sharing no links with the former two were compared in simulation scenarios for node-dense and node-sparse areas.

    Matlab simulations show that the short/long range dependent transmission delay is positively proportional to the number of hops of the paths. Lower path reliability only dominates as the cause of severe delay when the traffic flow approaches its upper link capacity in a node-sparse area. In order to route traffic from given s-d pairs while satisfying certain Quality of Service requirements, the path with the highest reliability may not be the best choice, since it may involve more hops, which degrades the QoS. Meanwhile, in case of exponential traffic congestion, it is recommended that both the traffic demand and the traffic flow propagating through the links be kept below a value close to the effective capacity, where the nonlinearity of the transmission delay curve starts to noticeably worsen.

  • 26.
    Kalms, Mikael
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    High-performance particle simulation using CUDA (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Over the past 15 years, modern PC graphics cards (GPUs) have changed from being pure graphics accelerators into parallel computing platforms. Several new parallel programming languages have emerged, including NVIDIA's parallel programming language for GPUs (CUDA).

    This report explores two related problems in parallel: How well suited is CUDA for implementing algorithms that utilize non-trivial data structures? And how does one develop a complex algorithm that uses a CUDA system efficiently?

    A guide for how to implement complex algorithms in CUDA is presented. Simulation of a dense 2D particle system is chosen as the problem domain for algorithm optimization. Two algorithmic optimization strategies are presented which reduce the computational workload when simulating the particle system. The strategies can either be used independently, or combined for slightly improved results. Finally, the resulting implementations are benchmarked against a simpler implementation on a normal PC processor (CPU) as well as a simpler GPU algorithm.

    A simple GPU solution is shown to run at least 10 times faster than a simple CPU solution. An improved GPU solution can then yield another 10 times speed-up, while sacrificing some accuracy.
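    A typical workload-reduction strategy for dense particle systems, whether or not it matches the two in the report, is uniform-grid binning: pair tests are restricted to neighbouring cells, cutting the naive O(n²) test down to roughly O(n) at uniform density. A CPU-side Python sketch of the idea the GPU kernels would share:

    ```python
    import numpy as np

    def grid_pairs(pos, radius):
        # pos: (n, 2) array. Bin particles into square cells of side
        # `radius`; interacting pairs can only span adjacent cells.
        cells = {}
        for i, p in enumerate(pos):
            cells.setdefault(tuple((p // radius).astype(int)), []).append(i)
        pairs = []
        for (cx, cy), idx in cells.items():
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy), []):
                        for i in idx:
                            if i < j and np.linalg.norm(pos[i] - pos[j]) < radius:
                                pairs.append((i, j))
        return pairs
    ```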

  • 27.
    Thalén, Björn
    Linköping University, Department of Mathematics, Optimization. Linköping University, The Institute of Technology.
    Manpower planning for airline pilots: A tabu search approach (2010). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The airline industry faces some of the largest and most complicated optimization problems of all industries today. A large part of airlines' costs is crew costs, and large savings can be made by efficient manpower planning. Research has focused on the later steps in the planning process, such as crew scheduling. The focus of this thesis is on the largely unexplored research area of staffing and transition planning for pilots. For the important question of which pilot should receive training for which position, no optimization-based solution strategy had been presented before this thesis. One reason for this might be that many complicated regulations concern this question, making an easily solved model impossible. I have developed a tabu search based algorithm with encouraging results. The algorithm was tested on data from Scandinavian Airlines. Compared to a reference algorithm based on commercial mixed integer programming software, the tabu search algorithm finds similar solutions about 30 times faster. I show how tabu search can be tailored to a specific complicated problem, and the results are good enough to be not only of theoretical interest but also of direct practical interest for use in the airline industry.
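    The tabu search skeleton underlying such an algorithm is compact: always take the best neighbour that is not tabu, remember recently visited solutions, and keep the best solution seen. Everything below is a generic sketch; the thesis's move structure over pilot-to-position assignments would be supplied through `neighbours`.

    ```python
    from collections import deque

    def tabu_search(init, neighbours, cost, iters=1000, tenure=20):
        current = best = init
        tabu = deque([init], maxlen=tenure)   # short-term memory
        for _ in range(iters):
            cands = [s for s in neighbours(current) if s not in tabu]
            if not cands:
                break
            current = min(cands, key=cost)    # best move, even if uphill
            tabu.append(current)
            if cost(current) < cost(best):
                best = current
        return best
    ```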

  • 28.
    Nezhadali, Vaheed
    Linköping University, Department of Management and Engineering, Machine Design.
    Multi-objective optimization of Industrial robots (2011). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Industrial robots are the most widely manufactured and utilized type of robot in industry. Improving the design process of industrial robots would lead to further developments in the robotics industry, and consequently other dependent industries would benefit. There is therefore an effort to make the design process more and more efficient and reliable. The design of industrial robots requires studies in various fields. Engineering software tools facilitate and accelerate robot design processes such as dynamic simulation, structural analysis, optimization, control and so forth. Designing a framework that automates the robot design process, such that the different tools interact automatically, would therefore be beneficial. In this thesis, the goal is to investigate the feasibility of integrating tools from different domains, such as geometry modeling, dynamic simulation, finite element analysis and optimization, in order to obtain an industrial robot design and optimization framework. Meanwhile, metamodeling is used to replace the time-consuming design steps. In the optimization step, various optimization algorithms are compared based on their performance, and the best suited algorithm is selected. As a result, it is shown that the objectives are achievable in the sense that finite element analysis can be efficiently integrated with the other tools and the results can be optimized during the design process. A holistic framework which can be used for the design of robots with several degrees of freedom is introduced at the end.

  • 29.
    Reininghaus, Jan
    et al.
    Zuse Institute Berlin.
    Günther, David
    Zuse Institute Berlin.
    Prohaska, Steffen
    Zuse Institute Berlin.
    Hotz, Ingrid
    Zuse Institute Berlin.
    TADD: A Computational Framework for Data Analysis using Discrete Morse Theory (2010). Conference paper (Refereed).
  • 30.
    Axelsson, Viktor
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Automatisk segmentering och maskering av implantat i mammografibilder [Automatic segmentation and masking of implants in mammography images] (2014). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis develops an algorithm that automatically classifies a mammogram image as containing an implant or not, and segments and masks any breast implant present in the image.

  • 31.
    Hildebrand, Cisilia
    et al.
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, The Institute of Technology.
    Hörtin, Stina
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, The Institute of Technology.
    A comparative study between Emme and Visum with respect to public transport assignment (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Macroscopic traffic simulation is widely used around the world to provide assistance in traffic infrastructure development as well as in strategic traffic planning. When studying a large traffic network, macroscopic traffic simulation can be used to model current and future traffic situations. The two most common software packages used for traffic simulation in Sweden today are Emme and Visum, developed by INRO and PTV respectively.

    The aim of the thesis is to perform a comparison between the software packages Emme and Visum with respect to the assignment of public transport, in other words how passengers choose their routes on the existing public transport lines. A complete software comparison would also cover run-time, analysis capabilities, multi-modality and the capacity to model various behavioural phenomena such as crowding and fares; this is not done here. It is of interest to study the differences between the two software algorithms, and why they might occur, because the Swedish Transport Administration uses Emme and the Traffic Administration in Stockholm uses Visum when planning public transport. The comparison includes the resulting volumes on transit lines, travel times, flow through specific nodes, number of boardings, auxiliary volumes and number of transits. The goal of this work is to answer the following question: What are the differences when modelling a public transport network in Emme and in Visum, given that passengers only have information about the travel times and the line frequency, and why do the differences occur?

    In order to evaluate how the algorithms work in a larger network, Nacka municipality (in Stockholm) and the new metro route between Nacka Forum and Kungsträdgården have been used. The motivation for choosing this area and case is that it is interesting to see what differences could occur between the programs when there is a major change in the traffic network.

    The network of Nacka, and parts of Stockholm City, was developed from an existing road network of Sweden, restricted by "cutting out" the area of interest and then removing all public transport lines outside the selected area. The OD-matrix was also limited, and in order not to lose the correct flow of travellers, portal zones were used to collect and retain volumes.

    To find out why the differences occur, the headway-based algorithms in each software package were studied carefully. An example of a small and simple network (consisting of only a start and an end node) has been used to demonstrate how the algorithms work and why volumes split differently on the existing transit lines in Emme and Visum. The limited network of Nacka shows how the different software packages may produce different results in a larger public transport network.

    The results show that there are differences between the program algorithms, but the significance varies depending on which output is studied and the size of the network. The Visum algorithm results in more total boardings, i.e. more passengers have an optimal strategy that includes a transit. The algorithms in the two programs are very similar, since both include more or less parts of the optimal strategy, but they weight the parameters differently. For example, Visum first of all focuses on the shortest total travel time and then considers the other lines with respect to the maximum waiting time. Emme, however, first focuses on the shortest travel time and then considers the total travel time for other lines with half the waiting time instead of the maximum waiting time. This results in fewer transit lines being attractive in Emme compared to Visum. The thesis concludes that by varying the parameters for public transport in each software algorithm one can obtain similar results, which implies that it is more important to choose the best parameter values than to choose the "best" software when simulating a traffic network.
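    The headway-based logic both programs share can be made concrete with a two-line example: if passengers board whichever attractive line departs first, the expected wait is 60 divided by the combined frequency, and demand splits in proportion to frequency. The numbers are invented; deciding whether the slower line belongs in the attractive set at all is exactly where the Emme and Visum criteria described above diverge.

    ```python
    # Two lines serve the same stop pair; frequency in departures/hour.
    lines = {"A": {"freq": 6, "ride": 20.0},    # 10 min headway
             "B": {"freq": 12, "ride": 26.0}}   #  5 min headway

    total_freq = sum(l["freq"] for l in lines.values())
    expected_wait = 60.0 / total_freq                       # 3.33 minutes
    shares = {k: l["freq"] / total_freq for k, l in lines.items()}
    # A carries 1/3 of the demand, B carries 2/3.
    ```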

  • 32.
    Persson, Mikael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Online Monocular SLAM: Rittums (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A classic computer vision task is the estimation of a 3D map from a collection of images. This thesis explores the online simultaneous estimation of camera poses and map points, often called Visual Simultaneous Localisation and Mapping (VSLAM). In the near future the use of visual information by autonomous cars is likely, since driving is a vision-dominated process. For example, VSLAM could be used to estimate the position of the car in relation to objects of interest, such as the road, other cars and pedestrians. Aimed at the creation of a real-time, robust, loop-closing, single-camera SLAM system, the properties of several state-of-the-art VSLAM systems and related techniques are studied. The system goals cover several important, if difficult, problems, which makes a solution widely applicable. This thesis makes two contributions: a rigorous qualitative analysis of VSLAM methods, and a system designed accordingly. A novel tracking-by-matching scheme is proposed which, unlike the trackers used by many similar systems, is better able to deal with forward camera motion. The system estimates general motion with loop closure in real time. It is compared to a state-of-the-art monocular VSLAM algorithm and found to be similar in speed and performance.

  • 33.
    Andersson, Filip
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    GPGPU-Sim2014Independent thesis Basic level (university diploma), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    This thesis studies the impact of graphics-card hardware features on the performance of GPU computing, using the GPGPU-Sim simulation tool. GPU computing is a growing topic in the world of computing, and a study that identifies the performance bottlenecks of a program with respect to the hardware parameters of the device is an important step towards tuning devices for higher efficiency.

    In this work we selected a convolution algorithm - a typical GPGPU application - and conducted several tests to study different performance parameters. These tests were performed on two simulated graphics cards (NVIDIA GTX480, NVIDIA Tesla C2050), which are supported by GPGPU-Sim. By changing hardware parameters of the graphics card, such as memory cache sizes, frequency and the number of cores, we can make a fine-grained analysis of the effect of these parameters on the performance of the program.
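
    The methodology can be pictured as a sweep harness around the simulator. The sketch below is hypothetical throughout: the config file names, the simulated binary and the output format are invented placeholders, not GPGPU-Sim's actual interface.

        import re, subprocess

        # Hypothetical configs differing in one hardware parameter (e.g. L1 size).
        configs = ["gtx480_small_l1.config", "gtx480_large_l1.config"]
        for cfg in configs:
            out = subprocess.run(["./convolution_sim", cfg],     # hypothetical binary
                                 capture_output=True, text=True).stdout
            m = re.search(r"ipc\s*=\s*([\d.]+)", out)            # assumed output format
            print(cfg, "IPC:", m.group(1) if m else "not found")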

    A graphics card working on a picture convolution task relies on the L1 cache but performs worst with a small shared memory. Using this simulator to run performance tests on a theoretical GPU architecture could lead to better GPU designs for embedded systems.

  • 34.
    Kardell, Martin
    Linköping University, Department of Biomedical Engineering. Linköping University, The Institute of Technology.
    Automatic Segmentation of Tissues in CT Images of the Pelvic Region2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In brachytherapy, radiation therapy is performed by placing the radiation source into or very close to the tumour. When calculating the absorbed dose, water is often used as the radiation transport and dose scoring medium for soft tissues, and this leads to inaccuracies. The iterative reconstruction algorithm DIRA is under development at the Center for Medical Imaging Science and Visualization, Linköping University. DIRA uses dual-energy CT to decompose tissues into different doublets and triplets of base components for a better absorbed dose estimation. To accurately determine the mass fractions of these base components for different tissues, the tissues need to be identified in the image. The aims of this master's thesis are: (i) to find an automated segmentation algorithm that best segments the male pelvis in CT images; (ii) to implement a segmentation algorithm that can be used in DIRA; (iii) to implement a fully automatic segmentation algorithm.

    Seven segmentation methods were tested in Matlab using images obtained from Linköping University Hospital. The methods were: active contours, atlas based registration, graph cuts, level set, region growing, thresholding and watershed. Four segmentation algorithms were selected for further analysis: phase based atlas registration, region growing, thresholding and active contours without edges. The four algorithms were combined and supplemented with other image analysis methods to form a fully automated segmentation algorithm that was implemented in DIRA.

    The newly developed algorithm (named MK2014) was sufficiently stable for pelvic image segmentation, with a mean computational time of 45.3 s and a mean Dice similarity coefficient of 0.925 per 512×512 image. The performance of MK2014 tested on a simplified anthropomorphic phantom in DIRA gave promising results. Additional tests with more realistic phantoms are needed to confirm the general applicability of MK2014 in DIRA.
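
    For reference, the Dice similarity coefficient quoted above measures the overlap between an automatic and a manual segmentation; a minimal sketch with toy masks:

        import numpy as np

        def dice(auto: np.ndarray, manual: np.ndarray) -> float:
            # 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
            intersection = np.logical_and(auto, manual).sum()
            return 2.0 * intersection / (auto.sum() + manual.sum())

        a = np.zeros((512, 512), bool); a[100:300, 100:300] = True
        b = np.zeros((512, 512), bool); b[120:320, 100:300] = True
        print(round(dice(a, b), 3))     # 0.9 for these toy masks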

  • 35.
    Avdic, Kenan
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    On-chip Pipelined Parallel Mergesort on the Intel Single-Chip Cloud Computer2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With the advent of mass-market consumer multicore processors, the consumer off-the-shelf general-purpose processor industry has moved away from increasing clock frequency as the classical approach to higher performance. This is commonly attributed to the well-known problems of power consumption and heat dissipation at high frequencies and voltages.

    This paradigm shift has prompted research into the relatively new field of "many-core" processors, such as the Intel Single-chip Cloud Computer. The SCC is a concept vehicle, an experimental homogeneous architecture employing 48 IA32 cores interconnected by a high-speed communication network.

    Similar multiprocessor systems, such as the Cell Broadband Engine, exhibit significantly higher aggregate bandwidth in the interconnect network than in memory. By tailoring an algorithm to the architecture, we investigate whether this is also the case with the SCC, and whether a pipelined approach to sorting alleviates the classical memory bottleneck problem or provides any performance benefits.

    For this purpose, we employ and combine different classic algorithms, most significantly, parallel mergesort and samplesort.
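
    The pipelining idea can be sketched in a few lines (an invented illustration, not the SCC implementation): locally sorted blocks stream through a tree of merge stages, so intermediate results flow between cores instead of making round trips to off-chip memory.

        import heapq

        def merge_stage(left, right):
            # One pipeline stage: lazily merge two sorted input streams,
            # emitting elements downstream as soon as they are available.
            yield from heapq.merge(left, right)

        def pipelined_mergesort(blocks):
            # Blocks are locally pre-sorted (cf. samplesort on each core),
            # then merged through a tree of streaming stages.
            streams = [iter(sorted(b)) for b in blocks]
            while len(streams) > 1:
                merged = [merge_stage(streams[i], streams[i + 1])
                          for i in range(0, len(streams) - 1, 2)]
                if len(streams) % 2:
                    merged.append(streams[-1])
                streams = merged
            return list(streams[0])

        print(pipelined_mergesort([[5, 3], [8, 1], [7, 2], [6, 4]]))  # [1, 2, ..., 8]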

  • 36.
    Prabahar, Jasila
    Linköping University, Department of Biomedical Engineering. Linköping University, The Institute of Technology.
    Localization of Stroke Using Microwave Technology and Inner product Subspace Classifier2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Stroke, or “brain attack”, occurs when a blood clot carried by the blood vessels from another part of the body blocks a cerebral artery, or when a blood vessel breaks and interrupts the blood flow to parts of the brain. Depending on which part of the brain is damaged, the functional abilities controlled by that region are lost. By interpreting the patient’s symptoms it is possible to make a coarse estimate of the location of the stroke, e.g. whether it is in the left or right hemisphere of the brain. The aim of this study was to evaluate whether microwave technology can be used to estimate the location of a haemorrhagic stroke.

    In the first part of the thesis, the CT images of the patients for whom the microwave measurements were taken are analysed and used as a reference for the location of the bleeding in the brain. The X, Y and Z coordinates are calculated from the target slice (where the bleeding is most prominent). Based on the bleeding coordinates, the datasets are divided into classes. Using supervised learning, the ISC algorithm is trained to classify strokes in the left versus right hemispheres, strokes in the anterior versus posterior part of the brain, and strokes in the inferior versus superior region of the brain. The second part of the thesis analyses the classification results in order to identify the patients that were misclassified.

    The classification results for the location of bleeding were promising, with a high sensitivity and specificity as indicated by the area under the ROC curve (AUC). An AUC of 0.86 was obtained for bleedings in the left versus right brain, and an AUC of 0.94 for bleedings in the inferior versus superior brain. The main constraint was the small size of the dataset and the scarcity of cases with bleedings in the frontal brain, which leads to imbalance between the classes. The analysis showed that bleedings close to the skull and a few small bleedings deep inside the brain were misclassified. Many factors can be responsible for misclassification, such as the antenna position, head size and amount of hair.

    The overall results indicate that the SDD with the ISC algorithm has a high potential to distinguish bleedings in different locations. The results are expected to become more stable as the patient dataset used for training grows.
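
    A minimal sketch of a subspace classifier of the kind used here (the exact ISC formulation is assumed, not taken from the thesis): each class is represented by a low-rank subspace of its training signals, and a new measurement is assigned to the class onto whose subspace it projects with the largest norm.

        import numpy as np

        def fit_subspace(X, rank=3):
            # X: (n_features, n_samples) training matrix for one class.
            U, _, _ = np.linalg.svd(X, full_matrices=False)
            return U[:, :rank]                      # orthonormal basis

        def classify(x, bases):
            scores = [np.linalg.norm(U.T @ x) for U in bases]  # inner products
            return int(np.argmax(scores))           # e.g. 0 = left, 1 = right

        rng = np.random.default_rng(0)
        template_l = rng.normal(size=(64, 1)); template_r = rng.normal(size=(64, 1))
        left = template_l + 0.3 * rng.normal(size=(64, 20))    # toy class structure
        right = template_r + 0.3 * rng.normal(size=(64, 20))
        bases = [fit_subspace(left), fit_subspace(right)]
        print(classify(left[:, 0], bases))          # 0, i.e. the first class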

  • 37.
    Kallin Clarke, Semone
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Markerless Augmented Reality for Visualization of 3D Objects in the Real World2014Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This report documents the research and experiments performed to evaluate the possibilities of using OpenCV for developing markerless augmented reality applications based on the structure-from-motion algorithm. It gives a background on what augmented reality is and how it can be used, and presents theory about camera calibration and the structure-from-motion algorithm. Based on this theory, the algorithm was implemented using OpenCV and evaluated with regard to its performance and its possibilities for creating markerless augmented reality applications.
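
    The core structure-from-motion step can be sketched with OpenCV as follows (an assumed setup, not the report's code): recover the relative camera pose from matched points in two frames, then triangulate 3D points for the virtual overlay.

        import cv2
        import numpy as np

        def two_view_reconstruction(pts1, pts2, K):
            # pts1, pts2: Nx2 float arrays of matched pixels; K: 3x3 intrinsics.
            E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
            P2 = K @ np.hstack([R, t])                         # recovered second camera
            pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
            return R, t, (pts4d[:3] / pts4d[3]).T              # Euclidean 3D points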

  • 38.
    Lundell, Christian
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology.
    Water simulation for cell based sandbox games2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis work presents a new algorithm for simulating fluid based on the Navier-Stokes equations. The algorithm is designed for cell-based sandbox games where interactivity and performance are the main priorities. It enforces mass conservation directly, instead of enforcing a divergence-free velocity field. A global-scale pressure model that simulates hydrostatic pressure is used, where the pressure propagates between neighbouring cells. A prefix sum algorithm is used so that only work areas that contain fluid are computed.
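
    A minimal sketch (invented example) of the prefix-sum compaction: an exclusive prefix sum over the fluid flags gives each fluid cell its slot in a dense work list, so later passes touch only those cells.

        import numpy as np

        fluid = np.array([0, 1, 1, 0, 0, 1, 0, 1])       # 1 = cell holds water
        offsets = np.cumsum(fluid) - fluid               # exclusive prefix sum
        work_list = np.empty(fluid.sum(), dtype=int)
        for cell, flag in enumerate(fluid):
            if flag:
                work_list[offsets[cell]] = cell          # scatter to compact slot
        print(work_list)                                 # [1 2 5 7]: cells to update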

  • 39.
    Melin, Tomas
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Vidhall, Tomas
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Namecoin as authentication for public-key cryptography2014Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Public-key cryptography is a subject that is very important to everyone who wants confidentiality and privacy in networks. It is important to understand how public-key cryptography systems work and what flaws they have. In the first part of this report we describe some of the most common encryption schemes and key agreements. We carefully investigate their flaws, whether they are broken and which threats have dire consequences. We find that the biggest issue is authentication, and we present the current solutions to the problem. The current solutions are flawed because they rely too much on trusting different entities; it only takes one trusted entity becoming malicious for the entire authentication system to be compromised.

    Because of this we propose an alternative system in the second part: Namecoin. A risk analysis in the form of an attack tree is performed on the Namecoin system, where we describe how the attacks are executed and what can be done to prevent them. We present different threats against the system, and we describe how dire the consequences are and the probability of their execution. Since Namecoin is an implementation of the block chain algorithm, we have also explained in detail how the block chain works. We present why we think that Namecoin is a system that should replace the currently used certificate authority system. The certificate authority system is flawed because it is centralized and dependent on no authority making any mistakes. The Namecoin system does not become compromised unless more than 50 % of the hashrate in the system is used with malicious intent. We have concluded that the biggest threats against Namecoin have such a low probability that they can be neglected.
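
    A toy sketch of the hash chaining that gives the block chain its tamper resistance (far from Namecoin's full protocol: no proof-of-work, networking or real name records): any change to an earlier block breaks every later link.

        import hashlib, json

        def block_hash(block):
            return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

        chain = [{"prev": "0" * 64, "data": "name: example.bit -> key A"}]
        chain.append({"prev": block_hash(chain[0]), "data": "name update -> key B"})

        # Verification: every block must reference the hash of its predecessor.
        ok = all(chain[i]["prev"] == block_hash(chain[i - 1])
                 for i in range(1, len(chain)))
        print("chain valid:", ok)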

  • 40.
    Falck, Markus
    Linköping University, Department of Mathematics, Computational Mathematics. Linköping University, The Institute of Technology.
    Local Volatility Calibration on the Foreign Currency Option Market2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis we develop and test a new method for interpolating and extrapolating prices of European options. The theoretical base originates from the local variance gamma model developed by Carr (2008), in which the local volatility model by Dupire (1994) is combined with the variance gamma model by Madan and Seneta (1990). By solving a simplified version of the Dupire equation under the assumption of a continuous five-parameter diffusion term, we derive a parameterization defined for strikes in an interval of arbitrary size. The parameterization produces positive option prices which satisfy both conditions for absence of arbitrage in a one-maturity setting, i.e. all adjacent vertical spreads and butterfly spreads are priced non-negatively.

    The method is implemented and tested in the FX-option market. We suggest two sub-models, one with three and one with five degrees of freedom. By using a least-squares approach, we calibrate the two sub-models against 416 Reuters-quoted volatility smiles. Both sub-models succeed in generating prices within the bid-ask spread for all options in the sample. Compared to the three-parameter model, the model with five parameters calibrates more exactly to market-quoted mids but has a longer calibration time. The three-parameter model calibrates remarkably quickly; in a MATLAB implementation using a Levenberg-Marquardt algorithm the average calibration time is approximately 1 ms. Both sub-models produce volatility smiles which are C² and well-behaved.
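
    The calibration loop can be sketched as follows (a hypothetical three-parameter smile and invented quotes, not the thesis's parameterization): fit the smile parameters to quoted implied volatilities by least squares with a Levenberg-Marquardt solver.

        import numpy as np
        from scipy.optimize import least_squares

        def smile(params, k):
            # Toy quadratic-in-log-moneyness smile; stands in for the
            # model's option-price parameterization.
            a, b, c = params
            return a + b * k + c * k ** 2

        strikes_logm = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])   # invented quotes
        quoted_vols = np.array([0.125, 0.11, 0.10, 0.105, 0.12])

        res = least_squares(lambda p: smile(p, strikes_logm) - quoted_vols,
                            x0=[0.1, 0.0, 0.1], method="lm")   # Levenberg-Marquardt
        print(res.x, "max error:", np.abs(res.fun).max())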

    Further, we suggest a technique allowing for arbitrage-free interpolation of calibrated option price functions in the maturity dimension. The interpolation is performed in parameter space, where every set of parameters uniquely determines an option price function. Furthermore, we produce sufficient conditions to ensure absence of calendar spread arbitrage when calibrating the proposed model to several maturities. We use this technique to produce implied volatility surfaces which are sufficiently smooth, satisfy all conditions for absence of arbitrage and fit market-quoted volatility surfaces within the bid-ask spread. In the final chapter we use the results for producing Dupire local volatility surfaces and for pricing variance swaps.

  • 41.
    Nielsen, Isak
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Axehill, Daniel
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    An O(log N) Parallel Algorithm for Newton Step Computation in Model Predictive Control2014In: In Proceedings of the 19th World Congress of the International Federation of Automatic Control, 2014, 10505-10511 p.Conference paper (Refereed)
    Abstract [en]

    The use of Model Predictive Control is steadily increasing in industry as more complicated problems can be addressed. Since online optimization is usually performed, the main bottleneck of Model Predictive Control is its relatively high computational complexity. Hence, much research has been devoted to finding efficient algorithms that solve the optimization problem. As parallel hardware is becoming more commonly available, the demand for efficient parallel solvers for Model Predictive Control has increased. In this paper, a tailored parallel algorithm that can adopt different levels of parallelism for solving the Newton step is presented. With sufficiently many processing units, it is capable of reducing the computational growth to logarithmic in the prediction horizon. Since the Newton step computation is where most computational effort is spent in both interior-point and active-set solvers, this new algorithm can significantly reduce the computational complexity of highly relevant solvers for Model Predictive Control.
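
    The logarithmic growth comes from organising the computation as a binary tree whose levels are processed in parallel. The toy sketch below reduces plain sums rather than the paper's Newton-step equations, but shows where the O(log N) depth comes from.

        def tree_reduce(values, combine):
            steps = 0
            while len(values) > 1:
                pairs = [combine(values[i], values[i + 1])
                         for i in range(0, len(values) - 1, 2)]  # one parallel level
                if len(values) % 2:
                    pairs.append(values[-1])
                values, steps = pairs, steps + 1
            return values[0], steps

        total, depth = tree_reduce(list(range(64)), lambda a, b: a + b)
        print(total, depth)     # 2016 after 6 = log2(64) parallel steps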

  • 42.
    Säljö, Roger
    et al.
    Linköping University, The Tema Institute, Department of Communications Studies. Linköping University, Faculty of Arts and Sciences.
    Wyndhamn, Jan
    Linköping University, Department of Behavioural Sciences. Linköping University, Faculty of Arts and Sciences.
    The Formal Setting as Context for Cognitive Activities: An Empirical Study of Arithmetic Operations under Conflicting Premisses for Communication1987In: European Journal of Psychology of Education, ISSN 0256-2928, E-ISSN 1878-5174, Vol. 2, no 3, 233-245 p.Article in journal (Refereed)
    Abstract [en]

    The general concern of the present article is to contribute to an understanding of the contextual determination of cognitive activities. More specifically, the focus of the empirical research reported has been to study how pupils define and deal with cognitive tasks in situations that are recognised as pedagogical in character. Within the context of their everyday mathematics teaching, 206 twelve-year-old primary school pupils were given work sheets containing elementary arithmetic problems. The experimental treatment consisted of introducing (through headings and instructions) pedagogical definitions of problems that were in conflict with the nature of the problems themselves. The results indicate that the predefinitions of cognitive activities typical of educational contexts have a strong impact on the way problems are dealt with. Clear differences could be discerned between groups at different achievement levels in the extent to which the cues present in pedagogical contexts were used in defining the problem. A crucial aspect of what are conventionally conceived as differences in mathematical ability seems, judging from the present results, to have more to do with the capacity to decipher ambiguous communicative situations than with the mastery of a mathematical algorithm per se.

  • 43.
    Nilsson, Petter
    Linköping University, Department of Electrical Engineering, Electronics System.
    Built-in self-test of analog-to-digital converters in FPGAs2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    When designing an ADC it is desirable to test its performance at two different points in the development process. The first is characterization and verification testing when a chip containing the ADC has been taped-out for the first time, and the second is production testing when the chip is manufactured on a large scale. It is important to have a good correlation between the results of characterization and the results of production testing.

    This thesis project investigates the feasibility of using a built-in self-test to evaluate the performance of embedded ADCs in FPGAs, by using the FPGA fabric to run the necessary test algorithms. The idea is to have a common base of C code for both characterization and production testing. The code can be compiled and run on a computer for a characterization test setup, but it can also be synthesized using a high-level synthesis (HLS) tool and written to FPGA fabric as part of a built-in self-test for production testing. By using the same code base, it is easier to get a good correlation between the results, since any difference due to algorithm implementation can be ruled out. The algorithms include a static test where differential nonlinearity (DNL), integral nonlinearity (INL), offset and gain error are calculated using a sine-wave based histogram approach. A dynamic test with an FFT algorithm, which, for example, calculates signal-to-noise ratio (SNR) and total harmonic distortion (THD), is also included. All algorithms are based on the IEEE Standard for Terminology and Test Methods for Analog-to-Digital Converters (IEEE Std 1241). To generate a sine-wave test signal, an attempt is made to use a delta-sigma DAC implemented in the FPGA fabric.
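
    A minimal sketch of the sine-wave histogram method referenced above (simplified: a full-scale sine is assumed and only the unreliable end codes are discarded): the measured code histogram is compared with the ideal arcsine-distributed one to obtain DNL, and INL follows as its running sum.

        import numpy as np

        def dnl_inl(samples, n_bits):
            n_codes = 2 ** n_bits
            hist = np.bincount(samples, minlength=n_codes).astype(float)
            # Ideal sine-input histogram: code probabilities from the arcsine pdf.
            edges = np.linspace(-1, 1, n_codes + 1)
            ideal = np.diff(np.arcsin(edges)) / np.pi * hist.sum()
            dnl = hist[1:-1] / ideal[1:-1] - 1      # end codes excluded
            return dnl, np.cumsum(dnl)

        # Toy 8-bit ADC: quantize a slightly noisy full-scale sine.
        rng = np.random.default_rng(0)
        t = np.arange(200_000)
        sine = np.sin(2 * np.pi * 0.01237 * t) + rng.normal(0, 1e-3, t.size)
        codes = np.clip(((sine + 1) / 2 * 256).astype(int), 0, 255)
        dnl, inl = dnl_inl(codes, 8)
        print(dnl.max(), inl.max())                 # worst-case DNL and INL in LSB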

    Synthesizing the C code algorithms and running them on the FPGA proved successful. For the static test, the results from the algorithms running on a computer and on the FPGA matched perfectly to 10 decimal places, and for the dynamic test they matched to two decimal places. Using a delta-sigma DAC to generate a test sine-wave did not prove feasible in this case. Assuming a brick-wall bandpass filter, the performance of the delta-sigma DAC is estimated at an SNR of 53 dB, and this signal is not pure enough to test the test-case ADC, which has a specified SNR of 60 dB.

  • 44.
    Wikström, Anders
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Resource allocation of drones flown in a simulated environment2014Independent thesis Basic level (university diploma), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    In this report we compare three different assignment algorithms with respect to how they can be used to assign a set of drones to a set of goal locations in as resource-efficient a way as possible. An experiment is set up to compare how these algorithms perform in a somewhat realistic simulated environment; the Robot Operating System (ROS) is used to create the experimental environment. We found that by introducing a threshold for the Hungarian algorithm we could reduce the total time it takes to complete the problem while only slightly increasing the total distance traversed by the drones.
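
    The assignment step can be sketched with SciPy's Hungarian-method solver (the cost data are invented, and capping costs is only an assumption about how the report's threshold works):

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        costs = np.array([[4.0, 9.0, 12.0],      # rows: drones, cols: goals
                          [7.0, 3.0, 10.0],      # entries: e.g. travel distance
                          [8.0, 6.0,  2.0]])

        threshold = 8.0
        capped = np.minimum(costs, threshold)    # distances beyond the threshold
                                                 # are treated as equally bad
        rows, cols = linear_sum_assignment(capped)
        print(list(zip(rows, cols)), "total:", costs[rows, cols].sum())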

  • 45.
    Ottersten, Björn
    et al.
    Stanford University, CA, USA.
    Viberg, Mats
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Kailath, Thomas
    Stanford University, CA, USA.
    Analysis of Algorithms for Sensor Arrays with Invariance Structure1990Report (Other academic)
    Abstract [en]

    The problem of estimating signal parameters from sensor array data is addressed. If the array is composed of two identical subarrays (i.e., one invariance), the ESPRIT algorithm is known to yield parameter estimates in a very cost-efficient manner. Recently, the total least squares (TLS) version of ESPRIT has been formulated in a subspace fitting framework. In this formulation, the ESPRIT concept is easily generalized to arrays exhibiting more than one invariance. The asymptotic properties for this class of algorithms are derived. The estimates are shown to be statistically efficient under certain assumptions. The case of a uniform linear array is studied in more detail, and a generalization of the ESPRIT algorithm is proposed by introducing row weighting of the subspace estimate.
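
    A minimal simulation sketch of the one-invariance ESPRIT setting described above (invented data; the simple least-squares solve is shown for brevity, while the report treats the TLS version and its multiple-invariance generalization in a subspace-fitting framework):

        import numpy as np

        m, d, N = 8, 2, 2000                     # sensors, sources, snapshots
        angles = np.deg2rad([10.0, 35.0])
        rng = np.random.default_rng(1)
        A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))  # ULA steering
        S = rng.normal(size=(d, N)) + 1j * rng.normal(size=(d, N))       # source signals
        X = A @ S + 0.05 * (rng.normal(size=(m, N)) + 1j * rng.normal(size=(m, N)))

        R = X @ X.conj().T / N                   # sample covariance
        _, V = np.linalg.eigh(R)
        Es = V[:, -d:]                           # signal subspace, shared by subarrays
        Psi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]  # shift invariance
        est = np.rad2deg(np.arcsin(np.angle(np.linalg.eigvals(Psi)) / np.pi))
        print(np.sort(est))                      # close to [10, 35] degrees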

  • 46.
    Wahlberg, Bo
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Hannan, Edward J
    Australian National University, Australia.
    Parametric Signal Modelling using Laguerre Filters1990Report (Other academic)
    Abstract [en]

    Autoregressive (AR) modelling is generalized by replacing the delay operator by discrete Laguerre filters. The motivation is to reduce the number of parameters needed to obtain useful approximate models of stochastic processes, without increasing the computational complexity. Asymptotic statistical properties are investigated. Several AR model estimation results are extended to Laguerre models. In particular, it is shown how the choice of Laguerre time constant affects the resulting estimates. A Levinson-type algorithm for computing the Laguerre model estimates in an efficient way is also given. The Laguerre technique is illustrated by two simple examples.
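
    A minimal sketch (invented data) of the idea: the delayed-output regressors of an AR model are replaced by the outputs of a chain of discrete Laguerre filters with pole a, and the coefficients follow from ordinary least squares (the report's Levinson-type algorithm computes them more efficiently).

        import numpy as np
        from scipy.signal import lfilter

        def laguerre_regressors(u, a, n):
            # Outputs of the first n discrete Laguerre filters driven by u.
            first = lfilter([np.sqrt(1 - a**2)], [1.0, -a], u)  # low-pass front end
            regs, x = [], first
            for _ in range(n):
                regs.append(x)
                x = lfilter([-a, 1.0], [1.0, -a], x)            # all-pass shift
            return np.column_stack(regs)

        rng = np.random.default_rng(0)
        y = lfilter([1.0], [1.0, -1.5, 0.7], rng.normal(size=5000))  # toy AR(2) data
        past = np.concatenate([[0.0], y[:-1]])                  # strictly causal input
        Phi = laguerre_regressors(past, a=0.7, n=4)
        theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
        print(theta)            # Laguerre coefficients of the one-step predictor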

  • 47.
    Ottersten, Björn
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Viberg, Mats
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Kailath, Thomas
    Stanford University, CA, USA.
    Performance Analysis of the Total Least Squares ESPRIT Algorithm1989Report (Other academic)
    Abstract [en]

    The asymptotic distribution of the estimation error for the total least squares (TLS) version of ESPRIT is derived. The application to a uniform linear array is treated in some detail, and a generalization of ESPRIT to include row weighting is discussed. The Cramer-Rao bound (CRB) for the ESPRIT problem formulation is derived and, through numerical examples, found to coincide with the asymptotic variance of the TLS ESPRIT estimates. A comparison of this method to least squares ESPRIT, MUSIC, and Root-MUSIC as well as to the CRB for a calibrated array is also presented. TLS ESPRIT is found to be competitive with the other methods, and the performance is close to the calibrated CRB in many cases of practical interest. For highly correlated signals, however, the performance deviates significantly from the calibrated CRB. Simulations are included to illustrate the applicability of the theoretical results to a finite number of data.

  • 48.
    Wahlberg, Bo
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    System Identification using High-Order Models, Revisited1989Report (Other academic)
    Abstract [en]

    The traditional approach of expanding transfer functions and noise models in the delay operator to obtain predictor models linear in the parameters leads to approximations of very high order in the case of rapid sampling and/or large dispersion in time constants. By using a priori information about the time constants of the system, more appropriate expansions, closely related to Laguerre networks, are introduced and analyzed. It is shown that these expansions need much lower orders to obtain reasonable approximations and improve the numerical properties of the estimation algorithm. Consistency (error bounds), persistence of excitation conditions, and asymptotic statistical properties are investigated.

  • 49.
    Engvall, Sebastian
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Kaijser's Algorithm for Computing the Kantorovich Distance, Parallelised in CUDA2013Independent thesis Basic level (university diploma), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    This thesis presents the work of developing CPU code and GPU code for Thomas Kaijser's algorithm for calculating the Kantorovich distance, and compares the performance of the two implementations. Initially there is a rundown of the algorithm, which calculates the Kantorovich distance between two images. Thereafter we go through the CPU implementation, followed by the GPGPU implementation written in CUDA. Then the results are presented. Lastly, an analysis of the results and a discussion of possible improvements for future applications are presented.
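
    For orientation, the quantity being computed can be defined by a small linear transport program (toy data below; Kaijser's algorithm is a faster, specialised method, and this brute-force LP only serves as a definition):

        import numpy as np
        from scipy.optimize import linprog

        img1 = np.array([[0.5, 0.5], [0.0, 0.0]])     # toy 2x2 "images",
        img2 = np.array([[0.0, 0.0], [0.5, 0.5]])     # normalised to equal mass

        coords = [(i, j) for i in range(2) for j in range(2)]
        cost = np.array([[abs(a - c) + abs(b - d) for (c, d) in coords]
                         for (a, b) in coords])       # ground metric: L1 distance

        n = len(coords)
        # Constraints: row sums of the flow = img1 mass, column sums = img2 mass.
        A_eq = np.zeros((2 * n, n * n))
        for k in range(n):
            A_eq[k, k * n:(k + 1) * n] = 1            # mass leaving pixel k
            A_eq[n + k, k::n] = 1                     # mass arriving at pixel k
        b_eq = np.concatenate([img1.ravel(), img2.ravel()])
        res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        print("Kantorovich distance:", res.fun)       # 1.0 for these toy images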

  • 50.
    Ljung, Stefan
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    A Fast Algorithm to Solve Fredholm Integral Equations of the First Kind with Stationary Kernels1979Report (Other academic)