Search publications in DiVA
151 - 200 of 483
The maximum number of hits you can export from the search interface is 250. For larger extracts, use bulk retrieval (utsökningar).
  • 151.
    Hedborg, Johan
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Ringaby, Erik
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Forssén, Per-Erik
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Structure and Motion Estimation from Rolling Shutter Video (2011). In: IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2011, IEEE Xplore, 2011, pp. 17-23. Conference paper (Refereed)
    Abstract [en]

    The majority of consumer quality cameras sold today have CMOS sensors with rolling shutters. In a rolling shutter camera, images are read out row by row, and thus each row is exposed during a different time interval. A rolling-shutter exposure causes geometric image distortions when either the camera or the scene is moving, and this causes state-of-the-art structure and motion algorithms to fail. We demonstrate a novel method for solving the structure and motion problem for rolling-shutter video. The method relies on exploiting the continuity of the camera motion, both between frames, and across a frame. We demonstrate the effectiveness of our method by controlled experiments on real video sequences. We show, both visually and quantitatively, that our method outperforms standard structure and motion, and is more accurate and efficient than a two-step approach, doing image rectification and structure and motion.
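
The row-by-row readout described above means that every image row carries its own timestamp. As a minimal sketch (the constant-rate readout model and all names here are illustrative assumptions, not the paper's actual continuous-motion parametrization), the observation time of a row can be modelled as:

```python
def row_timestamp(frame_start, row, num_rows, readout_time):
    """Time at which a given row of a rolling-shutter frame is read out,
    assuming rows are read at a constant rate over the readout interval."""
    return frame_start + (row / num_rows) * readout_time

# The middle row of a 480-row frame with a 30 ms readout is observed
# 15 ms after the frame start:
mid = row_timestamp(0.0, 240, 480, 0.030)
```

A structure-and-motion method for rolling-shutter video must account for the fact that two rows of the same frame correspond to two different camera poses.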

  • 152.
    Hedborg, Johan
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Robinson, Andreas
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Robust Three-View Triangulation Done Fast (2014). In: Proceedings: 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2014, IEEE, 2014, pp. 152-157. Conference paper (Refereed)
    Abstract [en]

    Estimating the position of a 3-dimensional world point given its 2-dimensional projections in a set of images is a key component in numerous computer vision systems. There are several methods dealing with this problem, ranging from sub-optimal, linear least-squares triangulation in two views, to finding the world point that minimizes the L2 reprojection error in three views. The latter yields the statistically optimal estimate under the assumption of Gaussian noise. In this paper we present a solution to the optimal triangulation in three views. The standard approach for solving the three-view triangulation problem is to find a closed-form solution. In contrast to this, we propose a new method based on an iterative scheme. The method is rigorously tested on both synthetic and real image data with corresponding ground truth, on a midrange desktop PC and a Raspberry Pi, a low-end mobile platform. We are able to improve the precision achieved by the closed-form solvers and reach a speed-up of two orders of magnitude compared to the current state-of-the-art solver. In numbers, this amounts to around 300K triangulations per second on the PC and 30K triangulations per second on the Raspberry Pi.
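
For reference, the sub-optimal linear baseline mentioned in the abstract can be illustrated with a two-view midpoint triangulation; this is a generic textbook method, not the authors' iterative three-view solver, and the names are illustrative:

```python
def midpoint_triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays
    p(s) = c1 + s*d1 and q(t) = c2 + t*d2 (a sub-optimal linear baseline)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(c1, c2)]           # offset between camera centers
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b                          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p = [ci + s * di for ci, di in zip(c1, d1)]    # closest point on ray 1
    q = [ci + t * di for ci, di in zip(c2, d2)]    # closest point on ray 2
    return [(pi + qi) / 2.0 for pi, qi in zip(p, q)]

# Two cameras at x = -1 and x = +1 viewing the point (0, 0, 5) recover it exactly:
P = midpoint_triangulate([-1.0, 0, 0], [1.0, 0, 5.0], [1.0, 0, 0], [-1.0, 0, 5.0])
```

Minimizing the reprojection error in the images, as the paper does, is statistically preferable to this geometric midpoint once image noise is taken into account.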

  • 153.
    Hedlund, Gunnar
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Närmaskbestämning från stereoseende (2005). Independent thesis, basic level (professional degree), 20 points / 30 credits. Student thesis
    Abstract [sv]

    This thesis investigates distance estimation by means of image processing and stereo vision for a known camera setup.

    Today a large number of computational methods exist for obtaining the distance to objects, but the performance of these methods has barely been measured. This work mainly examines different block-based methods for distance estimation, and looks at the possibilities and limitations of applying established knowledge in image processing and stereo vision to distance estimation. The work was carried out at Bofors Defence AB in Karlskoga, Sweden, with the aim of eventual use in an optical sensor system. The work investigates established

    The results indicate that it is difficult to determine a complete near-field mask, i.e. distances to all visible objects, but the tested methods should still be usable point-wise to compute distances. The best method is based on computing the minimum absolute error and keeping only the most reliable values.

  • 154.
    Heinemann, Christian
    et al.
    Forschungszentrum Jülich, Germany.
    Åström, Freddie
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Baravdish, George
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Kommunikations- och transportsystem. Linköpings universitet, Tekniska högskolan.
    Krajsek, Kai
    Forschungszentrum Jülich, Germany.
    Felsberg, Michael
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Scharr, Hanno
    Forschungszentrum Jülich, Germany.
    Using Channel Representations in Regularization Terms: A Case Study on Image Diffusion (2014). In: Proceedings of the 9th International Conference on Computer Vision Theory and Applications, SciTePress, 2014, Vol. 1, pp. 48-55. Conference paper (Refereed)
    Abstract [en]

    In this work we propose a novel non-linear diffusion filtering approach for images based on their channel representation. To derive the diffusion update scheme we formulate a novel energy functional using a soft-histogram representation of image pixel neighborhoods obtained from the channel encoding. The resulting Euler-Lagrange equation yields a non-linear robust diffusion scheme with additional weighting terms stemming from the channel representation which steer the diffusion process. We apply this novel energy formulation to image reconstruction problems, showing good performance in the presence of mixtures of Gaussian and impulse-like noise, e.g. missing data. In denoising experiments on common scalar-valued images our approach performs competitively compared to other diffusion schemes as well as state-of-the-art denoising methods for the considered noise types.
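
As a point of comparison, the robust non-linear diffusion family that the channel-based scheme extends can be sketched in one dimension; the channel-representation weighting terms are omitted here, and the function and parameter names are illustrative, not the paper's formulation:

```python
def diffusion_step(u, kappa=0.1, step=0.2):
    """One explicit step of robust (Perona-Malik-style) non-linear diffusion
    on a 1-D signal: small gradients are smoothed, large gradients (edges)
    receive small diffusivity and are preserved."""
    out = list(u)
    for i in range(1, len(u) - 1):
        gl = u[i - 1] - u[i]                     # gradient toward left neighbor
        gr = u[i + 1] - u[i]                     # gradient toward right neighbor
        cl = 1.0 / (1.0 + (gl / kappa) ** 2)     # edge-stopping weights
        cr = 1.0 / (1.0 + (gr / kappa) ** 2)
        out[i] = u[i] + step * (cl * gl + cr * gr)
    return out
```

Iterating such a step smooths noise inside regions while the edge-stopping weights keep strong boundaries in place; the paper's contribution is to derive additional steering weights from the channel (soft-histogram) representation.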

  • 155.
    Heintz, Fredrik
    et al.
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska fakulteten.
    Löfgren, Fredrik
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska fakulteten.
    Linköping Humanoids: Application RoboCup 2016 Standard Platform League (2016). Conference paper (Other academic)
    Abstract [en]

    This is the application for the RoboCup 2016 Standard Platform League from the Linköping Humanoids team.

    Linköping Humanoids participated in RoboCup 2015. We didn’t do very well, but we learned a lot. When we arrived nothing worked. However, we fixed more and more of the open issues and managed to play a draw in our final game. We also participated in some of the technical challenges and scored some points. At the end of the competition we had a working team. This was both frustrating and rewarding. Analyzing the competition we have identified both what we did well and the main issues that we need to fix. One important lesson is that it takes time to develop a competitive RoboCup SPL team. We are dedicated to improving our performance over time in order to be competitive in 2017.

  • 156.
    Heintz, Fredrik
    et al.
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska fakulteten.
    Löfgren, Fredrik
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska fakulteten.
    Linköping Humanoids: Application RoboCup 2017 Standard Platform League (2017). Conference paper (Other academic)
    Abstract [en]

    This is the application for the RoboCup 2017 Standard Platform League from the Linköping Humanoids team.

    Linköping Humanoids participated in both RoboCup 2015 and 2016 with the intention of incrementally developing a good team by learning as much as possible. We significantly improved from 2015 to 2016, even though we still didn’t perform very well. Our main challenge is that we are building our software from the ground up using the Robot Operating System (ROS) as the integration and development infrastructure. When the system became overloaded, the ROS infrastructure became very unpredictable. This made it very hard to debug during the contest, so we basically had to remove things until the load was constantly low. Our top priority has since been to make the system stable and more resource efficient. This will take us to the next level.

    From the start we have been clear that our goal is to have a competitive team by 2017. Since we are developing our own software from scratch, we are well aware that we need time to build up the competence and the software infrastructure. We believe we are making good progress towards this goal. The team of about 10 students has been very actively working during the fall, with weekly workshops and bi-weekly one-day hackathons.

  • 157.
    Hellsten, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap.
    Evaluation of tone mapping operators for use in real time environments (2007). Independent thesis, advanced level (Master's degree), 20 points / 30 credits. Student thesis
    Abstract [en]

    As real time visualizations become more realistic it also becomes more important to simulate the perceptual effects of the human visual system. Such effects include the response to varying illumination, glare and differences between photopic and scotopic vision. This thesis evaluates several different tone mapping methods to allow a greater dynamic range to be used in real time visualizations. Several tone mapping methods have been implemented in the Avalanche Game Engine and evaluated using a small test group. To increase immersion in the visualization, several filters aimed at simulating perceptual effects have also been implemented. The primary goal of these filters is to simulate scotopic vision. The tests showed that two tone mapping methods would be suitable for the environment used in the tests. The S-curve tone mapping method gave the best result, while the Mean Value method gave good results while being the simplest and cheapest to implement. The test subjects agreed that the simulation of scotopic vision enhanced the immersion in a visualization. The primary difficulties in this work have been the lack of dynamic range in the input images and the challenges of coding real time graphics using a graphics processing unit.
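
The S-curve operator that fared best in the evaluation belongs to a family of sigmoid compression curves. A minimal sketch of such a curve (the `midpoint` and `contrast` parameters are illustrative assumptions, not the exact operator evaluated in the thesis):

```python
def s_curve_tonemap(luminance, midpoint=0.18, contrast=1.0):
    """Sigmoid S-curve: compresses an HDR luminance in [0, inf) into [0, 1),
    mapping `midpoint` to 0.5 and saturating smoothly toward 1."""
    ln = luminance ** contrast
    return ln / (ln + midpoint ** contrast)
```

The appeal of such curves for real-time use is that they are cheap per pixel, monotonic, and never clip: arbitrarily bright inputs approach but never reach 1.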

  • 158.
    Hemstrom, Jennifer
    et al.
    Linköpings universitet, Medicinska fakulteten. Univ British Columbia, Canada.
    Albonico, Andrea
    Univ British Columbia, Canada.
    Djouab, Sarra
    Univ British Columbia, Canada; Univ Auvergne, France.
    Barton, Jason J. S.
    Univ British Columbia, Canada.
    Visual search for complex objects: Set-size effects for faces, words and cars (2019). In: Vision Research, ISSN 0042-6989, E-ISSN 1878-5646, Vol. 162. Journal article (Refereed)
    Abstract [en]

    To compare visual processing for different object types, we developed visual search tests that generated accuracy and response time parameters, including an object set-size effect that indexes perceptual processing load. Our goal was to compare visual search for two expert object types, faces and visual words, as well as a less expert type, cars. We first asked if faces and words showed greater inversion effects in search. Second, we determined whether search with upright stimuli correlated with other perceptual indices. Last we assessed for correlations between tests within a single orientation, and between orientations for a single object type. Object set-size effects were smaller for faces and words than cars. All accuracy and temporal measures showed an inversion effect for faces and words, but not cars. Face-search accuracy measures correlated with accuracy on the Cambridge Face Memory Test and word-search temporal measures correlated with single-word reading times, but car search did not correlate with semantic car knowledge. There were cross-orientation correlations for all object types, as well as cross-object correlations in the inverted orientation, while in the upright orientation face search did not correlate with word or car search. We conclude that object search shows effects of expertise. Compared to cars, words and faces showed smaller object set-size effects, greater inversion effects, and their search results correlated with other indices of perceptual expertise. The correlation analyses provide preliminary evidence supporting contributions from common processes in the case of inverted stimuli, object-specific processes that operate in both orientations, and distinct processing for upright faces.

  • 159.
    Henriksson, Markus
    et al.
    FOI.
    Olofsson, Tomas
    FOI.
    Grönwall, Christina
    FOI.
    Brännlund, Carl
    FOI.
    Sjöqvist, Lars
    FOI.
    Optical reflectance tomography using TCSPC laser radar (2012). In: Proc. SPIE, 2012, Vol. 8542. Conference paper (Refereed)
    Abstract [en]

    Tomographic signal processing is used to transform multiple one-dimensional range profiles of a target from different angles to a two-dimensional image of the object. The range profiles are measured by a time-correlated single-photon counting (TCSPC) laser radar system with approximately 50 ps range resolution and a field of view that is wide compared to the measured objects. Measurements were performed in a lab environment with the targets mounted on a rotation stage. We show successful reconstruction of 2D projections along the rotation axis of a boat model and removal of artefacts using a mask based on the convex hull. The independence of spatial resolution and the high sensitivity at first glance make this an interesting technology for very long range identification of passing objects such as high altitude UAVs and orbiting satellites, but also the opposite problem of ship identification from high altitude platforms. To obtain an image with useful information, measurements from a large angular sector around the object are needed, which is hard to obtain in practice. Examples of reconstructions using 90 and 150° sectors are given. In addition, the projection of the final image is along the rotation axis for the measurement, and if this is not aligned with a major axis of the target the image information is limited. There are also practical problems to solve, for example that the distance from the sensor to the rotation centre needs to be known with an accuracy corresponding to the measurement resolution. The conclusion is that laser radar tomography is useful only when the sensor is fixed and the target rotates around its own axis.
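
The core tomographic step, turning 1-D range profiles from several angles into a 2-D image, can be illustrated with plain (unfiltered) backprojection over a pixel grid. This is the textbook operation, not FOI's full processing chain, and all names and the unit bin spacing are illustrative assumptions:

```python
import math

def backproject(profiles, angles, half_size):
    """Unfiltered backprojection of 1-D range profiles onto a square grid.
    profiles[k][i] is the return intensity in range bin i (unit bin spacing)
    measured at viewing angle angles[k] (radians); the grid covers integer
    pixel coordinates in [-half_size, half_size]^2."""
    n = 2 * half_size + 1
    image = [[0.0] * n for _ in range(n)]
    for profile, theta in zip(profiles, angles):
        for yi in range(n):
            for xi in range(n):
                x, y = xi - half_size, yi - half_size
                r = x * math.cos(theta) + y * math.sin(theta)  # signed range of pixel
                bin_i = int(round(r)) + half_size
                if 0 <= bin_i < len(profile):
                    image[yi][xi] += profile[bin_i]
    return image

# A point target at (x, y) = (1, 0) seen from 0 and 90 degrees: each profile
# smears into a line across the grid, and the lines overlap only at the target.
img = backproject([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]], [0.0, math.pi / 2], 1)
```

With only two angles the reconstruction is badly streaked, which illustrates the abstract's point that a large angular sector around the object is needed for a useful image.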

  • 160.
    Henrysson, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Bringing Augmented Reality to Mobile Phones (2007). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    With its mixing of real and virtual, Augmented Reality (AR) is a technology that has attracted much attention from the research community and is seen as a perfect way to visualize context-related information. Computer generated graphics are presented to the user overlaid on, and registered with, the real world, hence augmenting it. Promising intelligence amplification and higher productivity, AR has been intensively researched over several decades but has yet to reach a broad audience.

    This thesis presents efforts in bringing Augmented Reality to mobile phones and thus to the general public. Implementing technologies on limited devices, such as mobile phones, poses a number of challenges that differ from traditional research directions. These include limited computational resources, with little or no possibility to upgrade or add hardware, and limited input and output capabilities for interactive 3D graphics. The research presented in this thesis addresses these challenges and makes contributions in the following areas:

    Mobile Phone Computer Vision-Based Tracking

    The first contribution of this thesis has been to migrate computer vision algorithms for tracking the mobile phone camera in a real world reference frame - a key enabling technology for AR. To tackle performance issues, low-level optimized code, using fixed-point algorithms, has been developed.

    Mobile Phone 3D Interaction Techniques

    Another contribution of this thesis has been to research interaction techniques for manipulating virtual content. This is in part realized by exploiting camera tracking for position-controlled interaction where motion of the device is used as input. Gesture input, made possible by a separate front camera, is another approach that is investigated. The obtained results are not unique to AR and could also be applicable to general mobile 3D graphics.

    Novel Single User AR Applications

    With short range communication technologies, mobile phones can exchange data not only with other phones but also with an intelligent environment. Data can be obtained for tracking or visualization; displays can be used to render graphics with the tracked mobile phone acting as an interaction device. Work is presented where a mobile phone harvests a sensor-network to use AR to visualize live data in context.

    Novel Collaboration AR Applications

    One of the most promising areas for mobile phone based AR is enhancing face-to-face computer supported cooperative work. This is because the AR display permits non-verbal cues to be used to a larger extent. In this thesis, face-to-face collaboration has been researched to examine whether AR increases awareness of collaboration partners even on small devices such as mobile phones. User feedback indicates that this is the case, confirming the hypothesis that mobile phones are increasingly able to deliver an AR experience to a large audience.

    List of papers
    1. Face to Face Collaborative AR on Mobile Phones
    2005 (English). In: Proceedings of the Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality, 2005, pp. 80-89. Conference paper, published paper (Other academic)
    Abstract [en]

    Mobile phones are an ideal platform for augmented reality. In this paper we describe how they also can be used to support face to face collaborative AR applications. We have created a custom port of the ARToolKit library to the Symbian mobile phone operating system and then developed a sample collaborative AR game based on this. We describe the game in detail and user feedback from people who have played it. We also provide general design guidelines that could be useful for others who are developing mobile phone collaborative AR applications.

    National subject category
    Engineering and technology
    Identifiers
    urn:nbn:se:liu:diva-12743 (URN) 10.1109/ISMAR.2005.32 (DOI)
    Available from: 2007-11-20 Created: 2007-11-20 Last updated: 2011-01-04
    2. Virtual Object Manipulation using a Mobile Phone
    2005 (English). In: Proceedings of the 2005 International Conference on Augmented Tele-Existence, 2005, pp. 164-171. Conference paper, published paper (Other academic)
    Abstract [en]

    Augmented Reality (AR) on mobile phones has reached a level of maturity where it can be used as a tool for 3D object manipulation. In this paper we look at user interface issues where an AR enabled mobile phone acts as an interaction device. We discuss how traditional 3D manipulation techniques apply to this new platform. The high tangibility of the device and its button interface makes it interesting to compare manipulation techniques. We describe AR manipulation techniques we have implemented on a mobile phone and present a small pilot study evaluating these methods.

    Keywords
    augmented reality, manipulation, mobile phone
    National subject category
    Engineering and technology
    Identifiers
    urn:nbn:se:liu:diva-12744 (URN) 10.1145/1152399.1152430 (DOI)
    Available from: 2007-11-20 Created: 2007-11-20 Last updated: 2011-01-04
    3. Experiments in 3D Interaction for Mobile Phone AR
    2007 (English). In: Proceedings of the 5th International Conference on Computer Graphics and Interactive Techniques in Australia and Southeast Asia, Perth, Australia. New York: The Association for Computing Machinery, Inc., 2007, pp. 187-194. Chapter in book, part of anthology (Other academic)
    Abstract [en]

    In this paper we present an evaluation of several different techniques for virtual object positioning and rotation on a mobile phone. We compare gesture input captured by the phone's front camera, to tangible input, keypad interaction and phone tilting in increasingly complex positioning and rotation tasks in an AR context. Usability experiments found that tangible input techniques are best for translation tasks, while keypad input is best for rotation tasks. Implications for the design of mobile phone 3D interfaces are presented as well as directions for future research.

    Place, publisher, year, edition, pages
    New York: The Association for Computing Machinery, Inc., 2007
    Keywords
    3D interaction, augmented reality, mobile graphics
    National subject category
    Computer vision and robotics (autonomous systems)
    Identifiers
    urn:nbn:se:liu:diva-12745 (URN) 10.1145/1321261.1321295 (DOI) 978-1-59593-912-8 (ISBN)
    Available from: 2007-11-20 Created: 2007-11-20 Last updated: 2018-01-13. Bibliographically reviewed
    4. Mobile Phone Based AR Scene Assembly
    2005 (English). In: Proceedings of the 4th International Conference on Mobile and Ubiquitous Multimedia, 2005, pp. 95-102. Conference paper, published paper (Other academic)
    Abstract [en]

    In this paper we describe a mobile phone based Augmented Reality application for 3D scene assembly. Augmented Reality on mobile phones extends the interaction capabilities of such handheld devices. It adds a 6 DOF isomorphic interaction technique for manipulating 3D content. We give details of an application that we believe to be the first where 3D content can be manipulated using both the movement of a camera-tracked mobile phone and a traditional button interface as input for transformations. By centering the scene in a tangible marker space in front of the phone we provide a means for bimanual interaction. We describe the implementation, the interaction techniques we have developed and initial user response to trying the application.

    Keywords
    CAD, augmented reality, mobile phone
    National subject category
    Engineering and technology
    Identifiers
    urn:nbn:se:liu:diva-12746 (URN) 10.1145/1149488.1149504 (DOI)
    Available from: 2007-11-20 Created: 2007-11-20 Last updated: 2011-01-04
    5. Using a Mobile Phone for 6DOF Mesh Editing
    2007 (English). In: Proceedings of the 7th ACM SIGCHI New Zealand Chapter's International Conference on Computer-Human Interaction: Design Centered HCI, 2007, pp. 9-16. Chapter in book, part of anthology (Other academic)
    Abstract [en]

    This paper describes how a mobile phone can be used as a six degree of freedom interaction device for 3D mesh editing. Using a video see-through Augmented Reality approach, the mobile phone meets several design guidelines for a natural, easy to learn, 3D human computer interaction device. We have developed a system that allows a user to select one or more vertices in an arbitrary sized polygon mesh and freely translate and rotate them by translating and rotating the device itself. The mesh is registered in 3D and viewed through the device and hence the system provides a unified perception-action space. We present the implementation details and discuss the possible advantages and disadvantages of this approach.

    Keywords
    3D interfaces, content creation, mobile computer graphics, mobile phone augmented reality
    National subject category
    Computer vision and robotics (autonomous systems)
    Identifiers
    urn:nbn:se:liu:diva-12747 (URN) 10.1145/1278960.1278962 (DOI) 1-59593-473-1 (ISBN)
    Available from: 2007-11-20 Created: 2007-11-20 Last updated: 2018-01-13. Bibliographically reviewed
    6. Interactive Collaborative Scene Assembly Using AR on Mobile Phones
    2006 (English). In: Artificial Reality and Telexistence, ICAT, Springer, 2006, pp. 1008-1017. Conference paper, published paper (Refereed)
    Abstract [en]

    In this paper we present and evaluate a platform for interactive collaborative face-to-face Augmented Reality using a distributed scene graph on mobile phones. The results of individual actions are viewed on the screen in real-time on every connected phone. We show how multiple collaborators can use consumer mobile camera phones to furnish a room together in an Augmented Reality environment. We have also presented a user case study to investigate how untrained users adopt this novel technology and to study the collaboration between multiple users. The platform is totally independent of a PC server though it is possible to connect a PC client to be used for high quality visualization on a big screen device such as a projector or a plasma display.

    Place, publisher, year, edition, pages
    Springer, 2006
    Series
    Lecture Notes in Computer Science, ISSN 1611-3349 ; 4282
    National subject category
    Engineering and technology
    Identifiers
    urn:nbn:se:liu:diva-12748 (URN) 10.1007/11941354_104 (DOI)
    Available from: 2007-11-20 Created: 2007-11-20 Last updated: 2009-04-22
    7. A Novel Interface to Sensor Networks using Handheld Augmented Reality
    2006 (English). In: Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, Espoo, Finland, 2006, pp. 145-148. Conference paper, published paper (Other academic)
    Abstract [en]

    Augmented Reality technology enables a mobile phone to be used as an x-ray tool, visualizing structures and states not visible to the naked eye. In this paper we evaluate a set of techniques used to augment the world with a visualization of data from a sensor network. Combining virtual and real information introduces challenges, as information from the two domains might interfere. We have applied our system to humidity data and present a user study together with feedback from domain experts. The prototype system can be seen as the first step towards a novel tool for inspection of building elements.

    Keywords
    Algorithms, Design, Human Factors, Measurement, intelligent environments, mobile phone augmented reality, sensor networks, visualization
    National subject category
    Engineering and technology
    Identifiers
    urn:nbn:se:liu:diva-12749 (URN) 10.1145/1152215.1152245 (DOI)
    Available from: 2007-11-20 Created: 2007-11-20 Last updated: 2015-09-22
    8. LUMAR: A Hybrid Spatial Display System for 2D and 3D Handheld Augmented Reality
    2007 (English). In: 17th International Conference on Artificial Reality and Telexistence (ICAT 2007), Esbjerg, Denmark, 2007. Los Alamitos, CA, USA: IEEE Computer Society Press, 2007, pp. 63-70. Conference paper, published paper (Other academic)
    Abstract [en]

    LUMAR is a hybrid system for spatial displays, allowing cell phones to be tracked in 2D and 3D through combined egocentric and exocentric techniques based on the Light-Sense and UMAR frameworks. LUMAR differs from most other spatial display systems based on mobile phones with its three-layered information space. The hybrid spatial display system consists of printed matter that is augmented with context-sensitive, dynamic 2D media when the device is on the surface, and with overlaid 3D visualizations when it is held in mid-air.

    Place, publisher, year, edition, pages
    Los Alamitos, CA, USA: IEEE Computer Society Press, 2007
    Keywords
    spatially aware, portable, mobile, handheld, cell, phone, augmented reality, mixed reality, ubiquitous
    National subject category
    Computer vision and robotics (autonomous systems)
    Identifiers
    urn:nbn:se:liu:diva-12750 (URN) 10.1109/ICAT.2007.13 (DOI)
    Conference
    17th International Conference on Artificial Reality and Telexistence (ICAT 2007), Esbjerg, Denmark, 2007
    Available from: 2007-11-20 Created: 2007-11-20 Last updated: 2018-03-05
  • 161.
    Henrysson, Anders
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Billinghurst, Mark
    University of Canterbury, Christchurch, New Zealand.
    Using a Mobile Phone for 6DOF Mesh Editing2007Ingår i: Proceedings of the 7th ACM SIGCHI New Zealand Chapter's international Conference on Computer-Human interaction: Design Centered HCI., 2007, s. 9-16Kapitel i bok, del av antologi (Övrigt vetenskapligt)
    Abstract [en]

    This paper describes how a mobile phone can be used as a six-degree-of-freedom interaction device for 3D mesh editing. Using a video see-through Augmented Reality approach, the mobile phone meets several design guidelines for a natural, easy-to-learn 3D human-computer interaction device. We have developed a system that allows a user to select one or more vertices in an arbitrarily sized polygon mesh and freely translate and rotate them by translating and rotating the device itself. The mesh is registered in 3D and viewed through the device, and hence the system provides a unified perception-action space. We present the implementation details and discuss the possible advantages and disadvantages of this approach.

  • 162.
    Henrysson, Anders
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Marshall, Joe
    University of Nottingham.
    Billinghurst, Mark
    University of Canterbury, Christchurch, New Zealand.
    Experiments in 3D Interaction for Mobile Phone AR2007Ingår i: Proceedings of the 5th international conference on Computer graphics and interactive techniques in Australia and Southeast Asia, Perth, Australia, New York: The Association for Computing Machinery, Inc. , 2007, s. 187-194Kapitel i bok, del av antologi (Övrigt vetenskapligt)
    Abstract [en]

    In this paper we present an evaluation of several different techniques for virtual object positioning and rotation on a mobile phone. We compare gesture input captured by the phone's front camera, to tangible input, keypad interaction and phone tilting in increasingly complex positioning and rotation tasks in an AR context. Usability experiments found that tangible input techniques are best for translation tasks, while keypad input is best for rotation tasks. Implications for the design of mobile phone 3D interfaces are presented as well as directions for future research.

  • 163.
    Hermosilla, P.
    et al.
    Ulm Univ, Germany.
    Maisch, S.
    Ulm Univ, Germany.
    Ritschel, T.
    UCL, England.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Ulm Univ, Germany.
    Deep-learning the Latent Space of Light Transport2019Ingår i: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, nr 4, s. 207-217Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image using a dedicated 3D-2D network in a second step. We will show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.

  • 164.
    Holmer, Stefan
    Linköpings universitet, Institutionen för systemteknik.
    Implementation and evaluation of content-aware video retargeting techniques2008Självständigt arbete på avancerad nivå (yrkesexamen), 20 poäng / 30 hpStudentuppsats
    Abstract [sv]

    The aim of this thesis has been to study techniques for changing the aspect ratio of video sequences while taking the image content into account. The focus has been on a generalization of "seam carving" to video, and on the possibilities of combining different techniques to achieve better quality both for video sequences consisting of a single shot and for those consisting of several shots. This also entailed extensive studies of automatic shot detection and of different measures of video content. The work has resulted in a prototype application, developed in Matlab, for semi-automatic content-aware aspect-ratio conversion. Three methods are implemented in the prototype: "seam carving", automated "pan & scan", and downsampling with bicubic interpolation. These methods have been evaluated and compared with each other from a content-preservation perspective and a quality perspective.
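    The seam-carving generalization studied in this thesis rests on a dynamic-programming search for a minimum-energy seam. A minimal sketch of the single-image case, in illustrative Python (not the thesis code; the energy map is assumed given, where a real system would derive it from image gradients):

```python
# Minimal seam-carving sketch: find and remove one vertical seam from a
# 2D energy map represented as a list of lists. Names are illustrative.

def find_vertical_seam(energy):
    """Return the column index per row of a minimum-energy vertical seam."""
    rows, cols = len(energy), len(energy[0])
    # cost[r][c] = minimal cumulative energy of a seam ending at (r, c)
    cost = [row[:] for row in energy]
    for r in range(1, rows):
        for c in range(cols):
            best = cost[r - 1][c]
            if c > 0:
                best = min(best, cost[r - 1][c - 1])
            if c < cols - 1:
                best = min(best, cost[r - 1][c + 1])
            cost[r][c] += best
    # backtrack from the cheapest bottom cell
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        candidates = [cc for cc in (c - 1, c, c + 1) if 0 <= cc < cols]
        seam.append(min(candidates, key=lambda cc: cost[r][cc]))
    seam.reverse()
    return seam

def remove_seam(image, seam):
    """Drop one pixel per row, shrinking the width by one."""
    return [row[:c] + row[c + 1:] for row, c in zip(image, seam)]
```

Repeating the find/remove pair narrows the image one column at a time; the video generalization additionally has to keep seams temporally coherent across frames.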

  • 165.
    Holmquist, Karl
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Senel, Deniz
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Computing a Collision-Free Path using the monogenic scale space2018Ingår i: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018, s. 8097-8102Konferensbidrag (Refereegranskat)
    Abstract [en]

    Mobile robots have been used for various purposes with different functionalities which require them to freely move in environments containing both static and dynamic obstacles to accomplish given tasks. One of the most relevant capabilities in terms of navigating a mobile robot in such an environment is to find a safe path to a goal position. This paper shows that there exists an accurate solution to the Laplace equation which allows finding a collision-free path and that it can be efficiently calculated for a rectangular bounded domain such as a map which is represented as an image. This is accomplished by the use of the monogenic scale space resulting in a vector field which describes the attracting and repelling forces from the obstacles and the goal. The method is shown to work in reasonably convex domains and by the use of tessellation of the environment map for non-convex environments.
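    The core idea of a collision-free path from a solution of the Laplace equation can be sketched numerically: hold obstacles at a high potential and the goal at zero, relax toward a harmonic function, and descend it. This toy sketch uses plain Gauss-Seidel iteration on a grid; the paper instead obtains the field efficiently via the monogenic scale space, and all names here are illustrative.

```python
# Harmonic-potential path planning sketch: obstacles fixed at 1, goal at 0.

def solve_laplace(grid, goal, iters=500):
    """grid: 2D list where 1 marks obstacle cells. Returns the potential."""
    rows, cols = len(grid), len(grid[0])
    phi = [[1.0 for _ in range(cols)] for _ in range(rows)]
    phi[goal[0]][goal[1]] = 0.0
    for _ in range(iters):
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                if grid[r][c] == 1 or (r, c) == goal:
                    continue  # boundary conditions stay fixed
                phi[r][c] = 0.25 * (phi[r - 1][c] + phi[r + 1][c] +
                                    phi[r][c - 1] + phi[r][c + 1])
    return phi

def descend(phi, start, goal, max_steps=100):
    """Follow steepest descent of the potential; harmonic fields have no
    interior local minima, so the walk cannot get stuck before the goal."""
    path, pos = [start], start
    for _ in range(max_steps):
        if pos == goal:
            break
        r, c = pos
        neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        pos = min(neighbours, key=lambda p: phi[p[0]][p[1]])
        path.append(pos)
    return path
```

The absence of interior local minima is exactly why a harmonic potential yields a collision-free path in convex domains, matching the paper's restriction to reasonably convex regions (with tessellation for the non-convex case).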

  • 166.
    Horney, Tobias
    et al.
    Swedish Defence Research Agency, Sweden.
    Ahlberg, Jörgen
    Swedish Defence Research Agency, Sweden.
    Grönwall, Christina
    Swedish Defence Research Agency, Sweden.
    Folkesson, Martin
    Swedish Defence Research Agency, Sweden.
    Silvervarg, Karin
    Swedish Defence Research Agency, Sweden.
    Fransson, Jörgen
    Swedish Defence Research Agency, Sweden.
    Klasén, Lena
    Swedish Defence Research Agency, Sweden.
    Jungert, Erland
    Swedish Defence Research Agency, Sweden.
    Lantz, Fredrik
    Swedish Defence Research Agency, Sweden.
    Ulvklo, Morgan
    Swedish Defence Research Agency, Sweden.
    An information system for target recognition2004Ingår i: Volume 5434 Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications / [ed] Belur V. Dasarathy, SPIE - International Society for Optical Engineering, 2004, s. 163-175Konferensbidrag (Refereegranskat)
    Abstract [en]

    We present an approach to a general decision support system. The aim is to cover the complete process for automatic target recognition, from sensor data to the user interface. The approach is based on a query-based information system, and includes tasks such as feature extraction from sensor data, data association, data fusion, and situation analysis. Currently, we are working with data from laser radar, infrared cameras, and visual cameras, studying target recognition from cooperating sensors on one or several platforms. The sensors are typically airborne and at low altitude. The processing of sensor data is performed in two steps. First, several attributes are estimated from the (unknown but detected) target. The attributes include orientation, size, speed, temperature, etc. These estimates are used to select the models of interest in the matching step, where the target is matched against a number of target models, returning a likelihood value for each model. Several methods and sensor data types are used in both steps. The user communicates with the system via a visual user interface, where, for instance, the user can mark an area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query language developed for this type of application, and an ontological system decides which algorithms should be invoked and which sensor data should be used. The output from the sensors is fused by a fusion module and answers are given back to the user. The user does not need any detailed technical knowledge about the sensors (or about which sensors are available), and new sensors and algorithms can easily be plugged into the system.

  • 167.
    Hotz, Ingrid
    et al.
    University of California, Davis, USA.
    Feng, Louis
    University of California, Davis, USA.
    Hagen, Hans
    University of Kaiserslautern.
    Hamann, Bernd
    University of California, Davis, USA.
    Joy, Ken
    University of California, Davis, USA.
    Tensor Field Visualization Using a Metric Interpretation2006Ingår i: Visualization and Image Processing of Tensor Fields / [ed] Joachim Weickert, Hans Hagen, Springer, 2006, s. 269-281Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    This chapter introduces a visualization method specifically tailored to the class of tensor fields with properties similar to stress and strain tensors. Such tensor fields play an important role in many application areas such as structure mechanics or solid state physics. The presented technique is a global method that represents the physical meaning of these tensor fields with their central features: regions of compression or expansion. The method consists of two steps: first, the tensor field is interpreted as a distortion of a flat metric with the same topological structure; second, the resulting metric is visualized using a texture-based approach. The method supports an intuitive distinction between positive and negative eigenvalues.
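    The texture-based step mentioned above is in the spirit of line integral convolution (LIC): a noise image is averaged along the field's direction so that streaks reveal the eigenvector structure. A toy sketch for the degenerate case of a purely horizontal field, in illustrative Python (function name and kernel are hypothetical; real LIC traces curved streamlines):

```python
# Toy LIC sketch: box-average each row of a noise image, i.e. convolve
# along a constant horizontal vector field.

def lic_1d_rows(noise, length=3):
    """Average each pixel with its row neighbours within +/- length."""
    out = []
    for row in noise:
        n = len(row)
        smoothed = []
        for c in range(n):
            lo, hi = max(0, c - length), min(n, c + length + 1)
            segment = row[lo:hi]
            smoothed.append(sum(segment) / len(segment))
        out.append(smoothed)
    return out
```

The result is constant along the field direction and noisy across it, which is what makes the eigenvector directions visible in the final texture.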

  • 168.
    Hotz, Ingrid
    et al.
    Universtiy of California,Davis, USA.
    Feng, Louis
    Universtiy of California,Davis, USA.
    Hagen, Hans
    University of Kaiserslautern,Germany.
    Hamann, Bernd
    University of California, Davis, USA.
    Joy, Ken
    University of California, Davis, USA.
    Jeremic, Boris
    University of California, Davis, USA.
    Physically Based Methods for Tensor Field Visualization2004Konferensbidrag (Refereegranskat)
    Abstract [en]

    The physical interpretation of mathematical features of tensor fields is highly application-specific. Existing visualization methods for tensor fields only cover a fraction of the broad application areas. We present a visualization method tailored specifically to the class of tensor fields exhibiting properties similar to stress and strain tensors, which are commonly encountered in geomechanics. Our technique is a global method that represents the physical meaning of these tensor fields with their central features: regions of compression or expansion. The method is based on two steps: first, we define a positive definite metric with the same topological structure as the tensor field; second, we visualize the resulting metric. The eigenvector fields are represented using a texture-based approach resembling line integral convolution (LIC) methods. The eigenvalues of the metric are encoded in free parameters of the texture definition. Our method supports an intuitive distinction between positive and negative eigenvalues. We have applied our method to synthetic and some standard data sets, and to "real" data from earth science and mechanical engineering applications.

  • 169.
    Hotz, Ingrid
    et al.
    Universtiy of California, Davis.
    Feng, Louis
    University of California, Davis.
    Hamann, Bernd
    University of California, Davis, USA.
    Joy, Ken
    University of California, Davis, USA.
    Tensor-fields Visualization using a Fabric like Texture on Arbitrary two-dimensional Surfaces2009Ingår i: Mathematical Foundations of Scientific Visualization / [ed] Torsten Möller,Bernd Hamann,Robert D. Russell, Springer, 2009, s. 139-155Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    We present a visualization method for three-dimensional tensor fields based on the idea of a stretched or compressed piece of fabric used as a “texture” for two-dimensional surfaces. Texture parameters such as the fabric density reflect the physical properties of the tensor field. This method is especially appropriate for the visualization of stress and strain tensor fields, which play an important role in many application areas including mechanics and solid state physics. To allow an investigation of a three-dimensional field, we use a scalar field that defines a one-parameter family of iso-surfaces controlled by their iso-value. This scalar field can be a “connected” scalar field, for example pressure, or an additional scalar field representing some symmetry or inherent structure of the dataset. Texture generation consists of three basic steps. The first is the transformation of the tensor field into a positive definite metric. The second step is the generation of an input for the final texture generation using line integral convolution (LIC). This input image consists of “bubbles” whose shape and density are controlled by the eigenvalues of the tensor field. This spot image incorporates the entire information content defined by the three eigenvalue fields. Convolving this input texture in the direction of the eigenvector fields provides a continuous representation. The method supports an intuitive distinction between positive and negative eigenvalues and supports the additional visualization of a connected scalar field.

  • 170.
    Hotz, Ingrid
    et al.
    University of Kaiserslautern.
    Hagen, Hans
    University of Kaiserslautern.
    Isometric Embedding for a Discrete Metric2004Ingår i: Geometric Modeling for Scientific Visualization / [ed] Guido Brunnett ,Bernd Hamann,Heinrich Müller ,Lars Linsen, Springer, 2004, 1, s. 19-36Kapitel i bok, del av antologi (Refereegranskat)
  • 171.
    Hotz, Ingrid
    et al.
    Zuse Institute Berlin, Berlin, Germany.
    Peikert, Ronald
    ETH Zurich, Zurich, Switzerland .
    Definition of a Multifield2014Ingår i: Scientific Visualization: Uncertainty, Multifield, Biomedical, and Scalable Visualization / [ed] Charles D. Hansen; Min Chen; Christopher R. Johnson; Arie E. Kaufman; Hans Hagen, Springer London, 2014, s. 105-109Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    A challenge that visualization often faces is the complex structure of scientific data. Complexity can arise in various ways: from high dimensionalities of domains and ranges, time series of measurements, ensemble simulations, to heterogeneous collections of data, such as combinations of measured and simulated data. Many of these complexities can be subsumed under a concept of multifields, and in fact, multifield visualization has been identified as one of the major current challenges in scientific visualization. In this chapter, we propose a multifield definition, which allows a systematic approach to discussing related research.

  • 172.
    Hotz, Ingrid
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Schultz, Thomas
    University of Bonn, Germany.
    Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data (Dagstuhl’14)2015Samlingsverk (redaktörskap) (Refereegranskat)
    Abstract [en]
    • Transferring results from one application to another between which there is otherwise little exchange
    • Bringing together ideas from applications and theory: applications can stimulate new basic research, and basic results can be of great use in the applications
    • Summarizing the state of the art and major open questions in the field
    • Presenting new and innovative work capable of advancing the field
  • 173.
    Hotz, Ingrid
    et al.
    University of California, USA.
    Sreevalsan-Nair, Jaya
    University of California, USA.
    Hagen, Hans
    Technical University of Kaiserslautern,Kaiserslautern, Germany.
    Hamann, Bernd
    University of California, USA.
    Tensor Field Reconstruction Based on Eigenvector and Eigenvalue Interpolation2010Ingår i: Dagstuhl Follow-Ups, E-ISSN 1868-8977, Vol. 1, s. 110-123Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Interpolation is an essential step in the visualization process. While most data from simulations or experiments are discrete, many visualization methods are based on smooth, continuous data approximation or interpolation methods. We introduce a new interpolation method for symmetric tensor fields given on a triangulated domain. In contrast to standard tensor field interpolation, which is based on the tensor components, we use tensor invariants, eigenvectors and eigenvalues, for the interpolation. This interpolation minimizes the number of eigenvector and eigenvalue computations by restricting them to mesh vertices, and makes an exact integration of the tensor lines possible. The tensor field topology is qualitatively the same as for the component-wise interpolation. Since the interpolation decouples the “shape” and “direction” interpolation, it is shape-preserving, which is especially important for tracing fibers in diffusion MRI data.
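    The shape/direction decoupling can be illustrated for symmetric 2x2 tensors: blend eigenvalues and the major-eigenvector angle separately, then reassemble. This is a hedged sketch of the idea only, not the authors' exact scheme (which operates on a triangulated mesh and handles eigenvector sign consistency); names are illustrative.

```python
# Eigen-based interpolation of two symmetric 2x2 tensors.

import numpy as np

def eigen_interpolate(T0, T1, t):
    """Interpolate eigenvalues ("shape") and orientation ("direction")."""
    w0, V0 = np.linalg.eigh(T0)   # ascending eigenvalues, column vectors
    w1, V1 = np.linalg.eigh(T1)
    w = (1 - t) * w0 + t * w1     # blend eigenvalues directly
    # blend the angle of the major (largest-eigenvalue) eigenvector
    a0 = np.arctan2(V0[1, 1], V0[0, 1])
    a1 = np.arctan2(V1[1, 1], V1[0, 1])
    a = (1 - t) * a0 + t * a1
    major = np.array([np.cos(a), np.sin(a)])
    minor = np.array([-np.sin(a), np.cos(a)])
    V = np.column_stack([minor, major])
    return V @ np.diag(w) @ V.T   # reassemble the symmetric tensor
```

Component-wise blending would instead shrink anisotropy when orientations differ; blending invariants preserves the tensor's shape along the interpolation path.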

  • 174.
    Hultberg, Johanna
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Dehazing of Satellite Images2018Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    The aim of this work is to find a method for removing haze from satellite imagery. This is done by taking two algorithms developed for images taken from the surface of the earth and adapting them for satellite images. The two algorithms are Single Image Haze Removal Using Dark Channel Prior by He et al. and Color Image Dehazing Using the Near-Infrared by Schaul et al. Both algorithms, altered to fit satellite images, plus their combination, are applied to four sets of satellite images. The results are compared with each other and with the unaltered images. The evaluation is both qualitative, i.e. looking at the images, and quantitative, using three properties: colorfulness, contrast, and saturated pixels. Both the qualitative and the quantitative evaluation determined that using only the altered version of Dark Channel Prior gives the result with the least amount of haze and whose colors look most like reality.
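    The core of He et al.'s prior is that in haze-free outdoor patches, at least one colour channel is nearly zero, so the per-patch channel minimum (the dark channel) estimates haze density. A minimal sketch of that step and the resulting transmission estimate, under assumed function names (atmospheric-light estimation and the soft-matting refinement are omitted):

```python
# Dark channel prior sketch: per-patch minimum over colour channels,
# then the transmission estimate t(x) = 1 - omega * dark(I / A).

import numpy as np

def dark_channel(image, patch=3):
    """image: HxWx3 array in [0, 1]. Returns the HxW dark channel."""
    h, w, _ = image.shape
    min_rgb = image.min(axis=2)            # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):                 # patch minimum around (r, c)
            out[r, c] = padded[r:r + patch, c:c + patch].min()
    return out

def estimate_transmission(image, airlight, omega=0.95, patch=3):
    """Transmission map from the dark channel of the normalized image."""
    normalized = image / airlight
    return 1.0 - omega * dark_channel(normalized, patch)
```

A haze-free region (dark channel near zero) yields transmission near one, i.e. no correction, which is the behaviour the prior is built on.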

  • 175.
    Häger, Gustav
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Improving Discriminative Correlation Filters for Visual Tracking2015Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [sv]

    Generic visual tracking is a classical problem in computer vision. In the common formulation, no prior knowledge about the object to be tracked is assumed, beyond an initial rectangle in the first frame of a video sequence. This is a very difficult problem to solve in general, due to occlusions, rotations, illumination changes, and variations in the perceived size of the object. In recent years, tracking methods based on discriminative correlation filters have shown promising results in this area. These methods use the Fourier transform to compute detections and model updates efficiently, achieving very good performance at many hundreds of frames per second. Current methods, however, only estimate the translation of the tracked object, while scale changes are ignored. This thesis evaluates a number of methods for scale estimation within a correlation filter framework, including a novel method based on constructing separate scale and translation filters. The proposed method is robust, gives significantly better tracking performance, and still runs in real time. An evaluation of different feature representations on two large tracking benchmark datasets is also performed.
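    The separate scale filter described in this thesis amounts to a 1D search: after the translation filter locates the target, patches resampled at several relative scales are scored and the best-scoring scale wins. A hedged sketch of that search loop, with the correlation filter abstracted into a score function (all names are hypothetical):

```python
# Scale estimation sketch: pick the relative scale whose resampled
# patch gives the highest filter response.

def estimate_scale(score_fn, patch, scales=(0.95, 1.0, 1.05)):
    """score_fn(patch, s) stands in for the scale filter's response."""
    best_scale, best_score = None, float('-inf')
    for s in scales:
        score = score_fn(patch, s)
        if score > best_score:
            best_scale, best_score = s, score
    return best_scale
```

Keeping the scale search separate from the translation filter is what keeps the combined tracker real-time: the scale filter is small and one-dimensional.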

  • 176.
    Häger, Gustav
    et al.
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Bhat, Goutam
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Danelljan, Martin
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Khan, Fahad Shahbaz
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Felsberg, Michael
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Rudol, Piotr
    Linköpings universitet, Tekniska högskolan.
    Doherty, Patrick
    Linköpings universitet, Tekniska högskolan.
    Combining Visual Tracking and Person Detection for Long Term Tracking on a UAV2016Ingår i: Proceedings of the 12th International Symposium on Advances in Visual Computing, 2016Konferensbidrag (Refereegranskat)
    Abstract [en]

    Visual object tracking performance has improved significantly in recent years. Most trackers are based on one of two paradigms: online learning of an appearance model or the use of a pre-trained object detector. Methods based on online learning provide high accuracy, but are prone to model drift, which occurs when the tracker fails to correctly estimate the tracked object’s position. Methods based on a detector, on the other hand, typically have good long-term robustness but reduced accuracy compared to online methods.

    Despite the complementarity of the aforementioned approaches, the problem of fusing them into a single framework is largely unexplored. In this paper, we propose a novel fusion between an online tracker and a pre-trained detector for tracking humans from a UAV. The system operates in real time on a UAV platform. In addition, we present a novel dataset for long-term tracking in a UAV setting, which includes scenarios that are typically not well represented in standard visual tracking datasets.

  • 177.
    Ingemars, Nils
    Linköpings universitet, Institutionen för systemteknik.
    A feature based face tracker using extended Kalman filtering2007Självständigt arbete på grundnivå (yrkesexamen), 20 poäng / 30 hpStudentuppsats
    Abstract [en]

    A face tracker is exactly what it sounds like. It tracks a face in a video sequence. Depending on the complexity of the tracker, it could track the face as a rigid object or as a complete deformable face model with face expressions.

    This report is based on the work of a real time feature based face tracker. Feature based means that you track certain features in the face, like points with special characteristics. It might be a mouth or eye corner, but theoretically it could be any point. For this tracker, the latter is of interest. Its task is to extract global parameters, i.e. rotation and translation, as well as dynamic facial parameters (expressions) for each frame. It tracks feature points using motion between frames and a textured face model (Candide). It then uses an extended Kalman filter to estimate the parameters from the tracked feature points.
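    The estimation step described above follows the standard extended Kalman filter loop: predict the pose/expression state with a motion model, then correct it with the tracked feature-point positions. A bare-bones sketch of one such step, with placeholder models standing in for the tracker's real motion and projection functions (names are illustrative):

```python
# One EKF predict/update step; F and H are the Jacobians of the motion
# model f and the measurement model h, evaluated at the current state.

import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    # predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # update with measurement z (tracked feature-point positions)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the tracker, x would hold the global rotation/translation plus the dynamic facial parameters, and h would project Candide model points into the image.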

  • 178.
    Ingemars, Nils
    et al.
    Linköpings universitet, Institutionen för systemteknik, Bildkodning. Linköpings universitet, Tekniska högskolan.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Bildkodning. Linköpings universitet, Tekniska högskolan.
    Feature-based Face Tracking using Extended Kalman Filtering2007Konferensbidrag (Övrigt vetenskapligt)
    Abstract [en]

    This work examines the possibility to, with the computational power of today’s consumer hardware, employ techniques previously developed for 3D tracking of rigid objects and use them for tracking of deformable objects. Our target objects are human faces in a video-conversation pose, and our purpose is to create a deformable face tracker based on a head tracker operating in real time on consumer hardware. We also investigate how to combine model-based and image-based tracking in order to get precise tracking and avoid drift.

  • 179.
    Isoz, Wilhelm
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Calibration of Multispectral Sensors2005Självständigt arbete på grundnivå (yrkesexamen), 20 poäng / 30 hpStudentuppsats
    Abstract [en]

    This thesis describes and evaluates a number of approaches and algorithms for non-uniformity correction (NUC) and suppression of fixed-pattern noise in an image sequence. The main task for this thesis work was to create a general NUC for infrared focal plane arrays. To create a radiometrically correct NUC, reference-based methods using polynomial approximation are used instead of the more common scene-based methods, which create a merely cosmetic NUC.

    The pixels that cannot be adjusted to give a correct value for the incoming radiation are defined as dead. Four separate methods of identifying dead pixels are used to find these pixels. Both the scene sequence and calibration data are used in these identification methods.

    The algorithms and methods have all been tested on real image sequences. A graphical user interface using the presented algorithms has been created in Matlab to simplify the correction of image sequences. An implementation to convert the corrected image values to radiance and temperature has also been made.
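    Reference-based polynomial NUC of the kind described can be sketched as a per-pixel fit: record each pixel's raw response at a few known reference (e.g. blackbody) levels, fit a polynomial mapping response to radiance, then evaluate it on scene frames. This is an illustrative sketch only, with dead-pixel handling omitted and all names hypothetical:

```python
# Per-pixel polynomial non-uniformity correction.

import numpy as np

def fit_nuc(reference_frames, reference_levels, degree=1):
    """reference_frames: (N, H, W) raw responses at N known levels.
    Returns (degree+1, H, W) per-pixel polynomial coefficients."""
    n, h, w = reference_frames.shape
    flat = reference_frames.reshape(n, -1)
    coeffs = np.empty((degree + 1, h * w))
    for p in range(h * w):
        # map this pixel's raw responses to the true reference levels
        coeffs[:, p] = np.polyfit(flat[:, p], reference_levels, degree)
    return coeffs.reshape(degree + 1, h, w)

def apply_nuc(frame, coeffs):
    """Evaluate each pixel's polynomial on a raw frame (Horner's rule)."""
    out = np.zeros_like(frame, dtype=float)
    for c in coeffs:          # coefficients ordered highest degree first
        out = out * frame + c
    return out
```

With degree 1 this reduces to the classic two-point gain/offset correction; higher degrees absorb detector nonlinearity at the cost of more calibration levels.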

  • 180.
    Izquierdo, Milagros
    et al.
    Linköpings universitet, Matematiska institutionen, Matematik och tillämpad matematik. Linköpings universitet, Tekniska fakulteten.
    Stokes, Klara
    Högskolan i Skövde.
    Isometric Point-Circle Configurations on Surfaces from Uniform Maps2016Ingår i: Springer Proceedings in Mathematics and Statistics, ISSN 2194-1009, Vol. 159, s. 201-212Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    We embed neighborhood geometries of graphs on surfaces as point-circle configurations. We give examples coming from regular maps on surfaces with a maximum number of automorphisms for their genus, and survey geometric realization of pentagonal geometries coming from Moore graphs. An infinite family of point-circle v4 configurations on p-gonal surfaces with two p-gonal morphisms is given. The image of these configurations on the sphere under the two p-gonal morphisms is also described.

  • 181.
    Jack Lee, Wing
    et al.
    Monash University of Malaysia, Malaysia.
    Ng, Kok Yew
    Linköpings universitet, Institutionen för systemteknik. Monash University of Malaysia, Malaysia.
    Luh Tan, Chin
    Monash University of Malaysia, Malaysia; Trity Technology, Malaysia.
    Pin Tan, Chee
    Monash University of Malaysia, Malaysia; Trity Technology, Malaysia.
    Real-Time Face Detection And Motorized Tracking Using ScicosLab and SMCube On SoCs2016Ingår i: 14TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), IEEE, 2016, artikel-id UNSP Su23.3Konferensbidrag (Refereegranskat)
    Abstract [en]

    This paper presents a method for real-time detection and tracking of the human face. This is achieved using the Raspberry Pi microcomputer and the EasyLab microcontroller as the main hardware, with a camera mounted on servomotors for continuous image feed-in. Real-time face detection is performed using Haar-feature classifiers and ScicosLab on the Raspberry Pi. The EasyLab is then responsible for face tracking, keeping the face in the middle of the frame through a pair of servomotors that control the horizontal and vertical movements of the camera. The servomotors are in turn controlled based on state diagrams designed using SMCube in the EasyLab. The methodology is verified via practical experimentation.
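    The tracking step amounts to a simple proportional control law: measure how far the detected face centre is from the frame centre and command pan/tilt corrections in the opposite direction. An illustrative sketch of that computation (the gain and the servo interface are hypothetical, not taken from the paper):

```python
# Pan/tilt correction that re-centres a detected face in the frame.

def centering_correction(face_box, frame_size, gain=0.1):
    """face_box: (x, y, w, h) in pixels; frame_size: (W, H).
    Returns (pan_delta, tilt_delta) in arbitrary servo units."""
    x, y, w, h = face_box
    frame_w, frame_h = frame_size
    face_cx, face_cy = x + w / 2.0, y + h / 2.0
    err_x = face_cx - frame_w / 2.0   # > 0: face right of centre
    err_y = face_cy - frame_h / 2.0   # > 0: face below centre
    # move the camera toward the face, i.e. against the error
    return -gain * err_x, -gain * err_y
```

In the paper this logic lives in the SMCube state diagrams on the EasyLab; the sketch only shows the arithmetic behind one control update.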

  • 182.
    Jackman, Simeon
    Linköpings universitet, Institutionen för medicinsk teknik.
    Football Shot Detection using Convolutional Neural Networks2019Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    In this thesis, three different neural network architectures are investigated to detect the action of a shot within a football game using video data. The first architecture uses conventional convolution and pooling layers as feature extraction. It acts as a baseline and gives insight into the challenges faced during shot detection. The second architecture uses a pre-trained feature extractor. The last architecture uses three-dimensional convolution. All these networks are trained using short video clips extracted from football game video streams. Apart from investigating network architectures, different sampling methods are evaluated as well. This thesis shows that amongst the three evaluated methods, the approach using MobileNetV2 as a feature extractor works best. However, when applying the networks to a video stream there are a multitude of challenges, such as false positives and incorrect annotations, that inhibit the potential of detecting shots.

  • 183.
    Jackowski, C.
    et al.
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Wyss, M.
    Department of Preventive, Restorative and Paediatric Dentistry, University of Bern, 3010 Bern, Switzerland.
    Persson, A.
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Hälsouniversitetet. Linköpings universitet, Institutionen för medicin och hälsa, Medicinsk radiologi.
    Classens, M.
    Department of Diagnostic Radiology, Lindenhofspital, Bremgartenstrasse 117, 3001 Bern, Switzerland.
    Thali, M.J.
    Center of Forensic Imaging and Virtopsy, Institute of Forensic Medicine, University of Bern, Bühlstreet 20, 3012 Bern, Switzerland.
    Lussi, A.
    Department of Preventive, Restorative and Paediatric Dentistry, University of Bern, 3010 Bern, Switzerland.
    Ultra-high-resolution dual-source CT for forensic dental visualization - Discrimination of ceramic and composite fillings2008Ingår i: International journal of legal medicine (Print), ISSN 0937-9827, E-ISSN 1437-1596, Vol. 122, nr 4, s. 301-307Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Dental identification is the most valuable method to identify human remains in single cases with major postmortem alterations as well as in mass casualties because of its practicability and demanding reliability. Computed tomography (CT) has been investigated as a supportive tool for forensic identification and has proven to be valuable. It can also scan the dentition of a deceased within minutes. In the present study, we investigated currently used restorative materials using ultra-high-resolution dual-source CT and the extended CT scale for the purpose of a color-encoded, in scale, and artifact-free visualization in 3D volume rendering. In 122 human molars, 220 cavities with 2-, 3-, 4- and 5-mm diameter were prepared. With presently used filling materials (different composites, temporary filling materials, ceramic, and liner), these cavities were restored in six teeth for each material and cavity size (exception amalgam n=1). The teeth were CT scanned and images reconstructed using an extended CT scale. Filling materials were analyzed in terms of resulting Hounsfield units (HU) and filling size representation within the images. Varying restorative materials showed distinctively differing radiopacities allowing for CT-data-based discrimination. Particularly, ceramic and composite fillings could be differentiated. The HU values were used to generate an updated volume-rendering preset for postmortem extended CT scale data of the dentition to easily visualize the position of restorations, the shape (in scale), and the material used which is color encoded in 3D. The results provide the scientific background for the application of 3D volume rendering to visualize the human dentition for forensic identification purposes. © 2008 Springer-Verlag.

  • 184.
    Jankowai, Jochen
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Wang, Bei
    Univ Utah, UT 84112 USA.
    Hotz, Ingrid
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Robust Extraction and Simplification of 2D Symmetric Tensor Field Topology (2019). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, no. 3, pp. 337-349. Article in journal (Refereed)
    Abstract [en]

    In this work, we propose a controlled simplification strategy for degenerate points in symmetric 2D tensor fields that is based on the topological notion of robustness. Robustness measures the structural stability of the degenerate points with respect to variation in the underlying field. We consider an entire pipeline for generating a hierarchical set of degenerate points based on their robustness values. The pipeline includes the following steps: the stable extraction and classification of degenerate points using an edge-labeling algorithm, the computation and assignment of robustness values to the degenerate points, and the construction of a simplification hierarchy. We also discuss the challenges that arise from the discretization and interpolation of real-world data.
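The degenerate points discussed above are the locations where the two eigenvalues of the tensor coincide, i.e. where the deviatoric part vanishes. A small sketch of that detection criterion on a sampled field (illustrative only; the paper's extraction uses a robust edge-labeling algorithm):

```python
import numpy as np

# Sketch: locate degenerate points of a sampled 2D symmetric tensor field.
# A point is degenerate where both eigenvalues coincide, i.e. where the
# deviator ((Txx - Tyy)/2, Txy) vanishes.
def anisotropy(Txx, Txy, Tyy):
    return np.sqrt(((Txx - Tyy) / 2.0) ** 2 + Txy ** 2)

# Example field T(x, y) = [[x, y], [y, -x]]: degenerate only at the origin.
xs = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(xs, xs, indexing="ij")
A = anisotropy(X, Y, -X)
i, j = np.unravel_index(np.argmin(A), A.shape)
```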

    The publication is available in full text from 2020-07-10 14:34
  • 185.
    Jogbäck, Mats
    Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Tekniska högskolan.
    Bildbaserad estimering av rörelse för reducering av rörelseartefakter [Image-based motion estimation for reduction of motion artifacts] (2006). Independent thesis, Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [sv]

    To reconstruct a three-dimensional volume of a brain imaged with magnetic resonance imaging (MRI), each slice image must be corrected relative to the others, owing to unavoidable movements of the patient during scanning. This procedure is called image registration; today, the primary method is to designate one image as a reference and then align the neighboring images, which are assumed to deviate only minimally, to that reference.

    The aim of this thesis is to apply a different method, commonly used in computer vision, to estimate a motion field from an ordinary video sequence by tracking markers that indicate movement. The goal is a robust estimate of the head's motion, which can then be used for a more accurate correction and thereby also a better reconstruction.
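The marker-based motion estimation described in this abstract reduces, per frame pair, to fitting a rigid transform to tracked point correspondences. A least-squares sketch of that core step (2D Procrustes/Kabsch; not the thesis's actual implementation):

```python
import numpy as np

# Sketch: estimate a rigid 2D motion (rotation R, translation t) from
# tracked marker positions in two frames, in the least-squares sense
# (Procrustes / Kabsch). Not the thesis's implementation.
def rigid_motion_2d(P, Q):
    """P, Q: (N, 2) arrays of corresponding points; returns R (2x2), t (2,)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```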

  • 186.
    Johansson, Marcus
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem.
    Online Whole-Body Control using Hierarchical Quadratic Programming: Implementation and Evaluation of the HiQP Control Framework (2016). Independent thesis, Advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    The application of local optimal control is a promising paradigm for manipulative robot motion generation. In practice this involves instantaneous formulations of convex optimization problems that depend on the current joint configuration of the robot and the environment. To be effective, however, the constraints have to be constructed carefully, as this kind of motion generation trades away completeness. Local optimal solvers, which are greedy in a temporal sense, have proven significantly more computationally effective than classical grid-based or sampling-based planning approaches.

    In this thesis we investigate how a local optimal control approach, namely the task-function approach, can be implemented to provide high usability, extensibility and efficiency. This has resulted in the HiQP control framework, written in C++ and compatible with ROS. The framework supports geometric primitives to aid the user in task customization. It is also modular with respect to both the communication system it is used with and the optimization library it uses for finding optimal controls.

    We have evaluated the software quality of the framework according to common quantitative methods found in the literature. We have also evaluated an approach to performing tasks using minimum-jerk motion generation, with promising results. The framework provides simple translation and rotation tasks based on six rudimentary geometric primitives, and task definitions for setting specific joint positions and for velocity limits were also implemented.
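The task hierarchy that HiQP resolves with a cascade of QPs can be illustrated with the classical nullspace-projection scheme for two velocity-level tasks; this is a simplified stand-in, not the framework's solver:

```python
import numpy as np

# Sketch: lexicographic resolution of two velocity-level tasks
# J1 qdot = e1 (high priority), J2 qdot = e2 (low priority) by
# solving the secondary task inside the nullspace of the primary.
# A classical stand-in for the QP cascade a hierarchical controller solves.
def prioritized_velocities(J1, e1, J2, e2):
    J1p = np.linalg.pinv(J1)
    qdot1 = J1p @ e1
    N1 = np.eye(J1.shape[1]) - J1p @ J1   # nullspace projector of task 1
    qdot = qdot1 + np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ qdot1)
    return qdot
```

When the secondary task conflicts with the primary one, the correction term vanishes and the primary task is satisfied exactly.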

  • 187.
    Johansson, Victor
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    3D Position Estimation of a Person of Interest in Multiple Video Sequences: Person of Interest Recognition (2013). Independent thesis, Advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Because of the increase in the number of security cameras, there is more video footage available than a human could efficiently process. Combined with the fact that computers are becoming ever more powerful, this makes it increasingly attractive to solve the problem of detecting and recognizing people automatically.

    Therefore, a method is proposed for estimating the 3D path of a person of interest across multiple non-overlapping monocular cameras. This project is a collaboration between two master's theses. This thesis focuses on recognizing a person of interest among several possible candidates, on estimating the person's 3D position, and on providing a graphical user interface for the system. Recognizing the person of interest includes tracking that person frame by frame, and identifying the person in video sequences where he or she has not been seen before.

    The final product is able to both detect and recognize people in video, as well as estimate their 3D position relative to the camera. The product is modular, and any part can be improved or changed completely without changing the rest of the product. This results in a highly versatile product which can be tailored to any given situation.

  • 188.
    Johnander, Joakim
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Bhat, Goutam
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Danelljan, Martin
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Khan, Fahad Shahbaz
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    On the Optimization of Advanced DCF-Trackers (2018). In: Computer Vision – ECCV 2018 Workshops: Munich, Germany, September 8-14, 2018, Proceedings, Part I / [ed] Laura Leal-Taixé, Stefan Roth, Cham: Springer Publishing Company, 2018, pp. 54-69. Conference paper (Refereed)
    Abstract [en]

    Trackers based on discriminative correlation filters (DCF) have recently seen widespread success, and in this work we dive into their numerical core. DCF-based trackers interleave learning of the target detector with target state inference based on this detector. Whereas the original formulation includes a closed-form solution for the filter learning, recently introduced improvements to the framework no longer have known closed-form solutions. Instead, a large-scale linear least-squares problem must be solved each time the detector is updated. We analyze the procedure used to optimize the detector, letting the popular scheme introduced with ECO serve as a baseline. The ECO implementation is revisited in detail and several of its mechanisms are provided with alternatives. With comprehensive experiments we show which configurations are superior in terms of tracking capability and optimization performance.
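The large-scale linear least-squares problem mentioned above is typically attacked with iterative solvers; the scheme introduced with ECO is based on conjugate gradient. A bare sketch of conjugate gradient on the normal equations (simplified; no preconditioning or Fourier-domain structure):

```python
import numpy as np

# Sketch: conjugate gradient on the normal equations A^T A x = A^T b,
# the kind of iterative solver used for the filter-learning subproblem
# in DCF trackers such as ECO (simplified, no preconditioning).
def cg_normal_equations(A, b, iters=50):
    x = np.zeros(A.shape[1])
    r = A.T @ b - A.T @ (A @ x)     # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```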

  • 189.
    Johnander, Joakim
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Zenuity, Sweden.
    Danelljan, Martin
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. ETH Zurich, Switzerland.
    Brissman, Emil
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Saab, Sweden.
    Khan, Fahad Shahbaz
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. IIAI, UAE.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    A generative appearance model for end-to-end video object segmentation (2019). In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Institute of Electrical and Electronics Engineers (IEEE), 2019, pp. 8945-8954. Conference paper (Refereed)
    Abstract [en]

    One of the fundamental challenges in video object segmentation is to find an effective representation of the target and background appearance. The best performing approaches resort to extensive fine-tuning of a convolutional neural network for this purpose. Besides being prohibitively expensive, this strategy cannot be truly trained end-to-end since the online fine-tuning procedure is not integrated into the offline training of the network. To address these issues, we propose a network architecture that learns a powerful representation of the target and background appearance in a single forward pass. The introduced appearance module learns a probabilistic generative model of target and background feature distributions. Given a new image, it predicts the posterior class probabilities, providing a highly discriminative cue, which is processed in later network modules. Both the learning and prediction stages of our appearance module are fully differentiable, enabling true end-to-end training of the entire segmentation pipeline. Comprehensive experiments demonstrate the effectiveness of the proposed approach on three video object segmentation benchmarks. We close the gap to approaches based on online fine-tuning on DAVIS17, while operating at 15 FPS on a single GPU. Furthermore, our method outperforms all published approaches on the large-scale YouTube-VOS dataset.
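The appearance module's central idea, class-conditional feature distributions turned into posterior class probabilities, can be sketched with one Gaussian per class; this is an illustrative simplification of the paper's module:

```python
import numpy as np

# Sketch: a generative appearance model in the spirit of the abstract:
# fit one diagonal Gaussian per class (target / background) to feature
# samples, then evaluate posterior class probabilities for new features.
# Illustrative simplification of the paper's appearance module.
def fit_gaussian(X):
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_gauss(x, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

def posterior_target(x, fg, bg, prior=0.5):
    lf = log_gauss(x, *fg) + np.log(prior)
    lb = log_gauss(x, *bg) + np.log(1 - prior)
    m = np.maximum(lf, lb)                      # log-sum-exp stabilization
    return np.exp(lf - m) / (np.exp(lf - m) + np.exp(lb - m))
```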

  • 190.
    Johnander, Joakim
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Danelljan, Martin
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Khan, Fahad Shahbaz
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    DCCO: Towards Deformable Continuous Convolution Operators for Visual Tracking (2017). In: Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part I / [ed] Michael Felsberg, Anders Heyden and Norbert Krüger, Springer, 2017, Vol. 10424, pp. 55-67. Conference paper (Refereed)
    Abstract [en]

    Discriminative Correlation Filter (DCF) based methods have shown competitive performance on tracking benchmarks in recent years. Generally, DCF-based trackers learn a rigid appearance model of the target. However, this reliance on a single rigid appearance model is insufficient in situations where the target undergoes non-rigid transformations. In this paper, we propose a unified formulation for learning a deformable convolution filter. In our framework, the deformable filter is represented as a linear combination of sub-filters. Both the sub-filter coefficients and their relative locations are inferred jointly in our formulation. Experiments are performed on three challenging tracking benchmarks: OTB-2015, TempleColor and VOT2016. Our approach improves the baseline method, leading to performance comparable to the state of the art.

  • 191.
    Jones, Andrew
    et al.
    USC Institute Creat Technology, CA 90094 USA.
    Nagano, Koki
    USC Institute Creat Technology, CA 90094 USA.
    Busch, Jay
    USC Institute Creat Technology, CA 90094 USA.
    Yu, Xueming
    USC Institute Creat Technology, CA 90094 USA.
    Peng, Hsuan-Yueh
    USC Institute Creat Technology, CA 90094 USA.
    Barreto, Joseph
    USC Institute Creat Technology, CA 90094 USA.
    Alexander, Oleg
    USC Institute Creat Technology, CA 90094 USA.
    Bolas, Mark
    USC Institute Creat Technology, CA 90094 USA.
    Debevec, Paul
    USC Institute Creat Technology, CA 90094 USA.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Time-Offset Conversations on a Life-Sized Automultiscopic Projector Array (2016). In: Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2016), IEEE, 2016, pp. 927-935. Conference paper (Refereed)
    Abstract [en]

    We present a system for creating and displaying interactive life-sized 3D digital humans based on pre-recorded interviews. We use 30 cameras and an extensive list of questions to record a large set of video responses. Users access videos through a natural conversation interface that mimics face-to-face interaction. Recordings of answers, listening and idle behaviors are linked together to create a persistent visual image of the person throughout the interaction. The interview subjects are rendered using flowed light fields and shown life-size on a special rear-projection screen with an array of 216 video projectors. The display allows multiple users to see different 3D perspectives of the subject in proper relation to their viewpoints, without the need for stereo glasses. The display is effective for interactive conversations since it provides 3D cues such as eye gaze and spatial hand gestures.

  • 192.
    Jonsson, Christian
    Linköpings universitet, Institutionen för teknik och naturvetenskap.
    Detection of annual rings in wood (2008). Independent thesis, Advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This report describes an annual-line detection algorithm for the WoodEye quality control system. The goal of the algorithm is to find the positions of annual lines on the four surfaces of a board; the purpose is to use this result to find the board's inner annual-ring structure. The work was done using image processing techniques to analyze images collected with WoodEye. The report gives the reader an insight into the requirements on quality control systems in the woodworking industry and the benefits of automated quality control versus manual inspection. The appearance and formation of annual lines are explained in detail to provide insight into how the problem should be approached. A comparison between annual rings and fingerprints is made to see whether ideas from this area of pattern recognition can be adapted to annual-line detection. This comparison, together with a study of existing methods, led to the implementation of a fingerprint enhancement method, which became a central part of the annual-line detection algorithm. The algorithm consists of two main steps: enhancing the edges of the annual rings, and tracking along the edges to form lines. Different solutions for components of the algorithm were tested to compare performance. The final algorithm was tested with different input images to determine whether it works best with images from a grayscale or an RGB camera.
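The fingerprint-style enhancement mentioned above rests on estimating the local orientation of the line pattern; a common tool for this is the structure tensor. A sketch (illustrative, not the report's exact pipeline):

```python
import numpy as np

# Sketch: local orientation estimation with the structure tensor, the
# building block behind fingerprint-style enhancement of annual lines.
# Returns the angle (radians) of the dominant gradient direction; the
# lines themselves run perpendicular to it.
def dominant_orientation(img):
    gy, gx = np.gradient(img.astype(float))
    Jxx = (gx * gx).sum()
    Jxy = (gx * gy).sum()
    Jyy = (gy * gy).sum()
    return 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
```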

  • 193.
    Jonsson, Erik
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Channel-Coded Feature Maps for Computer Vision and Machine Learning (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis is about channel-coded feature maps applied in view-based object recognition, tracking, and machine learning. A channel-coded feature map is a soft histogram of joint spatial pixel positions and image feature values. Typical useful features include local orientation and color. Using these features, each channel measures the co-occurrence of a certain orientation and color at a certain position in an image or image patch. Channel-coded feature maps can be seen as a generalization of the SIFT descriptor with the options of including more features and replacing the linear interpolation between bins by a more general basis function.

    The general idea of channel coding originates from a model of how information might be represented in the human brain. For example, different neurons tend to be sensitive to different orientations of local structures in the visual input. The sensitivity profiles tend to be smooth such that one neuron is maximally activated by a certain orientation, with a gradually decaying activity as the input is rotated.

    This thesis extends previous work on using channel-coding ideas within computer vision and machine learning. By differentiating the channel-coded feature maps with respect to transformations of the underlying image, a method for image registration and tracking is constructed. By using piecewise polynomial basis functions, the channel coding can be computed more efficiently, and a general encoding method for N-dimensional feature spaces is presented.

    Furthermore, I argue for using channel-coded feature maps in view-based pose estimation, where a continuous pose parameter is estimated from a query image given a number of training views with known pose. The optimization of position, rotation and scale of the object in the image plane is then included in the optimization problem, leading to a simultaneous tracking and pose estimation algorithm. Apart from objects and poses, the thesis examines the use of channel coding in connection with Bayesian networks. The goal here is to avoid the hard discretizations usually required when Markov random fields are used on intrinsically continuous signals like depth for stereo vision or color values in image restoration.

    Channel coding has previously been used to design machine learning algorithms that are robust to outliers, ambiguities, and discontinuities in the training data. This is obtained by finding a linear mapping between channel-coded input and output values. This thesis extends this method with an incremental version and identifies and analyzes a key feature of the method: that it is able to handle a learning situation where the correspondence structure between the input and output spaces is not completely known. In contrast to a traditional supervised learning setting, the training examples are groups of unordered input-output points, where the correspondence structure within each group is unknown. This behavior is studied theoretically, and the effect of outliers and the convergence properties are analyzed.

    All presented methods have been evaluated experimentally. The work has been conducted within the cognitive systems research project COSPAL funded by EC FP6, and much of the contents has been put to use in the final COSPAL demonstrator system.
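The encoding operation at the heart of the thesis, a soft histogram with overlapping basis functions, is compact enough to sketch in one dimension (linear/triangular basis functions, the SIFT-style special case mentioned in the abstract):

```python
import numpy as np

# Sketch: channel encoding of a scalar value with linear (triangular)
# basis functions: a 1D version of the soft histograms from which
# channel-coded feature maps are built.
def channel_encode(value, centers):
    spacing = centers[1] - centers[0]
    return np.maximum(0.0, 1.0 - np.abs(value - centers) / spacing)

def channel_decode(weights, centers):
    # Local reconstruction: weighted mean over the active channels.
    return (weights @ centers) / weights.sum()
```

With this basis, the channel weights of a value inside the domain sum to one, and decoding recovers the value exactly.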

  • 194.
    Jonsson, Peter
    et al.
    Division of Solid State Physics, Lund University, SE-22100 Lund, Sweden.
    Jonsson, Magnus P.
    Division of Solid State Physics, Lund University, SE-22100 Lund, Sweden.
    Tegenfeldt, Jonas O.
    Division of Solid State Physics, Lund University, SE-22100 Lund, Sweden.
    Hook, Fredrik
    Division of Solid State Physics, Lund University, SE-22100 Lund, Sweden.
    A Method Improving the Accuracy of Fluorescence Recovery after Photobleaching Analysis (2008). In: Biophysical Journal, ISSN 0006-3495, E-ISSN 1542-0086, Vol. 95, no. 11, pp. 5334-5348. Article in journal (Refereed)
    Abstract [en]

    Fluorescence recovery after photobleaching has been an established technique for quantifying the mobility of molecular species in cells and cell membranes for more than 30 years. However, under nonideal experimental conditions, the current methods of analysis still suffer from occasional problems; for example, when the signal/noise ratio is low, when there are temporal fluctuations in the illumination, or when there is bleaching during the recovery process. We here present a method of analysis that overcomes these problems, yielding accurate results even under nonideal experimental conditions. The method is based on circular averaging of each image, followed by spatial frequency analysis of the averaged radial data, and requires no prior knowledge of the shape of the bleached area. The method was validated using both simulated and experimental fluorescence recovery after photobleaching data, illustrating that the diffusion coefficient of a single diffusing component can be determined to within ~1%, even for small signal levels (100 photon counts), and that at typical signal levels (5000 photon counts) a system with two diffusion coefficients can be analyzed with less than 10% error.
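The circular-averaging step the method builds on is straightforward to sketch (radial binning around the bleach-spot center):

```python
import numpy as np

# Sketch: circular (radial) averaging of an image around a center point,
# the first step of the recovery-curve analysis described above.
def radial_average(img, cx, cy):
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2).astype(int)
    sums = np.bincount(r.ravel(), weights=img.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts        # mean intensity per integer-radius bin
```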

  • 195.
    Julià, Carme
    et al.
    Rovira i Virgili University, Spain.
    Moreno, Rodrigo
    Rovira i Virgili University, Spain.
    Puig, Domenec
    Rovira i Virgili University, Spain.
    Garcia, Miguel Angel
    Autonomous University of Madrid, Spain.
    Shape-based image segmentation through photometric stereo (2011). In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, Vol. 115, no. 1, pp. 91-104. Article in journal (Refereed)
    Abstract [en]

    This paper describes a new algorithm for segmenting 2D images by taking into account 3D shape information. The proposed approach consists of two stages. In the first stage, the 3D surface normals of the objects present in the scene are estimated through robust photometric stereo. Then, the image is segmented by grouping its pixels according to their estimated normals through graph-based clustering. One of the advantages of the proposed approach is that, although the segmentation is based on the 3D shape of the objects, the photometric stereo stage used to estimate the 3D normals only requires a set of 2D images. This paper provides an extensive validation of the proposed approach by comparing it with several image segmentation algorithms. Particularly, it is compared with both appearance-based image segmentation algorithms and shape-based ones. Experimental results confirm that the latter are more suitable when the objective is to segment the objects or surfaces present in the scene. Moreover, results show that the proposed approach yields the best image segmentation in most of the cases.
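The first stage of the approach, estimating surface normals by photometric stereo, has a compact textbook (Lambertian, least-squares) form; the paper's robust variant is more involved:

```python
import numpy as np

# Sketch: textbook Lambertian photometric stereo. Given k >= 3 images
# under known light directions L (k x 3), the intensities I (k,) at one
# pixel satisfy I = L @ (albedo * n); solve in the least-squares sense.
# The paper uses a robust variant; this is the plain version.
def photometric_stereo_pixel(L, I):
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo
```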

  • 196.
    Järemo-Lawin, Felix
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Danelljan, Martin
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Tosteberg, Patrik
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Bhat, Goutam
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Khan, Fahad Shahbaz
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Deep Projective 3D Semantic Segmentation (2017). In: Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part I / [ed] Michael Felsberg, Anders Heyden and Norbert Krüger, Springer, 2017, pp. 95-107. Conference paper (Refereed)
    Abstract [en]

    Semantic segmentation of 3D point clouds is a challenging problem with numerous real-world applications. While deep learning has revolutionized the field of image semantic segmentation, its impact on point cloud data has been limited so far. Recent attempts, based on 3D deep learning approaches (3D-CNNs), have achieved below-expected results. Such methods require voxelizations of the underlying point cloud data, leading to decreased spatial resolution and increased memory consumption. Additionally, 3D-CNNs greatly suffer from the limited availability of annotated datasets.

  • 197.
    Kargén, Rolf
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska högskolan.
    Utveckling av ett active vision system för demonstration av EDSDK++ i tillämpningar inom datorseende [Development of an active vision system for demonstration of EDSDK++ in computer vision applications] (2014). Independent thesis, Basic level (Bachelor's degree), 10.5 credits / 16 HE credits. Student thesis (Degree project)
    Abstract [sv]

    Computer vision is a rapidly growing, interdisciplinary research field whose applications play an increasingly prominent role in today's society. With a growing interest in computer vision comes a growing need to control the cameras connected to computer vision systems.

    At Linköping University, at the Computer Vision Laboratory, the EDSDK++ framework has been developed for remote control of digital cameras manufactured by Canon Inc. The framework is very extensive and contains a large number of functions and settings, and the system is therefore still largely untested. This thesis project aims to develop a demonstrator for EDSDK++ in the form of a simple active vision system that uses real-time face detection to steer a camera tilt unit, with a camera mounted on it, to follow, zoom in on, and focus on a face or a group of faces. One requirement was that the OpenCV library be used for the face detection and that EDSDK++ be used to control the camera. In addition, an API for controlling the camera tilt unit was to be developed.

    During development, various methods for face detection were investigated. To improve performance, multiple face detectors were used, scanning an image in parallel from different angles with the help of multithreading. Both experimental and theoretical approaches were taken to determine the parameters needed to control the camera and the tilt unit. The result of the work was a demonstrator that fulfilled all the requirements.

  • 198.
    Kasten, Jens
    et al.
    Zuse Institute Berlin.
    Hotz, Ingrid
    Zuse Institute Berlin.
    Hege, Hans-Christian
    Zuse Institute Berlin.
    On the Elusive Concept of Lagrangian Coherent Structures (2012). In: Topological Methods in Data Analysis and Visualization: Theory, Algorithms, and Applications / [ed] Ronald Peikert, Helwig Hauser, Hamish Carr, Raphael Fuchs, Springer, 2012, pp. 207-220. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    Many of the recently developed methods for the analysis and visualization of time-dependent flows are related to concepts that can be subsumed under the term Lagrangian coherent structures (LCS). However, no universal definition of LCS exists, and different interpretations are used. Mostly, LCS are considered to be features linked to pathlines, leading to the ideal conception of features forming material lines. Such time-dependent features are extracted by averaging local properties of particles along their trajectories, e.g., separation, acceleration or unsteadiness. A popular realization of LCS is the finite-time Lyapunov exponent (FTLE) with its different implementations. The goal of this paper is to stimulate a discussion on the generality of the underlying assumptions and concepts. Using a few well-known datasets, the interpretation and usability of Lagrangian analysis methods are discussed.
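The FTLE mentioned in the abstract has a compact definition: integrate particles over a finite time T, differentiate the resulting flow map in space, and take the largest eigenvalue of the Cauchy-Green tensor. A minimal sketch, using an analytic saddle flow whose FTLE is known to be 1:

```python
import numpy as np

# Sketch: finite-time Lyapunov exponent (FTLE) at one seed point.
# FTLE = (1/(2T)) * ln(lambda_max(F^T F)), with F the spatial gradient
# of the flow map, here approximated by central finite differences.
def ftle(flow_map, x, y, T, h=1e-4):
    dx = (flow_map([x + h, y]) - flow_map([x - h, y])) / (2 * h)
    dy = (flow_map([x, y + h]) - flow_map([x, y - h])) / (2 * h)
    F = np.column_stack([dx, dy])
    lam = np.linalg.eigvalsh(F.T @ F).max()
    return np.log(lam) / (2 * T)

# Linear saddle v = (x, -y): flow map is (x e^T, y e^{-T}), FTLE = 1.
T = 2.0
saddle = lambda p: np.array([p[0] * np.exp(T), p[1] * np.exp(-T)])
```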

  • 199.
    Kasten, Jens
    et al.
    Zuse Institute Berlin (ZIB), Berlin, Germany.
    Hotz, Ingrid
    Zuse Institute Berlin (ZIB), Berlin, Germany.
    Noack, Bernd
    Berlin Institute of Technology MB1, Berlin, Germany .
    Hege, Hans-Christian
    Berlin Institute of Technology MB1, Berlin, Germany .
    On the Extraction of Long-living Features in Unsteady Fluid Flows (2011). In: Topological Methods in Data Analysis and Visualization: Theory, Algorithms, and Applications / [ed] Valerio Pascucci, Xavier Tricoche, Hans Hagen, Julien Tierny, Springer, 2011, pp. 115-126. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    This paper proposes a Galilean invariant generalization of critical points of vector field topology for 2D time-dependent flows. The approach is based upon a Lagrangian consideration of fluid particle motion. It extracts long-living features, like saddles and centers, and filters out short-living local structures. This is well suited for the analysis of turbulent flow, where standard snapshot topology yields an unmanageably large number of topological structures that are barely related to the few main long-living features employed in conceptual fluid mechanics models. Results are shown for periodic and chaotic vortex motion.

  • 200.
    Kasten, Jens
    et al.
    Zuse Institute Berlin, Germany.
    Hotz, Ingrid
    Zuse Institute Berlin, Germany.
    Noack, Bernd R.
    Zuse Institute Berlin, Germany.
    Hege, Hans-Christian
    Zuse Institute Berlin, Germany.
    Vortex merge graphs in two-dimensional unsteady flow fields (2012). Conference paper (Refereed)
    Abstract [en]

    Among the various existing vortex definitions, there is one class that relies on extremal structures of derived scalar fields, e.g., vorticity, λ2, or the acceleration magnitude. This paper proposes a method to identify and track extremal-based vortex structures in 2D time-dependent flows. It is based on combinatorial scalar field topology. In contrast to previous methods, merge events are explicitly handled and represented in the resulting graph. An abstract representation of this vortex merge graph serves as a basis for the comparison of the different scalar identifiers. The method is applied to numerically simulated flows of a mixing layer and a planar jet.
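The extremal structures such methods rely on can be illustrated by the detection step: finding strict local maxima of a scalar identifier on a grid. Tracking and merge handling, the paper's actual contribution, would then link these extrema across time steps. A toy sketch of detection only:

```python
import numpy as np

# Sketch: detect strict local maxima of a scalar vortex identifier
# (e.g. vorticity magnitude) on a 2D grid. A vortex merge graph would
# be built by linking such extrema across time steps; this is only the
# per-timestep detection.
def local_maxima(field):
    f = np.pad(field, 1, constant_values=-np.inf)
    core = f[1:-1, 1:-1]
    is_max = np.ones_like(field, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            is_max &= core > f[1 + di:f.shape[0] - 1 + di,
                               1 + dj:f.shape[1] - 1 + dj]
    return np.argwhere(is_max)          # (row, col) of each maximum
```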
