liu.se – Search for publications in DiVA
1 - 20 of 20
  • 1.
    Buxton, Bernard
    et al.
    University College London.
    Zografos, Vasileios
    University College London.
    Flexible Template and Model Matching Using Intensity (2005). In: Proceedings of Digital Image Computing: Techniques and Applications / [ed] Brian C. Lovell, Anthony J. Maeder, Terry Caelli, and Sebastien Ourselin, IEEE, 2005, p. 438-447. Conference paper (Refereed)
    Abstract [en]

    Intensity-based image and template matching is briefly reviewed with particular emphasis on the problems that arise when flexible templates or models are used. Use of such models and templates may often lead to a very small basin of attraction in the error landscape surrounding the desired solution, and also to spurious, trivial solutions. Simple examples are studied in order to illustrate these problems, which may arise from photometric transformations of the template, from geometric transforms of it, or from internal parameters of the template that allow similar types of variation. It is pointed out that these problems are, from a probabilistic point of view, exacerbated by a failure to model the whole image, i.e. both the foreground object or template and the image background, which a Bayesian approach strictly requires. Some general remarks are made about the form of the error landscape to be expected in object recognition applications, and suggestions are made as to optimisation techniques that may prove effective in locating a correct match. These suggestions are illustrated by a preliminary example.

  • 2.
    Ellis, Liam
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Online Learning for Fast Segmentation of Moving Objects (2012). In: ACCV 2012 / [ed] Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z., Springer Berlin/Heidelberg, 2012, p. 52-65. Conference paper (Other academic)
    Abstract [en]

    This work addresses the problem of fast, online segmentation of moving objects in video. We pose this as a discriminative online semi-supervised appearance learning task, where supervising labels are autonomously generated by a motion segmentation algorithm. The computational complexity of the approach is significantly reduced by performing learning and classification on oversegmented image regions (superpixels), rather than per pixel. In addition, we further exploit the sparse trajectories from the motion segmentation to obtain a simple model that encodes the spatial properties and location of objects at each frame. Fusing these complementary cues produces good object segmentations at very low computational cost. In contrast to previous work, the proposed approach (1) performs segmentation on-the-fly (allowing for applications where data arrives sequentially), (2) has no prior model of object types or 'objectness', and (3) operates at significantly reduced computational cost. The approach and its ability to learn, disambiguate and segment the moving objects in the scene is evaluated on a number of benchmark video sequences.
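    A minimal sketch of the superpixel-level online learning described in this abstract, assuming SLIC oversegmentation, mean-RGB region features and an SGD-trained linear classifier; all parameter choices and names are illustrative, not taken from the paper:

```python
# Sketch: online superpixel classification; SLIC regions, mean-RGB features
# and an incrementally trained linear classifier are illustrative choices.
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")  # supports online updates via partial_fit

def process_frame(frame, motion_mask):
    """frame: HxWx3 float image; motion_mask: HxW bool labels supplied by a
    motion segmentation step (the 'autonomously generated' supervision)."""
    segments = slic(frame, n_segments=300, start_label=0)
    n = segments.max() + 1
    # One cheap appearance feature per superpixel: its mean RGB value.
    feats = np.stack([frame[segments == i].mean(axis=0) for i in range(n)])
    # A superpixel is labelled foreground if most of its pixels are moving.
    frac = np.array([motion_mask[segments == i].mean() for i in range(n)])
    labels = (frac > 0.5).astype(int)
    clf.partial_fit(feats, labels, classes=[0, 1])  # online learning step
    return clf.predict(feats)[segments]             # per-pixel segmentation
```

    Working per superpixel rather than per pixel is what keeps both the update and the prediction steps cheap, which is the point the abstract makes about reduced computational complexity.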

  • 3.
    Lenz, Reiner
    et al.
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Fisher Information and the Combination of RGB channels (2013). In: 4th International Workshop, CCIW 2013, Chiba, Japan, March 3-5, 2013. Proceedings / [ed] Shoji Tominaga, Raimondo Schettini, Alain Trémeau, Springer Berlin/Heidelberg, 2013, p. 250-264. Conference paper (Refereed)
    Abstract [en]

    We introduce a method to combine the color channels of an image to a scalar valued image. Linear combinations of the RGB channels are constructed using the Fisher-Trace-Information (FTI), defined as the trace of the Fisher information matrix of the Weibull distribution, as a cost function. The FTI characterizes the local geometry of the Weibull manifold independent of the parametrization of the distribution. We show that minimizing the FTI leads to contrast enhanced images, suitable for segmentation processes. The Riemann structure of the manifold of Weibull distributions is used to design optimization methods for finding optimal weight RGB vectors. Using a threshold procedure we find good solutions even for images with limited content variation. Experiments show how the method adapts to images with widely varying visual content. Using these image dependent de-colorizations one can obtain substantially improved segmentation results compared to a mapping with pre-defined coefficients.
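    The FTI cost can be made concrete with a short sketch: apply a candidate weight vector, fit the two-parameter Weibull to difference-filter responses of the resulting scalar image, and evaluate the trace of its Fisher information matrix. The closed form below is the standard Weibull Fisher information result, and the random search over unit vectors is an illustrative stand-in for the paper's Riemannian optimisation:

```python
# Sketch: Fisher-Trace-Information (FTI) as a cost for RGB channel weights.
# Assumes the standard closed form of the Weibull Fisher information matrix.
import numpy as np
from scipy.stats import weibull_min

EULER_GAMMA = 0.5772156649015329

def fti(responses):
    """Trace of the Weibull Fisher information fitted to filter responses."""
    k, _, lam = weibull_min.fit(responses, floc=0)  # shape k, scale lambda
    return k**2 / lam**2 + (np.pi**2 / 6 + (1 - EULER_GAMMA)**2) / k**2

def fti_cost(rgb_image, w):
    """Weighted channel sum, simple x-derivative filter, Weibull fit."""
    gray = rgb_image @ w
    resp = np.abs(np.diff(gray, axis=1)).ravel()
    return fti(resp[resp > 1e-8])

# Coarse search over unit-norm weight vectors (illustrative only; the paper
# optimises on the Weibull manifold instead).
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))           # stand-in RGB image
candidates = [v / np.linalg.norm(v) for v in rng.standard_normal((50, 3))]
w_best = min(candidates, key=lambda w: fti_cost(img, w))
```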

  • 4.
    Lenz, Reiner
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    RGB Filter design using the properties of the Weibull manifold (2012). In: CGIV 2012 Sixth European Conference on Colour in Graphics, Imaging, and Vision: Volume 6, Springfield, VA, 2012, p. 200-205. Conference paper (Other academic)
    Abstract [en]

    Combining the channels of a multi-band image with the help of a pixelwise weighted sum is one of the basic operations in color and multispectral image processing. A typical example is the conversion of RGB- to intensity images. Usually the weights are given by some standard values or chosen heuristically. This takes into account neither the statistical nature of the image source nor the intended further processing of the scalar image. In this paper we will present a framework in which we specify the statistical properties of the input data with the help of a representative collection of image patches. On the output side we specify the intended processing of the scalar image with the help of a filter kernel with zero-mean filter coefficients. Given the image patches and the filter kernel we use the Fisher information of the manifold of two-parameter Weibull distributions to introduce the trace of the Fisher information matrix as a cost function on the space of weight vectors of unit length. We will illustrate the properties of the method with the help of a database of scanned leaves and some color images from the internet. For the green leaves we find that the result of the mapping is similar to standard mappings like Matlab’s RGB2Gray weights. We then change the colour of the leaf using a global shift in the HSV representation of the original image and show how the proposed mapping adapts to this color change. This is also confirmed with other natural images, where the new mapping reveals much more subtle details in the processed image. In the last experiment we show that the mapping emphasizes visually salient points in the image, whereas the standard mapping only emphasizes global intensity changes. The proposed approach to RGB filter design thus provides a new methodology based only on the properties of the image statistics and the intended post-processing. It adapts to color changes of the input images and, due to its foundation in the statistics of extreme-value distributions, it is suitable for detecting salient regions in an image.

  • 5.
    Lenz, Reiner
    et al.
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Solli, Martin
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Dihedral Color Filtering (2013). In: Advanced Color Image Processing and Analysis / [ed] Christine Fernandez-Maloigne, Springer, 2013, p. 119-145. Chapter in book (Refereed)
    Abstract [en]

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us "purple with rage" or "green with envy" and cause us to "see red." Defining colors has been the work of centuries, culminating in today's complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today's color imaging.

  • 6.
    Nordberg, Klas
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Multibody motion segmentation using the geometry of 6 points in 2D images (2010). In: International Conference on Pattern Recognition, ISSN 1051-4651, Institute of Electrical and Electronics Engineers (IEEE), 2010, p. 1783-1787. Conference paper (Refereed)
    Abstract [en]

    We propose a method for segmenting an arbitrary number of moving objects using the geometry of 6 points in 2D images to infer motion consistency. This geometry allows us to determine whether or not observations of 6 points over several frames are consistent with a rigid 3D motion. The matching between observations of the 6 points and an estimated model of their configuration in 3D space is quantified in terms of a geometric error derived from distances between the points and 6 corresponding lines in the image. This leads to a simple motion inconsistency score, based on the geometric errors of 6 points, that in the ideal case should be zero when the motion of the points can be explained by a rigid 3D motion. Initial point clusters are determined in the spatial domain and merged in the motion trajectory domain based on this score. Each point is then assigned to the cluster which gives the lowest score. Our algorithm has been tested with real image sequences from the Hopkins155 database with very good results, competing with the state-of-the-art methods, particularly for degenerate motion sequences. In contrast to the motion segmentation methods based on multi-body factorization, which assume an affine camera model, the proposed method allows the mapping from 3D space to the 2D image to be fully projective.

  • 7.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Comparison of Optimisation Algorithms for Deformable Template Matching (2009). In: Advances in Visual Computing / [ed] Bebis, G.; Boyle, R.; Parvin, B.; Koracin, D.; Kuno, Y.; Wang, J.; Pajarola, R.; Lindstrom, P.; Hinkenjann, A.; Encarnacao, M.L.; Silva, C.T.; Coming, D., Berlin: Springer, 2009, p. 1097-1108. Conference paper (Refereed)
    Abstract [en]

    In this work we examine in detail the use of optimisation algorithms on deformable template matching problems. We start with the examination of simple, direct-search methods and move on to more complicated evolutionary approaches. Our goal is twofold: first, to evaluate a number of methods under different template matching settings and to introduce certain novel evolutionary optimisation algorithms to computer vision; and second, to explore and analyse any additional advantages of using a hybrid approach over existing methods. We show that in computer vision tasks, evolutionary strategies provide very good choices for optimisation. Our experiments have also indicated that we can improve the convergence speed and results of existing algorithms by using a hybrid approach, as sketched below.
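    The hybrid idea can be illustrated with a small sketch: a global evolutionary optimiser locates the basin of a good match, and a local direct-search method refines it. The affine-only deformation model and SSD error below are illustrative simplifications, not the paper's templates or algorithms:

```python
# Sketch: 'hybrid' optimisation for template matching - a global evolutionary
# search refined by a local direct search (Nelder-Mead).
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import differential_evolution, minimize

def match_error(params, template, image):
    """SSD between the image and an affinely deformed template."""
    a, b, c, d, tx, ty = params
    warped = affine_transform(template, np.array([[a, b], [c, d]]),
                              offset=(tx, ty), output_shape=image.shape)
    return float(((image - warped) ** 2).sum())

def hybrid_match(template, image):
    bounds = [(0.5, 1.5), (-0.5, 0.5), (-0.5, 0.5), (0.5, 1.5),
              (-20, 20), (-20, 20)]
    coarse = differential_evolution(match_error, bounds,
                                    args=(template, image),
                                    maxiter=50, seed=0)
    # Local polish from the evolutionary optimum: the 'hybrid' step.
    fine = minimize(match_error, coarse.x, args=(template, image),
                    method="Nelder-Mead")
    return fine.x
```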

  • 8.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Enhancing motion segmentation by combination of complementary affinities (2012). In: Proceedings of the 21st International Conference on Pattern Recognition, 2012, p. 2198-2201. Conference paper (Other academic)
    Abstract [en]

    Complementary information, when combined in the right way, is capable of improving clustering and segmentation problems. In this paper, we show how it is possible to enhance motion segmentation accuracy with a very simple and inexpensive combination of complementary information, which comes from the column and row spaces of the same measurement matrix. We test our approach on the Hopkins155 dataset where it outperforms all other state-of-the-art methods.
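    A hedged sketch of the combination idea: build one affinity from the row space of the trajectory measurement matrix and one from its column space, fuse them elementwise, and cluster spectrally. The specific affinity formulas here are illustrative stand-ins, not the paper's exact choices:

```python
# Sketch: fusing two complementary affinities before spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

def segment(W, n_motions, rank=4):
    """W: 2F x N measurement matrix of N trajectories over F frames."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    V = Vt[:rank].T                        # row-space coordinates, N x rank
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    A1 = np.abs(V @ V.T)                   # row-space affinity
    C = W - W.mean(axis=0)                 # column-space view of the data
    C /= np.linalg.norm(C, axis=0, keepdims=True)
    A2 = np.abs(C.T @ C)                   # column-space affinity
    A = A1 * A2                            # simple multiplicative fusion
    return SpectralClustering(n_clusters=n_motions,
                              affinity="precomputed").fit_predict(A)
```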

  • 9.
    Zografos, Vasileios
    University College, London, UK.
    Pose-invariant, model-based object recognition, using linear combination of views and Bayesian statistics (2009). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis presents an in-depth study on the problem of object recognition, and in particular the detection of 3-D objects in 2-D intensity images which may be viewed from a variety of angles. A solution to this problem remains elusive to this day, since it involves dealing with variations in geometry, photometry and viewing angle, noise, occlusions and incomplete data. This work restricts its scope to a particular kind of extrinsic variation: variation of the image due to changes in the viewpoint from which the object is seen.

    A technique is proposed and developed to address this problem, which falls into the category of view-based approaches, that is, a method in which an object is represented as a collection of a small number of 2-D views, as opposed to a generation of a full 3-D model. This technique is based on the theoretical observation that the geometry of the set of possible images of an object undergoing 3-D rigid transformations and scaling may, under most imaging conditions, be represented by a linear combination of a small number of 2-D views of that object. It is therefore possible to synthesise a novel image of an object given at least two existing and dissimilar views of the object, and a set of linear coefficients that determine how these views are to be combined in order to synthesise the new image.

    The method works in conjunction with a powerful optimization algorithm, to search and recover the optimal linear combination coefficients that will synthesize a novel image which is as similar as possible to the target, scene view. If the similarity between the synthesized and the target images is above some threshold, then an object is determined to be present in the scene and its location and pose are defined, in part, by the coefficients. The key benefit of using this technique is that because it works directly with pixel values, it avoids the need for problematic, low-level feature extraction and solution of the correspondence problem. As a result, a linear combination of views (LCV) model is easy to construct and use, since it only requires a small number of stored, 2-D views of the object in question, and the selection of a few landmark points on the object, a process which is easily carried out during the offline, model-building stage. In addition, this method is general enough to be applied across a variety of recognition problems and different types of objects.

    The development and application of this method is initially explored looking at two-dimensional problems, before extending the same principles to 3-D. Additionally, the method is evaluated across synthetic and real-image datasets, containing variations in the objects' identity and pose. Possible future extensions to incorporate a foreground/background model and lighting variations of the pixels are also examined.

  • 10.
    Zografos, Vasileios
    et al.
    University College London.
    Buxton, Bernard
    University College London.
    A Bayesian Approach to 3D Object Recognition Using Linear Combination of 2D Views (2008). In: International Conference on Computer Vision Theory and Applications, 2008, p. 295-298. Conference paper (Refereed)
  • 11.
    Zografos, Vasileios
    et al.
    University College London.
    Buxton, Bernard
    University College London.
    Affine Invariant, Model-Based Object Recognition Using Robust Metrics and Bayesian Statistics (2005). In: Proceedings of the International Conference on Image Analysis and Recognition, Berlin: Springer, 2005, p. 407-414. Conference paper (Refereed)
  • 12.
    Zografos, Vasileios
    et al.
    University College London.
    Buxton, Bernard
    University College London.
    Evaluation of linear combination of views for object recognition: Chapter 5 (2007). In: Advances in Intelligent Information Processing: Tools and Applications / [ed] B. Chanda and C. A. Murthy, World Scientific Publishing Company, 2007, p. 85-106. Chapter in book (Other academic)
    Abstract [en]

    In this work, we present a method for model-based recognition of 3d objects from a small number of 2d intensity images taken from nearby, but otherwise arbitrary viewpoints. Our method works by linearly combining images from two (or more) viewpoints of a 3d object to synthesise novel views of the object. The object is recognised in a target image by matching to such a synthesised, novel view. All that is required is the recovery of the linear combination parameters, and since we are working directly with pixel intensities, we suggest searching the parameter space using a global, evolutionary optimisation algorithm combined with a local search method in order efficiently to recover the optimal parameters and thus recognise the object in the scene. We have experimented with both synthetic data and real-image, public databases.

  • 13.
    Zografos, Vasileios
    et al.
    University College London.
    Buxton, Bernard
    University College London.
    Pose-Invariant 3D Object Recognition Using Linear Combination of 2D Views and Evolutionary Optimisation (2007). In: Proceedings of the ICCTA'07, 2007, p. 645-649. Conference paper (Refereed)
    Abstract [en]

    In this work, we present a method for model-based recognition of 3d objects from a small number of 2d intensity images taken from nearby, but otherwise arbitrary viewpoints. Our method works by linearly combining images from two (or more) viewpoints of a 3d object to synthesise novel views of the object. The object is recognised in a target image by matching to such a synthesised, novel view. All that is required is the recovery of the linear combination parameters, and since we are working directly with pixel intensities, we suggest searching the parameter space using an evolutionary optimisation algorithm in order to efficiently recover the optimal parameters and thus recognise the object in the scene.

  • 14.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ellis, Liam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Mester, Rudolf
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Discriminative Subspace Clustering (2013). Conference paper (Refereed)
    Abstract [en]

    We present a novel method for clustering data drawn from a union of arbitrary dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined together in an ensemble, in order to obtain the final clustering result. We have tested our method with 4 challenging datasets and compared against 8 state-of-the-art methods from literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, and also of low computational complexity.
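    The clustering-by-classification loop can be sketched as follows, assuming k-NN neighbourhoods supply the locality-based pseudo-labels, sklearn's quadratic discriminant stands in for the quadratic classifier, and a co-association matrix combines the ensemble; all of these details are illustrative, not the paper's exact construction:

```python
# Sketch: DiSC-style clustering by classification with an ensemble of
# quadratic classifiers trained on locality-derived pseudo-labels.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neighbors import NearestNeighbors

def disc_like(X, n_clusters, n_rounds=10, k=8, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    co = np.zeros((n, n))
    _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
    for _ in range(n_rounds):
        # Random seed points plus their neighbourhoods give tentative labels
        # (points near each other tend to lie on the same subspace).
        seeds = rng.choice(n, size=n_clusters, replace=False)
        Xl = np.vstack([X[idx[s]] for s in seeds])
        yl = np.repeat(np.arange(n_clusters), k)
        qda = QuadraticDiscriminantAnalysis(reg_param=1e-3).fit(Xl, yl)
        pred = qda.predict(X)
        co += pred[:, None] == pred[None, :]   # co-association votes
    co /= n_rounds
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(co)
```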

  • 15.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Lenz, Reiner
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Spatio-chromatic image content descriptors and their analysis using Extreme Value Theory (2011). In: Image Analysis: 17th Scandinavian Conference, SCIA 2011, Ystad, Sweden, May 2011. Proceedings, Springer Berlin/Heidelberg, 2011, p. 579-591. Conference paper (Refereed)
    Abstract [en]

    We use the theory of group representations to construct very fast image descriptors that split the vector space of local RGB distributions into small group-invariant subspaces. These descriptors are group theoretical generalizations of the Fourier Transform and can be computed with algorithms similar to the FFT. Because of their computational efficiency they are especially suitable for retrieval, recognition and classification in very large image datasets. We also show that the statistical properties of these descriptors are governed by the principles of the Extreme Value Theory (EVT). This enables us to work directly with parametric probability distribution models, which offer a much lower dimensionality and higher resolution and flexibility. We explore the connection to EVT and analyse the characteristics of these descriptors from a probabilistic viewpoint with the help of large image databases.

  • 16.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Lenz, Reiner
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    The Weibull manifold in low-level image processing: an application to automatic image focusing (2013). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 31, no. 5, p. 401-417. Article in journal (Refereed)
    Abstract [en]

    In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. For a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple implicitly-constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy.
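    As a rough illustration of the autofocus application, each frame of a focal stack can be mapped to a point on the Weibull manifold and scored there. The scale-based score below is an illustrative stand-in for the paper's manifold cost functions and optimiser:

```python
# Sketch: autofocus by mapping each candidate frame to the Weibull manifold.
# Each frame is difference-filtered and the 2-parameter Weibull is fitted to
# the response magnitudes; the scale-based score is an illustrative choice.
import numpy as np
from scipy.stats import weibull_min

def weibull_point(gray):
    """Map a grayscale image to (shape, scale) on the Weibull manifold."""
    resp = np.abs(np.diff(gray, axis=1)).ravel()  # simple difference filter
    k, _, lam = weibull_min.fit(resp[resp > 1e-8], floc=0)
    return k, lam

def autofocus(focal_stack):
    """Pick the frame whose filtered output has the largest Weibull scale
    (sharper images produce stronger derivative responses)."""
    scores = [weibull_point(frame)[1] for frame in focal_stack]
    return int(np.argmax(scores))
```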

  • 17.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Lenz, Reiner
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Ringaby, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Fast segmentation of sparse 3D point trajectories using group theoretical invariants (2015). In: Computer Vision - ACCV 2014, Part IV / [ed] D. Cremers, I. Reid, H. Saito, M.-H. Yang, Springer, 2015, Vol. 9006, p. 675-691. Conference paper (Refereed)
    Abstract [en]

    We present a novel approach for segmenting different motions from 3D trajectories. Our approach uses the theory of transformation groups to derive a set of invariants of 3D points located on the same rigid object. These invariants are inexpensive to calculate, involving primarily QR factorizations of small matrices. The invariants are easily converted into a set of robust motion affinities and with the use of a local sampling scheme and spectral clustering, they can be incorporated into a highly efficient motion segmentation algorithm. We have also captured a new multi-object 3D motion dataset, on which we have evaluated our approach, and compared against state-of-the-art competing methods from literature. Our results show that our approach outperforms all methods while being robust to perspective distortions and degenerate configurations.

  • 18.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Fast and accurate motion segmentation using linear combination of views (2011). In: BMVC 2011, 2011, p. 12.1-12.11. Conference paper (Refereed)
    Abstract [en]

    We introduce a simple and efficient procedure for the segmentation of rigidly moving objects, imaged under an affine camera model. For this purpose we revisit the theory of "linear combination of views" (LCV), proposed by Ullman and Basri [20], which states that the set of 2d views of an object undergoing 3d rigid transformations is embedded in a low-dimensional linear subspace that is spanned by a small number of basis views. Our work shows that one may use this theory for motion segmentation, and cluster the trajectories of 3d objects using only two 2d basis views. We therefore propose a practical motion segmentation method, built around LCV, that is very simple to implement and use, and in addition is very fast, meaning it is well suited for real-time SfM and tracking applications. We have experimented on real image sequences, where we show good segmentation results, comparable to the state-of-the-art in literature. If we also consider computational complexity, our proposed method is one of the best performers in combined speed and accuracy.
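    The LCV residual underlying such a segmentation can be sketched in a few lines: fit, by least squares, the coefficients that express one frame's points as a linear combination of their coordinates in two basis views, and use the reprojection error as a clustering score. The exact LCV parametrisation and all names below are illustrative:

```python
# Sketch: per-trajectory reprojection error under a linear-combination-of-
# views (LCV) model fitted to one candidate cluster.
import numpy as np

def lcv_residual(tracks, members, frame, basis=(0, 1)):
    """tracks: F x N x 2 trajectories; members: indices of one cluster.
    Fit LCV coefficients on 'members' for a target frame and return the
    reprojection error of every trajectory under that model."""
    b0, b1 = basis
    # Design matrix per point: [x0, y0, x1, y1, 1] from the two basis views.
    A = np.column_stack([tracks[b0, members], tracks[b1, members],
                         np.ones(len(members))])
    target = tracks[frame, members]              # 2D points to reconstruct
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    A_all = np.column_stack([tracks[b0], tracks[b1],
                             np.ones(tracks.shape[1])])
    return np.linalg.norm(A_all @ coef - tracks[frame], axis=1)
```

    Each trajectory would then be assigned to whichever cluster's coefficients give it the smallest residual, accumulated over the non-basis frames.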

  • 19.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ellis, Liam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Sparse motion segmentation using multiple six-point consistencies (2010). In: The 2nd International Workshop on Video Event Categorization, Tagging and Retrieval (VECTaR 2010), 2010, p. 338-348. Conference paper (Refereed)
    Abstract [en]

    We present a method for segmenting an arbitrary number of moving objects in image sequences using the geometry of 6 points in 2D to infer motion consistency. The method has been evaluated on the Hopkins155 database and surpasses current state-of-the-art methods such as SSC, both in terms of overall performance on two and three motions and in terms of maximum errors. The method works by finding initial clusters in the spatial domain, and then classifying each remaining point as belonging to the cluster that minimizes a motion consistency score. In contrast to most other motion segmentation methods that are based on an affine camera model, the proposed method is fully projective.

  • 20.
    Åström, Freddie
    et al.
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Zografos, Vasileios
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Density Driven Diffusion (2013). In: 18th Scandinavian Conference on Image Analysis, 2013, p. 718-730. Conference paper (Refereed)
    Abstract [en]

    In this work we derive a novel density driven diffusion scheme for image enhancement. Our approach, called D3, is a semi-local method that uses an initial structure-preserving oversegmentation step of the input image.  Because of this, each segment will approximately conform to a homogeneous region in the image, allowing us to easily estimate parameters of the underlying stochastic process thus achieving adaptive non-linear filtering. Our method is capable of producing competitive results when compared to state-of-the-art methods such as non-local means, BM3D and tensor driven diffusion on both color and grayscale images.
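    A toy version of the oversegment-then-adapt idea, assuming SLIC regions and per-region Gaussian smoothing whose strength tracks the local standard deviation; this is a crude stand-in for the paper's estimated stochastic-process parameters and diffusion scheme:

```python
# Sketch: structure-preserving oversegmentation followed by per-region
# adaptive smoothing; the real D3 scheme replaces the Gaussian with a
# density-driven diffusion process.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.segmentation import slic

def d3_like(gray):
    """gray: HxW float image in [0, 1]."""
    segments = slic(gray, n_segments=200, channel_axis=None, start_label=0)
    out = np.empty_like(gray)
    for i in range(segments.max() + 1):
        mask = segments == i
        # Within an (approximately) homogeneous region, the local standard
        # deviation mostly reflects noise, so it can set the filter strength.
        sigma = np.clip(5.0 * gray[mask].std(), 0.5, 3.0)
        out[mask] = gaussian_filter(gray, sigma)[mask]
    return out
```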
