Zografos, Vasileios
Publications (10 of 20)
Zografos, V., Lenz, R., Ringaby, E., Felsberg, M. & Nordberg, K. (2015). Fast segmentation of sparse 3D point trajectories using group theoretical invariants. In: D. Cremers, I. Reid, H. Saito, M.-H. Yang (Eds.), COMPUTER VISION - ACCV 2014, PT IV. Paper presented at the 12th Asian Conference on Computer Vision (ACCV), Singapore, Singapore, November 1-5, 2014 (pp. 675-691). Springer, Vol. 9006
Fast segmentation of sparse 3D point trajectories using group theoretical invariants
2015 (English) In: COMPUTER VISION - ACCV 2014, PT IV / [ed] D. Cremers, I. Reid, H. Saito, M.-H. Yang, Springer, 2015, Vol. 9006, p. 675-691. Conference paper, Published paper (Refereed)
Abstract [en]

We present a novel approach for segmenting different motions from 3D trajectories. Our approach uses the theory of transformation groups to derive a set of invariants of 3D points located on the same rigid object. These invariants are inexpensive to calculate, involving primarily QR factorizations of small matrices. The invariants are easily converted into a set of robust motion affinities and, with the use of a local sampling scheme and spectral clustering, can be incorporated into a highly efficient motion segmentation algorithm. We have also captured a new multi-object 3D motion dataset, on which we have evaluated our approach and compared it against state-of-the-art competing methods from the literature. Our results show that our approach outperforms all competing methods while being robust to perspective distortions and degenerate configurations.
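As an illustration of the pipeline outlined in this abstract, the following minimal Python sketch compares small trajectory blocks via QR-based subspace distances, converts the distances into affinities, and spectrally clusters them. The subspace-angle distance, the Gaussian kernel and all parameters are stand-in assumptions, not the paper's actual group-theoretical invariants.

import numpy as np
from scipy.linalg import qr, subspace_angles
from sklearn.cluster import SpectralClustering

def motion_affinity(blocks, sigma=0.5):
    """blocks: list of (3*frames, k) trajectory matrices, one per sampled point group.
    Returns a symmetric affinity matrix built from a QR-based subspace distance
    (a placeholder for the paper's invariants)."""
    bases = [qr(b, mode='economic')[0] for b in blocks]   # small QR factorizations
    n = len(bases)
    A = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(subspace_angles(bases[i], bases[j]))
            A[i, j] = A[j, i] = np.exp(-(d / sigma) ** 2)
    return A

def segment(blocks, n_motions):
    """Cluster the sampled trajectory groups into n_motions rigid motions."""
    A = motion_affinity(blocks)
    return SpectralClustering(n_clusters=n_motions,
                              affinity='precomputed').fit_predict(A)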

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9006
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-114313 (URN) 10.1007/978-3-319-16817-3_44 (DOI) 000362444500044 () 978-3-319-16816-6 (ISBN) 978-3-319-16817-3 (ISBN)
Conference
12th Asian Conference on Computer Vision (ACCV), Singapore, Singapore, November 1-5, 2014
Projects
VPS, CUAS, ETT
Available from: 2015-02-18 Created: 2015-02-18 Last updated: 2018-10-15
Åström, F., Zografos, V. & Felsberg, M. (2013). Density Driven Diffusion. In: 18th Scandinavian Conference on Image Analysis, 2013. Paper presented at the 18th Scandinavian Conference on Image Analysis (SCIA 2013), 17-20 June 2013, Espoo, Finland (pp. 718-730).
Density Driven Diffusion
2013 (English) In: 18th Scandinavian Conference on Image Analysis, 2013, p. 718-730. Conference paper, Published paper (Refereed)
Abstract [en]

In this work we derive a novel density driven diffusion scheme for image enhancement. Our approach, called D3, is a semi-local method that uses an initial structure-preserving oversegmentation step of the input image. Because of this, each segment approximately conforms to a homogeneous region in the image, allowing us to easily estimate the parameters of the underlying stochastic process and thus achieve adaptive non-linear filtering. Our method is capable of producing competitive results when compared to state-of-the-art methods such as non-local means, BM3D and tensor driven diffusion on both color and grayscale images.
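The semi-local idea can be illustrated with a short sketch: oversegment the image, estimate a per-segment statistic, and adapt the amount of smoothing to it. The actual D3 diffusion scheme is not reproduced here; the Gaussian smoothing, the exponential blending weight and all parameter values below are placeholder assumptions.

import numpy as np
from skimage.segmentation import slic
from scipy.ndimage import gaussian_filter

def adaptive_smooth(image, n_segments=400, base_sigma=2.0):
    """image: float RGB array in [0, 1], shape (H, W, 3)."""
    labels = slic(image, n_segments=n_segments, start_label=0)   # oversegmentation
    smoothed = gaussian_filter(image, sigma=(base_sigma, base_sigma, 0))
    out = image.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        local_std = image[mask].std()              # rough per-segment variability estimate
        alpha = float(np.exp(-local_std / 0.1))    # flat segments get more smoothing
        out[mask] = alpha * smoothed[mask] + (1 - alpha) * image[mask]
    return out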

Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 7944
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-90016 (URN) 10.1007/978-3-642-38886-6_67 (DOI) 000342988500067 () 978-3-642-38885-9 (ISBN) 978-3-642-38886-6 (ISBN)
Conference
18th Scandinavian Conference on Image Analysis (SCIA 2013), 17-20 June 2013, Espoo, Finland
Projects
VIDI, GARNICS, BILDLAB
Available from: 2013-04-08 Created: 2013-03-14 Last updated: 2018-01-24. Bibliographically approved
Lenz, R., Zografos, V. & Solli, M. (2013). Dihedral Color Filtering. In: Christine Fernandez-Maloigne (Ed.), Advanced Color Image Processing and Analysis (pp. 119-145). Springer
Dihedral Color Filtering
2013 (English) In: Advanced Color Image Processing and Analysis / [ed] Christine Fernandez-Maloigne, Springer, 2013, p. 119-145. Chapter in book (Refereed)
Abstract [en]

This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us "purple with rage" or "green with envy" and cause us to "see red." Defining colors has been the work of centuries, culminating in today's complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today's color imaging.

Place, publisher, year, edition, pages
Springer, 2013
Keywords
Engineering, Computer vision, Visualization, Signal, Image and Speech Processing, Computer Imaging, Vision, Pattern Recognition and Graphics
National Category
Signal Processing
Identifiers
urn:nbn:se:liu:diva-89822 (URN) 10.1007/978-1-4419-6190-7_5 (DOI) 978-1-4419-6189-1 (ISBN) 978-1-4419-6190-7 (ISBN)
Projects
FP7/2007-2013 - Challenge 2 - Cognitive Systems, Interaction, Robotics - under grant agreement No 247947-GARNICS; Swedish Science Foundation; VPS
Available from: 2013-03-07 Created: 2013-03-07 Last updated: 2016-08-31. Bibliographically approved
Zografos, V., Ellis, L. & Mester, R. (2013). Discriminative Subspace Clustering. Paper presented at the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013), June 23-28, 2013, Portland, Oregon, USA.
Discriminative Subspace Clustering
2013 (English). Conference paper, Published paper (Refereed)
Abstract [en]

We present Discriminative Subspace Clustering (DiSC), a novel method for clustering data drawn from a union of subspaces of arbitrary dimension. DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined in an ensemble to obtain the final clustering result. We have tested our method on 4 challenging datasets and compared it against 8 state-of-the-art methods from the literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, while also having low computational complexity.
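The "clustering by classification" idea reads roughly as follows in code: seed pseudo-labels from local neighbourhoods, train a quadratic classifier on them, repeat over several random seedings, and fuse the predictions in a co-association matrix that is finally clustered. This is only a sketch of the scheme described above; the seeding strategy, the QDA regularisation and all parameters are assumptions, not the paper's choices.

import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import SpectralClustering

def disc_like_clustering(X, n_clusters, n_rounds=10, seed_size=20, rng=0):
    rng = np.random.default_rng(rng)
    n = len(X)
    coassoc = np.zeros((n, n))
    nbrs = NearestNeighbors(n_neighbors=seed_size).fit(X)
    for _ in range(n_rounds):
        # one pseudo-class per cluster, seeded by a random point and its neighbours
        seeds = rng.choice(n, size=n_clusters, replace=False)
        idx = nbrs.kneighbors(X[seeds], return_distance=False).reshape(-1)
        y = np.repeat(np.arange(n_clusters), seed_size)
        clf = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(X[idx], y)
        pred = clf.predict(X)                      # classify all points
        coassoc += pred[:, None] == pred[None, :]  # ensemble vote
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(coassoc / n_rounds)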

Series
2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), ISSN 1063-6919
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-89979 (URN) 10.1109/CVPR.2013.274 (DOI) 000331094302022 ()
Conference
26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013), June 23-28, 2013, Portland, Oregon, USA
Projects
GARNICS, VR ETT, ELIIT, CADICS
Available from: 2013-03-12 Created: 2013-03-12 Last updated: 2018-01-11
Lenz, R. & Zografos, V. (2013). Fisher Information and the Combination of RGB channels. In: Shoji Tominaga, Raimondo Schettini, Alain Trémeau (Eds.), 4th International Workshop, CCIW 2013, Chiba, Japan, March 3-5, 2013. Proceedings. Paper presented at the Computational Color Imaging Workshop (CCIW 2013), 4-5 March 2013, Chiba, Japan (pp. 250-264). Springer Berlin/Heidelberg
Fisher Information and the Combination of RGB channels
2013 (English) In: 4th International Workshop, CCIW 2013, Chiba, Japan, March 3-5, 2013. Proceedings / [ed] Shoji Tominaga, Raimondo Schettini, Alain Trémeau, Springer Berlin/Heidelberg, 2013, p. 250-264. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce a method to combine the color channels of an image into a scalar-valued image. Linear combinations of the RGB channels are constructed using the Fisher-Trace-Information (FTI), defined as the trace of the Fisher information matrix of the Weibull distribution, as a cost function. The FTI characterizes the local geometry of the Weibull manifold independently of the parametrization of the distribution. We show that minimizing the FTI leads to contrast-enhanced images suitable for segmentation processes. The Riemannian structure of the manifold of Weibull distributions is used to design optimization methods for finding optimal RGB weight vectors. Using a threshold procedure, we find good solutions even for images with limited content variation. Experiments show how the method adapts to images with widely varying visual content. Using these image-dependent de-colorizations, one can obtain substantially improved segmentation results compared to a mapping with pre-defined coefficients.
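A small sketch of how such a cost could be evaluated: combine the channels with a unit-norm weight vector, fit a Weibull distribution to gradient magnitudes of the result, and score the weights by the trace of the Fisher information. Here the trace is approximated numerically from the observed information rather than the paper's closed form, and the gradient filter, random search and all parameters are assumptions made only for this illustration.

import numpy as np
from scipy.stats import weibull_min
from scipy.ndimage import sobel

def fti_score(gray):
    """Trace of a numerically estimated Fisher information of a Weibull fit
    to the gradient magnitudes of a single-channel image."""
    mags = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1)).ravel()
    mags = mags[mags > 1e-6]
    c, _, scale = weibull_min.fit(mags, floc=0)
    def nll(t):
        return -np.mean(weibull_min.logpdf(mags, t[0], loc=0, scale=t[1]))
    theta = np.array([c, scale])
    h = 1e-4 * theta
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            # mixed central differences of the mean negative log-likelihood
            tpp = theta.copy(); tpp[i] += h[i]; tpp[j] += h[j]
            tpm = theta.copy(); tpm[i] += h[i]; tpm[j] -= h[j]
            tmp = theta.copy(); tmp[i] -= h[i]; tmp[j] += h[j]
            tmm = theta.copy(); tmm[i] -= h[i]; tmm[j] -= h[j]
            H[i, j] = (nll(tpp) - nll(tpm) - nll(tmp) + nll(tmm)) / (4 * h[i] * h[j])
    return np.trace(H)

def best_weights(rgb, n_samples=200, rng=0):
    """rgb: float array (H, W, 3). Returns the sampled unit-norm weight vector
    with the smallest FTI score (random search as a simple stand-in optimiser)."""
    rng = np.random.default_rng(rng)
    candidates = rng.random((n_samples, 3))
    candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
    scores = [fti_score(rgb @ w) for w in candidates]
    return candidates[int(np.argmin(scores))]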

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2013
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 7786
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-89132 (URN) 10.1007/978-3-642-36700-7_20 (DOI) 000342983600020 () 978-3-642-36699-4 (ISBN) 978-3-642-36700-7 (ISBN)
Conference
Computational Color Imaging Workshop (CCIW 2013), 4-5 March 2013, Chiba, Japan
Projects
GARNICS, VPS
Available from: 2013-04-08 Created: 2013-02-21 Last updated: 2018-02-15. Bibliographically approved
Zografos, V., Lenz, R. & Felsberg, M. (2013). The Weibull manifold in low-level image processing: an application to automatic image focusing. Image and Vision Computing, 31(5), 401-417
The Weibull manifold in low-level image processing: an application to automatic image focusing
2013 (English) In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 31, no 5, p. 401-417. Article in journal (Refereed), Published
Abstract [en]

In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. As a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple, implicitly-constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from the literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy.
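A bare-bones illustration of the first two steps of the framework (difference filtering and Weibull fitting), applied to a focal stack, is given below. The paper's manifold-based cost function is replaced by a simple placeholder score on the fitted parameters, so this is an assumption-laden sketch rather than the published method.

import numpy as np
from scipy.stats import weibull_min
from scipy.ndimage import sobel

def weibull_params(gray):
    """Fit a 2-parameter Weibull (shape, scale) to the gradient magnitudes."""
    mags = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1)).ravel()
    c, _, scale = weibull_min.fit(mags[mags > 1e-6], floc=0)
    return c, scale

def autofocus(stack):
    """stack: list of grayscale frames captured at increasing focus positions.
    Returns the index of the frame judged best focused."""
    params = [weibull_params(frame) for frame in stack]
    # placeholder score: sharp frames tend to have a large gradient scale relative
    # to the shape parameter (an assumption, not the cost function of the paper)
    scores = [scale / c for c, scale in params]
    return int(np.argmax(scores))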

Place, publisher, year, edition, pages
Elsevier, 2013
Keywords
Weibull distribution; image processing; Weibull manifold; image autofocus
National Category
Engineering and Technology; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-90879 (URN) 10.1016/j.imavis.2013.03.004 (DOI) 000319713100004 ()
Projects
GARNICS, CADICS, VPS, ETT
Funder
EU, FP7, Seventh Framework Programme, 249747; Swedish Research Council; Linnaeus research environment CADICS; Swedish Foundation for Strategic Research, IIS11-0081
Available from: 2013-04-11 Created: 2013-04-08 Last updated: 2018-01-11. Bibliographically approved
Zografos, V. (2012). Enhancing motion segmentation by combination of complementary affinities. In: Proceedings of the 21st International Conference on Pattern Recognition. Paper presented at the 21st International Conference on Pattern Recognition (ICPR 2012), 11-15 November 2012, Tsukuba, Japan (pp. 2198-2201).
Enhancing motion segmentation by combination of complementary affinities
2012 (English) In: Proceedings of the 21st International Conference on Pattern Recognition, 2012, p. 2198-2201. Conference paper, Oral presentation only (Other academic)
Abstract [en]

Complementary information, when combined in the right way, can improve clustering and segmentation results. In this paper, we show how motion segmentation accuracy can be enhanced with a very simple and inexpensive combination of complementary information obtained from the column and row spaces of the same measurement matrix. We test our approach on the Hopkins155 dataset, where it outperforms all other state-of-the-art methods.
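The combination step can be sketched very compactly: derive two per-trajectory representations from the measurement matrix, turn each into an affinity, fuse them elementwise, and spectrally cluster. The two SVD-based representations below are simple stand-ins for the paper's column- and row-space affinities; the fusion rule and all parameters are assumptions.

import numpy as np
from sklearn.cluster import SpectralClustering

def angular_affinity(V):
    """Absolute cosine similarity between the rows of V."""
    Vn = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
    return np.abs(Vn @ Vn.T)

def combined_segmentation(W, n_motions, rank=8):
    """W: (2F, P) measurement matrix, one column per point trajectory."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    rep1 = Vt[:rank].T                    # unit right-singular-vector coordinates
    rep2 = (U[:, :rank].T @ W).T          # singular-value-weighted coordinates
    A = angular_affinity(rep1) * angular_affinity(rep2)   # elementwise fusion
    return SpectralClustering(n_clusters=n_motions,
                              affinity='precomputed').fit_predict(A)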

National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-85692 (URN) 978-1-4673-2216-4 (ISBN)
Conference
21st International Conference on Pattern Recognition (ICPR 2012), 11-15 November 2012, Tsukuba, Japan
Projects
GARNICS
Funder
EU, FP7, Seventh Framework Programme
Available from: 2013-04-08 Created: 2012-11-28 Last updated: 2018-01-12. Bibliographically approved
Ellis, L. & Zografos, V. (2012). Online Learning for Fast Segmentation of Moving Objects. In: Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z. (Eds.), ACCV 2012. Paper presented at the 11th Asian Conference on Computer Vision (ACCV 2012), 5-9 November 2012, Daejeon, Korea (pp. 52-65). Springer Berlin/Heidelberg
Online Learning for Fast Segmentation of Moving Objects
2012 (English) In: ACCV 2012 / [ed] Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z., Springer Berlin/Heidelberg, 2012, p. 52-65. Conference paper, Published paper (Other academic)
Abstract [en]

This work addresses the problem of fast, online segmentation of moving objects in video. We pose this as a discriminative online semi-supervised appearance learning task, where supervising labels are autonomously generated by a motion segmentation algorithm. The computational complexity of the approach is significantly reduced by performing learning and classification on oversegmented image regions (superpixels), rather than per pixel. In addition, we further exploit the sparse trajectories from the motion segmentation to obtain a simple model that encodes the spatial properties and location of objects at each frame. Fusing these complementary cues produces good object segmentations at very low computational cost. In contrast to previous work, the proposed approach (1) performs segmentation on-the-fly (allowing for applications where data arrives sequentially), (2) has no prior model of object types or 'objectness', and (3) operates at significantly reduced computational cost. The approach and its ability to learn, disambiguate and segment the moving objects in the scene is evaluated on a number of benchmark video sequences.
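The online loop described above can be sketched as follows: oversegment each frame into superpixels, ask a motion-segmentation supervisor for sparse superpixel labels, update an incremental classifier, and predict labels for all superpixels of the frame. The motion supervisor is abstracted as a user-supplied callable, and the colour features and SGD classifier are assumptions made only for this illustration.

import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import SGDClassifier

def superpixel_features(frame, labels):
    """Mean colour per superpixel; frame is a float RGB image (H, W, 3)."""
    feats = np.zeros((labels.max() + 1, 3))
    for lab in range(labels.max() + 1):
        feats[lab] = frame[labels == lab].mean(axis=0)
    return feats

def online_segment(frames, motion_labeller, classes=(0, 1)):
    """motion_labeller(t, superpixel_map) -> {superpixel_id: class} is assumed to
    wrap the motion segmentation algorithm that provides sparse supervision."""
    clf, fitted, masks = SGDClassifier(loss='log_loss'), False, []
    for t, frame in enumerate(frames):
        sp = slic(frame, n_segments=300, start_label=0)
        feats = superpixel_features(frame, sp)
        supervision = motion_labeller(t, sp)
        if supervision:                                   # online update step
            ids = np.fromiter(supervision.keys(), dtype=int)
            y = np.fromiter(supervision.values(), dtype=int)
            clf.partial_fit(feats[ids], y, classes=np.array(classes))
            fitted = True
        pred = clf.predict(feats) if fitted else np.zeros(len(feats), dtype=int)
        masks.append(pred[sp])                            # superpixel labels -> pixel mask
    return masks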

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2012
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 7725
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-86211 (URN) 10.1007/978-3-642-37444-9_5 (DOI) 978-3-642-37443-2 (ISBN) 978-3-642-37444-9 (ISBN)
Conference
11th Asian Conference on Computer Vision (ACCV 2012), 5-9 November 2012, Daejeon, Korea
Projects
GARNICS, ELLIIT, ETT, CUAS
Available from: 2012-12-11 Created: 2012-12-11 Last updated: 2018-02-19
Lenz, R. & Zografos, V. (2012). RGB Filter design using the properties of the Weibull manifold. In: CGIV 2012 Sixth European Conference on Colour in Graphics, Imaging, and Vision: Volume 6. Paper presented at the 6th European Conference on Colour in Graphics, Imaging, and Vision, May 6-9, Amsterdam (pp. 200-205). Springfield, VA
RGB Filter design using the properties of the Weibull manifold
2012 (English) In: CGIV 2012 Sixth European Conference on Colour in Graphics, Imaging, and Vision: Volume 6, Springfield, VA, 2012, p. 200-205. Conference paper, Published paper (Other academic)
Abstract [en]

Combining the channels of a multi-band image with the help of a pixelwise weighted sum is one of the basic operations in color and multispectral image processing. A typical example is the conversion of RGB images to intensity images. Usually the weights are given by some standard values or chosen heuristically. This takes into account neither the statistical nature of the image source nor the intended further processing of the scalar image. In this paper we present a framework in which we specify the statistical properties of the input data with the help of a representative collection of image patches. On the output side we specify the intended processing of the scalar image with the help of a filter kernel with zero-mean filter coefficients. Given the image patches and the filter kernel, we use the Fisher information of the manifold of two-parameter Weibull distributions to introduce the trace of the Fisher information matrix as a cost function on the space of weight vectors of unit length. We illustrate the properties of the method with the help of a database of scanned leaves and some color images from the internet. For the green leaves we find that the result of the mapping is similar to standard mappings like Matlab's RGB2Gray weights. We then change the colour of the leaf using a global shift in the HSV representation of the original image and show how the proposed mapping adapts to this color change. This is also confirmed on other natural images, where the new mapping reveals much more subtle details in the processed image. In the last experiment we show that the mapping emphasizes visually salient points in the image, whereas the standard mapping only emphasizes global intensity changes. The proposed approach to RGB filter design thus provides a new methodology based only on the properties of the image statistics and the intended post-processing. It adapts to color changes of the input images and, due to its foundation in the statistics of extreme-value distributions, it is suitable for detecting salient regions in an image.

Place, publisher, year, edition, pages
Springfield, VA, 2012
National Category
Signal Processing
Identifiers
urn:nbn:se:liu:diva-77808 (URN) 978-0-89208-299-5 (ISBN)
Conference
6th European Conference on Colour in Graphics, Imaging, and Vision, May 6-9, Amsterdam
Projects
European Community's Seventh Framework Programme FP7/2007-2013 - Challenge 2 Cognitive Systems, Interaction, Robotics - No 247947-GARNICS; VR 2008-4643, Groups and Manifolds for Information Processing; VPS
Available from: 2013-04-08 Created: 2012-05-30 Last updated: 2016-08-31. Bibliographically approved
Zografos, V. & Nordberg, K. (2011). Fast and accurate motion segmentation using linear combination of views. In: BMVC 2011. Paper presented at the 22nd British Machine Vision Conference (BMVC 2011), Dundee, United Kingdom (pp. 12.1-12.11).
Fast and accurate motion segmentation using linear combination of views
2011 (English) In: BMVC 2011, 2011, p. 12.1-12.11. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce a simple and efficient procedure for the segmentation of rigidly moving objects imaged under an affine camera model. For this purpose we revisit the theory of "linear combination of views" (LCV), proposed by Ullman and Basri [20], which states that the set of 2d views of an object undergoing 3d rigid transformations is embedded in a low-dimensional linear subspace that is spanned by a small number of basis views. Our work shows that one may use this theory for motion segmentation and cluster the trajectories of 3d objects using only two 2d basis views. We therefore propose a practical motion segmentation method, built around LCV, that is very simple to implement and use, and in addition very fast, which makes it well suited for real-time SfM and tracking applications. We have experimented on real image sequences, where we show good segmentation results comparable to the state-of-the-art in the literature. When computational complexity is also taken into account, our proposed method is one of the best performers in combined speed and accuracy.
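The LCV property the abstract relies on (under an affine camera, point coordinates in any frame are approximately a linear combination of the coordinates in two basis frames plus a constant) lends itself to a short sketch: fit that linear model to random local groups of trajectories, mark the trajectories each model explains well, and accumulate the co-occurrences into an affinity for spectral clustering. The group sampling, the median threshold and all parameters are assumptions rather than the paper's exact scheme.

import numpy as np
from sklearn.cluster import SpectralClustering

def lcv_residuals(tracks, group, b1=0, b2=-1):
    """tracks: (F, P, 2) image trajectories. Returns per-trajectory residuals of
    expressing every frame as a linear combination of basis frames b1 and b2,
    with coefficients estimated from the trajectories in `group`."""
    F, P, _ = tracks.shape
    basis_g = np.hstack([tracks[b1, group], tracks[b2, group], np.ones((len(group), 1))])
    basis_all = np.hstack([tracks[b1], tracks[b2], np.ones((P, 1))])
    res = np.zeros(P)
    for f in range(F):
        coeffs, *_ = np.linalg.lstsq(basis_g, tracks[f, group], rcond=None)
        res += np.linalg.norm(basis_all @ coeffs - tracks[f], axis=1)
    return res / F

def lcv_segmentation(tracks, n_motions, n_groups=100, group_size=8, rng=0):
    rng = np.random.default_rng(rng)
    P = tracks.shape[1]
    A = np.zeros((P, P))
    for _ in range(n_groups):
        group = rng.choice(P, size=group_size, replace=False)
        r = lcv_residuals(tracks, group)
        consistent = r < np.median(r)          # trajectories explained by this group's motion
        A += np.outer(consistent, consistent)
    return SpectralClustering(n_clusters=n_motions,
                              affinity='precomputed').fit_predict(A / n_groups)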

National Category
Computer Systems
Identifiers
urn:nbn:se:liu:diva-72921 (URN) 10.5244/C.25.12 (DOI) 1-901725-43-X (ISBN)
Conference
22nd British Machine Vision Conference (BMVC 2011), Dundee, United Kingdom
Available from: 2011-12-16 Created: 2011-12-10 Last updated: 2016-06-09. Bibliographically approved