Publications (10 of 88)
Lenz, R. (2016). Eye movements and information geometry. Optical Society of America. Journal A: Optics, Image Science, and Vision, 33(8), 1598-1603
Eye movements and information geometry
2016 (English) In: Optical Society of America. Journal A: Optics, Image Science, and Vision, ISSN 1084-7529, E-ISSN 1520-8532, Vol. 33, no 8, p. 1598-1603. Article in journal (Refereed), Published
Abstract [en]

The human visual system uses eye movements to gather visual information. They act as visual scanning processes and can roughly be divided into two types: small movements around fixation points and larger movements between fixation points. These processes are often modeled as random walks, and recent models based on heavy-tailed distributions, also known as Lévy flights, have been used in such investigations. In contrast to these approaches we do not model the stochastic processes, but show that the step lengths of the movements between fixation points follow generalized Pareto distributions (GPDs). We use general arguments from the theory of extreme value statistics to motivate the use of the GPD and show empirically that GPDs provide good fits for measured eye tracking data. In the framework of information geometry, the GPDs with a common threshold form a two-dimensional Riemann manifold with the Fisher information matrix as a metric. We compute the Fisher information matrix for the GPDs and introduce a feature vector describing a GPD by its parameters and by geometrical properties of its Fisher information matrix. In our statistical analysis we use eye tracker measurements from a database with 15 observers viewing 1003 images under free-viewing conditions. We use Matlab functions with their standard parameter settings and show that a naive Bayes classifier using the eigenvalues of the Fisher information matrix achieves a high classification rate in identifying the 15 observers in the database.
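
To make the pipeline concrete, the following minimal Python sketch shows the kind of analysis the abstract describes: fit a GPD to the step lengths above a threshold, derive features from the Fisher information matrix, and train a naive Bayes classifier. This is an illustration only, not the paper's Matlab code; the threshold value, the data layout, and the asymptotic form of the Fisher matrix used below are assumptions.

# Illustrative sketch (not the paper's code): GPD fit per measurement set,
# eigenvalues of the (asymptotic) Fisher information matrix as features,
# naive Bayes classification of the observers.
import numpy as np
from scipy.stats import genpareto
from sklearn.naive_bayes import GaussianNB

def gpd_fisher_features(step_lengths, threshold):
    x = np.asarray(step_lengths, dtype=float)
    excesses = x[x > threshold] - threshold              # peaks over threshold
    shape, _, scale = genpareto.fit(excesses, floc=0.0)  # location fixed at 0
    # Commonly cited inverse Fisher information (valid for shape > -0.5);
    # sign conventions for the shape parameter differ between texts.
    inv_fisher = (1.0 + shape) * np.array([[2.0 * scale**2, scale],
                                           [scale, 1.0 + shape]])
    eigvals = np.linalg.eigvalsh(np.linalg.inv(inv_fisher))
    return np.concatenate(([shape, scale], eigvals))

def train(observer_steps, labels, threshold=2.0):
    # observer_steps: list of arrays of step lengths (hypothetical data layout)
    features = np.array([gpd_fisher_features(s, threshold) for s in observer_steps])
    return GaussianNB().fit(features, labels)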

Keywords
Vision - eye movements; Vision modeling
National Category
Signal Processing
Identifiers
urn:nbn:se:liu:diva-130590 (URN), 10.1364/JOSAA.33.001598 (DOI), 000382005000020 (), 27505658 (PubMedID)
Funder
Swedish Foundation for Strategic Research, IIS11-0081; Swedish Research Council, 2014-6227
Note

Funding agencies: Stiftelsen för Strategisk Forskning (SSF) [IIS11-0081]; Vetenskapsrådet (VR) [2014-6227]

Available from: 2016-08-17 Created: 2016-08-17 Last updated: 2017-11-28
Lenz, R. (2016). Generalized Pareto Distributions-Application to Autofocus in Automated Microscopy. IEEE Journal on Selected Topics in Signal Processing, 10(1), 92-+
Generalized Pareto Distributions-Application to Autofocus in Automated Microscopy
2016 (English) In: IEEE Journal on Selected Topics in Signal Processing, ISSN 1932-4553, E-ISSN 1941-0484, Vol. 10, no 1, p. 92-+. Article in journal (Refereed), Published
Abstract [en]

Dihedral filters correspond to the Fourier transform of functions defined on square grids. For gray value images there are six pairs of dihedral edge-detector filters on 5 × 5 windows. In low-level image statistics the Weibull or generalized extreme value distributions are often used as statistical models of such filter results. Since only points with high filter magnitudes are of interest, we argue that the generalized Pareto distribution is a better choice. In practice this also leads to more efficient algorithms, since only a fraction of the raw filter results has to be analyzed. The generalized Pareto distributions with a fixed threshold form a Riemann manifold with the Fisher information matrix as a metric tensor. For the generalized Pareto distributions we compute the determinant of the inverse Fisher information matrix as a function of the shape and scale parameters and show that it is the product of a polynomial in the shape parameter and the squared scale parameter. We then show that this determinant defines a sharpness function that can be used in autofocus algorithms. We evaluate the properties of this sharpness function with the help of a benchmark database of microscopy images with known ground truth focus positions. We show that the method based on this sharpness function results in a focus estimate that lies within the given ground truth interval for the vast majority of focal sequences. The cases where it fails are mainly sequences with very poor image quality and sequences with sharp structures in different layers. The analytical structure given by the Riemann geometry of the space of probability density functions can be used to construct autofocus methods that are more efficient than methods based on empirical moments.
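
For reference, in one common parametrization (threshold zero, scale \sigma > 0, shape \xi > -1/2; sign conventions for the shape parameter vary, and the paper's normalization may differ) the density and the determinant result quoted in the abstract read

f(x;\sigma,\xi) = \frac{1}{\sigma}\Bigl(1+\xi\,\frac{x}{\sigma}\Bigr)^{-1/\xi-1}, \qquad
I^{-1}(\sigma,\xi) = (1+\xi)\begin{pmatrix} 2\sigma^{2} & \sigma \\ \sigma & 1+\xi \end{pmatrix}, \qquad
\det I^{-1}(\sigma,\xi) = \sigma^{2}(1+\xi)^{2}(1+2\xi),

i.e. a polynomial in the shape parameter multiplied by the squared scale parameter.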

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Imaging; focusing; geometry; information geometry; probability; probability distributions; microscopy; image edge detection; generalized Pareto distribution; sharpness function
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-125679 (URN), 10.1109/JSTSP.2015.2482949 (DOI), 000369495900008 ()
Note

Funding agencies: Swedish Research Council [2014-6227]; Swedish Foundation for Strategic Research [IIS11-0081]

Available from: 2016-03-02 Created: 2016-02-29 Last updated: 2017-11-30
Oshima, S., Mochizuki, R., Lenz, R. & Chao, J. (2016). Modeling, Measuring, and Compensating Color Weak Vision. IEEE Transactions on Image Processing, 25(6), 2587-2600
Modeling, Measuring, and Compensating Color Weak Vision
2016 (English) In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 25, no 6, p. 2587-2600. Article in journal (Refereed), Published
Abstract [en]

We use methods from Riemann geometry to investigate transformations between the color spaces of color-normal and color-weak observers. The two main applications are the simulation of the perception of a color-weak observer for a color-normal observer, and the compensation of color images such that a color-weak observer has approximately the same perception as a color-normal observer. The metrics in the color spaces of interest are characterized with the help of ellipsoids defined by the just-noticeable differences between colors, which are measured in color-matching experiments. The constructed mappings are isometries of Riemann spaces that preserve the perceived color differences for both observers. As one of two possible approaches to building such an isometry, we introduce normal coordinates in Riemann spaces as a tool to construct a global color-weak compensation map. Compared with previously used methods, this method is free from approximation errors due to local linearizations, and it avoids the problem of shifting locations of the origin of the local coordinate system. We analyze the variations of the Riemann metrics for different observers obtained from new color-matching experiments and describe three variations of the basic method. The performance of the methods is evaluated with the help of semantic differential tests.
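
The isometry requirement can be stated compactly (notation introduced here for illustration, not taken from the paper): if G_n and G_w denote the metric tensors of the color-normal and color-weak spaces and \varphi is the compensation map with Jacobian J, then perceived color differences are preserved when

J(x)^{\top}\, G_w\bigl(\varphi(x)\bigr)\, J(x) = G_n(x) \quad \text{for all colors } x,

so that small differences d satisfy d^{\top} G_n(x)\, d = (J d)^{\top} G_w(\varphi(x))\,(J d).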

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Color vision; color weak; color transformations; Riemannian geometry; Riemann normal coordinates
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-128718 (URN), 10.1109/TIP.2016.2539679 (DOI), 000375271700004 (), 27046849 (PubMedID)
Note

Funding agencies: Japan Society for the Promotion of Science (JSPS) [23500156]; Institute of Science and Engineering, Chuo University; Swedish Research Council through a framework grant for the project Energy Minimization for Computational Cameras [2014-6227]; Swedish Foundation for Strategic Research [IIS11-0081]

Available from: 2016-06-07 Created: 2016-05-30 Last updated: 2017-11-30
Lenz, R. (2016). Siegel Descriptors for Image Processing. IEEE Signal Processing Letters, 23(5), 625-628
Siegel Descriptors for Image Processing
2016 (English) In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 23, no 5, p. 625-628. Article in journal (Refereed), Published
Abstract [en]

We introduce the Siegel upper half-space with its symplectic geometry as a framework for low-level image processing. We characterize properties of images with the help of six parameters: two spatial coordinates, the pixel value, and the three parameters of a symmetric positive-definite (SPD) matrix such as the metric tensor. We construct a mapping of these parameters into the Siegel upper half-space. From the general theory it is known that there is a distance on this space that is preserved by the symplectic transformations. The construction provides a mapping that has relatively simple transformation properties under spatial rotations, and the distance values can be computed with the help of closed-form expressions, which allows an efficient implementation. We illustrate the properties of this geometry by considering a special case in which we compute for every pixel its symplectic distance to its four spatial neighbors, and we show how spatial distances, pixel value changes, and texture properties are described in this unifying symplectic framework.
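
For orientation, one standard closed-form expression for the symplectically invariant distance on the Siegel upper half-space (the paper may use an equivalent variant) is, for points Z_1, Z_2 with positive-definite imaginary parts,

R = (Z_1 - Z_2)(Z_1 - \bar{Z}_2)^{-1}(\bar{Z}_1 - \bar{Z}_2)(\bar{Z}_1 - Z_2)^{-1}, \qquad
d(Z_1, Z_2)^2 = \sum_k \log^2 \frac{1 + \sqrt{r_k}}{1 - \sqrt{r_k}},

where the r_k are the eigenvalues of the cross-ratio matrix R and lie in [0, 1), which keeps the expression well defined.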

Place, publisher, year, edition, pages
IEEE Press, 2016
Keywords
Feature extraction; image processing; Siegel descriptors; symplectic geometry; transformation groups
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-128143 (URN), 10.1109/LSP.2016.2542850 (DOI), 000374302200007 ()
Available from: 2016-05-19 Created: 2016-05-19 Last updated: 2017-11-30
Mochizuki, R., Kojima, T., Lenz, R. & Chao, J. (2015). Color-weak compensation using local affine isometry based on discrimination threshold matching. Optical Society of America. Journal A: Optics, Image Science, and Vision, 32(11), 2093-2103
Color-weak compensation using local affine isometry based on discrimination threshold matching
2015 (English) In: Optical Society of America. Journal A: Optics, Image Science, and Vision, ISSN 1084-7529, E-ISSN 1520-8532, Vol. 32, no 11, p. 2093-2103. Article in journal (Refereed), Published
Abstract [en]

We develop algorithms for color-weak compensation and color-weak simulation based on Riemannian geometry models of color spaces. The objective function introduced measures the match between the color discrimination thresholds of average normal observers and those of a color-weak observer. The matching process makes use of local affine maps between the color spaces of color-normal and color-weak observers. The method can be used to generate displays of images that provide color-normal and color-weak observers with a similar color-difference experience. It can also be used to simulate the perception of a color-weak observer for color-normal observers. We also introduce a new database of color discrimination threshold measurements for color-normal and color-weak observers obtained at different lightness levels in CIELUV space. The compensation methods include compensation of chromaticity using local affine maps between the chromaticity planes of color-normal and color-weak observers, and a one-dimensional (1D) compensation of lightness. We describe how to determine correspondences between the origins of local coordinates in the color spaces of color-normal and color-weak observers using a neighborhood expansion method. After matching the origins of the two coordinate systems, a local affine map is estimated by solving a nonlinear equation or by singular value decomposition (SVD). We apply the methods to natural images and evaluate their performance using the semantic differential (SD) method.
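
The linear-algebra core of such a local affine map can be sketched as follows (a schematic view, not necessarily the paper's exact construction): if the discrimination-threshold ellipsoids at corresponding points are described by SPD matrices A_n (color-normal) and A_w (color-weak), then any map of the form

M = A_w^{-1/2}\, U\, A_n^{1/2}, \qquad U^{\top} U = I,

satisfies M^{\top} A_w M = A_n, i.e. it carries the normal observer's threshold ellipsoid onto the color-weak observer's one; the orthogonal factor U is the freedom that threshold matching alone does not determine and that the estimation step (nonlinear equation or SVD) has to resolve.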

Place, publisher, year, edition, pages
Optical Society of America, 2015
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-124139 (URN), 10.1364/JOSAA.32.002093 (DOI), 000367201600021 (), 26560924 (PubMedID)
Note

Funding agencies: Institute of Science and Engineering, Chuo University; Japan Society for the Promotion of Science (JSPS) [23500156]; Swedish Foundation for Strategic Research [IIS11-0081]; Swedish Research Council [2014-6227]

Available from: 2016-01-22 Created: 2016-01-19 Last updated: 2017-11-30
Zografos, V., Lenz, R., Ringaby, E., Felsberg, M. & Nordberg, K. (2015). Fast segmentation of sparse 3D point trajectories using group theoretical invariants. In: D. Cremers, I. Reid, H. Saito, M.-H. Yang (Eds.), COMPUTER VISION - ACCV 2014, PT IV. Paper presented at the 12th Asian Conference on Computer Vision (ACCV), Singapore, November 1-5, 2014 (pp. 675-691). Springer, 9006
Fast segmentation of sparse 3D point trajectories using group theoretical invariants
2015 (English) In: COMPUTER VISION - ACCV 2014, PT IV / [ed] D. Cremers, I. Reid, H. Saito, M.-H. Yang, Springer, 2015, Vol. 9006, p. 675-691. Conference paper, Published paper (Refereed)
Abstract [en]

We present a novel approach for segmenting different motions from 3D trajectories. Our approach uses the theory of transformation groups to derive a set of invariants of 3D points located on the same rigid object. These invariants are inexpensive to calculate, involving primarily QR factorizations of small matrices. The invariants are easily converted into a set of robust motion affinities and, with the use of a local sampling scheme and spectral clustering, they can be incorporated into a highly efficient motion segmentation algorithm. We have also captured a new multi-object 3D motion dataset, on which we have evaluated our approach and compared it against state-of-the-art competing methods from the literature. Our results show that our approach outperforms all competing methods while being robust to perspective distortions and degenerate configurations.
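
The invariant and affinity construction is specific to the paper, but the final grouping step it mentions is standard spectral clustering; a schematic Python sketch of that step, assuming a precomputed trajectory-to-trajectory affinity matrix (names and parameters are illustrative):

# Schematic final step only: cluster trajectories into rigid motions from a
# precomputed motion affinity matrix (computing the affinities from the
# group-theoretical invariants is the contribution of the paper itself).
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_motions(affinity, n_motions):
    A = np.asarray(affinity, dtype=float)  # (N, N), symmetric, nonnegative
    model = SpectralClustering(n_clusters=n_motions, affinity="precomputed")
    return model.fit_predict(A)            # one motion label per trajectory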

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9006
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-114313 (URN), 10.1007/978-3-319-16817-3_44 (DOI), 000362444500044 (), 978-3-319-16816-6 (ISBN), 978-3-319-16817-3 (ISBN)
Conference
12th Asian Conference on Computer Vision (ACCV), Singapore, November 1-5, 2014
Projects
VPS, CUAS, ETT
Available from: 2015-02-18 Created: 2015-02-18 Last updated: 2018-10-15
Lenz, R. (2015). Generalized Pareto Distributions, Image Statistics and Autofocusing in Automated Microscopy. In: GEOMETRIC SCIENCE OF INFORMATION, GSI 2015. Paper presented at the 2nd International SEE Conference on Geometric Science of Information (GSI) (pp. 96-103). Springer-Verlag New York
Generalized Pareto Distributions, Image Statistics and Autofocusing in Automated Microscopy
2015 (English) In: GEOMETRIC SCIENCE OF INFORMATION, GSI 2015, Springer-Verlag New York, 2015, p. 96-103. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce the generalized Pareto distributions as a statistical model for thresholded edge-magnitude image filter results. Compared to the more common Weibull or generalized extreme value distributions these distributions have at least two important advantages: the use of a high threshold value ensures that only the most important edge points enter the statistical analysis, and the estimation is computationally more efficient since a much smaller number of data points has to be processed. The generalized Pareto distributions with a common threshold of zero form a two-dimensional Riemann manifold with the metric given by the Fisher information matrix. We compute the Fisher matrix for shape parameters greater than -0.5 and show that the determinant of its inverse is a product of a polynomial in the shape parameter and the squared scale parameter. We apply this result by using the determinant as a sharpness function in an autofocus algorithm. We test the method on a large database of microscopy images with given ground truth focus results. We found that for the vast majority of the focus sequences the results are in the correct focal range. Cases where the algorithm fails are specimens with too few objects and sequences where contributions from different layers result in a multi-modal sharpness curve. Using the geometry of the manifold of generalized Pareto distributions, more efficient autofocus algorithms can be constructed, but these optimizations are not included here.
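
A compact Python sketch of an autofocus loop built on this idea (assumptions: a Sobel edge magnitude stands in for the paper's filter bank, a fixed quantile serves as the high threshold, and the determinant uses the common asymptotic parametrization valid for shape parameters greater than -0.5):

# Sketch only, not the paper's implementation: score each frame of a focal
# stack by fitting a GPD to thresholded edge magnitudes and evaluating the
# determinant-based sharpness function; pick the frame with the largest score.
import numpy as np
from scipy.ndimage import sobel
from scipy.stats import genpareto

def sharpness(image, quantile=0.95):
    img = np.asarray(image, dtype=float)
    mag = np.hypot(sobel(img, axis=0), sobel(img, axis=1)).ravel()
    thr = np.quantile(mag, quantile)                  # keep only strong edges
    excesses = mag[mag > thr] - thr
    shape, _, scale = genpareto.fit(excesses, floc=0.0)
    # determinant of the inverse Fisher information: a polynomial in the shape
    # parameter times the squared scale parameter
    return scale**2 * (1.0 + shape)**2 * (1.0 + 2.0 * shape)

def best_focus_index(stack):
    return int(np.argmax([sharpness(frame) for frame in stack]))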

Place, publisher, year, edition, pages
Springer-Verlag New York, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9389
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-127703 (URN), 10.1007/978-3-319-25040-3_11 (DOI), 000374288700011 (), 978-3-319-25039-7 (ISBN), 978-3-319-25040-3 (ISBN)
Conference
2nd International SEE Conference on Geometric Science of Information (GSI)
Funder
Swedish Research Council, 2014-6227; Swedish Foundation for Strategic Research, IIS11-0081
Available from: 2016-05-09 Created: 2016-05-09 Last updated: 2018-01-30
Felsberg, M., Öfjäll, K. & Lenz, R. (2015). Unbiased decoding of biologically motivated visual feature descriptors. Frontiers in Robotics and AI, 2(20)
Unbiased decoding of biologically motivated visual feature descriptors
2015 (English) In: Frontiers in Robotics and AI, ISSN 2296-9144, Vol. 2, no 20. Article in journal (Refereed), Published
Abstract [en]

Visual feature descriptors are essential elements in most computer and robot vision systems. They typically lead to an abstraction of the input data, images, or video, for further processing, such as clustering and machine learning. In clustering applications, the cluster center represents the prototypical descriptor of the cluster and estimates the corresponding signal value, such as color value or dominating flow orientation, by decoding the prototypical descriptor. Machine learning applications determine the relevance of the respective descriptors, and a visualization of the corresponding decoded information is very useful for the analysis of the learning algorithm. Thus the decoding of feature descriptors is a relevant problem that has frequently been addressed in recent work. The human brain also represents sensorimotor information at a suitable abstraction level through varying activation of neuron populations. In previous work, computational models have been derived that agree with findings of neurophysiological experiments on the representation of visual information by decoding the underlying signals. However, the represented variables have a bias toward the centers or boundaries of the tuning curves. Despite the fact that feature descriptors in computer vision are motivated from neuroscience, the respective decoding methods have been derived largely independently. From first principles, we derive unbiased decoding schemes for biologically motivated feature descriptors with a minimum amount of redundancy and suitable invariance properties. These descriptors establish a non-parametric density estimation of the underlying stochastic process with a particular algebraic structure. Based on the resulting algebraic constraints, we show formally how the decoding problem can be formulated as an unbiased maximum likelihood estimator, and we derive a recurrent inverse diffusion scheme to infer the dominating mode of the distribution. These methods are evaluated in experiments, where stationary points and bias from noisy image data are compared to existing methods.
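
For orientation, a tiny Python sketch of the kind of channel (population-code) representation the article builds on, using the common cos^2 basis with unit channel spacing; the decoding shown is a naive local reconstruction of the sort that exhibits the bias discussed, not the unbiased maximum-likelihood scheme derived in the paper:

# Sketch of a cos^2 channel (population code) encoding and a naive local
# decoding. Illustration only: the paper derives an unbiased maximum-likelihood
# decoder with a recurrent inverse diffusion scheme, which is not shown here.
import numpy as np

def encode(value, centers, width=3.0):
    # cos^2 kernel with compact support of 'width' channel spacings
    d = np.abs(value - centers)
    return np.where(d < width / 2.0, np.cos(np.pi * d / width) ** 2, 0.0)

def decode_local(coeffs, centers):
    # weighted mean over the strongest channel and its direct neighbours
    k = int(np.argmax(coeffs))
    lo, hi = max(k - 1, 0), min(k + 2, len(centers))
    w = coeffs[lo:hi]
    return float(np.dot(w, centers[lo:hi]) / np.sum(w))

centers = np.arange(0.0, 11.0, 1.0)                  # channel centers 0..10, spacing 1
print(decode_local(encode(3.4, centers), centers))   # close to 3.4, slightly biased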

Place, publisher, year, edition, pages
Lausanne, Switzerland: Frontiers Research Foundation, 2015
Keywords
feature descriptors, population codes, channel representations, decoding, estimation, visualization
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-120973 (URN), 10.3389/frobt.2015.00020 (DOI)
Projects
EMC2, VIDI, CUAS, VPS, ELLIIT, CADICS
Available from: 2015-09-01 Created: 2015-09-01 Last updated: 2018-05-29. Bibliographically approved
Läthén, G., Lindholm, S., Lenz, R. & Borga, M. (2014). Evaluation of transfer function methods in direct volume rendering of the blood vessel lumen. In: Ivan Viola, Katja Buehler and Timo Ropinski (Eds.), Proceedings from the EG VCBM 2014, Eurographics Workshop on Visual Computing for Biology and Medicine, Vienna, Austria, September 4–5, 2014. Paper presented at EG VCBM 2014, Eurographics Workshop on Visual Computing for Biology and Medicine, Vienna, Austria, September 4–5, 2014 (pp. 117-126). Eurographics - European Association for Computer Graphics
Evaluation of transfer function methods in direct volume rendering of the blood vessel lumen
2014 (English) In: Proceedings from the EG VCBM 2014, Eurographics Workshop on Visual Computing for Biology and Medicine, Vienna, Austria, September 4–5, 2014 / [ed] Ivan Viola, Katja Buehler and Timo Ropinski, Eurographics - European Association for Computer Graphics, 2014, p. 117-126. Conference paper, Published paper (Refereed)
Abstract [en]

Visualization of contrast-enhanced blood vessels in CT angiography data presents a challenge due to the varying concentration of the contrast agent. The purpose of this work is to evaluate the correctness (effectiveness) of visualizing the vessel lumen using two different 3D visualization strategies, thereby assessing the feasibility of using such visualizations for diagnostic decisions. We compare a standard visualization approach with a recent method that locally adapts to the contrast agent concentration. Both methods are evaluated in a parallel setting where the participant is instructed to produce a complete visualization of the vessel lumen, including both large and small vessels, in cases of calcified vessels in the legs. The resulting visualizations are thereafter compared in a slice viewer to assess the correctness of the visualized lumen. The results indicate that the participants generally overestimated the size of the vessel lumen using the standard visualization, whereas the locally adaptive method better conveyed the true anatomy. The participants did find the interpretation of the locally adaptive method to be less intuitive, but also noted that this did not introduce any prohibitive complexity in the workflow. The observed trends indicate that the visualized lumen strongly depends on the width and placement of the applied transfer function and that this dependency is inherently local rather than global. We conclude that methods that permit local adjustments, such as the method investigated in this study, can be beneficial to certain types of visualizations of large vascular trees.
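
To make "width and placement of the applied transfer function" concrete, here is a minimal, purely illustrative 1D opacity ramp of the kind used in direct volume rendering; the intensity values and window parameters below are hypothetical and do not reproduce the evaluated systems:

# Minimal illustration of a 1D opacity transfer function: a linear ramp whose
# placement (center) and width decide which CT intensities become visible.
import numpy as np

def opacity(intensity, center, width, max_opacity=1.0):
    lo, hi = center - width / 2.0, center + width / 2.0
    t = np.clip((np.asarray(intensity, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
    return max_opacity * t

hu = np.array([80.0, 150.0, 250.0, 400.0])     # hypothetical CT values (HU)
print(opacity(hu, center=200.0, width=100.0))  # -> [0. 0. 1. 1.]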

Place, publisher, year, edition, pages
Eurographics - European Association for Computer Graphics, 2014
Series
Eurographics Workshop on Visual Computing for Biology and Medicine, ISSN 2070-5778
National Category
Medical Image Processing
Identifiers
urn:nbn:se:liu:diva-97370 (URN), 10.2312/vcbm.20141197 (DOI), 978-3-905674-62-0 (ISBN)
Conference
EG VCBM 2014. Eurographics Workshop on Visual Computing for Biology and Medicine, Vienna, Austria, September 4–5, 2014
Available from: 2013-09-10 Created: 2013-09-10 Last updated: 2016-08-31. Bibliographically approved
Lenz, R. (2014). Generalized Extreme Value Distributions, Information Geometry and Sharpness Functions for Microscopy Images. In: 2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP). Paper presented at IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2014), Florence, Italy, May 4-9, 2014 (pp. 2848-2852).
Generalized Extreme Value Distributions, Information Geometry and Sharpness Functions for Microscopy Images
2014 (English) In: 2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014, p. 2848-2852. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce the generalized extreme value distributions as descriptors of edge-related visual appearance properties. Theoretically these distributions are characterized by their limiting and stability properties, which give them a role similar to that of the normal distributions. Empirically we show that these distributions provide a good fit for images from a large database of microscopy images with two visually very different types of images. The generalized extreme value distributions are transformed exponential distributions for which analytical expressions for the Fisher matrix are available. We show how the determinant of the Fisher matrix and the gradient of the determinant of the Fisher matrix can be used as sharpness functions, and how a combination of the determinant and the gradient information can be used to improve the quality of the focus estimation.
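
For reference, the generalized extreme value family referred to here has, in one common parametrization (location \mu, scale \sigma > 0, shape \xi), the distribution function

G(x;\mu,\sigma,\xi) = \exp\!\left\{-\Bigl[1+\xi\,\frac{x-\mu}{\sigma}\Bigr]^{-1/\xi}\right\}, \qquad 1+\xi\,\frac{x-\mu}{\sigma} > 0,

with the Gumbel case \exp\{-\exp(-(x-\mu)/\sigma)\} recovered in the limit \xi \to 0; the sharpness functions described above are built from the Fisher information of this family (its explicit entries are lengthy and not reproduced here).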

Series
International Conference on Acoustics Speech and Signal Processing ICASSP, ISSN 1520-6149, E-ISSN 2379-190X
Keywords
generalized extreme value distribution, information geometry, edge statistics, auto-focus, image-based screening
National Category
Signal Processing
Identifiers
urn:nbn:se:liu:diva-107139 (URN), 10.1109/ICASSP.2014.6854120 (DOI), 000343655302177 (), 978-1-4799-2893-4 (ISBN)
Conference
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2014), Florence, Italy, May 4-9, 2014
Projects
Virtual Photo Set, VPS
Available from: 2014-06-05 Created: 2014-06-05 Last updated: 2017-03-07
Identifiers
ORCID iD: orcid.org/0000-0001-7557-4904