We derive a new Sequential-Monte-Carlo-based algorithm to estimate the capacity of two-dimensional channel models. The focus is on computing the noiseless capacity of the 2-D (1, ∞) run-length limited constrained channel, but the underlying idea is generally applicable. The proposed algorithm is profiled against a state-of-the-art method, yielding more than an order of magnitude improvement in estimation accuracy for a given computation time.
We propose a new framework for how to use sequential Monte Carlo (SMC) algorithms for inference in probabilistic graphical models (PGMs). Via a sequential decomposition of the PGM we find a sequence of auxiliary distributions defined on a monotonically increasing sequence of probability spaces. By targeting these auxiliary distributions using SMC we are able to approximate the full joint distribution defined by the PGM. One of the key merits of the SMC sampler is that it provides an unbiased estimate of the partition function of the model. We also show how it can be used within a particle Markov chain Monte Carlo framework in order to construct high-dimensional block-sampling algorithms for general PGMs.
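The core mechanism described above, targeting a growing sequence of auxiliary distributions with SMC and accumulating an estimate of the partition function along the way, can be sketched on a toy chain-structured PGM. The binary state space, the potential values and the locally optimal proposal below are illustrative assumptions, not the paper's general construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain PGM with binary variables x_1..x_T and unnormalized density
#   p(x) ∝ prod_t phi(x_t) * prod_t psi(x_{t-1}, x_t)
T = 6
phi = np.array([1.0, 2.0])                 # node potential (assumed values)
psi = np.array([[1.0, 0.5], [0.5, 1.5]])   # pairwise potential (assumed)

def exact_logZ():
    """Brute-force enumeration of the partition function, for reference."""
    Z = 0.0
    for code in range(2 ** T):
        xs = [(code >> i) & 1 for i in range(T)]
        v = 1.0
        for s in xs:
            v *= phi[s]
        for a, b in zip(xs[:-1], xs[1:]):
            v *= psi[a, b]
        Z += v
    return np.log(Z)

def smc_logZ(n_particles=5000):
    """SMC over the growing sequence of partial chains x_1:t.

    The product of the per-step average weights is an unbiased estimate
    of the partition function Z (on the natural scale)."""
    x = rng.choice(2, size=n_particles, p=phi / phi.sum())
    logZ = np.log(phi.sum())                # exact weight at t = 1
    for t in range(1, T):
        # incremental weight: normalizer of the locally optimal proposal
        w = (phi[None, :] * psi[x, :]).sum(axis=1)
        logZ += np.log(w.mean())
        # resample ancestors, then draw x_t from the conditional
        anc = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        cond = phi[None, :] * psi[x[anc], :]
        x = (rng.random(n_particles) < cond[:, 1] / cond.sum(axis=1)).astype(int)
    return logZ
```

On this small chain the SMC estimate can be checked directly against exhaustive enumeration, which is exactly what makes the toy useful as a sanity check.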
A new approach to track bicycles from imagery sensor data is proposed. It is based on detecting ellipsoids in the images and treating these pair-wise using a dynamic bicycle model. One important application area is automotive collision avoidance systems, where no dedicated systems for bicyclists yet exist and where very few theoretical studies have been published.
Possible conflicts can be predicted from the position and velocity state in the model, but also from the steering wheel articulation and roll angle that indicate yaw changes before the velocity vector changes. An algorithm is proposed which consists of an ellipsoid detection and estimation algorithm and a particle filter.
A simulation study of three critical single-target scenarios is presented, and the algorithm is shown to produce excellent state estimates. An experiment using a stationary camera and the particle filter for state estimation is performed and shows encouraging results.
The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user’s eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. In contrast, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head-mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte Carlo simulations we show that it is particularly difficult to estimate the user’s eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.
In order to insert a virtual object into a TV image, the graphics system needs to know precisely how the camera is moving, so that the virtual object can be rendered in the correct place in every frame. Nowadays this can be achieved relatively easily in post-production, or in a studio equipped with a special tracking system. However, for live shooting on location, or in a studio that is not specially equipped, installing such a system can be difficult or uneconomic. To overcome these limitations, the MATRIS project is developing a real-time system for measuring the movement of a camera. The system uses image analysis to track naturally occurring features in the scene, and data from an inertial sensor. No additional sensors, special markers, or camera mounts are required. This paper gives an overview of the system and presents some results.
In this paper, a new particle filter (PF) which we refer to as the decentralized PF (DPF) is proposed. By first decomposing the state into two parts, the DPF splits the filtering problem into two nested sub-problems and then handles the two nested sub-problems using PFs. The DPF has an advantage over the regular PF in that it can increase the level of parallelism of the PF. In particular, part of the resampling in the DPF bears a parallel structure and thus can be implemented in parallel. The parallel structure of the DPF is created by decomposing the state space, differing from the parallel structure of the distributed PFs which is created by dividing the sample space. This difference results in a couple of unique features of the DPF in contrast with the existing distributed PFs. Simulation results from a numerical example indicate that the DPF has a potential to achieve the same level of performance as the regular PF, in a shorter execution time.
In this paper, a new particle filter (PF) which we refer to as the decentralized PF (DPF) is proposed. By first decomposing the state into two parts, the DPF splits the filtering problem into two nested subproblems and then handles the two nested subproblems using PFs. The DPF has the advantage over the regular PF that the DPF can increase the level of parallelism of the PF. In particular, part of the resampling in the DPF bears a parallel structure and can thus be implemented in parallel. The parallel structure of the DPF is created by decomposing the state space, differing from the parallel structure of the distributed PFs which is created by dividing the sample space. This difference results in a couple of unique features of the DPF in contrast with the existing distributed PFs. Simulation results of two examples indicate that the DPF has a potential to achieve in a shorter execution time the same level of performance as the regular PF.
Particle Markov Chain Monte Carlo (PMCMC) samplers allow for routine inference of parameters and states in challenging nonlinear problems. A common choice for the parameter proposal is a simple random walk sampler, which can scale poorly with the number of parameters.
In this paper, we propose to use log-likelihood gradients, i.e. the score, in the construction of the proposal, akin to the Langevin Monte Carlo method, but adapted to the PMCMC framework. This can be thought of as a way to guide a random walk proposal by using drift terms that are proportional to the score function. The method is successfully applied to a stochastic volatility model and the drift term exhibits intuitive behaviour.
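The score-drifted random-walk proposal described above can be sketched on a toy target where the score is known in closed form. The standard normal target, the step size and the full Metropolis correction below are illustrative assumptions; the abstract's setting estimates the score within a PMCMC sampler, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def mala_step(x, logp, score, step):
    """One Metropolis-adjusted step with a score-drifted random-walk
    proposal: x' = x + (step**2 / 2) * score(x) + step * eps."""
    drift = lambda y: y + 0.5 * step ** 2 * score(y)
    xp = drift(x) + step * rng.standard_normal()
    # the drift makes the proposal asymmetric, so both directions enter
    log_q = lambda a, b: -0.5 * ((a - drift(b)) / step) ** 2
    log_alpha = logp(xp) - logp(x) + log_q(x, xp) - log_q(xp, x)
    return xp if np.log(rng.random()) < log_alpha else x

def sample_chain(n_iter=20000, step=0.9):
    """Run the drifted sampler on a standard normal, where score(x) = -x."""
    logp = lambda x: -0.5 * x ** 2
    score = lambda x: -x
    x, out = 0.0, np.empty(n_iter)
    for i in range(n_iter):
        x = mala_step(x, logp, score, step)
        out[i] = x
    return out
```

Because the drift points towards regions of higher posterior probability, the chain explores the target faster than an undrifted random walk at the same step size.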
Particle Metropolis-Hastings (PMH) allows for Bayesian parameter inference in nonlinear state space models by combining MCMC and particle filtering. The latter is used to estimate the intractable likelihood. In its original formulation, PMH makes use of a marginal MCMC proposal for the parameters, typically a Gaussian random walk. However, this can lead to a poor exploration of the parameter space and an inefficient use of the generated particles.
We propose two alternative versions of PMH that incorporate gradient and Hessian information about the posterior into the proposal. This information is more or less obtained as a byproduct of the likelihood estimation. Indeed, we show how to estimate the required information using a fixed-lag particle smoother, with a computational cost growing linearly in the number of particles. We conclude that the proposed methods can: (i) decrease the length of the burn-in phase, (ii) increase the mixing of the Markov chain at the stationary phase, and (iii) make the proposal distribution scale invariant which simplifies tuning.
We propose an improved proposal distribution in the Particle Metropolis-Hastings (PMH) algorithm for Bayesian parameter inference in nonlinear state space models. This proposal incorporates second-order information about the parameter posterior distribution, which can be extracted from the particle filter already used within the PMH algorithm. The added information makes the proposal scale-invariant, simpler to tune, and can possibly also shorten the burn-in phase. The proposed algorithm has a computational cost which is proportional to the number of particles, i.e. the same as the original marginal PMH algorithm. Finally, we provide two numerical examples that illustrate some of the possible benefits of adding the second-order information.
Gaussian innovations are the typical choice in most ARX models, but using other distributions, such as Student's t, could be useful. We demonstrate that this choice of distribution for the innovations provides an increased robustness to data anomalies, such as outliers and missing observations. We consider these models in a Bayesian setting and perform inference using numerical procedures based on Markov chain Monte Carlo methods. These models include automatic order determination by two alternative methods, based on a parametric model order and a sparseness prior, respectively. The methods and the advantage of our choice of innovations are illustrated in three numerical studies using both simulated data and real EEG data.
ARX models are a common class of models of dynamical systems. Here, we consider the case when the innovation process is not well described by Gaussian noise and instead propose to model the driving noise as Student's t distributed. The t distribution is more heavy tailed than the Gaussian distribution, which provides an increased robustness to data anomalies, such as outliers and missing observations. We use a Bayesian setting and design the models to also include an automatic order determination. Basically, this means that we infer knowledge about the posterior distribution of the model order from data. We consider two related models, one with a parametric model order and one with a sparseness prior on the ARX coefficients. We derive Markov chain Monte Carlo samplers to perform inference in these models. Finally, we provide three numerical illustrations with both simulated data and real EEG data to evaluate the proposed methods.
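A minimal simulation sketch of the model class discussed above, a first-order ARX model driven by Student's t innovations, is given below. The parameter values, noise scale and plain least-squares fit are illustrative assumptions; the Bayesian MCMC samplers of the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate y_t = a*y_{t-1} + b*u_t + e_t with heavy-tailed e_t ~ t_nu
# (parameter values below are made up for illustration).
a_true, b_true, nu, n = 0.7, 1.5, 3.0, 5000
u = rng.standard_normal(n)
e = rng.standard_t(nu, size=n)          # Student's t driving noise
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true * y[t - 1] + b_true * u[t] + e[t]

# A plain least-squares fit still recovers (a, b); the heavy tails show
# up as occasional large residuals, exactly the kind of data anomaly the
# t-distributed innovation model is designed to absorb.
X = np.column_stack([y[:-1], u[1:]])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
```

Comparing the empirical residual distribution of such a simulation with a Gaussian fit is a quick way to see why the t assumption matters for data with outliers.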
We propose a novel method for MAP parameter inference in nonlinear state space models with intractable likelihoods. The method is based on a combination of Gaussian process optimisation (GPO), sequential Monte Carlo (SMC) and approximate Bayesian computations (ABC). SMC and ABC are used to approximate the intractable likelihood by using the similarity between simulated realisations from the model and the data obtained from the system. The GPO algorithm is used for the MAP parameter estimation given noisy estimates of the log-likelihood. The proposed parameter inference method is evaluated in three problems using both synthetic and real-world data. The results are promising, indicating that the proposed algorithm converges fast and with reasonable accuracy compared with existing methods.
This paper deals with the problem of estimating the vehicle surroundings (lane geometry and the position of other vehicles), which is needed for intelligent automotive systems, such as adaptive cruise control, collision avoidance and lane guidance. This results in a nonlinear estimation problem. For automotive tracking systems, these problems are traditionally handled using the extended Kalman filter. In this paper we describe the application of the marginalized particle filter to this problem. Studies using both synthetic and authentic data show that the marginalized particle filter can in fact give better performance than the extended Kalman filter. However, the computational load is higher.
A common computer vision task is navigation and mapping. Many indoor navigation tasks require depth knowledge of flat, unstructured surfaces (walls, floor, ceiling). With passive illumination only, this is an ill-posed problem. Inspired by small children using a torchlight, we use a spotlight for active illumination. Using our torchlight approach, depth and orientation estimation of unstructured, flat surfaces boils down to estimation of ellipse parameters. The extraction of ellipses is very robust and requires little computational effort.
In this paper we are concerned with nonlinear systems subject to a conditionally linear, Gaussian sub-structure. This structure is often exploited in high-dimensional state estimation problems using the marginalized (aka Rao-Blackwellized) particle filter. The main contribution in the present work is to show how an efficient filter can be derived by exploiting this structure within the auxiliary particle filter. Based on a multisensor aircraft tracking example, the superior performance of the proposed filter over conventional particle filtering approaches is demonstrated.
This paper treats how parameter estimation and Kalman filtering can be performed using a Modelica model. The procedures for doing this have been developed earlier by the authors, and are here exemplified on a physical system. It is concluded that the parameter and state estimation problems can be solved using the Modelica model, and that the parameter estimation and observer construction could to a large extent be automated with relatively small changes to a Modelica environment.
The current demand for more complex models has initiated a shift away from state-space models towards models described by differential-algebraic equations (DAEs). These models arise as the natural product of object-oriented modeling languages, such as Modelica. However, the mathematics of DAEs is somewhat more involved than the standard state-space theory. The aim of this work is to present a well-posed description of a linear stochastic differential-algebraic equation and more importantly explain how well-posed estimation problems can be formed. We will consider both the system identification problem and the state estimation problem. Besides providing the necessary theory we will also explain how the procedures can be implemented by means of efficient numerical methods.
This article reviews the authors' recently developed algorithm for identification of nonlinear state-space models under missing observations and extends it to the case of unknown model structure. In order to estimate the parameters in a state-space model, one needs to know the model structure and have an estimate of the states. If the model structure is unknown, an approximation of it is obtained using radial basis functions centered around a maximum a posteriori estimate of the state trajectory. A particle filter approximation of the smoothed states is then used in conjunction with the expectation-maximization algorithm for estimating the parameters. The proposed approach is illustrated through a real application.
This paper presents a new solution to the loop closing problem for 3D point clouds. Loop closing is the problem of detecting the return to a previously visited location, and constitutes an important part of the solution to the Simultaneous Localisation and Mapping (SLAM) problem. It is important to achieve a low level of false alarms, since closing a false loop can have disastrous effects in a SLAM algorithm. In this work, the point clouds are described using features, which efficiently reduces the dimension of the data by a factor of 300 or more. The machine learning algorithm AdaBoost is used to learn a classifier from the features. All features are invariant to rotation, resulting in a classifier that is invariant to rotation. The presented method relies neither on the discretisation of 3D space, nor on the extraction of lines, corners or planes. The classifier is extensively evaluated on publicly available outdoor and indoor data, and is shown to be able to robustly and accurately determine whether a pair of point clouds is from the same location or not. Experiments show detection rates of 63% for outdoor and 53% for indoor data at a false alarm rate of 0%. Furthermore, the classifier is shown to generalise well when trained on outdoor data and tested on indoor data in a SLAM experiment.
In this paper we address the loop closure detection problem in simultaneous localization and mapping (SLAM), and present a method for solving the problem using pairwise comparison of point clouds in both two and three dimensions. The point clouds are mathematically described using features that capture important geometric and statistical properties. The features are used as input to the machine learning algorithm AdaBoost, which is used to build a non-linear classifier capable of detecting loop closure from pairs of point clouds. Vantage point dependency in the detection process is eliminated by only using rotation invariant features, thus loop closure can be detected from an arbitrary direction. The classifier is evaluated using publicly available data, and is shown to generalize well between environments. Detection rates of 66%, 63% and 53% for 0% false alarm rate are achieved for 2D outdoor data, 3D outdoor data and 3D indoor data, respectively. In both two and three dimensions, experiments are performed using publicly available data, showing that the proposed algorithm compares favourably with related work.
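One simple instance of the rotation-invariant point-cloud features described above can be sketched as follows. The centroid-distance histogram is an illustrative choice of descriptor, not the papers' actual feature set, and the QR-based random rotation is only used here to check the invariance:

```python
import numpy as np

rng = np.random.default_rng(3)

def rotation_invariant_features(cloud, bins=8):
    """A normalised histogram of point distances from the centroid.

    Distances to the centroid are unchanged by any rotation of the
    cloud, so the descriptor is vantage-point independent by design."""
    d = np.linalg.norm(cloud - cloud.mean(axis=0), axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()))
    return hist / hist.sum()

def random_rotation():
    """Random 3-D rotation via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))      # make the factorisation unique
    if np.linalg.det(q) < 0:         # flip one axis to avoid a reflection
        q[:, 0] = -q[:, 0]
    return q
```

Feature vectors of this kind, computed for a pair of point clouds, are what a classifier such as AdaBoost would consume to decide whether the pair depicts the same location.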
The Handbook of Intelligent Vehicles provides a complete coverage of the fundamentals, new technologies, and sub-areas essential to the development of intelligent vehicles; it also includes advances made to date, challenges, and future trends. Significant strides in the field have been made to date; however, so far there has been no single book or volume which captures these advances in a comprehensive format, addressing all essential components and subspecialties of intelligent vehicles, as this book does. Since the intended users are engineering practitioners, as well as researchers and graduate students, the book chapters do not only cover fundamentals, methods, and algorithms but also include how software/hardware are implemented, and demonstrate the advances along with their present challenges. Research at both the component and systems levels is required to advance the functionality of intelligent vehicles. This volume covers both of these aspects in addition to the fundamentals listed above.
The problem of estimating the position and orientation (pose) of a camera is approached by fusing measurements from inertial sensors (accelerometers and rate gyroscopes) and a camera. The sensor fusion approach described in this contribution is based on nonlinear filtering using the measurements from these complementary sensors. This way, accurate and robust pose estimates are available for the primary purpose of augmented reality applications, but with the secondary effect of reducing computation time and improving the performance in vision processing. A real-time implementation of a nonlinear filter is described, using a dynamic model for the 22 states, where 100 Hz inertial measurements and 12.5 Hz vision measurements are processed. An example where an industrial robot is used to move the sensor unit, possessing almost perfect precision and repeatability, is presented. The results show that position and orientation accuracy is sufficient for a number of augmented reality applications.
The marginalized particle filter is a powerful combination of the particle filter and the Kalman filter, which can be used when the underlying model contains a linear substructure subject to Gaussian noise. This paper surveys the state of the art in both theory and practice.
This paper investigates methods for tool position estimation of industrial robots. It is assumed that the motor angular position and the tool acceleration are measured. The considered observers are different versions of the extended Kalman filter as well as a deterministic observer. A method for tuning the observers is suggested and the robustness of the methods is investigated. The observers are evaluated experimentally on a commercial industrial robot.
This paper is concerned with the problem of autonomously landing an unmanned aerial vehicle (UAV) on a stationary platform. Our solution consists of two parts, a sensor fusion framework producing estimates of the UAV state and a control system that computes appropriate actuator commands. Three sensors are used: a camera, a GPS and a compass. Besides the description of the solution, we also present experimental results obtained using our system to autonomously land a UAV.
The main contribution of this work is a novel calibration method to determine the clock parameters of the UWB receivers as well as their 3D positions. It exclusively uses time-of-arrival measurements, thereby removing the need for the typically labor-intensive and time-consuming process of surveying the receiver positions. Experiments show that the method is capable of accurately calibrating a UWB setup within minutes.
In this paper we propose a 6DOF tracking system combining Ultra-Wideband measurements with low-cost MEMS inertial measurements. A tightly coupled system is developed which estimates the position as well as the orientation of the sensor unit while being reliable in case of multipath effects and NLOS conditions. The experimental results show robust and continuous tracking in a realistic indoor positioning scenario.
This paper is concerned with the problem of estimating the relative translation and orientation between an inertial measurement unit and a camera, which are rigidly connected. The key is to realise that this problem is in fact an instance of a standard problem within the area of system identification, referred to as a gray-box problem. We propose a new algorithm for estimating the relative translation and orientation, which does not require any additional hardware, except a piece of paper with a checkerboard pattern on it. Furthermore, covariance expressions are provided for all involved estimates. The experimental results show that the method works well in practice.
This paper is concerned with the problem of estimating the relative translation and orientation of an inertial measurement unit and a camera, which are rigidly connected. The key is to realize that this problem is in fact an instance of a standard problem within the area of system identification, referred to as a gray-box problem. We propose a new algorithm for estimating the relative translation and orientation, which does not require any additional hardware, except a piece of paper with a checkerboard pattern on it. The method is based on a physical model which can also be used in solving, for example, sensor fusion problems. The experimental results show that the method works well in practice, both for perspective and spherical cameras.
In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms with respect to their resampling quality and computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in terms of resampling quality and computational complexity.
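Systematic resampling, the scheme found favourable above, admits a very compact implementation. The sketch below assumes normalised weights and guards only against floating-point round-off:

```python
import numpy as np

rng = np.random.default_rng(4)

def systematic_resampling(weights):
    """Systematic resampling: a single uniform draw generates N evenly
    spaced positions on [0, 1); each position selects the particle whose
    cumulative-weight interval contains it. Only one random number is
    needed, and each particle's copy count differs from N*w_i by less
    than one, which is the source of its low resampling variance."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)
```

The single draw and the vectorised `searchsorted` also explain the low computational complexity relative to multinomial resampling, which needs N independent draws.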
This paper is concerned with the problem of estimating the relative translation and orientation of an inertial measurement unit and a spherical camera, which are rigidly connected. The key is to realize that this problem is in fact an instance of a standard problem within the area of system identification, referred to as a gray-box problem. We propose a new algorithm for estimating the relative translation and orientation, which does not require any additional hardware, except a piece of paper with a checkerboard pattern on it. The experimental results show that the method works well in practice.
In Augmented Reality (AR), the position and orientation of the camera have to be estimated with high accuracy and low latency. This nonlinear estimation problem is studied in the present paper. The proposed solution makes use of measurements from inertial sensors and computer vision. These measurements are fused using a Kalman filtering framework, incorporating a rather detailed model for the dynamics of the camera. Experiments show that the resulting filter provides good estimates of the camera motion, even during fast movements.
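The multirate fusion of high-rate inertial data with lower-rate vision measurements can be sketched with a one-dimensional Kalman filter. The model structure, rates and noise levels below are illustrative assumptions, not the paper's detailed camera dynamics model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal multirate Kalman filter sketch: a 1-D position/velocity state,
# a 100 Hz inertial acceleration input driving the prediction, and a
# 12.5 Hz "vision" position measurement (every 8th step) correcting it.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
B = np.array([0.5 * dt ** 2, dt])       # acceleration input coupling
Q = 1e-4 * np.eye(2)                    # process noise (assumed level)
H = np.array([[1.0, 0.0]])              # vision measures position only
R = np.array([[1e-2]])                  # vision noise (assumed level)

def run_filter(accels, vision_pos):
    """accels: one acceleration per 100 Hz step; vision_pos: one position
    per 12.5 Hz frame. Returns the filtered position estimates."""
    x, P = np.zeros(2), np.eye(2)
    estimates = []
    for k, a in enumerate(accels):
        x = F @ x + B * a                       # 100 Hz time update
        P = F @ P @ F.T + Q
        if k % 8 == 0:                          # 12.5 Hz measurement update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([vision_pos[k // 8]]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```

The structure shows why the combination is attractive for AR: the inertial input keeps the latency of the estimate at the 100 Hz rate, while the slower vision updates prevent the inertial integration from drifting.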