liu.se: Search for publications in DiVA
Grimvall, Anders
Publications (10 of 77)
Sysoev, O., Grimvall, A. & Burdakov, O. (2016). Bootstrap confidence intervals for large-scale multivariate monotonic regression problems. Communications in statistics. Simulation and computation, 45(3), 1025-1040
2016 (English). In: Communications in statistics. Simulation and computation, ISSN 0361-0918, E-ISSN 1532-4141, Vol. 45, no. 3, pp. 1025-1040. Article in journal (Refereed). Published.
Abstract [en]

Recently, the methods used to estimate monotonic regression (MR) models have been substantially improved, and some algorithms can now produce high-accuracy monotonic fits to multivariate datasets containing over a million observations. Nevertheless, the computational burden can be prohibitively large for resampling techniques in which numerous datasets are processed independently of each other. Here, we present efficient algorithms for estimation of confidence limits in large-scale settings that take into account the similarity of the bootstrap or jackknifed datasets to which MR models are fitted. In addition, we introduce modifications that substantially improve the accuracy of MR solutions for binary response variables. The performance of our algorithms is illustrated using data on death in coronary heart disease for a large population. This example also illustrates that MR can be a valuable complement to logistic regression.
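The large-scale GPAV machinery described in the abstract is not reproduced here, but the underlying idea of pairing a pool-adjacent-violators fit with percentile bootstrap limits can be sketched in a few lines. The Python sketch below (function names `pava` and `bootstrap_ci` are illustrative, not from the paper; numpy is assumed) fits a one-dimensional monotonic regression and computes a naive case-resampling bootstrap band. The paper's contribution is precisely to make this kind of computation feasible for much larger, multivariate problems.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit in 1-D."""
    lvl, cnt = [], []                      # block means and block sizes
    for v in map(float, y):
        lvl.append(v); cnt.append(1)
        while len(lvl) > 1 and lvl[-2] > lvl[-1]:   # violation: pool adjacent blocks
            total = cnt[-2] + cnt[-1]
            lvl[-2] = (lvl[-2] * cnt[-2] + lvl[-1] * cnt[-1]) / total
            cnt[-2] = total
            del lvl[-1], cnt[-1]
    return np.repeat(lvl, cnt)

def bootstrap_ci(x, y, n_boot=500, alpha=0.05, seed=1):
    """Naive percentile bootstrap band for a monotonic fit (case resampling)."""
    rng = np.random.default_rng(seed)
    order = np.argsort(x)
    x, y = np.asarray(x, float)[order], np.asarray(y, float)[order]
    fits = np.empty((n_boot, len(y)))
    for b in range(n_boot):
        idx = np.sort(rng.integers(0, len(y), len(y)))  # resample, keep x-order
        fits[b] = np.interp(x, x[idx], pava(y[idx]))    # map back to original grid
    lo, hi = np.quantile(fits, [alpha / 2, 1 - alpha / 2], axis=0)
    return pava(y), lo, hi
```

Note that each bootstrap replicate refits the whole model from scratch; the algorithms in the paper exploit the similarity between replicates to avoid exactly this cost.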

Place, publisher, year, edition, pages
Taylor & Francis, 2016
Keywords
Big data, Bootstrap, Confidence intervals, Monotonic regression, Pool-adjacent-violators algorithm
National Category
Probability Theory and Statistics; Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-85169 (URN), 10.1080/03610918.2014.911899 (DOI), 000372527900014
Note

At the time of the doctoral defense, this publication existed as a manuscript.

Available from: 2012-11-08 Created: 2012-11-08 Last updated: 2017-12-13
Sysoev, O., Grimvall, A. & Burdakov, O. (2013). Bootstrap estimation of the variance of the error term in monotonic regression models. Journal of Statistical Computation and Simulation, 83(4), 625-638
2013 (English). In: Journal of Statistical Computation and Simulation, ISSN 0094-9655, E-ISSN 1563-5163, Vol. 83, no. 4, pp. 625-638. Article in journal (Refereed). Published.
Abstract [en]

The variance of the error term in ordinary regression models and linear smoothers is usually estimated by adjusting the average squared residual for the trace of the smoothing matrix (the degrees of freedom of the predicted response). However, other types of variance estimators are needed when using monotonic regression (MR) models, which are particularly suitable for estimating response functions with pronounced thresholds. Here, we propose a simple bootstrap estimator to compensate for the over-fitting that occurs when MR models are estimated from empirical data. Furthermore, we show that, in the case of one or two predictors, the performance of this estimator can be enhanced by introducing adjustment factors that take into account the slope of the response function and characteristics of the distribution of the explanatory variables. Extensive simulations show that our estimators perform satisfactorily for a great variety of monotonic functions and error distributions.
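As a hedged illustration of the general idea, not the authors' estimator: the naive mean-squared-residual estimate of the error variance is biased low because the monotonic fit adapts to the noise, and a generic bootstrap bias correction (corrected = 2 * s2_hat - mean of bootstrap replicates) can compensate. The helper names below are hypothetical, numpy is assumed, and the sketch assumes the observations are already ordered by a single predictor.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit in 1-D."""
    lvl, cnt = [], []
    for v in map(float, y):
        lvl.append(v); cnt.append(1)
        while len(lvl) > 1 and lvl[-2] > lvl[-1]:
            total = cnt[-2] + cnt[-1]
            lvl[-2] = (lvl[-2] * cnt[-2] + lvl[-1] * cnt[-1]) / total
            cnt[-2] = total
            del lvl[-1], cnt[-1]
    return np.repeat(lvl, cnt)

def naive_sigma2(y):
    """Mean squared residual of the MR fit; biased low (the fit adapts to noise)."""
    y = np.asarray(y, float)
    return float(np.mean((y - pava(y)) ** 2))

def bias_corrected_sigma2(y, n_boot=300, seed=2):
    """Generic bootstrap bias correction: 2 * s2_hat - mean of bootstrap replicates."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    s2_hat = naive_sigma2(y)
    s2_star = [naive_sigma2(y[np.sort(rng.integers(0, len(y), len(y)))])
               for _ in range(n_boot)]
    return 2 * s2_hat - float(np.mean(s2_star))
```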

Place, publisher, year, edition, pages
Taylor & Francis Group, 2013
Keywords
uncertainty estimation; bootstrap; monotonic regression; pool-adjacent-violators algorithm
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-78858 (URN), 10.1080/00949655.2011.631138 (DOI), 000317276900003
Available from: 2012-06-21 Created: 2012-06-21 Last updated: 2017-12-07
Burauskaite-Harju, A. & Grimvall, A. (2013). Diagnostics for tail dependence in time-lagged random fields of precipitation. Journal of Theoretical and Applied Climatology, 112(3-4), 629-636
2013 (English). In: Journal of Theoretical and Applied Climatology, ISSN 0177-798X, E-ISSN 1434-4483, Vol. 112, no. 3-4, pp. 629-636. Article in journal (Refereed). Published.
Abstract [en]

Weather extremes often occur along fronts passing different sites with some time lag. Here, we show how such temporal patterns can be taken into account when exploring inter-site dependence of extremes. We incorporate time lags into existing models and into measures of extremal associations and their relation to the distance between the investigated sites. Furthermore, we define summarizing parameters that can be used to explore tail dependence for a whole network of stations in the presence of fixed or stochastic time lags. Analysis of hourly precipitation data from Sweden showed that our methods can prevent underestimation of the strength and spatial extent of tail dependencies.
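The time-lagged inter-site dependence idea can be illustrated with a minimal empirical estimator of the tail-dependence coefficient chi(u) = P(Y at time t+lag exceeds its quantile, given X at time t exceeds its quantile), scanned over a range of lags. This is a simplified stand-in for the measures developed in the paper; the function names are illustrative and numpy is assumed.

```python
import numpy as np

def lagged_chi(x, y, lag, q=0.95):
    """Empirical tail dependence: chi = P(y_{t+lag} > v | x_t > u) at quantile q."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag > 0:                      # align x_t with y_{t+lag}
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    u, v = np.quantile(x, q), np.quantile(y, q)
    hit = x > u
    return float(np.mean(y[hit] > v)) if hit.any() else 0.0

def best_lag_chi(x, y, max_lag=6, q=0.95):
    """Scan lags and return (dominant lag, maximum chi over the scanned window)."""
    chis = {lag: lagged_chi(x, y, lag, q) for lag in range(-max_lag, max_lag + 1)}
    dominant = max(chis, key=chis.get)
    return dominant, chis[dominant]
```

Ignoring the lag (fixing it at zero) would understate chi for sites whose extremes arrive with a front-passage delay, which is the underestimation the paper warns about.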

Keywords
Precipitation; Sub-daily; Tail dependence; Spatial dependence; Time lag
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-71298 (URN), 10.1007/s00704-012-0748-1 (DOI), 000318246300022
Available from: 2011-10-10 Created: 2011-10-10 Last updated: 2017-12-08. Bibliographically approved.
Burauskaite-Harju, A., Grimvall, A. & von Brömssen, C. (2012). A test for network-wide trends in rainfall extremes. International Journal of Climatology, 32(1), 86-94
2012 (English). In: International Journal of Climatology, ISSN 0899-8418, E-ISSN 1097-0088, Vol. 32, no. 1, pp. 86-94. Article in journal (Refereed). Published.
Abstract [en]

Temporal trends in meteorological extremes are often examined by first reducing daily data to annual index values, such as the 95th or 99th percentiles. Here, we report how this idea can be elaborated to provide an efficient test for trends at a network of stations. The initial step is to make separate estimates of tail probabilities of precipitation amounts for each combination of station and year by fitting a generalised Pareto distribution (GPD) to data above a user-defined threshold. The resulting time series of annual percentile estimates are subsequently fed into a multivariate Mann-Kendall (MK) test for monotonic trends. We performed extensive simulations using artificially generated precipitation data and noted that the power of tests for temporal trends was substantially enhanced when GPD percentiles were used in place of ordinary percentiles. Furthermore, we found that the trend detection was robust to misspecification of the extreme value distribution. An advantage of the MK test is that it can accommodate non-linear trends, and it can also take into account the dependencies between stations in a network. To illustrate our approach, we used long time series of precipitation data from a network of stations in The Netherlands.
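A minimal sketch of the two-step pipeline, assuming SciPy is available: fit a generalised Pareto distribution to threshold exceedances to estimate a high percentile, then test a series of such annual estimates with a Mann-Kendall statistic. This is a univariate, simplified stand-in for the multivariate network-wide test in the paper, with illustrative function names.

```python
import numpy as np
from scipy import stats

def gpd_percentile(values, threshold, p=0.99):
    """Estimate the p-th percentile by fitting a GPD to exceedances of `threshold`.
    Assumes p lies above the threshold's empirical quantile."""
    values = np.asarray(values, float)
    exc = values[values > threshold] - threshold
    c, _, scale = stats.genpareto.fit(exc, floc=0)   # location fixed at 0
    rate = len(exc) / len(values)                    # P(X > threshold)
    level = 1 - (1 - p) / rate                       # quantile level within the tail
    return threshold + stats.genpareto.ppf(level, c, loc=0, scale=scale)

def mann_kendall(series):
    """Mann-Kendall S statistic with a two-sided normal-approximation p-value
    (no correction for ties)."""
    series = np.asarray(series, float)
    n = len(series)
    s = sum(np.sign(series[j] - series[i])
            for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18
    z = (s - np.sign(s)) / np.sqrt(var)              # continuity correction
    return s, 2 * stats.norm.sf(abs(z))
```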

Place, publisher, year, edition, pages
Wiley, 2012
Keywords
climate extremes; precipitation; temporal trend; generalised Pareto distribution; climate indices; global warming
National Category
Climate Science Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-63099 (URN), 10.1002/joc.2263 (DOI), 000298733800007
Note
Funding agency: Swedish Environmental Protection Agency
Available from: 2010-12-13 Created: 2010-12-10 Last updated: 2025-02-01
Burauskaite-Harju, A., Grimvall, A., Walther, A., Achberger, C. & Chen, D. (2012). Characterizing and visualizing spatio-temporal patterns in hourly precipitation records. Journal of Theoretical and Applied Climatology, 109(3-4), 333-343
2012 (English). In: Journal of Theoretical and Applied Climatology, ISSN 0177-798X, E-ISSN 1434-4483, Vol. 109, no. 3-4, pp. 333-343. Article in journal (Refereed). Published.
Abstract [en]

We develop new techniques to summarize and visualize spatial patterns of coincidence in weather events such as more or less heavy precipitation at a network of meteorological stations. The cosine similarity measure, which has a simple probabilistic interpretation for vectors of binary data, is generalized to characterize spatial dependencies of events that may reach different stations with a variable time lag. More specifically, we reduce such patterns to three parameters (dominant time lag, maximum cross-similarity, and window-maximum similarity) that can easily be computed for each pair of stations in a network. Furthermore, we visualize such three-parameter summaries by using colour-coded maps of dependencies to a given reference station and distance-decay plots for the entire network. Applications to hourly precipitation data from a network of 93 stations in Sweden illustrate how this method can be used to explore spatial patterns in the temporal synchrony of precipitation events.
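For binary event vectors, the cosine similarity reduces to n11 / sqrt(n1 * m1), where n11 counts co-occurring events and n1, m1 count events at each station, and the three summary parameters can be computed by scanning time lags. The sketch below is one plausible reading of the three-parameter summary (the exact definition of window-maximum similarity here is an assumption), with illustrative function names and numpy assumed.

```python
import numpy as np

def cosine_binary(a, b):
    """Cosine similarity of two 0/1 event vectors: n11 / sqrt(n1 * m1)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    denom = np.sqrt(a.sum() * b.sum())
    return float(a @ b / denom) if denom > 0 else 0.0

def cross_similarity_summary(a, b, max_lag=3):
    """Dominant time lag, maximum cross-similarity, and a window-maximum similarity."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    sims = {}
    for lag in range(-max_lag, max_lag + 1):   # pair a_t with b_{t+lag}
        if lag >= 0:
            sims[lag] = cosine_binary(a[:len(a) - lag], b[lag:])
        else:
            sims[lag] = cosine_binary(a[-lag:], b[:lag])
    # window-maximum: is an event at a matched by one at b within +-max_lag steps?
    width = 2 * max_lag + 1
    b_window = (np.convolve(b, np.ones(width), mode="same") > 0).astype(float)
    dominant = max(sims, key=sims.get)
    return dominant, sims[dominant], cosine_binary(a, b_window)
```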

Place, publisher, year, edition, pages
Springer, 2012
Keywords
precipitation; hourly rainfall records; spatial dependence; time lag; cosine similarity
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-71297 (URN), 10.1007/s00704-011-0574-x (DOI), 000307243900002
Note

Funding agencies: Swedish Research Council (VR); Gothenburg Atmospheric Science Centre (GAC); FORMAS, 2007-1048-8700*51

Available from: 2011-10-10 Created: 2011-10-10 Last updated: 2017-12-08. Bibliographically approved.
Sysoev, O., Burdakov, O. & Grimvall, A. (2011). A segmentation-based algorithm for large-scale partially ordered monotonic regression. Computational Statistics & Data Analysis, 55(8), 2463-2476
2011 (English). In: Computational Statistics & Data Analysis, ISSN 0167-9473, E-ISSN 1872-7352, Vol. 55, no. 8, pp. 2463-2476. Article in journal (Refereed). Published.
Abstract [en]

Monotonic regression (MR) is an efficient tool for estimating functions that are monotonic with respect to input variables. A fast and highly accurate approximate algorithm called the GPAV was recently developed for efficiently solving large-scale multivariate MR problems. When such problems are too large, the GPAV becomes too demanding in terms of computational time and memory. An approach that extends the application area of the GPAV to much larger MR problems is presented. It is based on segmentation of a large-scale MR problem into a set of moderate-scale MR problems, each solved by the GPAV. The major contribution is the development of a computationally efficient strategy that produces a monotonic response using the local solutions. A theoretically motivated trend-following technique is introduced to ensure higher accuracy of the solution. The presented results of extensive simulations on very large data sets demonstrate the high efficiency of the new algorithm.

Place, publisher, year, edition, pages
Elsevier Science B.V., Amsterdam, 2011
Keywords
Quadratic programming, Large-scale optimization, Least distance problem, Monotonic regression, Partially ordered data set, Pool-adjacent-violators algorithm
National Category
Social Sciences
Identifiers
urn:nbn:se:liu:diva-69182 (URN), 10.1016/j.csda.2011.03.001 (DOI), 000291181000002
Available from: 2011-06-17 Created: 2011-06-17 Last updated: 2017-12-11
Sadoghi, A., Burdakov, O. & Grimvall, A. (2011). Piecewise Monotonic Regression Algorithm for Problems Comprising Seasonal and Monotonic Trends. Paper presented at the SIAM Conference on Optimization, May 16-19, 2011, Darmstadtium, Germany.
2011 (English). Conference paper, oral presentation with published abstract (Other academic).
Abstract [en]

In this research, piecewise monotonic models for problems comprising seasonal cycles and monotonic trends are considered. In contrast to conventional piecewise monotonic regression algorithms, our approach can efficiently exploit a priori information about temporal patterns. It is based on reducing these problems to monotonic regression problems defined on partially ordered data sets. The latter are large-scale convex quadratic programming problems. They are efficiently solved by the GPAV algorithm.

Series
Book of abstracts, SIAM Conference on Optimization, May 16-19, 2011, Darmstadtium, Germany
Keywords
Statistics, Monotonic Regression, Quadratic programming, Convex programming
National Category
Probability Theory and Statistics Information Systems
Identifiers
urn:nbn:se:liu:diva-70417 (URN)
Conference
SIAM Conference on Optimization, May 16-19, 2011, Darmstadtium, Germany
Available from: 2011-09-06 Created: 2011-09-06 Last updated: 2018-01-12
Timpka, T., Eriksson, H., Gursky, E. A., Stromgren, M., Holm, E., Ekberg, J., . . . Nyce, J. M. (2011). Requirements and Design of the PROSPER Protocol for Implementation of Information Infrastructures Supporting Pandemic Response: A Nominal Group Study. PLOS ONE, 6(3), e17941
2011 (English). In: PLOS ONE, ISSN 1932-6203, Vol. 6, no. 3, article e17941. Article in journal (Refereed). Published.
Abstract [en]

Background: Advanced technical systems and analytic methods promise to provide policy makers with information to help them recognize the consequences of alternative courses of action during pandemics. Evaluations still show that response programs are insufficiently supported by information systems. This paper sets out to derive a protocol for implementation of integrated information infrastructures supporting regional and local pandemic response programs at the stage(s) when the outbreak can no longer be contained at its source. Methods: Nominal group methods for reaching consensus on complex problems were used to transform requirements data obtained from international experts into an implementation protocol. The analysis was performed in a cyclical process in which the experts first individually provided input to working documents and then discussed them in conference calls. Argument-based representation in design patterns was used to define the protocol at technical, system, and pandemic evidence levels. Results: The Protocol for a Standardized information infrastructure for Pandemic and Emerging infectious disease Response (PROSPER) outlines the implementation of information infrastructure aligned with pandemic response programs. The protocol covers analyses of the community at risk, the response processes, and response impacts. For each of these, the protocol outlines the implementation of a supporting information infrastructure in hierarchical patterns ranging from technical components and system functions to pandemic evidence production. Conclusions: The PROSPER protocol provides guidelines for implementation of an information infrastructure for pandemic response programs, both in settings where sophisticated health information systems are already used and in developing communities with limited access to financial and technical resources. The protocol is based on a generic health service model, and its functions are adjusted for community-level analyses of outbreak detection and progress, and of response program effectiveness. Scientifically grounded reporting principles need to be established for interpretation of information derived from outbreak detection algorithms and predictive modeling.

Place, publisher, year, edition, pages
Public Library of Science (PLoS), 2011
National Category
Social Sciences
Identifiers
urn:nbn:se:liu:diva-67828 (URN)10.1371/journal.pone.0017941 (DOI)000289053800006 ()
Note
Original publication: Toomas Timpka, Henrik Eriksson, Elin A. Gursky, Magnus Stromgren, Einar Holm, Joakim Ekberg, Olle Eriksson, Anders Grimvall, Lars Valter and James M. Nyce, Requirements and Design of the PROSPER Protocol for Implementation of Information Infrastructures Supporting Pandemic Response: A Nominal Group Study, 2011, PLOS ONE, 6(3), e17941. http://dx.doi.org/10.1371/journal.pone.0017941
Copyright: Public Library of Science (PLoS) http://www.plos.org/
Available from: 2011-04-29 Created: 2011-04-29 Last updated: 2013-09-05
Shahsavani, D. & Grimvall, A. (2011). Variance-based sensitivity analysis of model outputs using surrogate models. Environmental Modelling & Software, 26(6), 723-730
Open this publication in new window or tab >>Variance-based sensitivity analysis of model outputs using surrogate models
2011 (English). In: Environmental Modelling & Software, ISSN 1364-8152, Vol. 26, no. 6, pp. 723-730. Article in journal (Refereed). Published.
Abstract [en]

If a computer model is run many times with different inputs, the results obtained can often be used to derive a computationally cheaper approximation, or surrogate model, of the original computer code. Thereafter, the surrogate model can be employed to reduce the computational cost of a variance-based sensitivity analysis (VBSA) of the model output. Here, we draw attention to a procedure in which an adaptive sequential design is employed to derive surrogate models and estimate sensitivity indices for different sub-groups of inputs. The results of such group-wise VBSAs are then used to select inputs for a final VBSA. Our procedure is particularly useful when there is little prior knowledge about the response surface and the aim is to explore both the global variability and local nonlinear features of the model output. Our conclusions are based on computer experiments involving the process-based river basin model INCA-N, in which outputs like the average annual riverine load of nitrogen can be regarded as functions of 19 model parameters.
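The first-order (main-effect) sensitivity index S_i = Var(E[Y | X_i]) / Var(Y) that underlies VBSA can be estimated very simply by binning the samples on X_i and comparing the variance of the conditional bin means to the total variance. The sketch below does this directly on model-run samples, whereas the paper uses an adaptive surrogate to keep the number of expensive runs down; the function name and bin count are illustrative, and numpy is assumed.

```python
import numpy as np

def first_order_sobol(X, y, bins=20):
    """First-order sensitivity indices via binned conditional means:
    S_i ~= Var(E[Y | X_i]) / Var(Y)."""
    y = np.asarray(y, float)
    total_var = y.var()
    indices = []
    for i in range(X.shape[1]):
        # quantile-based bin edges keep roughly equal counts per bin
        edges = np.quantile(X[:, i], np.linspace(0, 1, bins + 1))
        which = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1, 0, bins - 1)
        cond_means = np.array([y[which == b].mean() for b in range(bins)])
        weights = np.array([(which == b).mean() for b in range(bins)])
        grand = (weights * cond_means).sum()
        indices.append((weights * (cond_means - grand) ** 2).sum() / total_var)
    return np.array(indices)
```

For a linear model Y = 5*X0 + X1 with independent uniform inputs, the indices should come out near 25/26 and 1/26, with the remaining input near zero.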

Place, publisher, year, edition, pages
Elsevier Science B.V., Amsterdam, 2011
Keywords
Sensitivity analysis, Surrogate models, Experimental design, Computational cost
National Category
Social Sciences
Identifiers
urn:nbn:se:liu:diva-67311 (URN), 10.1016/j.envsoft.2011.01.002 (DOI), 000288583600004
Available from: 2011-04-08 Created: 2011-04-08 Last updated: 2011-04-08
Sirisack, S. & Grimvall, A. (2011). Visual detection of change points and trends using animated bubble charts. In: Ema O. Ekundayo (Ed.), Environmental monitoring (pp. 327-340). Rijeka, Croatia: InTech
2011 (English). In: Environmental Monitoring, ed. Ema O. Ekundayo, Rijeka, Croatia: InTech, 2011, pp. 327-340. Chapter in book (Refereed).
Abstract [en]

"Environmental Monitoring" is a book designed by InTech - Open Access Publisher in collaboration with scientists and researchers from all over the world. The book is designed to present recent research advances and developments in the field of environmental monitoring to a global audience of scientists, researchers, environmental educators, administrators, managers, technicians, students, environmental enthusiasts and the general public. The book consists of a series of sections and chapters addressing topics like the monitoring of heavy metal contaminants in varied environments, biolgical monitoring/ecotoxicological studies; and the use of wireless sensor networks/Geosensor webs in environmental monitoring.

Place, publisher, year, edition, pages
Rijeka, Croatia: InTech, 2011
Keywords
Visualization, bubble chart, change point, trend
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-79587 (URN), 10.5772/29239 (DOI), 978-953-307-724-6 (ISBN)
Available from: 2012-08-15 Created: 2012-08-10 Last updated: 2013-03-27. Bibliographically approved.