Publications (10 of 69)
Brust, J., Burdakov, O., Erway, J. B. & Marcia, R. F. (2019). A dense initialization for limited-memory quasi-Newton methods. Computational Optimization and Applications, 74(1), 121-142
A dense initialization for limited-memory quasi-Newton methods
2019 (English). In: Computational Optimization and Applications, ISSN 0926-6003, Vol. 74, no 1, p. 121-142. Article in journal (Other academic). Published.
Abstract [en]

We consider a family of dense initializations for limited-memory quasi-Newton methods. The proposed initialization exploits an eigendecomposition-based separation of the full space into two complementary subspaces, assigning a different initialization parameter to each subspace. This family of dense initializations is proposed in the context of a limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) trust-region method that makes use of a shape-changing norm to define each subproblem. As with L-BFGS methods that traditionally use diagonal initialization, the dense initialization and the sequence of generated quasi-Newton matrices are never explicitly formed. Numerical experiments on the CUTEst test set suggest that this initialization together with the shape-changing trust-region method outperforms other L-BFGS methods for solving general nonconvex unconstrained optimization problems. While this dense initialization is proposed in the context of a special trust-region method, it has broad applications for more general quasi-Newton trust-region and line search methods. In fact, this initialization is suitable for use with any quasi-Newton update that admits a compact representation and, in particular, any member of the Broyden class of updates.
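The two-parameter structure described in the abstract can be illustrated directly. The sketch below is a minimal, hypothetical illustration (plain Python; all names are invented), not the authors' implementation, which never forms these operators explicitly: it applies B0 = γ1·P1 + γ2·P2, where P1 projects onto a given subspace and P2 onto its orthogonal complement.

```python
def apply_dense_init(v, basis, gamma1, gamma2):
    """Apply B0 = gamma1*P1 + gamma2*P2 to v, where P1 projects onto
    span(basis) (orthonormal vectors) and P2 onto its complement."""
    proj = [0.0] * len(v)
    for u in basis:
        c = sum(ui * vi for ui, vi in zip(u, v))      # coefficient <u, v>
        proj = [p + c * ui for p, ui in zip(proj, u)]
    comp = [vi - pi for vi, pi in zip(v, proj)]       # complementary part
    return [gamma1 * pi + gamma2 * ci for pi, ci in zip(proj, comp)]

# A vector inside the subspace is scaled by gamma1, one in the
# complement by gamma2 -- each subspace gets its own initialization.
u1 = [1.0, 0.0, 0.0]
print(apply_dense_init(u1, [u1], 2.0, 5.0))
print(apply_dense_init([0.0, 1.0, 0.0], [u1], 2.0, 5.0))
```

A diagonal initialization is the special case where the two parameters coincide.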

Place, publisher, year, edition, pages
Springer, 2019
Keywords
Large-scale nonlinear optimization, limited-memory quasi-Newton methods, trust-region methods, quasi-Newton matrices, shape-changing norm.
National Category
Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-143315 (URN); 10.1007/s10589-019-00112-x (DOI); 000476600200005 ()
Note

Funding agencies: NSF [CMMI-1334042, CMMI-1333326, IIS-1741490, IIS-1741264]

Available from: 2017-12-04. Created: 2017-12-04. Last updated: 2019-08-12. Bibliographically approved.
Sysoev, O. & Burdakov, O. (2019). A smoothed monotonic regression via L2 regularization. Knowledge and Information Systems, 59(1), 197-218
A smoothed monotonic regression via L2 regularization
2019 (English). In: Knowledge and Information Systems, ISSN 0219-1377, E-ISSN 0219-3116, Vol. 59, no 1, p. 197-218. Article in journal (Refereed). Published.
Abstract [en]

Monotonic regression is a standard method for extracting a monotone function from non-monotonic data, and it is used in many applications. However, a known drawback of this method is that its fitted response is a piecewise constant function, while practical response functions are often required to be continuous. The method proposed in this paper achieves monotonicity and smoothness of the regression by introducing an L2 regularization term. To achieve low computational complexity while providing high predictive power, we introduce a probabilistically motivated approach for selecting the regularization parameters. In addition, we present a technique for correcting inconsistencies on the boundary. We show that the complexity of the proposed method is O(n²). Our simulations demonstrate that when the data are large and the expected response is a complicated function (which is typical in machine learning applications), or when there is a change point in the response, the proposed method has a higher predictive power than many of the existing methods.
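As a rough illustration of the penalized problem (not the authors' O(n²) algorithm, and without their probabilistic parameter selection; all names are hypothetical), the sketch below minimizes Σ(xᵢ−yᵢ)² + λΣ(xᵢ₊₁−xᵢ)² over monotone x by projected gradient descent, using the classical pool-adjacent-violators (PAV) fit as the projection onto the monotone cone.

```python
def pav(y):
    """Pool-adjacent-violators: the (unsmoothed) monotonic L2 fit."""
    blocks = []                        # each block: [mean, weight, size]
    for v in y:
        blocks.append([v, 1.0, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()  # pool adjacent violating blocks
            m1, w1, c1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit

def smoothed_mr(y, lam=0.5, steps=2000, lr=0.1):
    """Projected gradient on sum((x-y)^2) + lam*sum(diff(x)^2),
    projecting onto the monotone cone with PAV after each step."""
    x = pav(y)
    for _ in range(steps):
        g = [2.0 * (xi - yi) for xi, yi in zip(x, y)]
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            g[i] -= 2.0 * lam * d      # gradient of the smoothing term
            g[i + 1] += 2.0 * lam * d
        x = pav([xi - lr * gi for xi, gi in zip(x, g)])
    return x
```

Unlike the raw PAV fit, the penalized fit trades some data fidelity for smaller jumps between adjacent levels.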

Place, publisher, year, edition, pages
Springer, 2019
Keywords
Monotonic regression, Kernel smoothing, Penalized regression, Probabilistic learning, Constrained optimization
National Category
Probability Theory and Statistics; Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-147628 (URN); 10.1007/s10115-018-1201-2 (DOI); 000461390300008 ()
Available from: 2018-04-27. Created: 2018-04-27. Last updated: 2019-04-03. Bibliographically approved.
Burdakov, O., Dai, Y.-H. & Huang, N. (2019). Stabilized Barzilai-Borwein method. Journal of Computational Mathematics, 37(6), 916-936
Stabilized Barzilai-Borwein method
2019 (English). In: Journal of Computational Mathematics, ISSN 0254-9409, E-ISSN 1991-7139, Vol. 37, no 6, p. 916-936. Article in journal (Refereed). Published.
Abstract [en]

The Barzilai-Borwein (BB) method is a popular and efficient tool for solving large-scale unconstrained optimization problems. Its search direction is the same as for the steepest descent (Cauchy) method, but its stepsize rule is different. Owing to this, it converges much faster than the Cauchy method. A feature of the BB method is that it may generate too long steps, which throw the iterates too far away from the solution. Moreover, it may not converge, even when the objective function is strongly convex. In this paper, a stabilization technique is introduced. It consists of bounding the distance between each pair of successive iterates, which often allows for decreasing the number of BB iterations. When the BB method does not converge, our simple modification of this method makes it convergent. For strongly convex functions with Lipschitz gradients, we prove its global convergence, despite the fact that no line search is involved and only gradient values are used. Since the number of stabilization steps is proved to be finite, the stabilized version inherits the fast local convergence of the BB method. The presented results of extensive numerical experiments show that our stabilization technique often allows the BB method to solve problems in fewer iterations, or even to solve problems where the latter fails.
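The stabilization idea — capping the length of each step — fits in a few lines. The sketch below is a simplified, hypothetical BB1-type iteration with a step-length bound Δ (it omits the adaptive safeguards analyzed in the paper):

```python
import math

def bb_stabilized(grad, x0, delta=1.0, tol=1e-8, max_iter=500):
    """BB1 gradient iteration with the cap ||x_{k+1} - x_k|| <= delta."""
    x = list(x0)
    g = grad(x)
    alpha = 1e-4                                  # conservative first stepsize
    for _ in range(max_iter):
        gnorm = math.sqrt(sum(gi * gi for gi in g))
        if gnorm < tol:
            break
        step = min(alpha, delta / gnorm)          # the stabilization cap
        x_new = [xi - step * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]     # iterate difference
        yv = [a - b for a, b in zip(g_new, g)]    # gradient difference
        sy = sum(a * b for a, b in zip(s, yv))
        # BB1 stepsize s's / s'y; fall back to 1.0 if curvature is tiny
        alpha = sum(a * a for a in s) / sy if sy > 1e-16 else 1.0
        x, g = x_new, g_new
    return x

# Strongly convex quadratic f(x) = 0.5*(x1^2 + 10*x2^2)
sol = bb_stabilized(lambda x: [x[0], 10.0 * x[1]], [1.0, 1.0])
```

The cap only ever shortens a step, so near the solution (where the gradient is small) the iteration reverts to the plain BB method.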

Place, publisher, year, edition, pages
Global Science Press, 2019
Keywords
Unconstrained optimization, Spectral algorithms, Stabilization, Convergence analysis.
National Category
Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-162637 (URN); 10.4208/jcm.1911-m2019-0171 (DOI); 000504738100009 ()
Note

Funding agencies: Visiting Scientist award under the Chinese Academy of Sciences President's International Fellowship Initiative; National Natural Science Foundation of China [11631013]; National 973 Program of China

Available from: 2019-12-12. Created: 2019-12-12. Last updated: 2020-01-09. Bibliographically approved.
Burdakov, O. & Sysoev, O. (2017). A Dual Active-Set Algorithm for Regularized Slope-Constrained Monotonic Regression. Iranian Journal of Operations Research, 8(2), 40-47
A Dual Active-Set Algorithm for Regularized Slope-Constrained Monotonic Regression
2017 (English). In: Iranian Journal of Operations Research, ISSN 2008-1189, Vol. 8, no 2, p. 40-47. Article in journal (Refereed). Published.
Abstract [en]

In many problems, it is necessary to take monotonic relations into account. Monotonic (isotonic) Regression (MR) is often involved in solving such problems. The MR solutions are of a step-shaped form with a typically sharp change of values between adjacent steps, which in some applications is regarded as a disadvantage. We recently introduced a Smoothed MR (SMR) problem, obtained from the MR by adding a regularization penalty term. The SMR is aimed at smoothing the aforementioned sharp changes, and its solution has a far less pronounced step-structure, if any. The purpose of this paper is to further improve the SMR solution by getting rid of such a structure altogether. This is achieved by introducing a lower bound on the slope in the SMR. We call the result the Smoothed Slope-Constrained MR (SSCMR) problem. It is shown here how to reduce it to the SMR, which is a convex quadratic optimization problem. The Smoothed Pool Adjacent Violators (SPAV) algorithm developed in our recent publications for solving the SMR problem is adapted here to solving the SSCMR problem. This algorithm belongs to the class of dual active-set algorithms. Although the complexity of the SPAV algorithm is O(n²), its running time grows almost linearly with n in our computational experiments. We present numerical results which illustrate the predictive performance of our approach. They also show that the SSCMR solution is free of the undesirable features of the MR and SMR solutions.
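The reduction of a slope lower bound to an ordinary monotonic fit can be sketched as follows: subtracting c·i from the data turns the constraint x[i+1] − x[i] ≥ c into plain monotonicity of z[i] = x[i] − c·i. The hypothetical sketch below uses the classical (unsmoothed) PAV fit in place of the paper's SPAV algorithm, but the change of variables is the same idea:

```python
def pav(y):
    """Pool-adjacent-violators: monotonic least-squares fit."""
    blocks = []                         # each block: [mean, weight, size]
    for v in y:
        blocks.append([v, 1.0, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit

def slope_constrained_fit(y, c):
    """Fit x with x[i+1] - x[i] >= c by shifting, fitting, unshifting."""
    shifted = [yi - c * i for i, yi in enumerate(y)]
    z = pav(shifted)                    # plain monotonicity on z
    return [zi + c * i for i, zi in enumerate(z)]
```

Adding c·i back guarantees every consecutive difference of the returned fit is at least c.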

Place, publisher, year, edition, pages
Tehran, 2017
Keywords
Monotonic regression, Regularization, Quadratic penalty, Convex quadratic optimization, Dual active-set method, Large-scale optimization
National Category
Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-148061 (URN); 10.29252/iors.8.2.40 (DOI)
Available from: 2018-05-29. Created: 2018-05-29. Last updated: 2018-06-07. Bibliographically approved.
Burdakov, O., Gong, L., Zikrin, S. & Yuan, Y.-x. (2017). On Efficiently Combining Limited-Memory and Trust-Region Techniques. Mathematical Programming Computation, 9(1), 101-134
On Efficiently Combining Limited-Memory and Trust-Region Techniques
2017 (English). In: Mathematical Programming Computation, ISSN 1867-2949, E-ISSN 1867-2957, Vol. 9, no 1, p. 101-134. Article in journal (Refereed). Published.
Abstract [en]

Limited-memory quasi-Newton methods and trust-region methods represent two efficient approaches used for solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems. For this reason, the limited-memory methods are usually combined with a line search. We show how to efficiently combine limited-memory and trust-region techniques. One of our approaches is based on the eigenvalue decomposition of the limited-memory quasi-Newton approximation of the Hessian matrix. The decomposition allows for finding a nearly exact solution to the trust-region subproblem defined by the Euclidean norm with an insignificant computational overhead as compared with the cost of computing the quasi-Newton direction in line-search limited-memory methods. The other approach is based on two new eigenvalue-based norms. The advantage of the new norms is that the trust-region subproblem is separable and each of the smaller subproblems is easy to solve. We show that our eigenvalue-based limited-memory trust-region methods are globally convergent. Moreover, we propose improved versions of the existing limited-memory trust-region algorithms. The presented results of numerical experiments demonstrate the efficiency of our approach, which is competitive with line-search versions of the L-BFGS method.
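The eigenvalue-based approach to the Euclidean-norm subproblem can be illustrated in eigen-coordinates, where the quadratic becomes separable. The sketch below is a hypothetical toy (it takes the eigendecomposition as given, rather than obtaining it implicitly from the limited-memory representation, and it assumes positive eigenvalues with no hard case): it solves min gᵀp + ½pᵀΛp subject to ‖p‖ ≤ δ by bisection on the Lagrange multiplier σ.

```python
import math

def tr_subproblem(eigvals, g, delta, iters=100):
    """Minimize g.p + 0.5*p'diag(eigvals)p subject to ||p|| <= delta.
    Assumes all eigenvalues positive and no hard case."""
    def p(sigma):
        return [-gi / (lam + sigma) for gi, lam in zip(g, eigvals)]
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    if norm(p(0.0)) <= delta:          # unconstrained minimizer is inside
        return p(0.0)
    lo, hi = 0.0, 1.0
    while norm(p(hi)) > delta:         # bracket the multiplier sigma
        hi *= 2.0
    for _ in range(iters):             # bisection on ||p(sigma)|| = delta
        mid = 0.5 * (lo + hi)
        if norm(p(mid)) > delta:
            lo = mid
        else:
            hi = mid
    return p(hi)
```

With the shape-changing norms of the paper, each separated subproblem is even simpler than this scalar root-find; the sketch only shows why eigen-coordinates make the Euclidean subproblem cheap.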

Keywords
Unconstrained Optimization, Large-scale Problems, Limited-Memory Methods, Trust-Region Methods, Shape-Changing Norm, Eigenvalue Decomposition
National Category
Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-129783 (URN); 10.1007/s12532-016-0109-7 (DOI)
Available from: 2016-06-28. Created: 2016-06-28. Last updated: 2017-11-28. Bibliographically approved.
Burdakov, O., Kvarnström, J. & Doherty, P. (2017). Optimal scheduling for replacing perimeter guarding unmanned aerial vehicles. Annals of Operations Research, 249(1), 163-174
Optimal scheduling for replacing perimeter guarding unmanned aerial vehicles
2017 (English). In: Annals of Operations Research, ISSN 0254-5330, E-ISSN 1572-9338, Vol. 249, no 1, p. 163-174. Article in journal (Refereed). Published.
Abstract [en]

Guarding the perimeter of an area in order to detect potential intruders is an important task in a variety of security-related applications. This task can in many circumstances be performed by a set of camera-equipped unmanned aerial vehicles (UAVs). Such UAVs will occasionally require refueling or recharging, in which case they must temporarily be replaced by other UAVs in order to maintain complete surveillance of the perimeter. In this paper we consider the problem of scheduling such replacements. We present optimal replacement strategies and justify their optimality.

Place, publisher, year, edition, pages
Springer, 2017
Keywords
Scheduling problem, Optimal replacement strategies, Perimeter guarding, Unmanned aerial vehicles
National Category
Computer Sciences; Computational Mathematics; Information Systems
Identifiers
urn:nbn:se:liu:diva-126459 (URN); 10.1007/s10479-016-2169-5 (DOI); 000394151400010 (); 2-s2.0-84961644607 (Scopus ID)
Funder
EU, FP7, Seventh Framework Programme, 600958; VINNOVA, 2013-01206; Linnaeus research environment CADICS; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2016-03-27. Created: 2016-03-27. Last updated: 2018-01-10. Bibliographically approved.
Sysoev, O. & Burdakov, O. (2016). A Smoothed Monotonic Regression via L2 Regularization. Linköping: Linköping University Electronic Press
A Smoothed Monotonic Regression via L2 Regularization
2016 (English). Report (Other academic).
Abstract [en]

Monotonic Regression (MR) is a standard method for extracting a monotone function from non-monotonic data, and it is used in many applications. However, a known drawback of this method is that its fitted response is a piecewise constant function, while practical response functions are often required to be continuous. The method proposed in this paper achieves monotonicity and smoothness of the regression by introducing an L2 regularization term, and it is shown that the complexity of this method is O(n²). In addition, our simulations demonstrate that the proposed method normally has higher predictive power than some commonly used alternative methods, such as monotonic kernel smoothers. In contrast to these methods, our approach is probabilistically motivated and has connections to Bayesian modeling.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2016. p. 17
Series
LiTH-MAT-R, ISSN 0348-2960 ; 2016:01
Keywords
Monotonic regression, Kernel smoothing, Penalized regression, Bayesian modeling
National Category
Probability Theory and Statistics; Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-125398 (URN); LiTH-MAT-R--2016/01--SE (ISRN)
Available from: 2016-02-22. Created: 2016-02-22. Last updated: 2016-09-26. Bibliographically approved.
Kalish, M. L., Dunn, J. C., Burdakov, O. P. & Sysoev, O. (2016). A statistical test of the equality of latent orders. Journal of mathematical psychology (Print), 70, 1-11, Article ID YJMPS2051.
A statistical test of the equality of latent orders
2016 (English). In: Journal of mathematical psychology (Print), ISSN 0022-2496, E-ISSN 1096-0880, Vol. 70, p. 1-11, article id YJMPS2051. Article in journal (Refereed). Published.
Abstract [en]

It is sometimes the case that a theory proposes that the population means on two variables should have the same rank order across a set of experimental conditions. This paper presents a test of this hypothesis. The test statistic is based on the coupled monotonic regression algorithm developed by the authors. The significance of the test statistic is determined by comparison to an empirical distribution specific to each case, obtained via non-parametric or semi-parametric bootstrap. We present an analysis of the power and Type I error control of the test based on numerical simulation. Partial order constraints placed on the variables may sometimes be theoretically justified. These constraints are easily incorporated into the computation of the test statistic and are shown to have substantial effects on power. The test can be applied to any form of data, as long as an appropriate statistical model can be specified.
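The bootstrap calibration step can be sketched generically. The helper below is a hypothetical illustration (the paper's statistic comes from the authors' coupled monotonic regression algorithm, which is not reproduced here): it estimates a p-value by comparing an observed statistic with its resampled null distribution.

```python
import random

def bootstrap_pvalue(stat, observed, resample, n_boot=1000, seed=0):
    """Fraction of bootstrap replicates whose statistic reaches the
    observed value (with the standard +1 continuity correction)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_boot) if stat(resample(rng)) >= observed)
    return (hits + 1) / (n_boot + 1)

# Toy null model: the statistic is |mean| of 20 standard normal draws.
null_sample = lambda rng: [rng.gauss(0.0, 1.0) for _ in range(20)]
abs_mean = lambda xs: abs(sum(xs) / len(xs))
p = bootstrap_pvalue(abs_mean, 2.0, null_sample)  # 2.0 is far in the tail
```

The empirical distribution is rebuilt for each case, which is what makes the test statistic's calibration case-specific, as the abstract describes.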

Place, publisher, year, edition, pages
Academic Press, 2016
Keywords
State-trace analysis, Monotonic regression, Hypothesis test
National Category
Other Mathematics
Identifiers
urn:nbn:se:liu:diva-122765 (URN); 10.1016/j.jmp.2015.10.004 (DOI); 000372686500001 ()
Note

Funding agencies: Australian Research Council [0877510, 0878630, 110100751, 130101535]; National Science Foundation [1256959]; Linköping University

Available from: 2015-11-21. Created: 2015-11-21. Last updated: 2017-12-01. Bibliographically approved.
Sysoev, O., Grimvall, A. & Burdakov, O. (2016). Bootstrap confidence intervals for large-scale multivariate monotonic regression problems. Communications in statistics. Simulation and computation, 45(3), 1025-1040
Bootstrap confidence intervals for large-scale multivariate monotonic regression problems
2016 (English). In: Communications in statistics. Simulation and computation, ISSN 0361-0918, E-ISSN 1532-4141, Vol. 45, no 3, p. 1025-1040. Article in journal (Refereed). Published.
Abstract [en]

Recently, the methods used to estimate monotonic regression (MR) models have been substantially improved, and some algorithms can now produce high-accuracy monotonic fits to multivariate datasets containing over a million observations. Nevertheless, the computational burden can be prohibitively large for resampling techniques in which numerous datasets are processed independently of each other. Here, we present efficient algorithms for estimation of confidence limits in large-scale settings that take into account the similarity of the bootstrap or jackknifed datasets to which MR models are fitted. In addition, we introduce modifications that substantially improve the accuracy of MR solutions for binary response variables. The performance of our algorithms is illustrated using data on death in coronary heart disease for a large population. This example also illustrates that MR can be a valuable complement to logistic regression.
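For reference, the baseline the paper accelerates — refitting each resampled dataset independently and reading off percentile limits — can be sketched as follows (hypothetical names; the paper's contribution is reusing work across similar bootstrap datasets, which this plain version does not do):

```python
import random

def bootstrap_ci(data, estimator, alpha=0.10, n_boot=2000, seed=0):
    """Percentile bootstrap confidence interval for estimator(data)."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        estimator([data[rng.randrange(n)] for _ in range(n)])  # resample
        for _ in range(n_boot)                                 # refit
    )
    lo = stats[int(alpha / 2 * n_boot)]          # lower percentile limit
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]  # upper percentile limit
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(list(range(100)), mean)
```

With an MR solver as the estimator, the `n_boot` independent refits dominate the cost, which is exactly where exploiting the similarity of the resampled datasets pays off.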

Place, publisher, year, edition, pages
Taylor & Francis, 2016
Keywords
Big data, Bootstrap, Confidence intervals, Monotonic regression, Pool-adjacent-violators algorithm
National Category
Probability Theory and Statistics; Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-85169 (URN); 10.1080/03610918.2014.911899 (DOI); 000372527900014 ()
Note

At the time of the doctoral defence, this publication existed as a manuscript.

Available from: 2012-11-08. Created: 2012-11-08. Last updated: 2017-12-13.
Burdakov, O., Kanzow, C. & Schwartz, A. (2016). Mathematical Programs with Cardinality Constraints: Reformulation by Complementarity-Type Conditions and a Regularization Method. SIAM Journal on Optimization, 26(1), 397-425
Mathematical Programs with Cardinality Constraints: Reformulation by Complementarity-Type Conditions and a Regularization Method
2016 (English). In: SIAM Journal on Optimization, ISSN 1052-6234, E-ISSN 1095-7189, Vol. 26, no 1, p. 397-425. Article in journal (Refereed). Published.
Abstract [en]

Optimization problems with cardinality constraints are very difficult mathematical programs which are typically solved by global techniques from discrete optimization. Here we introduce a mixed-integer formulation whose standard relaxation still has the same solutions (in the sense of global minima) as the underlying cardinality-constrained problem; the relation between the local minima is also discussed in detail. Since our reformulation is a minimization problem in continuous variables, it allows us to apply ideas from that field to cardinality-constrained problems. In particular, we derive suitable stationarity conditions and suggest an appropriate regularization method for the solution of optimization problems with cardinality constraints. This regularization method is shown to be globally convergent to a Mordukhovich-stationary point. Extensive numerical results are given to illustrate the behavior of this method.
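The equivalence behind a complementarity-type reformulation of this kind can be checked directly on small vectors: ‖x‖₀ ≤ κ holds exactly when there is some y ∈ [0,1]ⁿ with Σyᵢ ≥ n − κ and xᵢyᵢ = 0 for all i. The checker below is a toy sketch with hypothetical names, illustrating only this feasibility correspondence, not the regularization method itself:

```python
def cardinality_ok(x, kappa):
    """Direct check of the cardinality constraint ||x||_0 <= kappa."""
    return sum(1 for xi in x if xi != 0.0) <= kappa

def witness_y(x, kappa):
    """Binary y certifying the complementarity reformulation, or None."""
    y = [0.0 if xi != 0.0 else 1.0 for xi in x]   # y_i = 1 where x_i = 0
    return y if sum(y) >= len(x) - kappa else None

def reformulation_ok(x, y, kappa):
    """Feasibility of (x, y) in the continuous reformulation."""
    return (all(0.0 <= yi <= 1.0 for yi in y)
            and sum(y) >= len(x) - kappa
            and all(xi * yi == 0.0 for xi, yi in zip(x, y)))
```

A feasible witness y exists precisely when x satisfies the cardinality constraint, which is what lets continuous-optimization machinery attack the problem.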

Keywords
Cardinality constraints, global minima, local minima, stationary points, M-stationarity, relaxation, regularization method
National Category
Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-124611 (URN); 10.1137/140978077 (DOI); 000373631500015 ()
Available from: 2016-02-06. Created: 2016-02-06. Last updated: 2018-01-26. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1836-4200
