Open-pit mining is a surface mining operation in which ore or waste is excavated from the surface of the land. The open-pit design problem is to decide which blocks of an ore deposit to mine in order to maximize the total profit, while obeying digging constraints concerning pit slope and block precedence. The open-pit design problem can be formulated as a maximum flow problem in a certain capacitated network, as first shown by Picard in 1976. His derivation is based on a restatement of the problem as a quadratic binary program. We give an alternative derivation of the maximum flow formulation, which uses only linear programming duality.
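Picard's construction can be illustrated on a toy block model (a minimal sketch with hypothetical data, not the derivation given in the paper): each block becomes a node, a source is joined to each profitable block with capacity equal to its profit, each unprofitable block is joined to a sink with capacity equal to its cost, and precedence is enforced by uncapacitated arcs; the source side of a minimum cut is then an optimal pit.

```python
from collections import defaultdict, deque

INF = float('inf')

def picard_network(profits, precedence):
    """Build Picard's max-flow network for the open-pit design problem.

    profits: block -> net profit of mining the block (may be negative).
    precedence: block -> blocks that must be mined before it (those above it).
    """
    cap = {u: {} for u in list(profits) + ['s', 't']}
    for b, p in profits.items():
        if p > 0:
            cap['s'][b] = p       # source -> profitable block
        elif p < 0:
            cap[b]['t'] = -p      # unprofitable block -> sink
        for q in precedence.get(b, []):
            cap[b][q] = INF       # precedence arcs may never be cut
    return cap

def optimal_pit(cap, s='s', t='t'):
    """Edmonds-Karp max flow; returns the source side of a minimum cut."""
    adj = {u: set() for u in cap}
    for u in cap:
        for v in cap[u]:
            adj[u].add(v)
            adj[v].add(u)         # reverse arcs for the residual graph
    flow = defaultdict(float)
    def bfs_tree():
        parent, q = {s: None}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[u].get(v, 0) - flow[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        return parent
    while True:
        parent = bfs_tree()
        if t not in parent:                 # no augmenting path: done
            return set(parent) - {s, t}
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u].get(v, 0) - flow[(u, v)] for u, v in path)
        for u, v in path:
            flow[(u, v)] += push
            flow[(v, u)] -= push

# Ore block 'd' (profit 5) lies beneath waste blocks 'a', 'b', 'c' (cost 1 each).
profits = {'a': -1, 'b': -1, 'c': -1, 'd': 5}
precedence = {'d': ['a', 'b', 'c']}
print(sorted(optimal_pit(picard_network(profits, precedence))))
# -> ['a', 'b', 'c', 'd']: mining all four blocks yields total profit 2
```

If the ore block were worth less than the waste above it (say profit 2 against cost 3), the min cut would leave all blocks on the sink side and the optimal pit would be empty.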
We consider the problem of finding an optimal mining sequence for an open pit during a number of time periods subject only to spatial and temporal precedence constraints. This problem is of interest because such constraints are generic to any open-pit scheduling problem and, in particular, because it arises as a Lagrangean relaxation of an open-pit scheduling problem. We show that this multi-period open-pit mining problem can be solved as a maximum flow problem in a time-expanded mine graph. Further, the minimum cut in this graph will define an optimal sequence of pits. This result extends a well-known result of J.-C. Picard from 1976 for the open-pit mine design problem, that is, the single-period case, to the case of multiple time periods.
We consider the problem of finding an optimal mining schedule for an open pit during a number of time periods, subject to a mining capacity restriction for each time period. By applying Lagrangian relaxation to the capacities, a multi-parametric formulation is obtained. We show that this formulation can be restated as a maximum flow problem in a time-expanded network. This result extends a well-known result of Picard from 1976 for the open-pit design problem, that is, the single-period case, to the case of multiple time periods.
This paper discusses a Lagrangian relaxation interpretation of the Picard and Smith (2004) parametric approach to open-pit mining, which finds a sequence of intermediate contours leading to an ultimate one. This method is similar to the well-known parametric approach of Lerchs and Grossmann (1965). We give examples of worst case performance, as well as best case performance, of the Picard-Smith approach. The worst case behaviour can be very poor in that we might not obtain any intermediate contours at all. We also discuss alternative parametric methods for finding intermediate contours, but conclude that such methods seem to have inherent weaknesses.
The selection of a mine design is based on estimating net present values of all possible, technically feasible mine plans so as to select the one with the maximum value. It is a hard task to know with certainty the quantity and quality of ore in the ground. This geological uncertainty, and also the future market behaviour of metal prices and foreign exchange rates, which cannot be known with certainty, make mining a high-risk business.
Value-at-Risk (VaR) is a measure that is used in financial decisions to minimize the loss caused by inadequate monitoring of risk. This measure does, however, have certain drawbacks such as lack of consistency, nonconvexity, and nondifferentiability. Rockafellar and Uryasev (2000) introduce the Conditional Value-at-Risk (CVaR) measure as an alternative to the VaR measure. The CVaR measure gives rise to a convex problem.
An optimization model that maximizes expected return while minimizing risk is important for the mining sector as this will help make better decisions on the blocks of ore to mine at a particular point in time. We present a CVaR approach to the uncertainty involved in open-pit mining. We formulate investment and design models for the open-pit mine and also give a nested pit scheduling model based on CVaR. Several numerical results based on our models are presented by using scenarios from simulated geological and price uncertainties.
The selection of a mine design is based on estimating net present values of all possible, technically feasible mine plans so as to select the one with the maximum value. It is a hard task to know with certainty the quantity and quality of ore in the ground. This geological uncertainty and also the future market behavior of metal prices and foreign exchange rates, which are always uncertain, make mining a high risk business. Value-at-Risk (VaR) is a measure that is used in financial decisions to minimize the loss caused by inadequate monitoring of risk. This measure does, however, have certain drawbacks such as lack of consistency, nonconvexity, and nondifferentiability. Rockafellar and Uryasev [J. Risk 2, 21-41 (2000)] introduce the Conditional Value-at-Risk (CVaR) measure as an alternative to the VaR measure. The CVaR measure gives rise to a convex optimization problem. An optimization model that maximizes expected return while minimizing risk is important for the mining sector as this will help make better decisions on the blocks of ore to mine at a particular point in time. We present a CVaR approach to the uncertainty involved in open-pit mining. We formulate investment and design models for the open-pit mine and also give a nested pit scheduling model based on CVaR. Several numerical results based on our models are presented by using scenarios from simulated geological and market uncertainties.
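The Rockafellar-Uryasev construction mentioned above can be sketched on a finite loss sample (a toy illustration with made-up numbers, not data from the paper): CVaR at level alpha is the minimum over z of z + E[max(0, L - z)]/(1 - alpha), and for a discrete sample the minimum is attained at one of the observed losses.

```python
def cvar(losses, alpha):
    """Conditional Value-at-Risk of a finite loss sample at level alpha,
    computed via the Rockafellar-Uryasev minimization formula. The
    auxiliary function F is convex and piecewise linear in z, so its
    minimum over the sample points equals the minimum over all real z."""
    n = len(losses)
    def F(z):
        return z + sum(max(0.0, l - z) for l in losses) / ((1 - alpha) * n)
    return min(F(z) for z in losses)

losses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 100]   # hypothetical scenario losses
print(cvar(losses, 0.8))   # -> 61.5, the average of the worst 20% (23 and 100)
```

This linear structure is what makes the CVaR-based mine design and scheduling models convex, in contrast to VaR.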
Open-pit production scheduling deals with the problem of deciding what and when to mine from an open pit, given potential profits of the different fractions of the mining volume, pit-slope restrictions, and mining capacity restrictions for successive time periods. We give suggestions for Lagrangian dual heuristic approaches for the open-pit production scheduling problem. First, the case with a single mining capacity restriction per time period is considered. For this case, linear programming relaxations are solved to find values of the multipliers for the capacity restrictions, to be used in a Lagrangian relaxation of the constraints. The solution to the relaxed problem will not in general satisfy the capacity restrictions, but can be made feasible by adjusting the multiplier values for one time period at a time. Further, a time aggregation approach is suggested as a way of reducing the computational burden of solving linear programming relaxations, especially for large-scale real-life mine problems. For the case with multiple capacity restrictions per time period, we apply newly developed conditions for optimality and near-optimality in general discrete optimization problems to construct a procedure for heuristically finding near-optimal intermediate pits.
We present a sequential linear programming (SLP) algorithm in which the traditional line-search step is replaced by a multi-dimensional search. The algorithm is based on inner approximations of both the primal and dual spaces, which yields a method that, in the primal space, combines column and constraint generation. The algorithm does not use a merit function, and the linear programming subproblem of the algorithm differs from the one obtained in traditional methods of this type in that linearized constraints are taken into account only implicitly, in a Lagrangian-dual fashion. Convergence to a point that satisfies the Karush-Kuhn-Tucker conditions is established. We apply the new method to a selection of the Hock-Schittkowski nonlinear test problems and report a preliminary computational study in a Matlab environment. Since the proposed algorithm combines column and constraint generation, it should be advantageous for problems with large numbers of variables and constraints.
The feasible direction method of Frank and Wolfe has been claimed to be efficient for solving the stochastic transportation problem. While this is true for very moderate accuracy requirements, for higher accuracy the diagonalized Newton and conjugate Frank–Wolfe algorithms, which we describe and evaluate, are substantially more efficient. Like the Frank–Wolfe algorithm, these two algorithms take advantage of the structure of the stochastic transportation problem. We also introduce a Frank–Wolfe type algorithm with multi-dimensional search; this search procedure exploits the Cartesian product structure of the problem. Numerical results for two classic test problem sets are given. The three new methods that are considered are shown to be superior to the Frank–Wolfe method, and also to an earlier suggested heuristic acceleration of the Frank–Wolfe method.
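For readers unfamiliar with the method, a basic Frank–Wolfe iteration can be sketched on a toy problem (minimizing a quadratic over the unit simplex; this illustrates the generic algorithm, not the stochastic transportation implementations evaluated in the paper):

```python
def frank_wolfe(grad_f, n, iters=500):
    """Frank-Wolfe on the unit simplex {x >= 0, sum(x) = 1}.

    The linearized subproblem min_y grad(x)^T y over the simplex is
    solved by a vertex: the unit vector of the smallest gradient
    component. The iterate then moves toward that vertex.
    """
    x = [1.0 / n] * n                            # start at the barycentre
    for k in range(iters):
        g = grad_f(x)
        j = min(range(n), key=lambda i: g[i])    # best simplex vertex e_j
        gamma = 2.0 / (k + 2)                    # classic diminishing step
        x = [(1 - gamma) * xi for xi in x]       # convex combination keeps
        x[j] += gamma                            # the iterate feasible
    return x

# Minimize ||x - p||^2 over the simplex; since p itself lies in the
# simplex, the optimum is p.
p = [0.2, 0.5, 0.3]
x = frank_wolfe(lambda x: [2 * (xi - pi) for xi, pi in zip(x, p)], 3)
```

With the open-loop step 2/(k+2), the objective gap decays on the order of 1/k, which is why the accelerated variants discussed above pay off once higher accuracy is required.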
In aerodynamic development of ground vehicles, the use of Computational Fluid Dynamics (CFD) is crucial for improving the aerodynamic performance, stability and comfort of the vehicle. Simulation time and accuracy are two key factors of a well working CFD procedure. Using scale-resolving simulations, accurate predictions of the flow field and aerodynamic forces are possible, but this often leads to long simulation times. For a given solver, one of the most significant aspects of the simulation time/cost is the temporal resolution. In this study, this aspect is investigated using the realistic vehicle model DrivAer with the notchback geometry as the test case. To ensure a direct and accurate comparison with wind tunnel measurements, performed at TU Berlin, a large section of the wind tunnel is included in the simulation domain. All simulations are performed at a Reynolds number of 3.12 million, based on the vehicle length. Three spatial resolutions were compared, showing that a hybrid element mesh consisting of 102 million cells revealed only small differences from the finest mesh investigated, as well as showing excellent agreement with wind tunnel measurements. An investigation of the temporal resolution is performed in order to see its effect on the simulation time/cost and the accuracy of the results. The finest temporal resolution resulted in a Courant-Friedrichs-Lewy (CFL) number less than unity, while the coarsest reached a CFL number of around 100. From these results, it is seen that it is possible to reduce the simulation time by more than 90% (CFL 20) and still keep sufficient accuracy of the forces and important features of the flow field.
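For reference, the Courant-Friedrichs-Lewy number quoted above is the standard nondimensional ratio (the generic definition, not specific to this study) of the distance travelled by the flow in one time step to the local cell size:

```latex
\mathrm{CFL} = \frac{u \, \Delta t}{\Delta x}
```

A CFL below unity means information propagates less than one cell per time step, while a CFL of around 100 allows a correspondingly larger time step at the price of temporal accuracy.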
High dose-rate (HDR) brachytherapy is a kind of radiotherapy used to treat, among other cancers, prostate cancer. When applied to prostate cancer, a radioactive source is moved through catheters implanted into the prostate. For each patient a treatment plan is constructed that decides, for example, catheter placement and dwell time distribution, that is, where to stop the radioactive source and for how long.
Mathematical optimization methods have been used to find quality plans with respect to dwell time distribution; however, few optimization approaches regarding catheter placement have been studied. In this article we present an integrated optimization model that optimizes catheter placement and dwell time distribution simultaneously. Our results show that integrating the two decisions yields greatly improved plans, with improvements ranging from 15% to 94%.
Since the presented model is computationally demanding to solve, we also present three heuristics: tabu search, variable neighbourhood search, and a genetic algorithm. Of these, variable neighbourhood search is clearly the best, outperforming state-of-the-art optimization software (CPLEX) and the two other heuristics.
Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. Conclusions: The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high-quality dose distributions.
Purpose: Dose plans generated with optimization models hitherto used in HDR brachytherapy have shown a tendency to yield longer dwell times than manually optimized plans. Concern has been raised about the corresponding undesired hot spots, and various methods to mitigate these have been developed. The hypotheses of this work are a) that one cause for the long dwell times is the use of objective functions comprising simple linear penalties and b) that alternative penalties, being piecewise linear, would lead to reduced lengths of individual dwell times.
Methods: The characteristics of the linear penalties and the piecewise linear penalties are analysed mathematically. Experimental comparisons between the two types of penalties are carried out retrospectively for a set of prostate cancer patients.
Results: While most dose-volume parameters do not differ significantly between the two types of penalties, significant changes can be seen in the dwell times. On average, total dwell times were reduced by 4.2%, with a reduction of maximum dwell times by 30%, using the alternative penalties.
Conclusion: The use of linear penalties in optimization models for HDR brachytherapy is one cause of the undesired longer dwell times appearing in mathematically optimized plans. By introducing alternative penalties, a significant reduction in dwell times can be achieved for HDR brachytherapy dose plans. Although various constraints to reduce the long dwell times have been developed, our finding is of fundamental interest in showing the shape of the objective function to be one reason for their appearance.
When optimizing dwell times for HDR brachytherapy it is common to use a model comprising an objective of linear penalties. However, whether a plan is considered good or not depends on other measures, such as DVH-based parameters. We show through experiments that the correlation between the value of the objective function and the values of DVH-based parameters, such as D_{90}, is weak in some cases. It seems that the objective function can only classify solutions into better or worse; it cannot distinguish the best with respect to DVH-based parameters.
Purpose: Most clinical software for optimizing dwelling time patterns is based on a linear penalty model. The quality of a dose distribution generated by the dwelling time pattern is, however, evaluated through a number of dosimetric indices. The purpose of this article is to investigate the relationship between the linear penalty model and the dosimetric indices.
Method and Materials: Data sets from three patients, previously treated for prostate cancer with HDR brachytherapy as a boost to external beam therapy, were used for this study, and for each of them 300 random dwelling time patterns were generated. The relationship between the linear penalty model and the dosimetric indices was studied both by Pearson's product-moment correlation coefficient between the objective function value of the linear penalty model and the values of the dosimetric indices, and by scattergrams.
Results: For one of the three patients we found a clear connection between the linear penalty model and the values of the dosimetric indices, but not for the other two. For the two patients without a clear connection there were some dosimetric indices that actually improved with deteriorating objective function value.
Conclusion: The dwelling time pattern found by using the linear penalty model does not correspond to the optimal dose distribution with respect to dosimetric indices.
A mandatory Tanzania pension fund with a final salary defined benefit is analyzed. This fund is a contributory pay-as-you-go defined benefit pension system which is much affected by the change in demography. Two kinds of pension benefit, a commuted (at retirement) and a monthly (old age) pension, are considered. A decisive factor in the analysis is the increased life expectancy of members of the fund. The projection of the fund's future members and retirees is done using expected mortality rates of the working population and expected longevity. The future contributions, benefits, asset values and liabilities are analyzed. The projection shows that the fund will not be fully sustainable in the long term due to the increase in life expectancy of its members. The contributions will not cover the benefit payouts and the asset value will not fully cover the liabilities. Evaluation of some possible reforms of the fund shows that they cannot guarantee long-term sustainability. Higher returns on asset value will improve the funding ratio, but contributions are still insufficient to cover benefit payouts.
We exhibit useful properties of ballstep subgradient methods for convex optimization using level controls for estimating the optimal value. Augmented with simple averaging schemes, they asymptotically find objective and constraint subgradients involved in optimality conditions. When applied to Lagrangian relaxation of convex programs, they find both primal and dual solutions, and have practicable stopping criteria. Up until now, similar results have only been known for proximal bundle methods, and for subgradient methods with divergent series stepsizes, whose convergence can be slow. Encouraging numerical results are presented for large-scale nonlinear multicommodity network flow problems. ©2007 INFORMS.
Origin-destination (OD) matrices are essential for various analyses in the field of traffic planning, and they are often estimated from link flow observations. We compare methods for allocating link flow detectors to a traffic network with respect to the quality of the estimated OD-matrix. First, an overview of allocation methods proposed in the literature is presented. Second, we construct a controlled experimental environment where any allocation method can be evaluated, and compared to others, in terms of the quality of the estimated OD-matrix. Third, this environment is used to evaluate and compare three fundamental allocation methods. Studies are made on the Sioux Falls network and on a network modeling the city of Linköping. Our conclusion is that the most commonly studied approach for detector allocation, maximizing the coverage of OD-pairs, seems to be unfavorable for the quality of the estimated OD-matrix.
We consider the separable nonlinear and strictly convex single-commodity network flow problem (SSCNFP). We develop a computational scheme for generating a primal feasible solution from any Lagrangian dual vector; this is referred to as "early primal recovery". It is motivated by the desire to obtain a primal feasible vector before convergence of a Lagrangian scheme; such a vector is not available from a Lagrangian dual vector unless it is optimal. The scheme is constructed such that if we apply it to a sequence of Lagrangian dual vectors that converge to an optimal one, then the resulting primal (feasible) vectors converge to the unique optimal primal flow vector. It is therefore also a convergent Lagrangian heuristic, akin to those primarily devised within the field of combinatorial optimization, but with the contrasting and striking advantage that it is guaranteed to yield a primal optimal solution in the limit. Thereby we also gain access to a new stopping criterion for any Lagrangian dual algorithm for the problem, which is of interest in particular if the SSCNFP arises as a subproblem in a more complex model. We construct instances of convergent Lagrangian heuristics that are based on graph searches within the residual graph, and therefore are efficiently implementable; in particular, we consider two shortest-path-based heuristics that are based on the optimality conditions of the original problem. Numerical experiments report on the relative efficiency and accuracy of the various schemes. © 2007 Elsevier B.V. All rights reserved.
Given a non-empty, compact and convex set, and an a priori defined condition which each element either satisfies or not, we want to find an element belonging to the former category. This is a fundamental problem of mathematical programming which encompasses nonlinear programs, variational inequalities, and saddle-point problems. We present a conceptual column generation scheme, which alternates between solving a restriction of the original problem and a column generation phase which is used to augment the restricted problems. We establish the general applicability of the conceptual method, as well as its applicability to the three problem classes mentioned. We also establish a version of the conceptual method in which the restricted and column generation problems are allowed to be solved approximately, and a version allowing for the dropping of columns. We show that some solution methods (e.g., Dantzig-Wolfe decomposition and simplicial decomposition) are special instances, and present new convergent column generation methods in nonlinear programming, such as a sequential linear programming type method. Along the way, we also relate the quite general scheme in nonlinear programming presented in this paper to several other classic, and more recent, iterative methods in nonlinear optimization.
The well-known and established global optimality conditions based on the Lagrangian formulation of an optimization problem are consistent if and only if the duality gap is zero. We develop a set of global optimality conditions that are structurally similar but are consistent for any size of the duality gap. This system characterizes a primal-dual optimal solution by means of primal and dual feasibility, primal Lagrangian ε-optimality, and, in the presence of inequality constraints, a relaxed complementarity condition analogously called δ-complementarity. The total size ε + δ of those two perturbations equals the size of the duality gap at an optimal solution. Further, the characterization is equivalent to a near-saddle point condition which generalizes the classic saddle point characterization of a primal-dual optimal solution in convex programming. The system developed can be used to explain, to a large degree, when and why Lagrangian heuristics for discrete optimization are successful in reaching near-optimal solutions. Further, experiments on a set-covering problem illustrate how the new optimality conditions can be utilized as a foundation for the construction of new Lagrangian heuristics. Finally, we outline possible uses of the optimality conditions in column generation algorithms and in the construction of core problems. © 2006 INFORMS.
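In symbols, for a primal problem of minimizing f(x) subject to g(x) <= 0 and x in X, with Lagrangian L(x,u) = f(x) + u^T g(x) and dual function θ(u) = min over x in X of L(x,u), the conditions described can be sketched as follows (a generic sketch in our own notation; the paper's precise statement may differ):

```latex
g(x) \le 0, \quad x \in X, \qquad u \ge 0, \qquad
L(x,u) \le \theta(u) + \varepsilon, \qquad
-u^{\top} g(x) \le \delta .
```

Since f(x) = L(x,u) - u^T g(x), the last two conditions give f(x) <= θ(u) + ε + δ; hence at a primal-dual optimal pair the total perturbation ε + δ needed for the system to hold equals the duality gap f(x*) - θ(u*).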
Excellent guides on academic writing and presentation in science in general, and in mathematics and computer science in particular, do abound (see, for example, Refs. [1-8]), while guides on the assessment of the results of academic writing are rather more scarce. This short article presents two itemized lists that may be helping hands during the assessment of a scientific article in the field of mathematical optimization and operations research, be it your own, a work by a Master or PhD student of yours, or even a manuscript that you are refereeing for a scientific journal or conference proceedings volume. The first list, "Subbens checklist", describes necessary ingredients of a complete article. The second list provides criteria for assessing the quality and scientific value of an article. (C) 2016 Elsevier Ltd. All rights reserved.
We present a column generation procedure for the side constrained traffic equilibrium problem. A dual stabilization scheme is introduced to improve the computational performance. Computational experiments for the case of linear side constraints are presented. The test problems are well-known traffic equilibrium instances where side constraints of link flow capacity type and general linear side constraints are added. The computational results are promising, especially for instances with a relatively small number of side constraints.
We present a solution algorithm for an inverse nonlinear multicommodity network flow problem. This problem is to find link cost adjustments that make a given target link flow solution optimal in a nonlinear multicommodity network flow problem, and that are optimal with respect to a specified objective. The solution procedure uses column generation. We present computational results for instances where the nonlinear multicommodity network flow problems are small and medium scale traffic equilibrium problems, and where system optimal link flows are targeted. The computational results show that the solution procedure is a viable approach for solving medium-scale instances of the inverse traffic equilibrium problem.
When non-smooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients that verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of subgradient methods in terms of the approximate fulfilment of optimality conditions. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers and convergent lower bounds on the optimal objective value, is not directly available in subgradient schemes. As a means of overcoming these weaknesses in subgradient methods, we introduced in our previous articles the computation of an ergodic (averaged) sequence of subgradients. Specifically, we considered a non-smooth, convex program solved by a conditional subgradient optimization scheme with divergent series step lengths, and showed that the elements of the ergodic sequence of subgradients in the limit fulfil the optimality conditions at the optimal solution, to which the sequence of iterates converges. This result has three important implications. The first is the finite identification of active constraints at the solution obtained in the limit. The second is the establishment of the convergence of ergodic sequences of Lagrange multipliers; this result enables sensitivity analyses for solutions obtained by subgradient methods. The third is the convergence of a lower bounding procedure based on an ergodic sequence of affine underestimates of the objective function; this procedure also provides a proper termination criterion for subgradient optimization methods. This article first gives an overview of results and applications found in our previous articles pertaining to the generation of ergodic sequences of subgradients generated within a subgradient scheme.
It then presents an application of these results in the first instance of a simplicial decomposition algorithm for convex and non-smooth optimization problems.
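The interplay between divergent-series subgradient steps and ergodic primal averaging can be sketched on a tiny Lagrangian dual (a toy sketch with made-up data; the articles above analyse conditional subgradient steps and step-length-weighted averages, whereas this sketch uses plain projected steps and a simple running average):

```python
def dual_subgradient(c, steps=4000):
    """Subgradient scheme on the Lagrangian dual of
       min c1*x1 + c2*x2  s.t.  x1 + x2 >= 1,  0 <= x <= 1,
    with divergent-series step lengths 1/(k+1), together with a running
    (ergodic) average of the Lagrangian subproblem solutions."""
    u = 0.0                                   # dual multiplier
    avg = [0.0, 0.0]                          # ergodic primal average
    for k in range(steps):
        # Lagrangian subproblem: minimize (c_j - u) * x_j over [0, 1].
        x = [1.0 if cj - u < 0 else 0.0 for cj in c]
        avg = [(k * a + xi) / (k + 1) for a, xi in zip(avg, x)]
        g = 1.0 - sum(x)                      # subgradient of the dual at u
        u = max(0.0, u + g / (k + 1))         # projected divergent-series step
    return u, avg

u, x_bar = dual_subgradient([1.0, 1.0])
# The subproblem solutions jump between (0,0) (infeasible) and (1,1)
# (feasible but suboptimal), so no single iterate verifies optimality;
# the ergodic average x_bar instead approaches the optimal facet
# x1 + x2 = 1, while u approaches the optimal multiplier 1.
```

This illustrates the point of the results above: the raw iterates oscillate, but averaged information converges to quantities that fulfil the optimality conditions.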
The paper provides two contributions. First, we present new convergence results for conditional ε-subgradient algorithms for general convex programs. The results obtained here extend the classical ones by Polyak [Sov. Math. Doklady 8 (1967) 593, USSR Comput. Math. Math. Phys. 9 (1969) 14, Introduction to Optimization, Optimization Software, New York, 1987] as well as the recent ones in [Math. Program. 62 (1993) 261, Eur. J. Oper. Res. 88 (1996) 382, Math. Program. 81 (1998) 23] to a broader framework. Second, we establish the application of this technique to solve non-strictly convex-concave saddle point problems, such as primal-dual formulations of linear programs. Contrary to several previous solution algorithms for such problems, a saddle point is generated by a very simple scheme in which one component is constructed by means of a conditional ε-subgradient algorithm, while the other is constructed by means of a weighted average of the (inexact) subproblem solutions generated within the subgradient method. The convergence result extends those of [Minimization Methods for Non-Differentiable Functions, Springer-Verlag, Berlin, 1985, Oper. Res. Lett. 19 (1996) 105, Math. Program. 86 (1999) 283] for Lagrangian saddle-point problems in linear and convex programming, and of [Int. J. Numer. Meth. Eng. 40 (1997) 1295] for a linear-quadratic saddle-point problem arising in topology optimization in contact mechanics.
We provide new insights into the mean-variance portfolio optimization problem, based on performing eigendecomposition of the covariance matrix. The result of this decomposition can be given an interpretation in terms of uncorrelated eigenportfolios. When only some of the eigenvalues and eigenvectors are used, the resulting mean-variance problem is an approximation of the original one. A solution to the approximation yields lower and upper bounds on the original mean-variance problem; these bounds are tight if sufficiently many eigenvalues and eigenvectors are used in the approximation. Even tighter bounds are obtained through the use of a linearized error term of the unused eigenvalues and eigenvectors.
We provide theoretical results for the upper bounding quality of the approximate problem and the cardinality of the portfolio obtained, and also numerical illustrations of these results. Finally, we propose an ad hoc linear transformation of the mean-variance problem, which in practice significantly strengthens the bounds obtained from the approximate mean-variance problem.
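The decomposition behind these bounds can be sketched in generic notation (our own sketch, assuming the eigenvalues of the covariance matrix Σ = QΛQ^T are sorted decreasingly; the paper's precise bounds may differ). The portfolio variance separates over the uncorrelated eigenportfolios,

```latex
x^{\top} \Sigma x \;=\; \sum_{i=1}^{n} \lambda_i \, y_i^2 , \qquad y = Q^{\top} x ,
```

so keeping only the k largest eigenvalues yields the sandwich

```latex
\sum_{i=1}^{k} \lambda_i \, y_i^2
\;\le\; x^{\top} \Sigma x
\;\le\; \sum_{i=1}^{k} \lambda_i \, y_i^2 + \lambda_{k+1} \, \lVert x \rVert_2^2 ,
```

because the discarded variance is at most λ_{k+1} times the sum of the remaining y_i^2, which is bounded by λ_{k+1} ||x||^2 since Q is orthogonal and hence ||y|| = ||x||.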
The mean-variance problem introduced by Markowitz in 1952 remains a fundamental model in portfolio optimization to date. When cardinality and bound constraints are included, the problem becomes NP-hard, and existing methods that solve this problem to optimality require a large amount of time.
We introduce a core problem based method for obtaining upper bounds on the mean-variance portfolio optimization problem with cardinality and bound constraints. The method involves performing eigendecomposition on the covariance matrix and then using only a few of the eigenvalues and eigenvectors to obtain an approximation of the original problem. A solution of this approximate problem has a relatively low cardinality and is used to construct a core problem. When solved, the core problem provides an upper bound. We test the method on large-scale problems of up to 1000 assets. The obtained upper bounds are of high quality and the time required to obtain them is much less than that used by state-of-the-art mixed-integer software, which makes the method practically useful.
High dose-rate brachytherapy is a modality of radiation therapy used for cancer treatment, in which the radiation source is placed within the body. The treatment goal is to give a high enough dose to the tumour while sparing nearby healthy tissue and organs (organs-at-risk). The most common criteria for evaluating dose distributions are dosimetric indices. For the tumour, such an index is the portion of the volume that receives at least a specified dose level (e.g. the prescription dose), while for organs-at-risk it is instead the portion of the volume that receives at most a specified dose level. Dosimetric indices are aggregate criteria and do not consider spatial properties of the dose distribution. Further, there are no established evaluation criteria for characterizing spatial properties, nor have such properties been studied in the context of mathematical optimization of brachytherapy. Spatial properties are, however, of clinical relevance, and therefore dose plans are sometimes adjusted manually to improve them. We propose an optimization model for reducing the prevalence of contiguous volumes with a too high dose (hot spots) or a too low dose (cold spots) in a tentative dose plan. This model is independent of the process of constructing the tentative plan. We conduct computational experiments with tentative plans obtained both from optimization models and from clinical practice. The objective function considers pairs of dose points and each pair is given a distance-based penalty if the dose is either too high or too low at both dose points. Constraints are included to retain dosimetric indices at acceptable levels. Our model is designed to automate the manual adjustment step in the planning process. In the automatic adjustment step large-scale optimization models are solved. We show reductions of the volumes of the largest hot and cold spots, and the computing times are feasible in clinical practice.
Purpose High dose-rate brachytherapy is a method of radiotherapy for cancer treatment in which the radiation source is placed within the body. In addition to giving a high enough dose to the tumor, it is also important to spare nearby healthy organs (organs at risk, OAR). Dose plans are commonly evaluated using so-called dosimetric indices: for the tumor, the portion of the structure that receives a sufficiently high dose is calculated, while for OAR it is instead the portion of the structure that receives a sufficiently low dose that is of interest. Models that include dosimetric indices are referred to as dose-volume models (DVMs) and have received much interest recently. Such models do not take the dose to the coldest (least irradiated) volume of the tumor into account, which is a distinct weakness, since research indicates that the treatment effect can be largely impaired by tumor underdosage even in small volumes. Therefore, our aim is to extend a DVM to also consider the dose to the coldest volume. Methods An improved DVM for dose planning is proposed. In addition to optimizing with respect to dosimetric indices, this model also takes the mean dose to the coldest volume of the tumor into account. Results Our extended model has been evaluated against a standard DVM on ten prostate geometries. Our results show that the dose to the coldest volume could be increased, while the computing times for dose planning were also improved. Conclusion While the proposed model yields dose plans similar to those of other models in most aspects, it fulfils its purpose of increasing the dose to cold tumor volumes. An additional benefit is shorter solution times; especially for clinically relevant times (of minutes) we show major improvements in tumor dosimetric indices.
High dose-rate brachytherapy is a method for cancer treatment where the radiation source is placed within the body, inside or close to a tumour. For dose planning, mathematical optimization techniques are being used in practice, and the most common approach is to use a linear model which penalizes deviations from specified dose limits for the tumour and for nearby organs. This linear penalty model is easy to solve, but its weakness lies in the poor correlation between its objective value and the dose-volume objectives that are used clinically to evaluate dose distributions. Furthermore, the model contains parameters that have no clear clinical interpretation. Another approach for dose planning is to solve mixed-integer optimization models with explicit dose-volume constraints, which include parameters that directly correspond to dose-volume objectives and which are therefore tangible. The two mentioned models take the overall goals for dose planning into account in fundamentally different ways. We show that there is, however, a mathematical relationship between them by deriving a linear penalty model from a dose-volume model. This relationship has not been established before and improves the understanding of the linear penalty model. In particular, the parameters of the linear penalty model can be interpreted as dual variables in the dose-volume model.
High dose-rate brachytherapy is a method of radiation cancer treatment, where the radiation source is placed inside the body. The recommended way to evaluate dose plans is based on dosimetric indices, which are aggregate measures of the received dose. A poor spatial distribution of the dose may, however, result in hot spots, which are contiguous volumes in the tumour that receive a dose that is much too high. We use mathematical optimization to adjust a dose plan that is acceptable with respect to dosimetric indices so that it also takes the spatial distribution of the dose into account. This results in large-scale nonlinear mixed-binary models that are solved using nonlinear approximations. We show that there are substantial degrees of freedom in the dose planning even though the levels of dosimetric indices are maintained, and that it is possible to improve a dose plan with respect to its spatial properties.
The Atlas Copco distribution center in Allen, TX, supplies spare parts and consumables to mining and construction companies across the world. For some customers, packages are shipped in sea containers. Planning how to load the containers is difficult due to several factors: heterogeneity of the packages with respect to size, weight, stackability, positioning and orientation; the set of packages differs vastly between shipments; and it is crucial to avoid cargo damage. Load plan quality is ultimately judged by the shipping operators. This container loading problem is thus rich with respect to practical considerations. These are posed by the operators and include cargo and container stability as well as stacking and positioning constraints. To avoid cargo damage, the stacking restrictions are modeled in detail. For solving the problem, we developed a two-level metaheuristic approach and implemented it in a decision support system. The upper level is a genetic algorithm which tunes the objective function of a lower-level greedy-type constructive placement heuristic, in order to optimize the quality of the load plan obtained. The decision support system shows load plans on the forklift laptops and has been used for over two years. Management has recognized benefits including reductions in labour usage, lead time, and cargo damage risk.
Consider the use of a Lagrangian dual method that is convergent for consistent convex optimization problems. When it is applied to an infeasible optimization problem, the inconsistency manifests itself through the divergence of the sequence of dual iterates. Will the sequence of primal subproblem solutions then still yield relevant information about the primal program? We answer this question in the affirmative for a convex program and an associated subgradient algorithm for its Lagrangian dual. We show that the primal-dual pair of programs corresponding to an associated homogeneous dual function is in turn associated with a saddle-point problem, in which, in the inconsistent case, the primal part amounts to finding a solution in the primal space such that the Euclidean norm of the infeasibility in the relaxed constraints is minimized, while the dual part amounts to identifying a feasible steepest ascent direction for the Lagrangian dual function. We present convergence results for a conditional epsilon-subgradient optimization algorithm applied to the Lagrangian dual problem, together with the construction of an ergodic sequence of primal subproblem solutions; this composite algorithm yields convergence of the primal-dual sequence to the set of saddle points of the associated homogeneous Lagrangian function. For linear programs, convergence to the subset in which the primal objective is at its minimum is also achieved.
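For the consistent case, the basic machinery (subgradient ascent on the dual combined with an ergodic average of the subproblem solutions) can be sketched on a toy instance; the LP below is invented, and the sketch does not reproduce the paper's inconsistent-case analysis:

```python
# Toy LP: minimize x1 + 2*x2  s.t.  x1 + x2 = 1,  0 <= x1, x2 <= 1.
# Relaxing the equality with multiplier u gives the dual function
#   q(u) = min_{0<=x<=1} (1+u)*x1 + (2+u)*x2 - u.

def lagrangian_argmin(u):
    """Box-constrained Lagrangian subproblem: set xi = 1 iff its coefficient <= 0."""
    return (1.0 if 1.0 + u <= 0.0 else 0.0,
            1.0 if 2.0 + u <= 0.0 else 0.0)

u = 0.0
T = 50
avg = [0.0, 0.0]                      # ergodic (averaged) primal sequence
for t in range(T):
    x = lagrangian_argmin(u)
    g = x[0] + x[1] - 1.0             # subgradient of q at u: relaxed-constraint residual
    u += g / (t + 1)                  # divergent-series step lengths
    avg = [a + xi / T for a, xi in zip(avg, x)]

dual_value = min(0.0, 1.0 + u) + min(0.0, 2.0 + u) - u   # q(u)
```

Here the dual iterates converge (u approaches -1 with q(u) = 1) and the averaged primal solutions approach the primal optimum x = (1, 0); in the inconsistent case studied in the paper, the dual iterates instead diverge while the ergodic primal sequence still carries meaningful information.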
We consider a military mission planning problem where a given fleet of aircraft should attack a number of ground targets. At each attack, two aircraft need to be synchronized in both space and time. Further, there are multiple attack options against each target, with different target effects. The objective is to maximize the outcome of the entire attack, while also minimizing the mission timespan. Real-life mission planning instances involve only a few targets and a few aircraft, but are still computationally challenging. We present metaheuristic solution methods for this problem, based on a previously presented model. The problem includes three types of decisions: attack directions, task assignments and scheduling, and the solution methods exploit this structure in a two-stage approach. In an outer stage, a heuristic search is performed with respect to attack directions, while in an inner stage the other two decisions are optimized, given the outer-stage decisions. The proposed metaheuristics are capable of producing high-quality solutions and are fast enough to be incorporated in a decision support tool.
We consider tactical planning of a military operation on a large target scene where a number of specific targets of interest are positioned, using a given number of resources, which can be, for example, fighter aircraft, unmanned aerial vehicles, or missiles. The targets could be radar stations or other surveillance equipment, with or without defensive capabilities, which the attacker wishes to destroy. Further, some of the targets are defended, by, for example, Surface-to-Air Missile units, and this defense capability can be used to protect other targets as well. The attacker has knowledge about the positions of all the targets and also a reward associated with each target. We consider the problem of the attacker, who has the objective to maximize the expected outcome of a joint attack against the enemy. The decisions that can be taken by the attacker concern the allocation of the resources to the targets and what tactics to use against each target. We present a mathematical model for the attacker’s problem. The model is similar to a generalized assignment problem, but with a complex objective function that makes it intractable for large problem instances. We present approximate models that can be used to provide upper and lower bounds on the optimal value, and we also provide heuristic solution approaches that successfully deliver near-optimal solutions for a number of scenarios.
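For very small instances, the attacker's allocation problem can be solved by brute force, which also illustrates why the full model becomes intractable at scale; the rewards and success probabilities below are invented:

```python
from itertools import product

# Invented toy data: 3 resources, 2 targets. A target's contribution is its
# reward times the probability that at least one assigned resource succeeds.
rewards = [10.0, 6.0]
p_hit = [[0.5, 0.4],   # success probability of resource r against target t
         [0.6, 0.3],
         [0.2, 0.7]]

def expected_outcome(assign):
    total = 0.0
    for t, reward in enumerate(rewards):
        p_miss = 1.0
        for r, tgt in enumerate(assign):
            if tgt == t:
                p_miss *= 1.0 - p_hit[r][t]
        total += reward * (1.0 - p_miss)
    return total

# Enumerate all target assignments; with n resources and m targets there are
# m**n of them, which is exactly why bounds and heuristics are needed at scale.
best_assign = max(product(range(len(rewards)), repeat=3), key=expected_outcome)
best_value = expected_outcome(best_assign)
```

The choice of tactics against each target would add a further combinatorial dimension on top of this assignment structure.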
We introduce a military aircraft mission planning problem where a given fleet of aircraft should attack a number of ground targets. Due to the nature of the attack, two aircraft need to rendezvous at the target, that is, they need to be synchronized in both space and time. At the attack, one aircraft launches a guided weapon, while the other illuminates the target. Each target is associated with multiple attack and illumination options. Further, there may be precedence constraints between targets, limiting the order of the attacks. The objective is to maximize the outcome of the entire attack, while also minimizing the mission timespan. We give a linear mixed-integer programming model of the problem, which can be characterized as a generalized vehicle routing problem with synchronization and precedence side constraints. Numerical results are presented for problem instances of realistic size.
We introduce a time-indexed mixed-integer linear programming model for a military aircraft mission planning problem, where a fleet of cooperating aircraft should attack a number of ground targets so that the total expected effect is maximized. The model is a rich vehicle routing problem, and the direct application of a general solver is practical only for scenarios of very moderate sizes. We propose a Dantzig-Wolfe reformulation and column generation approach. A column here represents a specific sequence of tasks at certain times for an aircraft, and to generate columns a longest path problem with side constraints is solved. We compare the column generation approach with the time-indexed model with respect to the upper-bounding quality of their linear programming relaxations and conclude that the former provides a much stronger formulation of the problem.
This paper deals with a Military Aircraft Mission Planning Problem, where the problem is to find time efficient flight paths for a given aircraft fleet that should attack a number of ground targets. Due to the nature of the attack, two aircraft need to rendezvous at the target, that is, they need to be synchronized in both space and time. At the attack, one aircraft is launching a guided weapon, while the other is illuminating the target. Each target is associated with multiple attack and illumination options. Further, there may be precedence constraints between targets, limiting the order of the attacks. The objective is to maximize the outcome of the entire attack, while also minimizing the mission time span. We present two mathematical models for this problem and compare their efficiency on some small test cases. We also provide some heuristic approaches since direct application of a general MIP solver to the mathematical model is only practical for smaller scenarios. The heuristics are compared and they successfully provide solutions to a number of scenarios.
The problem setting concerns the tactical planning of a military operation. Imagine a large open area where a number of interesting targets are positioned. These could be radar stations or other surveillance equipment, with or without defensive capabilities, which the attacker wishes to destroy. Moreover, the targets are possibly guarded by defending units, such as Surface-to-Air Missile (SAM) units. The positions of all units, targets and defenders, are known. We consider the problem of the attacker, where the objective is to maximize the expected outcome of a joint attack against the enemy, subject to a limited amount of resources (e.g. aircraft or tanks). We present a mathematical model for this problem, together with alternative model versions which provide optimistic and pessimistic approximations. The model is not efficient for large problem instances; hence we also provide heuristic solution approaches, which successfully provide solutions to a number of scenarios.
Column generation is a linear programming method in which a dual solution of the master problem is essential when deriving new columns by solving a subproblem. When combined with appropriate integer programming techniques, column generation has successfully been used for solving huge integer programs. In many applications where column generation is used, the master problem is of a set partitioning type.
The set partitioning polytope has the quasi-integrality property, which enables the use of simplex pivot based methods for finding improved integer solutions, where each integer solution is associated with a linear programming basis and a corresponding dual solution. By combining these kinds of simplex pivots with column generation, one obtains a method where each successively found solution to a restricted master problem is feasible, integer, and associated with a dual solution to be used in the column generation step. The column generation subproblem can either be of a regular type, or it can be tailored to produce columns that maintain integrality when pivoted into the basis.
In this paper, a framework for this kind of column generation, which we here name all-integer column generation for set partitioning problems, is presented. The strategies proposed are primarily of a meta-heuristic nature, but with the proper settings, optimal or near-optimal solutions can be obtained.
The set partitioning polytope has the quasi-integrality property that enables the use of simplex pivot based methods for finding an improved integer solution, which thereby is associated with a linear programming basis and a corresponding dual solution. Presented in this paper is a framework for an all-integer column generation methodology for set partitioning problems that utilises the quasi-integrality property of the feasible polytope. In the presented methodology, each successively found solution to a restricted master problem is feasible, integer and associated with a corresponding dual solution, which is then used in the column generation step. The column generation problem is tailored to produce columns that maintain integrality when pivoted into the basis. Furthermore, criteria for verifying optimality are presented.
Column generation is a linear programming method that, when combined with appropriate integer programming techniques, has been successfully used for solving huge integer programs. The method alternates between a restricted master problem and a column generation subproblem. The latter step is founded on dual information from the former one; often an optimal dual solution to the linear programming relaxation of the restricted master problem is used.
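The pricing step that turns dual information into new columns can be made concrete with the classical cutting-stock example; the data below are illustrative, and the subproblem is a bounded knapsack solved by dynamic programming:

```python
# Pricing for a cutting-stock master problem: given dual prices on the item
# demands, find the cutting pattern a with the most negative reduced cost
#   1 - sum_i dual_i * a_i,
# i.e. maximize the dual value packed into one roll (a knapsack problem).
roll_length = 10
sizes = [3, 4, 5]          # item lengths (illustrative)
duals = [0.3, 0.45, 0.5]   # dual prices from the restricted master problem

# best[c] = max dual value achievable within capacity c; pattern[c] records it.
best = [0.0] * (roll_length + 1)
pattern = [[0, 0, 0] for _ in range(roll_length + 1)]
for c in range(1, roll_length + 1):
    for i, s in enumerate(sizes):
        if s <= c and best[c - s] + duals[i] > best[c]:
            best[c] = best[c - s] + duals[i]
            pattern[c] = pattern[c - s].copy()
            pattern[c][i] += 1

reduced_cost = 1.0 - best[roll_length]
new_column = pattern[roll_length]   # enters the master problem if reduced_cost < 0
```

With these numbers, the pattern (2, 1, 0), i.e. two items of length 3 and one of length 4, has a negative reduced cost and would be added to the restricted master problem.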
We consider a zero–one linear programming problem that is approached by column generation and present a generic sufficient optimality condition for the restricted master problem to contain the columns required to find an integer optimal solution to the complete problem. The condition is based on dual information, but not necessarily on an optimal dual solution. It is however most natural to apply the condition in a situation when an optimal or near-optimal dual solution is at hand.
We relate our result to a few special cases from the literature, and make some suggestions regarding possible exploitation of the optimality condition in the construction of column generation methods for integer programs.
Hospital wards need to be staffed by nurses round the clock, resulting in irregular working hours for many nurses. Over the years, the nurses' influence on the scheduling has been increased in order to improve their working conditions. In Sweden it is common to apply a kind of self-scheduling where each nurse individually proposes a schedule, and then the final schedule is determined through informal negotiations between the nurses. This kind of self-scheduling is very time-consuming and often leads to conflicts. We present a pilot study which aims at determining whether it is possible to create an optimisation tool that automatically delivers a usable schedule based on the schedules proposed by the nurses. The study is performed at a typical Swedish nursing ward, for which we have developed a mathematical model and delivered schedules. The results of this study are very promising and suggest continued work along these lines.
Hospital wards need to be staffed by nurses round the clock, resulting in irregular working hours for many nurses. Over the years, the nurses' influence on the scheduling has been increased in order to improve their working conditions. In Sweden it is common to apply a kind of self-scheduling where each nurse individually proposes a schedule, and then the final schedule is determined through informal negotiations between the nurses. This kind of self-scheduling is very time-consuming and often leads to conflicts. We present a pilot study which aims at determining whether it is possible to create an optimisation tool that automatically delivers a usable schedule based on the schedules proposed by the nurses. The study is performed at a typical Swedish nursing ward, for which we have developed a mathematical model and delivered schedules. The results of this study are very promising.
The integral simplex method for set partitioning problems allows only pivots-on-one to be made, which results in a primal all-integer method. In this technical note we outline how to tailor the column generation principle to this method. Because of the restriction to pivots-on-one, only local optimality can be guaranteed, and to ensure global optimality we consider the use of implicit enumeration.
Column generation is a linear programming method that, when combined with appropriate integer programming techniques, has been successfully used for solving huge integer programs. The use of a dual solution to the restricted master problem is essential when new columns are derived by solving a subproblem. Even if the problem to be solved is an integer programming one, this dual solution is usually optimal with respect to the linear programming relaxation of either the original problem or of a restriction thereof formed further down a branch-and-price tree.
This paper addresses the situation that arises when columns of a binary problem are generated using any dual solution, and we derive optimality conditions for determining when the master problem has been augmented with enough columns to contain an integer optimal solution to the complete master problem.
We discuss the concept of over-generation of columns, which means to augment the restricted master problem with a set of columns, to ensure progress of the algorithm and also to make sure that the columns of the restricted master problem eventually comply with the optimality conditions. To illustrate the over-generation strategy, we compare our results with special cases that are already known from the literature, and we make some new suggestions.
The set partitioning problem is a generic optimisation model with many applications, especially within scheduling and routing. It is common in the context of column generation, and its importance has grown due to the strong developments in this field. The set partitioning problem has the quasi-integrality property, which means that every edge of the convex hull of the integer feasible solutions is also an edge of the polytope of the linear programming relaxation. This property enables, in principle, the use of solution methods that find improved integer solutions through simplex pivots that preserve integrality; pivoting rules with this effect can be designed in a few different ways. Although seemingly promising, the application of these approaches involves inherent challenges. Firstly, they can get trapped at local optima with respect to the pivoting options available, so that global optimality can be guaranteed only by resorting to an enumeration principle. Secondly, set partitioning problems are typically massively degenerate, and a big hurdle to overcome is therefore to establish anti-cycling rules for the pivoting options available. The purpose of this chapter is to lay a foundation for research on these topics.