# Publications at the Division of Optimization (Optimeringslära)

## Journal papers

Consider a Lagrangian dual method which is convergent for consistent convex optimization problems. When it is used to solve an infeasible optimization problem, the inconsistency manifests itself through the divergence of the sequence of dual iterates. Will the sequence of primal subproblem solutions then still yield relevant information regarding the primal program? We answer this question in the affirmative for a convex program and an associated subgradient algorithm for its Lagrange dual. We show that the primal-dual pair of programs corresponding to an associated homogeneous dual function is in turn associated with a saddle-point problem, in which, in the inconsistent case, the primal part amounts to finding a solution in the primal space such that the Euclidean norm of the infeasibility in the relaxed constraints is minimized, while the dual part amounts to identifying a feasible steepest ascent direction for the Lagrangian dual function. We present convergence results for a conditional epsilon-subgradient optimization algorithm applied to the Lagrangian dual problem, together with the construction of an ergodic sequence of primal subproblem solutions; this composite algorithm yields convergence of the primal-dual sequence to the set of saddle points of the associated homogeneous Lagrangian function. For linear programs, convergence to the subset in which the primal objective is at minimum is also achieved.

```
@article{diva2:1093282,
author = {Onnheim, Magnus and Gustavsson, Emil and Stromberg, Ann-Brith and Patriksson, Michael and Larsson, Torbjörn},
title = {{Ergodic, primal convergence in dual subgradient schemes for convex programming, II: the case of inconsistent primal problems}},
journal = {Mathematical programming},
year = {2017},
volume = {163},
number = {1-2},
pages = {57--84},
}
```

Monotonic (isotonic) regression is a powerful tool for solving a wide range of important applied problems. One of its features, which limits its use in some areas, is that it produces a piecewise constant fitted response. To smooth the fitted response, we introduce a regularization term in the monotonic regression, formulated as a least-distance problem with monotonicity constraints. The resulting smoothed monotonic regression is a convex quadratic optimization problem. We focus on the case where the set of observations is completely (linearly) ordered. Our smoothed pool-adjacent-violators algorithm is designed for solving the regularized problem. It belongs to the class of dual active-set algorithms. We prove that it converges to the optimal solution in a finite number of iterations that does not exceed the problem size. One of its advantages is that the active set is enlarged progressively by including one or, typically, more constraints per iteration. This allowed large-scale test problems to be solved in a few iterations, whereas their size was prohibitively large for conventional quadratic optimization solvers. Although the complexity of our algorithm grows quadratically with the problem size, we found its running time to grow almost linearly in our computational experiments.

```
@article{diva2:1068281,
author = {Burdakov, Oleg and Sysoev, Oleg},
title = {{A Dual Active-Set Algorithm for Regularized Monotonic Regression}},
journal = {Journal of Optimization Theory and Applications},
year = {2017},
volume = {172},
number = {3},
pages = {929--949},
}
```
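The pool-adjacent-violators (PAV) idea that the smoothed algorithm above generalizes can be illustrated for the classical, unregularized case. A minimal pure-Python sketch (the function name and this simple block-pooling variant are ours, not the authors' regularized algorithm):

```python
def pav(y):
    """Classical pool-adjacent-violators: least-squares fit of a
    non-decreasing sequence to y. Returns the fitted values."""
    # Each block stores [sum, count]; adjacent blocks whose means
    # violate monotonicity are pooled (averaged) together.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Pool while the last block's mean is below the previous one's
        # (comparison done via cross-multiplication to avoid division).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    fit = []
    for total, count in blocks:
        fit.extend([total / count] * count)
    return fit

print(pav([1.0, 3.0, 2.0, 4.0]))  # -> [1.0, 2.5, 2.5, 4.0]
```

The paper's smoothed variant additionally carries a regularization term and dual active-set machinery; the sketch shows only the classical pooling step that produces the piecewise constant fit.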

Limited-memory quasi-Newton methods and trust-region methods represent two efficient approaches used for solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems. For this reason, the limited-memory methods are usually combined with a line search. We show how to efficiently combine limited-memory and trust-region techniques. One of our approaches is based on the eigenvalue decomposition of the limited-memory quasi-Newton approximation of the Hessian matrix. The decomposition allows for finding a nearly-exact solution to the trust-region subproblem defined by the Euclidean norm with an insignificant computational overhead as compared with the cost of computing the quasi-Newton direction in line-search limited-memory methods. The other approach is based on two new eigenvalue-based norms. The advantage of the new norms is that the trust-region subproblem is separable and each of the smaller subproblems is easy to solve. We show that our eigenvalue-based limited-memory trust-region methods are globally convergent. Moreover, we propose improved versions of the existing limited-memory trust-region algorithms. The presented results of numerical experiments demonstrate the efficiency of our approach which is competitive with line-search versions of the L-BFGS method.

```
@article{diva2:943690,
author = {Burdakov, Oleg and Gong, Lujin and Zikrin, Spartak and Yuan, Ya-xiang},
title = {{On Efficiently Combining Limited-Memory and Trust-Region Techniques}},
journal = {Mathematical Programming Computation},
year = {2017},
volume = {9},
number = {1},
pages = {101--134},
}
```
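The core of the eigenvalue-based approach described above, solving the Euclidean-norm trust-region subproblem once an eigendecomposition B = QΛQᵀ of the quasi-Newton matrix is available, can be sketched as follows. In the eigenbasis the subproblem reduces to a one-dimensional "secular" equation in a shift σ; the sketch below (notation and the bisection safeguard are ours, and the hard case where the gradient is orthogonal to the smallest eigenspace is ignored) solves it by bisection:

```python
import math

def trs_secular(lam, ghat, delta):
    """Solve min g^T s + 0.5 s^T B s subject to ||s|| <= delta, given the
    eigenvalues lam of B and ghat = Q^T g expressed in the eigenbasis.
    Returns s in the eigenbasis. Sketch only: the hard case is ignored."""
    def norm_s(sigma):
        return math.sqrt(sum((gi / (li + sigma)) ** 2 for li, gi in zip(lam, ghat)))
    # Interior (Newton) step if B is positive definite and the step fits.
    if min(lam) > 0 and norm_s(0.0) <= delta:
        return [-gi / li for li, gi in zip(lam, ghat)]
    # Otherwise find sigma > max(0, -lam_min) with ||s(sigma)|| = delta.
    lo = max(0.0, -min(lam)) + 1e-12
    hi = lo + 1.0
    while norm_s(hi) > delta:     # expand until the norm drops below delta
        hi *= 2.0
    for _ in range(200):          # bisect on the secular equation
        mid = 0.5 * (lo + hi)
        if norm_s(mid) > delta:
            lo = mid
        else:
            hi = mid
    sigma = 0.5 * (lo + hi)
    return [-gi / (li + sigma) for li, gi in zip(lam, ghat)]
```

Since the limited-memory approximation has low rank, the nontrivial part of the eigendecomposition is small, which is what keeps the per-iteration overhead comparable to a line-search L-BFGS step.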

Guarding the perimeter of an area in order to detect potential intruders is an important task in a variety of security-related applications. This task can in many circumstances be performed by a set of camera-equipped unmanned aerial vehicles (UAVs). Such UAVs will occasionally require refueling or recharging, in which case they must temporarily be replaced by other UAVs in order to maintain complete surveillance of the perimeter. In this paper we consider the problem of scheduling such replacements. We present optimal replacement strategies and justify their optimality.

```
@article{diva2:914860,
author = {Burdakov, Oleg and Kvarnström, Jonas and Doherty, Patrick},
title = {{Optimal scheduling for replacing perimeter guarding unmanned aerial vehicles}},
journal = {Annals of Operations Research},
year = {2017},
volume = {249},
number = {1},
pages = {163--174},
}
```

Excellent guides on academic writing and presentation in science in general, and in mathematics and computer science in particular, do abound (see, for example, Refs. [1-8]), while guides on the assessment of the results of academic writing are rather more scarce. This short article presents two itemized lists that may serve as helping hands during the assessment of a scientific article in the field of mathematical optimization and operations research, be it your own, a work by a Master's or PhD student of yours, or even a manuscript that you are refereeing for a scientific journal or conference proceedings volume. The first list, "Subbens checklist", describes necessary ingredients of a complete article. The second list provides criteria for assessing the quality and scientific value of an article.

```
@article{diva2:934985,
author = {Larsson, Torbjörn and Patriksson, Michael},
title = {{Subbens checklist}},
journal = {Computers \& Operations Research},
year = {2016},
volume = {71},
pages = {163--164},
}
```

We study resource allocation in cellular systems and consider the problem of finding a power-efficient scheduling in an uplink single carrier frequency division multiple access system. Due to the discrete nature of this problem and its computational difficulty, particularly in a real-time setting, the use of suboptimal algorithms is common practice. We aim at an effective way of gauging the performance of suboptimal algorithms by finding tight bounds on the global optimum. Toward this end, we first provide a basic integer linear programming formulation. Then we propose a significantly stronger column-oriented formulation and a corresponding column generation method, as well as an enhanced column generation scheme. The latter extends the first scheme through the inclusion of a stabilization technique, an approximate column generation principle, and a tailored heuristic that is embedded in the column generation scheme to find high-quality though not necessarily globally optimal solutions. The computational evaluation demonstrates that, compared with a poor performance by the integer linear programming formulation, the column generation method can produce near-optimal schedules that enable a sharp bounding interval. The enhanced column generation method significantly sharpens the bounding interval. Hence the column generation approach serves well for the purpose of benchmarking results for large-scale instances.

```
@article{diva2:922125,
author = {Zhao, Yixin and Larsson, Torbjörn and Yuan, Di and Rönnberg, Elina and Lei, Lei},
title = {{Power efficient uplink scheduling in SC-FDMA: benchmarking by column generation}},
journal = {Optimization and Engineering},
year = {2016},
volume = {17},
number = {4},
pages = {695--725},
}
```

The benefits, in terms of social surplus, of introducing congestion pricing schemes in urban networks depend on the design of the pricing scheme. The major part of the literature on optimal design of congestion pricing schemes is based on static traffic assignment, which is known for its deficiency in correctly predicting travel times in networks with severe congestion. Dynamic traffic assignment can better predict travel times in a road network, but is more computationally expensive. Thus, previously developed methods for the static case cannot be applied straightforwardly. Surrogate-based optimization is commonly used for optimization problems with expensive-to-evaluate objective functions. In this paper we evaluate the performance of a surrogate-based optimization method when the number of pricing schemes which we can afford to evaluate (due to the computational time) is limited to between 20 and 40. A static traffic assignment model of Stockholm is used for evaluating a large number of different configurations of the surrogate-based optimization method. Final evaluation is done with the dynamic traffic assignment tool VisumDUE, coupled with the demand model Regent, for a Stockholm network including 1 240 demand zones and 17 000 links. Our results show that the surrogate-based optimization method can indeed be used for designing a congestion pricing scheme which returns a high social surplus.

```
@article{diva2:917464,
author = {Ekström, Joakim and Kristoffersson, Ida and Quttineh, Nils-Hassan},
title = {{Surrogate-based optimization of cordon toll levels in congested traffic networks}},
journal = {Journal of Advanced Transportation},
year = {2016},
volume = {50},
number = {6},
pages = {1008--1033},
}
```

Optimization problems with cardinality constraints are very difficult mathematical programs which are typically solved by global techniques from discrete optimization. Here we introduce a mixed-integer formulation whose standard relaxation still has the same solutions (in the sense of global minima) as the underlying cardinality-constrained problem; the relation between the local minima is also discussed in detail. Since our reformulation is a minimization problem in continuous variables, it allows ideas from that field to be applied to cardinality-constrained problems. In particular, we derive suitable stationarity conditions and suggest an appropriate regularization method for the solution of optimization problems with cardinality constraints. This regularization method is shown to be globally convergent to a Mordukhovich-stationary point. Extensive numerical results are given to illustrate the behavior of this method.

```
@article{diva2:901202,
author = {Burdakov, Oleg and Kanzow, Christian and Schwartz, Alexandra},
title = {{Mathematical Programs with Cardinality Constraints: Reformulation by Complementarity-Type Conditions and a Regularization Method}},
journal = {SIAM Journal on Optimization},
year = {2016},
volume = {26},
number = {1},
pages = {397--425},
}
```
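The complementarity-type reformulation named in the title can be sketched as follows (our rendering, so the notation may differ from the paper's): the cardinality constraint $\|x\|_0 \le \kappa$ is traded for continuous auxiliary variables $y$ coupled to $x$ by complementarity conditions,

```
\begin{aligned}
\min_{x,\,y}\ & f(x) \\
\text{s.t. }\ & x \in X, \\
& e^\top y \ \ge\ n - \kappa, \\
& x_i\, y_i \ =\ 0, \qquad i = 1,\dots,n, \\
& 0 \ \le\ y_i \ \le\ 1, \qquad i = 1,\dots,n.
\end{aligned}
```

Here $y_i = 1$ marks a coordinate of $x$ forced to zero, and at least $n - \kappa$ coordinates must be so marked, which enforces the cardinality bound at global minima while keeping all variables continuous.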

It is sometimes the case that a theory proposes that the population means on two variables should have the same rank order across a set of experimental conditions. This paper presents a test of this hypothesis. The test statistic is based on the coupled monotonic regression algorithm developed by the authors. The significance of the test statistic is determined by comparison to an empirical distribution specific to each case, obtained via non-parametric or semi-parametric bootstrap. We present an analysis of the power and Type I error control of the test based on numerical simulation. Partial order constraints placed on the variables may sometimes be theoretically justified. These constraints are easily incorporated into the computation of the test statistic and are shown to have substantial effects on power. The test can be applied to any form of data, as long as an appropriate statistical model can be specified.

```
@article{diva2:873007,
author = {Kalish, Michael L. and Dunn, John C. and Burdakov, Oleg P. and Sysoev, Oleg},
title = {{A statistical test of the equality of latent orders}},
journal = {Journal of mathematical psychology (Print)},
year = {2016},
volume = {70},
pages = {1--11},
}
```

Recently, the methods used to estimate monotonic regression (MR) models have been substantially improved, and some algorithms can now produce high-accuracy monotonic fits to multivariate datasets containing over a million observations. Nevertheless, the computational burden can be prohibitively large for resampling techniques in which numerous datasets are processed independently of each other. Here, we present efficient algorithms for estimation of confidence limits in large-scale settings that take into account the similarity of the bootstrap or jackknifed datasets to which MR models are fitted. In addition, we introduce modifications that substantially improve the accuracy of MR solutions for binary response variables. The performance of our algorithms is illustrated using data on death in coronary heart disease for a large population. This example also illustrates that MR can be a valuable complement to logistic regression.

```
@article{diva2:565741,
author = {Sysoev, Oleg and Grimvall, Anders and Burdakov, Oleg},
title = {{Bootstrap confidence intervals for large-scale multivariate monotonic regression problems}},
journal = {Communications in statistics. Simulation and computation},
year = {2016},
volume = {45},
number = {3},
pages = {1025--1040},
}
```

High dose-rate (HDR) brachytherapy is a form of radiotherapy used to treat, among other cancers, prostate cancer. When applied to prostate cancer, a radioactive source is moved through catheters implanted into the prostate. For each patient a treatment plan is constructed that decides, for example, catheter placement and dwell time distribution, that is, where to stop the radioactive source and for how long.

Mathematical optimization methods have been used to find quality plans with respect to dwell time distribution; however, few optimization approaches regarding catheter placement have been studied. In this article we present an integrated optimization model that optimizes catheter placement and dwell time distribution simultaneously. Our results show that integrating the two decisions yields greatly improved plans, with improvements ranging from 15% to 94%.

Since the presented model is computationally demanding to solve, we also present three heuristics: tabu search, variable neighbourhood search and a genetic algorithm. Of these, variable neighbourhood search is clearly the best, outperforming a state-of-the-art optimization software (CPLEX) and the two other heuristics.

```
@article{diva2:412838,
author = {Holm, Åsa and Carlsson Tedgren, Åsa and Larsson, Torbjörn},
title = {{Heuristics for Integrated Optimization of Catheter Positioning and Dwell Time Distribution in Prostate HDR Brachytherapy}},
journal = {Annals of Operations Research},
year = {2016},
volume = {236},
number = {2},
pages = {319--339},
}
```

The problem of estimating mean and covariances of a multivariate normally distributed random vector has been studied in many forms. This paper focuses on the estimators proposed by Ohlson et al. (2011) for a banded covariance structure with *m*-dependence. We rewrite the estimator when *m* = 1, which makes it easier to analyze. This leads to an adjustment, and an unbiased estimator can be proposed. A new and easier proof of consistency is then presented.

This theory is also generalized to a general linear model where the corresponding theorems and propositions are stated to establish unbiasedness and consistency.

```
@article{diva2:882124,
author = {Karlsson, Emil and Singull, Martin},
title = {{More on explicit estimators for a banded covariance matrix}},
journal = {Acta et Commentationes Universitatis Tartuensis de Mathematica},
year = {2015},
volume = {19},
number = {1},
pages = {49--62},
}
```

Given a non-empty, compact and convex set, and an a priori defined condition which each element either satisfies or not, we want to find an element belonging to the former category. This is a fundamental problem of mathematical programming which encompasses nonlinear programs, variational inequalities, and saddle-point problems. We present a conceptual column generation scheme, which alternates between solving a restriction of the original problem and a column generation phase which is used to augment the restricted problems. We establish the general applicability of the conceptual method, as well as its applicability to the three problem classes mentioned. We also establish a version of the conceptual method in which the restricted and column generation problems are allowed to be solved approximately, and a version allowing for the dropping of columns. We show that some solution methods (e.g., Dantzig-Wolfe decomposition and simplicial decomposition) are special instances, and present new convergent column generation methods in nonlinear programming, such as a sequential linear programming type method. Along the way, we also relate the quite general scheme presented in this paper to several other classic, and more recent, iterative methods in nonlinear optimization.

```
@article{diva2:841508,
author = {Larsson, Torbjörn and Migdalas, Athanasios and Patriksson, Michael},
title = {{A generic column generation principle: derivation and convergence analysis}},
journal = {Operational Research},
year = {2015},
volume = {15},
number = {2},
pages = {163--198},
}
```

Filter networks are a powerful tool for reducing the image processing time while maintaining high image quality. They are composed of sparse sub-filters whose high sparsity ensures fast image processing. The filter network design is related to solving a sparse optimization problem where a cardinality constraint bounds the sparsity level from above. In the case of sequentially connected sub-filters, which is the simplest network structure of those considered in this paper, a cardinality-constrained multilinear least-squares (MLLS) problem is to be solved. Even when disregarding the cardinality constraint, the MLLS is typically a large-scale problem characterized by a large number of local minimizers, each of which is singular and non-isolated. The cardinality constraint makes the problem even more difficult to solve.

An approach for approximately solving the cardinality-constrained MLLS problem is presented. It is then applied to solving a bi-criteria optimization problem in which both the time and quality of image processing are optimized. The developed approach is extended to designing filter networks of a more general structure. Its efficiency is demonstrated by designing certain 2D and 3D filter networks. It is also compared with the existing approaches.

```
@article{diva2:796663,
author = {Andersson, Mats and Burdakov, Oleg and Knutsson, Hans and Zikrin, Spartak},
title = {{Sparsity Optimization in Design of Multidimensional Filter Networks}},
journal = {Optimization and Engineering},
year = {2015},
volume = {16},
number = {2},
pages = {259--277},
}
```

We consider a military mission planning problem where a given fleet of aircraft should attack a number of ground targets. At each attack, two aircraft need to be synchronized in both space and time. Further, there are multiple attack options against each target, with different target effects. The objective is to maximize the outcome of the entire attack, while also minimizing the mission timespan. Real-life mission planning instances involve only a few targets and a few aircraft, but are still computationally challenging. We present metaheuristic solution methods for this problem, based on an earlier presented model. The problem includes three types of decisions: attack directions, task assignments and scheduling, and the solution methods exploit this structure in a two-stage approach. In an outer stage, a heuristic search is performed with respect to attack directions, while in an inner stage the other two decisions are optimized, given the outer stage decisions. The proposed metaheuristics are capable of producing high-quality solutions and are fast enough to be incorporated in a decision support tool.

```
@article{diva2:790508,
author = {Quttineh, Nils-Hassan and Larsson, Torbjörn},
title = {{Military Aircraft Mission Planning: Efficient model-based metaheuristics approaches}},
journal = {Optimization Letters},
year = {2015},
volume = {9},
number = {8},
pages = {1625--1639},
}
```


```
@article{diva2:751569,
author = {Burdakov, Oleg and Griewank, Andreas and Lotov, Alexander},
title = {{Yury G. Evtushenko - A tribute}},
journal = {Optimization Methods and Software},
year = {2014},
volume = {29},
number = {5},
pages = {899--899},
}
```

We consider the problem of finding an optimal mining sequence for an open pit during a number of time periods subject to only spatial and temporal precedence constraints. This problem is of interest because such constraints are generic to any open-pit scheduling problem and, in particular, because it arises as a Lagrangean relaxation of an open-pit scheduling problem. We show that this multi-period open-pit mining problem can be solved as a maximum flow problem in a time-expanded mine graph. Further, the minimum cut in this graph will define an optimal sequence of pits. This result extends a well-known result of J.-C. Picard from 1976 for the open-pit mine design problem, that is, the single-period case, to the case of multiple time periods.

```
@article{diva2:714036,
author = {Amankwah, Henry and Larsson, Torbjörn and Textorius, Björn},
title = {{A maximum flow formulation of a multi-period open-pit mining problem}},
journal = {Operational Research},
year = {2014},
volume = {14},
number = {1},
pages = {1--10},
}
```
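The classical single-period construction that the paper extends (Picard, 1976) is easy to sketch: build a graph with a source arc of capacity equal to each positive block value, a sink arc of capacity equal to the magnitude of each negative value, and infinite-capacity precedence arcs; the source side of a minimum cut is then an optimal pit. A self-contained illustration (the max-flow routine and the toy instance are ours):

```python
from collections import deque

def optimal_pit(values, prec):
    """Single-period open-pit design via min cut (Picard's construction).
    values[b]: net value of mining block b; prec[b]: blocks that must be
    removed before b. Returns (optimal profit, set of blocks to mine)."""
    S, T, INF = 's', 't', float('inf')
    res = {S: {}, T: {}}              # residual capacities
    for b in values:
        res[b] = {}

    def add(u, v, c):
        res[u][v] = res[u].get(v, 0) + c
        res[v].setdefault(u, 0)       # reverse arc for the residual graph

    for b, v in values.items():
        if v > 0:
            add(S, b, v)              # cutting this arc forgoes profit v
        elif v < 0:
            add(b, T, -v)             # cutting this arc pays the cost -v
        for p in prec[b]:
            add(b, p, INF)            # b cannot be mined unless p is mined

    def augmenting_path():
        parent = {S: None}
        q = deque([S])
        while q:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    if v == T:
                        return parent
                    q.append(v)
        return None

    while True:                       # Edmonds-Karp max flow
        parent = augmenting_path()
        if parent is None:
            break
        path, v = [], T
        while v != S:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck

    reach, q = {S}, deque([S])        # source side of a minimum cut
    while q:
        u = q.popleft()
        for v, c in res[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    pit = {b for b in values if b in reach}
    return sum(values[b] for b in pit), pit
```

For the toy instance `optimal_pit({'o': 5.0, 'w1': -2.0, 'w2': -2.0}, {'o': ['w1', 'w2'], 'w1': [], 'w2': []})`, the ore block's value 5 exceeds the stripping cost 4, so the optimal pit is all three blocks with profit 1. The paper's contribution is to carry this construction over to a time-expanded graph covering multiple periods.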

Södra Cell is a world-leading producer of pulp and has a large distribution network for its pulp products. This network includes production mills in Sweden and Norway, terminals in European harbours and inland locations, and many customers. The company uses several transport modes, including long chartered vessels, spot vessels, trains, barges and trucks. The company uses supplier-managed inventory with a large proportion of its customers. This makes the logistics planning, including transportation and inventory, critical, as Södra Cell has direct responsibility for its customers' inventories. However, there is still some uncertainty regarding customer demand, and Södra Cell has traditionally used a safety stock inventory approach to handle this. In this paper, we introduce a robust optimization approach to handle the uncertainty and to establish a distribution plan, together with related inventory management. With this approach there is no need for explicit safety stock levels; this is instead taken into account directly through the robust solution. In the proposed model, we can use practical characterizations and historical information on the uncertainty. An important result is that we can avoid solutions that are too conservative and costly, as in standard robust models. A large case study from Södra Cell is used to validate the proposed approach against the traditional approach with safety stock. The analysis is based on simulations, and it shows that the robust approach is more cost efficient and can be used as a basis in a decision support system.

```
@article{diva2:708870,
author = {Carlsson, D. and Flisberg, Patrik and Roennqvist, M.},
title = {{Using robust optimization for distribution and inventory planning for a large pulp producer}},
journal = {Computers \& Operations Research},
year = {2014},
volume = {44},
pages = {214--225},
}
```

Column generation is a linear programming method in which a dual solution of the master problem is essential when deriving new columns by solving a subproblem. When combined with appropriate integer programming techniques, column generation has successfully been used for solving huge integer programs. In many applications where column generation is used, the master problem is of a set partitioning type.

The set partitioning polytope has the quasi-integrality property, which enables the use of simplex pivot based methods for finding improved integer solutions, where each integer solution is associated with a linear programming basis and a corresponding dual solution. By combining these kinds of simplex pivots with column generation, one obtains a method where each successively found solution to a restricted master problem is feasible, integer, and associated with a dual solution to be used in the column generation step. The column generation subproblem can either be of a regular type, or it can be tailored to produce columns that maintain integrality when pivoted into the basis.

In this paper, a framework for this kind of column generation, which we here name all-integer column generation for set partitioning problems, is presented. The strategies proposed are primarily of a meta-heuristic nature, but with the proper settings, optimal or near-optimal solutions can be obtained.

```
@article{diva2:512199,
author = {Rönnberg, Elina and Larsson, Torbjörn},
title = {{All-integer column generation: Basic principles and extensions}},
journal = {European Journal of Operational Research},
year = {2014},
volume = {233},
number = {3},
pages = {529--538},
}
```
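The pricing mechanics referred to above can be written out (a standard rendering, notation ours). With $u$ the dual solution associated with the basis of the current integer solution to the restricted master of the set partitioning problem $\min\{c^\top\lambda : A\lambda = \mathbf{1},\ \lambda \text{ binary}\}$, the column generation subproblem searches over the full column set $\mathcal{A}$ for a column of negative reduced cost:

```
\bar{c}_a \;=\; c_a - u^\top a,
\qquad
a^\ast \in \operatorname*{arg\,min}_{a \in \mathcal{A}} \; \bar{c}_a ;
\qquad
\bar{c}_{a^\ast} < 0 \;\Rightarrow\; \text{add } a^\ast \text{ to the restricted master}.
```

In the all-integer scheme the dual solution fed to this subproblem comes from the basis of an integer solution rather than from an optimal fractional solution, and the subproblem may in addition be tailored so that the new column preserves integrality when pivoted in.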

We introduce a military aircraft mission planning problem where a given fleet of aircraft should attack a number of ground targets. Due to the nature of the attack, two aircraft need to rendezvous at the target, that is, they need to be synchronized in both space and time. At the attack, one aircraft is launching a guided weapon, while the other is illuminating the target. Each target is associated with multiple attack and illumination options. Further, there may be precedence constraints between targets, limiting the order of the attacks. The objective is to maximize the outcome of the entire attack, while also minimizing the mission timespan. We give a linear mixed-integer programming model of the problem, which can be characterized as a generalized vehicle routing problem with synchronization and precedence side constraints. Numerical results are presented for problem instances of realistic size.

```
@article{diva2:697916,
author = {Quttineh, Nils-Hassan and Larsson, Torbjörn and Lundberg, Kristian and Holmberg, Kaj},
title = {{Military aircraft mission planning: a generalized vehicle routing model with synchronization and precedence}},
journal = {EURO Journal on Transportation and Logistics},
year = {2013},
volume = {2},
number = {1-2},
pages = {109--127},
}
```

The tissue fraction of red blood cells (RBCs) and their oxygenation and speed-resolved perfusion are estimated in absolute units by combining diffuse reflectance spectroscopy (DRS) and laser Doppler flowmetry (LDF). The DRS spectra (450 to 850 nm) are assessed at two source-detector separations (0.4 and 1.2 mm), allowing for a relative calibration routine, whereas LDF spectra are assessed at 1.2 mm in the same fiber-optic probe. Data are analyzed using nonlinear optimization in an inverse Monte Carlo technique by applying an adaptive multilayered tissue model based on geometrical, scattering, and absorbing properties, as well as RBC flow-speed information. Simulations of 250 tissue-like models including up to 2000 individual blood vessels were used to evaluate the method. The absolute root mean square (RMS) deviation between estimated and true oxygenation was 4.1 percentage units, whereas the relative RMS deviations for the RBC tissue fraction and perfusion were 19% and 23%, respectively. Examples of in vivo measurements on forearm and foot during common provocations are presented. The method offers several advantages such as simultaneous quantification of RBC tissue fraction, oxygenation, and perfusion from the same, predictable, sampling volume. The perfusion estimate is speed-resolved, absolute (% RBC × mm/s), and more accurate due to the combination with DRS.

```
@article{diva2:681131,
author = {Fredriksson, Ingemar and Burdakov, Oleg and Larsson, Marcus and Strömberg, Tomas},
title = {{Inverse Monte Carlo in a multilayered tissue model: merging diffuse reflectance spectroscopy and laser Doppler flowmetry}},
journal = {Journal of Biomedical Optics},
year = {2013},
volume = {18},
number = {12},
pages = {127004-1--127004-14},
}
```

Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier.

Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions.

Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality.

Conclusions: The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high-quality dose distributions.

```
@article{diva2:645811,
author = {Holm, Åsa and Larsson, Torbjörn and Carlsson Tedgren, Åsa},
title = {{A linear programming model for optimizing HDR brachytherapy dose distributions with respect to mean dose in the DVH-tail}},
journal = {Medical physics (Lancaster)},
year = {2013},
volume = {40},
number = {8},
}
```
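The conditional value-at-risk surrogate mentioned in the abstract admits the standard linear formulation of Rockafellar and Uryasev (our rendering; the paper's exact notation may differ). For dose values $d_1,\dots,d_n$ sampled in a structure, the mean dose in the upper $(1-\alpha)$-tail of the dose-volume histogram can be expressed with an auxiliary variable $t$ and slacks $z_i$:

```
\mathrm{CVaR}_\alpha(d) \;=\; \min_{t,\; z \ge 0} \;\; t \;+\; \frac{1}{(1-\alpha)\,n} \sum_{i=1}^{n} z_i,
\qquad
z_i \;\ge\; d_i - t, \quad i = 1,\dots,n.
```

A constraint $\mathrm{CVaR}_\alpha(d) \le B$ on an organ at risk therefore stays linear in the dwell times, since each dose value $d_i$ is itself linear in them; this is what makes the overall model a linear program.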

The variance of the error term in ordinary regression models and linear smoothers is usually estimated by adjusting the average squared residual for the trace of the smoothing matrix (the degrees of freedom of the predicted response). However, other types of variance estimators are needed when using monotonic regression (MR) models, which are particularly suitable for estimating response functions with pronounced thresholds. Here, we propose a simple bootstrap estimator to compensate for the over-fitting that occurs when MR models are estimated from empirical data. Furthermore, we show that, in the case of one or two predictors, the performance of this estimator can be enhanced by introducing adjustment factors that take into account the slope of the response function and characteristics of the distribution of the explanatory variables. Extensive simulations show that our estimators perform satisfactorily for a great variety of monotonic functions and error distributions.

```
@article{diva2:536280,
author = {Sysoev, Oleg and Grimvall, Anders and Burdakov, Oleg},
title = {{Bootstrap estimation of the variance of the error term in monotonic regression models}},
journal = {Journal of Statistical Computation and Simulation},
year = {2013},
volume = {83},
number = {4},
pages = {625--638},
}
```

We present versions of the Frank-Wolfe method for linearly constrained convex programs, in which consecutive search directions are made conjugate. Preliminary computational studies in a MATLAB environment applying pure Frank-Wolfe, conjugate direction Frank-Wolfe (CFW), bi-conjugate Frank-Wolfe (BFW), and "partanized" Frank-Wolfe methods to some classical Traffic Assignment Problems show that CFW and BFW compare favorably to the other methods. This spurred a more detailed study, comparing our methods to an origin-based algorithm. This study indicates that our methods are competitive for accuracy requirements ensuring link flow stability. We also show that CFW is globally convergent. We further point at independent studies by other researchers showing that our methods compare favorably with recent bush-based and gradient projection algorithms on computers with several cores.
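
Since CFW and BFW build on the basic Frank-Wolfe iteration, a minimal sketch of that underlying method may help; the quadratic test problem, the open-loop step rule and all names below are illustrative, and this is not the paper's CFW/BFW implementation:

```python
def frank_wolfe(grad, lmo, x0, iters=2000):
    # Classical Frank-Wolfe: minimize the linearized objective over the
    # feasible set (via a linear minimization oracle), then move by a
    # convex combination with the open-loop step size 2/(k+2).
    x = list(x0)
    for k in range(iters):
        s = lmo(grad(x))
        gamma = 2.0 / (k + 2)
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

# Illustrative problem: minimize ||x - c||^2 over the unit simplex,
# whose vertices are the coordinate unit vectors.
c = [0.1, 0.5, 0.2]
grad = lambda x: [2 * (xi - ci) for xi, ci in zip(x, c)]

def simplex_lmo(g):
    s = [0.0] * len(g)
    s[min(range(len(g)), key=g.__getitem__)] = 1.0  # best vertex
    return s

x = frank_wolfe(grad, simplex_lmo, [1.0 / 3] * 3)
```

The iterate stays a convex combination of simplex vertices throughout; the conjugate-direction variants in the paper modify how the direction towards the vertex is combined with previous search directions.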

```
@article{diva2:23504,
author = {Daneva (Mitradjieva), Maria and Lindberg, Per Olov},
title = {{The Stiff is Moving - Conjugate Direction Frank-Wolfe Methods with Applications to Traffic Assignment}},
journal = {Transportation Science},
year = {2013},
volume = {47},
number = {2},
pages = {280--293},
}
```

When non-smooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients that verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of subgradient methods in terms of the approximate fulfilment of optimality conditions. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers and convergent lower bounds on the optimal objective value, is not directly available in subgradient schemes. As a means of overcoming these weaknesses in subgradient methods, we introduced in our previous articles the computation of an ergodic (averaged) sequence of subgradients. Specifically, we considered a non-smooth, convex program solved by a conditional subgradient optimization scheme with divergent series step lengths, and showed that the elements of the ergodic sequence of subgradients in the limit fulfil the optimality conditions at the optimal solution, to which the sequence of iterates converges. This result has three important implications. The first is the finite identification of active constraints at the solution obtained in the limit. The second is the establishment of the convergence of ergodic sequences of Lagrange multipliers; this result enables sensitivity analyses for solutions obtained by subgradient methods. The third is the convergence of a lower bounding procedure based on an ergodic sequence of affine underestimates of the objective function; this procedure also provides a proper termination criterion for subgradient optimization methods. This article first gives an overview of results and applications found in our previous articles pertaining to the generation of ergodic sequences of subgradients within a subgradient scheme. It then presents an application of these results to the first simplicial decomposition algorithm for convex and non-smooth optimization problems.
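
The role of the ergodic subgradient average can be illustrated on a one-dimensional toy problem; the function |x|, the step lengths 1/(k+1) and all names below are illustrative, not the article's general scheme:

```python
def subgradient_ergodic(subgrad, x0, steps=5000):
    # Subgradient method with divergent-series step lengths 1/(k+1),
    # maintaining running (ergodic) averages of the iterates and of
    # the subgradients used along the way.
    x = float(x0)
    g_avg, x_avg = 0.0, 0.0
    for k in range(steps):
        g = subgrad(x)
        g_avg += (g - g_avg) / (k + 1)   # ergodic subgradient average
        x_avg += (x - x_avg) / (k + 1)   # ergodic iterate average
        x -= (1.0 / (k + 1)) * g         # divergent-series step
    return x_avg, g_avg

# f(x) = |x| has subgradient sign(x): the raw subgradients oscillate
# between -1 and +1 near the optimum, but their ergodic average tends
# to 0, which verifies optimality at x* = 0.
x_avg, g_avg = subgradient_ergodic(lambda x: 1.0 if x >= 0 else -1.0, 2.0)
```

The raw subgradients never reveal that the method has essentially converged; only the averaged sequence approaches the zero subgradient that certifies optimality.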

```
@article{diva2:589530,
author = {Larsson, Torbjörn and Patriksson, Michael and Strömberg, Ann-Brith},
title = {{Ergodic Convergence in Subgradient Optimization with Application to Simplicial Decomposition of Convex Programs}},
journal = {Contemporary Mathematics},
year = {2012},
volume = {568},
pages = {159--189},
}
```

The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by results of numerical experiments performed for some problems related to the design of filter networks.

```
@article{diva2:536888,
author = {Andersson, Mats and Burdakov, Oleg and Knutsson, Hans and Zikrin, Spartak},
title = {{Global search strategies for solving multilinear least-squares problems}},
journal = {Sultan Qaboos University Journal for Science},
year = {2012},
volume = {17},
number = {1},
pages = {12--21},
}
```

We suggest here a least-change correction to an available finite element (FE) solution. This postprocessing procedure is aimed at recovering the monotonicity and some other important properties that may not be exhibited by the FE solution. Although our approach is presented for FEs, it admits a natural extension to other numerical schemes, such as finite differences and finite volumes. For the postprocessing, a priori information about the monotonicity is assumed to be available, either for the whole domain or for a subdomain where the lost monotonicity is to be recovered. The obvious requirement is that such information is to be obtained without involving the exact solution, e.g. from expected symmetries of this solution.

The postprocessing is based on solving a monotonic regression problem with some extra constraints. One of them is a linear equality-type constraint that models the conservativity requirement. The other ones are box-type constraints, and they originate from the discrete maximum principle. The resulting postprocessing problem is a large-scale quadratic optimization problem. It is proved that the postprocessed FE solution preserves the accuracy of the discrete FE approximation.

We introduce an algorithm for solving the postprocessing problem. It can be viewed as a dual ascent method based on the Lagrangian relaxation of the equality constraint. We justify its correctness theoretically. Its efficiency is demonstrated by the presented results of numerical experiments.

```
@article{diva2:516858,
author = {Burdakov, Oleg and Kapyrin, Ivan and Vassilevski, Yuri},
title = {{Monotonicity recovering and accuracy preserving optimization methods for postprocessing finite element solutions}},
journal = {Journal of Computational Physics},
year = {2012},
volume = {231},
number = {8},
pages = {3126--3142},
}
```

Purpose: Dose plans generated with optimization models hitherto used in HDR brachytherapy have shown a tendency to yield longer dwell times than manually optimized plans. Concern has been raised for the corresponding undesired hot spots and various methods to mitigate these have been developed. The hypotheses of this work are a) that one cause for the long dwell times is the use of objective functions comprising simple linear penalties and b) that alternative penalties, being piecewise linear, would lead to reduced length of individual dwell times.

Methods: The characteristics of the linear penalties and the piecewise linear penalties are analysed mathematically. Experimental comparisons between the two types of penalties are carried out retrospectively for a set of prostate cancer patients.

Results: While most dose-volume parameters do not differ significantly between the two types of penalties, significant changes can be seen in the dwell times. On average, total dwell times were reduced by 4.2%, with a reduction of maximum dwell times by 30%, using the alternative penalties.

Conclusion: The use of linear penalties in optimization models for HDR brachytherapy is one cause of the undesired longer dwell times appearing in mathematically optimized plans. By introducing alternative penalties, a significant reduction in dwell times can be achieved for HDR brachytherapy dose plans. Although various constraints to reduce the long dwell times have been developed, our finding is of fundamental interest in showing that the shape of the objective function is one reason for their appearance.

```
@article{diva2:412836,
author = {Holm, Åsa and Larsson, Torbjörn and Carlsson Tedgren, Åsa},
title = {{Impact of Using Linear Optimization Models in Dose Planning for HDR Brachytherapy}},
journal = {Medical physics (Lancaster)},
year = {2012},
volume = {39},
number = {2},
pages = {1021--1028},
}
```

Monotonic regression (MR) is an efficient tool for estimating functions that are monotonic with respect to input variables. A fast and highly accurate approximate algorithm called the GPAV was recently developed for efficiently solving large-scale multivariate MR problems. When such problems are too large, the GPAV becomes too demanding in terms of computational time and memory. An approach that extends the application area of the GPAV to encompass much larger MR problems is presented. It is based on segmentation of a large-scale MR problem into a set of moderate-scale MR problems, each solved by the GPAV. The major contribution is the development of a computationally efficient strategy that produces a monotonic response using the local solutions. A theoretically motivated trend-following technique is introduced to ensure higher accuracy of the solution. The presented results of extensive simulations on very large data sets demonstrate the high efficiency of the new algorithm.
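
The GPAV generalizes the classical pool-adjacent-violators algorithm (PAVA) for completely ordered data; the following block-merging sketch is that textbook PAVA, not the GPAV or the segmentation strategy of the paper:

```python
def pava(y):
    # Pool-adjacent-violators for least-squares isotonic regression on
    # a completely (linearly) ordered set of observations.
    merged = []                        # stack of [block mean, block size]
    for v in y:
        merged.append([float(v), 1])
        # Pool adjacent blocks while the monotonicity constraint is violated.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, s2 = merged.pop()
            m1, s1 = merged.pop()
            merged.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    fit = []
    for mean, size in merged:
        fit.extend([mean] * size)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

Each merge replaces two adjacent blocks by their weighted mean, so the fitted response is piecewise constant and non-decreasing.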

```
@article{diva2:424299,
author = {Sysoev, Oleg and Burdakov, Oleg and Grimvall, Anders},
title = {{A segmentation-based algorithm for large-scale partially ordered monotonic regression}},
journal = {Computational Statistics \& Data Analysis},
year = {2011},
volume = {55},
number = {8},
pages = {2463--2476},
}
```

We consider a constrained optimization problem with mixed integer and real variables. It models optimal placement of communications relay nodes in the presence of obstacles. This problem is widely encountered, for instance, in robotics, where it is required to survey some target located in one point and convey the gathered information back to a base station located in another point. One or more unmanned aerial or ground vehicles (UAVs or UGVs) can be used for this purpose as communications relays. The decision variables are the number of unmanned vehicles (UVs) and the UV positions. The objective function is assumed to assess the placement quality. We suggest one instance of such a function which is especially suitable for assessing UAV placement. The constraints are determined by, firstly, a free line of sight requirement for every consecutive pair in the chain and, secondly, a limited communication range. Because of these requirements, our constrained optimization problem is a difficult multi-extremal problem for any fixed number of UVs. Moreover, the feasible set of real variables is typically disjoint. We present an approach that allows us to efficiently find a practically acceptable approximation to a global minimum in the problem of optimal placement of communications relay nodes. It is based on a spatial discretization with a subsequent reduction to a shortest path problem. The case of a restricted number of available UVs is also considered here. We introduce two label correcting algorithms which are able to take advantage of some peculiarities of the resulting restricted shortest path problem. The algorithms produce a Pareto solution to the two-objective problem of minimizing the path cost and the number of hops. We justify their correctness. The presented results of numerical 3D experiments show that our algorithms are superior to the conventional Bellman-Ford algorithm tailored to solving this problem.
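
The discretize-then-shortest-path idea can be sketched with plain Dijkstra on a grid of candidate positions, using hop count as a stand-in for the placement-quality objective; the grid, the obstacle and all names are illustrative, not the paper's algorithms:

```python
import heapq

def grid_shortest_path(free, start, goal):
    # Spatial discretization reduces relay placement to a shortest-path
    # problem; here: Dijkstra on a 4-connected grid of feasible
    # candidate positions ('free' cells), minimizing the hop count.
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue                  # stale heap entry
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in free and d + 1 < dist.get(nb, float("inf")):
                dist[nb] = d + 1
                heapq.heappush(heap, (d + 1, nb))
    return None                       # goal unreachable

# 3x3 grid with the centre cell blocked by an obstacle.
free = {(r, c) for r in range(3) for c in range(3)} - {(1, 1)}
hops = grid_shortest_path(free, (0, 0), (2, 2))
print(hops)  # 4
```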

```
@article{diva2:354877,
author = {Burdakov, Oleg and Doherty, Patrick and Holmberg, Kaj and Olsson, Per-Magnus},
title = {{Optimal placement of UV-based communications relay nodes}},
journal = {Journal of Global Optimization},
year = {2010},
volume = {48},
number = {4},
pages = {511--531},
}
```

Hospital wards need to be staffed by nurses round the clock, resulting in irregular working hours for many nurses. Over the years, the nurses' influence on the scheduling has been increased in order to improve their working conditions. In Sweden it is common to apply a kind of self-scheduling where each nurse individually proposes a schedule, and the final schedule is then determined through informal negotiations between the nurses. This kind of self-scheduling is very time-consuming and often leads to conflicts. We present a pilot study which aims at determining whether it is possible to create an optimisation tool that automatically delivers a usable schedule based on the schedules proposed by the nurses. The study is performed at a typical Swedish nursing ward, for which we have developed a mathematical model and delivered schedules. The results of this study are very promising and suggest continued work along these lines.

```
@article{diva2:353179,
author = {Rönnberg, Elina and Larsson, Torbjörn},
title = {{Automating the self-scheduling process of nurses in Swedish healthcare: a pilot study}},
journal = {HEALTH CARE MANAGEMENT SCIENCE},
year = {2010},
volume = {13},
number = {1},
pages = {35--53},
}
```

When unmanned aerial vehicles (UAVs) are used for surveillance, information must often be transmitted to a base station in real time. However, limited communication ranges and the common requirement of free line of sight may make direct transmissions from distant targets impossible. This problem can be solved using relay chains consisting of one or more intermediate relay UAVs. This leads to the problem of positioning such relays given known obstacles, while taking into account a possibly mission-specific quality measure. The maximum quality of a chain may depend strongly on the number of UAVs allocated. Therefore, it is desirable to either generate a chain of maximum quality given the available UAVs or allow a choice from a spectrum of Pareto-optimal chains corresponding to different trade-offs between the number of UAVs used and the resulting quality. In this article, we define several problem variations in a continuous three-dimensional setting. We show how sets of Pareto-optimal chains can be generated using graph search and present a new label-correcting algorithm generating such chains significantly more efficiently than the best-known algorithms in the literature. Finally, we present a new dual ascent algorithm with better performance for certain tasks and situations.
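
A generic label-correcting search for the two-objective (cost, hops) problem, keeping every non-dominated label at each node, can be sketched as follows; the toy graph and names are illustrative, and this is not the article's new algorithm:

```python
from collections import deque

def pareto_labels(adj, source):
    # Label-correcting search for a bi-objective shortest path problem:
    # each node keeps all non-dominated (cost, hops) labels, mirroring
    # the trade-off between chain quality and the number of relay UAVs.
    labels = {source: {(0.0, 0)}}
    queue = deque([(source, (0.0, 0))])
    while queue:
        u, (cost, hops) = queue.popleft()
        if (cost, hops) not in labels.get(u, set()):
            continue                  # label was dominated meanwhile
        for v, w in adj.get(u, []):
            cand = (cost + w, hops + 1)
            cur = labels.setdefault(v, set())
            if any(c <= cand[0] and h <= cand[1] for c, h in cur):
                continue              # dominated by an existing label
            cur -= {l for l in cur if cand[0] <= l[0] and cand[1] <= l[1]}
            cur.add(cand)
            queue.append((v, cand))
    return labels

# Direct arc s->t: expensive but one hop; via a: cheaper but two hops.
adj = {"s": [("a", 1.0), ("t", 5.0)], "a": [("t", 1.0)]}
labels = pareto_labels(adj, "s")
print(sorted(labels["t"]))  # [(2.0, 2), (5.0, 1)]
```

Both labels at `t` survive because neither dominates the other, which is exactly the spectrum of trade-offs between UAV count and chain quality described above.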

```
@article{diva2:337902,
author = {Burdakov, Oleg and Doherty, Patrick and Holmberg, Kaj and Kvarnström, Jonas and Olsson, Per-Magnus},
title = {{Relay Positioning for Unmanned Aerial Vehicle Surveillance}},
journal = {The international journal of robotics research},
year = {2010},
volume = {29},
number = {8},
pages = {1069--1087},
}
```

Origin-destination (OD) matrices are essential for various analyses in the field of traffic planning, and they are often estimated from link flow observations. We compare methods for allocating link flow detectors to a traffic network with respect to the quality of the estimated OD-matrix. First, an overview of allocation methods proposed in the literature is presented. Second, we construct a controlled experimental environment where any allocation method can be evaluated, and compared to others, in terms of the quality of the estimated OD-matrix. Third, this environment is used to evaluate and compare three fundamental allocation methods. Studies are made on the Sioux Falls network and on a network modeling the city of Linköping. Our conclusion is that the most commonly studied approach for detector allocation, maximizing the coverage of OD-pairs, seems to be unfavorable for the quality of the estimated OD-matrix.

```
@article{diva2:291256,
author = {Larsson, Torbjörn and Lundgren, Jan and Peterson, Anders},
title = {{Allocation of Link Flow Detectors for Origin-Destination Matrix Estimation-A Comparative Study}},
journal = {COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING},
year = {2010},
volume = {25},
number = {2},
pages = {116--131},
}
```

When addressing the problem of snow removal for secondary roads, a tool for solving the rural postman problem can be very useful. We present some ideas for heuristics for this problem. The heuristics are of the same type as the classical Frederickson heuristic. The ideas concern the order of the main steps in such a method, namely constructing a connected graph with all vertices having even degree, containing all the required edges. We also propose two postprocessing heuristics for improving the tours and removing unnecessary detours. The computational tests show that the ideas are interesting alternatives to the classical approach, and that running times are acceptable. We study problem characteristics that may indicate which method to choose.
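
After the construction step (a connected graph in which every vertex has even degree and which contains the required edges), the tour is extracted with Hierholzer's Eulerian circuit algorithm; a minimal sketch on an illustrative 4-cycle, not the paper's heuristics:

```python
def euler_circuit(adj, start):
    # Hierholzer's algorithm: on a connected multigraph where every
    # vertex has even degree, extract a closed tour traversing every
    # edge exactly once (the final step of a Frederickson-type method).
    adj = {u: list(nbrs) for u, nbrs in adj.items()}  # mutable copy
    stack, circuit = [start], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v = adj[u].pop()
            adj[v].remove(u)          # consume the undirected edge u-v
            stack.append(v)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

# A 4-cycle: every vertex has even degree, so an Eulerian circuit exists.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tour = euler_circuit(adj, 0)
```

The returned tour starts and ends at `start` and uses every edge once; postprocessing heuristics like those in the paper would then shorten detours in such a tour.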

```
@article{diva2:285747,
author = {Holmberg, Kaj},
title = {{Heuristics for the rural postman problem}},
journal = {Computers \& Operations Research},
year = {2010},
volume = {37},
number = {5},
pages = {981--990},
}
```

The feasible direction method of Frank and Wolfe has been claimed to be efficient for solving the stochastic transportation problem. While this is true for very moderate accuracy requirements, the diagonalized Newton and conjugate Frank–Wolfe algorithms, which we describe and evaluate, are otherwise substantially more efficient. Like the Frank–Wolfe algorithm, these two algorithms take advantage of the structure of the stochastic transportation problem. We also introduce a Frank–Wolfe type algorithm with multi-dimensional search; this search procedure exploits the Cartesian product structure of the problem. Numerical results for two classic test problem sets are given. The three new methods that are considered are shown to be superior to the Frank–Wolfe method, and also to an earlier suggested heuristic acceleration of the Frank–Wolfe method.

```
@article{diva2:23507,
author = {Daneva (Mitradjieva), Maria and Larsson, Torbjörn and Patriksson, Michael and Rydergren, Clas},
title = {{A Comparison of Feasible Direction Methods for the Stochastic Transportation Problem}},
journal = {Computational optimization and applications},
year = {2010},
volume = {46},
number = {3},
pages = {451--466},
}
```

Many IP (Internet Protocol) networks use OSPF (Open Shortest Path First) for determining the routing of traffic. OSPF routers compute routing paths using link weights set by the network administrator, and the routers send traffic on all shortest paths to the destination. An interesting question is whether or not a set of prespecified routing patterns can be realized in an OSPF network. If not, we seek structural properties that explain why no such weights exist. Mathematical models for finding weights and for combining routing patterns are presented. We show that two possibly non-spanning routing patterns forming a ``valid cycle'' cannot simultaneously be obtained in an OSPF network. Two new methods for finding valid cycles are presented, illustrated by numerical examples, and shown to be faster than those previously known.
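
The forward direction of the question, computing the routing pattern that a given weight setting induces towards one destination, is straightforward; a sketch using Dijkstra on the reversed graph (instance and names illustrative; the paper studies the inverse problem of finding such weights, or explaining why none exist):

```python
import heapq

def ospf_routing_pattern(arcs, weights, dest):
    # All arcs (u, v) lying on some shortest path to 'dest' under the
    # given link weights, i.e. with d(u) == w(u, v) + d(v); distances
    # to dest are computed by Dijkstra on the reversed graph.
    rev = {}
    for u, v in arcs:
        rev.setdefault(v, []).append(u)
    dist = {dest: 0.0}
    heap = [(0.0, dest)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for u in rev.get(v, []):
            nd = d + weights[(u, v)]
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return {(u, v) for u, v in arcs
            if u in dist and v in dist
            and abs(dist[u] - (weights[(u, v)] + dist[v])) < 1e-9}

arcs = [("a", "b"), ("a", "c"), ("b", "t"), ("c", "t")]
weights = {("a", "b"): 1, ("a", "c"): 1, ("b", "t"): 1, ("c", "t"): 2}
pattern = ospf_routing_pattern(arcs, weights, "t")
print(sorted(pattern))  # [('a', 'b'), ('b', 't'), ('c', 't')]
```

Here `(a, c)` is excluded because routing via `b` is strictly shorter; finding weights that realize a *prescribed* pattern, or certifying a valid cycle when none exist, is the hard inverse problem.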

```
@article{diva2:240537,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Compatible Weights and Valid Cycles in Non-spanning OSPF Routing Patterns}},
journal = {Algorithmic Operations Research},
year = {2009},
volume = {4},
number = {1},
pages = {19--35},
}
```

We address the problem of designing IP networks where the traffic is routed using the OSPF protocol. Routers in OSPF networks use link weights set by an administrator for determining how to route the traffic. The routers use all shortest paths when traffic is routed to a destination, and the traffic is evenly balanced by the routers when several paths are equally short. We present a new model for the OSPF network design problem. The model is based on routing patterns and does not explicitly include OSPF weights. The OSPF protocol is modeled by ensuring that all pairs of routing patterns are subpath consistent, which is a necessary condition for the existence of weights. A Lagrangean heuristic is proposed as solution method, and feasible solutions to the problem are generated using a tabu search method. Computational results are reported for random instances and for real-life instances.

```
@article{diva2:227214,
author = {Brostrom, Peter and Holmberg, Kaj},
title = {{Design of OSPF networks using subpath consistent routing patterns}},
journal = {TELECOMMUNICATION SYSTEMS},
year = {2009},
volume = {41},
number = {4},
pages = {293--309},
}
```

We consider routing in symmetrical three stage Clos networks. In particular, we search for the routing of an additional connection that requires the least rearrangement, i.e. the minimal number of changes of already routed connections. We describe polynomial methods, based on matchings and edge colorings. The basic idea is to swap colors along alternating paths. The paths need to be maximal, and the shortest of these maximal paths is chosen, since it minimizes the rerouting that needs to be done. Computational tests confirm the efficiency of the approach.
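
The color-swap step can be sketched as follows, with the edge coloring stored as vertex-to-{color: neighbour} maps; the tiny instance and names are illustrative, not the paper's matching-based routing methods:

```python
def swap_alternating_path(coloring, start, c1, c2):
    # Follow the maximal path of edges alternately colored c1 and c2
    # from 'start', then exchange the two colors along it; this frees
    # color c1 at 'start' while keeping the edge coloring proper.
    # Assumes 'start' misses color c2, so the walk is a path, not a cycle.
    u, want = start, c1
    path = []
    while want in coloring.get(u, {}):
        v = coloring[u][want]
        path.append((u, v, want))
        u, want = v, (c2 if want == c1 else c1)
    for u, v, c in path:              # remove the old colors first ...
        del coloring[u][c]
        del coloring[v][c]
    for u, v, c in path:              # ... then assign the swapped ones
        other = c2 if c == c1 else c1
        coloring[u][other] = v
        coloring[v][other] = u
    return [(u, v) for u, v, _ in path]

# Path a-b (color 1), b-c (color 2); swapping frees color 1 at 'a'.
coloring = {"a": {1: "b"}, "b": {1: "a", 2: "c"}, "c": {2: "b"}}
swapped = swap_alternating_path(coloring, "a", 1, 2)
print(swapped)  # [('a', 'b'), ('b', 'c')]
```

Each swapped edge corresponds to rerouting one existing connection through the other middle switch, which is why the shortest maximal alternating path minimizes the rearrangement.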

```
@article{diva2:221881,
author = {Holmberg, Kaj},
title = {{Graph optimization approaches for minimal rerouting in symmetric three stage Clos networks}},
journal = {Journal of Mathematical Modelling and Algorithms},
year = {2009},
volume = {8},
number = {1},
pages = {81--100},
}
```

Billerud, a Swedish company with four integrated pulp and paper mills, serves specific packaging-market segments. In 2000, the process-engineering department at Billerud's mill in Skärblacka sought to improve the mill's bleaching process. It contacted Linköping University, which collaborated with Eurocon Automation AB to develop a decision-support system, OptCab. Eurocon Automation AB now offers OptCab as a general tool in the market. Its core is an online optimization system that dynamically updates a process description and optimizes the bleaching control process. The system, which has been fully operational since 2004, has provided numerous benefits to Billerud. Chemical use has decreased by approximately 10 percent, saving approximately two million euros since its installation; the environmental impact has also been reduced; and the brightness quality of paper that the mill produces is more uniform and stable. Moreover, OptCab requires less operator time, freeing the operators for development and analysis work.

```
@article{diva2:212983,
author = {Flisberg, Patrik and Ronnqvist, Mikael and Nilsson, Stefan},
title = {{Billerud Optimizes Its Bleaching Process Using Online Optimization}},
journal = {INTERFACES},
year = {2009},
volume = {39},
number = {2},
pages = {119--131},
}
```

Many telecommunication networks use the open shortest path first (OSPF) protocol for the routing of traffic. In such networks, each router sends the traffic on the shortest paths to the destination, with respect to the link weights assigned. An interesting question is whether or not a set of desired routing patterns can be obtained in an OSPF network by assigning appropriate weights. If not, we wish to find the source of the infeasibility. We study these issues by formulating a mathematical model and investigating its feasibility. A certain structure, called valid cycle, is found to be present in most infeasible instances. This yields new necessary conditions, stronger than those previously known, for the existence of weights yielding a set of given desired shortest path graphs. A valid cycle indicates which parts of the routing patterns are in conflict and can be used for changing the routing patterns to make the problem feasible. A polynomial algorithm for finding valid cycles is presented, the method is illustrated by a numerical example, and computational tests are reported.

```
@article{diva2:290174,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Valid Cycles: A Source of Infeasibility in Open Shortest Path First Routing}},
journal = {Networks},
year = {2008},
volume = {52},
number = {4},
pages = {206--215},
}
```

We consider the separable nonlinear and strictly convex single-commodity network flow problem (SSCNFP). We develop a computational scheme for generating a primal feasible solution from any Lagrangian dual vector; this is referred to as "early primal recovery". It is motivated by the desire to obtain a primal feasible vector before convergence of a Lagrangian scheme; such a vector is not available from a Lagrangian dual vector unless it is optimal. The scheme is constructed such that if we apply it to a sequence of Lagrangian dual vectors that converge to an optimal one, then the resulting primal (feasible) vectors converge to the unique optimal primal flow vector. It is therefore also a convergent Lagrangian heuristic, akin to those primarily devised within the field of combinatorial optimization, but with the contrasting and striking advantage that it is guaranteed to yield a primal optimal solution in the limit. Thereby we also gain access to a new stopping criterion for any Lagrangian dual algorithm for the problem, which is of interest in particular if the SSCNFP arises as a subproblem in a more complex model. We construct instances of convergent Lagrangian heuristics that are based on graph searches within the residual graph, and therefore are efficiently implementable; in particular, we consider two shortest path based heuristics that are based on the optimality conditions of the original problem. Numerical experiments report on the relative efficiency and accuracy of the various schemes.

```
@article{diva2:271249,
author = {Larsson, Torbjörn and Marklund, J. and Olsson, C. and Patriksson, M.},
title = {{Convergent Lagrangian heuristics for nonlinear minimum cost network flows}},
journal = {European Journal of Operational Research},
year = {2008},
volume = {189},
number = {2},
pages = {324--346},
}
```

We study the multicommodity network flow problem with fixed costs on paths, with specific application to the empty freight car distribution process of a rail operator. The classification costs for sending a group of cars do not depend on the number of cars in the group, as long as the group is kept together as one unit. Arcs correspond to trains, so we have capacity restrictions on arcs but fixed costs on the paths corresponding to routes for groups of cars. As solution method, we propose a Lagrangian based heuristic using dual subgradient search and primal heuristics based on path information of the Lagrangian subproblem solutions. The method illustrates several ways of exploiting the specific structures of the problem. Computational tests indicate that the method is able to generate fairly good primal feasible solutions and lower bounds on the optimal objective function value.

```
@article{diva2:266829,
author = {Holmberg, Kaj and Joborn, Martin and Melin, Kennet},
title = {{Lagrangian based heuristics for the multicommodity network flow problem with fixed costs on paths}},
journal = {European Journal of Operational Research},
year = {2008},
volume = {188},
number = {1},
pages = {101--108},
}
```

```
@article{diva2:260950,
author = {Holmberg, Kaj},
title = {{Optimization Models for Routing in Switching Networks of Clos Type with many Stages}},
journal = {Advanced modeling and optimization},
year = {2008},
volume = {10},
number = {1},
}
```

In this paper, the integrated planning of production and distribution for a pulp company is considered. The tactical decisions included concern the transportation of raw materials from harvest areas to pulp mills; production mix and contents at pulp mills; inventory; distribution of pulp products from mills to customers; and the selection of potential orders and their levels at customers. The planning period is one year and several time periods are included. As a solution approach we make use of two different heuristic approaches. The main reason to use heuristics is the need for quick solution times. The first heuristic is based on a rolling planning horizon where, iteratively, a fixed number of time periods is taken into consideration. The second heuristic is based on Lagrangian decomposition and subgradient optimization. This provides optimistic bounds on the optimal objective function value that are better than the LP relaxation value, and these can be used as a measure of the quality of the heuristic (pessimistic) solutions. In addition, we apply the proposed rolling horizon heuristic in each iteration of the subgradient optimization. A number of cases based on real data are analysed, which shows that the proposed solution approach is simple and provides high quality solutions.

```
@article{diva2:25349,
author = {Gunnarsson, Helene and Rönnqvist, Mikael},
title = {{Solving a multi-period supply chain problem for a pulp company using heuristics--An application to Södra Cell AB}},
journal = {International Journal of Production Economics},
year = {2008},
volume = {116},
number = {1},
pages = {75--94},
}
```

Road blocking due to thawing or heavy rains contributes annually to considerable losses in Swedish forestry. Companies are forced to build up large stocks of raw material (saw and pulp logs) in order to secure a continuous supply when access to the road network is uncertain. Storage outdoors leads to quality deterioration and monetary losses. Other related costs due to road blocking are road damage and longer haulage distances. One approach to reducing the losses due to road blocks is to upgrade the road network to a standard that guarantees accessibility. We consider the road upgrade problem from the perspective of Swedish forest companies, with a planning horizon of about one decade. The objective is to minimize the combined upgrade and transportation costs. We present two mixed integer programming models, which are uncapacitated fixed charge network flow problems including multiple assortments, several time periods and a set of road classes. One model is based on arc flows and one on route flows. For a typical planning instance, the models become large, and we propose how to improve solution performance through model strengthening. The models are tested in a case study for a major Swedish forest company.

```
@article{diva2:575485,
author = {Henningsson, Mathias and Karlsson, Jenny and Rönnqvist, Mikael},
title = {{Optimization models for forest road upgrade planning}},
journal = {Journal of Mathematical Modelling and Algorithms},
year = {2007},
volume = {6},
number = {1},
pages = {3--23},
}
```

To produce pulp for paper production or as market pulp is a complicated on-line process with many integrated stages that impact the final quality. In the bleaching plant, which is at the end of pulp production, the main objective is to increase pulp brightness within specified limits. Here chemical treatments are applied in sequential stages to achieve the right brightness while striving to maintain the pulp strength as unaffected as possible. The raw material, i.e. pulp logs and wood chips from saw mills, differs in quality and properties. Due to this, it is important to continuously update the amount of chemicals added to the pulp in real time. This is typically done by experienced operators. In this paper, we describe an on-line optimization based decision support system called OptCab that controls the bleaching process at Billerud AB's paper mill in Skärblacka. The solution approach is based on two phases. In phase one, we establish approximations of each of the processes based on process data collected on-line. These approximations are found by solving a set of constrained least squares problems and are updated every 15 minutes. In phase two, we formulate an overall nonlinear control problem that links all stages together and aims to minimize the cost of chemicals. This is solved on-line every five minutes. The system has been in operation for the last three years, providing a 10% reduction in the use of chemicals. Additional benefits include a more stable brightness quality.

```
@article{diva2:571200,
author = {Flisberg, Patrik and Nilsson, Stefan and Rönnqvist, Mikael},
title = {{Optimized on-line process control of bleaching operations with OptCab}},
journal = {Norges Handelshoeyskole. Institutt for Foretaksoekonomi. Discussion Paper},
year = {2007},
}
```

The forwarding of logs at harvest areas once the harvesting is done is planned manually by experienced operators. To improve their efficiency and simplify the planning, we have developed and tested a decision support system at a major Swedish forest company. The system is based on a combination of a geographic information system (GIS), global positioning system (GPS), and optimization routines to solve the underlying vehicle routing problem. The routes for the forwarders are found by using a repeated matching algorithm. The solution time is short, and it is possible to find routes dynamically in a real-time environment. The geographic information required is found by using a GPS together with data obtained from the bucking software in the harvesters. To show the routes and location of the forwarder, we make use of a GIS that is connected to the GPS. We report on a study showing savings of 8% in the distance travelled, together with numerical tests on the solution methodology. We also compare the proposed solution method with some well-known routing methods.

```
@article{diva2:268796,
author = {Flisberg, Patrik and Forsberg, Mattias and Rönnqvist, Mikael},
title = {{Optimization based planning tools for routing of forwarders at harvest areas}},
journal = {Canadian Journal of Forest Research},
year = {2007},
volume = {37},
number = {11},
pages = {2153--2163},
}
```

```
@article{diva2:264812,
author = {Broström, Peter and Holmberg, Kaj},
title = {{On the Extremal Structure of an OSPF Related Cone}},
journal = {Vietnam journal of mathematics},
year = {2007},
volume = {35},
pages = {507--522},
}
```

We exhibit useful properties of ballstep subgradient methods for convex optimization using level controls for estimating the optimal value. Augmented with simple averaging schemes, they asymptotically find objective and constraint subgradients involved in optimality conditions. When applied to Lagrangian relaxation of convex programs, they find both primal and dual solutions, and have practicable stopping criteria. Up until now, similar results have only been known for proximal bundle methods, and for subgradient methods with divergent series stepsizes, whose convergence can be slow. Encouraging numerical results are presented for large-scale nonlinear multicommodity network flow problems.

```
@article{diva2:260850,
author = {Kiwiel, Krzysztof C. and Larsson, Torbjörn and Lindberg, Per Olov},
title = {{Lagrangian relaxation via ballstep subgradient methods}},
journal = {Mathematics of Operations Research},
year = {2007},
volume = {32},
number = {3},
pages = {669--686},
}
```
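
As a hedged illustration of the primal-recovery idea above, the sketch below uses plain projected subgradient ascent with divergent-series steps and step-weighted averaging of the primal subproblem solutions (not the paper's ballstep/level scheme) on a tiny toy problem: minimize x1 + x2 subject to x1 + x2 >= 1 over the unit box. The individual subproblem solutions oscillate between the box corners, yet their ergodic average approaches the primal optimal face.

```python
# Sketch only: Lagrangian relaxation of min x1 + x2 s.t. x1 + x2 >= 1, x in [0,1]^2.
# Relaxing the constraint gives L(x, u) = x1 + x2 + u*(1 - x1 - x2); the dual
# function evaluates this over the box, and a subgradient at u is 1 - x1 - x2.

def subproblem(u):
    # Minimize (1 - u)*(x1 + x2) over [0,1]^2; tie-break at u = 1 toward zero.
    return (0.0, 0.0) if u <= 1.0 else (1.0, 1.0)

u, xbar, wsum = 0.0, [0.0, 0.0], 0.0
for k in range(20000):
    x = subproblem(u)
    g = 1.0 - x[0] - x[1]               # subgradient of the dual function at u
    step = 1.0 / (k + 1) ** 0.5         # divergent-series step lengths
    u = max(0.0, u + step * g)          # projected subgradient ascent
    # step-weighted ergodic average of the primal subproblem solutions
    wsum += step
    xbar = [(xbar[i] * (wsum - step) + step * x[i]) / wsum for i in range(2)]

# u tends to the dual optimum 1, and xbar tends to primal feasibility
# (xbar[0] + xbar[1] -> 1) even though no single iterate x is feasible.
```

The corner solutions themselves never satisfy the relaxed constraint; only the averaged sequence does, which is the kind of ergodic primal convergence the abstract refers to.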

In this paper we present a branch and price algorithm for the combined vehicle routing and scheduling problem with synchronization constraints. The synchronization constraints are used to model situations when two or more customers need simultaneous service. The synchronization constraints impose a temporal dependency between vehicles, and it follows that a classical decomposition of the vehicle routing and scheduling problem is not directly applicable. With our algorithm, we have solved 44 problems to optimality from the 60 problems used for numerical experiments. The algorithm performs time window branching, and the number of subproblem calls is kept low by adjustment of the columns' service times.

```
@article{diva2:17881,
author = {Bredström, David and Rönnqvist, Mikael},
title = {{A branch and price algorithm for the combined vehicle routing and scheduling problem with synchronization constraints}},
journal = {Social Science Research Network},
year = {2007},
volume = {7},
}
```

In this paper we consider integrated planning of transportation of raw material, production and distribution of products of the supply chain at Södra Cell AB, a major European pulp mill company. The strategic planning period is one year. Decisions included in the planning are transportation of raw materials from harvest areas to pulp mills, production mix and contents at pulp mills, distribution of pulp products from mills to customers, either via terminals or directly, and selection of potential orders and their levels at customers. Distribution is carried out by three different transportation modes: vessels, trains and trucks. We propose a mathematical model for the entire supply chain which includes a large number of continuous variables and a set of binary variables to reflect decisions about product mix and order selection at customers. Five different alternatives regarding production mix in a case study carried out at Södra Cell are analyzed and evaluated. Each alternative describes which products will be produced at which pulp mills.

```
@article{diva2:23547,
author = {Gunnarsson Lidestam, Helene and Rönnqvist, Mikael and Carlsson, Dick},
title = {{Integrated production and distribution planning for Södra Cell AB}},
journal = {Journal of Mathematical Modelling and Algorithms},
year = {2007},
volume = {6},
number = {1},
pages = {25--45},
}
```

We present a sequential linear programming (SLP) algorithm in which the traditional line-search step is replaced by a multi-dimensional search. The algorithm is based on inner approximations of both the primal and dual spaces, which yields a method that in the primal space combines column and constraint generation. The algorithm does not use a merit function, and the linear programming subproblem of the algorithm differs from the one obtained in traditional methods of this type, in the respect that linearized constraints are taken into account only implicitly in a Lagrangian dual fashion. Convergence to a point that satisfies the Karush-Kuhn-Tucker conditions is established. We apply the new method to a selection of the Hock-Schittkowski nonlinear test problems and report a preliminary computational study in a Matlab environment. Since the proposed algorithm combines column and constraint generation, it should be advantageous for problems with large numbers of variables and constraints.

```
@article{diva2:23508,
author = {Daneva (Mitradjieva), Maria and Göthe-Lundgren, Maud and Larsson, Torbjörn and Patriksson, Michael and Rydergren, Clas},
title = {{A Sequential Linear Programming Algorithm with Multi-dimensional Search: Derivation and Convergence}},
journal = {},
year = {2007},
}
```

The health care system in Sweden and many other countries is facing increasing costs. The major reason is the changing age distribution of the population, with more elderly people in need of support. At the same time, health care systems are often very labor and staff intensive. In this paper, we focus on a staff planning problem arising in Sweden where people receive home care from the local authorities. The objective is to develop visiting schedules for care providers that incorporate some restrictions and soft objectives. Each visit has a particular task to be performed, for example: cleaning, washing, personal hygiene and/or nursing activities. Each staff member has skills and each client should, if possible, be visited by the same contact person. The operational situation is continuously changing and planning is done each day. We describe the development of a decision support system Laps Care to aid the planners. The system consists of a number of components including information data bases, maps, optimization routines, and report possibilities. We formulate the problem using a set partitioning model and, for a solution method, we make use of a repeated matching algorithm. The system is currently in operation at a number of home care organizations. We report on the practical impact of the system in the health care organization which was involved in the development. The savings are considerable, both in planning time and in the quality of the routes, as well as in the measured quality for the clients. Numerical experiments of the system are presented.

```
@article{diva2:289833,
author = {Eveborn, Patrik and Flisberg, Patrik and Rönnqvist, Mikael},
title = {{Laps Care: an operational system for staff planning of home care}},
journal = {European Journal of Operational Research},
year = {2006},
volume = {171},
number = {3},
pages = {962--976},
}
```

Road blocking due to thawing or heavy rains annually contributes to a considerable loss of profit in Swedish forestry. Companies have to build large stocks of sawlogs and pulplogs to secure a continuous supply during periods where the accessibility of the road network is uncertain. This storage leads to quality deterioration, which means loss in profit. One approach to reduce the losses due to blocked roads is to upgrade the road network to a standard that guarantees accessibility throughout the year. This article describes a decision support system called RoadOpt for the planning of forest road upgrading. The planning horizon is about one decade. The system uses a Geographical Information System (GIS)-based map user interface to present and analyse data and results. Two important modules are the Swedish road database, which provides detailed information about the road network, and an optimization module consisting of a mixed integer linear programming model. A case study from a major Swedish company is presented.

```
@article{diva2:271199,
author = {Karlsson, Jenny and Rönnqvist, Mikael and Frisk, Mikael},
title = {{RoadOpt: a decision support system for road upgrading in forestry}},
journal = {Scandinavian Journal of Forest Research},
year = {2006},
volume = {21},
number = {S7},
pages = {5--15},
}
```

The vast size of real world stochastic programming instances requires sampling to make them practically solvable. In this paper we extend the understanding of how sampling affects the solution quality of multistage stochastic programming problems. We present a new heuristic for determining good feasible solutions for a multistage decision problem. For power and log-utility functions we address the question of how tree structures, number of stages, number of outcomes and number of assets affect the solution quality. We also present a new method for evaluating the quality of first stage decisions.

```
@article{diva2:271084,
author = {Blomvall, Jörgen and Shapiro, A.},
title = {{Solving multistage asset investment problems by the sample average approximation method}},
journal = {Mathematical programming},
year = {2006},
volume = {108},
number = {2-3},
pages = {571--595},
}
```
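
The sample average approximation (SAA) idea can be illustrated on a problem far simpler than multistage asset investment. The sketch below is a hypothetical one-stage newsvendor with uniform integer demand, not the paper's setting: it replaces the expectation with a sample average, optimizes the sampled problem, and compares against the analytically known optimum.

```python
import random

# Sketch only: SAA for a toy one-stage newsvendor.
# True problem: maximize f(x) = E[2*min(x, D) - x], D uniform on {1, ..., 10}.
# Analytically f(x) = x*(11 - x)/10, so x = 5 and x = 6 are both optimal, f = 3.

random.seed(0)
N = 2000
sample = [random.randint(1, 10) for _ in range(N)]

def saa_value(x):
    # Sample average of the random objective at order quantity x
    return sum(2 * min(x, d) - x for d in sample) / N

# Optimizer of the sampled problem; its optimal value is a statistically
# upward-biased estimate of the true optimal value.
x_hat = max(range(1, 11), key=saa_value)
```

With a moderate sample size the SAA solution lands on one of the two true optimizers; the paper's contribution concerns how this kind of approximation behaves for multistage trees, where sample size requirements grow much faster.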

Modern communication networks often use Internet Protocol routing and the intra-domain protocol OSPF (Open Shortest Path First). The routers in such a network calculate the shortest path to each destination and send the traffic on these paths, using load balancing. The issue of survivability, i.e. the question of how much traffic the network will be able to accommodate if components fail, is increasingly important. We consider the problem of designing a survivable IP network, which also requires determining the routing of the traffic. This is done by choosing the weights used for the shortest path calculations.

```
@article{diva2:271010,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Multiobjective design of survivable IP networks}},
journal = {Annals of Operations Research},
year = {2006},
volume = {147},
number = {1},
pages = {235--253},
}
```

The well-known and established global optimality conditions based on the Lagrangian formulation of an optimization problem are consistent if and only if the duality gap is zero. We develop a set of global optimality conditions that are structurally similar but are consistent for any size of the duality gap. This system characterizes a primal-dual optimal solution by means of primal and dual feasibility, primal Lagrangian ε-optimality, and, in the presence of inequality constraints, a relaxed complementarity condition analogously called δ-complementarity. The total size ε + δ of those two perturbations equals the size of the duality gap at an optimal solution. Further, the characterization is equivalent to a near-saddle point condition which generalizes the classic saddle point characterization of a primal-dual optimal solution in convex programming. The system developed can be used to explain, to a large degree, when and why Lagrangian heuristics for discrete optimization are successful in reaching near-optimal solutions. Further, experiments on a set-covering problem illustrate how the new optimality conditions can be utilized as a foundation for the construction of new Lagrangian heuristics. Finally, we outline possible uses of the optimality conditions in column generation algorithms and in the construction of core problems.

```
@article{diva2:257129,
author = {Larsson, Torbjörn and Patriksson, Michael},
title = {{Global optimality conditions for discrete and nonconvex optimization-with applications to Lagrangian heuristics and column generation}},
journal = {Operations Research},
year = {2006},
volume = {54},
number = {3},
pages = {436--453},
}
```

Monotonic regression (MR) is a least distance problem with monotonicity constraints induced by a partially ordered data set of observations. In our recent publication [In Ser. {\sl Nonconvex Optimization and Its Applications}, Springer-Verlag, (2006) {\bf 83}, pp. 25-33], the Pool-Adjacent-Violators algorithm (PAV) was generalized from completely to partially ordered data sets (posets). The new algorithm, called GPAV, is characterized by very low computational complexity, which is of second order in the number of observations. It treats the observations in a consecutive order, and it can follow any arbitrarily chosen topological order of the poset of observations. The GPAV algorithm produces a sufficiently accurate solution to the MR problem, but the accuracy depends on the chosen topological order. Here we prove that there exists a topological order for which the resulting GPAV solution is optimal. Furthermore, we present results of extensive numerical experiments, from which we draw conclusions about the most and the least preferable topological orders.

```
@article{diva2:257126,
author = {Burdakov, Oleg and Grimvall, Anders and Sysoev, Oleg},
title = {{Data preordering in generalized PAV algorithm for monotonic regression}},
journal = {Journal of Computational Mathematics},
year = {2006},
volume = {24},
number = {6},
pages = {771--790},
}
```
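
For orientation, the classical pool-adjacent-violators algorithm for the completely (linearly) ordered special case can be sketched in a few lines; the poset generalization GPAV that the paper studies is more involved. This is a generic illustrative implementation, not the authors' code: adjacent blocks that violate monotonicity are repeatedly pooled into their weighted mean.

```python
def pav(y, w=None):
    """Pool-Adjacent-Violators: least-squares non-decreasing fit to y."""
    w = w or [1.0] * len(y)
    blocks = []  # stack of (block mean, block weight, block length)
    for yi, wi in zip(y, w):
        mean, weight, count = yi, wi, 1
        # pool backwards while the new block violates monotonicity
        while blocks and blocks[-1][0] > mean:
            m2, w2, c2 = blocks.pop()
            total = weight + w2
            mean = (mean * weight + m2 * w2) / total
            weight, count = total, count + c2
        blocks.append((mean, weight, count))
    fit = []
    for mean, _, count in blocks:
        fit.extend([mean] * count)
    return fit
```

For example, `pav([3, 1, 2, 5, 4])` pools the violating pairs (3, 1) and (5, 4) into their means, giving the piecewise constant fit `[2, 2, 2, 4.5, 4.5]`. Each observation enters and leaves the block stack at most once, so the linearly ordered case runs in linear time.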

Mean value cross decomposition is a symmetric primal-dual decomposition method suitable for optimization problems with both primal and dual structures. It uses only easily solvable subproblems and no difficult master problems. Originally developed for linear problems, it is in this paper extended to nonlinear convex optimization problems. Convergence is proved for a somewhat generalized version, allowing more general weights. Computational results are presented for a network routing problem with congestion costs, a large-scale nonlinear problem with structures that enable decomposition both with respect to variables and constraints. The main goals of the tests are to illustrate the procedure and to indicate that this decomposition approach is more efficient than direct solution with a well established general code.

```
@article{diva2:257096,
author = {Holmberg, Kaj and Kiwiel, Krzysztof},
title = {{Mean Value Cross Decomposition for Nonlinear Convex Problems}},
journal = {Optimization Methods and Software},
year = {2006},
volume = {21},
pages = {401--417},
}
```

We present a mathematical programming model for the combined vehicle routing and scheduling problem with time windows and additional temporal constraints. The temporal constraints allow for imposing pairwise synchronization and pairwise temporal precedence between customer visits, independently of the vehicles. We describe some real-world problems where, in the literature, the temporal constraints are usually greatly simplified in the solution process, even though these constraints may significantly improve the solution quality and/or usability. We also propose an optimization based heuristic to solve real-size instances. The results of numerical experiments substantiate the importance of the temporal constraints in the solution approach. We also make a computational study by comparing a direct usage of a commercial solver against the proposed heuristic, where the latter approach can find high quality solutions within given time limits.

```
@article{diva2:17880,
author = {Bredström, David and Rönnqvist, Mikael},
title = {{Combined vehicle routing and scheduling with temporal precedence and synchronization constraints}},
journal = {EconPapers},
year = {2006},
volume = {18},
}
```

In this paper, we consider a combined terminal location and ship routing problem at Södra Cell AB. The purpose is to supply the customers' annual demand for pulp products while minimizing the distribution costs. Customers are supplied with various pulp products from pulp mills in Scandinavia by ships, trains, or lorries. The ship routes go from the pulp mills to terminals in Europe. From each terminal, the products are transported to customers by lorry, train, or barge. Some customers can be supplied directly from the pulp mills by trains or lorries. We have developed a mathematical model to select which terminals to use and, at the same time, determine the shipping routes. The mixed integer programming model was solved directly using a commercial solver. When the number of routes generated is large, the time required to obtain an optimal solution is too long. Hence, we have developed heuristics in order to obtain an acceptable solution in reasonable time. In addition to the basic case, five different scenarios were tested. Our heuristics provide solutions that are within 0.12% of the optimal ones.

```
@article{diva2:23546,
author = {Gunnarsson (Lidestam), Helene and Rönnqvist, Mikael and Carlsson, Dick},
title = {{A combined terminal location and ship routing problem}},
journal = {Journal of the Operational Research Society},
year = {2006},
volume = {57},
number = {8},
pages = {928--938},
}
```

By taking the guesswork out of the equation, an operations research-based process control system helps cut costs and reduce environmental impact at a Swedish paper mill.

```
@article{diva2:571190,
author = {Flisberg, Patrik and Nilsson, Stefan and Rönnqvist, Mikael},
title = {{Pulp fact, not fiction}},
journal = {OR/MS Today},
year = {2005},
volume = {32},
number = {2},
pages = {42--47},
}
```

The use of supply chain management and optimisation is of increasing importance in the forest industry. The overall wood-flow starts with standing trees in forests and continues with harvesting, bucking, sorting, transportation to terminals, sawmills, pulp mills, paper mills and heating plants, conversion into products such as pulp, paper, lumber, and ends at different customers. Many planning problems arise along the chain and these cover different time horizons. Co-ordinating the wood-flow is a vital concern for many companies. We study Södra, one of the larger Swedish forest companies, which is involved in all stages of the wood-flow. We focus in particular on Södra Cell AB, a company within Södra, which is responsible for pulp production. We describe the operations at Södra Cell and the decision support tools used for supply chain planning. We describe five major projects or cases which focus on improving their supply chain management and optimisation. These cases include the introduction of new technologies for sales and orders, new distribution structures using terminals, and the development of integrated optimisation models and methods.

```
@article{diva2:271376,
author = {Carlsson, D. and Rönnqvist, Mikael},
title = {{Supply chain management in forestry - Case studies at Södra Cell AB}},
journal = {European Journal of Operational Research},
year = {2005},
volume = {163},
number = {3},
pages = {589--616},
}
```

```
@article{diva2:252545,
author = {Burdakov, Oleg and Griewank, Andreas and Lotov, Alexander V},
title = {{Yury G. Evtushenko---a tribute}},
journal = {Optimization Methods and Software},
year = {2005},
volume = {20},
number = {1},
pages = {1--7},
}
```

Discrete choice models characterize the alternatives in the choice set by utilities/attributes. The decision making is described by a probability distribution over the choice set. In this paper we introduce a welfare measure based on expected payoff and expected freedom of choice for the simple one-parameter logit model. In this case the welfare measure turns out to be the so-called composite utility. This means that the composite utility can be interpreted as the combined benefit of expected payoff and expected freedom of choice. The proposed welfare measure can be extended to the linear-in-parameters logit model, nested logit models, and others. The proposed welfare measure is formulated in terms of the choice probability distribution. It depends on the form of the probabilities, but not on any particular derivation of the distribution.

```
@article{diva2:251056,
author = {Erlander, Sven},
title = {{Welfare, freedom of choice and composite utility in the logit model}},
journal = {Social Choice and Welfare},
year = {2005},
volume = {24},
number = {3},
pages = {509--525},
}
```
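
The decomposition described in the abstract (composite utility = expected payoff plus an entropy-based freedom-of-choice term) can be checked numerically for the one-parameter logit model. In the sketch below the utilities `v` and the dispersion parameter `mu` are arbitrary illustrative values, not taken from the paper.

```python
import math

# For logit choice probabilities p_i = exp(mu*v_i) / sum_j exp(mu*v_j), the
# composite utility (logsum) decomposes exactly into expected payoff plus
# (1/mu) times the entropy of the choice distribution.

mu = 0.7                      # illustrative dispersion parameter (assumption)
v = [1.0, 2.5, 0.3, 1.8]      # illustrative alternative utilities (assumption)

z = [math.exp(mu * vi) for vi in v]
p = [zi / sum(z) for zi in z]                        # logit choice probabilities

logsum = math.log(sum(z)) / mu                       # composite utility
payoff = sum(pi * vi for pi, vi in zip(p, v))        # expected payoff
freedom = -sum(pi * math.log(pi) for pi in p) / mu   # (1/mu) * entropy

# logsum equals payoff + freedom (up to floating point), for any v and mu > 0
```

The identity follows from substituting p_i = exp(mu*v_i)/Z into the entropy term: the utility parts cancel the expected payoff and (1/mu) ln Z remains.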

In this paper we suggest an optimization model and a solution method for a shipment planning problem. This problem concerns the simultaneous planning of how to route a fleet of ships and the planning of which products to transport in these ships. The ships are used for moving products from oil refineries to storage depots. There are inventory levels to consider both at the refineries and at the depots. The inventory levels are affected by the process scheduling at the refineries and demand at the depots. The problem is formulated using an optimization model including an aggregated representation of the process scheduling at the refineries. Hence, we integrate the shipment planning and the process scheduling at the refineries. We suggest a solution method based on column generation, valid inequalities, and constraint branching. The solution method is tested on data provided by the Nynas oil refinery company and solutions are obtained within 4 hours, for problem instances of up to 3 refineries, 15 depots, and 4 products when considering a time horizon of 42 days.

```
@article{diva2:244523,
author = {Persson, Jan A. and Göthe-Lundgren, Maud},
title = {{Shipment planning at oil refineries using column generation and valid inequalities}},
journal = {European Journal of Operational Research},
year = {2005},
volume = {163},
number = {3},
pages = {631--652},
}
```

Distribution is a major factor in the supply chain for Södra Cell, a leading manufacturer of pulp intended for paper production. Each year, the company transports large quantities of pulp using ships, trains, and trucks; here we focus on scheduling the ships and optimizing deliveries to minimize distribution costs.

```
@article{diva2:17884,
author = {Bredström, David and Rönnqvist, Mikael and Carlsson, Dick},
title = {{A hybrid algorithm for a pulp distribution problem}},
journal = {IEEE Intelligent Systems},
year = {2005},
volume = {20},
number = {4},
pages = {19--25},
}
```

Monotonic regression is a nonparametric method for estimation of models in which the expected value of a response variable y increases or decreases in all coordinates of a vector of explanatory variables x = (x_{1}, …, x_{p}). Here, we examine statistical and computational aspects of our recently proposed generalization of the pool-adjacent-violators (PAV) algorithm from one to several explanatory variables. In particular, we show how the goodness-of-fit and accuracy of obtained solutions can be enhanced by presorting observed data with respect to their level in a Hasse diagram of the partial order of the observed x-vectors, and we also demonstrate how these calculations can be carried out to save computer memory and computational time. Monte Carlo simulations illustrate how rapidly the mean square difference between fitted and expected response values tends to zero, and how quickly the mean square residual approaches the true variance of the random error, as the number of observations increases up to 10^{4}.

```
@article{diva2:21032,
author = {Burdakov, Oleg and Grimvall, Anders and Hussian, Mohamed and Sysoev, Oleg},
title = {{Hasse diagrams and the generalized PAV-algorithm for monotonic regression in several explanatory variables}},
journal = {Computational Statistics and Data Analysis},
year = {2005},
}
```
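
The presorting step mentioned above, which orders observations by their level in the Hasse diagram of the partial order, can be sketched for a partial order given as DAG edges. This helper is illustrative only, not the authors' implementation: it computes each node's level (length of the longest chain below it) and returns a topological order sorted by level.

```python
from collections import defaultdict, deque

def hasse_levels(n, edges):
    """Return (order, level) for nodes 0..n-1 under the partial order edges,
    where edges contains pairs (a, b) meaning a < b. The level of a node is
    the length of the longest chain below it; sorting by level gives a
    topological order, as in the presorting heuristic described above."""
    succ, indeg = defaultdict(list), [0] * n
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    level = [0] * n
    q = deque(i for i in range(n) if indeg[i] == 0)   # minimal elements
    order = []
    while q:
        v = q.popleft()
        order.append(v)
        for u in succ[v]:
            level[u] = max(level[u], level[v] + 1)    # longest-chain level
            indeg[u] -= 1
            if indeg[u] == 0:
                q.append(u)
    order.sort(key=lambda v: level[v])   # any edge a < b has level[a] < level[b]
    return order, level
```

Since every order relation strictly increases the level, the level-sorted sequence is automatically a valid topological order of the poset.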

```
@article{diva2:21033,
author = {Hussian, Mohamed and Grimvall, Anders and Burdakov, Oleg and Sysoev, Oleg},
title = {{Monotonic regression for the detection of temporal trends in environmental quality data}},
journal = {Match},
year = {2005},
volume = {54},
number = {3},
pages = {535--550},
}
```

In this paper, we study a cost-allocation problem that arises in a distribution-planning situation at the Logistics Department at Norsk Hydro Olje AB, Stockholm, Sweden. We consider the routes from one depot during one day. The total distribution cost for these routes is to be divided among the customers that are visited. This cost-allocation problem is formulated as a vehicle-routing game (VRG), allowing the use of vehicles with different capacities. Cost-allocation methods based on different concepts from cooperative game theory, such as the core and the nucleolus, are discussed. A procedure that can be used to investigate whether the core is empty or not is presented, as well as a procedure to compute the nucleolus. Computational results for the Norsk Hydro case are presented and discussed.

```
@article{diva2:267172,
author = {Engevall, Stefan and Göthe-Lundgren, Maud and Värbrand, Peter},
title = {{The heterogeneous vehicle-routing game}},
journal = {Transportation Science},
year = {2004},
volume = {38},
number = {1},
pages = {71--85},
}
```

The electricity market in Sweden has changed during recent years. Electricity for industrial use can now be purchased from a number of competing electricity suppliers. Hence, the price for each kilowatt-hour is significantly lower than it was just two years ago and interest in electricity conservation measures has declined. However, part of the electricity tariff, i.e. the demand cost expressed in Swedish Kronor (SEK) for each kilowatt, is almost the same as before. Attention has thereby been drawn to load management measures in order to reduce this specific cost. Saving one kWh might lead to a monetary saving of between SEK 0.22 and SEK 914; this paper demonstrates how to eliminate only those kWh that actually save a significant amount of money. A load management system has been installed in a small carpentry factory that can turn off equipment based on a pre-set priority and number of minutes each hour. The question now is what level of the electricity load is optimal in a strictly mathematical sense, i.e. how many kW should be set in the load management computer in order to maximise profitability? In this paper, we develop a mathematical model that can be used as a tool both to find the most profitable subscription level and to control the choices to be made. Numerical results from a case study are presented.

```
@article{diva2:267168,
author = {Gustafsson, Stig-Inge and Rönnqvist, Mikael and Claesson, M},
title = {{Optimization models and solution methods for load management}},
journal = {International journal of energy research (Print)},
year = {2004},
volume = {28},
number = {4},
pages = {299--317},
}
```

In this paper, we consider empty freight car distribution in a scheduled railway system. We analyze the cost structure for the repositioning of empty cars, and conclude that the distribution cost shows an economy-of-scale behavior. In addition to the cost proportional to the number of cars sent from origin to destination, there is a cost related to car-handling operations at yards, which depends on the number of car groups that are handled. Thus, if we can find a transportation pattern in which fewer but larger groups of cars are built, the total distribution cost can be decreased. The objective of the paper is to propose an optimization model that explicitly takes this economy-of-scale effect into account. We use a time-dependent network to describe the possible car movements in time and space, and show how this network can be transformed into a network with fixed costs on links representing movements of cars with identical origin and destination terminals. The resulting optimization model is a capacitated network design model, where each capacity constraint limits the flow on several arcs. We describe a tabu heuristic for solving the model, and present computational results.

```
@article{diva2:266652,
author = {Joborn, M. and Crainic, T.G. and Gendreau, M. and Holmberg, Kaj and Lundgren, Jan},
title = {{Economies of scale in empty freight car distribution in scheduled railways}},
journal = {Transportation Science},
year = {2004},
volume = {38},
number = {2},
pages = {121--134},
}
```

We present a new location model, which is a generalization of the bicriteria median problem. We introduce two sum objectives and criteria-dependent edge lengths. For this NP-complete problem, a solution method finding all the efficient solutions is presented. The method is a two-phase approach, which can easily be applied as an interactive method. In Phase 1 the supported solutions are found, and in Phase 2 the unsupported solutions are found. This method can be thought of as a general approach to bi-objective combinatorial optimization (BOCO) problems.

```
@article{diva2:266575,
author = {Skriver, A.J.V. and Andersen, K.A. and Holmberg, Kaj},
title = {{Bicriteria network location (BNL) problems with criteria dependent lengths and minisum objectives}},
journal = {European Journal of Operational Research},
year = {2004},
volume = {156},
number = {3},
pages = {541--549},
}
```
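
In Phase 1 of such a two-phase method, the supported efficient solutions are those optimal for some positive weighted sum of the two objectives; for a finite set of objective vectors they are exactly the vertices of the lower-left convex hull of the nondominated points. A minimal sketch for generic bi-objective minimization (not the paper's network location model):

```python
def nondominated(points):
    """Nondominated (Pareto) points of a finite bi-objective minimization set."""
    pts = sorted(set(points))          # increasing in the first objective
    res, best2 = [], float("inf")
    for p in pts:
        if p[1] < best2:               # strictly improves the second objective
            res.append(p)
            best2 = p[1]
    return res

def supported(points):
    """Supported nondominated points: vertices of the lower-left convex hull,
    i.e. those obtainable by minimizing some positive weighted sum (Phase 1)."""
    hull = []
    for p in nondominated(points):     # monotone chain over sorted points
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) <= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```

For instance, with objective vectors (1,5), (2,4), (3,1), (4,4), (5,0), the point (2,4) is nondominated but unsupported, since it lies above the segment between (1,5) and (3,1); Phase 2 of a two-phase method searches the triangles between consecutive supported points for exactly such solutions.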

We present a column generation procedure for the side constrained traffic equilibrium problem. A dual stabilization scheme is introduced to improve the computational performance. Computational experiments for the case of linear side constraints are presented. The test problems are well known traffic equilibrium instances where side constraints of link flow capacity type and general linear side constraints are added. The computational results are promising especially for instances with a relatively small number of side constraints.

```
@article{diva2:244481,
author = {Larsson, Torbjörn and Patriksson, Michael and Rydergren, Clas},
title = {{A column generation procedure for the side constrained traffic equilibrium problem}},
journal = {},
year = {2004},
}
```

```
@article{diva2:243063,
author = {Palmgren, Myrna and Rönnqvist, Mikael and Värbrand, Peter},
title = {{A near-exact method for solving the log-truck scheduling problem}},
journal = {International Transactions in Operational Research},
year = {2004},
volume = {11},
number = {4},
pages = {447--464},
}
```

Scheduling of staff is an important area, both from an academic and an industrial point of view, and much attention has been devoted to developing new and efficient methods and models. In this paper, we consider the problem of finding schedules for staff members with individual skills and preferences, given a workforce demand and a set of working rules and regulations. The planning horizon is typically one week to several months. The problem both constructs tasks and simultaneously allocates them to staff members. The purpose of this paper is not to develop new theoretical results; instead, it deals with novel applications of known approaches to real-world practice. We describe a general scheduling software called SCHEDULER that includes a number of important features. The model is based on an elastic set-partitioning model, and as solution method we use a branch-and-price algorithm. As branching strategy we make use of constraint branching, and the column generator is a nested constrained shortest path formulation. An important feature is that only legal schedules are generated and used within the model. The system also allows for task changes within shifts, a general description of legal restrictions, preferences and allowable times. The system is in use at a number of companies, and we report on its usage at some of them. We also give some numerical results to illustrate the behavior of some important features.

```
@article{diva2:243061,
author = {Rönnqvist, Mikael and Eveborn, Patrik},
title = {{Scheduler - A system for staff planning}},
journal = {Annals of Operations Research},
year = {2004},
volume = {128},
number = {1-4},
pages = {21--45},
}
```

```
@article{diva2:243060,
author = {Rönnqvist, Mikael and Fjeld, Dag and Puodziunas, M},
title = {{The potential of improvement for tactical planning of roundwood transport in Lithuanian state forest enterprises}},
journal = {Baltic forestry},
year = {2004},
volume = {10},
number = {1},
pages = {79--88},
}
```


```
@article{diva2:243059,
author = {Rönnqvist, Mikael and Flisberg, Patrik and Eveborn, Patrik},
title = {{Home care operations}},
journal = {OR/MS today},
year = {2004},
volume = {April},
pages = {38--43},
}
```

```
@article{diva2:242909,
author = {Larsson, Torbjörn and Yuan, Di},
title = {{An augmented Lagrangian algorithm for large scale multicommodity routing}},
journal = {Computational optimization and applications},
year = {2004},
volume = {27},
pages = {187--215},
}
```

In this paper we present a nonlinear dynamic programming algorithm for the computation of forward rates within the maximum smoothness framework. The algorithm implements the forward rate positivity constraint for a one-parametric family of smoothness measures, and it handles price spreads in the constraining data set. We investigate the outcome of the algorithm using the Swedish bond market, showing examples where the absence of the positivity constraint leads to negative interest rates. Furthermore, we investigate the predictive accuracy of the algorithm as we move along the family of smoothness measures. Among other things, we observe that the inclusion of spreads not only improves the smoothness of forward curves but also significantly reduces the predictive error.

```
@article{diva2:242723,
author = {Blomvall, Jörgen and Manzano, Julian},
title = {{Positive forward rates in the maximum smoothness framework}},
journal = {Quantitative finance (Print)},
year = {2004},
volume = {4},
number = {2},
pages = {221--232},
}
```

We consider network design and routing for Internet Protocol (IP) traffic. The design problem concerns capacity dimensioning of communication links, where the design cost consists of fixed charges and linear capacity expansion costs. The optimization problem also concerns determining the amount of traffic demand to be carried by the network and the metric used by a shortest path routing protocol. We present a novel linear mixed-integer mathematical formulation and two heuristic solution procedures. The first heuristic uses mixed-integer programming to generate a sequence of routing solutions. The second solution approach is a simulated annealing metaheuristic. Computational experiments for synthesized and real-life networks show that high-quality solutions can be obtained by both approaches.

```
@article{diva2:242697,
author = {Holmberg, Kaj and Yuan, Di},
title = {{Optimization of Internet Protocol Network Design and Routing}},
journal = {Networks},
year = {2004},
volume = {43},
number = {1},
pages = {39--53},
}
```
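The routing side of the model above assumes that traffic follows shortest paths induced by the chosen link metric. As a minimal illustration (the network and weights below are hypothetical, and this is not the authors' formulation), shortest-path distances under a given metric can be computed with Dijkstra's algorithm:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source under a given link metric.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict mapping each reachable node to its distance.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical four-node network with administratively chosen metrics.
net = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
    "d": [],
}
print(dijkstra(net, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

In OSPF-style routing, choosing the metric (the link weights) indirectly determines these shortest paths, which is what makes metric selection an optimization variable in the design problem.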

In this paper we present a tabu search heuristic which can be used for scheduling the production at an oil refinery. The scheduling problem is to decide which production modes to use at the different processing units at each point in time. The problem is a type of lot-sizing problem where costs of changeovers, inventories and production are considered. In the suggested tabu search heuristic we explore the use of variable neighbourhoods, dynamic penalties and different tabu lists. Computational results are presented for different versions of the heuristic, and the results are compared to the best-known lower bound for a set of scheduling scenarios.

```
@article{diva2:242636,
author = {Persson, Jan and Göthe-Lundgren, Maud and Lundgren, Jan and Gendron, Bernard},
title = {{A tabu search heuristic for scheduling the production processes at an oil refinery}},
journal = {International Journal of Production Research},
year = {2004},
volume = {42},
number = {3},
pages = {445--471},
}
```
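As a generic illustration of the mechanism (a sketch of the standard tabu search loop, not the authors' refinery-specific heuristic; the toy problem and all names are hypothetical), tabu search keeps a short-term memory of recently visited solutions and repeatedly moves to the best non-tabu neighbor:

```python
def tabu_search(init, neighbors, cost, iterations=100, tenure=5):
    """Generic tabu search skeleton: move to the best admissible
    (non-tabu) neighbor, keeping recent solutions tabu for `tenure` steps."""
    current = best = init
    tabu = [init]  # short-term memory of recent solutions
    for _ in range(iterations):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)  # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# Toy "mode sequence": minimize the number of changeovers between two modes.
def flip_neighbors(seq):
    return [seq[:i] + (1 - seq[i],) + seq[i + 1:] for i in range(len(seq))]

def changeovers(seq):
    return sum(a != b for a, b in zip(seq, seq[1:]))

start = (0, 1, 0, 1, 0, 1)
print(changeovers(tabu_search(start, flip_neighbors, changeovers)))  # 0
```

The tabu list lets the search accept non-improving moves without immediately cycling back, which is the feature the paper varies through different tabu lists and dynamic penalties.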

We study the supply chain problem of a large international pulp producer with five pulp mills located in Scandinavia. The company currently uses manual planning for most of its supply chain, which includes harvesting and transportation of pulp, production scheduling and distribution of products to customers. We have developed two new mixed integer models that determine daily supply chain decisions over a planning horizon of three months. One model is based on column generation, where the generation phase is to find new production plans using a shortest path network. The second, slightly less flexible, model has the daily production decisions explicitly included. In order to solve the models within practical time limits we use a flexible approach that aggregates the less immediate decisions. We also introduce a novel constraint branching heuristic. The models and solution approaches are intended to become an integrated component in the company’s new management system. In tests and comparisons with today’s manual planning, we have found new strategic policies that significantly reduce the company’s supply chain costs.

```
@article{diva2:17882,
author = {Bredström, David and Carlsson, Dick and Lundgren, Jan T. and Mason, Andrew and Rönnqvist, Mikael},
title = {{Supply chain optimization in the pulp mill industry: IP models, column generation and novel constraint branches}},
journal = {European Journal of Operational Research},
year = {2004},
volume = {156},
number = {1},
pages = {2--22},
}
```

We study the problem of deciding when and where forest residues are to be converted into forest fuel, and how the residues are to be transported and stored in order to satisfy demand at heating plants. Decisions also include whether or not additional harvest areas and saw-mills are to be contracted. In addition, we consider the flow of products from saw-mills and import harbors, and address the question about which terminals to use. The planning horizon is one year and monthly time periods are considered. The supply chain problem is formulated as a large mixed integer linear programming model. In order to obtain solutions within reasonable time we have developed a heuristic solution approach. Computational results from a large Swedish supplying entrepreneur are reported.

```
@article{diva2:23545,
author = {Gunnarsson (Lidestam), Helene and Lundgren, Jan and Rönnqvist, Mikael},
title = {{Supply chain modelling of forest fuel}},
journal = {European Journal of Operational Research},
year = {2004},
volume = {158},
number = {1},
pages = {103--123},
}
```

In [Euro. J. Operat. Res. 143 (2002) 452, Opt. Meth. Software 17 (2002) 383] a Riccati-based primal interior point method for multistage stochastic programmes was developed. This algorithm has several interesting features. It can solve problems with a nonlinear node-separable convex objective, local linear constraints and global linear constraints. This paper demonstrates that the algorithm can be efficiently parallelized. The solution procedure in the algorithm allows for a simple but efficient method to distribute the computations. The parallel algorithm has been implemented on a low-budget parallel computer, where we experience almost perfect linear speedup and very good scalability of the algorithm.

```
@article{diva2:267584,
author = {Blomvall, Jörgen},
title = {{A multistage stochastic programming algorithm suitable for parallel computing}},
journal = {Parallel Computing},
year = {2003},
volume = {29},
number = {4},
pages = {431--445},
}
```

It is often possible to use different convexification techniques with different transformations in global optimization. In 'Optimization Eng. (submitted for publication)', a new global optimization technique based on convexifying signomial terms is presented. The technique is based on the solution of a sequence of convexified approximate subproblems. The choice of transformation functions is clearly essential. It is not enough to use convexifications that result in convex and underestimating problems if an effective optimization approach is wanted: the transformations should make the resulting problems convex while at the same time not changing the problem more than necessary. It will be shown in this article that for certain problems the choice of transformations has a clear influence on the efficiency of the proposed optimization approach. Using transformations other than those proposed in 'Optimization Eng. (submitted for publication)' will, in some examples, give solution times that are shorter by an order of magnitude. The concept of power convex functions (Generalized Concavity Optimization Econ. (1981) 153) will be used as a measure of the quality of the transformations. In this paper, the new transformation functions are also shown to be very successful in a heat exchanger network synthesis application.

```
@article{diva2:267527,
author = {Björk, Kaj-Mikael and Lindberg, Per Olov and Westerlund, Tapio},
title = {{Some convexifications in global optimization of problems containing signomial terms}},
journal = {Computers and Chemical Engineering},
year = {2003},
volume = {27},
number = {5},
pages = {669--679},
}
```

```
@article{diva2:242665,
author = {Rönnqvist, Mikael},
title = {{Modeller för operationsanalys}},
journal = {Kungl. Skogs- och Lantbruksakademiens Tidskrift},
year = {2003},
volume = {142},
number = {19},
pages = {27--32},
}
```

```
@article{diva2:242647,
author = {Palmgren, Myrna and Rönnqvist, Mikael and Värbrand, Peter},
title = {{A solution approach for log truck scheduling based on composite pricing and branch and bound}},
journal = {International Transactions in Operational Research},
year = {2003},
volume = {10},
pages = {433--447},
}
```

The paper provides two contributions. First, we present new convergence results for conditional ε-subgradient algorithms for general convex programs. The results obtained here extend the classical ones by Polyak [Sov. Math. Doklady 8 (1967) 593, USSR Comput. Math. Math. Phys. 9 (1969) 14, Introduction to Optimization, Optimization Software, New York, 1987] as well as the recent ones in [Math. Program. 62 (1993) 261, Eur. J. Oper. Res. 88 (1996) 382, Math. Program. 81 (1998) 23] to a broader framework. Second, we establish the application of this technique to solve non-strictly convex-concave saddle-point problems, such as primal-dual formulations of linear programs. Contrary to several previous solution algorithms for such problems, a saddle-point is generated by a very simple scheme in which one component is constructed by means of a conditional ε-subgradient algorithm, while the other is constructed by means of a weighted average of the (inexact) subproblem solutions generated within the subgradient method. The convergence result extends those of [Minimization Methods for Non-Differentiable Functions, Springer-Verlag, Berlin, 1985, Oper. Res. Lett. 19 (1996) 105, Math. Program. 86 (1999) 283] for Lagrangian saddle-point problems in linear and convex programming, and of [Int. J. Numer. Meth. Eng. 40 (1997) 1295] for a linear-quadratic saddle-point problem arising in topology optimization in contact mechanics.

```
@article{diva2:242646,
author = {Larsson, Torbjörn and Patriksson, Michael and Strömberg, Ann-Brith},
title = {{On the convergence of conditional ε-subgradient methods for convex programs and convex-concave saddle-point problems}},
journal = {European Journal of Operational Research},
year = {2003},
volume = {151},
number = {3},
pages = {461--473},
}
```
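In simplified notation (a sketch under the usual step-length assumptions, not the paper's exact statement), the conditional ε-subgradient step for the concave dual function θ over the dual set U, together with the ergodic (step-length-weighted) average of the subproblem solutions x^s, takes the form:

```latex
% Dual step: project an eps-subgradient step onto the dual set U
u^{k+1} = P_U\!\left(u^k + \alpha_k g^k\right), \qquad
g^k \in \partial_{\varepsilon_k}\theta(u^k),
% Ergodic primal iterate: weighted average of the inexact subproblem solutions
\bar{x}^{\,k} = \frac{\sum_{s=0}^{k-1} \alpha_s x^s}{\sum_{s=0}^{k-1} \alpha_s}.
```

The pair (ergodic primal average, dual iterates) is what converges to the saddle-point set in the results summarized above.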

We present a solution algorithm for an inverse nonlinear multicommodity network flow problem. This problem is to find link cost adjustments that make a given target link flow solution optimal in a nonlinear multicommodity network flow problem, and that are optimal with respect to a specified objective. The solution procedure uses column generation. We present computational results for instances where the nonlinear multicommodity network flow problems are small and medium scale traffic equilibrium problems, and where system optimal link flows are targeted. The computational results show that the solution procedure is a viable approach for solving medium-scale instances of the inverse traffic equilibrium problem.

```
@article{diva2:242645,
author = {Larsson, Torbjörn and Patriksson, Michael and Rydergren, Clas},
title = {{Inverse nonlinear multicommodity flow optimization by column generation}},
journal = {Optimization Methods and Software},
year = {2003},
volume = {18},
number = {5},
pages = {601--613},
}
```

Optimization models and methods have been used extensively in the forest industry. In this paper we describe the general wood-flow in forestry and a variety of planning problems. These cover planning periods from a fraction of a second to more than one hundred years. The problems are modelled using linear, integer and nonlinear models. The solution methods used depend on the required solution time and include, for example, dynamic programming, LP methods, branch-and-bound methods, heuristics and column generation. The importance of modelling and qualitative information is also discussed.

```
@article{diva2:242649,
author = {Rönnqvist, Mikael},
title = {{Optimization in forestry}},
journal = {Mathematical programming},
year = {2003},
volume = {97},
pages = {267--284},
}
```

We build an investment model based on stochastic programming. In the model we buy at the ask price and sell at the bid price. We apply the model to a case where we can invest in a Swedish stock index, call options on the index and the risk-free asset. By reoptimizing the portfolio on a daily basis over a ten-year period, it is shown that options can be used to create a portfolio that outperforms the index. With ex post analysis, it is furthermore shown that we can create a portfolio that dominates the index in terms of mean and variance, i.e., at a given level of risk we could have achieved a higher return using options.

```
@article{diva2:242639,
author = {Blomvall, Jörgen and Lindberg, Per Olov},
title = {{Back-testing the performance of an actively managed option portfolio at the Swedish Stock Market, 1990--1999}},
journal = {Journal of Economic Dynamics and Control},
year = {2003},
volume = {27},
number = {6},
pages = {1099--1112},
}
```

Today most sawmills use optimization techniques to maximize yields from logs. If open market conditions are assumed when maximizing volume or value yields, overproduction of some products and underproduction of others may result. In this study, a system is described that links production with log sawing optimization while also addressing demands for timber products of differing qualities. The system is based on an optimization framework that is implemented in a log sawing simulator, AUTOSAW. Unlike other optimization systems that use static product "values" such as board volume or timber price, dynamic values are used that are updated regularly according to previous timber production. The efficiency of the system was tested with three orderbooks, via comparisons with volume- and value-optimized solutions obtained from sawing pruned Pinus radiata D. Don logs. In each case, product optimization required fewer logs to satisfy all demands than did either volume or value optimization. Furthermore, underproduction was eliminated and overproduction reduced.

```
@article{diva2:269869,
author = {Todoroki, C and Rönnqvist, Mikael},
title = {{Dynamic control of timber production at a sawmill with log sawing optimization}},
journal = {Scandinavian Journal of Forest Research},
year = {2002},
volume = {17},
number = {1},
pages = {79--89},
}
```

This paper examines the proposition that homotheticity is equivalent to the property that (e.g., in the context of a production function) the marginal rate of substitution is constant along any ray from the origin. This claim is made in many places, but hitherto the prerequisites have not been stated explicitly. In the present contribution it is demonstrated that an additional condition is required for the claim to hold. We present a theorem that achieves equivalence by also assuming 'nowhere ray constancy'. It turns out that this condition is implied by assumptions often made, e.g., in production theory. Further, a complete characterization is given of the class of functions that satisfy ray constant marginal rates of substitution or, somewhat more generally, a condition of ray parallel gradients. In addition to homothetic functions this class contains functions homogeneous of degree 0 (i.e., ray constant) and functions which are homothetic in disjoint regions separated by regions of ray constancy.

```
@article{diva2:269844,
author = {Lindberg, Per Olov and Eriksson, EA and Mattsson, LG},
title = {{Homothetic functions revisited}},
journal = {Economic Theory},
year = {2002},
volume = {19},
number = {2},
pages = {417--427},
}
```

A new algorithm for solving smooth large-scale minimization problems with bound constraints is introduced. The way of dealing with active constraints is similar to the one used in some recently introduced quadratic solvers. A limited-memory multipoint symmetric secant method for approximating the Hessian is presented. Positive-definiteness of the Hessian approximation is not enforced. A combination of trust-region and conjugate-gradient approaches is used to exploit useful negative curvature information. Global convergence is proved for a general model algorithm. Results of numerical experiments are presented.

```
@article{diva2:269608,
author = {Burdakov, Oleg and Martinez, JM and Pilotta, EA},
title = {{A limited-memory multipoint symmetric secant method for bound constrained optimization}},
journal = {Annals of Operations Research},
year = {2002},
volume = {117},
number = {1-4},
pages = {51--70},
}
```

We show that a Riccati-based multistage stochastic programming solver for problems with separable convex linear/nonlinear objective developed in previous papers can be extended to solve more general stochastic programming problems. With a Lagrangean relaxation approach, local and global equality constraints can also be handled by the Riccati-based primal interior point solver. The efficiency of the approach is demonstrated on a 10-stage stochastic programming problem containing both local and global equality constraints. The problem has 1.9 million scenarios, 67 million variables and 119 million constraints, and was solved in 97 min on a 32-node PC cluster.

```
@article{diva2:268753,
author = {Blomvall, Jörgen and Lindberg, Per Olov},
title = {{A Riccati-based primal interior point solver for multistage stochastic programming - Extensions}},
journal = {Optimization Methods and Software},
year = {2002},
volume = {17},
number = {3},
pages = {383--407},
}
```

In this paper we describe a production planning and scheduling problem in an oil refinery company. The problem concerns the planning and the utilization of a production process consisting of one distillation unit and two hydro-treatment units. In the process crude oil is transformed to bitumen and naphthenic special oils. The aim of the scheduling is to decide which mode of operation to use in each processing unit at each point in time, in order to satisfy the demand while minimizing the production cost and taking storage capacities into account. The production cost includes costs for changing mode and for holding inventory. We formulate a mixed integer linear programming model for the scheduling problem. The model can be regarded as a generalized lot-sizing problem, where inventory capacities are considered and more than one product is obtained for some modes of operation. A number of modifications and extensions of the model are also discussed. It is shown how the optimization model can be used as a viable tool for supporting production planning and scheduling at the refinery, and that it is possible to analyze scheduling scenarios of realistic sizes. It is also shown that the model can support shipment planning and strategic decisions concerning new products and investments in storage capacity.

```
@article{diva2:267819,
author = {Göthe-Lundgren, Maud and Lundgren, Jan T. and Persson, Jan A.},
title = {{An optimization model for refinery production scheduling}},
journal = {International Journal of Production Economics},
year = {2002},
volume = {78},
number = {3},
pages = {255--270},
}
```

We propose a new method for certain multistage stochastic programs with linear or nonlinear objective function, combining a primal interior point approach with a linear-quadratic control problem over the scenario tree. The latter problem, which is the direction-finding problem for the barrier subproblem, is solved through dynamic programming using Riccati equations. In this way we combine the low iteration count of interior point methods with an efficient solver for the subproblems. The computational results are promising. We have solved a financial problem with 1,000,000 scenarios, 15,777,740 variables and 16,888,850 constraints in 20 hours on a moderate computer.

```
@article{diva2:267710,
author = {Blomvall, Jörgen and Lindberg, Per Olov},
title = {{A Riccati-based primal interior point solver for multistage stochastic programming}},
journal = {European Journal of Operational Research},
year = {2002},
volume = {143},
number = {2},
pages = {452--461},
}
```

We apply a recent extension of the Bregman proximal method for convex programming to LP relaxations of 0-1 problems. We allow inexact subproblem solutions obtained via dual ascent, increasing their accuracy successively to retain global convergence. Our framework is applied to relaxations of large-scale set covering problems that arise in airline crew scheduling. Approximate relaxed solutions are used to construct primal feasible solutions via a randomized heuristic. Encouraging preliminary experience is reported.

```
@article{diva2:289371,
author = {Kiwiel, Krzysztof C. and Lindberg, Per Olov and Nou, Andreas},
title = {{Bregman proximal relaxation of large-scale 0-1 problems}},
journal = {Computational optimization and applications},
year = {2000},
volume = {15},
number = {1},
pages = {33--44},
}
```

Detailed three-dimensional models are nowadays frequently used in the cross-cutting (bucking) of tree stems into logs and in the breakdown of logs into boards and flitches. Such models require increasingly sophisticated optimization models to assist planners (or automated decision support systems) in decision making. In this paper we develop two techniques that are linked to each other. The first is concerned with establishing high-quality analytic approximations of full trees, which are needed in full-stem bucking applications. One important aspect is that inaccuracies due to measurement error can be reduced. The second is a transformation technique that makes it possible to apply curve sawing on logs in a standard straight sawing system. Numerical results based on real data are presented that support the usefulness of the techniques.

```
@article{diva2:270510,
author = {Rönnqvist, Mikael and Todoroki, C and Allsopp, T},
title = {{Usage of 3D geometry descriptions and transformation techniques in log bucking and curve sawing}},
journal = {Annals of Operations Research},
year = {2000},
volume = {95},
pages = {93--113},
}
```

```
@article{diva2:253326,
author = {Engevall, Stefan and Göthe-Lundgren, Maud and Värbrand, Peter},
title = {{A strong lower bound for the node weighted Steiner tree problem}},
journal = {Networks},
year = {1998},
volume = {31},
pages = {11--17},
}
```

## Books

Optimering is a complete introductory textbook in optimization, suitable for both basic and more advanced university courses. The reader is given a versatile toolbox for solving practical optimization problems. The focus is on useful methods rather than theory. Other distinguishing features: it covers all parts of the fundamentals of optimization; it is unusually thorough in its treatment of combinatorial optimization and optimization in graphs; and it contains a large number of exercises, enough for both teacher-led lessons and homework assignments. The language is plain, and the descriptions consistently begin with examples in which the author concretely explains when and how the methods can be used.

```
@book{diva2:403332,
author = {Holmberg, Kaj},
title = {{Optimering}},
publisher = {Liber},
year = {2010},
}
```

This book stems from a desire to understand the underlying assumptions and structure of the choice probability models most often used in transportation planning. The book investigates how far a new way of defining cost minimizing behavior can take us. All commonly used choice probability distributions of the logit type – log linear probability functions – follow from cost minimizing behavior defined in the new way; some new nested models also appear. The new approach provides a deeper understanding of what is at work in the models. The new way of defining cost minimizing behavior is as follows: cost minimizing behavior pertains if the likelihood (probability) of any independent sample of observations is a decreasing function of the average cost of the sample. Extreme value distributed random variables are not used in the derivation of models. A measure of freedom of choice related to the Shannon measure of how much "choice" is involved is used to obtain a welfare measure which is equal to composite cost. More on http://springer.com/978-3-642-11910-1

```
@book{diva2:382894,
author = {Erlander, Sven},
title = {{Cost-Minimizing Choice Behavior in Transportation Planning: A Theoretical Framework for Logit Models}},
publisher = {Springer},
year = {2010},
address = {Heidelberg},
}
```

## Book chapters

We provide new insights into the mean-variance portfolio optimization problem, based on performing eigendecomposition of the covariance matrix. The result of this decomposition can be given an interpretation in terms of uncorrelated eigenportfolios. When only some of the eigenvalues and eigenvectors are used, the resulting mean-variance problem is an approximation of the original one. A solution to the approximation yields lower and upper bounds on the original mean-variance problem; these bounds are tight if sufficiently many eigenvalues and eigenvectors are used in the approximation. Even tighter bounds are obtained through the use of a linearized error term of the unused eigenvalues and eigenvectors.

We provide theoretical results for the upper bounding quality of the approximate problem and the cardinality of the portfolio obtained, and also numerical illustrations of these results. Finally, we propose an ad hoc linear transformation of the mean-variance problem, which in practice significantly strengthens the bounds obtained from the approximate mean-variance problem.

```
@incollection{diva2:814620,
author = {Mayambala, Fred and Rönnberg, Elina and Larsson, Torbjörn},
title = {{Eigendecomposition of the mean-variance portfolio optimization model}},
booktitle = {Optimization, control, and applications in the information age},
year = {2015},
pages = {209--232},
publisher = {Springer},
}
```
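As a numerical illustration of the decomposition (random, hypothetical data; not the chapter's bounds themselves), the variance x'Σx of a portfolio x splits over the eigenpairs of the covariance matrix Σ, and truncating to the k largest eigenvalues underestimates the variance:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 5))
Sigma = B @ B.T                 # positive semidefinite covariance, 8 assets
vals, vecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order
x = np.full(8, 1 / 8)               # equal-weight portfolio

# Full variance equals the sum over all eigenpairs of lambda_i * (v_i' x)^2.
full = x @ Sigma @ x
by_eigen = sum(l * (v @ x) ** 2 for l, v in zip(vals, vecs.T))
assert np.isclose(full, by_eigen)

# Keeping only the k largest eigenpairs gives a lower bound on the variance,
# since every discarded term lambda_i * (v_i' x)^2 is nonnegative.
k = 2
approx = sum(l * (v @ x) ** 2 for l, v in zip(vals[-k:], vecs.T[-k:]))
print(approx <= full + 1e-12)  # True
```

Each eigenvector can be read as an uncorrelated "eigenportfolio"; the approximate mean-variance problem in the chapter works with a subset of them.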

Mathematical programs with cardinality constraints are optimization problems with an additional constraint which requires the solution to be sparse in the sense that the number of nonzero elements, i.e. the cardinality, is bounded by a given constant. Such programs can be reformulated as mixed-integer ones in which the sparsity is modeled with the use of complementarity-type constraints. It is shown that the standard relaxation of the integrality leads to a nonlinear optimization program with the striking property that its solutions (global minimizers) are the same as the solutions of the original program with cardinality constraints. Since the number of local minimizers of the relaxed program is typically larger than the number of local minimizers of the cardinality-constrained problem, the relationship between the local minimizers is also discussed in detail. Furthermore, we show under which assumptions the standard KKT conditions are necessary optimality conditions for the relaxed program. The main result obtained for such conditions is significantly different from the existing optimality conditions that are known for the somewhat related class of mathematical programs with complementarity constraints.

```
@incollection{diva2:763725,
author = {Burdakov, Oleg and Kanzow, Christian and Schwartz, Alexandra},
title = {{On a Reformulation of Mathematical Programs with Cardinality Constraints}},
booktitle = {Advances in Global Optimization},
year = {2015},
pages = {3--14},
publisher = {Springer International Publishing},
address = {Switzerland},
}
```
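In simplified notation (a sketch; see the chapter for the precise assumptions), the cardinality-constrained program with bound κ and its relaxed complementarity-type reformulation with an auxiliary vector y can be written as:

```latex
% Original program: at most kappa nonzero components
\min_{x \in X} \; f(x) \quad \text{s.t.} \quad \|x\|_0 \le \kappa,
% Relaxed reformulation: y_i = 1 forces x_i = 0, and enough y_i must be active
\min_{x \in X,\; y} \; f(x) \quad \text{s.t.} \quad
e^\top y \ge n - \kappa, \qquad x_i y_i = 0 \;\; (i = 1,\dots,n), \qquad 0 \le y \le e.
```

With binary y the two programs are equivalent; the result summarized above is that relaxing y to the box [0, 1] preserves the global minimizers in x.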

This book presents some recent systems engineering and mathematical tools for health care along with their real-world applications by health care practitioners and engineers. Advanced approaches, tools, and algorithms used in operating room scheduling and patient flow are covered. State-of-the-art results from applications of data mining, business process modeling, and simulation in healthcare, together with optimization methods, form the core of the volume. Systems Analysis Tools for Better Health Care Delivery illustrates the increased need of partnership between engineers and health care professionals. This book will benefit researchers and practitioners in health care delivery institutions, staff members and professionals of specialized hospital units, and lecturers and graduate students in engineering, applied mathematics, business administration and health care.

```
@incollection{diva2:512138,
author = {Rönnberg, Elina and Larsson, Torbjörn and Bertilsson, Ann},
title = {{Automatic scheduling of nurses: What does it take in practice?}},
booktitle = {Systems Analysis Tools for Better Healthcare Delivery},
year = {2013},
pages = {151--178},
publisher = {Springer Science+Business Media B.V.},
address = {New York, NY},
}
```

The inverse shortest path problem has received considerable attention in the literature since the seminal paper by Burton and Toint from 1992. Given a graph and a set of paths, the problem is to find arc costs such that all specified paths are shortest paths. The quality of the arc costs is measured by the deviation from some ideal arc costs. Our contribution is a novel modeling technique for this problem based on fundamental cycle bases. For LP compatible norms we present a cycle basis model equivalent to the LP dual. The LP dual of our cycle basis model is a path based model that only requires a polynomial number of path constraints. This model is valid also for LP incompatible norms. This yields the first polynomial-sized path formulation of the original problem.

```
@incollection{diva2:589583,
author = {Call, Mikael and Holmberg, Kaj},
title = {{Inverse Shortest Path Models Based on Fundamental Cycle Bases}},
booktitle = {Operations Research Proceedings 2011},
year = {2012},
pages = {77--82},
publisher = {Springer Berlin/Heidelberg},
}
```

The inverse shortest path routing problem is to decide if a set of tentative routing patterns is simultaneously realizable. A routing pattern is defined by its destination and two arc subsets of required shortest path arcs and prohibited non-shortest path arcs. A set of tentative routing patterns is simultaneously realizable if there is a cost vector such that for all routing patterns it holds that all shortest path arcs are in some shortest path and no non-shortest path arc is in any shortest path to the destination of the routing pattern. Our main result is that this problem is NP-complete, contrary to what has been claimed earlier in the literature. Inverse shortest path routing problems naturally arise as a subproblem in bilevel programs where the lower level consists of shortest path problems. Prominent applications that fit into this framework include traffic engineering in IP networks using OSPF or IS-IS and in Stackelberg network pricing games. In this paper we focus on the common subproblem that arises if the bilevel program is linearized and solved by branch-and-cut. Then, it must repeatedly be decided if a set of tentative routing patterns is realizable. In particular, an NP-completeness proof for this problem is given.

```
@incollection{diva2:467470,
author = {Holmberg, Kaj and Call, Mikael},
title = {{Complexity of Inverse Shortest Path Routing}},
booktitle = {Network Optimization},
year = {2011},
pages = {339--353},
publisher = {Springer Berlin/Heidelberg},
}
```

Algorithmic discrete mathematics plays a key role in the development of information and communication technologies, and methods that arise in computer science, mathematics and operations research – in particular in algorithms, computational complexity, distributed computing and optimization – are vital to modern services such as mobile telephony, online banking and VoIP. This book examines communication networking from a mathematical viewpoint. The contributing authors took part in the European COST action 293 – a four-year program of multidisciplinary research on this subject. In this book they offer introductory overviews and state-of-the-art assessments of current and future research in the fields of broadband, optical, wireless and ad hoc networks. Particular topics of interest are design, optimization, robustness and energy consumption. The book will be of interest to graduate students, researchers and practitioners in the areas of networking, theoretical computer science, operations research, distributed computing and mathematics.

```
@incollection{diva2:467481,
author = {Holmberg, Kaj},
title = {{Optimization of OSPF Routing in IP Networks}},
booktitle = {Graphs and Algorithms in Communication Networks},
year = {2010},
publisher = {Springer Berlin/Heidelberg},
}
```

Welcome to the latest volume of AI Game Programming Wisdom! AI Game Programming Wisdom 4 includes a collection of more than 50 new articles featuring cutting-edge techniques, algorithms, and architectures written by industry professionals for use in commercial game development. Organized into 7 sections, this comprehensive volume explores every important aspect of AI programming to help you develop and expand your own personal AI toolbox. You'll find ready-to-use ideas, algorithms, and code in all key AI areas including general wisdom, scripting and dialogue, movement and pathfinding, architecture, tactics and planning, genre specific, and learning and adaptation. New to this volume are articles on recent advances in realistic agent, squad, and vehicle movement, as well as dynamically changing terrain, as exemplified in such popular games as Company of Heroes. You'll also find information on planning as a key game architecture, as well as important new advances in learning algorithms and player modeling. AI Game Programming Wisdom 4 features coverage of multiprocessor architectures, Bayesian networks, planning architectures, conversational AI, reinforcement learning, and player modeling. These valuable and innovative insights and issues offer the possibility of new game AI experiences and will undoubtedly contribute to taking the games of tomorrow to the next level.

```
@incollection{diva2:261848,
author = {Olsson, Per-Magnus},
title = {{Practical Pathfinding in Dynamic Environments}},
booktitle = {AI Game Programming Wisdom 4},
year = {2008},
publisher = {Charles River},
address = {Boston},
}
```

Burdakov, O., Sysoev, O., Grimvall, A. and Hussian, M., "An *O*(*n*^{2}) algorithm for isotonic regression problems", Large-Scale Nonlinear Optimization, Nonconvex Optimization and Its Applications, No. 83, 25-33, 2006.

**Large-Scale Nonlinear Optimization** reviews and discusses recent advances in the development of methods and algorithms for nonlinear optimization and its applications, focusing on the large-dimensional case, the current forefront of much research.

The chapters of the book, authored by some of the most active and well-known researchers in nonlinear optimization, give an updated overview of the field from different and complementary standpoints, including theoretical analysis, algorithmic development, implementation issues and applications.

```
@incollection{diva2:357983,
author = {Burdakov, Oleg and Sysoev, Oleg and Grimvall, Anders and Hussian, Mohammed},
title = {{An \emph{O}(\emph{n}$^{2}$) algorithm for isotonic regression problems}},
booktitle = {Large-Scale Nonlinear Optimization},
year = {2006},
pages = {25--33},
publisher = {Springer-Verlag},
}
```

"I highly recommend **The Handbook of Optimization in Telecommunications** as an invaluable resource for anyone interested in understanding the impact of optimization on the most important problems facing the telecommunications industry today.

The handbook is unprecedented in the breadth and depth of its coverage, illustrating that telecommunications offers a vast array of interesting and important optimization problems, probably exceeding the traditional areas of transportation networks, engineering, economics and military operations."

—Clyde Monma, Retired Chief Scientist, Applied Research Area, Telcordia Technologies

Telecommunications has had a major impact in all aspects of life in the last century. There is little doubt that the transformation from the industrial age to the information age has been fundamentally influenced by advances in telecommunications.

Optimization problems are abundant in the telecommunications industry. The successful solution of these problems has played an important role in the development of telecommunications and its widespread use. Optimization problems arise in the design of telecommunication systems and in their operation.

**The Handbook of Optimization in Telecommunications** brings together experts from around the world who use optimization to solve problems that arise in telecommunications. The editors made an effort to cover recent optimization developments that are frequently applied to telecommunications. The spectrum of topics covered includes planning and design of telecommunication networks, routing, network protection, grooming, restoration, wireless communications, network location and assignment problems, Internet protocol, World Wide Web, and stochastic issues in telecommunications. The editors’ objective is to provide a reference tool for the increasing number of scientists and engineers in telecommunications who depend upon optimization in some way.

Each chapter in the handbook is of an expository nature, but of scholarly treatment, and includes a brief overview of the state-of-the-art thinking relative to the topic, as well as pointers to the key references in the field. Specialists as well as nonspecialists should find this handbook stimulating and helpful.

```
@incollection{diva2:251478,
author = {Henningsson, Mathias and Holmberg, Kaj and Yuan, Di},
title = {{Ring Network Design}},
booktitle = {Handbook of Optimization in Telecommunications},
year = {2006},
pages = {291--312},
publisher = {Springer Science + Business Media},
address = {New York},
}
```

Honoring David Boyce for his legendary contributions to the fields of transportation modeling and regional science, the chapters in this festschrift highlight and analyze state-of-the-art and state-of-the-practice methodologies and theories in transportation modeling, regional and urban planning. Authors from academia and industry, all experts in planning, engineering, management, economics and related disciplines, provide important new contributions to this wide-ranging literature, as well as extensions of David Boyce's seminal work. This volume goes well beyond the traditional festschrift and stands as an important reference tool in its own right. Academics, researchers and students will find this comprehensive volume a valuable addition to their library.

```
@incollection{diva2:242422,
author = {Erlander, Sven and Lundgren, Jan},
title = {{Cost minimizing behavior in random discrete choice modelling}},
booktitle = {Urban and Regional Transportation Modeling},
year = {2004},
pages = {70--82},
publisher = {Edward Elgar},
address = {Cheltenham, UK},
}
```

Systems analysis in forestry has continued to advance in sophistication and diversity of application over the last few decades. The papers in this volume were presented at the eighth symposium in the foremost conference series worldwide in this subject area. Techniques presented include optimization and simulation modelling, decision support systems, alternative planning techniques, and spatial analysis. Over 30 papers and extended abstracts are grouped into the topical areas of (1) fire and fuels; (2) networks and transportation; (3) forest and landscape planning; (4) ecological modeling, biodiversity, and wildlife; and (5) forest resource applications. This collection will be of interest to forest planners and researchers who work in quantitative methods in forestry.

```
@incollection{diva2:242673,
author = {Gunnarsson, Helene and Lundgren, Jan and Rönnqvist, Mikael},
title = {{Supply chain modelling of forest fuel}},
booktitle = {Systems Analysis in Forest Resources},
year = {2003},
publisher = {Kluwer},
}
```

This proceedings volume contains a selection of papers presented at the International Conference on Operations Research (SOR 2002). The contributions cover the broad interdisciplinary spectrum of Operations Research and present recent advances in theory, development of methods, and applications in practice. Subjects covered are Production, Logistics and Supply Chain Production, Marketing and Data Analysis, Transportation and Traffic, Scheduling and Project Management, Telecommunication and Information Technology, Energy and Environment, Public Economy, Health, Agriculture, Education, Banking, Finance, Insurance, Risk Management, Continuous Optimization, Discrete and Combinatorial Optimization, Stochastic and Dynamic Programming, Simulation, Control Theory, Systems Dynamics, Dynamic Games, Game Theory, Auctioning and Bidding, Experimental Economics, Econometrics, Statistics and Mathematical Economics, Fuzzy Logic, Multicriteria Decision Making, Decision Theory.

```
@incollection{diva2:242672,
author = {Daneva, Maria and Lindberg, Per Olov},
title = {{A Conjugate Direction Frank-Wolfe Method with Applications to the Traffic Assignment Problem}},
booktitle = {Operations Research Proceedings 2002: Selected Papers of the International Conference on Operations Research (SOR 2002), Klagenfurt, September 2-5, 2002},
year = {2003},
publisher = {Springer},
}
```

Systems analysis in forestry has continued to advance in sophistication and diversity of application. This work features papers presented at the eighth symposium, focusing on techniques that include optimization and simulation modelling, decision support systems, alternative planning techniques, and spatial analysis.

```
@incollection{diva2:242671,
author = {Bergström, Johan and Karlsson, Jenny and Rönnqvist, Mikael},
title = {{Annual harvest planning integrated with crew assignment and transportation planning}},
booktitle = {Systems Analysis in Forest Resources},
year = {2003},
publisher = {Kluwer},
}
```

```
@incollection{diva2:599902,
author = {Burdakov, Oleg and Felgenhauer, Ursula},
title = {{Stable multi-point secant methods with released requirements to point's position}},
booktitle = {System Modelling and Optimization},
year = {1994},
pages = {225--236},
publisher = {Springer},
address = {London/Berlin},
}
```

```
@incollection{diva2:599919,
author = {Burdakov, Oleg},
title = {{Component-wise quasi-Newton methods}},
booktitle = {Numerical methods of nonlinear programming and their implementations},
year = {1991},
pages = {17--27},
publisher = {Akademie Verlag},
address = {Berlin},
}
```

## Conference papers

In modern integrated modular avionic systems, applications share hardware resources on a common avionic platform. Such an architecture necessitates strict requirements on the spatial and temporal partitioning of the system to prevent fault propagation between different aircraft functions. One way to establish a temporal partitioning is through pre-runtime scheduling of the system, which involves creating a schedule for both tasks and a communication network.

As avionic systems grow more and more complex, so does the challenge of scheduling them. Scheduling plays an important role in the development of new avionic systems, since functionality is typically added to the system over a period of several years, and a scheduling tool is used both to detect whether the platform can host the new functionality and, if this is possible, to create a new schedule. For this reason an exact solution strategy for avionics scheduling is preferred over a heuristic one.

In this paper we present a mathematical model for an industrially relevant avionic system and present a constraint generation procedure for scheduling of such systems. We apply our optimisation approach to instances provided by our industrial partner. These instances are of relevance for the development of future avionic systems and contain up to 20 000 tasks to be scheduled. The computational results show that our optimisation approach can be used to create schedules for such instances within reasonable time.

```
@inproceedings{diva2:1110471,
author = {Blikstad, Mathias and Karlsson, Emil and Lööw, Tomas and Rönnberg, Elina},
title = {{A constraint generation procedure for pre-runtime scheduling of integrated modular avionic systems}},
booktitle = {Proceedings of the 13th Workshop on Models and Algorithms for Planning and Scheduling Problems},
year = {2017},
}
```

This work investigates and discusses how the introduction of electric buses (EB), both battery and plug-in hybrid EB, will and should change the operations planning of a public transit system. It is shown that some changes are required in the design of a transit route network, and in the timetabling and vehicle scheduling processes. Other changes are not required but are advisable, taking the introduction of EB as an opportunity. The work gives a short description of the main characteristics of different types of EB, including the most popular charging technologies, and it presents the generally accepted transit operations planning process. Likewise, it describes and analytically formulates new challenges that arise when introducing EB. The outcome of the analyses shows that multiple new considerations must be taken into account. It is also shown that the different charging techniques will influence the operations planning process in different ways and to a varying extent. With overnight, quick and continuous charging, the main challenges are in the network route design step, given the possibility of altering the existing network of routes, with efficient and optimal changes of the timetabling and vehicle scheduling components.

```
@inproceedings{diva2:1066133,
author = {Häll, Carl Henrik and Ceder, Avishai (Avi) and Ekström, Joakim and Quttineh, Nils-Hassan},
title = {{Adjustments of Public Transit Operations Planning Process for the Use of Electric Buses}},
booktitle = {TRB Annual Meeting Online, 2017},
year = {2017},
publisher = {Transportation Research Board},
}
```

```
@inproceedings{diva2:1070443,
author = {Zhao, Yixin and Larsson, Torbjörn and Rönnberg, Elina},
title = {{A Large Neighbourhood Search Principle for Column-Oriented Models: Theoretical Derivation and Example Applications}},
booktitle = {Matheuristics 2016},
year = {2016},
series = {IRIDIA -- Technical Report Series},
volume = {2016-007},
publisher = {IRIDIA},
address = {Bruxelles, Belgium},
}
```

The mean-variance problem introduced by Markowitz in 1952 remains a fundamental model in portfolio optimization to this day. When cardinality and bound constraints are included, the problem becomes NP-hard, and the existing exact solution methods for this problem require a large amount of time.

We introduce a core problem based method for obtaining upper bounds to the mean-variance portfolio optimization problem with cardinality and bound constraints. The method involves performing eigendecomposition on the covariance matrix and then using only a few of the eigenvalues and eigenvectors to obtain an approximation of the original problem. A solution of this approximate problem has a relatively low cardinality and is used to construct a core problem. When solved, the core problem provides an upper bound. We test the method on large scale problems of up to 1000 assets. The obtained upper bounds are of high quality and the time required to obtain them is much less than that required by state-of-the-art mixed-integer solvers, which makes the method practically useful.

```
@inproceedings{diva2:814624,
author = {Mayambala, Fred and Rönnberg, Elina and Larsson, Torbjörn},
title = {{Tight Upper Bounds on the Cardinality Constrained Mean-Variance Portfolio Optimization Problem Using Truncated Eigendecomposition}},
booktitle = {Operations Research Proceedings 2014},
year = {2016},
series = {Operations Research Proceedings},
pages = {385--392},
publisher = {Springer},
}
```
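
The truncation idea behind the method can be illustrated in miniature: keep only the leading eigenpairs of the covariance matrix and work with the resulting low-rank approximation. A toy sketch using plain power iteration on a 2x2 matrix (our own illustration, not the authors' implementation, which handles 1000-asset covariance matrices):

```python
def dominant_eigenpair(A, iters=200):
    """Power iteration: dominant eigenvalue and unit eigenvector of a
    symmetric matrix given as nested lists."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient of the (unit) iterate.
    lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

def rank_one_approx(A):
    """Truncate the eigendecomposition to its leading term lam * v v^T."""
    lam, v = dominant_eigenpair(A)
    n = len(A)
    return [[lam * v[i] * v[j] for j in range(n)] for i in range(n)]

# Toy covariance matrix dominated by a single factor: the discarded
# second eigenvalue is about 0.08, so the rank-one term captures
# nearly all of the variance.
C = [[4.0, 1.9], [1.9, 1.0]]
C1 = rank_one_approx(C)
```

The approximation error of the truncation is governed by the discarded eigenvalues, which is why covariance matrices with a few dominant factors suit the core problem construction well.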

We introduce a time-indexed mixed-integer linear programming model for a military aircraft mission planning problem, where a fleet of cooperating aircraft should attack a number of ground targets so that the total expected effect is maximized. The model is a rich vehicle routing problem and the direct application of a general solver is practical only for scenarios of very moderate sizes. We propose a Dantzig-Wolfe reformulation and column generation approach. A column here represents a specific sequence of tasks at certain times for an aircraft, and to generate columns a longest path problem with side constraints is solved. We compare the column generation approach with the time-indexed model with respect to upper bounding quality of their linear programming relaxations and conclude that the former provides a much stronger formulation of the problem.

```
@inproceedings{diva2:954050,
author = {Quttineh, Nils-Hassan and Larsson, Torbjörn and Van den Bergh, Jorne and Belien, Jeroen},
title = {{A Time-Indexed Generalized Vehicle Routing Model and Stabilized Column Generation for Military Aircraft Mission Planning}},
booktitle = {Optimization, Control, and Applications in the Information Age: In Honor of Panos M. Pardalos's 60th Birthday},
year = {2015},
series = {Springer Proceedings in Mathematics \& Statistics},
pages = {299--314},
publisher = {Springer},
}
```

We present a mathematical optimization model that can be used as a decision support tool for the supply chain planning at Perstorp Oxo AB, a global company in the process industry. At their site in Stenungsund, Perstorp Oxo AB produces chemicals for customers in a variety of branches and for further refinement at other Perstorp sites in Gent, Castellanza and Perstorp. The customers are mainly in branches such as food and feed, leather and textile, plastic and safety glass production. Since Perstorp Oxo sells products to customers worldwide, two large inventory facilities are located in Antwerp (Belgium) and Tees (United Kingdom) for five product types each, and two smaller facilities in Philadelphia (USA) and Aveiro (Portugal) for one product type each. The developed model is a mixed-integer linear program, where the objective function maximizes the profit. A solution to the model shows the quantities to be transported between the different sites, production rates, inventory levels, setups and purchases from external suppliers, each with its respective cost. Based on actual sales data from Perstorp Oxo AB, we use rolling horizon techniques to simulate how customer demands vary over a time horizon of one year, and show that our optimization model is able to find feasible and profitable production plans. The results show that there is a potential to increase the profit margin by using a decision support tool based on an optimization model.

```
@inproceedings{diva2:790509,
author = {Quttineh, Nils-Hassan and Lidestam, Helene},
title = {{Using rolling horizon techniques in the planning process for a chemical process industry}},
booktitle = {Pre-Prints, Vol.1, 18th International Working Seminars on Production Economics, Innsbruck, Austria, February 2014},
year = {2014},
pages = {381--393},
}
```

Planning and scheduling are functions that have large economic impact in the chemical process industry. For integrated sites with many interconnected production areas, obtaining production schedules that respect all production-related constraints is a complex task. One important issue is the constraints due to disturbances in utilities, such as steam and cooling water. These are often site-wide disturbances that may make it impossible to maintain desired production rates in several production areas at a site. In this study, scheduling at two levels of the functional hierarchy at a site of a world-leading chemical company, Perstorp, is handled. The activities are denoted production scheduling (PS) and detailed production scheduling (DPS). Real data of incoming orders and utility disturbances are used to produce a production schedule and detailed production schedule for one month. The PS and DPS problems are formulated as optimization problems, where production-related constraints such as production rate constraints, inventory limitations, and start-up costs are included. The objective functions of the PS and DPS problems are formulated to reflect the importance of different issues at the site. The procedure aims to show how the hierarchical optimization framework may be used to provide decision support for how to operate the production at a site in order to maximize profit while minimizing the effects of site-wide disturbances.

```
@inproceedings{diva2:790504,
author = {Lindholm, Anna and Quttineh, Nils-Hassan and Lidestam, Helene},
title = {{Hierarchical Production Scheduling: A Case Study at Perstorp}},
booktitle = {24th European Symposium on Computer Aided Process Engineering},
year = {2014},
series = {Computer Aided Chemical Engineering},
volume = {33},
pages = {511--516},
publisher = {Elsevier},
}
```

There has recently been a growing interest in analysing road pricing schemes in urban areas using dynamic traffic assignment (DTA) tools. Finding optimal toll levels in cordon based road pricing schemes has so far mainly been studied using either derivative-free heuristics or ascent methods. For future use of DTA tools such methods are not suitable and in this paper we investigate how a surrogate modelling framework can be used instead. We focus on cases when the number of costly objective function evaluations is limited to between 20 and 40. In order to allow a large number of different configurations of the surrogate modelling framework to be evaluated, a static user equilibrium model is used for simulating the road users’ response to a given pricing scheme. The results show that for a realistic scenario, valuable information on close to optimal toll levels can be achieved with only 20 costly function evaluations.

```
@inproceedings{diva2:772161,
author = {Ekström, Joakim and Quttineh, Nils-Hassan},
title = {{Surrogate-based optimisation of toll levels in congestion pricing schemes}},
booktitle = {Transportation Infrastructure},
year = {2014},
pages = {209--216},
publisher = {Hong Kong Society of Transportation Studies Limited},
address = {Hong Kong},
}
```

There has recently been a growing interest in analysing road pricing schemes in urban areas using dynamic traffic assignment (DTA) tools. The motivation behind this development is the difficulty for static transportation models to accurately predict travel time savings, from introducing road pricing, in networks with severe congestion. Finding optimal toll levels and locations in urban road traffic networks has so far mainly been studied using either derivative-free heuristics (e.g. genetic algorithms and simulated annealing) or ascent methods. Both approaches rely on fast computations of the road users' response (traffic flows, travel times and demands), given the road pricing scheme, and for the case of ascent methods, the methods also rely on fast computations (or rather approximations) of derivatives. Using DTA tools for evaluating the road users' response to a pricing scheme is, however, very computationally expensive. Previously developed methods are therefore not suitable to use together with DTA.

Surrogate models, e.g. in terms of response surfaces, are commonly used for optimisation problems with expensive-to-evaluate objective functions. The surrogate model is used for approximating the expensive-to-evaluate objective function, and the optimisation is then done on the surrogate model instead. The performance of optimisation methods based on surrogate models is, however, dependent on the experimental design, the infill strategy and the choice of surrogate model itself. The experimental design gives the initial set of toll levels for which the DTA needs to be evaluated, the infill strategy determines additional toll levels to be evaluated by the DTA, and the choice of surrogate model gives the functional form to be fitted to the sampled toll levels.

We apply a surrogate model framework for optimising toll levels in a multiple cordon pricing scheme. In the first stage we evaluate the experimental design, infill strategy and choice of surrogate model, using a static macroscopic traffic model. This allows a large number of experiments to be carried out, which would not be possible with a DTA tool. It also allows us to compare the performance of the surrogate modelling approach with other global optimisation methods. In the second stage, the insight which has been gained from the experiments with the static model is used when applying the surrogate modelling approach to a DTA model of Stockholm.

Computational results are presented for a Stockholm network with three cordons, each with differentiated toll level in both directions, resulting in a total of six toll level variables. Surrogate models in the form of Radial Basis Functions and Kriging models are evaluated with a static model of Stockholm, for different initial experimental designs, infill strategies and choice of surrogate models. In comparison with previously developed derivative based methods for static models, our results show that the surrogate based optimisation approach performs better, since it allows for metaheuristic methods to search for global optimal solutions efficiently.

```
@inproceedings{diva2:746758,
author = {Ekström, Joakim and Quttineh, Nils-Hassan and Kristoffersson, Ida},
title = {{Simulation based optimisation of toll levels in urban road traffic networks}},
booktitle = {Nationella konferensen i transportforskning, 21-22 oktober 2014, Linköpings universitet, Campus Norrköping},
year = {2014},
}
```
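
The surrogate idea can be sketched in miniature: fit a radial basis function interpolant through a handful of expensive evaluations and then optimise over the cheap interpolant instead. A minimal one-dimensional Gaussian-RBF sketch under our own simplifying assumptions (fixed kernel width, no polynomial tail, and a toy quadratic standing in for the expensive DTA evaluation):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_surrogate(xs, ys, width=1.0):
    """Fit a Gaussian RBF interpolant through the sampled (x, y) points."""
    phi = lambda r: math.exp(-(r / width) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

# "Expensive" objective sampled at a small experimental design.
f = lambda x: (x - 2.0) ** 2
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
model = rbf_surrogate(xs, [f(x) for x in xs])
```

The interpolant reproduces the sampled points exactly; an infill strategy would then pick new toll levels where the surrogate predicts improvement and refit.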

We consider the directed Steiner tree problem (DSTP) with a constraint on the total number of arcs (hops) in the tree. This problem is known to be NP-hard, and therefore, only heuristics can be applied in the case of its large-scale instances. For the hop-constrained DSTP, we propose local search strategies aimed at improving any heuristically produced initial Steiner tree. They are based on solving a sequence of hop-constrained shortest path problems for which we have recently developed efficient label correcting algorithms. The presented approach is applied to finding suitable 3D locations where unmanned aerial vehicles (UAVs) can be placed to relay information gathered in multi-target monitoring and surveillance. The efficiency of our algorithms is illustrated by results of numerical experiments involving problem instances with up to 40 000 nodes and up to 20 million arcs.

```
@inproceedings{diva2:742139,
author = {Burdakov, Oleg and Doherty, Patrick and Kvarnström, Jonas},
title = {{Local Search for Hop-constrained Directed Steiner Tree Problem with Application to UAV-based Multi-target Surveillance}},
booktitle = {Examining Robustness and Vulnerability of Networked Systems},
year = {2014},
series = {NATO Science for Peace and Security Series - D: Information and Communication Security},
volume = {37},
pages = {26--50},
publisher = {IOS Press},
}
```
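
The hop-constrained shortest path subproblem admits a simple dynamic program over (node, hop count) states, essentially Bellman-Ford truncated after a fixed number of rounds. A sketch of that basic scheme (ours, not the authors' label correcting algorithm):

```python
def hop_constrained_shortest_path(n, arcs, source, target, max_hops):
    """dist[v][k] = cheapest cost of reaching node v from source using at
    most k arcs; arcs are (tail, head, cost) triples on nodes 0..n-1."""
    INF = float("inf")
    dist = [[INF] * (max_hops + 1) for _ in range(n)]
    dist[source][0] = 0.0
    for k in range(1, max_hops + 1):
        for v in range(n):
            dist[v][k] = dist[v][k - 1]      # using fewer hops is allowed
        for (u, v, c) in arcs:
            if dist[u][k - 1] + c < dist[v][k]:
                dist[v][k] = dist[u][k - 1] + c
    return dist[target][max_hops]

# Example: the direct arc is expensive, the cheap route needs three hops.
arcs = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 10.0)]
hop_constrained_shortest_path(4, arcs, 0, 3, 3)   # -> 3.0
```

Tightening the hop budget changes the answer: with at most two hops only the direct arc remains feasible, so the cost jumps to 10.0.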

The selection of a mine design is based on estimating net present values of all possible, technically feasible mine plans so as to select the one with the maximum value. It is a hard task to know with certainty the quantity and quality of ore in the ground. This geological uncertainty and also the future market behavior of metal prices and foreign exchange rates, which are always uncertain, make mining a high-risk business. Value-at-Risk (VaR) is a measure that is used in financial decisions to minimize the loss caused by inadequate monitoring of risk. This measure does, however, have certain drawbacks such as lack of consistency, nonconvexity, and nondifferentiability. Rockafellar and Uryasev [J. Risk 2, 21-41 (2000)] introduce the Conditional Value-at-Risk (CVaR) measure as an alternative to the VaR measure. The CVaR measure gives rise to a convex optimization problem. An optimization model that maximizes expected return while minimizing risk is important for the mining sector as this will help make better decisions on the blocks of ore to mine at a particular point in time. We present a CVaR approach to the uncertainty involved in open-pit mining. We formulate investment and design models for the open-pit mine and also give a nested pit scheduling model based on CVaR. Several numerical results based on our models are presented by using scenarios from simulated geological and market uncertainties.

```
@inproceedings{diva2:714909,
author = {Amankwah, Henry and Larsson, Torbjörn and Textorius, Björn},
title = {{Open-pit mining with uncertainty: A conditional value-at-risk approach}},
booktitle = {Optimization Theory, Decision Making, and Operations Research Applications},
year = {2013},
series = {Springer Proceedings in Mathematics and Statistics},
volume = {31},
pages = {117--139},
publisher = {Springer},
address = {New York},
}
```
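
For equally likely loss scenarios the two risk measures reduce to order statistics: VaR is an alpha-quantile of the loss distribution and CVaR averages the tail beyond it, which is why CVaR is never smaller than VaR. A back-of-the-envelope sketch (our own empirical tail convention, not the Rockafellar-Uryasev optimization formulation used in the paper):

```python
def var_cvar(losses, alpha):
    """Empirical VaR and CVaR for equally likely loss scenarios:
    CVaR(alpha) is the mean of the worst (1 - alpha) fraction of losses."""
    xs = sorted(losses, reverse=True)             # worst losses first
    m = max(1, int(round((1 - alpha) * len(xs))))
    var = xs[m - 1]                               # the alpha-quantile
    cvar = sum(xs[:m]) / m                        # mean of the worst tail
    return var, cvar

# Ten equally likely scenario losses, risk level 80%: the tail is the
# two worst scenarios, so VaR = 9 and CVaR = (10 + 9) / 2 = 9.5.
var, cvar = var_cvar([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0.8)
```

Averaging over the tail is what makes CVaR convex in the decision variables when the scenario losses are, which VaR alone is not.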

We study resource allocation in cellular systems and consider the problem of finding a power efficient scheduling in an uplink single carrier frequency division multiple access (SC-FDMA) system with localized allocation of subcarriers, that is, the subcarriers allocated to a user equipment have to be consecutive in the frequency domain in each time slot. This problem is discrete and nonconvex, thus the use of suboptimal algorithms has been a common practice. We leverage the power of mathematical programming in order to approach global optimality or a tight bounding interval confining the global optimum, to arrive at an effective scheme for gauging the performance of suboptimal algorithms. Toward this end, we first provide a straightforward integer linear programming formulation, and then an alternative and less trivial, so-called column-oriented, formulation. The latter is solved by column generation, which is a solution technique for large-scale optimization problems with certain characteristics. The computational evaluation demonstrates that the column generation method produces very high-quality subcarrier allocations that either coincide with the global optimum or enable an extremely sharp bounding interval. Hence the approach serves well for the purpose of benchmarking results for large-scale instances of power efficient SC-FDMA scheduling.

```
@inproceedings{diva2:714898,
author = {Zhao, Hongmei and Lei, Lei and Yuan, Di and Larsson, Torbjörn and Rönnberg, Elina},
title = {{Power efficient uplink scheduling in SC-FDMA: Bounding global optimality by column generation}},
booktitle = {2013 IEEE 18th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD)},
year = {2013},
series = {International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD)},
volume = {2013},
pages = {119--123},
publisher = {IEEE},
}
```
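
Under the localized-allocation constraint the candidate columns are easy to characterise: every feasible per-user allocation is a contiguous block of subcarriers, so a slot with n subcarriers admits only n(n+1)/2 candidate blocks for the column-oriented model to price out. A small enumeration sketch (illustrative only; the names are ours):

```python
def localized_allocations(num_subcarriers):
    """All contiguous subcarrier blocks, as half-open intervals [start, end).
    Each block is a candidate column in a column-oriented formulation."""
    return [(start, start + length)
            for length in range(1, num_subcarriers + 1)
            for start in range(num_subcarriers - length + 1)]

cols = localized_allocations(4)   # 4 * 5 / 2 = 10 candidate blocks
```

Column generation avoids enumerating all of these up front for large n: the pricing problem selects only the blocks with negative reduced cost.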

The purpose of this paper is to develop a mathematical optimization model that can be used as a decision support tool for the supply chain planning at Perstorp Oxo AB, a global company in the process industry. At their site in Stenungsund, Perstorp Oxo AB produces chemicals for customers in a variety of branches and for further refinement at other Perstorp sites in Gent, Castellanza and Perstorp. The customers are mainly in branches such as food and feed, leather and textile, plastic and safety glass production. Since Perstorp Oxo sells products to customers worldwide, two large inventory facilities are located in Antwerp (Belgium) and Tees (United Kingdom) for five product types each, and two smaller facilities in Philadelphia (USA) and Aveiro (Portugal) for one product type each. The developed model is a mixed-integer linear program, where the objective function maximizes the profit margin, that is, the difference between the selling price and the cost of production, transportation, inventory carrying and outsourcing. A solution to the model shows the quantities to be transported between the different sites, production rates, inventory levels, setups and purchases from external suppliers, each with its respective cost. The results of a baseline scenario show that there is a potential to increase the profit margin by using a decision support tool based on an optimization model.

```
@inproceedings{diva2:697971,
author = {Quttineh, Nils-Hassan and Lidestam, Helene and Ahlstedt, Mårten and Olsson, Sven},
title = {{Supply Chain Planning at a Chemical Process Industry}},
booktitle = {Proceedings for Decision Science Institute (DSI 2013), The 44th Annual Meeting, 2013},
year = {2013},
pages = {671895-1--671895-19},
publisher = {Decision Sciences Institute},
}
```

The integration of scheduling and control in the process industry is a topic that has been frequently discussed in recent years, but many challenges remain in order to achieve integrated solutions that can be implemented for large-scale industrial sites. In this paper we consider production control under disturbances in the supply of utilities at integrated sites, together with the integration toward production scheduling. Utilities, such as steam and cooling water, are often shared between the production areas of a site, which enables the formulation of an optimization problem for determining the optimal supply of utilities to each area at the occurrence of a disturbance. Optimization in two timescales is suggested to handle the scheduling and disturbance management problems in a hierarchical fashion. The suggested structure has been discussed with companies within the chemical process industry. A simple example is provided to show how the structure may be used.

```
@inproceedings{diva2:697953,
author = {Lindholm, Anna and Johnsson, Charlotta and Quttineh, Nils-Hassan and Lidestam, Helene and Henningsson, Mathias and Wikner, Joakim and Tang, Ou and Nytz\'{e}n, Nils-Petter and Forsman, Krister},
title = {{Hierarchical Scheduling and Utility Disturbance Management in the Process Industry}},
booktitle = {Proceedings for IFAC Conference on Manufacturing Modelling, Management and Control (MIM2013), 2013},
year = {2013},
pages = {140--145},
}
```

The purpose of this paper is to formulate an optimization model for the production scheduling problem at continuous production sites. The production scheduling activity should produce a monthly schedule that accounts for orders and forecasts of all products. The plan should be updated every day, with feedback on the actual production the previous day. The actual daily production may be lower than the planned production due to disturbances, e.g. disruptions in the supply of a utility. The work is performed in collaboration with Perstorp, a world-leading company within several sectors of the specialty chemicals market. Together with Perstorp, a list of specifications for the production scheduling has been formulated. These are formulated mathematically in a mixed-integer linear program that is solved in receding horizon fashion. The formulation of the model aims to be general, such that it may be used for any process industrial site.

```
@inproceedings{diva2:697931,
author = {Lindholm, Anna and Giselsson, Pontus and Quttineh, Nils-Hassan and Lidestam, Helene and Johnsson, Charlotta and Forsman, Krister},
title = {{Production Scheduling in the Process Industry}},
booktitle = {Proceedings for 22nd International Conference on Production Research, 2013},
year = {2013},
}
```

We consider tactical planning of a military operation on a large target scene where a number of specific targets of interest are positioned, using a given number of resources which can be, for example, fighter aircraft, unmanned aerial vehicles, or missiles. The targets could be radar stations or other surveillance equipment, with or without defensive capabilities, which the attacker wishes to destroy. Further, some of the targets are defended by, for example, Surface-to-Air Missile units, and this defense capability can also be used to protect other targets. The attacker has knowledge about the positions of all the targets and also a reward associated with each target. We consider the problem of the attacker, who has the objective to maximize the expected outcome of a joint attack against the enemy. The decisions that can be taken by the attacker concern the allocation of the resources to the targets and what tactics to use against each target. We present a mathematical model for the attacker’s problem. The model is similar to a generalized assignment problem, but with a complex objective function that makes it intractable for large problem instances. We present approximate models that can be used to provide upper and lower bounds on the optimal value, and also provide heuristic solution approaches that are able to successfully provide near-optimal solutions to a number of scenarios.
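For intractable allocation problems of this kind, a simple greedy heuristic often yields usable lower bounds. The sketch below is purely illustrative and is not the paper's model: it assumes, hypothetically, that a target t attacked by m resources is destroyed with probability 1 - (1 - p_t)^m, and assigns resources one at a time to the largest marginal gain in expected reward.

```python
def greedy_allocation(rewards, success_prob, n_resources):
    """Greedy resource-to-target allocation (illustrative model only).

    rewards[t]      -- reward for destroying target t
    success_prob[t] -- single-resource kill probability against target t
    Returns the number of resources allocated to each target."""
    counts = [0] * len(rewards)

    def expected(t, m):
        # Expected reward from target t when attacked by m resources,
        # under the assumed independent-shots model.
        return rewards[t] * (1.0 - (1.0 - success_prob[t]) ** m)

    for _ in range(n_resources):
        # Assign the next resource where the marginal gain is largest.
        t_best = max(range(len(rewards)),
                     key=lambda t: expected(t, counts[t] + 1) - expected(t, counts[t]))
        counts[t_best] += 1
    return counts
```

For example, with rewards [10, 30], kill probabilities [0.9, 0.3], and three resources, the heuristic sends one resource to the easy target and two to the valuable, well-defended one.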

```
@inproceedings{diva2:697919,
author = {Quttineh, Nils-Hassan and Larsson, Torbjörn and Lundberg, Kristian and Holmberg, Kaj},
title = {{Effect Oriented Planning of Joint Attacks}},
booktitle = {Optimization Theory, Decision Making, and Operations Research Applications},
year = {2013},
series = {Springer Proceedings in Mathematics \& Statistics},
volume = {31},
pages = {49--70},
publisher = {Springer-Verlag New York},
}
```

In this research, piecewise monotonic models for problems comprising seasonal cycles and monotonic trends are considered. In contrast to the conventional piecewise monotonic regression algorithms, our approach can efficiently exploit a priori information about temporal patterns. It is based on reducing these problems to monotonic regression problems defined on partially ordered data sets. The latter are large-scale convex quadratic programming problems. They are efficiently solved by the GPAV algorithm.

```
@inproceedings{diva2:439180,
author = {Sadoghi, Amirhossein and Burdakov, Oleg and Grimvall, Anders},
title = {{Piecewise Monotonic Regression Algorithm for Problems Comprising Seasonal and Monotonic Trends}},
booktitle = {SIAM Conference on Optimization, May 16--19, 2011, Darmstadtium, Darmstadt, Germany},
year = {2011},
series = {Book of abstracts, SIAM Conference on Optimization, May 16--19, 2011, Darmstadtium, Darmstadt, Germany},
}
```

An important use of unmanned aerial vehicles is surveillance of distant targets, where sensor information must quickly be transmitted back to a base station. In many cases, high uninterrupted bandwidth requires line-of-sight between sender and receiver to minimize quality degradation. Communication range is typically limited, especially when smaller UAVs are used. Both problems can be solved by creating relay chains for surveillance of a single target, and relay trees for simultaneous surveillance of multiple targets. In this paper, we show how such chains and trees can be calculated. For relay chains we create a set of chains offering different trade-offs between the number of UAVs in the chain and the chain’s cost. We also show new results on how relay trees can be quickly calculated and then incrementally improved if necessary. Encouraging empirical results for improvement of relay trees are presented.

```
@inproceedings{diva2:353975,
author = {Olsson, Per-Magnus and Kvarnström, Jonas and Doherty, Patrick and Burdakov, Oleg and Holmberg, Kaj},
title = {{Generating UAV Communication Networks for Monitoring and Surveillance}},
booktitle = {Proceedings of the 11th International Conference on Control, Automation, Robotics and Vision (ICARCV 2010)},
year = {2010},
pages = {1070--1077},
publisher = {IEEE conference proceedings},
}
```

When unmanned aerial vehicles (UAVs) are used to survey distant targets, it is important to transmit sensor information back to a base station. As this communication often requires high uninterrupted bandwidth, the surveying UAV often needs a free line-of-sight to the base station, which can be problematic in urban or mountainous areas. Communication ranges may also be limited, especially for smaller UAVs. Though both problems can be solved through the use of relay chains consisting of one or more intermediate relay UAVs, this leads to a new problem: Where should relays be placed for optimum performance? We present two new algorithms capable of generating such relay chains, one being a dual ascent algorithm and the other a modification of the Bellman-Ford algorithm. As the priorities between the number of hops in the relay chain and the cost of the chain may vary, we calculate chains of different lengths and costs and let the ground operator choose between them. Several different formulations for edge costs are presented. In our test cases, both algorithms are substantially faster than an optimized version of the original Bellman-Ford algorithm, which is used for comparison.
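The hop/cost trade-off described above can be illustrated with the textbook dynamic-programming view of Bellman-Ford, where the k-th relaxation round gives the cheapest source-to-node walk using at most k edges. This is a simplified sketch, not the paper's optimized algorithms, and the example graph below is invented.

```python
import math

def relay_chains(nodes, edges, source, target, max_hops):
    """For each hop budget k = 1..max_hops, report the cost of the
    cheapest source-target chain using at most k edges (if one exists).
    edges is a list of directed (u, v, cost) triples."""
    dist = {v: math.inf for v in nodes}
    dist[source] = 0.0
    chains = []  # (hop budget, best chain cost within that budget)
    for k in range(1, max_hops + 1):
        new_dist = dict(dist)
        for u, v, w in edges:  # one Bellman-Ford relaxation round
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
        dist = new_dist
        if dist[target] < math.inf:
            chains.append((k, dist[target]))
    return chains
```

A long direct link may be beaten by a cheaper two-hop chain through a relay: `relay_chains(["base", "r1", "uav"], [("base", "uav", 10.0), ("base", "r1", 2.0), ("r1", "uav", 3.0)], "base", "uav", 3)` yields `[(1, 10.0), (2, 5.0), (3, 5.0)]`, letting the operator trade one extra relay for half the cost.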

```
@inproceedings{diva2:242169,
author = {Burdakov, Oleg and Doherty, Patrick and Holmberg, Kaj and Kvarnström, Jonas and Olsson, Per-Magnus},
title = {{Positioning Unmanned Aerial Vehicles As Communication Relays for Surveillance Tasks}},
booktitle = {Robotics},
year = {2010},
pages = {257--264},
publisher = {MIT Press},
}
```

In this paper, the monotonic regression (MR) problem is considered. We have recently generalized for MR the well-known Pool-Adjacent-Violators (PAV) algorithm from the case of completely to partially ordered data sets. The new algorithm, called GPAV, combines both high accuracy and low computational complexity, which grows quadratically with the problem size. The actual growth observed in practice is typically far lower than quadratic. The fitted values of the exact MR solution compose blocks of equal values. Its GPAV approximation also has a block structure. We present here a technique for refining blocks produced by the GPAV algorithm to make the new blocks closer to those in the exact solution. This substantially improves the accuracy of the GPAV solution and does not deteriorate its computational complexity. The computational time for the new technique is approximately triple the time of running the GPAV algorithm. Its efficiency is demonstrated by results of our numerical experiments.

```
@inproceedings{diva2:283960,
author = {Burdakov, Oleg and Grimvall, Anders and Sysoev, Oleg},
title = {{Generalized PAV algorithm with block refinement for partially ordered monotonic regression}},
booktitle = {Proceedings of the Workshop on Learning Monotone Models from Data},
year = {2009},
pages = {23--37},
}
```

We study the NP-hard problem of scheduling and power control with quality-of-service (QoS) constraints. We consider a generic wireless network comprising K mutually interfering links and N < K orthogonal time or frequency slots. We formulate the joint resource allocation problem as a constrained optimization problem, specifically, as a mixed integer programming (MIP) problem. This enables us to solve the problem exactly, and relatively efficiently for the vast majority of instances, using off-the-shelf algorithms. We also apply our formulation to the paradigm of cognitive underlay networks.

```
@inproceedings{diva2:246018,
author = {Karipidis, Eleftherios and Larsson, Erik G. and Holmberg, Kaj},
title = {{Optimal Scheduling and QoS Power Control for Cognitive Underlay Networks}},
booktitle = {Proceedings of the 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP'09)},
year = {2009},
pages = {408--411},
publisher = {IEEE},
}
```

```
@inproceedings{diva2:265549,
author = {Persson, Fredrik and Engevall, Stefan},
title = {{The Shift from Construction Site to Prefabrication in the Construction Industry: A Case Study}},
booktitle = {Advances in Production Management Systems, 2008},
year = {2008},
publisher = {Espoo},
address = {Espoo, Finland},
}
```

```
@inproceedings{diva2:264830,
author = {Blomvall, Jörgen and Henningsson, Mathias},
title = {{An Introductory Project in Financial Engineering}},
booktitle = {4th International CDIO Conference, 2008},
year = {2008},
}
```

```
@inproceedings{diva2:264452,
author = {Lidestam, Helene and Rönnqvist, Mikael},
title = {{Solving a multi-period supply chain problem for a pulp company using Lagrangian heuristics based on physical stages}},
booktitle = {Beyond Business Logistics},
year = {2008},
pages = {233--248},
publisher = {Anniversary NOFOMA},
address = {Helsinki, Finland},
}
```

In this talk we consider the isotonic regression (IR) problem, which can be formulated as follows. Given a vector $\bar{x} \in R^n$, find $x_* \in R^n$ which solves the problem
\begin{equation}\label{ir2}
\begin{array}{cl}
\mbox{min} & \|x-\bar{x}\|^2 \\
\mbox{s.t.} & Mx \ge 0.
\end{array}
\end{equation}
The set of constraints $Mx \ge 0$ represents here the monotonicity relations of the form $x_i \le x_j$ for a given set of pairs of the components of $x$. The corresponding row of the matrix $M$ is composed mainly of zeros, but its $i$th and $j$th elements are equal to $-1$ and $+1$, respectively. The most challenging applications of (\ref{ir2}) are characterized by very large values of $n$. We introduce new IR algorithms. Our numerical experiments demonstrate the high efficiency of our algorithms, especially for very large-scale problems, and their robustness. They are able to solve some problems which all existing IR algorithms fail to solve. We also outline our new algorithms for monotonicity-preserving interpolation of scattered multivariate data. In this talk we focus on the application of our IR algorithms in postprocessing of FE solutions. Non-monotonicity of the numerical solution is a typical drawback of the conventional methods of approximation, such as finite elements (FE), finite volumes, and mixed finite elements. The problem of monotonicity is particularly important in cases of highly anisotropic diffusion tensors or distorted unstructured meshes. For instance, in nuclear waste transport simulation, the non-monotonicity results in the presence of negative concentrations, which may lead to unacceptable concentrations and failure of the chemistry calculations. Another drawback of the conventional methods is a possible violation of the discrete maximum principle, which establishes lower and upper bounds for the solution. We suggest here a least-change correction to the available FE solution $\bar{x} \in R^n$. This postprocessing procedure is aimed at recovering the monotonicity and some other important properties that may not be exhibited by $\bar{x}$. The mathematical formulation of the postprocessing problem is reduced to the following convex quadratic programming problem
\begin{equation}\label{ls2}
\begin{array}{cl}
\mbox{min} & \|x-\bar{x}\|^2 \\
\mbox{s.t.} & Mx \ge 0, \quad l \le x \le u, \quad e^Tx = m,
\end{array}
\end{equation}
where $e=(1,1,\ldots,1)^T \in R^n$. The set of constraints $Mx \ge 0$ represents here the monotonicity relations between some of the adjacent mesh cells. The constraints $l \le x \le u$ originate from the discrete maximum principle. The last constraint formulates the conservativity requirement. The postprocessing based on (\ref{ls2}) is typically a large-scale problem. We introduce here algorithms for solving this problem. They are based on the observation that, in the presence of the monotonicity constraints only, problem (\ref{ls2}) is the classical monotonic regression problem, which can be solved efficiently by some of the available monotonic regression algorithms. This solution is then used for producing the optimal solution to problem (\ref{ls2}) in the presence of all the constraints. We present results of numerical experiments to illustrate the efficiency of our algorithms.

```
@inproceedings{diva2:261194,
author = {Burdakov, Oleg and Grimvall, Anders and Sysoev, Oleg and Kapyrin, Ivan and Vassilevski, Yuri},
title = {{Monotonic data fitting and interpolation with application to postprocessing of FE solutions}},
booktitle = {CERFACS 20th Anniversary Conference on High-performance Computing, 2007},
year = {2007},
pages = {11--12},
}
```

Simulation of transport phenomena based on the advection-diffusion equation is very popular in many engineering applications. Non-monotonicity of the numerical solution is a typical drawback of the conventional methods of approximation, such as finite elements (FE), finite volumes, and mixed finite elements. The problem of monotonicity is particularly important in cases of highly anisotropic diffusion tensors or distorted unstructured meshes. For instance, in nuclear waste transport simulation, the non-monotonicity results in the presence of negative concentrations, which may lead to unacceptable concentrations and failure of the chemistry calculations. Another drawback of the conventional methods is a possible violation of the discrete maximum principle, which establishes lower and upper bounds for the solution. We suggest here a least-change correction to the available FE solution $\bar{x} \in R^n$. This postprocessing procedure is aimed at recovering the monotonicity and some other important properties that may not be exhibited by $\bar{x}$. The mathematical formulation of the postprocessing problem is reduced to the following convex quadratic programming problem
\begin{equation}\label{ls2}
\begin{array}{cl}
\mbox{min} & \|x-\bar{x}\|^2 \\
\mbox{s.t.} & Mx \ge 0, \\
& l \le x \le u, \\
& e^Tx = m.
\end{array}
\end{equation}
The set of constraints $Mx \ge 0$ represents here the monotonicity requirements. It establishes relations between some of the adjacent mesh cells in the form $x_i \le x_j$, relating cells $i$ and $j$. The corresponding row of the matrix $M$ is composed mainly of zeros, but its $i$th and $j$th elements are equal to $-1$ and $+1$, respectively. The set of constraints $l \le x \le u$ originates from the discrete maximum principle. In the last constraint, $e=(1,1,\ldots,1)^T \in R^n$; it formulates the conservativity requirement. The postprocessing based on (\ref{ls2}) is typically a large-scale problem. We introduce here algorithms for solving this problem. They are based on the observation that, in the presence of the monotonicity constraints only, problem (\ref{ls2}) is the classical monotonic regression problem, which can be solved efficiently by some of the available monotonic regression algorithms. This solution is then used for producing the optimal solution to problem (\ref{ls2}) in the presence of all the constraints. We present results of numerical experiments to illustrate the efficiency of our algorithms.

```
@inproceedings{diva2:261193,
author = {Burdakov, Oleg and Kapyrin, Ivan and Vassilevski, Yuri},
title = {{Optimization methods for postprocessing finite element solutions}},
booktitle = {International Conference on Numerical Optimization and Numerical Linear Algebra, 2007},
year = {2007},
}
```

The isotonic regression (IR) problem has numerous important applications in statistics, operations research, biology, image and signal processing, and other areas. IR is a minimization problem with the objective function defined by the distance from a given point to a convex set defined by monotonicity constraints of the form: the i-th component of the decision vector is less than or equal to its j-th component. The distance in IR is usually associated with the Lp norm, whereas the norms L1 and L2 are of the highest practical interest. The conventional optimization methods are unable to solve large-scale IR problems originating from large data sets. Historically, the major efforts were focused on the IR problem in the L2 norm. Exact algorithms such as the minimum lower sets algorithm by Brunk, the min-max algorithm by Lee, the network flow algorithm by Maxwell & Muckstadt and the IBCR algorithm by Block et al. were developed. Among them the IBCR algorithm has been proved to be the most numerically efficient, but it is not robust enough. An alternative approach is related to solving the IR problem approximately. Following this approach, Burdakov et al. developed the GPAV algorithm, whose block refinement extension, GPAVR, is able to solve the IR problem with high accuracy in a far shorter time than the exact algorithms. Apart from this, GPAVR is a very robust algorithm. Unfortunately, for the L1 norm there are no algorithms which are as efficient as those in the L2 norm. In our talk, we introduce new algorithms, GPAVR1 and IBCR1. They are extensions of the algorithms GPAV and IBCR to the L1 norm. We also present results of numerical experiments, which demonstrate the high efficiency of the new algorithms, especially for very large-scale problems.

```
@inproceedings{diva2:261192,
author = {Sysoev, Oleg and Burdakov, Oleg and Grimvall, Anders},
title = {{New optimization methods for isotonic regression in L1 norm}},
booktitle = {EUROPT-OMS Conference on Optimization, 2007},
year = {2007},
pages = {133--133},
publisher = {Gaudeamus},
address = {University of Hradec Kralove, Czech Republic},
}
```

The isotonic regression (IR) problem has numerous important applications in statistics, operations research, biology, image and signal processing, and other areas. IR in the L2 norm is a minimization problem in which the objective function is the squared Euclidean distance from a given point to a convex set defined by monotonicity constraints of the form: the i-th component of the decision vector is less than or equal to its j-th component. Unfortunately, the conventional optimization methods are unable to solve IR problems originating from large data sets. The existing IR algorithms, such as the minimum lower sets algorithm by Brunk, the min-max algorithm by Lee, the network flow algorithm by Maxwell & Muckstadt and the IBCR algorithm by Block et al., are able to find the exact solution to the IR problem for at most a few thousands of variables. The IBCR algorithm, which proved to be the most efficient of them, is not robust enough. An alternative approach is related to solving the IR problem approximately. Following this approach, Burdakov et al. developed an algorithm, called GPAV, whose block refinement extension, GPAVR, is able to solve IR problems with a very high accuracy in a far shorter time than the exact algorithms. Apart from this, GPAVR is a very robust algorithm, and it allows us to solve IR problems with over a hundred thousand variables. In this talk, we introduce new exact IR algorithms, which can be viewed as active set methods. They use the approximate solution produced by the GPAVR algorithm as a starting point. We present results of our numerical experiments demonstrating the high efficiency of the new algorithms, especially for very large-scale problems, and their robustness. They are able to solve the problems which all existing exact IR algorithms fail to solve.

```
@inproceedings{diva2:261191,
author = {Burdakov, Oleg and Grimvall, Anders and Sysoev, Oleg},
title = {{New optimization algorithms for large-scale isotonic regression in L2-norm}},
booktitle = {EUROPT-OMS Conference on Optimization, 2007},
year = {2007},
pages = {44--44},
publisher = {Gaudeamus},
address = {University of Hradec Kralove, Czech Republic},
}
```

```
@inproceedings{diva2:259596,
author = {Svensson, Björn and Burdakov, Oleg and Andersson, Mats and Knutsson, Hans},
title = {{A New Approach For Treating Multiple Extremal Points In Multi-Linear Least Squares Filter Design}},
booktitle = {Proceedings of the {SSBA} Symposium on Image Analysis, 2007},
year = {2007},
pages = {61--64},
}
```

```
@inproceedings{diva2:257127,
author = {Göthe-Lundgren, Maud and Frisk, Mikael and Rönnqvist, Mikael and Jörnsten, Kurt},
title = {{Cost allocation in forestry operations}},
booktitle = {12th IFAC Symposium on Information Control Problems in Manufacturing, 2006},
year = {2006},
publisher = {Elsevier Science},
address = {Paris},
}
```

We consider the problem of minimizing the distance from a given *n*-dimensional vector to a set defined by constraints of the form *x*_{i} ≤ *x*_{j}. Such constraints induce a partial order of the components *x*_{i}, which can be illustrated by an acyclic directed graph. This problem is also known as the isotonic regression (IR) problem. IR has important applications in statistics, operations research and signal processing, with most of them characterized by a very large value of *n*. For such large-scale problems, it is of great practical importance to develop algorithms whose complexity does not rise with *n* too rapidly. The existing optimization-based algorithms and statistical IR algorithms have either too high computational complexity or too low accuracy of the approximation to the optimal solution they generate. We introduce a new IR algorithm, which can be viewed as a generalization of the Pool-Adjacent-Violator (PAV) algorithm from completely to partially ordered data. Our algorithm combines both low computational complexity *O*(*n*^{2}) and high accuracy. This allows us to obtain sufficiently accurate solutions to IR problems with thousands of observations.
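For the special case of completely ordered data, the classical Pool-Adjacent-Violator algorithm that the GPAV method above generalizes can be sketched in a few lines (a minimal L2 version; the function name and interface are illustrative):

```python
def pav(y):
    """Pool-Adjacent-Violators for isotonic regression in the L2 norm on
    totally ordered data: minimize sum((x_i - y_i)^2) s.t. x_1 <= ... <= x_n.
    Returns the fitted values as a list of floats."""
    blocks = []  # each block is [sum of pooled values, count]
    for v in y:
        blocks.append([float(v), 1])
        # Merge backwards while adjacent block means violate monotonicity.
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    # Expand each block back to its pooled mean, one value per observation.
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out
```

For example, `pav([1, 3, 2, 4])` returns `[1.0, 2.5, 2.5, 4.0]`: the violating pair (3, 2) is pooled into one block with mean 2.5, illustrating the block structure of the exact solution mentioned above.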

```
@inproceedings{diva2:257128,
author = {Burdakov, Oleg and Sysoev, Oleg and Grimvall, Anders and Hussian, Mohamed},
title = {{An $O(n^2)$ algorithm for isotonic regression}},
booktitle = {Large-Scale Nonlinear Optimization},
year = {2006},
series = {Nonconvex Optimization and Its Applications},
volume = {83},
pages = {25--33},
publisher = {Springer Science+Business Media B.V.},
address = {New York},
}
```

```
@inproceedings{diva2:251476,
author = {Holmberg, Kaj and Broström, Peter},
title = {{Determining the Non-Existence of a Compatible OSPF Metric}},
booktitle = {International Network Optimization Conference, INOC 2005},
year = {2005},
pages = {106--},
publisher = {University of Lisbon},
address = {Lisbon, Portugal},
}
```

This paper addresses the problem of designing IP networks where traffic is distributed in accordance with the OSPF protocol. Routers use link weights for determining how traffic is distributed. All shortest paths between pairs of routers are used, and the traffic is evenly divided when several paths are shortest. We formulate a new model for the design of IP networks with OSPF and ECM distribution where weights are implicitly included. Necessary constraints for representing the shortest paths obtained from link weights by in-graphs are described. A Lagrangean heuristic is developed for verifying the usefulness of the model. Numerical experiments on test problems show that acceptable gaps are obtained in reasonable time.

```
@inproceedings{diva2:251477,
author = {Holmberg, Kaj and Broström, Peter},
title = {{Design of IP/OSPF Networks Using a Lagrangean Heuristic on an In-graph Based Model}},
booktitle = {Proceedings of the International Network Optimization Conference, INOC 2005},
year = {2005},
pages = {702--},
publisher = {University of Lisbon},
address = {Lisbon, Portugal},
}
```

We present a new algorithm for monotonic regression in one or more explanatory variables. Formally, our method generalises the well-known PAV (pool-adjacent-violators) algorithm from fully to partially ordered data. The computational complexity of our algorithm is *O*(*n*^{2}). The goodness-of-fit to observed data is much closer to optimal than for simple averaging techniques.

```
@inproceedings{diva2:244647,
author = {Burdakov, Oleg and Grimvall, Anders and Hussian, Mohamed},
title = {{A generalised PAV algorithm for monotonic regression in several variables}},
booktitle = {COMPSTAT. Proceedings in Computational Statistics},
year = {2004},
pages = {761--767},
publisher = {Physica-Verlag/Springer},
address = {Heidelberg, NY},
}
```

Monotonic regression is a non-parametric method that is designed especially for applications in which the expected value of a response variable increases or decreases in one or more explanatory variables. Here, we show how the recently developed generalised pool-adjacent-violators (GPAV) algorithm can greatly facilitate the assessment of trends in time series of environmental quality data. In particular, we present new methods for simultaneous extraction of a monotonic trend and seasonal components, and for normalisation of environmental quality data that are influenced by random variation in weather conditions or other forms of natural variability. The general aim of normalisation is to clarify the human impact on the environment by suppressing irrelevant variation in the collected data. Our method is designed for applications that satisfy the following conditions: (i) the response variable under consideration is a monotonic function of one or more covariates; (ii) the anthropogenic temporal trend is either increasing or decreasing; (iii) the seasonal variation over a year can be defined by one increasing and one decreasing function. Theoretical descriptions of our methodology are accompanied by examples of trend assessments of water quality data and normalisation of the mercury concentration in cod muscle in relation to the length of the analysed fish.

```
@inproceedings{diva2:244646,
author = {Hussian, Mohamed and Grimvall, Anders and Burdakov, Oleg},
title = {{Monotonic regression for assessment of trends in environmental quality data}},
booktitle = {European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS},
year = {2004},
pages = {1--12},
publisher = {University of Jyväskylä, Department of Mathematical Information Technology},
address = {Jyväskylä},
}
```

We consider the problem of minimizing the distance from a given *n*-dimensional vector to a set defined by constraints of the form *x*_{i} ≤ *x*_{j}. Such constraints induce a partial order of the components *x*_{i}, which can be illustrated by an acyclic directed graph. This problem is known as the isotonic regression (IR) problem. It has important applications in statistics, operations research and signal processing. Most of the applied IR problems are characterized by a very large value of *n*. For such large-scale problems, it is of great practical importance to develop algorithms whose complexity does not rise with *n* too rapidly. The existing optimization-based algorithms and statistical IR algorithms have either too high computational complexity or too low accuracy of the approximation to the optimal solution they generate. We introduce a new IR algorithm, which can be viewed as a generalization of the Pool-Adjacent-Violator (PAV) algorithm from completely to partially ordered data. Our algorithm combines both low computational complexity *O*(*n*^{2}) and high accuracy. This allows us to obtain sufficiently accurate solutions to IR problems with thousands of observations.

```
@inproceedings{diva2:244645,
author = {Burdakov, Oleg and Sysoev, Oleg and Grimvall, Anders and Hussian, Mohamed},
title = {{An algorithm for isotonic regression problems}},
booktitle = {European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS},
year = {2004},
pages = {1--9},
publisher = {University of Jyväskylä},
address = {Jyväskylä},
}
```

```
@inproceedings{diva2:244514,
author = {Razmara, Gholamreza and Lindberg, Per Olov},
title = {{The Prize Collecting Connected Subgraph Problem - A New NP-Hard Problem Arising in Snow Removal Routing}},
booktitle = {Operations Research 2004, 2004},
year = {2004},
publisher = {Springer},
address = {Berlin},
}
```

```
@inproceedings{diva2:243548,
author = {Westerlund, Andreas},
title = {{Tree-relaxations of relatives to the Traveling Salesman Problem}},
booktitle = {Workshop i tillämpad matematik, 2004},
year = {2004},
}
```

```
@inproceedings{diva2:243066,
author = {Holmberg, Kaj and Broström, Peter},
title = {{Stronger Necessary Conditions for the Existence of a Compatible OSPF Metric}},
booktitle = {Nordic MPS 04, 2004},
year = {2004},
}
```

```
@inproceedings{diva2:243065,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Determining the Non-Existence of a Compatible OSPF Metric}},
booktitle = {Nordic MPS 2004, 2004},
year = {2004},
}
```

```
@inproceedings{diva2:243057,
author = {Rönnqvist, Mikael and Frisk, Mikael and Forsberg, Mathias},
title = {{FlowOpt - A flexible decision support tool for strategic and tactical planning in forestry}},
booktitle = {TRISTAN 2004, Triennial Symposium on Transportation Analysis, 2004},
year = {2004},
}
```

```
@inproceedings{diva2:243056,
author = {Rönnqvist, Mikael},
title = {{Transportation planning with several transportation modes}},
booktitle = {CORS/INFORMS Joint International Meeting, 2004},
year = {2004},
}
```

```
@inproceedings{diva2:243055,
author = {Rönnqvist, Mikael},
title = {{Optimization and Transportation Scheduling, The Grand Chateau National Park, New Zealand, September 8-10, 2004. (Experiences from developing a decision}},
booktitle = {Optimization and Transportation Scheduling},
year = {2004},
}
```

```
@inproceedings{diva2:243054,
author = {Rönnqvist, Mikael},
title = {{FlowOpt - Ett flexibelt planeringsverktyg för skoglig logistik}},
booktitle = {Skogskonferensen},
year = {2004},
}
```

```
@inproceedings{diva2:243053,
author = {Forsberg, Mathias and Frisk, Mikael and Rönnqvist, Mikael},
title = {{FlowOpt - A new decision support system in the forest supply chain}},
booktitle = {NOFOMA 2004},
year = {2004},
}
```

```
@inproceedings{diva2:242866,
author = {Lindberg, Per Olov and Engelson, L.},
title = {{Convexification of the Traffic Equilibrium Problem with Social Marginal Cost Tolls}},
booktitle = {Operations Research 2003},
year = {2004},
pages = {141--},
publisher = {Springer},
address = {Heidelberg},
}
```

```
@inproceedings{diva2:244525,
author = {Golbaharan, Nima and Lindberg, Per Olov and Göthe-Lundgren, Maud},
title = {{Optimal Routing of Snowplows - A Column generation Approach}},
booktitle = {Operations Research 2002},
year = {2003},
pages = {199--},
publisher = {Springer},
address = {Heidelberg},
}
```

```
@inproceedings{diva2:243258,
author = {Grimvall, Anders and Hussian, Mohamed and Burdakov, Oleg},
title = {{Isotonic regression and normalisation of environmental quality data}},
booktitle = {TIES The International Environmetrics Society 2003},
year = {2003},
}
```

```
@inproceedings{diva2:242931,
author = {Westerlund, Andreas and Göthe-Lundgren, Maud and Larsson, Torbjörn},
title = {{Mathematical Formulations of the Heterogeneous Fleet Vehicle Routing Problem}},
booktitle = {ROUTE 2003, International Workshop on Vehicle Routing, Intermodal Transport and related areas},
year = {2003},
}
```

```
@inproceedings{diva2:242930,
author = {Rönnqvist, Mikael},
title = {{Integrated logistics management in the forestry supply chain}},
booktitle = {Symposium for Systems Analysis in Forest Resources},
year = {2003},
}
```

```
@inproceedings{diva2:242929,
author = {Rönnqvist, Mikael},
title = {{OR in forestry - 12 open problems}},
booktitle = {Symposium for Systems Analysis in Forest Resources},
year = {2003},
}
```

```
@inproceedings{diva2:242928,
author = {Rönnqvist, Mikael},
title = {{Optimization in forestry}},
booktitle = {18th International Symposium on Mathematical Programming},
year = {2003},
}
```

```
@inproceedings{diva2:242903,
author = {Rönnqvist, Mikael},
title = {{Laps Care - An operational system for staff planning in home care}},
booktitle = {5th EURO/INFORMS Joint International Meeting},
year = {2003},
}
```

```
@inproceedings{diva2:242901,
author = {Rönnqvist, Mikael},
title = {{A new support system for tactical transportation planning}},
booktitle = {5th EURO/INFORMS Joint International Meeting},
year = {2003},
}
```

```
@inproceedings{diva2:242899,
author = {Rönnqvist, Mikael},
title = {{Integrated logistics management in the forest supply chain}},
booktitle = {2nd Forest Engineering Conference},
year = {2003},
}
```

```
@inproceedings{diva2:242900,
author = {Rönnqvist, Mikael},
title = {{Integrated routing and transportation planning}},
booktitle = {ROUTE2003, International workshop on vehicle routing and multi-modal transportation},
year = {2003},
}
```

```
@inproceedings{diva2:242898,
author = {Rönnqvist, Mikael},
title = {{Transportation and route planning}},
booktitle = {2nd Forest Engineering Conference},
year = {2003},
}
```

```
@inproceedings{diva2:242897,
author = {Rönnqvist, Mikael},
title = {{Experiences with supply chain management in Swedish forest industries}},
booktitle = {SMARTLOG, Smart logistics in dynamic value-chains},
year = {2003},
}
```

```
@inproceedings{diva2:242896,
author = {Rönnqvist, Mikael},
title = {{Kundanpassad virkesproduktion}},
booktitle = {Logistikkonferensen},
year = {2003},
}
```

```
@inproceedings{diva2:242895,
author = {Rönnqvist, Mikael},
title = {{Integration in the wood chain towards customer optimized timber - A case study}},
booktitle = {IUFRO All Division 5 Conference},
year = {2003},
}
```

```
@inproceedings{diva2:242894,
author = {Rönnqvist, Mikael},
title = {{Forestry planning using OR models in Sweden}},
booktitle = {Workshop Taller Nucleo Milenium: Complex engineering systems},
year = {2003},
}
```

```
@inproceedings{diva2:242893,
author = {Andersson, Per-Åke},
title = {{Multiyear planning of public road maintenance}},
booktitle = {GOR2003},
year = {2003},
}
```

```
@inproceedings{diva2:242890,
author = {Andersson, Per-Åke},
title = {{CDU:R2 Optimeringsmetoder för resursfördelning inom Dr\&Uh-verksamhet väg}},
booktitle = {CDU-dagen},
year = {2003},
}
```

```
@inproceedings{diva2:242889,
author = {Palmgren, Myrna and Rönnqvist, Mikael and Ryan, David},
title = {{Solving truck scheduling problems with branch and price}},
booktitle = {ISMP 2003},
year = {2003},
}
```

```
@inproceedings{diva2:242882,
author = {Palmgren, Myrna and Rönnqvist, Mikael and Ryan, David},
title = {{A hybrid column generation method for log truck scheduling}},
booktitle = {ROUTE2003},
year = {2003},
}
```

```
@inproceedings{diva2:242871,
author = {Palmgren, Myrna and Rönnqvist, Mikael},
title = {{RuttOpt - an operative transportation planning system}},
booktitle = {The 2003 Symposium for Systems Analysis in Forest Resources},
year = {2003},
}
```

```
@inproceedings{diva2:242870,
author = {Lindberg, Per Olov and Golbaharan, Nima},
title = {{A Prize Collecting Connected Subgraph Problem}},
booktitle = {18th Int. Symp. on Math. Programming},
year = {2003},
}
```

```
@inproceedings{diva2:242868,
author = {Lindberg, Per Olov and Engelson, L.},
title = {{Samhällsekonomiskt optimala biltullar för trafikanter med olika tidsvärden}},
booktitle = {Transportforum},
year = {2003},
}
```

```
@inproceedings{diva2:242863,
author = {Holmberg, Kaj},
title = {{Mean Value Cross Decomposition for Nonlinear Convex Problems}},
booktitle = {18th International Symposium on Mathematical Programming},
year = {2003},
}
```

```
@inproceedings{diva2:242834,
author = {Erlander, Sven and Lundgren, Jan},
title = {{Cost Minimizing Behavior in Discrete Choice Modeling}},
booktitle = {50th Annual North American meetings of the Regional Science Association International},
year = {2003},
}
```

```
@inproceedings{diva2:242833,
author = {Daneva, Maria and Lindberg, Per Olov},
title = {{Conjugate Direction Frank-Wolfe Methods with Applications to Traffic Assignment Problem}},
booktitle = {International Symposium on Mathematical Programming},
year = {2003},
}
```

```
@inproceedings{diva2:242832,
author = {Burdakov, Oleg},
title = {{On Limited Memory Methods with Shape Changing Trust Region}},
booktitle = {The 18th International Symposium on Mathematical Programming},
year = {2003},
}
```

```
@inproceedings{diva2:242830,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Internet Protocol Network Design and Routing}},
booktitle = {18th International Symposium on Mathematical Programming},
year = {2003},
}
```

```
@inproceedings{diva2:242696,
author = {Bredström, David and Rönnqvist, Mikael},
title = {{Ship routing and scheduling in the pulp mill industry}},
booktitle = {18th International Symposium on Mathematical Programming},
year = {2003},
}
```

```
@inproceedings{diva2:242694,
author = {Blomvall, Jörgen},
title = {{Optimization based derivative pricing in incomplete and imperfect markets}},
booktitle = {International Symposium on Mathematical Programming},
year = {2003},
}
```

```
@inproceedings{diva2:242690,
author = {Blomvall, Jörgen},
title = {{Optimization based derivative pricing in incomplete and imperfect markets}},
booktitle = {Informs Annual Meeting 2003},
year = {2003},
}
```

```
@inproceedings{diva2:242663,
author = {Forsberg, Mattias and Rönnqvist, Mikael},
title = {{Decision support system/tools: Integrated logistics management in the forest supply chain}},
booktitle = {2nd Forest Engineering Conference},
year = {2003},
pages = {64--73},
}
```

```
@inproceedings{diva2:242664,
author = {Gunnarsson, Helene and Lundgren, Jan and Rönnqvist, Mikael},
title = {{Decision support system/tools: Optimization of transportation, storage and chipping of forest fuel}},
booktitle = {2nd Forest Engineering Conference},
year = {2003},
pages = {74--82},
}
```

```
@inproceedings{diva2:242662,
author = {Eriksson, Jonas and Rönnqvist, Mikael},
title = {{Decision support system/tools: Transportation and route planning}},
booktitle = {2nd Forest Engineering Conference},
year = {2003},
pages = {48--57},
}
```

Roll cutting has attracted a great deal of research attention. However, most of the development of models and methods in academic research has not considered some key industrial requirements. These are practical production aspects that often create intractable optimization problems. In this paper, we study two important problems and show how they can be solved efficiently in an industrial setting: operative roll cutting, where the number of reels and of different patterns is to be minimized, and real-time cutting, where defects and quality restrictions on ordered rolls are included.

```
@inproceedings{diva2:571203,
author = {Bergman, Johan and Flisberg, Patrik and Rönnqvist, Mikael},
title = {{Roll cutting at paper mills}},
booktitle = {Proceedings of the Control Systems 2002},
year = {2002},
pages = {159--163},
}
```

In the congested cities of today, congestion pricing is a tempting alternative. With a single user class, Beckmann et al. already showed that "system optimal" traffic flows can be achieved by social marginal cost (SMC) pricing, where users have to pay for the delays they incur on others. However, different user classes can have widely differing time values. Hence, when introducing tolls, one should consider multi-class user equilibria, where the classes have different time values. In the single-class case, the equilibrium conditions can be viewed as optimality conditions of an equivalent optimization problem. In the multi-class case, however, Netter claims that this is not possible. We show that, depending on the formulation, the multi-class SMC-pricing equilibrium problem (with different time values) can be stated either as an asymmetric or as a symmetric equilibrium problem. In the latter case, the corresponding optimization problem is in general non-convex. For this non-convex problem, we devise descent methods of Frank-Wolfe type. We apply the methods and study a synthetic case based on Sioux Falls.
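
The "descent methods of Frank-Wolfe type" mentioned above share a basic template: linearize the objective, solve the resulting linear subproblem over the feasible set, and step toward its solution. A minimal sketch on a toy problem over the probability simplex (illustrative only; not the authors' method or network model):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=1000):
    """Generic Frank-Wolfe method on the probability simplex.
    Each iteration minimizes the linearized objective <grad(x), s>
    over the feasible set (on a simplex the minimizer is a vertex)
    and takes a convex-combination step toward it."""
    x = x0.copy()
    for k in range(n_iter):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # linear subproblem: best vertex
        gamma = 2.0 / (k + 2.0)      # classic diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# toy convex objective f(x) = ||x - t||^2 with optimum t inside the simplex
t = np.array([0.2, 0.3, 0.5])
grad = lambda x: 2.0 * (x - t)
x = frank_wolfe_simplex(grad, np.array([1.0, 0.0, 0.0]))
```

The appeal in traffic assignment is that the linear subproblem decomposes into shortest-path computations; on a non-convex objective, as in the abstract, the same template yields a descent method rather than a guaranteed global optimizer.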

```
@inproceedings{diva2:23505,
author = {Engelson, Leonid and Lindberg, Per Olov and Daneva, Maria},
title = {{Multi-Class User Equilibria under Social Marginal Cost Pricing}},
booktitle = {Operations Research 2002},
year = {2002},
pages = {174--179},
}
```

## Theses

Column-oriented models are today common in the field of discrete optimization, and there is an increasing interest in using such models as a basis for heuristic solution methods. The common theme of this work is to explore some possibilities to integrate heuristic principles and column-oriented models for discrete optimization problems.

In the first paper, we consider a resource allocation problem for cellular systems. We propose a strong column-oriented formulation and a corresponding column generation method, as well as an enhanced column generation scheme for this problem. The enhanced scheme is composed of a stabilization technique, an approximate column generation principle, and, for finding integer solutions, a heuristic that is embedded in the column generation scheme.

The second paper provides a new and strong convexified formulation of the fixed charge transportation problem. This formulation is obtained by integrating the concepts of Lagrangian decomposition and column generation. It is shown both theoretically and practically that this integration yields a formulation which is stronger than three other convexified formulations of the problem.

```
@phdthesis{diva2:921971,
author = {Zhao, Yixin},
title = {{On the Integration of Heuristics with Column-Oriented Models for Discrete Optimization}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1764}},
year = {2016},
address = {Sweden},
}
```

This thesis is divided into two parts, each concerned with a specific problem.

The problem under consideration in the first part is to find suitable graph representations, abstractions, cost measures and algorithms for calculating placements of unmanned aerial vehicles (UAVs) such that they can keep one or several static targets under constant surveillance. Each target is kept under surveillance by a surveillance UAV, which transmits information, typically real time video, to a relay UAV. The role of the relay UAV is to retransmit the information to another relay UAV, which retransmits it again to yet another UAV. This chain of retransmission continues until the information eventually reaches an operator at a base station.

When there is a single target, all Pareto-optimal solutions, i.e. all relevant compromises between quality and the number of UAVs required, can be found using an efficient new algorithm. If there are several targets, the problem becomes a variant of the Steiner tree problem, and to solve it we adapt an existing algorithm to find an initial tree. Once it is found, we can further improve it using a new algorithm presented in this thesis.

The second problem is optimization of time-consuming problems where the objective function is seen as a black box, to which the input parameters are sent and from which a function value is returned. This has the important implication that no gradient or Hessian information is available. Such problems are common when simulators are used to perform advanced calculations such as crash test simulations of cars, dynamic multibody simulations, etc. It is common that a single function evaluation takes several hours.

Algorithms for solving such problems can be broadly divided into direct search algorithms and model-building algorithms. The first kind evaluates the objective function directly, whereas the second kind builds a model of the objective function, which is then optimized in order to find a new point where the objective function is believed to have a good value. The objective function is then evaluated at that point.

Since the objective function is very time-consuming, it is common to focus on minimizing the number of function evaluations. However, this completely disregards the possibility to perform calculations in parallel, and to exploit this we investigate different ways parallelization can be used in model-building algorithms. Some of the ways to do this are to use several starting points, to generate several new points in each iteration, to use new ways of predicting a point's value, and more.

We have implemented the parallel extensions in one of the state-of-the-art algorithms for derivative-free optimization and report results from testing on synthetic benchmarks as well as from solving real industrial problems.

```
@phdthesis{diva2:695431,
author = {Olsson, Per-Magnus},
title = {{Methods for Network Optimization and Parallel Derivative-free Optimization}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1580}},
year = {2014},
address = {Sweden},
}
```

Nowadays, large-scale optimization problems are among the most challenging. Any progress in developing methods for large-scale optimization results in solving important applied problems more effectively. Limited memory methods and trust-region methods represent two efficient approaches used for solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems. For this reason, limited memory methods are usually combined with a line search. We develop new limited memory trust-region algorithms for large-scale unconstrained optimization. They are competitive with the traditional limited memory line-search algorithms.

In this thesis, we consider applied optimization problems originating from the design of filter networks. Filter networks represent an efficient tool in medical image processing. The approach is based on replacing a set of dense multidimensional filters by a network of smaller sparse filters called sub-filters. This allows for improving image processing time, while maintaining image quality and the robustness of image processing.

Design of filter networks is a nontrivial procedure that involves three steps: 1) choosing the network structure, 2) choosing the sparsity pattern of each sub-filter and 3) optimizing the nonzero coefficient values. So far, steps 1 and 2 were mainly based on the individual expertise of network designers and their intuition. Given a sparsity pattern, the choice of the coefficients at step 3 is related to solving a weighted nonlinear least-squares problem. Even in the case of sequentially connected filters, the resulting problem is of a multilinear least-squares (MLLS) type, which is a non-convex large-scale optimization problem. This is a very difficult global optimization problem that may have a large number of local minima, each of them singular and non-isolated. It is characterized by a large number of decision variables, especially for 3D and 4D filters.
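
To see the multilinear structure in its smallest form, consider the bilinear rank-1 fit min ||M - u v^T||_F: fixing one factor makes the problem an ordinary least-squares problem in the other, which is the basis of alternating schemes. This is only an illustrative local-search sketch (the thesis develops a global approach), and note that the factors are recovered only up to rescaling:

```python
import numpy as np

def bilinear_als(M, n_iter=50, seed=0):
    """Alternating least squares for the rank-1 fit min ||M - u v^T||_F,
    a tiny instance of a multilinear least-squares problem: with one
    factor fixed, the problem is linear in the other and has a
    closed-form solution."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(M.shape[0])
    v = rng.standard_normal(M.shape[1])
    for _ in range(n_iter):
        u = M @ v / (v @ v)      # LS update for u with v fixed
        v = M.T @ u / (u @ u)    # LS update for v with u fixed
    return u, v

# exact rank-1 data: ALS recovers it (up to rescaling of the factors)
M = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
u, v = bilinear_als(M)
```

The rescaling ambiguity (u, v) → (c·u, v/c) is one reason the minima of MLLS problems are non-isolated, as noted above.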

We develop an effective global optimization approach to solving the MLLS problem that significantly reduces the computational time. Furthermore, we develop efficient methods for optimizing the sparsity of individual sub-filters in filter networks of a more general structure. This approach offers practitioners a means of finding a proper trade-off between the image processing quality and time. It also allows for improving the network structure, which automates some stages of designing filter networks.

```
@phdthesis{diva2:692950,
author = {Zikrin, Spartak},
title = {{Large-Scale Optimization Methods with Application to Design of Filter Networks}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1561}},
year = {2014},
address = {Sweden},
}
```

One out of eight deaths throughout the world is due to cancer. Developing new treatments and improving existing treatments is hence of major importance. In this thesis we have studied how mathematical optimization can be used to improve an existing treatment method: high-dose-rate (HDR) brachytherapy.

HDR brachytherapy is a radiation modality used to treat tumours of, for example, the cervix, prostate, breasts, and skin. In HDR brachytherapy, catheters are implanted into or close to the tumour volume. A radioactive source is moved through the catheters, and by adjusting where the catheters are placed, called the catheter positioning, and how the source is moved through the catheters, called the dwelling time pattern, the dose distribution can be controlled.

By constructing an individualized catheter positioning and dwelling time pattern, called a dose plan, based on each patient's anatomy, it is possible to improve the treatment result. Mathematical optimization has during the last decade been used to aid in creating individualized dose plans. The dominating optimization model for this purpose is a linear penalty model. This model considers only the dwelling time pattern within already implanted catheters, and minimizes a weighted deviation from dose intervals prescribed by a physician.

In this thesis we show that the distribution of the basic variables in the linear penalty model implies that only dwelling time patterns that have certain characteristics can be optimal. These characteristics cause troublesome inhomogeneities in the plans, and although various measures for mitigating these are already available, it is of fundamental interest to understand their cause.

We have also shown that the relationship between the objective function of the linear penalty model and the measures commonly used for evaluating the quality of the dose distribution is weak. This implies that even if the model is solved to optimality, there is no guarantee that the generated plan is optimal with respect to clinically relevant objectives, or even near-optimal. We have therefore constructed a new model for optimizing the dwelling time pattern. This model approximates the quality measures by the concept of conditional value-at-risk, and we show that the relationship between our new model and the quality measures is strong. Furthermore, the new model generates dwelling time patterns that yield high-quality dose distributions.
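
For readers unfamiliar with conditional value-at-risk: at level alpha it is, informally, the average of the worst (1 - alpha) tail of a loss distribution. A minimal empirical sketch (the function and this tail-averaging form are illustrative; inside optimization models CVaR is commonly handled via the Rockafellar-Uryasev linear reformulation rather than by sorting):

```python
def cvar(losses, alpha):
    """Empirical conditional value-at-risk at level alpha: the average
    of the worst (1 - alpha) fraction of the loss observations."""
    srt = sorted(losses, reverse=True)
    k = max(1, int(round((1.0 - alpha) * len(srt))))
    return sum(srt[:k]) / k

# mean of the worst 20% of ten losses: (10 + 9) / 2
print(cvar([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], alpha=0.8))  # → 9.5
```

Because it focuses on the tail, CVaR is a natural surrogate for dose-volume-style quality measures that constrain how much of a structure may receive extreme doses.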

Combining optimization of the dwelling time pattern with optimization of the catheter positioning yields a problem for which it is rarely possible to find a proven optimal solution within a reasonable time frame. We have therefore developed a variable neighbourhood search heuristic that outperforms a state-of-the-art optimization software (CPLEX). We have also developed a tailored branch-and-bound algorithm that is better at improving the dual bound than a general branch-and-bound algorithm. This is a step towards the development of a method that can find proven optimal solutions to the combined problem within a reasonable time frame.

```
@phdthesis{diva2:658212,
author = {Holm, Åsa},
title = {{Mathematical Optimization of HDR Brachytherapy}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1550}},
year = {2013},
address = {Sweden},
}
```

The current work is devoted to estimating the term structure of interest rates based on a generalized optimization framework. To fix the ideas of the subject, we introduce representations of the term structure as they are used in finance: the yield curve, the discount curve and the forward rate curve.

Yield curves are used in empirical research in finance and macroeconomics to support financial decisions made by governments and/or private financial institutions. When governments (or financial corporations) need funding, they issue to the public (i.e. the market) debt securities (bills, bonds, notes, etc.) which are sold at the discount rate at the settlement date and promise the face value of the security at the redemption date, known as the maturity date. Bills, notes and bonds are usually sold with maximum maturities of 1 year, 10 years and 30 years respectively.

Let us assume that the government issues to the market zero-coupon bonds, which provide a single payment at the maturity of each bond. To determine the price of the security at the time of settlement, a single discount factor is used. Thus, the yield can be defined as the discount rate which makes the present value of the security issued (the zero-coupon bond) equal to its initial price. The yield curve describes the relationship between a particular yield and a bond's maturity. In general, given a certain number of bonds with different times to maturity, the yield curve will describe the one-to-one relationship between the bond yields and their corresponding times to maturity. For a realistic yield curve, it is important to use only bonds from the same class of issuer, or securities having the same degree of liquidity, when plotting the yields.

Discount factors, used to price bonds, are functions of the time to maturity. Given that yields are positive, these functions are assumed to be monotonically decreasing as the time to maturity increases. Thus, a discount curve is simply the graph of discount factors for different maturities associated with different securities.

Another useful curve uses the forward rate function, which can be deduced from both the discount factor and the yield function. The forward rate is the rate of return for an investment that is agreed upon today but which starts at some time in the future and provides payment at some time in the future as well. When forward rates are used, the resulting curve is referred to as the forward rate curve. Thus, any of these curves, that is, the yield curve, the discount curve or the forward rate curve, can be used to represent what is known as the term structure of interest rates. The shapes that the term structure of interest rates can assume include upward sloping, downward sloping, flat or humped, depending on the state of the economy. When the expectations of market participants are incorporated in the construction of these curves representing the term structure, their shapes capture and summarize the cost of credit and the risks associated with every security traded.
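
The three representations are linked by standard textbook identities; under continuous compounding they can be sketched as follows (illustrative definitions only, not the thesis's estimation framework):

```python
import math

def discount_from_yield(y, t):
    """Discount factor under continuous compounding: d(t) = exp(-y*t)."""
    return math.exp(-y * t)

def yield_from_price(price, face, t):
    """Zero-coupon yield: the rate that makes the discounted face value
    equal to the observed price, y = -ln(price/face) / t."""
    return -math.log(price / face) / t

def forward_rate(d1, t1, d2, t2):
    """Implied forward rate between t1 and t2 from discount factors:
    f = (ln d(t1) - ln d(t2)) / (t2 - t1)."""
    return (math.log(d1) - math.log(d2)) / (t2 - t1)

y = yield_from_price(price=90.0, face=100.0, t=2.0)  # ~5.3% per year
d1 = discount_from_yield(0.04, 1.0)
d2 = discount_from_yield(0.05, 2.0)
f = forward_rate(d1, 1.0, d2, 2.0)                   # rate locked in for year 1 -> 2
```

Since each quantity determines the others, estimating any one of the three curves fixes the whole term structure; the difficulty discussed below is that only a few maturities are actually observed.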

However, constructing these curves, and choosing an appropriate representation of the term structure to use, is not a straightforward task. This is due to the complexity of the market data, precisely, the scarcity of zero-coupon bonds, which constitute the backbone of the term structure. The market often provides coupons alongside market security prices for a small number of maturities. This implies that, for the entire maturity spectrum, yields cannot be observed on the market. Based on available market data, yields must be estimated using traditional interpolation methods. To this end, polynomial splines as well as parsimonious functions are the methods mostly used by financial institutions and in research in finance. However, it is observed in the literature that these methods suffer from shape constraints which cause them to produce yield curves that are not realistic with respect to the market observations. Precisely, the yield curves produced by these methods are characterized by an unrealistic fit of the market data, either in the short end or in the long end of the term structure of interest rates.

To fill the gap, the current research models the yield curve using a generalized optimization framework. The method is not shape constrained, which implies that it can adapt to any shape the yield curve can take across the entire maturity spectrum. While estimating the yield curve using this method in comparison with traditional methods on the Swedish and US markets, it is shown that any other traditional method used is a special case of the generalized optimization framework. Moreover, it is shown that, for a certain market consistency, the method produces lower variances than any of the traditional methods tested. This implies that the method produces forward rate curves of higher quality compared to the existing traditional methods.

Interest rate derivatives are instruments whose prices depend on, or are derived from, the price of other instruments. Derivative instruments that are extensively used include the forward rate agreement (FRA) contract, where the forward rate is used, and the interest rate swap (IRS), where the LIBOR rate is used as the floating rate. These instruments will only be used to build up the term structure of interest rates. Since the liquidity crisis in 2007, it has been observed that discrepancies in basis spread between interest rates applied to different interest rate derivatives have grown so large that a single discount curve is no longer appropriate for pricing securities consistently. It has been suggested that the market needs new methods for multiple yield curve estimation to price securities consistently with the market. As a response, the generalized optimization framework is extended to multiple yield curve estimation. We show that, unlike the cubic spline for instance, which is among the most used traditional methods, the generalized framework can produce multiple yield curves and tenor premium curves that are altogether smooth and realistic with respect to the market observations.

The U.S. Treasury market is, by size and importance, a leading market which is considered a benchmark for most fixed-income securities that are traded worldwide. However, existing U.S. Treasury yield curves that are used in the market are of poor quality, since they have been estimated by traditional interpolation methods which are shape constrained. This implies that the market prices they imply contain a lot of noise and, as such, are not safe to use. In this work, we use the generalized optimization framework to estimate high-quality forward rates for the U.S. Treasury yield curve. Using efficient frontiers, we show that the method can produce low pricing errors with low variance as compared to the least squares methods that have been used to estimate U.S. Treasury yield curves.

We finally use the high-quality U.S. Treasury forward rate curve estimated by the generalized optimization framework as input to the essentially affine model to capture the randomness property in interest rates and the time-varying term premium. This premium is simply a compensation that is required for additional risks that investors are exposed to. To determine optimal investment in the U.S. Treasury market, a two-stage stochastic programming model without recourse is proposed, which models borrowing, shorting and proportional transaction costs. It is found that the proposed model can provide growth of wealth in the long run. Moreover, its Sharpe ratio is better than the market index and its Jensen's alpha is positive. This implies that the proposed stochastic programming model can produce portfolios that perform better than the market index.

```
@phdthesis{diva2:647662,
author = {Ndengo Rugengamanzi, Marcel},
title = {{Term structure estimation based on a generalized optimization framework}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1539}},
year = {2013},
address = {Sweden},
}
```

Today there are many aspects of model-based planning, far more than just 20 years ago. Data is no longer the problem; data is everywhere. The big issue is to understand how to take advantage of data in decision making, so a growing focus is now on modeling.

This work is devoted to this task and can be seen as a three-part study. First, a general problem of multi-resource routing with sequence-dependent costs is studied, both in terms of models and in the development of methods to solve this class of problems efficiently. Sequence-dependent costs among resources arise in situations where, for instance, one resource is allocated to a task that influences another task and makes it easier, in the sense that it can be conducted at a lower cost, in shorter time, or with some other reduced effort. This is a useful building block in military decision making when making plans for troops, attacks or missions in general.

The next part is funded by Vinnova via the research project "Effect Oriented Planning in Dynamic Scenarios", and deals with a military ground attack problem with simultaneous attacks against a plurality of targets. This part deals with the difficulties of the attacker-defender problem, which is modeled as a nonlinear mixed-integer program. Suggestions are given on how to refine and transform this into robust solution methods.

The last part, also included in the Vinnova research project, deals with mission support modeling of air-to-ground missions involving multiple aircraft and a plurality of targets. In this case, sequencing is most important, and a strong effort has been put into the understanding and transformation of the problem into models and methods.

In these last two parts, the implementations of models and heuristics, as well as the computer runs and simulations, originate from the work in [19] and [20], which has been an invaluable contribution to this work.

The results are very promising: fast execution and sufficient planning accuracy have drawn attention as a foundation for future work and research in model-based decision making applicable to industry needs and ambitions.

```
@phdthesis{diva2:622880,
author = {Lundberg, Kristian},
title = {{Effect Oriented Planning in Military Mission Support Systems:
Models and Heuristic Approaches}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1544}},
year = {2013},
address = {Sweden},
}
```

The Internet is constantly growing but the available resources, i.e. bandwidth, are limited. Using bandwidth efficiently to provide a high quality of service to users is referred to as traffic engineering, and it is of utmost importance. Traffic in IP networks is commonly routed along shortest paths with respect to auxiliary link weights, e.g. using the OSPF or IS-IS protocol. Here, shortest path routing is controlled only indirectly, via the link weights, and it is therefore crucial to have a profound understanding of the shortest path routing mechanism in order to solve traffic engineering problems in IP networks. The theoretical aspects of such problems have received little attention.

Traffic engineering in IP networks leads to very difficult optimization problems and the key element in exact methods for such problems is an inverse shortest path routing problem. It is used to answer the fundamental question of whether there exist link weights that reproduce a set of tentative paths. Path sets that cannot be obtained correspond to routing conflicts. Valid inequalities are instrumental to prohibit such routing conflicts.

We analyze the inverse shortest path routing problem thoroughly. We show that the problem is NP-complete, contrary to what is claimed in the literature, and propose a stronger relaxation. Its Farkas system is used to develop a novel and compact formulation based on cycle bases, and to classify and characterize minimal infeasible subsystems. Valid inequalities that prevent routing conflicts are derived and separation algorithms are developed for such inequalities.

We also consider several approaches to modelling traffic engineering problems, especially Dantzig–Wolfe reformulations and extended formulations. We give characterizations of facets for some relaxations of traffic engineering problems.

Our results contribute to the theoretical understanding, modelling and solution of problems related to traffic engineering in IP networks.

```
@phdthesis{diva2:571482,
author = {Call, Mikael},
title = {{Shortest Path Routing Modelling, Infeasibility and Polyhedra}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1486}},
year = {2012},
address = {Sweden},
}
```

The thesis consists of five papers. The first three deal with topics within costly global optimization and the last two concern military decision support systems.

The first part of the thesis addresses so-called costly problems, where the objective function is seen as a “black box” to which input parameter values are sent and from which a function value is returned. This means in particular that no information about derivatives is available. The black box could, for example, solve a large system of differential equations or carry out a time-consuming simulation, where a single function evaluation can take several hours! This is the reason for describing such problems as costly and why they require customized algorithms. The goal is to construct algorithms that find a (near-)optimal solution using as few function evaluations as possible. A good example of a real-life application comes from the automotive industry, where the development of new engines utilizes advanced mathematical models that are governed by a dozen key parameters. The objective is to optimize the engine by changing these parameters so that it becomes as energy efficient as possible while still meeting all sorts of demands on strength and external constraints. The first three papers describe algorithms and implementation details for these costly global optimization problems.

The second part deals with military mission planning, that is, problems that concern logistics, allocation and deployment of military resources. Given a fleet of resources, the decision problem is to allocate the resources against the enemy so that the overall mission success is optimized. We focus on the problem of the attacker and consider two separate problem classes. In the fourth paper we introduce an effect-oriented planning approach to an advanced weapon-target allocation problem, where the objective is to maximize the expected outcome of a coordinated attack. We present a mathematical model together with efficient solution techniques. Finally, in the fifth paper, we introduce a military aircraft mission planning problem, where an aircraft fleet should attack a given set of targets. Aircraft routing is an essential part of the problem, and the objective is to maximize the expected mission success while minimizing the overall mission time. The problem is stated as a generalized vehicle routing model with synchronization and precedence side constraints.

```
@phdthesis{diva2:524846,
author = {Quttineh, Nils-Hassan},
title = {{Models and Methods for Costly Global Optimization and Military Decision Support Systems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1450}},
year = {2012},
address = {Sweden},
}
```

Integer programming can be used to provide solutions to complex decision and planning problems occurring in a wide variety of situations. The application of integer programming to solve real-world problems requires a modelling phase, in which the problem at hand is translated into a mathematical description, and a solution phase, which aims at developing methods for producing solutions to the mathematical formulation of the problem.

The first two papers of this thesis have their focus on the modelling phase, and the application of integer programming for solving nurse scheduling problems. Common to both papers is that the research has been conducted in collaboration with health care representatives, and that the models presented can be used for providing schedules that can be used by nurses. In the latter paper, a meta-heuristic approach is suggested for providing the schedules.

The last three papers address method development and specifically the design of column generation methods. The first of these papers presents optimality conditions that are useful in methods where columns are generated using dual solutions that are not necessarily optimal with respect to a linear programming relaxation, and the usefulness of these conditions is illustrated by examples from the literature.

Many applications of column generation yield master problems of a set partitioning type, and the fourth and fifth papers present methodologies for solving such problems. The characteristics of these methodologies are that all solutions derived are feasible and integral, where the preservation of integrality is a major distinction from other column generation methods presented in the literature.

```
@phdthesis{diva2:512202,
author = {Rönnberg, Elina},
title = {{Contributions within two topics in integer programming:
nurse scheduling and column generation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1421}},
year = {2012},
address = {Sweden},
}
```

Open-pit mining is an operation in which blocks from the ground are dug to extract the ore contained in them; in this process a deeper and deeper pit is formed until the mining operation ends. Mining is often a highly complex industrial operation, with respect to both technological and planning aspects. The latter may involve decisions about which ore to mine and in which order. Furthermore, mining operations are typically capital intensive and long-term, and subject to uncertainties regarding ore grades, future mining costs, and the market prices of the precious metals contained in the ore. Today, most of the high-grade or low-cost ore deposits have already been depleted, and to obtain sufficient profitability in mining operations it is therefore often necessary to achieve operational efficiency with respect to both technological and planning issues.

In this thesis, we study the open-pit design problem, the open-pit mining scheduling problem, and the open-pit design problem with geological and price uncertainty. These problems give rise to (mixed) discrete optimization models that in real-life settings are large scale and computationally challenging.

The open-pit design problem is to find an optimal ultimate contour of the pit, given estimates of ore grades, that are typically obtained from samples in drill holes, estimates of costs for mining and processing ore, and physical constraints on mining precedence and maximal pit slope. As is well known, this problem can be solved as a maximum flow problem in a special network. In a first paper, we show that two well known parametric procedures for finding a sequence of intermediate contours leading to an ultimate one, can be interpreted as Lagrangian dual approaches to certain side-constrained design models. In a second paper, we give an alternative derivation of the maximum flow problem of the design problem.
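The max-flow reduction mentioned above can be illustrated with a small sketch of the classical maximum-closure construction (profitable blocks attached to the source, waste blocks to the sink, precedence as infinite-capacity arcs), using a simple Edmonds-Karp max-flow routine. This is a minimal, hypothetical illustration of the well-known technique, not code or a formulation from the thesis; the function names and the tiny block model are invented for the example.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow; cap is a dict-of-dicts of residual capacities."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual network
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # recover the path, find the bottleneck, update residual capacities
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] = cap[v].get(u, 0) + aug
        flow += aug

def ultimate_pit(values, precedence):
    """Optimal pit contour as a maximum-weight closure, solved by min cut.
    values: block -> net profit of mining it (negative for waste);
    precedence: block -> blocks that must be mined first (those above it)."""
    INF = float("inf")
    cap = defaultdict(dict)
    for b, w in values.items():
        if w > 0:
            cap["S"][b] = w      # profitable block: arc from the source
        elif w < 0:
            cap[b]["T"] = -w     # waste block: arc to the sink
    for b, above in precedence.items():
        for a in above:
            cap[b][a] = INF      # mining b forces mining a
    max_flow(cap, "S", "T")
    # blocks still reachable from the source in the residual network
    # (the source side of the minimum cut) form the optimal pit
    seen, q = {"S"}, deque(["S"])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return {b for b in values if b in seen}
```

For example, a valuable deep block (profit 5) under three waste blocks (cost 1 each) is worth mining, since 5 - 3 > 0, so all four blocks end up in the pit; if its profit were only 2, the optimal pit would be empty.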

We also study the combined open-pit design and mining scheduling problem, which is the problem of simultaneously finding an ultimate pit contour and the sequence in which the parts of the orebody shall be removed, subject to mining capacity restrictions. The goal is to maximize the discounted net profit during the life-time of the mine. We show in a third paper that the combined problem can also be formulated as a maximum flow problem, if the mining capacity restrictions are relaxed; in this case the network however needs to be time-expanded.

In a fourth paper, we provide some suggestions for Lagrangian dual heuristic and time aggregation approaches for the open-pit scheduling problem. Finally, we study the open-pit design problem under uncertainty, which is taken into account by using the concept of conditional value-at-risk. This concept enables us to incorporate a variety of possible uncertainties, especially regarding grades, costs and prices, in the planning process. In real-life situations, the resulting models would, however, become very computationally challenging.

```
@phdthesis{diva2:442025,
author = {Amankwah, Henry},
title = {{Mathematical Optimization Models and Methods for Open-Pit Mining}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1396}},
year = {2011},
address = {Sweden},
}
```

This thesis is concerned with problems related to shortest path routing (SPR) in Internet protocol (IP) networks. In IP routing, all data traffic is routed in accordance with an SPR protocol, e.g. OSPF. That is, the routing paths are shortest paths w.r.t. some artificial metric. This implies that the majority of the Internet traffic is directed by SPR. Since the Internet is steadily growing, efficient utilization of its resources is of major importance. In the operational planning phase the objective is to utilize the available resources as efficiently as possible without changing the actual design, that is, only by re-configuration of the routing. This is referred to as traffic engineering (TE). In this thesis, TE in IP networks and related problems are approached by integer linear programming.

Most TE problems are closely related to multicommodity routing problems and they are regularly solved by integer programming techniques. However, TE in IP networks has not been studied as much, and is in fact a lot harder than ordinary TE problems without IP routing, since the complicating shortest path aspect has to be taken into account. In a TE problem in an IP network the routing is performed in accordance with an SPR protocol that depends on a metric, the so-called set of administrative weights. The major difference between ordinary TE problems and TE in IP networks is that all routing paths must be simultaneously realizable as shortest paths w.r.t. this metric. This restriction implies that the set of feasible routing patterns is significantly reduced and that the only means available to adjust and control the routing is indirect, via the administrative weights.

A constraint generation method for solving TE problems in IP networks is outlined in this thesis. Given an "original" TE problem, the idea is to iteratively generate and augment valid inequalities that handle the SPR aspect of IP networks. These valid inequalities are derived by analyzing the inverse SPR problem. The inverse SPR problem is to decide if a set of tentative routing patterns is simultaneously realizable as shortest paths w.r.t. some metric. When this is not the case, an SPR conflict exists, which must be prohibited by a valid inequality that is then augmented to the original TE problem. To derive strong valid inequalities that prohibit SPR conflicts, a thorough analysis of the inverse SPR problem is first performed. In the end, this allows us to draw conclusions for the design problem, which was the initial primary concern.

```
@phdthesis{diva2:403478,
author = {Call, Mikael},
title = {{Inverse Shortest Path Routing Problems in the Design of IP Networks}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1448}},
year = {2010},
address = {Sweden},
}
```

Integer programming can be used to provide solutions to complex decision and planning problems occurring in a wide variety of situations. Applying integer programming to a real-life problem basically involves a first phase where a mathematical model is constructed, and a second phase where the problem described by the model is solved. While the nature of the challenges involved in the respective two phases differ, the strong relationship between the properties of models, and which methods are appropriate for their solution, links the two phases. This thesis consists of three papers, of which the third one considers the modeling phase, while the first and second ones consider the solution phase.

Many applications of column generation yield master problems of set partitioning type, and the first and second papers present methodologies for solving such problems. The characteristics of the methodologies presented are that all successively found solutions are feasible and integral, where the retention of integrality is a major distinction from other column generation methods presented in the literature.

The third paper concerns nurse scheduling and describes the results of a pilot implementation of a scheduling tool at a Swedish nursing ward. This paper focuses on the practical aspects of modeling and the challenges of providing a solution to a complex real-life problem.

```
@phdthesis{diva2:113882,
author = {Rönnberg, Elina},
title = {{Methods and Applications in Integer Programming:
All-Integer Column Generation and Nurse Scheduling}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1383}},
year = {2008},
address = {Sweden},
}
```

This thesis deals with large-scale industrial problems that can be formulated using mixed integer linear programming (MIP) models. Because of the large problem sizes, it is often not possible to apply standard solution methods, and special techniques must therefore be used. In this thesis, both full optimization and optimization-based heuristic approaches are developed.

The body of this thesis consists of five research papers. In the papers we consider industrial cases based on three applications: production planning at Södra Cell AB, ship routing and distribution at Södra Cell AB, and staff routing and scheduling in the Swedish home care.

We develop two large-scale MIP models for the production-planning problem. For solving it, we use both a direct approach and a combination of column generation and constraint branching. The latter is a technique to control the branching rules in a branch-and-bound framework and takes application-specific properties into account.

For the ship routing problem, we present a MIP model and develop two solution methods. The first is based on an iterative rolling time horizon. In each step a part of the model is solved and later fixed. This is repeated until the full problem is solved. The second approach is to combine a genetic algorithm and linear programming. From the MIP model, we obtain solution bounds rarely present in genetic algorithm frameworks.

In the staff routing problem there are special synchronization constraints and we introduce a new MIP model. An exact solution approach based on column generation and branch and bound is developed. We also suggest a variable fixing heuristic, which we use for solving the more general problem with requirements on equal workload.

In the computational experiments, we use data instances obtained from real-world cases, and generated instances based on real-world cases. The numerical results show that high quality solutions can be obtained within reasonable time limits in all applications.

```
@phdthesis{diva2:17885,
author = {Bredström, David},
title = {{Models and solution methods for large-scale industrial mixed integer programming problems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1071}},
year = {2007},
address = {Sweden},
}
```

The focus of this thesis is the use of Operations Research for applications in the forest industry. Optimization models and methods have been developed for problems in the forest supply chain and have been integrated in decision support systems. The problems considered in this thesis are operative, with a planning horizon of less than a month. Short solution times for the methods and the feasibility of the models used are important aspects. The body of this thesis consists of eight research papers, six of which consider operative problems and follow the forest supply chain. The industrial applications include routing of forwarders, routing of logging trucks, a process control problem, and roll cutting problems. The other two papers consider an operative planning problem in the home care sector; they are spin-offs from one of the other projects in this thesis. In these applications both linear and nonlinear problems occur.

The forwarding problem is to decide routes for forwarders to pick up the small piles of logs the harvesters have left in the harvest areas. The forwarders then put the logs adjacent to forest roads. The logging truck problem is to decide routes for logging trucks to pick up the piles created by the forwarders and transport them to demand points, for example pulp or paper mills. The process control problem appears in the bleaching stage of a pulp mill, where the control variables are the bleaching chemical charges. The cost of bleaching chemicals is minimized while a finishing brightness within target values is ensured. Mainly two roll cutting problems are studied: one is to minimize the number of cutting patterns, and one is to minimize the number of reels when defects in the paper reels are considered. The solution methods developed for the forwarding problem have also been applied to a routing problem which appears in staff planning for home care operations.

The different decision support systems (DSS) developed and implemented have been tested, and several are in daily industrial use. In each of the papers, we have developed robust OR models and quick and effective OR methods. The savings from using the systems vary but are typically in the range of 5-20%.

```
@phdthesis{diva2:23492,
author = {Flisberg, Patrik},
title = {{Application of operations research in operative planning in the forest industry}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1104}},
year = {2007},
address = {Sweden},
}
```

The thesis studies how cost-effective maintenance plans for paved roads can be generated, based on information about the current road surface condition and on functional models for cost and condition changes, partly developed in cooperation with the Swedish Road Administration (Vägverket, VV). The intended use is at the strategic and programme levels, before more detailed object information is available. Unlike the models used so far, individual maintenance plans are generated for each road segment (a stretch of road that is homogeneous with respect to current pavement condition and pavement history), in continuous state and action spaces. By using Lagrangian relaxation techniques, the special benefit/cost ratio conditions that VV imposes on each maintenance object can be handled naturally through the dual prices of the budget constraints. The number of road segments competing for the budget funds is usually large. Data from VV's Vägdatabank (road database) for Värmland has been used, comprising about 9000 road segments. Because of the large amount of data, the computer programs were implemented for parallel processing. During the thesis work, the project was granted access to the Monolith PC cluster at NSC. Model and method development has been necessary in order to reduce the optimization run times. By aggregating the road segments into road classes, good initial values for the dual prices were obtained. By developing a special residual value routine, the explicitly treated time period could be reduced. In solving the dual subproblem, special attention was paid to the discretization effects that arise in the dynamic programming method. One type of application concerns a sub-network, for example a single road. Validation studies have been carried out on road 63 in Värmland, with promising but not satisfactory results (see below). A special model for coordinated maintenance takes into account the economies of scale of treating an entire road stretch at the same time. The other main type of study concerns an entire network.
Several types of methods have been applied, both for solving the relaxed optimization problems and for generating maintenance plans that satisfy the budget constraints. For a decent discretization, the run times for the whole of Värmland are less than 80 CPU hours. An a posteriori primal heuristic reduces the parallel processing requirements to a small PC cluster. The thesis further studies the effects of redistributed budget funds and a transition to a transparent, stochastic model; both show small deviations from the base model.

The optimization results for Värmland indicate that budget levels of about 40% of Värmland's actual maintenance budget are sufficient. However, important cost-driving factors are missing in this first round of modelling, for example certain functional performance measures (safety), all environmental performance measures (noise, etc.) and structural performance (e.g. bearing capacity, which is modelled only via an age measure). To increase confidence in pavement management systems (PMS) in general and in optimization in particular, the deviations should be analysed further and should lead to improvements in condition measurement, condition effect and cost modelling, and mathematical modelling and implementation.

```
@phdthesis{diva2:23632,
author = {Andersson, Per-Åke},
title = {{Multi-year maintenance optimisation for paved public roads - segment based modelling and price-directive decomposition}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1107}},
year = {2007},
address = {Sweden},
}
```

The scope of this thesis is modelling and solving large-scale planning problems in the supply chain within the forest industry. Five research papers are included, the first three of which focus on the modelling, and the last two on the solution methods. All problems included are tactical multi-commodity problems expressed as mixed integer programming (MIP) models. The work has been done in collaboration with two Swedish companies within the forest industry.

In Paper I, a problem concerning the supply chain of forest fuel for Sydved Energileveranser AB is modelled and solved. We study the problem of deciding when and where forest residues are to be converted into wood chips, and how the residues and chips are to be transported and stored in order to satisfy energy demand at heating plants. The company has long-term contracts with forest owners and saw mills. Decisions in the model include whether or not additional harvest areas and saw mills are to be contracted and which terminals to use. The planning horizon is one year and monthly time periods are used.

Papers II--V are based on planning problems at Södra Cell AB. The planning horizon is normally one year. Papers II--III consider only one time period. In Paper II the supply chain from pulp mills to customers is modelled and the combined problem of deciding terminal locations and which ship routes to use is studied. Shipping vessels chartered on short or long term are used to transport products to terminals in Europe. From each terminal, the products are transported to customers by truck, train, or a combination of both. In addition, trains and trucks can be used for transports directly to customers from mills. In Paper III the entire supply chain, from harvest areas to customers, is considered. Decisions included are transportation of raw materials, production mix, the distribution of pulp products, and the selection of potential orders and their quantities at customers. The ship routes are considered as flow links.

In Papers IV--V the problems in Papers II--III are combined into one model and several time periods are used. Lagrangian heuristics based on Lagrangian decomposition are used as solution methods in both papers. In Paper IV, the approach leads to subproblems for each time period, whereas in Paper V, another approach that results in subproblems for different parts of the supply chain is developed.

All models are based on real data from the companies. The models are detailed and describe the problems accurately. The solution methods are developed such that the solution time is kept within practical limits. Results from Papers II--III have been used by Södra Cell AB to support the change of the terminal structure as well as in budget planning.

```
@phdthesis{diva2:23550,
author = {Gunnarsson Lidestam, Helene},
title = {{Supply chain optimization in the forest industry}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1105}},
year = {2007},
address = {Sweden},
}
```

This thesis concerns the development of novel feasible direction type algorithms for constrained nonlinear optimization. The new algorithms are based upon enhancements of the search direction determination and the line search steps.

The Frank-Wolfe method is popular for solving certain structured linearly constrained nonlinear problems, although its rate of convergence is often poor. We develop improved Frank-Wolfe type algorithms based on conjugate directions. In the conjugate direction Frank-Wolfe method, a line search is performed along a direction which is conjugate to the previous one with respect to the Hessian matrix of the objective. A further refinement of this method is derived by applying conjugation with respect to the last two directions, instead of only the last one.
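As a point of reference, the textbook Frank-Wolfe method that these variants improve upon can be sketched for a simple quadratic over the unit simplex: each iteration solves a linear subproblem (whose optimum is a simplex vertex) and performs an exact line search toward that vertex. This is a minimal illustration with hypothetical names, assuming the quadratic f(x) = 0.5·||x - c||², not the conjugate-direction algorithms of the thesis.

```python
def frank_wolfe_simplex(c, x, steps=1000):
    """Plain Frank-Wolfe for min 0.5*sum((x_i - c_i)^2) over the unit
    simplex {x >= 0, sum x = 1}, with exact line search (Hessian = I).
    The conjugate-direction variants modify only the direction step."""
    n = len(c)
    for _ in range(steps):
        g = [x[i] - c[i] for i in range(n)]          # gradient of f at x
        # linear subproblem min_s g^T s over the simplex -> a vertex e_i
        i_best = min(range(n), key=lambda i: g[i])
        d = [-x[i] for i in range(n)]
        d[i_best] += 1.0                             # d = e_{i_best} - x
        dd = sum(di * di for di in d)
        if dd == 0:
            break
        gd = sum(g[i] * d[i] for i in range(n))
        alpha = min(1.0, max(0.0, -gd / dd))         # exact line search step
        x = [x[i] + alpha * d[i] for i in range(n)]  # stays in the simplex
    return x
```

Since every iterate is a convex combination of the previous iterate and a vertex, feasibility is preserved without any projection, which is the main appeal of Frank-Wolfe type methods.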

The new methods are applied to the single-class user traffic equilibrium problem, the multi-class user traffic equilibrium problem under social marginal cost pricing, and the stochastic transportation problem. In a limited set of computational tests the algorithms turn out to be quite efficient. Additionally, a feasible direction method with multi-dimensional search for the stochastic transportation problem is developed.

We also derive a novel sequential linear programming algorithm for general constrained nonlinear optimization problems, with the intention of being able to attack problems with large numbers of variables and constraints. The algorithm is based on inner approximations of both the primal and the dual spaces, which yields a method combining column and constraint generation in the primal space.

```
@phdthesis{diva2:23509,
author = {Mitradjieva-Daneva, Maria},
title = {{Feasible Direction Methods for Constrained Nonlinear Optimization:
Suggestions for Improvements}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1095}},
year = {2007},
address = {Sweden},
}
```

The routing in OSPF Telecommunication networks is determined by computing shortest paths with respect to link weights set by the network operator. All shortest paths to a destination are used by the routers when traffic is routed, and the routers split the traffic evenly when alternative shortest paths exist.
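The routing mechanism described above can be sketched in a few lines: compute shortest distances to the destination, then let each router split its incoming load evenly over all next hops that lie on a shortest path. This is a minimal illustration of OSPF-style even splitting (ECMP) under assumed data structures (every node a key in `adj`), not code from the thesis.

```python
import heapq

def ecmp_split(adj, src, dst, demand=1.0):
    """Shortest-path routing with OSPF-style even splitting.
    adj: node -> {neighbour: link weight} (every node is a key).
    Returns the load on each arc when `demand` flows from src to dst."""
    # distance from every node to dst: Dijkstra on the reversed graph
    radj = {u: {} for u in adj}
    for u, nbrs in adj.items():
        for v, w in nbrs.items():
            radj.setdefault(v, {})[u] = w
    dist = {dst: 0.0}
    heap = [(0.0, dst)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in radj.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    # push the demand through the shortest-path DAG in decreasing
    # distance order, splitting each node's load evenly over next hops
    load, arc_flow = {src: demand}, {}
    for u in sorted(dist, key=dist.get, reverse=True):
        if u == dst or u not in load:
            continue
        next_hops = [v for v, w in adj[u].items()
                     if v in dist and abs(dist[u] - (w + dist[v])) < 1e-12]
        share = load[u] / len(next_hops)
        for v in next_hops:
            arc_flow[(u, v)] = arc_flow.get((u, v), 0.0) + share
            load[v] = load.get(v, 0.0) + share
    return arc_flow
```

On a square network with two equal-cost paths from s to t (via a and via b, all weights 1), the unit demand is split 0.5/0.5 at s, exactly as OSPF routers would forward it.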

This thesis includes models and methods for various problems related to OSPF networks and OSPF routing. Two different models for the design of OSPF networks are presented. One model ensures that the network is capable of accommodating large parts of the traffic even if a single link fails. The OSPF protocol is modeled using constraints based on weights, and a heuristic method uses the link weights for generating good feasible solutions. The second model is based on routing patterns and does not include the link weights. The OSPF protocol is modeled by ensuring that no pair of routing patterns contains different paths between the same pair of nodes, which is a necessary condition for the existence of weights. A Lagrangian heuristic is proposed as the solution method.

The thesis also includes models that can be used for determining whether a set of desired routing patterns is obtainable in an OSPF network. The models are infeasible if no link weights yield shortest path graphs identical to the patterns, which means that the patterns cannot be used simultaneously in an OSPF network. Infeasible instances are characterized using LP duality, and a certain structure called a “valid cycle” is found in most infeasible instances. A valid cycle involves two routing patterns and indicates which parts of the patterns are in conflict. This information can be used for finding weights that yield slightly modified patterns. Polynomial methods that can be used for finding valid cycles are presented. There are, however, also some infeasible instances in which no valid cycle exists, so the absence of valid cycles is only a necessary condition for the existence of weights. In such cases, the conflict involves more than two routing patterns, and these conflicts are characterized by a structure called a “3-valid set of cycles”. The absence of 3-valid sets of cycles is a stronger necessary condition for the existence of weights.

```
@phdthesis{diva2:22448,
author = {Broström, Peter},
title = {{Optimization Models and Methods for Telecommunication Networks using OSPF}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1032}},
year = {2006},
address = {Sweden},
}
```

Many integer optimization problems of great practical importance are today attacked with column generation. The merits of column generation are that it enables the use of compact and flexible formulations of many complex optimization problems, and that it often gives rise to good (strong) formulations. A potential drawback of column generation is the well-known tailing-off phenomenon, that is, that the objective value improves rapidly in early iterations but very slowly in late iterations.

We study various techniques for accelerating column generation methods for (integer) linear programs. We evaluate the use of stabilized column generation on the Traveling Salesman Subtour Problem (TSSP), a problem closely related to the Prize Collecting Traveling Salesman Problem. We further study how subgradient optimization can be used for predicting optimal columns (and, optionally, non-binding restrictions). This technique is tested on the TSSP and the multicommodity network flow problem.

Another idea evaluated in this thesis is the use of over-generation of columns in a column generation scheme for an integer programming problem, in this case a vehicle routing problem with a heterogeneous fleet of vehicles.

The thesis also includes a note on a class of relatives to the Held and Karp 1–tree problem. It is shown that two subclasses have polynomial time-complexity. Further, we note a mistake in an earlier work and provide a counter-example to the erroneous result.

```
@phdthesis{diva2:251095,
author = {Westerlund, Andreas},
title = {{Accelerating column generation schemes:
applications to routing problems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 978}},
year = {2005},
address = {Sweden},
}
```

The purpose of this thesis is to contribute to the development and the use of optimization models and methods to support efficient decision making in Swedish forestry. The main problem areas concerned are forest road upgrade planning and operational harvest planning. Today much of the planning is done manually by experienced managers. Forest management has a natural hierarchical structure based on the wide range of planning periods and organisational structure. The hierarchical harvest planning and the subdivision into strategic, tactical and operational levels are described from an Operations Research perspective. The description of the hierarchical structure differs between countries and there is a focus on the Swedish situation.

Road upgrading is becoming an increasingly important planning problem in securing a continuous supply of wood. In Sweden, during periods of thawing and periods of heavy rain, accessibility to parts of the road network is uncertain due to unfirm ground. The thesis addresses the optimization problem of minimizing the combined road upgrade and transportation costs while meeting requirements on road standard such that accessibility to harvest areas is secured during all weather conditions. In this work, mixed integer linear programming (MILP) models including multiple assortments, several time periods and a set of road classes are developed. For a typical forest district, the road upgrade problem becomes large, and techniques to improve solution performance through model reformulations are discussed. The models are tested in a case study for a major Swedish company. For practical usage of the models we present the development of a new decision support system called RoadOpt. The development has involved the Forestry Research Institute of Sweden, two software companies and several participating forest companies. The system uses a GIS-based map user interface to present and analyse data and results. The recently developed Swedish road database is an important part. The system is tested on a case study from Stora Enso.

The harvest planning problems addressed cover planning periods ranging from one year down to one month. Annual plans are required for budgeting, contracting harvest teams, contracting transportation firms and assuring road access. The main decisions deal with which areas to harvest, and by which team, during an annual period so that the industries receive the required volume of assortments. Overall decisions about transportation and storage are included. The monthly planning problem includes detailed scheduling of harvest crews, that is, the sequencing of areas for each team. The thesis addresses these planning problems and provides MILP models for each problem. Methods based on both a commercial solver and developed LP based heuristics are used. Models and methods are tested on case studies from Holmen Skog.

```
@phdthesis{diva2:249516,
author = {Karlsson, Jenny},
title = {{Optimization models and methods for harvest planning and forest road upgrading}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 956}},
year = {2005},
address = {Sweden},
}
```

The topic of this dissertation, within the subfield of mathematics known as optimization, is the development of a new dual ascent method for convex multicommodity flow problems. Convex multicommodity flow problems arise in many different routing problems, such as the design of packet-switched computer networks and the computation of traffic network equilibria. The dual problem of a strictly convex, twice differentiable convex multicommodity flow problem is an essentially unconstrained maximization problem with a piecewise twice differentiable concave objective. The idea behind this new dual ascent algorithm is to compute Newton-like ascent directions in the current differentiable piece and to perform line searches in those directions, combined with an active-set type treatment of the borders between the differentiable pieces. The first contribution in this dissertation is a detailed investigation of this special structure. The insights gained are then used to explain the proposed algorithm. The algorithm is also tested numerically on well-known traffic equilibrium problem instances. The computational results are very promising. The second contribution is a new approach for verifying feasibility in multicommodity flow problems. These feasibility problems arise naturally in the new dual ascent algorithm proposed. The background of the problem is that if a certain representation of a solution to the dual convex multicommodity flow problem is proved to be feasible for the convex multicommodity flow problem as well, an optimal solution is found. Hence, it is natural to seek a method for verifying feasibility of a given candidate solution for the multicommodity flow problem. The core of the approach is a distance-minimizing method for verifying feasibility and hence for demonstrating optimality. This method is described in detail, and some computational results in a traffic assignment setting are also given.
Finally, a short note illustrating that many published test problems for traffic assignment algorithms have peculiarities is given. First and foremost, several of them have incorrectly specified attributes, such as the number of nodes. In other test problems, the network contains subnetworks with constant travel times; subnetworks which to a large extent can be reduced or eliminated. In further test problems, the constant travel time subnetworks imply that the solution has non-unique arc flows.

```
@phdthesis{diva2:249288,
author = {Hägglöf, Kristoffer},
title = {{Convex multicommodity flow problems:
a bidual approach}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 954}},
year = {2005},
address = {Sweden},
}
```

We consider the daily transportation problem in forestry which arises when transporting logs from forest sites to customers such as sawmills and pulp and paper mills. Each customer requires a specific amount of a certain assortment, and the deliveries to the customers can be made within time intervals, known as time windows. Further, there are a number of supply points, each with a certain assortment, and a number of vehicles of a given capacity, to be used for transport.

The log truck scheduling problem consists of finding a set of minimal-cost routes, one for each vehicle, such that the customers’ demands are satisfied without exceeding the supplies available at the supply points. Each route has to satisfy a number of constraints concerning time windows, truck capacity, the timetable of the driver, lunch breaks, et cetera. The model used to describe the log truck scheduling problem is based on the route concept, and each variable, or column, represents one feasible route. Since the number of feasible routes is huge, we work only with restricted versions of this problem, which are similar to restricted master problems in a Dantzig-Wolfe decomposition scheme.

We use three solution methods based on the column generation principle, together with a pool strategy which allows us to deal with the feasible routes outside the restricted master problem. The three methods proposed have a common structure; they use branch-and-price together with a column generator, followed by branch-and-bound. The column generators in the three methods differ. In the first method, the subproblem is based on a cluster-first-route-second strategy. The column generator in the second method involves solving a constrained shortest path problem, and finally, the third method builds on a repeated generation of clusters and routes.

The three methods are tested on real cases from Swedish forestry companies, and the third method has been adapted to a computerised system that utilises the Swedish national road data base, for computing travelling distances. The results obtained show that the optimisation methods succeed in finding significantly better solutions than those obtained by manual planning, and in a reasonable computing time.

```
@phdthesis{diva2:20408,
author = {Palmgren, Myrna},
title = {{Optimal Truck Scheduling:
Mathematical Modeling and Solution by the Column Generation Principle}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 967}},
year = {2005},
address = {Sweden},
}
```

By using decision support tools based on operations research (OR) and optimization in the supply chain in the forest industry, the planning process can be improved and higher profitability can be obtained. The focus of this thesis is on modelling two real-life problems. The problems are united by the facts that they concern tactical (annual) planning in the forest industry, and that the models are developed and tested with real data at Swedish companies.

In the first paper, a problem of the supply chain of forest fuel is modelled and solved. The problem of deciding when and where forest residues are to be converted into forest fuel, and how the residues are to be transported and stored in order to satisfy demand at heating plants, is studied. Decisions also include whether or not additional harvest areas and saw-mills are to be contracted. In addition, we consider the flow of products from saw-mills and import harbours, and address the question of which terminals to use. The planning horizon is one year and monthly time periods are considered. The test data is from Sydved Energileveranser AB.

The second paper is a case study from Södra Cell AB. A combined problem of terminal location and ship routing is studied. The objective is to minimize the costs of distributing pulp products from pulp mills to customers (paper mills). Shipping vessels chartered on short or long term are used to transport products to terminals in Europe. In addition, trains and lorries are used for direct transports to customers from mills. From each terminal, the products are transported to customers by lorry, train, or a combination of both. Decisions about which terminals to use, which shipping routes to use, and which other transportation possibilities to use are included.

In both problems, relatively large mixed integer programming (MIP) models have been developed and solved using a commercial LP (linear programming) solver. Heuristics have also been developed for both problems in order to obtain faster solutions. Several scenarios of the problems have been tested and evaluated.

```
@phdthesis{diva2:253218,
author = {Gunnarsson, Helene},
title = {{Optimization approaches to tactical planning problems in the forest industry}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1071}},
year = {2004},
address = {Sweden},
}
```

The focus of this dissertation is on constructing optimization models and developing solution methods for Snow Removal Routing Problems with Time Windows. Two cases, after and during snowfall, are studied. We discuss theoretical aspects of these problems and considerations to be taken into account when modeling and solving the real-world problem of the Swedish National Road Agency.

In the after-snowfall case, we are concerned with finding a set of routes, with minimal total cost, for homogeneous snowplows, where every road segment is plowed within its associated time window. The routes must start from and end at the same depot, at which a number of snowplows are stationed. This case is solved by Dantzig-Wolfe decomposition combined with a column generation procedure. An integer solution to the master problem is found by a greedy algorithm or a variable reduction procedure. The case of generating routes for snowplows during snowfall is studied in two versions: the problems with and without a time horizon on the duration of the snowfall. The focus is put on the latter problem, which is solved in two ways, using a Dantzig-Wolfe decomposition scheme and a Tabu Search heuristic. The Dantzig-Wolfe master problem is a Constrained Set Covering Problem and the subproblems are Prize Collecting Connected Subgraph Problems. Further, we show that the Prize Collecting Connected Subgraph Problem is a generalization of the Prize Collecting Steiner Tree Problem and thus is NP-hard. An integer solution to the master problem is extracted by a greedy search procedure, and the obtained solution is improved by post-processing. The solution approaches for both cases are implemented and tested on the real-world problem under consideration and on an artificially generated problem.

```
@phdthesis{diva2:244006,
author = {Razmara, Gholamreza},
title = {{Snow removal routing problems:
theory and applications}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 888}},
year = {2004},
address = {Sweden},
}
```

This thesis addresses optimization problems related to the design of IP (Internet Protocol) networks. One of the most commonly used IGPs (Interior Gateway Protocols) in telecommunication networks is the OSPF (Open Shortest Path First) protocol with ECM (Equal Cost Multipath) distribution. This link-state based protocol uses link weights, which usually are set by a network operator, for providing routers the necessary information regarding how traffic should be routed. By shortest path computations, routers decide by themselves how to distribute the traffic.

One problem considered in this thesis is that of designing networks which are capable of accommodating large parts of the traffic when network components fail. Link failures are the most common failures, and this is the kind of disturbance which is considered. A multiobjective optimization model is presented, and we search for Pareto-optimal solutions using a two-phase weight-based search method. This method tries to minimize the design cost while simultaneously maximizing the level of network survivability. It uses the fact that it is easy to compute how traffic is distributed when the OSPF metric is known.

By contrast, it is a much more difficult task to determine whether there are any OSPF weights that yield a desired flow distribution. This topic is also studied in the thesis. Necessary conditions that flow patterns must fulfill to be identical to the shortest paths obtained from OSPF weights are reported. Sufficient conditions for a special type of flow patterns are also presented. The thesis also contains a polynomial method which can be used for detecting impossibilities in a set of flow patterns when this set does not correspond to the shortest paths of any OSPF weights.
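The easy direction can be made concrete: given candidate link weights, a shortest-path computation decides whether a desired routing pattern lies on the shortest paths the weights induce. Below is a minimal sketch with a hypothetical four-node graph (the reverse question, recovering weights from patterns, is the hard one studied in the thesis):

```python
import heapq

# Hypothetical example: verify that given OSPF link weights reproduce a
# desired routing pattern toward a destination. Arcs are directed and
# weights maps (u, v) -> positive weight.

def shortest_path_arcs(nodes, weights, dest):
    """All arcs lying on some shortest path to dest (Dijkstra on the
    reversed graph, then the standard optimality test per arc)."""
    dist = {n: float("inf") for n in nodes}
    dist[dest] = 0
    rev = {}  # reversed adjacency: v -> [(u, w)]
    for (u, v), w in weights.items():
        rev.setdefault(v, []).append((u, w))
    heap = [(0, dest)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue  # stale heap entry
        for u, w in rev.get(v, []):
            if d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(heap, (dist[u], u))
    # arc (u, v) is on a shortest path iff its weight is "tight"
    return {(u, v) for (u, v), w in weights.items()
            if dist[u] == w + dist[v]}

nodes = {1, 2, 3, 4}
weights = {(1, 2): 1, (2, 4): 1, (1, 3): 1, (3, 4): 2}
desired = {(1, 2), (2, 4), (3, 4)}  # a desired routing pattern to node 4

# The pattern is realizable by these weights only if every desired arc
# is among the shortest-path arcs they induce.
compatible = desired <= shortest_path_arcs(nodes, weights, dest=4)
```

Running the check in the other direction, over all candidate weight settings, is what makes the inverse problem hard.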

Flow patterns are the basis for a new model which is developed for designing IP networks where OSPF and ECM distribution are used. This model uses a set of in-graphs in order to represent weights implicitly. The model also uses a new set of constraints for modeling ECM distribution. The usefulness of the model is verified by computational experiments.

```
@phdthesis{diva2:242667,
author = {Broström, Peter},
title = {{Optimization in the Design of OSPF Telecommunication Networks}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1102}},
year = {2004},
address = {Sweden},
}
```

When designing a telecommunication network, one often wishes to include some kind of survivability requirement, for example that the network should be two-connected. A two-connected network fulfills the requirement that there are at least two paths with no links in common between every pair of nodes. One form of design model is to prescribe that the network should be composed of connected rings of links. The network design problem is then to choose links from a given network, and compose them into a number of rings. A ring is reliable in the sense that there always exist two ways of sending traffic, clockwise or counter-clockwise, which means that a ring fulfills the two-connectivity requirement. There are often a number of requirements on a ring, such as a limited length and a limited number of nodes connected to the ring. This means that a ring network will include a number of rings, and traffic between rings must be possible. The traffic between rings is usually exchanged at certain nodes, called transit nodes. Therefore, all rings should be connected to at least one of the transit nodes. We focus on the case where we have two transit nodes in the network.

Each possible ring is associated with a certain fixed cost, and all links in a certain ring are given the same capacity. Reserve capacity is allocated according to certain principles. The number of possible rings in a network is an exponential function of the number of nodes in the network, so for larger networks it is impossible to generate all possible rings a priori.

We describe the problem and model it as a linear integer programming problem, where a set of rings is assumed to be known. The usage of rings, i.e., the allocation of demand to rings, is determined. In practice, only a limited number of rings can be included in the model. Instead we must be able to generate useful rings. A Lagrangian relaxation of the model is formulated, and the dual solution is used to derive reduced costs which can be used to generate new, better rings. The information generated describes only the physical structure of the ring, not the usage of it. The ring generation problem is a modified traveling salesman subtour problem, which is known to be difficult to solve. Therefore, we focus on heuristic solution methods for this problem.

We also present a column generation approach where the problem is modeled as a set covering problem. Here, a column describes both the topology of the ring and the exact usage of it. A similar ring generation problem appears as a subproblem for generating new rings.

All methods are computationally tested on both real life data and randomly generated data, similar to real life problems.

```
@phdthesis{diva2:242670,
author = {Henningsson, Mathias},
title = {{Ring network design in telecommunications:
optimization based solution approaches}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 829}},
year = {2003},
address = {Sweden},
}
```

Using optimization models and methods is an increasingly acknowledged element of supply chain management for large industries. Concurrently with the increasing global market competition and modern business policies, an already effective organization needs to find new possibilities to improve its planning procedures and resource utilization. Incorporating the planning in a more comprehensive context, an overall supply chain, provides new possibilities to manage several planning steps more efficiently than if they were considered individually.

This thesis considers production planning and ship scheduling problems at Södra Cell, a large Scandinavian pulp manufacturer. The main purpose of the thesis is to suggest optimization models and methods that can support production and distribution planners in objectively considering the supply chain impact in the short-term decision making process. The production planning and the ship scheduling problems are approached separately. Optimization models are formulated for both problems, and specially adapted solution methods are developed that account for the characteristics of the problems.

The thesis consists of three research papers. In the first paper, two mixed integer programming (MIP) models are developed for the production planning problem. The first model has binary variables for every possible production plan, and a column generation approach with a constraint branching heuristic is used to solve the otherwise very large model. The second model has the production choices formulated as constraints and is solved with a standard branch-and-bound search. The results are compared with manual planning, and a potential for large savings is indicated.

The second and the third paper both consider the pulp distribution problem where ship routing and scheduling decisions have a key role. In the second paper the problem is solved with a rolling time horizon method. An MIP model is solved repeatedly over short ranges of time until eventually the whole planning period is covered. The third paper presents a genetic algorithm with integrated linear programming models for the distribution problem. The solutions from the different solution strategies for the distribution problem are mutually compared in the third paper.

```
@phdthesis{diva2:242668,
author = {Bredström, David},
title = {{Optimization models and methods for production planning and ship scheduling in the pulp industry}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1056}},
year = {2003},
address = {Sweden},
}
```

```
@phdthesis{diva2:254043,
author = {Karlsson, Jenny},
title = {{Optimization Models and Methods for Tactical and Operative Harvest Planning}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 979}},
year = {2002},
address = {Sweden},
}
```

The heat transfer in the filling phase of injection moulding is studied, based on Gunnar Aronsson’s distance model for flow expansion ([Aronsson], 1996).

The choice of a thermoplastic materials model is motivated by general physical properties, admitting temperature and pressure dependence. Two-phase, per-phase-incompressible, power-law fluids are considered. The shear rate expression takes into account pseudo-radial flow from a point inlet.

Instead of using a finite element (FEM) solver for the momentum equations, a general analytical viscosity expression is used, adjusted to current axial temperature profiles and yielding expressions for the axial velocity profile, pressure distribution, frozen layer expansion and special front convection.

The nonlinear energy partial differential equation is transformed into its conservative form, expressed by the internal energy, and is solved differently in the regions of streaming and stagnant flow, respectively. A finite difference (FD) scheme is chosen using control volume discretization to keep truncation errors small in the presence of non-uniform axial node spacing. Time and pseudo-radial marching is used. A local system of nonlinear FD equations is solved. In an outer iterative procedure the position of the boundary between the “solid” and “liquid” fluid cavity parts is determined. The uniqueness of the solution is claimed. In an inner iterative procedure the axial node temperatures are found. For all physically realistic material properties the convergence is proved. In particular the assumptions needed for the Newton-Mysovskii theorem are secured. The metal mould PDE is locally solved by a series expansion. For particular material properties the same technique can be applied to the “solid” fluid.
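As a generic illustration of a control-volume finite-difference step on a non-uniform grid (a simple 1D heat-conduction sketch with constant diffusivity and fixed end temperatures; it is not the thesis's two-phase model):

```python
def heat_step(T, x, alpha, dt):
    """One explicit control-volume step for the 1D heat equation
    T_t = alpha * T_xx on a non-uniform grid x; the end values of T
    are held fixed (Dirichlet conditions)."""
    n = len(T)
    Tn = T[:]
    for i in range(1, n - 1):
        # fluxes across the west and east control-volume faces
        fw = (T[i] - T[i - 1]) / (x[i] - x[i - 1])
        fe = (T[i + 1] - T[i]) / (x[i + 1] - x[i])
        dV = 0.5 * (x[i + 1] - x[i - 1])  # control-volume width
        Tn[i] = T[i] + alpha * dt * (fe - fw) / dV
    return Tn

# a hot interior node between two cold fixed ends cools down
T1 = heat_step([0.0, 1.0, 0.0], [0.0, 1.0, 2.0], alpha=1.0, dt=0.1)
```

Because each face flux is shared by the two adjacent cells, the discrete scheme conserves energy even when the axial node spacing is non-uniform, which is the motivation for the control-volume discretization mentioned above.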

In the circular plate application, comparisons with the commercial FEM-FD program Moldflow (Mfl) are made, on two Mfl-database materials, for which model parameters are estimated/adjusted. The resulting time evolutions of pressures and temperatures are analysed, as well as the radial and axial profiles of temperature and frozen layer. The greatest differences occur at the flow front, where Mfl neglects axial heat convection. The effects of using more and more complex material models are also investigated. The performance of our method is reported.

In the polygonal star-shaped plate application a geometric cavity model is developed. Comparison runs with the commercial FEM-FD program Cadmould (Cmd) are performed, on two Cmd-database materials, in an equilateral triangular mould cavity, and materials model parameters are estimated/adjusted. The resulting average temperatures at the end of filling are compared, on rays of different angular deviation from the closest corner ray and on different concentric circles, using angular and axial (cavity-halves) symmetry. The greatest differences occur in narrow flow sectors, fatal for our 2D model for a material with non-realistic viscosity model. We present some colour plots, e.g. for the residence time.

The classical square-root increase by time of the frozen layer is used for extrapolation. It may also be part of the front model in the initial collision with the cold metal mould. An extension of the model is found which describes the radial profile of the frozen layer in the circular plate application accurately also close to the inlet.

The well-posedness of the corresponding linearized problem is studied, as well as the stability of the linearized FD-scheme.

```
@phdthesis{diva2:21489,
author = {Andersson, Per-Åke},
title = {{Computation of Thermal Development in Injection Mould Filling, based on the Distance Model}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 993}},
year = {2002},
address = {Sweden},
}
```

## Other

```
@misc{diva2:242933,
author = {Lundgren, Jan and Rönnqvist, Mikael and Värbrand, Peter},
title = {{Optimeringslära}},
howpublished = {},
year = {2003},
}
```

```
@misc{diva2:242932,
author = {Holmberg, Kaj},
title = {{Kombinatorisk optimering med linjärprogrammering}},
howpublished = {},
year = {2003},
}
```

## Reports

In modern integrated modular avionic systems, applications share hardware resources on a common avionic platform. Such an architecture necessitates strict requirements on the spatial and temporal partitioning of the system to prevent fault propagation between different aircraft functions. One way to establish a temporal partitioning is through pre-runtime scheduling of the system, which involves creating a schedule for both tasks and a communication network.

As avionic systems grow more and more complex, so does the challenge of scheduling them. Scheduling has an important role in the development of new avionic systems, since functionality is typically added to the system over a period of several years and a scheduling tool is used both to detect whether the platform can host the new functionality and, in case this is possible, to create a new schedule. For this reason an exact solution strategy for avionics scheduling is preferred over a heuristic one.

In this paper we present a mathematical model for an industrially relevant avionic system and present a constraint generation procedure for scheduling of such systems. We apply our optimisation approach to instances provided by our industrial partner. These instances are of relevance for the development of future avionic systems and contain up to 20 000 tasks to be scheduled. The computational results show that our optimisation approach can be used to create schedules for such instances within reasonable time.
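The constraint generation idea can be illustrated on a toy single-machine example (a sketch of the general loop only, with made-up tasks; it is not the paper's avionics model): solve a relaxation with few constraints, find a violated non-overlap requirement, add an ordering constraint for it, and re-solve.

```python
# Toy constraint generation for scheduling (illustrative only):
# start from a relaxation with no non-overlap constraints, then
# repeatedly add an ordering constraint for one violated task pair.

durations = {"A": 3, "B": 2, "C": 4}  # hypothetical tasks

def solve_relaxation(orderings):
    """Earliest start times respecting the collected constraints
    'i must finish before j starts' (a longest-path computation)."""
    start = {t: 0 for t in durations}
    changed = True
    while changed:
        changed = False
        for i, j in orderings:
            if start[j] < start[i] + durations[i]:
                start[j] = start[i] + durations[i]
                changed = True
    return start

def find_violation(start):
    """Return one pair of overlapping tasks, or None if feasible."""
    tasks = sorted(durations, key=lambda t: start[t])
    for a, b in zip(tasks, tasks[1:]):
        if start[a] + durations[a] > start[b]:
            return a, b
    return None

orderings = []
start = solve_relaxation(orderings)
while (pair := find_violation(start)) is not None:
    orderings.append(pair)  # schedule the earlier task first
    start = solve_relaxation(orderings)

print(start)  # a non-overlapping schedule: {'A': 0, 'B': 3, 'C': 5}
```

Only constraints that the relaxed solution actually violates are ever added, which is what keeps the master problem tractable when the full constraint set is enormous.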

```
@techreport{diva2:1120784,
author = {Blikstad, Mathias and Karlsson, Emil and Lööw, Tomas and Rönnberg, Elina},
title = {{An Optimisation Approach for Pre-Runtime Scheduling of Tasks and Communication in an Integrated Modular Avionic System}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2017},
type = {Other academic},
number = {LiTH-MAT-R, 2017:03},
address = {Sweden},
}
```

In this article, we propose a method for solving the trust-region subproblem when a limited-memory symmetric rank-one matrix is used in place of the true Hessian matrix. The method takes advantage of two shape-changing norms to decompose the trust-region subproblem into two separate problems, one of which has a closed-form solution while the other is easy to solve. Sufficient conditions for global solutions to both subproblems are given. The proposed solver makes use of the structure of limited-memory symmetric rank-one matrices to find solutions that satisfy these optimality conditions. Solutions to the trust-region subproblem are computed to high accuracy even in the so-called "hard case".

```
@techreport{diva2:953957,
author = {Brust, Johannes and Burdakov, Oleg and Erway, Jennifer B. and Marcia, Roummel F. and Yuan, Ya-xiang},
title = {{Shape-Changing L-SR1 Trust-Region Methods}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2016},
type = {Other academic},
address = {Sweden},
}
```

We discuss the problem of projecting points on their convex hull. Points in the interior of the convex hull are moved outwards to the boundary of the convex hull. While finding the convex hull is a well-treated problem, projecting each interior point on the convex hull is not. It is a harder problem, since each point has to be treated. We first discuss a solution approach in two dimensions, and then generalize it to three dimensions. After some significant improvements and changes, we arrive at an efficient solution method for the three-dimensional case, using various column and/or constraint generation techniques.
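In two dimensions the task can be sketched directly: compute the hull (here with Andrew's monotone chain) and move each interior point to the closest point on the hull boundary. This is only a brute-force 2D illustration, not the report's column/constraint generation method:

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b (orientation test)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def project_on_segment(p, a, b):
    """Closest point to p on segment a-b."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * ax + (p[1] - a[1]) * ay) / (ax * ax + ay * ay)
    t = max(0.0, min(1.0, t))
    return (a[0] + t * ax, a[1] + t * ay)

def project_on_hull(p, hull):
    """Closest point on the hull boundary to p (checks every edge)."""
    best, best_d = None, float("inf")
    for a, b in zip(hull, hull[1:] + hull[:1]):
        q = project_on_segment(p, a, b)
        d = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if d < best_d:
            best, best_d = q, d
    return best

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (1, 2)]
hull = convex_hull(pts)
proj = project_on_hull((1, 2), hull)  # interior point moved to boundary
```

Each interior point is checked against every hull facet here, an exhaustive treatment that becomes expensive for large instances, particularly in three dimensions.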

```
@techreport{diva2:930115,
author = {Holmberg, Kaj},
title = {{Projecting points on the convex hull}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2016},
type = {Other academic},
number = {LiTH-MAT-R, 2016:06},
address = {Sweden},
}
```

Monotonic (isotonic) Regression (MR) is a powerful tool used for solving a wide range of important applied problems. One of its features, which poses a limitation on its use in some areas, is that it produces a piecewise constant fitted response. For smoothing the fitted response, we introduce a regularization term in the MR, formulated as a least distance problem with monotonicity constraints. The resulting Smoothed Monotonic Regression (SMR) is a convex quadratic optimization problem. We focus on the SMR where the set of observations is completely (linearly) ordered. Our Smoothed Pool-Adjacent-Violators (SPAV) algorithm is designed for solving the SMR. It belongs to the class of dual active-set algorithms. We proved its finite convergence to the optimal solution in, at most, n iterations, where n is the problem size. One of its advantages is that the active set is progressively enlarged by including one or, typically, more constraints per iteration. This resulted in solving large-scale SMR test problems in a few iterations, whereas the size of those problems was prohibitively large for conventional quadratic optimization solvers. Although the complexity of the SPAV algorithm is *O*(n^{2}), its running time grew in our computational experiments in proportion to *n*^{1.16}.
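For reference, the classical pool-adjacent-violators scheme that SPAV builds on can be sketched as follows (plain unweighted isotonic regression on a complete order; this is not the SPAV algorithm itself):

```python
def pav(y):
    """Classical pool-adjacent-violators for isotonic regression on a
    complete order: merge adjacent blocks whose means violate
    monotonicity, and fit each block with its mean."""
    blocks = [[v, 1] for v in y]  # [sum, count] per block
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            blocks[i][0] += blocks[i + 1][0]
            blocks[i][1] += blocks[i + 1][1]
            del blocks[i + 1]
            if i > 0:
                i -= 1  # a merge can create a new violation upstream
        else:
            i += 1
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

print(pav([1, 3, 2, 4]))  # -> [1.0, 2.5, 2.5, 4.0]
```

The fitted response is constant on each merged block, which is exactly the piecewise-constant behaviour that the regularization term in the SMR is introduced to smooth.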

```
@techreport{diva2:929073,
author = {Burdakov, Oleg and Sysoev, Oleg},
title = {{Regularized monotonic regression}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2016},
type = {Other academic},
number = {LiTH-MAT-R, 2016:02},
address = {Sweden},
}
```

Planning snow removal is a difficult, infrequently occurring optimization problem, concerning complicated routing of vehicles. Clearing a street includes several different activities, and the tours must be allowed to contain subtours. The streets are classified into different types, each type requiring different activities. We address the problem facing a single vehicle, including details such as precedence requirements and turning penalties. We describe a solution approach based on a reformulation to an asymmetric traveling salesman problem in an extended graph, plus a heuristic for finding feasible solutions. The method has been implemented and tested on real-life examples, and the solution times are short enough to allow online usage. We compare two different principles for the number of sweeps on a normal street, encountered in discussions with snow removal contractors. A principle using a first sweep in the middle of the street around the block, in order to quickly allow usage of the streets, is found to yield interesting theoretical and practical difficulties.

```
@techreport{diva2:919119,
author = {Holmberg, Kaj},
title = {{The (Over) Zealous Snow Remover Problem}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2016},
type = {Other academic},
number = {LiTH-MAT-R, 2016:04},
address = {Sweden},
}
```

Monotonic Regression (MR) is a standard method for extracting a monotone function from non-monotonic data, and it is used in many applications. However, a known drawback of this method is that its fitted response is a piecewise constant function, while practical response functions are often required to be continuous. The method proposed in this paper achieves monotonicity and smoothness of the regression by introducing an L2 regularization term, and it is shown that the complexity of this method is *O*(n^{2}). In addition, our simulations demonstrate that the proposed method normally has higher predictive power than some commonly used alternative methods, such as monotonic kernel smoothers. In contrast to these methods, our approach is probabilistically motivated and has connections to Bayesian modeling.

```
@techreport{diva2:905380,
author = {Sysoev, Oleg and Burdakov, Oleg},
title = {{A Smoothed Monotonic Regression via L2 Regularization}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2016},
type = {Other academic},
number = {LiTH-MAT-R, 2016:01},
address = {Sweden},
}
```

Data from OpenStreetMap can be a valuable tool in route optimization of many kinds. With GPS data, analyses of trips can be made. In this paper, we discuss how to extract such data and transform it into forms that are useful for optimization purposes. We also discuss obtaining coordinates from such data. Sets of test problems, for future use, are presented and extracted.

```
@techreport{diva2:867759,
author = {Holmberg, Kaj},
title = {{On Using OpenStreetMap and GPS for Optimization}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2015},
type = {Other academic},
number = {LiTH-MAT-R, 2015:15},
address = {Sweden},
}
```

We describe a weighted version of the k-Chinese or k-rural postman problem that occurs in the context of snow removal. The problem concerns the questions of which vehicle shall do each task and how the vehicles shall travel between tasks. We also consider different numbers of vehicles, in view of a fixed cost for each vehicle. We describe and discuss heuristic solution approaches, based on usable substructures, such as Chinese/rural postman problems, meta-heuristics, k-means clustering and local search improvements by moving cycles. The methods have been implemented and tested on real life examples.

```
@techreport{diva2:862625,
author = {Holmberg, Kaj},
title = {{Heuristics for the weighted k-Chinese/rural postman problem with a hint of fixed costs with applications to urban snow removal}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2015},
type = {Other academic},
number = {LiTH-MAT-R, 2015:13},
address = {Sweden},
}
```

Congestion charges are currently in place in both Stockholm and Gothenburg, and it is likely that the design of these congestion charging systems will be adjusted in the future with respect to charge levels, locations and times of day. For Stockholm, a change has been decided for January 2016, and in Gothenburg the charge levels were changed in January 2015. In this project, we develop methods intended to support the adjustment of charge levels, so that the greatest possible socio-economic benefit is achieved with the congestion charging system.

For metropolitan areas, where parts of the network are congested during rush hour, congestion charges are primarily of interest to analyze with dynamic transport models. Earlier development of methods for optimal charge setting has, however, mainly focused on static models, such as Emme, which have known problems with correctly estimating changes in travel times when parts of the traffic network are congested. In this project we have therefore applied surrogate-based optimization, a methodological approach that places few requirements on which transport model is used. The dynamic transport model Regent/VisumDUE was already implemented for the Stockholm region and has therefore also been used in this project. VisumDUE is a macroscopic network assignment model with dynamic route choice, and Regent is a demand model comprising trip generation, mode choice and destination choice for work trips [1].

Surrogate-based optimization offers a framework for optimizing problems with computationally expensive objective functions. By fitting an approximating surface to points sampled from the expensive objective function, the optimization can instead be carried out over the approximated surface. For Regent/VisumDUE, evaluating a given congestion charge scenario takes about ten hours, and it is this computation time that makes the objective function expensive. Given a number of sampled points, further sampling is performed according to a given strategy in order to improve the approximation, so-called iterative sampling. Within the framework, however, there are many possibilities for how the various components are designed. It is therefore difficult to evaluate surrogate-based optimization using Regent/VisumDUE alone. A static transport model has therefore been used to evaluate a number of combinations of sampling strategy and response surface. The most promising combination has then also been evaluated with Regent/VisumDUE. To be practically applicable in the future, the focus of the project has been on evaluating how the approach performs when the number of possible toll configurations is severely limited (20-40).

The scenario used as a basis in the project is congestion charging in Stockholm on the current toll cordon, on Essingeleden and on the inner-city bridges. The charge is differentiated with respect to direction, which gives six different charge levels to optimize. The optimization has been carried out for the congestion charge level during the peak hour. In the dynamic case, the congestion charge levels outside the peak hour were included as input data, but the same time profile as on the current toll cordon was assumed in all scenarios (charge steps of 50%, 75%, 100%, 75%, 50%). The evaluation with the static transport model shows that solutions close to the global optimum can be attained with only 40 evaluated congestion charge configurations, and a clear improvement of the socio-economic benefit is achieved already with 20 evaluated configurations.

Even with a severely limited number of evaluations of the expensive objective function in Regent/VisumDUE, we have shown that it is possible to use this approach. A clear improvement of the socio-economic benefit is achieved with only 22 evaluated congestion charge configurations. Further experiments would, however, be needed to determine how large this improvement is relative to what could otherwise be attained.
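The surrogate-based loop described above can be sketched in one dimension (a hypothetical quadratic stand-in for the ten-hour Regent/VisumDUE run, and a simple polynomial fit in place of the project's response surfaces; names and numbers are illustrative only):

```python
import numpy as np

def costly(x):
    # Hypothetical stand-in for an expensive transport-model evaluation.
    return (x - 0.3) ** 2

# Fit a cheap surrogate to a few sampled points of the costly function,
# then optimize over the surrogate instead of the costly function itself.
X = np.array([0.0, 0.5, 1.0])
y = costly(X)
coef = np.polyfit(X, y, 2)                          # quadratic surrogate
grid = np.linspace(0.0, 1.0, 1001)
x_next = grid[np.argmin(np.polyval(coef, grid))]    # next point to evaluate
```

In iterative sampling, `x_next` would be evaluated with the costly function and added to the sample, after which the surrogate is refitted.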

```
@techreport{diva2:858376,
author = {Ekström, Joakim and Kristoffersson, Ida and Quttineh, Nils-Hassan},
title = {{Surrogatbaserad optimering av avgiftsnivåer i trängselavgiftssystem}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2015},
type = {Other academic},
address = {Sweden},
}
```

The problem of map matching appears when evaluating GPS-tracks recorded by service vehicles, and is used to associate the sequences of GPS-points to links in a graph. Difficulties are errors in the GPS-coordinates and possible lack of GPS-points on short street segments. This paper reports computational tests on integer programming models for the problem, and on several heuristic methods, based on shortest paths and rural postman problems. We present extensive computational results for several methods and for both artificial and real life test cases.

```
@techreport{diva2:785701,
author = {Holmberg, Kaj},
title = {{Computational Results for Map Matching by Optimization}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2015},
type = {Other academic},
number = {LiTH-MAT-R, 2015:02},
address = {Sweden},
}
```

The problem of map matching appears when evaluating GPS-tracks recorded by service vehicles, and utilizing GPS-information in graphs suitable for route optimization. The task is to associate sequences of GPS-points to links in a graph, suitable for optimization, and thereby obtain paths or tours in the graph. Difficulties are errors in the GPS-coordinates and possible lack of GPS-points on short street segments. We apply mathematical modeling to the problem, in the form of integer programming, and do computational tests of the solvability of the models. In addition to integer programming, we develop several heuristic methods for off-line solution of this problem, based on shortest paths and rural postman problems. All methods are computationally tested, and summarized results are reported.
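The integer programming models and heuristics themselves are not given in the abstract. As a hedged illustration of the basic geometric subtask, a naive nearest-link matcher (which ignores path connectivity, the very weakness the optimization-based methods address) might look like:

```python
def point_segment_dist(p, a, b):
    # Euclidean distance from point p to the line segment a-b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def match_points(points, links):
    # links: dict mapping a link name to its two endpoints.
    # Assign each GPS point independently to its nearest link.
    return [min(links, key=lambda k: point_segment_dist(p, *links[k])) for p in points]
```

Treating each point independently like this can produce disconnected link sequences; the paper's shortest-path and rural-postman formulations exist precisely to recover connected paths or tours.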

```
@techreport{diva2:785696,
author = {Holmberg, Kaj},
title = {{Map Matching by Optimization}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2015},
type = {Other academic},
number = {LiTH-MAT-R, 2015:01},
address = {Sweden},
}
```

Optimization problems with cardinality constraints are very difficult mathematical programs which are typically solved by global techniques from discrete optimization. Here we introduce a mixed-integer formulation whose standard relaxation still has the same solutions (in the sense of global minima) as the underlying cardinality-constrained problem; the relation between the local minima is also discussed in detail. Since our reformulation is a minimization problem in continuous variables, it allows ideas from that field to be applied to cardinality-constrained problems. In particular, we therefore also derive suitable stationarity conditions and suggest an appropriate regularization method for the solution of optimization problems with cardinality constraints. This regularization method is shown to be globally convergent to a Mordukhovich-stationary point. Extensive numerical results are given to illustrate the behavior of this method.

```
@techreport{diva2:763733,
author = {Burdakov, Oleg and Kanzow, Christian and Schwartz, Alexandra},
title = {{Mathematical Programs with Cardinality Constraints:
Reformulation by Complementarity-type Constraints and a Regularization Method}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2014},
type = {Other academic},
address = {Sweden},
}
```

We study the problem of forming groups of students so that the groups are as even as possible with respect to certain aspects, and formulate it as a mixed integer programming problem. We find that standard software cannot solve real life sized instances, so we develop several heuristics and metaheuristics for the problem. Computational tests are made on randomly generated instances as well as real life instances. Some of the heuristics give good solutions in short time, and tests on real life problems indicate that satisfactory solutions can be found within 60 seconds.
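As a hedged sketch of the heuristic idea (hypothetical score data and a toy pairwise-swap local search; this is not the paper's MIP model or its metaheuristics):

```python
import random

def group_spread(groups, score):
    # One simple notion of "evenness": spread of total scores between groups.
    sums = [sum(score[s] for s in g) for g in groups]
    return max(sums) - min(sums)

def balance_groups(students, score, k, iters=1000, seed=0):
    # Greedy seeding plus pairwise-swap local search minimizing the spread.
    rng = random.Random(seed)
    order = sorted(students, key=lambda s: -score[s])
    groups = [order[i::k] for i in range(k)]      # deal high scorers round-robin
    for _ in range(iters):
        i, j = rng.sample(range(k), 2)
        a, b = rng.randrange(len(groups[i])), rng.randrange(len(groups[j]))
        before = group_spread(groups, score)
        groups[i][a], groups[j][b] = groups[j][b], groups[i][a]
        if group_spread(groups, score) > before:
            groups[i][a], groups[j][b] = groups[j][b], groups[i][a]  # undo worsening swap
    return groups
```

Real instances involve several evenness aspects at once, which is what makes the full problem hard for standard software.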

```
@techreport{diva2:749178,
author = {Holmberg, Kaj},
title = {{Formation of Student Groups with the Help of Optimization}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2014},
type = {Other academic},
number = {LiTH-MAT-R, 2014:14},
address = {Sweden},
}
```

Guarding the perimeter of an area in order to detect potential intruders is an important task in a variety of security-related applications. This task can in many circumstances be performed by a set of camera-equipped unmanned aerial vehicles (UAVs). Such UAVs will occasionally require refueling or recharging, in which case they must temporarily be replaced by other UAVs in order to maintain complete surveillance of the perimeter. In this paper we consider the problem of scheduling such replacements. We present optimal replacement strategies and justify their optimality.

```
@techreport{diva2:739489,
author = {Burdakov, Oleg and Doherty, Patrick and Kvarnström, Jonas},
title = {{Optimal Scheduling for Replacing Perimeter Guarding Unmanned Aerial Vehicles}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2014},
type = {Other academic},
number = {LiTH-MAT-R, 2014:09},
address = {Sweden},
}
```

We consider the directed Steiner tree problem (DSTP) with a constraint on the total number of arcs (hops) in the tree. This problem is known to be NP-hard, and therefore, only heuristics can be applied in the case of its large-scale instances. For the hop-constrained DSTP, we propose local search strategies aimed at improving any heuristically produced initial Steiner tree. They are based on solving a sequence of hop-constrained shortest path problems for which we have recently developed efficient label correcting algorithms. The presented approach is applied to finding suitable 3D locations where unmanned aerial vehicles (UAVs) can be placed to relay information gathered in multi-target monitoring and surveillance. The efficiency of our algorithms is illustrated by results of numerical experiments involving problem instances with up to 40 000 nodes and up to 20 million arcs.
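The authors' label correcting algorithms are not reproduced here; the underlying hop-constrained shortest path subproblem can be sketched with a layered Bellman-Ford recursion (an illustrative baseline only, assuming arcs are given as (tail, head, cost) triples):

```python
def hop_constrained_shortest_path(n, arcs, source, target, max_hops):
    # Layered Bellman-Ford: after h rounds, dist[v] is the cheapest cost of a
    # path from source to v using at most h arcs (hops).
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(max_hops):
        new = dist[:]
        for u, v, c in arcs:
            if dist[u] + c < new[v]:
                new[v] = dist[u] + c
        dist = new
    return dist[target]
```

Each round relaxes every arc once, so the cost is O(max_hops * |arcs|); specialized label correcting schemes aim to avoid scanning all arcs in every round.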

```
@techreport{diva2:739485,
author = {Burdakov, Oleg and Doherty, Patrick and Kvarnström, Jonas},
title = {{Local Search for Hop-constrained Directed Steiner Tree Problem with Application to UAV-based Multi-target Surveillance}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2014},
type = {Other academic},
number = {LiTH-MAT-R, 2014:10},
address = {Sweden},
}
```

Snow removal is an important problem in certain countries. It is also difficult, especially in urban areas. The main questions are which vehicle shall do what task, when shall the tasks be done and how shall the vehicles travel. In this paper we describe the problem in detail and formulate a detailed mathematical time-indexed model that contains all practical complications we have encountered, for example different vehicles and switching times between tasks. We investigate the solvability of the model, and present a number of alternate formulations, relaxations and simplifications, yielding several different models of different sizes and different strength. We can either solve very small problems exactly or larger problems more approximately. Our main goal is to find good lower bounds on the optimal objective function value in a limited time, and we present extensive computational tests comparing the obtained bounds and the times needed for the different models. As a result we find some rather efficient models, in the sense that they yield rather good lower bounds in rather short time. With these models, we find lower bounds for several real life instances in the form of small local cities.

```
@techreport{diva2:728056,
author = {Holmberg, Kaj},
title = {{Urban Snow Removal:
Modeling and Relaxations}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2014},
type = {Other academic},
number = {LiTH-MAT-R, 2014:08},
address = {Sweden},
}
```

During the last ten years, the speaker has developed Fortran software for minimizing general objective functions when first derivatives are not available. He has released two packages, namely NEWUOA and BOBYQA, for the unconstrained case and for simple upper and lower bounds on the variables, respectively. His current research addresses general linear constraints on the variables, which is intended to provide the new LINCOA package. These algorithms require only of magnitude n^{2} operations on a typical iteration, where n is the number of variables, which has allowed them to be applied when n is in the hundreds. No attention is given to sparsity.

Some excellent numerical results have been achieved, which certainly depend on the use of the symmetric Broyden updating formula. That formula when derivatives are not available is the subject of Lecture 1. It supplies the solution to the following variational problem. Some values of the objective function and a quadratic approximation to the objective function are given, where usually the approximation does not interpolate all the given function values. A new quadratic approximation is required that does interpolate the given function values. The freedom that remains in the new approximation is taken up by making as small as possible the Frobenius norm of the change to the second derivative matrix of the approximation.

A trust region in the space of the variables is the set of all points that are within a prescribed distance of the best point so far. A typical change to the vector of variables is a move from the best point so far to a new point within the trust region, the new point being chosen to provide a relatively small value of the current quadratic approximation to the objective function. The calculation of the new point is the subject of Lecture 2. A version of truncated conjugate gradients is employed, in order that the amount of work is only of magnitude n^{2}. Several decisions had to be taken to specify an algorithm. They are particularly interesting for the LINCOA software, due to the linear constraints on the variables.

Lectures 1 and 2 describe vital ingredients of the software that has been mentioned. A few more details are addressed in Lecture 3, to complete an outline of how the algorithms work. The numerical results from several test problems are considered too. The accuracy and efficiency are much better than expected if one takes the view that the second derivative matrix of the quadratic approximation should become close to the second derivative matrix of the objective function. We find clearly that this view is wrong. Also the numerical results show that good accuracy is achieved eventually, although the number of iterations is sometimes very sensitive to effects from computer rounding errors.

```
@techreport{diva2:697412,
author = {Powell, M.J.D.},
title = {{Derivative Free Optimization}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2014},
type = {Other academic},
number = {LiTH-MAT-R, 2014:02},
address = {Sweden},
}
```

We present results from testing of parallel versions of algorithms for derivative-free optimization. Such algorithms are required when an analytical expression for the objective function is not available, which often happens in simulator-driven product development. Since the objective function is unavailable, we cannot use the common algorithms that require gradient and Hessian information. Instead special algorithms for derivative-free optimization are used. Such algorithms are typically sequential, and here we present the first test results of parallelization of such algorithms. The parallel extensions include using several start points, generating several points from each start point in each iteration, alternative model building, and more. We also investigate whether we can generate synergy between the different start points through information sharing. Examples of this include using several models to predict the objective function value of a point in order to prioritize the order in which points are sent for evaluation. We also present results for higher-level control of the optimization algorithms.

```
@techreport{diva2:697257,
author = {Olsson, Per-Magnus},
title = {{Test Results from Parallelization of Model Building Algorithms for Derivative-free Optimization}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2014},
type = {Other academic},
number = {LiTH-MAT-R, 2014:01},
address = {Sweden},
}
```

Filter networks are a powerful tool used for reducing the image processing time while maintaining reasonably high quality. They are composed of sparse sub-filters whose low sparsity ensures fast image processing. The filter network design is related to solving a sparse optimization problem where a cardinality constraint bounds the sparsity level from above. In the case of sequentially connected sub-filters, which is the simplest network structure of those considered in this paper, a cardinality-constrained multilinear least-squares (MLLS) problem is to be solved. If the cardinality constraint is disregarded, the MLLS is typically a large-scale problem characterized by a large number of local minimizers, each of which is singular and non-isolated. The cardinality constraint makes the problem even more difficult to solve. An approach for approximately solving the cardinality-constrained MLLS problem is presented. It is then applied to solving a bi-criteria optimization problem in which both the time and quality of image processing are optimized. The developed approach is extended to designing filter networks of a more general structure. Its efficiency is demonstrated by designing certain 2D and 3D filter networks. It is also compared with the existing approaches.

```
@techreport{diva2:692904,
author = {Andersson, Mats and Burdakov, Oleg and Knutsson, Hans and Zikrin, Spartak},
title = {{Sparsity Optimization in Design of Multidimensional Filter Networks}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2013},
type = {Other academic},
number = {LiTH-MAT-R, 2013:16},
address = {Sweden},
}
```

Limited memory quasi-Newton methods and trust-region methods represent two efficient approaches used for solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems. For this reason, the limited memory methods are usually combined with a line search. We show how to efficiently combine limited memory and trust-region techniques. One of our approaches is based on the eigenvalue decomposition of the limited memory quasi-Newton approximation of the Hessian matrix. The decomposition allows for finding a nearly-exact solution to the trust-region subproblem defined by the Euclidean norm with an insignificant computational overhead compared with the cost of computing the quasi-Newton direction in line-search limited memory methods. The other approach is based on two new eigenvalue-based norms. The advantage of the new norms is that the trust-region subproblem is separable and each of the smaller subproblems is easy to solve. We show that our eigenvalue-based limited-memory trust-region methods are globally convergent. Moreover, we propose improved versions of the existing limited-memory trust-region algorithms. The presented results of numerical experiments demonstrate the efficiency of our approach which is competitive with line-search versions of the L-BFGS method.
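The first approach above, a nearly-exact trust-region step given an eigenvalue decomposition of the Hessian approximation, can be sketched as follows (a minimal bisection in the eigenbasis; the function name and the bisection scheme are illustrative, not the paper's implementation, and the so-called hard case is not handled):

```python
import numpy as np

def trust_region_eig(g, lam, V, delta):
    # Nearly-exact step for: min g^T s + 0.5 s^T B s  s.t. ||s|| <= delta,
    # given B = V diag(lam) V^T (e.g. a limited-memory quasi-Newton matrix).
    gt = V.T @ g  # gradient in the eigenbasis

    def step_norm(sigma):
        return np.linalg.norm(gt / (lam + sigma))

    # Unconstrained Newton step if B is positive definite and the step fits.
    if lam.min() > 0 and step_norm(0.0) <= delta:
        return V @ (-gt / lam)
    # Otherwise find sigma > max(0, -lam_min) with ||s(sigma)|| = delta.
    lo = max(0.0, -lam.min()) + 1e-12
    hi = lo + 1.0
    while step_norm(hi) > delta:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if step_norm(mid) > delta else (lo, mid)
    return V @ (-gt / (lam + hi))
```

The point the abstract makes is that for limited-memory approximations this decomposition is cheap enough that the nearly-exact subproblem solution adds only insignificant overhead over a line-search step.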

```
@techreport{diva2:667359,
author = {Burdakov, Oleg and Gong, Lujin and Yuan, Ya-Xiang and Zikrin, Spartak},
title = {{On Efficiently Combining Limited Memory and Trust-Region Techniques}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2013},
type = {Other academic},
number = {LiTH-MAT-R, 2013:13},
address = {Sweden},
}
```

High dose-rate (HDR) brachytherapy is one type of treatment for prostate cancer, in which a radioactive source is moved through catheters implanted into the prostate. For each patient, a unique treatment plan is constructed. This plan determines for example the catheter positioning and the dwelling time pattern, that is, where and for how long the source should stop.

Mathematical optimization methods are frequently used to find high-quality dwelling time patterns. However, choosing the catheter positioning is usually done without any aid of mathematical optimization methods. Researchers have recently suggested some optimization models for catheter positioning, and also heuristics for solving them. However, there are no available methods for finding the optimal solution of these models within a clinically acceptable time frame.

In this paper we present the foundation for a branch-and-bound method that has been tailored to the catheter positioning problem. Tests show that this tailored branch-and-bound method has some promising features, for example that the dual bound is improved faster than when using a standard branch-and-bound method. But the tests also show that further research is required to develop it into a method that can find the optimal solution fast enough.

```
@techreport{diva2:658187,
author = {Holm, Åsa},
title = {{A Tailored Branch-and-Bound Method for Optimizing the Dwelling Time Pattern and Catheter Positioning in HDR Brachytherapy}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2013},
type = {Other academic},
number = {LiTH-MAT-R, 2013:12},
address = {Sweden},
}
```

This paper deals with a Military Aircraft Mission Planning Problem, where the problem is to find time efficient flight paths for a given aircraft fleet that should attack a number of ground targets. Due to the nature of the attack, two aircraft need to rendezvous at the target, that is, they need to be synchronized in both space and time. At the attack, one aircraft is launching a guided weapon, while the other is illuminating the target. Each target is associated with multiple attack and illumination options. Further, there may be precedence constraints between targets, limiting the order of the attacks. The objective is to maximize the outcome of the entire attack, while also minimizing the mission time span. We present two mathematical models for this problem and compare their efficiency on some small test cases. We also provide some heuristic approaches since direct application of a general MIP solver to the mathematical model is only practical for smaller scenarios. The heuristics are compared and they successfully provide solutions to a number of scenarios.

```
@techreport{diva2:524832,
author = {Quttineh, Nils-Hassan and Lundberg, Kristian and Holmberg, Kaj and Larsson, Torbjörn},
title = {{Aircraft Mission Planning}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2012},
type = {Other academic},
number = {LiTH-MAT-R, 2012:7},
address = {Sweden},
}
```

The problem setting concerns the tactical planning of a military operation. Imagine a big wide open area where a number of interesting targets are positioned. It could be radar stations or other surveillance equipment, with or without defensive capabilities, which the attacker wishes to destroy. Moreover, the targets are possibly guarded by defending units, like Surface-to-Air Missile (SAM) units. The positions of all units, targets and defenders, are known. We consider the problem of the attacker, where the objective is to maximize the expected outcome of a joint attack against the enemy, subject to a limited amount of resources (i.e. aircraft, tanks). We present a mathematical model for this problem, together with alternative model versions which provide an optimistic and a pessimistic approximation. The model is not efficient for large problem instances, hence we also develop heuristic solution approaches and successfully provide solutions to a number of scenarios.

```
@techreport{diva2:524831,
author = {Quttineh, Nils-Hassan and Lundberg, Kristian and Holmberg, Kaj and Larsson, Torbjörn},
title = {{Effect Oriented Planning}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2012},
type = {Other academic},
number = {LiTH-MAT-R, 2012:6},
address = {Sweden},
}
```

The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by results of numerical experiments performed for some problems related to the design of filter networks.
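A minimal bilinear special case illustrates the alternating structure of such problems: fitting a rank-1 matrix u v^T by alternating least squares, where each factor is optimal given the other (an illustrative toy, not the authors' global search strategy):

```python
import numpy as np

def rank1_als(M, iters=50):
    # Alternating least squares for: min_{u,v} ||M - u v^T||_F^2,
    # a simple bilinear instance of the multilinear least-squares problem.
    u = np.ones(M.shape[0])
    v = np.zeros(M.shape[1])
    for _ in range(iters):
        v = M.T @ u / (u @ u)  # optimal v for fixed u (linear LS)
        u = M @ v / (v @ v)    # optimal u for fixed v (linear LS)
    return u, v
```

Each half-step is an easy linear least-squares problem, but the joint problem is non-convex; with more factors (true multilinear operators) the landscape of local minimizers becomes the difficulty the global search strategy targets.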

```
@techreport{diva2:457980,
author = {Andersson, Mats and Burdakov, Oleg and Knutsson, Hans and Zikrin, Spartak},
title = {{Global Search Strategies for Solving Multilinear Least-squares Problems}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2011},
type = {Other academic},
number = {LiTH-MAT-R, 2011:17},
address = {Sweden},
}
```

A well-known approach to the design of computationally efficient filters is to use spectral factorization, i.e. a decomposition of a filter into a sequence of sub-filters. Due to the sparsity of the sub-filters, the typical processing speedup factor is within the range 1-10 in 2D, and for 3D it achieves 10-100. The design of such decompositions consists in choosing the proper number of sub-filters, their individual types and sparsity. We focus here on finding optimal values of coefficients for given sequences of sparse sub-filters. This is a non-convex large-scale optimization problem. The existing approaches are characterized by a lack of robustness: very slow convergence with no guarantee of success. They are typically based on generating random initial points for further refinement with the use of local search methods. To deal with the multi-extremal nature of the original problem, we introduce a new constrained optimization problem. Its solution is then used as an initial point in the original problem for further refinement. Our approach is applicable to designing multi-dimensional filters. Its efficiency and robustness are illustrated by designing sub-filter sequences for 2D low-pass, band-pass and high-pass filters of approximately the same quality as with the use of a standard approach, but with an overall design speedup factor of several hundred.
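The speedup rests on the identity that applying a cascade of sub-filters is equivalent to applying the single dense filter given by their convolution. A minimal check of this identity, with hypothetical 1D kernels chosen only for illustration:

```python
import numpy as np

# A filter realized as a cascade of (sparse) sub-filters: applying the
# sub-filters in sequence equals applying their convolution once.
f1 = np.array([1.0, 0.0, 1.0])  # sparse sub-filter (hypothetical)
f2 = np.array([1.0, 2.0, 1.0])  # second sub-filter (hypothetical)
equivalent_dense = np.convolve(f1, f2)

signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # unit impulse
cascade = np.convolve(np.convolve(signal, f1), f2)
direct = np.convolve(signal, equivalent_dense)
```

Because the sparse sub-filters have fewer nonzero taps in total than the equivalent dense filter, the cascade is cheaper to apply; the design problem in the paper is to choose the sub-filter coefficients so that the cascade approximates a desired frequency response.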

```
@techreport{diva2:444274,
author = {Norell, Björn and Burdakov, Oleg and Andersson, Mats and Knutsson, Hans},
title = {{Approximate spectral factorization for design of efficient sub-filter sequences}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2011},
type = {Other academic},
number = {LiTH-MAT-R, 2011:14},
address = {Sweden},
}
```

We suggest here a least-change correction to an available finite element (FE) solution. This postprocessing procedure is aimed at recovering the monotonicity and some other important properties that may not be exhibited by the FE solution. It is based on solving a monotonic regression problem with some extra constraints. One of them is a linear equality-type constraint which models the conservativity requirement. The other ones are box-type constraints, and they originate from the discrete maximum principle. The resulting postprocessing problem is a large-scale quadratic optimization problem. It is proved that the postprocessed FE solution preserves the accuracy of the discrete FE approximation. We introduce an algorithm for solving the postprocessing problem. It can be viewed as a dual ascent method based on the Lagrangian relaxation of the equality constraint. We justify its correctness theoretically. Its efficiency is demonstrated by the presented results of numerical experiments.

```
@techreport{diva2:410865,
author = {Burdakov, Oleg and Kapyrin, Ivan and Vassilevski, Yuri},
title = {{Monotonicity recovering and accuracy preserving optimization methods for postprocessing finite element solutions}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2011},
type = {Other academic},
number = {LiTH-MAT-R, 8},
address = {Sweden},
}
```

Almost every Costly Global Optimization (CGO) solver utilizes a surrogate model, or response surface, to approximate the true (costly) function. The EGO algorithm introduced by Jones et al. utilizes the DACE framework to build an approximating surrogate model. By optimizing a less costly utility function, the algorithm determines a new point where the original objective function is evaluated. This is repeated until some convergence criterion is fulfilled. The original EGO algorithm finds the new point to sample in a two-stage process. In its first stage, the estimates of the interpolation parameters are optimized with respect to already sampled points. In the second stage, these estimated values are treated as true in order to optimize the location of the new point. Treating estimated values as correct introduces a source of error. Instead, in the one-stage EGO algorithm, both the parameters and the location of a new point are optimized at the same time, removing this source of error. The new subproblem becomes more difficult, but it eliminates the need to solve two subproblems. Difficulties in implementing a fast and robust one-stage EGO algorithm in TOMLAB are discussed, especially the solution of the new subproblem.

```
@techreport{diva2:524834,
author = {Quttineh, Nils-Hassan and Holmström, Kenneth},
title = {{Implementation of a One-Stage Efficient Global Optimization (EGO) Algorithm}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2009},
type = {Other academic},
number = {Research Report 2009, School of Education, Culture and Communication, Division of Applied Mathematics, Mälardalen University, 2009:2},
address = {Sweden},
}
```

We consider a constrained optimization problem with mixed integer and real variables. It models optimal placement of communications relay nodes in the presence of obstacles. This problem is widely encountered, for instance, in robotics, where it is required to survey some target located at one point and convey the gathered information back to a base station located at another point. One or more unmanned aerial or ground vehicles (UAVs or UGVs) can be used for this purpose as communications relays. The decision variables are the number of unmanned vehicles (UVs) and the UV positions. The objective function is assumed to assess the placement quality. We suggest one instance of such a function which is more suitable for assessing UAV placement. The constraints are determined by, firstly, a free line-of-sight requirement for every consecutive pair in the chain and, secondly, a limited communication range. Because of these requirements, our constrained optimization problem is a difficult multi-extremal problem for any fixed number of UVs. Moreover, the feasible set of real variables is typically disjoint. We present an approach that allows us to efficiently find a practically acceptable approximation to a global minimum in the problem of optimal placement of communications relay nodes. It is based on a spatial discretization with a subsequent reduction to a shortest path problem. The case of a restricted number of available UVs is also considered here. We introduce two label correcting algorithms which are able to take advantage of some peculiarities of the resulting restricted shortest path problem. The algorithms produce a Pareto solution to the two-objective problem of minimizing the path cost and the number of hops. We justify their correctness. The presented results of numerical 3D experiments show that our algorithms are superior to the conventional Bellman-Ford algorithm tailored to solving this problem.
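
The two-objective (path cost, hop count) idea can be sketched with a generic label-correcting search that keeps, at each node, only Pareto-nondominated labels. This is an illustrative scheme on a toy digraph, not the authors' specialized algorithms:

```python
from collections import defaultdict
from heapq import heappop, heappush

def pareto_labels(adj, s, t):
    """Label-correcting search keeping Pareto-optimal (cost, hops) labels.

    adj: {u: [(v, w), ...]} with nonnegative arc costs; every arc counts
    as one hop. Returns the nondominated (cost, hops) pairs for s-t paths.
    """
    best = defaultdict(list)          # nondominated labels per node
    heap = [(0, 0, s)]                # (cost, hops, node)
    while heap:
        c, h, u = heappop(heap)
        if any(c2 <= c and h2 <= h for c2, h2 in best[u]):
            continue                  # dominated by an existing label
        best[u] = [(c2, h2) for c2, h2 in best[u]
                   if not (c <= c2 and h <= h2)]
        best[u].append((c, h))
        for v, w in adj.get(u, []):
            heappush(heap, (c + w, h + 1, v))
    return sorted(best[t])

# A cheap 2-hop route and an expensive 1-hop route are both Pareto-optimal.
adj = {"s": [("a", 1), ("t", 5)], "a": [("t", 1)]}
```

Here `pareto_labels(adj, "s", "t")` returns `[(2, 2), (5, 1)]`: neither label dominates the other, since the cheaper path uses more hops.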

```
@techreport{diva2:272756,
author = {Burdakov, Oleg and Holmberg, Kaj and Doherty, Patrick and Olsson, Per-Magnus},
title = {{Optimal placement of communications relay nodes}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2009},
type = {Other (popular science, discussion, etc.)},
number = {LiTH-MAT-R, 2009:3},
address = {Sweden},
}
```

```
@techreport{diva2:263187,
author = {Holmberg, Kaj},
title = {{Heuristics for the Rural Postman Problem with Application to Snow Removal for Secondary Roads}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2008},
type = {Other academic},
number = {LiTH-MAT-R, 2008:14},
address = {Sweden},
}
```

The set partitioning polytope has the quasi-integrality property that enables the use of simplex pivot based methods for finding an improved integer solution, which thereby is associated with a linear programming basis and a corresponding dual solution. Presented in this paper is a framework for an all-integer column generation methodology for set partitioning problems that utilises the quasi-integrality property of the feasible polytope. In the presented methodology, each successively found solution to a restricted master problem is feasible, integer and associated with a corresponding dual solution, which is then used in the column generation step. The column generation problem is tailored to produce columns that maintain integrality when pivoted into the basis. Furthermore, criteria for verifying optimality are presented.

```
@techreport{diva2:113790,
author = {Rönnberg, Elina and Larsson, Torbjörn},
title = {{An All-Integer Column Generation Methodology for Set Partitioning Problems}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2008},
type = {Other academic},
number = {Report / Department of Mathematics, Universitetet i Linköping, Tekniska högskolan},
address = {Sweden},
}
```

Hospital wards need to be staffed by nurses round the clock, resulting in irregular working hours for many nurses. Over the years, the nurses' influence on the scheduling has been increased in order to improve their working conditions. In Sweden it is common to apply a kind of self-scheduling where each nurse individually proposes a schedule, and then the final schedule is determined through informal negotiations between the nurses. This kind of self-scheduling is very time-consuming and often leads to conflicts. We present a pilot study which aims at determining if it is possible to create an optimisation tool that automatically delivers a usable schedule based on the schedules proposed by the nurses. The study is performed at a typical Swedish nursing ward, for which we have developed a mathematical model and delivered schedules. The results of this study are very promising.

```
@techreport{diva2:113789,
author = {Rönnberg, Elina and Larsson, Torbjörn},
title = {{Automating the Self-Scheduling Process of Nurses in Swedish Healthcare:
A Pilot Study}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2008},
type = {Other academic},
number = {LiTH-MAT-R, 2008:8},
address = {Sweden},
}
```

We study the problem of positioning unmanned aerial vehicles (UAVs) to maintain an unobstructed flow of communication from a surveying UAV to some base station through the use of multiple relay UAVs. This problem can be modeled as a hop-constrained shortest path problem in a large visibility graph. We propose a dual ascent method for solving this problem, optionally within a branch-and-bound framework. Computational tests show that realistic problems can be solved in a reasonably short time, and that the proposed method is faster than the classical dynamic programming approach.
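
The classical dynamic programming baseline mentioned here can be sketched as a Bellman-Ford recursion truncated at the hop limit; the graph below is a made-up example:

```python
def hop_constrained_sp(arcs, n, s, t, max_hops):
    """DP over hop counts: after round k, dist[v] is the cheapest cost
    of an s-v path using at most k arcs (a truncated Bellman-Ford)."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(max_hops):
        nxt = dist[:]
        for u, v, w in arcs:          # relax every arc once per round
            if dist[u] + w < nxt[v]:
                nxt[v] = dist[u] + w
        dist = nxt
    return dist[t]

# A cheap 2-hop route versus an expensive direct arc.
arcs = [(0, 1, 1), (1, 2, 1), (0, 2, 5)]
```

With `max_hops=1` the direct arc of cost 5 is the only option; allowing 2 hops yields the cheaper cost 2, which illustrates how the hop bound changes the optimum.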

```
@techreport{diva2:37550,
author = {Burdakov, Oleg and Holmberg, Kaj and Olsson, Per-Magnus},
title = {{A Dual Ascent Method for the Hop-constrained Shortest Path with Application to Positioning of Unmanned Aerial Vehicles}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2008},
type = {Other academic},
number = {Report / Department of Mathematics, Universitetet i Linköping, Tekniska högskolan, 2008:7},
address = {Sweden},
}
```

Considering routing in symmetrical three stage Clos networks, we search for the routing of an additional connection that requires the least rearrangements, i.e.\ the minimal number of changes of already routed connections. We describe polynomial methods, based on matchings and edge colorings. The basic idea is to swap colors along alternating paths. The paths need to be maximal, and the shortest of these maximal paths is chosen, since it minimizes the rerouting that needs to be done. Computational tests confirm the efficiency of the approach.
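
The core color-swap step can be sketched as follows, assuming a proper edge coloring stored as `nbr[v] = {color: neighbour}` and a start vertex that uses color `a` but not color `b`; this toy routine swaps the two colors along one maximal alternating path:

```python
def swap_colors_on_path(nbr, start, a, b):
    """Swap colors a and b along the maximal alternating path from start.

    nbr[v] maps each color to the neighbour reached by the edge of that
    color (proper coloring: at most one edge per color per vertex).
    Assumes start has an edge of color a but none of color b.
    """
    path, u, c = [], start, a
    while c in nbr[u]:                # follow edges of alternating colors
        v = nbr[u][c]
        path.append((u, v, c))
        u, c = v, (b if c == a else a)
    for u, v, c in path:              # remove all old color assignments
        del nbr[u][c]
        del nbr[v][c]
    for u, v, c in path:              # re-add each edge with the other color
        other = b if c == a else a
        nbr[u][other] = v
        nbr[v][other] = u
    return path

# Path 0-1-2-3 colored a, b, a; the swap makes color a free at vertex 0.
nbr = {0: {"a": 1}, 1: {"a": 0, "b": 2}, 2: {"b": 1, "a": 3}, 3: {"a": 2}}
swap_colors_on_path(nbr, 0, "a", "b")
```

After the swap, vertex 0 uses only color `b`, so color `a` has been freed there; choosing the shortest such maximal path, as the abstract notes, minimizes the number of recolored (rerouted) edges.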

```
@techreport{diva2:262221,
author = {Holmberg, Kaj},
title = {{Graph Optimization Approaches for Minimal Rerouting in Symmetric Three Stage Clos Networks}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2007},
type = {Other academic},
number = {LiTH-MAT-R, 2007:12},
address = {Sweden},
}
```

```
@techreport{diva2:259097,
author = {Gunnarsson, Helene and Rönnqvist, Mikael},
title = {{Solving a multi-period supply chain problem for a pulp industry using Lagrangian heuristics based on time periods}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2007},
type = {Other academic},
number = {LiTH-MAT-R, 5},
address = {Sweden},
}
```

```
@techreport{diva2:258865,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Compatible Weights and Valid Cycles in Non-spanning OSPF Routing Patterns}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2007},
type = {Other academic},
number = {LiTH-MAT-R, 4},
address = {Sweden},
}
```

```
@techreport{diva2:258727,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Design of OSPF Networks using Subpath Consistent Routing Patterns}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2007},
type = {Other academic},
number = {LiTH-MAT-R, 5},
address = {Sweden},
}
```

```
@techreport{diva2:255533,
author = {Burdakov, Oleg and Grimvall, Anders and Sysoev, Oleg},
title = {{Data preordering in generalized pav algorithm for monotonic regression}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2006},
type = {Other academic},
number = {LiTH-MAT-R, 6},
address = {Sweden},
}
```

```
@techreport{diva2:253664,
author = {Broström, Peter and Holmberg, Kaj},
title = {{On the Extremal Structure of an OSPF Related Cone}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2006},
type = {Other academic},
number = {LiTH-MAT-R, 2006:2},
address = {Sweden},
}
```

In this paper we consider a combined supply chain and ship routing problem for a large pulp producer in Scandinavia. The problem concerns the distribution of pulp to customers, with route scheduling of ships as a central part of the modeling. It is an operative planning problem with daily ship routing decisions over a 40-day period. The pulp supply is determined by fixed production plans, and the transport flows and storages are modeled with the requirement to satisfy the demand in a cost-optimal way. We develop a mixed integer programming model with binary variables for route usage of a vessel.

The problem is solved with a heuristic solution method, based on a rolling time horizon and a standard branch and bound algorithm. We apply the heuristic on problem instances with real-world data, and compare results from reduced problem instances with the results from an exact branch and bound search. The computational experiments indicate that real-world problems are solvable with the solution method and that it can in many cases be very efficient.

```
@techreport{diva2:17883,
author = {Bredström, David and Rönnqvist, Mikael},
title = {{Supply chain optimization in pulp distribution using a rolling horizon solution approach}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2006},
type = {Other academic},
number = {LiTH-MAT-R, 2003-25},
address = {Sweden},
}
```

```
@techreport{diva2:244891,
author = {Broström, Peter and Holmberg, Kaj},
title = {{A new derivation of valid cycles}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2005},
type = {Other academic},
address = {Sweden},
}
```

```
@techreport{diva2:244188,
author = {Hussian, Mohamed and Grimvall, Anders and Burdakov, Oleg and Sysoev, Oleg},
title = {{Monotonic Regression for Assessment of Trends in Environmental Quality Data}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiU-MAT-R, 1},
address = {Sweden},
}
```

```
@techreport{diva2:243375,
author = {Henningsson, Mathias and Karlsson, Jenny and Rönnqvist, Mikael},
title = {{Mixed Integer Programming models to support Tactical Forest Road Upgrade Planning}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-MAT-R, 20},
address = {Sweden},
}
```

```
@techreport{diva2:243362,
author = {Gunnarsson, Helene and Rönnqvist, Mikael and Carlsson, Dick},
title = {{Integrated production and distribution planning of the supply chain for Södra Cell AB}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-MAT-R, 19},
address = {Sweden},
}
```

```
@techreport{diva2:242703,
author = {Holmberg, Kaj and Joborn, Martin and Melin, Kennet},
title = {{Lagrangian Based Heuristics for the Multicommodity Network Flow Problem with Fixed Costs On Paths}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-MAT-R, 15},
address = {Sweden},
}
```

```
@techreport{diva2:242702,
author = {Holmberg, Kaj},
title = {{Mean Value Cross Decomposition Based Branch-and-Bound for Mixed Integer Programming Problems}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-MAT-R, 13},
address = {Sweden},
}
```

```
@techreport{diva2:242676,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Multiobjective design of survivable IP networks}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-MAT-R, 3},
address = {Sweden},
}
```

```
@techreport{diva2:242669,
author = {Gunnarsson, Helene and Rönnqvist, Mikael and Carlsson, Dick},
title = {{A combined terminal location and ship routing problem}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-MAT-R, 4},
address = {Sweden},
}
```

Many telecommunication networks use the Internet Protocol for deciding the routing of traffic. The specifications OSPF (Open Shortest Path First) and ECM (Equal Cost Multipath) are very common, and state that each router sends the traffic on the shortest path to the destination. If there are several shortest paths, the router splits the traffic evenly. In order to have some control over the traffic distribution, the operator can assign weights to the links in the network, and these weights are used by the routers when calculating the shortest paths. It has been shown that by optimizing over the values of the weights, the performance of a network can be much improved. A difficult question is whether or not for a set of desired traffic patterns there exists a compatible metric, i.e. weights making the routers give the specified traffic patterns. There is one known necessary condition for the existence of such a metric, but up to now no sufficient conditions. We investigate this problem, and find more general necessary conditions for the existence of compatible weights for a set of given desired "shortest path"-graphs. A polynomial algorithm that for most cases verifies the non-existence of a compatible metric is presented. The algorithm also indicates which parts of the traffic patterns are in conflict. A few numerical examples are used to illustrate the procedure, and some computational tests are reported.
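
The forward direction of the compatibility question is easy to state: given weights, one can compute, per destination, exactly which arcs lie on shortest paths and compare them with the desired pattern. A minimal sketch of that forward check (not the authors' necessary-condition algorithm):

```python
import heapq

def shortest_path_arcs(adj, w, dest):
    """Arcs lying on some shortest path to dest under weights w.

    adj: {u: [v, ...]} directed graph; w: {(u, v): positive weight}.
    A compatible metric must reproduce the desired pattern exactly, so
    comparing this set with the desired SP-graph is the forward check.
    """
    rev = {}                          # reversed adjacency for Dijkstra
    for u, vs in adj.items():
        for v in vs:
            rev.setdefault(v, []).append(u)
    dist = {dest: 0}
    heap = [(0, dest)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue                  # stale heap entry
        for u in rev.get(v, []):
            nd = d + w[(u, v)]
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    # An arc is on a shortest path iff it is "tight".
    return {(u, v) for u, vs in adj.items() for v in vs
            if u in dist and v in dist and dist[u] == w[(u, v)] + dist[v]}

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
w = {(0, 1): 1, (0, 2): 2, (1, 3): 1, (2, 3): 1}
```

On this square network, arc (0, 2) is not tight under the given weights, so a desired pattern that splits traffic from node 0 over both branches (ECM style) would need different weights.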

```
@techreport{diva2:242675,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Determining the Non-Existence of a Compatible OSPF Metric}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-MAT-R, 6},
address = {Sweden},
}
```

A dominating standard for routing in telecommunication networks is Internet Protocol with OSPF (Open Shortest Path First) and ECM (Equal Cost Multipath). Each router sends the traffic on the shortest path to the destination. If there are several shortest paths, the router splits the traffic evenly. The operator can assign weights to the links in the network, which are used by the routers when calculating the shortest paths. An important question is whether or not for a set of desired traffic patterns there exists a compatible metric, i.e. weights making the routers give the specified traffic patterns. We describe necessary conditions, stronger than those previously discovered, for the existence of compatible weights for a set of given desired shortest-path graphs.

```
@techreport{diva2:242674,
author = {Broström, Peter and Holmberg, Kaj},
title = {{Stronger Necessary Conditions for the Existence of a Compatible OSPF Metric}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2004},
type = {Other academic},
number = {LiTH-MAT-R, 8},
address = {Sweden},
}
```

```
@techreport{diva2:242701,
author = {Holmberg, Kaj and Kiwiel, Krzysztof},
title = {{Mean Value Cross Decomposition for Nonlinear Convex Problems}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2003},
type = {Other academic},
number = {LiTH-MAT-R, 10},
address = {Sweden},
}
```

```
@techreport{diva2:242678,
author = {Flisberg, Patrik and Eveborn, Patrik and Rönnqvist, Mikael},
title = {{Staff planning in the Swedish home care system}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2003},
type = {Other academic},
number = {LiTH-MAT-R, 11},
address = {Sweden},
}
```

In this paper we present a genetic algorithm for the pulp distribution problem at a large pulp producer in Scandinavia. The distribution is a major part of the company's supply chain and includes transports by cargo vessels, trains and trucks, and storage at terminals in ports, at pulp mills and at customer locations. The problem we focus on is to find ship schedules and pulp deliveries that minimize the total cost of distribution.

The genetic algorithm utilizes two linear programming models. The first model optimizes all transport flows given a schedule, and the second model approximates the performance of a schedule, measured as the total distribution cost. In the computational experiments we use instances from the real world and compare the results with an exact mixed integer programming approach.

```
@techreport{diva2:242677,
author = {Bredström, David and Rönnqvist, Mikael},
title = {{A genetic algorithm for a pulp distribution problem}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2003},
type = {Other academic},
number = {LiTH-MAT-R, 26},
address = {Sweden},
}
```

In this paper we propose an algorithm for solving problems with nonconvex objective functions and linear constraints. We extend the previously suggested conjugate direction Frank-Wolfe algorithm to nonconvex problems. We apply our method to multi-class user equilibria under social marginal cost pricing. Results of numerical experiments on the Sioux Falls and Winnipeg networks are reported.
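
For reference, the plain Frank-Wolfe method that this line of work builds on can be sketched on a toy convex quadratic over the unit simplex, where the linear minimization step just picks the vertex with the smallest gradient component (the conjugate direction variant and the nonconvex extension are not shown):

```python
def frank_wolfe(grad, x, iters=200):
    """Plain Frank-Wolfe over the unit simplex with step 2/(k+2).

    The linear minimization oracle over the simplex is trivial: the
    minimizing vertex is e_i for the smallest gradient component i.
    """
    n = len(x)
    for k in range(iters):
        g = grad(x)
        i = min(range(n), key=g.__getitem__)  # LMO: best simplex vertex
        gamma = 2.0 / (k + 2)                 # classical diminishing step
        x = [(1.0 - gamma) * xj for xj in x]  # move toward vertex e_i
        x[i] += gamma
    return x

# Toy problem: minimize sum((x_j - c_j)^2) over the simplex, c on the simplex.
c = [0.5, 0.5, 0.0]
grad = lambda x: [2.0 * (x[j] - c[j]) for j in range(len(c))]
sol = frank_wolfe(grad, [1.0, 0.0, 0.0])
```

The iterates zig-zag between the two active vertices and converge to (0.5, 0.5, 0) at the method's characteristic O(1/k) rate, which is the sublinear behaviour that conjugate direction and partan modifications aim to improve.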

```
@techreport{diva2:23506,
author = {Daneva, Maria and Lindberg, Per Olov},
title = {{A Conjugate Direction Frank-Wolfe Method for Nonconvex Problems}},
institution = {Linköping University, Department of Electrical Engineering},
year = {2003},
type = {Refereed},
number = {LiTH-MAT-R, 2003:09},
address = {Sweden},
}
```

## Student theses

The container loading problem considered in this thesis is to determine placements of a set of packages within one or multiple shipping containers. Smaller packages are consolidated on pallets prior to being loaded in the shipping containers together with larger packages. There are multiple objectives which may be summarized as fitting all the packages while achieving good stability of the cargo as well as the shipping containers themselves.

According to recent literature reviews, previous research in the field has to a large extent neglected issues relevant in practice. Our real-world application was developed for the industrial company Atlas Copco to be used for sea container shipments at their Distribution Center (DC) in Texas, USA. Hence, all applicable practical constraints faced by the DC operators had to be treated properly. A high variety in sizes, weights and other attributes, such as stackability, among packages added complexity to an already challenging combinatorial problem.

Inspired by how the DC operators plan and perform loading manually, the batch concept was developed, which refers to grouping of boxes based on their characteristics and solving subproblems in terms of partial load plans. In each batch, an extensive placement heuristic and a load plan evaluation run iteratively, guided by a Genetic Algorithm (GA). In the placement heuristic, potential placements are evaluated using a scoring function considering aspects of the current situation, such as space utilization, horizontal support and heavier boxes closer to the floor. The scoring function is weighted by coefficients corresponding to the chromosomes of an individual in the GA population. Consequently, the fitness value of an individual in the GA population is the rating of a load plan.

The loading optimization software has been tested and successfully implemented at the DC in Texas. The software has been proven capable of generating satisfactory load plans within acceptable computation times, which has resulted in reduced uncertainty and labor usage in the loading process. Analysis using real sea container shipments shows that the GA is able to tune the scoring coefficients to suit the particular problem instance being solved.
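
The GA component described above, with scoring coefficients as chromosomes, can be caricatured by a tiny real-coded GA; the rating function below is a made-up stand-in for the load-plan evaluation, and all parameter values are illustrative:

```python
import random

def ga(fitness, dim, pop_size=20, gens=40, sigma=0.1, seed=1):
    """Tiny real-coded GA: keep the fitter half as elites, create children
    by blend crossover of two elites plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            children.append([(x + y) / 2 + rng.gauss(0.0, sigma)
                             for x, y in zip(a, b)])
        pop = elite + children        # elitism: best individuals survive
    return max(pop, key=fitness)

# Toy stand-in for the load-plan rating: best at coefficients (0.7, 0.2, 0.1).
target = (0.7, 0.2, 0.1)
rating = lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))
best = ga(rating, 3)
```

In the thesis setting, `rating` would be the (expensive) placement-heuristic-plus-evaluation loop, which is why the GA population stays small.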

```
@mastersthesis{diva2:1073644,
author = {Olsson, Jonas},
title = {{Solving a highly constrained multi-level container loading problem from practice}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2017/01--SE}},
year = {2017},
address = {Sweden},
}
```

The distribution of tasks to a heterogeneous work force at libraries and other service institutions is a time-consuming task for manual schedulers. In this thesis, we study the possibility of making the assignment using operations research techniques. The problem studied concerns seven days per week, five types of tasks, two types of staff qualifications and around 100 tasks per week to be assigned to the staff. Staff member satisfaction is also taken into account in the scheduling process.

The main objective is to create an optimal ten-week rotating schedule, in which the stand-in staff members are evenly distributed. Such a schedule is considered to be robust, since stand-in staff can replace the regular staff when there is unforeseen absence.

A mathematical model is formulated for the problem and is solved using the commercial solver CPLEX. We also present two different large neighbourhood search heuristic implementations for this problem. The first heuristic assigns complete week blocks to the staff members, while the second one distributes one task at a time. The latter heuristic works better than the former and achieves results comparable to those of the commercial solver. Our conclusion is that the second heuristic works better because it focuses on finding a good weekend distribution before creating the rest of the schedule. A conclusion from our work is that the weekend-worker constellation is the most significant degree of freedom in the problem.

```
@mastersthesis{diva2:954109,
author = {Karlsson, Emelie and Arvidson, Claes},
title = {{Work Distribution for a Heterogeneous Library Staff:
A Personnel Task Scheduling Problem}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2016/05--SE}},
year = {2016},
address = {Sweden},
}
```

We derive a technique for scaling the search directions of feasible direction methods when applied to optimization problems over Cartesian product sets. It is proved that when the scaling is included in a convergent feasible direction method, also the new method will be convergent. The scaling technique is applied to the Frank-Wolfe method, the partanized Frank-Wolfe method and a heuristic Frank-Wolfe method. The performance of these algorithms with and without scaling is evaluated on the stochastic transportation problem. It is found that the scaling technique has the ability to improve the performance of some methods. In particular we observed a huge improvement in the performance of the partanized Frank-Wolfe method, especially when the scaling is used together with an exact line search and when the number of sets in the Cartesian product is large.

```
@mastersthesis{diva2:892603,
author = {Högdahl, Johan},
title = {{Conditional steepest descent directions over Cartesian product sets:
With application to the Frank-Wolfe method}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2015/11--SE}},
year = {2015},
address = {Sweden},
}
```

In telecommunications in general and in GSM in particular, the handover is a feature that guarantees a smooth transition of a call from one base station - that is, for the purpose of this project, an antenna - to another. In the past ten years, the amount of data traffic through mobile telecommunications has doubled annually, putting an enormous strain on the network and forcing operators to upgrade with more and more base stations and new features. Although 3G and 4G are responsible for data traffic in most countries, GSM still provides more than 80% of the coverage for mobile devices around the world. Due to the increase in data traffic, 3G and 4G need to use more and more frequencies at the expense of GSM. An optimization of the GSM network is thus vital. In this project, we research two methods to automatically choose the parameters of interest (PoI) that govern the handover feature in each cell, which is, roughly speaking, the area of coverage of one antenna. In one of these methods, the choice of cell- and cell-to-cell-specific parameters has its origins in control theory, while the other method is based on mathematical optimization. In the mathematical sense, our goal is to optimize the quality of service over PoIs. Extensive simulations have been run using these PoIs in order to evaluate if and how the two different methods can effectively be used in reality. Several useful insights have been gained that will provide the basis for future work. The optimization approach in particular has proved to deliver good results within the limitations of the simulated environment used for testing.

```
@mastersthesis{diva2:884937,
author = {Pavski, Johann Joachim},
title = {{Handover Optimization in GSM}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2015/10--SE}},
year = {2015},
address = {Sweden},
}
```

Several industries use what is called rotating workforce scheduling. This often means that employees are needed around the clock seven days a week, and that they have a schedule which repeats itself after some weeks. This thesis gives an introduction to this kind of scheduling and presents a review of previous work done in the field. Two different optimization models for rotating workforce scheduling are formulated and compared, and some examples are created to demonstrate how the addition of soft constraints to the models affects the scheduling outcome. Two large realistic cases, with constraints commonly used in many industries, are then presented. The schedules are in these cases analyzed in depth and evaluated. One of the models excelled as it provides good results within a short time limit and it appears to be a worthy candidate for rotating workforce scheduling.

```
@mastersthesis{diva2:867722,
author = {Granfeldt, Caroline},
title = {{Rotating Workforce Scheduling}},
school = {Linköping University},
type = {{LITH-MAT-EX--2015/08--SE}},
year = {2015},
address = {Sweden},
}
```

In IP networks using the OSPF principle together with the ECMP principle, the traffic is routed on all shortest paths. Weights on links are set by an administrator, without knowing what the resulting routing pattern will be. In this thesis, I give a heuristic solution to the problem of changing a set of desired routing patterns in an ordered way to make them compatible with each other. An implementation of the algorithm has been made, and some performance testing with provided data is also presented.

```
@mastersthesis{diva2:838801,
author = {Lindblad, Andreas},
title = {{Routing of traffic in an IP-network using combined routing patterns}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2015/06--SE}},
year = {2015},
address = {Sweden},
}
```

Numerous studies have been done in the area of nurse scheduling, since this is a complex area with many aspects that have to be taken into account. An interesting but little studied subject is how the requirements for the scheduling affect the possibility to construct a feasible schedule, or how the requirements affect the quality of the schedule. Of special interest is the effect of the composition of the workforce and of changes in scheduling rules. What is missing are results showing which compositions and changes are possible, and if so, what is needed to be able to follow through with them.

The changes tested in our simulation study are changes that are up for discussion at many wards in Sweden today, with topics such as split shifts and high part-time work percentages within the staff. In order to simulate various scheduling requirements and changes, an integer linear model for creating nurse schedules is developed. The results provide some insight into the dependence between scheduling requirements and the resulting schedules. In particular, our simulation results indicate that there is an inherent conflict between high part-time work percentages and split or long work shifts. Our results can be used as a basis for future research on these topics in the area of nurse scheduling.

```
@mastersthesis{diva2:796283,
author = {Håkansson, Rebecka},
title = {{Staff scheduling in elderly care - A simulation study of trade-offs}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2015/02--SE}},
year = {2015},
address = {Sweden},
}
```

This thesis is about solving the manpower planning problem concerning staffing and transitioning of pilots. The objective of the planning is to have enough pilots to satisfy the demand while minimizing the cost. The main decisions to take are how many pilots to hire, which pilots to train and which courses to schedule. The planning problems that arise are both large and difficult, which makes it important to use efficient solution methods. Seniority rules between pairs of pilots are the most complicating factor.

A major part in the solution process is the solving of mixed integer programs. The emphasis in the thesis is to develop and test adaptations of the branch and bound algorithm to solve mixed integer programs faster. One of these is a branching principle that takes a problem-specific structure into account. A graph of implications is constructed from the seniority rules, and this graph is then used to estimate the impact of each branching candidate. The implemented methods outperform the software XPRESS on some instances, while for most instances the performance is comparable.

```
@mastersthesis{diva2:558759,
author = {Mor\'{e}n, Björn},
title = {{Utilizing problem specific structures in branch and bound methods for manpower planning}},
school = {Linköping University},
type = {{LiTH-MAT-EX-2012/10-SE}},
year = {2012},
address = {Sweden},
}
```

The airline industry faces some of the largest and most complicated optimization problems among all industries today. A large part of their costs is crew costs, and large savings can be made by efficient manpower planning. The research focus has been on the later steps in the planning process, such as crew scheduling. The focus of this thesis is on the largely unexplored research area of staffing and transition planning for pilots. For the important question of which pilot should receive training for which position, no solution strategy with optimization has, before this thesis, been presented. One reason for this might be that many complicated regulations concern this question, making an easily solved model impossible. I have developed a tabu search based algorithm with encouraging results. The algorithm was tested on data from Scandinavian Airlines. Compared to a reference algorithm based on commercial mixed integer programming software, the tabu search algorithm finds similar solutions about 30 times faster. I show how tabu search can be tailored to a specific complicated problem, and the results are good enough to be not only of theoretical interest but also of direct practical interest for use in the airline industry.
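
The tabu search principle referred to here can be sketched generically: always move to the best non-tabu neighbour, even uphill, while a short-term memory forbids revisiting recent solutions. The objective and neighbourhood below are toy stand-ins, not the thesis's pilot-training model:

```python
from collections import deque

def tabu_search(f, x0, neighbours, tabu_len=5, iters=100):
    """Minimal tabu search: move to the best non-tabu neighbour each step,
    even if it worsens f; recently visited solutions are tabu."""
    x = best = x0
    tabu = deque(maxlen=tabu_len)     # short-term memory of visited solutions
    for _ in range(iters):
        cands = [(f(y), y) for y in neighbours(x) if y not in tabu]
        if not cands:
            break                     # whole neighbourhood is tabu
        _, x = min(cands)             # best move, improving or not
        tabu.append(x)
        if f(x) < f(best):
            best = x                  # keep the incumbent separately
    return best

# Toy stand-in objective over bit strings, explored by single-bit flips.
f = lambda s: abs(sum(s) - 3) + s[0]
def neighbours(s):
    return [tuple(b ^ (i == j) for j, b in enumerate(s)) for i in range(len(s))]

best = tabu_search(f, (1, 1, 1, 1, 1, 1), neighbours)
```

The tabu list is what distinguishes this from plain local search: it lets the walk escape local minima by temporarily accepting worse solutions without cycling straight back.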

```
@mastersthesis{diva2:780198,
author = {Thal\'{e}n, Björn},
title = {{Manpower planning for airline pilots: a tabu search approach}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2010/16--SE}},
year = {2010},
address = {Sweden},
}
```

In health care, staff scheduling is often performed manually, which requires much time and resources. Planning the work of a group of physicians, with its often very complex composition in terms of, for example, duties and competences, is no easy task. This thesis studies whether automated tactical staff planning, with a time horizon of six months to a year, could facilitate this task.

The report presents a goal programming model, implemented in AMPL and solved with CPLEX, for generating proposed staffing plans. To develop a mathematical model that accurately represents the conditions of physician staff planning, alternative formulations have been tried and evaluated. The most promising of the models, which is based on goal programming, has been tested in a pilot study on data from the Oncology clinic at Linköping University Hospital. The flexibility of the model meant that it could easily be applied to the data obtained there. The results of the pilot study indicate that the developed model has the capacity to propose reasonable staffing plans.

```
@mastersthesis{diva2:411402,
author = {Lund\'{e}n, Anna},
title = {{Taktisk bemanningsplanering av läkare: modellutveckling och en pilotstudie}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2010/15--SE}},
year = {2010},
address = {Sweden},
}
```

Routing of traffic in IP networks is often done using arc weights that determine which path the traffic takes (shortest path routing). The problem here is to decide whether there exists a set of weights given a desired routing pattern. This report investigates the performance of a number of models and optimization programs intended to solve this type of problem, which is often called the inverse shortest path routing problem.

The investigation shows that there is a large difference between the models and the optimization programs, and that the model based on cycle bases, solved with the CPLEX dualopt solver, is the fastest.

```
@mastersthesis{diva2:352876,
author = {Sandberg, Richard},
title = {{A survey of optimization methods for solving the inverse shortest path routing problem}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2010/26--SE}},
year = {2010},
address = {Sweden},
}
```

Within telecommunication and routing of traffic in IP networks, a protocol named "Open Shortest Path First" (OSPF) is widely used. This means that a server deals with the routing over a network with given weights by calculating shortest paths to use for routing. If we assume that a desired traffic pattern is given, the problem is to find out whether it is possible to set the weights so that the desired traffic pattern is part of a shortest path graph. In this thesis we assume that it is a unique shortest path. Searching for weights that solve the problem leads to a complex LP model. An alternative is to search in the LP dual under certain restrictions. These solutions to the LP dual are called conflicts, and a conflict means that there exist no weights such that the desired traffic pattern is obtained. The goal of this thesis is to study, classify and search for conflicts. An algorithm has been developed that finds some kinds of conflicts in polynomial time with respect to the size of the graph.
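The forward direction of the problem — given weights, checking whether a desired path is the unique shortest path — is straightforward to sketch, and clarifies what a set of weights must achieve for the inverse problem to be consistent. This is an illustrative sketch on a toy adjacency-list graph, not code from the thesis:

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances from src; adj[u] is a list of (v, weight) arcs."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def is_unique_shortest_path(adj, path):
    """True iff `path` is the unique shortest path between its endpoints."""
    dist = dijkstra(adj, path[0])
    wmap = {(u, v): w for u in adj for v, w in adj[u]}
    plen = sum(wmap[(u, v)] for u, v in zip(path, path[1:]))
    if plen != dist.get(path[-1], float("inf")):
        return False  # the desired path is not even shortest
    # Uniqueness: each node on the path must have exactly one "tight"
    # incoming arc, and it must come from the path predecessor.
    for k in range(1, len(path)):
        v = path[k]
        tight = [u for u in adj for x, w in adj[u]
                 if x == v and dist.get(u, float("inf")) + w == dist[v]]
        if tight != [path[k - 1]]:
            return False
    return True

adj = {"a": [("b", 1), ("c", 3)], "b": [("c", 1)], "c": []}
print(is_unique_shortest_path(adj, ["a", "b", "c"]))  # True: a-b-c (2) beats a-c (3)
```

When no weight assignment can make every desired path pass this check simultaneously, the desired traffic pattern contains a conflict in the sense described above.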

```
@mastersthesis{diva2:345479,
author = {Mor\'{e}n, Björn},
title = {{Icke-triviala billigaste väg-ruttningskonflikter - klassificering och sökmetoder}},
school = {Linköping University},
type = {{LiTH-MAT-EX--2010/19--SE}},
year = {2010},
address = {Sweden},
}
```

Page responsible: karin.johansson@liu.se

Last updated: Tue Mar 04 08:53:57 CET 2014