# Doctoral and Licentiate Theses

Numerical approximations using high order finite differences on summation-by-parts (SBP) form are investigated for discontinuous and fully nonlinear systems of partial differential equations. Stability and conservation properties of the approximations are obtained through a weak imposition of interface and boundary conditions with the simultaneous-approximation-term (SAT) technique. The SBP-SAT approximations replicate the continuous integration-by-parts rule. From this property, well-posedness and integral properties of the continuous problem are mimicked, and energy estimates leading to stability are obtained.
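To make the integration-by-parts mimicry concrete, the sketch below verifies the defining SBP property Q + Qᵀ = B for the classical second-order diagonal-norm operator, together with the discrete energy rate it implies (a minimal sketch; variable names are illustrative):

```python
import numpy as np

# Classical second-order diagonal-norm SBP operator D = H^{-1} Q on
# n+1 points: H is the (boundary-modified) quadrature norm and
# Q + Q^T = B = diag(-1, 0, ..., 0, 1) is the discrete analogue of
# integration by parts.

n, h = 20, 0.05
H = h * np.eye(n + 1)
H[0, 0] = H[-1, -1] = h / 2            # boundary-modified norm
D = np.zeros((n + 1, n + 1))
for i in range(1, n):                  # centered interior stencil
    D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h   # one-sided boundary rows
D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h

Q = H @ D
B = np.zeros_like(Q)
B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)         # the SBP property

# D differentiates linears exactly, and the energy rate mimics
# the continuous identity  int u u_x dx = (u(1)^2 - u(0)^2)/2:
x = h * np.arange(n + 1)
assert np.allclose(D @ x, 1.0)
u = np.random.default_rng(0).standard_normal(n + 1)
assert np.isclose(u @ H @ (D @ u), 0.5 * (u[-1] ** 2 - u[0] ** 2))
```

The identity uᵀH(Du) = (u_N^2 - u_0^2)/2, which holds for every grid function, is exactly what the energy method exploits to obtain stability estimates.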

The first part of the thesis focuses on the simulation of discontinuous linear advection problems. An artificial interface is introduced, separating parts of the spatial domain characterized by different wave speeds. A set of flexible stability conditions at the interface is derived, which can be adapted to yield conservative or non-conservative approximations. This model can be interpreted as a simplified version of nonlinear problems involving jumps at shocks, or as a prototype for wave propagation through different materials.

In the second part of the thesis, the vorticity/stream function formulation of the nonlinear momentum equation for an incompressible inviscid fluid is considered. SBP operators are used to derive a new Arakawa-like Jacobian with mimetic properties by combining different consistent approximations of the convection terms. Energy and enstrophy conservation is obtained for periodic problems using schemes with arbitrarily high order of accuracy. These properties are crucial for long-term numerical calculations in climate and weather forecasts or ocean circulation predictions.
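The two-dimensional Arakawa-like construction is involved, but its guiding idea, combining different consistent approximations of the convection terms so that quadratic invariants cancel exactly, can be illustrated in one dimension. The sketch below (a simplified analogue, not the scheme of the thesis) applies the skew-symmetric splitting of the Burgers convection term with a periodic central difference operator:

```python
import numpy as np

# 1D illustration: for u_t + u u_x = 0, write the convection term in the
# split form  u u_x = (1/3)(u^2)_x + (1/3) u u_x  (exact, since
# (u^2)_x = 2 u u_x). Discretized with a skew-symmetric periodic central
# difference D (D^T = -D), the contributions cancel in the energy
# balance, so sum(u^2) is conserved exactly, mimicking the continuous case.

n = 64
h = 2 * np.pi / n
x = h * np.arange(n)
D = np.zeros((n, n))
for i in range(n):                     # periodic centered differences
    D[i, (i + 1) % n] = 0.5 / h
    D[i, (i - 1) % n] = -0.5 / h       # note D^T = -D

u = np.sin(x) + 0.3 * np.cos(3 * x)
f = (D @ (u * u) + u * (D @ u)) / 3.0  # split convection term

# Semi-discrete energy rate: d/dt sum(u^2)/2 = -u.f = 0 (to round-off)
assert abs(u @ f) < 1e-10
```

The cancellation uᵀD(u²) + (u²)ᵀDu = 0 follows purely from Dᵀ = -D, which is the 1D counterpart of the mimetic property that makes the Arakawa-like Jacobian conserve energy and enstrophy.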

The third and final contribution of the thesis is dedicated to the incompressible Navier-Stokes problem. First, different completely general formulations of energy-bounding boundary conditions are derived for the nonlinear equations. The boundary conditions can be used at both far-field and solid-wall boundaries. The discretisation in time and space with weakly imposed initial and boundary conditions using the SBP-SAT framework is proved to be stable, and the divergence-free condition is approximated with the design order of the scheme. Next, the same formulations are considered in a linearised setting, whereupon the spectra associated with the initial boundary value problem and its SBP-SAT discretisation are derived using the Laplace-Fourier technique. The influence of different boundary conditions on the spectrum, and in particular on the convergence to steady state, is studied.

```
@phdthesis{diva2:1138469,
author = {La Cognata, Cristina},
title = {{High order summation-by-parts based approximations for discontinuous and nonlinear problems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1880}},
year = {2017},
address = {Sweden},
}
```

This thesis focuses on developing an appropriate stochastic model for temperature dynamics as a means of pricing weather derivative contracts based on temperature. Methods for pricing weather derivatives range from simple ones, like historical burn analysis, which does not involve modelling the underlying weather variable, to complex ones that require Monte Carlo simulations of daily average temperature (DAT) dynamics models to obtain explicit contract prices. Among the various DAT models, appropriate regime-switching models are considered better than single-regime models due to their ability to capture most of the temperature dynamics features caused by urbanization, deforestation, clear skies and changes of measurement station. The newly proposed model for DAT dynamics is a two-regime switching model with a heteroskedastic mean-reverting process in the base regime and a Brownian motion with nonzero drift in the shifted regime. Before using the model for pricing temperature derivative contracts, we compare its performance with a benchmark model proposed by Elias et al. (2014) in terms of the HDD, CDD and CAT indices. Using five data sets from different measurement locations in Sweden, the results show that the two-regime switching model with a heteroskedastic mean-reverting process gives relatively better results than the model of Elias et al. We develop mathematical expressions for pricing futures and option contracts on HDD, CDD and CAT indices. The local volatility nature of the model in the base regime captures the dynamics of the underlying process very well, leading to better pricing of temperature derivative contracts written on various index variables. We use the Monte Carlo simulation method for pricing weather derivative call option contracts.
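As a rough illustration of how such a model is used for pricing, the sketch below simulates a toy two-regime DAT process (a mean-reverting base regime with constant volatility for simplicity, and a drifting shifted regime) and Monte Carlo prices an HDD call option. All parameter values, switching probabilities and names are hypothetical and not taken from the thesis:

```python
import numpy as np

# Toy two-regime DAT model: base regime mean-reverts to a seasonal mean
# (constant volatility here, for simplicity); shifted regime is a
# Brownian motion with drift. A Markov chain switches between regimes.
# All parameter values below are hypothetical, not thesis estimates.

rng = np.random.default_rng(1)
days, n_paths = 90, 20000

t = np.arange(days)
seasonal = 5.0 + 10.0 * np.sin(2 * np.pi * t / 365.0)  # seasonal mean, deg C
T = np.empty((n_paths, days))
T[:, 0] = seasonal[0]
shifted = np.zeros(n_paths, dtype=bool)                # start in base regime
for k in range(1, days):
    u = rng.random(n_paths)
    shifted = np.where(shifted, u < 0.8, u < 0.1)      # Markov regime switch
    base_step = T[:, k - 1] + 0.3 * (seasonal[k] - T[:, k - 1]) \
                + 2.0 * rng.standard_normal(n_paths)
    shift_step = T[:, k - 1] + 1.0 + 3.0 * rng.standard_normal(n_paths)
    T[:, k] = np.where(shifted, shift_step, base_step)

# HDD index over the contract period, and a Monte Carlo call price
HDD = np.maximum(18.0 - T, 0.0).sum(axis=1)
strike, r, tau = HDD.mean(), 0.02, days / 365.0        # at-the-money strike
price = np.exp(-r * tau) * np.maximum(HDD - strike, 0.0).mean()
print(f"Monte Carlo HDD call price: {price:.2f}")
```

The same machinery extends to CDD and CAT indices by changing the payoff aggregation; the thesis derives closed-form expressions where possible and falls back on Monte Carlo for call options.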

```
@phdthesis{diva2:1120854,
author = {Evarest Sinkwembe, Emanuel},
title = {{Modelling Weather Dynamics for Weather Derivatives Pricing}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1784}},
year = {2017},
address = {Sweden},
}
```

We study admissible transformations and solve group classification problems for various classes of linear and nonlinear Schrödinger equations with an arbitrary number n of space variables.

The aim of the thesis is twofold. The first is the construction of a new theory of uniformly semi-normalized classes of differential equations and its application to solving group classification problems for these classes. Point transformations connecting two equations (source and target) from the class under study may have special properties of semi-normalization. This makes the group classification of that class using the algebraic method more involved. To extend this method, we introduce the new notion of uniformly semi-normalized classes. Various types of uniform semi-normalization are studied: with respect to the corresponding equivalence group, with respect to a proper subgroup of the equivalence group, as well as the corresponding types of weak uniform semi-normalization. An important kind of uniform semi-normalization is given by classes of homogeneous linear differential equations, which we call uniform semi-normalization with respect to the linear superposition of solutions.

The class of linear Schrödinger equations with complex potentials is of this type, and its group classification can be effectively carried out within the framework of uniform semi-normalization. Computing the equivalence groupoid and the equivalence group of this class, we show that it is uniformly semi-normalized with respect to linear superposition of solutions. This allows us to apply the version of the algebraic method for uniformly semi-normalized classes and to reduce the group classification of this class to the classification of appropriate subalgebras of its equivalence algebra. To single out the classification cases, integers that are invariant under equivalence transformations are introduced. The complete group classification of linear Schrödinger equations is carried out for the cases n = 1 and n = 2.

The second aim is to study the group classification problem for classes of generalized nonlinear Schrödinger equations which are not uniformly semi-normalized. We find their equivalence groupoids and their equivalence groups and then conclude whether these classes are normalized or not. The most appealing classes are the class of nonlinear Schrödinger equations with potentials and modular nonlinearities and the class of generalized Schrödinger equations with, in general, complex-valued coefficients of the Laplacian term. Neither of these classes is normalized. The first is partitioned into an infinite number of disjoint normalized subclasses of three kinds: logarithmic nonlinearity, power nonlinearity and general modular nonlinearity. The properties of the Lie invariance algebras of equations from each subclass are studied for arbitrary space dimension n, and the complete group classification is carried out for each subclass in dimension (1+2). The second class is successively reduced into subclasses until we reach the subclass of (1+1)-dimensional linear Schrödinger equations with variable mass, which also turns out to be non-normalized. We prove that this class is mapped by a family of point transformations to the class of (1+1)-dimensional linear Schrödinger equations with unique constant mass.

```
@phdthesis{diva2:1095590,
author = {Kurujyibwami, Celestin},
title = {{Admissible transformations and the group classification of Schrödinger equations}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1846}},
year = {2017},
address = {Sweden},
}
```

This dissertation considers Small Area Estimation with a main focus on estimation and prediction for repeated measures data. Small area statistics are in demand for both cross-sectional and repeated measures data. For instance, small area estimates for repeated measures data may be useful for public policy makers for different purposes such as funds allocation and new educational or health programs, where decision makers might be interested in the trend of estimates for a specific characteristic of interest for a given category of the target population as a basis for their planning.

It has been shown that the multivariate approach to model-based methods in small area estimation may achieve substantial improvement over the usual univariate approach. In this work, we consider repeated surveys taken on the same subjects at different time points. The population from which a sample has been drawn is partitioned into several non-overlapping subpopulations, and within all subpopulations there is the same number of group units. The aim is to propose a model that borrows strength across small areas and over time, with a particular interest in growth profiles over time. The model accounts for repeated surveys, grouped individuals and random-effects variation.

Firstly, a multivariate linear model for repeated measures data is formulated under small area estimation settings. The estimation of model parameters is discussed within a likelihood-based approach, and the prediction of random effects and of small area means across time points, per group unit and for all time points is obtained. In particular, as an application of the proposed model, an empirical study is conducted to produce district-level estimates of beans in Rwanda during the 2014 agricultural seasons, covering two varieties, bush beans and climbing beans.

Secondly, the thesis develops the properties of the proposed estimators and discusses the computation of their first and second moments. Through a method based on the parametric bootstrap, these moments are used to estimate the mean-squared errors of the predicted small area means. Finally, a particular case of incomplete multivariate repeated measures data that follow a monotone sample pattern for small area estimation is studied. Using a conditional likelihood-based approach, the estimators of model parameters are derived. The prediction of random effects and the predicted small area means are also produced.
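The parametric bootstrap idea for MSE estimation can be sketched on a toy univariate random-intercept model (not the multivariate growth curve model of the thesis; all names and parameter values are illustrative):

```python
import numpy as np

# Toy model: y_ij = mu + u_i + e_ij with u_i ~ N(0, su2), e_ij ~ N(0, se2).
# Area means are predicted by a simple shrinkage (EBLUP-type) formula, and
# their MSE is estimated by a parametric bootstrap: re-simulate from the
# fitted model, re-fit, and average the squared prediction errors.

rng = np.random.default_rng(2)
m, nj = 15, 8                          # small areas, units per area
mu, su2, se2 = 10.0, 4.0, 9.0          # "true" values for the simulation

def fit_and_predict(y):
    """Moment-based variance components and shrinkage predictions."""
    ybar = y.mean(axis=1)
    mu_hat = ybar.mean()
    se2_hat = ((y - ybar[:, None]) ** 2).sum() / (m * (nj - 1))
    su2_hat = max(ybar.var(ddof=1) - se2_hat / nj, 0.0)
    gamma = su2_hat / (su2_hat + se2_hat / nj)      # shrinkage factor
    return mu_hat, su2_hat, se2_hat, mu_hat + gamma * (ybar - mu_hat)

y = mu + rng.normal(0, np.sqrt(su2), (m, 1)) + rng.normal(0, np.sqrt(se2), (m, nj))
mu_hat, su2_hat, se2_hat, pred = fit_and_predict(y)

B, sq_err = 200, np.zeros(m)
for _ in range(B):                     # parametric bootstrap replicates
    u_b = rng.normal(0, np.sqrt(su2_hat), (m, 1))
    y_b = mu_hat + u_b + rng.normal(0, np.sqrt(se2_hat), (m, nj))
    *_, pred_b = fit_and_predict(y_b)
    sq_err += (pred_b - (mu_hat + u_b[:, 0])) ** 2
mse_boot = sq_err / B                  # bootstrap MSE per small area
print(mse_boot.round(2))
```

The bootstrap "truth" is the simulated area mean under the fitted model, so the averaged squared errors account for both the shrinkage and the estimation of the variance components.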

```
@phdthesis{diva2:1094130,
author = {Ngaruye, Innocent},
title = {{Contributions to Small Area Estimation:
Using Random Effects Growth Curve Model}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1855}},
year = {2017},
address = {Sweden},
}
```

The family of sets with the Baire property of a topological space *X*, i.e., sets which differ from open sets by meager sets, has several nice properties, like being closed under countable unions and differences. On the other hand, the family of sets without the Baire property of *X* is, in general, not closed under finite unions and intersections. This thesis focuses on the algebraic and set-theoretic aspects of non-empty families of sets without the Baire property. It is composed of an introduction and five papers.

In the first paper, we prove that the family of all subsets of ℝ of the form (*C* \ *M*) ∪ *N*, where *C* is a finite union of Vitali sets and *M*, *N* are meager, is closed under finite unions. It consists of sets without the Baire property and it is invariant under translations of ℝ. The results are extended to the space ℝ^{n} for *n* ≥ 2 and to products of ℝ^{n} with finite powers of the Sorgenfrey line.

In the second paper, we suggest a way to build a countable decomposition of a topological space X which has an open subset homeomorphic to (ℝ^{n}, τ), *n* ≥ 1, where τ is some admissible extension of the Euclidean topology, such that the union of each non-empty proper subfamily of the decomposition does not have the Baire property in X. In the case when X is a separable metrizable manifold of finite dimension, each element of the decomposition can be chosen dense and zero-dimensional.

In the third paper, we develop a theory of semigroups of sets with respect to the union of sets. The theory is applied to Vitali selectors of ℝ to construct diverse abelian semigroups of sets without the Baire property. It is shown that in the family of such semigroups there is no element which contains all others. This leads to a supersemigroup of sets without the Baire property which contains all these semigroups and which is invariant under translations of ℝ. All the considered semigroups are enlarged by the use of meager sets, and the construction is extended to Euclidean spaces ℝ^{n} for *n* ≥ 2.

In the fourth paper, we consider the family *V*_{1}(*Q*) of all finite unions of Vitali selectors of a topological group *G* having a countable dense subgroup *Q*. It is shown that this family is a base for a topology τ(*Q*) on *G*. The space (*G*, τ(*Q*)) is *T*_{1}, not Hausdorff and hyperconnected. It is proved that if *Q*_{1} and *Q*_{2} are countable dense subgroups of *G* such that *Q*_{1} ⊆ *Q*_{2} and the factor group *Q*_{2}/*Q*_{1} is infinite (resp. finite), then τ(*Q*_{1}) ⊈ τ(*Q*_{2}) (resp. τ(*Q*_{1}) ⊆ τ(*Q*_{2})). Nevertheless, we prove that all spaces constructed in this manner are homeomorphic.

In the fifth paper, we investigate the relationship (inclusion or equality) between the families of sets with the Baire property for different topologies on the same underlying set. We also present some applications of the local function defined by the Euclidean topology on ℝ and the ideal of meager sets there.

```
@phdthesis{diva2:1092947,
author = {Nyagahakwa, Venuste},
title = {{Families of Sets Without the Baire Property}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1825}},
year = {2017},
address = {Sweden},
}
```

Since its isolation in 2004, which resulted in the award of the Nobel Prize in 2010, graphene has been the object of intense interest, due to its novel physics and possible applications in electronic devices. Graphene has many properties that distinguish it from usual semiconductors; for example, its low-energy electrons behave like massless particles. To exploit the full potential of this material, one first needs to investigate its fundamental properties, which depend on shape, number of layers, defects and interaction. The goal of this thesis is to perform such an investigation.
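The massless behaviour of the low-energy electrons can be seen directly from the standard nearest-neighbour tight-binding dispersion of monolayer graphene. The sketch below (a textbook model; the hopping t = 2.7 eV and unit lattice constant are conventional illustrative choices) checks that the energy vanishes at the K point and grows linearly around it:

```python
import numpy as np

# Nearest-neighbour tight-binding dispersion of monolayer graphene,
#   E(k) = +/- t |1 + exp(i k.a1) + exp(i k.a2)|,
# with lattice vectors a1, a2 in units of the carbon-carbon distance and
# the conventional hopping t = 2.7 eV. The energy vanishes at the K point
# and grows linearly around it: massless, Dirac-like carriers.

t = 2.7
a1 = np.array([1.5,  np.sqrt(3) / 2])
a2 = np.array([1.5, -np.sqrt(3) / 2])

def energy(k):
    f = 1 + np.exp(1j * (k @ a1)) + np.exp(1j * (k @ a2))
    return t * abs(f)

K = np.array([2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3))])
assert energy(K) < 1e-12               # Dirac point: E(K) = 0

# Linear dispersion near K: E(K + q) ~ (3t/2)|q| for small q
for eps in (1e-3, 1e-4):
    q = np.array([eps, 0.0])
    assert abs(energy(K + q) / (1.5 * t * eps) - 1.0) < 1e-2
```

The slope 3t/2 (in these units) is the Fermi velocity of the Dirac cone; a quadratic band would instead give a ratio scaling with eps.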

In paper I, we study electronic transport in monolayer and bilayer graphene nanoribbons with single and many short-range defects, focusing on the role of the edge termination (zigzag vs armchair). Within the discrete tight-binding model, we perform an analytical analysis of the scattering on a single defect and combine it with numerical calculations based on the Recursive Green's Function technique for many defects. We find that the conductivity of zigzag nanoribbons is practically insensitive to defects situated close to the edges. In contrast, armchair nanoribbons are strongly affected by such defects, even at low concentrations. When the concentration of the defects increases, the difference between the edge terminations disappears. This behaviour is related to the effective boundary condition at the edges, which does not couple valleys for zigzag ribbons but does for armchair ribbons. We also study the Fano resonances.

In the second paper, we consider electron-electron interaction in graphene quantum dots defined by an external electrostatic potential and a high magnetic field. The interaction is introduced on the semi-classical level within the Thomas-Fermi approximation and results in compressible strips, visible in the potential profile. We numerically solve the Dirac equation for our quantum dot and demonstrate that compressible strips lead to the appearance of plateaus in the electron energies as a function of the magnetic field. This analysis is complemented by the last paper (VI), covering a general error estimation of eigenvalues for unbounded linear operators, which can be used for the energy spectrum of the quantum dot considered in paper II. We show that an error estimate for the approximate eigenvalues can be obtained by evaluating the residual for an approximate eigenpair. The interpolation scheme is selected in such a way that the residual can be evaluated analytically.

In papers III, IV and V, we focus on scattering by ultra-low long-range potentials in graphene nanoribbons. Within the continuous Dirac model, we perform an analytical analysis and show that, by considering the scattering of not only the propagating modes but also a few extended modes, we can predict the appearance of a trapped mode with an energy eigenvalue close to one of the thresholds in the continuous spectrum. We prove that trapped modes do not appear outside the threshold, provided the potential is sufficiently small. The approach to the problem differs for zigzag and armchair nanoribbons, as the related systems are non-elliptic and elliptic, respectively; however, the resulting condition for the existence of the trapped mode is analogous in both cases.

```
@phdthesis{diva2:1084758,
author = {Orlof, Anna},
title = {{Quantum scattering and interaction in graphene structures}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1837}},
year = {2017},
address = {Sweden},
}
```

The problem of describing two-dimensional traveling water waves is considered. The water region is of finite depth, and the interface between the region and the air is given by the graph of a function. We assume the flow to be incompressible and neglect the effects of surface tension. However, we assume the flow to be rotational, so that the vorticity distribution is a given function depending on the values of the stream function of the flow. The presence of vorticity increases the complexity of the problem and also leads to a wider class of solutions.

First, we study unidirectional waves with vorticity and verify the Benjamin-Lighthill conjecture for flows whose Bernoulli constant is close to the critical one. For this purpose, it is shown that every wave whose slope is bounded by a fixed constant is either a Stokes wave or a solitary wave. It is proved that the whole set of these waves is uniquely parametrised (up to translation) by the flow force, which varies between its values for the supercritical and subcritical shear flows of constant depth. We also study large-amplitude unidirectional waves, for which we prove bounds on the free-surface profile and on the Bernoulli constant.

Second, we consider small-amplitude waves over flows with counter currents. Such flows admit layers, where the fluid flows in different directions. In this case we prove that the initial nonlinear free-boundary problem can be reduced to a finite-dimensional Hamiltonian system with a stable equilibrium point corresponding to a uniform stream. As an application of this result, we prove the existence of non-symmetric wave profiles. Furthermore, using a different method, we prove the existence of periodic waves with an arbitrary number of crests per period.

```
@phdthesis{diva2:1069608,
author = {Lokharu, Evgeniy},
title = {{Small-amplitude steady water waves with vorticity}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1830}},
year = {2017},
address = {Sweden},
}
```

In this thesis, we use finite difference operators with the Summation-By-Parts (SBP) property and a weak boundary treatment, known as Simultaneous Approximation Terms (SAT), to construct high-order accurate numerical schemes. The SBP property and the SATs make the schemes provably stable. The numerical procedure is general and can be applied to most problems, but we focus on hyperbolic problems such as the shallow water, Euler and wave equations.
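A minimal sketch of the SBP-SAT recipe, assuming the standard second-order operator and the advection equation u_t + u_x = 0 with a weakly imposed inflow condition: with penalty τ = -1 the energy method gives a non-increasing H-norm, so the semi-discrete spectrum must lie in the closed left half-plane, which the code verifies (variable names are illustrative):

```python
import numpy as np

# u_t + u_x = 0 on [0,1], inflow u(0,t) = g(t) imposed weakly:
#   u_t = -D u + tau * H^{-1} e_0 (u_0 - g),  tau = -1.
# The energy method gives d/dt u^T H u = -u_N^2 - u_0^2 <= 0 (for g = 0),
# so every eigenvalue of the semi-discrete operator M has Re <= 0.

n = 40
h = 1.0 / n
H = h * np.eye(n + 1)
H[0, 0] = H[-1, -1] = h / 2            # diagonal SBP norm
D = np.zeros((n + 1, n + 1))
for i in range(1, n):                  # centered interior stencil
    D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h   # one-sided boundary rows
D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h

tau = -1.0
e0 = np.zeros(n + 1)
e0[0] = 1.0
M = -D + tau * np.linalg.inv(H) @ np.outer(e0, e0)   # SAT added weakly

assert np.linalg.eigvals(M).real.max() <= 1e-8       # spectrum in Re <= 0
```

Because the boundary condition enters as a penalty term rather than by overwriting grid values, the energy estimate of the continuous problem carries over verbatim to the discrete one.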

For a well-posed problem and a stable numerical scheme, data must be available at the boundaries of the domain. However, there are many scenarios where additional information is available inside the computational domain. In terms of well-posedness and stability, the additional information is redundant, but it can still be used to improve the performance of the numerical scheme. As a first contribution, we introduce a procedure for implementing additional data using SATs; we call this procedure the Multiple Penalty Technique (MPT).

A stable and accurate scheme augmented with the MPT remains stable and accurate. Moreover, the MPT introduces free parameters that can be used to increase the accuracy, construct absorbing boundary layers, increase the rate of convergence and control the error growth in time.

To model infinite physical domains, one needs transparent artificial boundary conditions, often referred to as Non-Reflecting Boundary Conditions (NRBC). In general, constructing and implementing such boundary conditions is a difficult task that often requires various approximations of the frequency and range of incident angles of the incoming waves. In the second contribution of this thesis, we show how to construct NRBCs by using SBP operators in time.

In the final contribution of this thesis, we investigate long time error bounds for the wave equation on second order form. Upper bounds for the spatial and temporal derivatives of the error can be obtained, but not for the actual error. The theoretical results indicate that the error grows linearly in time. However, the numerical experiments show that the error is in fact bounded, and consequently that the derived error bounds are probably suboptimal.

```
@phdthesis{diva2:1068069,
author = {Frenander, Hannes},
title = {{High-order finite difference approximations for hyperbolic problems:
multiple penalties and non-reflecting boundary conditions}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1824}},
year = {2017},
address = {Sweden},
}
```

This thesis develops numerical methods for the simulation of wave propagation in solids containing faults and fluid-filled fractures. These techniques have applications in earthquake hazard analysis, seismic imaging of reservoirs, and volcano seismology. A central component of this work is the coupling of mechanical systems. This aspect involves the coupling of both ordinary differential equations (ODEs) and partial differential equations (PDEs) along curved interfaces. All of these problems satisfy a mechanical energy balance, which is mimicked by the numerical scheme using high-order accurate difference approximations that satisfy the principle of summation by parts, and by weakly enforcing the coupling conditions.

The first part of the thesis considers the simulation of dynamic earthquake ruptures along non-planar fault geometries and the simulation of seismic wave radiation from earthquakes, when the earthquakes are idealized as point moment tensor sources. The dynamic earthquake rupture process is simulated by coupling the elastic wave equation at a fault interface to nonlinear ODEs that describe the fault mechanics. The fault geometry is complex and treated by combining structured and unstructured grid techniques. In other applications, when the earthquake source dimension is smaller than wavelengths of interest, the earthquake can be accurately described by a point moment tensor source localized at a single point. The numerical challenge is to discretize the point source with high-order accuracy and without producing spurious oscillations.

The second part of the thesis presents a numerical method for wave propagation in and around fluid-filled fractures. This problem requires the coupling of the elastic wave equation to a fluid inside curved and branching fractures in the solid. The fluid model is a lubrication approximation that incorporates fluid inertia, compressibility, and viscosity. The fracture geometry can have local irregularities such as constrictions and tapered tips. The numerical method discretizes the fracture geometry by using curvilinear multiblock grids and applies implicit-explicit time stepping to isolate and overcome stiffness arising in the semi-discrete equations from viscous diffusion terms, fluid compressibility, and the particular enforcement of the fluid-solid coupling conditions. This numerical method is applied to study the interaction of waves in a fracture-conduit system. A methodology to constrain fracture geometry for oil and gas (hydraulic fracturing) and volcano seismology applications is proposed.

The third part of the thesis extends the summation-by-parts methodology to staggered grids. This extension reduces numerical dispersion and enables the formulation of stable and high-order accurate multiblock discretizations for wave equations in first order form on staggered grids. Finally, the summation-by-parts methodology on staggered grids is further extended to second derivatives and used for the treatment of coordinate singularities in axisymmetric wave propagation.

```
@phdthesis{diva2:1046328,
author = {O'Reilly, Ossian},
title = {{Numerical methods for wave propagation in solids containing faults and fluid-filled fractures}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1806}},
year = {2016},
address = {Sweden},
}
```

The taut string problem concerns finding the function with the shortest graph length, i.e. the taut string, in a certain set Ω of continuous piecewise linear functions. It has appeared in a broad range of applications including statistics, image processing and economics. As it turns out, the taut string has, besides minimal graph length, also minimal energy and minimal total variation among the functions in Ω.
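The minimal-energy property gives a simple, if inefficient, way to compute a discrete taut string: minimize the sum of squared increments over the tube, which projected Gauss-Seidel sweeps solve by repeatedly replacing each interior value with the clipped average of its neighbours. A sketch on an illustrative five-point tube whose solution can be checked by hand (not one of the fast taut string algorithms used in practice):

```python
import numpy as np

# Discrete taut string in a tube [lo, hi] with pinned endpoints
# (lo == hi at both ends): minimize sum((y[i+1]-y[i])**2) subject to
# lo <= y <= hi. Projected Gauss-Seidel: each interior value becomes
# the average of its neighbours, clipped back into the tube.

def taut_string(lo, hi, sweeps=1000):
    y = 0.5 * (lo + hi)
    for _ in range(sweeps):
        for i in range(1, len(y) - 1):
            y[i] = np.clip(0.5 * (y[i - 1] + y[i + 1]), lo[i], hi[i])
    return y

# A tube that forces one bend: endpoints at 0, lower bound 0.5 midway.
lo = np.array([0.0, -1.0, 0.5, -1.0, 0.0])
hi = np.array([0.0,  1.0, 1.0,  1.0, 0.0])
y = taut_string(lo, hi)
assert np.allclose(y, [0.0, 0.25, 0.5, 0.25, 0.0], atol=1e-6)
```

The clipped-average fixed point mirrors the continuous picture: the taut string is linear wherever it does not touch the tube and bends only at contact points.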

The theory of real interpolation is based on Peetre’s *K*-functional. In terms of the *K*-functional, we introduce invariant K-minimal sets and show a close connection between taut strings and invariant *K*-minimal sets.

This insight leads to new problems in interpolation theory, makes it possible to generalize the notion of taut strings and provides new applications.

The thesis consists of four papers. In paper I, connections between invariant *K*-minimal sets and various forms of taut strings are investigated. It is shown that the set Ω′ of the derivatives of the functions in Ω can be interpreted as an invariant *K*-minimal set for the Banach couple (ℓ^{1}, ℓ^{∞}) on ℝ^{n}. In particular, the derivative of the taut string has minimal *K*-functional in Ω′. A characterization of all bounded, closed and convex sets in ℝ^{n} that are invariant *K*-minimal for (ℓ^{1}, ℓ^{∞}) is established.

Paper II presents examples of invariant *K*-minimal sets in ℝ^{n} for (ℓ^{1}, ℓ^{∞}). A convergent algorithm for computing the element with minimal *K*-functional in such sets is given. In the infinite-dimensional setting, a sufficient condition for a set to be invariant *K*-minimal with respect to the Banach couple (*L*^{1}([0,1]^{m}), *L*^{∞}([0,1]^{m})) is established. With this condition at hand, different examples of invariant *K*-minimal sets for this couple are constructed.

Paper III considers an application of taut strings to buffered real-time communication systems. The optimal buffer management strategy, with respect to minimization of a class of convex distortion functions, is characterized in terms of a taut string. Further, an algorithm for computing the optimal buffer management strategy is provided.

In paper IV, infinite-dimensional taut strings are investigated in connection with the Wiener process. It is shown that if the taut string is constrained to stay within distance *r* > 0 from the trajectory of a Wiener process, then its average energy per unit of time converges in the long run to a constant *C*^{2}/*r*^{2}, where *C* ∈ (0, ∞). While the exact value of *C* is unknown, the numerical estimate *C* ≈ 0.63 is obtained through simulations on a supercomputer. These simulations are based on a certain algorithm for constructing finite-dimensional taut strings.

```
@phdthesis{diva2:1045576,
author = {Setterqvist, Eric},
title = {{Taut Strings and Real Interpolation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1801}},
year = {2016},
address = {Sweden},
}
```

We construct stable, accurate and efficient numerical schemes for wave propagation and flow problems posed on spatial geometries that are moving, deforming, erroneously described or non-simply connected. The schemes are on Summation-by-Parts (SBP) form, combined with the Simultaneous Approximation Term (SAT) technique for imposing initial and boundary conditions. The main analytical tool is the energy method, by which well-posedness, stability and conservation are investigated. To handle the deforming domains, time-dependent coordinate transformations are used to map the problem to fixed geometries.

The discretization is performed in such a way that the Numerical Geometric Conservation Law (NGCL) is satisfied. Additionally, even though the schemes are constructed on fixed domains, time-dependent penalty formulations are necessary, due to the originally moving boundaries. We show how to satisfy the NGCL and present an automatic formulation for the penalty operators, such that the correct number of boundary conditions is imposed, when and where required.
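The role of the NGCL can be illustrated on a periodic toy problem: advection on a moving grid, transformed to a fixed domain. If the discrete metric J is evolved with the same difference operator used in the scheme, a constant (free-stream) state is preserved to machine precision. The setup below is a hedged sketch with illustrative parameters, not the thesis's formulation:

```python
import numpy as np

# Advection u_t + a u_x = 0 on the moving periodic grid
# x(xi, t) = xi + 0.1 sin(xi - t). On the fixed xi-domain it becomes
#   (J u)_t + ((a - x_t) u)_xi = 0,  with metric J = x_xi.
# The NGCL requires the discrete J to satisfy J_t = D x_t with the SAME
# operator D; then a constant state stays exactly constant.

n = 64
xi = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
h = xi[1] - xi[0]
D = np.zeros((n, n))
for i in range(n):                            # periodic centered differences
    D[i, (i + 1) % n], D[i, (i - 1) % n] = 0.5 / h, -0.5 / h

a = 1.0
xt = lambda t: -0.1 * np.cos(xi - t)          # grid velocity x_t(xi, t)

def rhs(t, state):
    W, J = state                              # W = J*u, J = discrete metric
    u = W / J
    return np.array([-D @ ((a - xt(t)) * u),  # transformed advection
                     D @ xt(t)])              # NGCL: J_t = D x_t

J0 = 1.0 + 0.1 * np.cos(xi)                   # exact metric at t = 0
state, t, dt = np.array([1.0 * J0, J0]), 0.0, 0.01
for _ in range(500):                          # classic RK4 time stepping
    k1 = rhs(t, state)
    k2 = rhs(t + dt / 2, state + dt / 2 * k1)
    k3 = rhs(t + dt / 2, state + dt / 2 * k2)
    k4 = rhs(t + dt, state + dt * k3)
    state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

u = state[0] / state[1]
assert np.abs(u - 1.0).max() < 1e-10          # free-stream preserved
```

Evolving J by an independent formula (for example, the analytic metric) would break the discrete cancellation and pollute the constant state with grid-motion errors.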

For problems posed on erroneously described geometries, we investigate how the accuracy of the solution is affected. It is shown that the inaccurate geometry descriptions may lead to wrong wave speeds, a misplacement of the boundary condition, the wrong boundary operator or a mismatch of data. Next, the SBP-SAT technique is extended to time-dependent coupling procedures for deforming interfaces in hyperbolic problems. We prove conservation and stability and show how to formulate the penalty operators such that the coupling procedure is automatically adjusted to the variations of the interface location while the NGCL is preserved.

Moreover, dual consistent SBP-SAT schemes for the linearized incompressible Navier-Stokes equations posed on deforming domains are investigated. To simplify the derivations of the dual problem and incorporate the motions of the boundaries, the second order formulation is reduced to first order and the problem is transformed to a fixed domain. We prove energy stability and dual consistency. It is shown that the solution as well as the divergence of the solution converge with the design order of accuracy, and that functionals of the solution are superconverging.

Finally, initial boundary value problems posed on non-simply connected spatial domains are investigated. The new formulation increases the accuracy of the scheme by minimizing the use of multi-block couplings. In order to show stability, the spectrum of the semi-discrete SBP-SAT formulation is studied. We show that the eigenvalues have the correct sign, which implies stability, in combination with the SBP-SAT technique in time.

```
@phdthesis{diva2:956877,
author = {Nikkar, Samira},
title = {{Stable High Order Finite Difference Methods for Wave Propagation and Flow Problems on Deforming Domains}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1774}},
year = {2016},
address = {Sweden},
}
```

It is widely recognized that various biotic and abiotic factors cause changes in the size of a population and its age distribution. Population structure, intra-specific competition, temporal variability and spatial heterogeneity are identified as the most important factors that, alone or in combination, influence population dynamics. Despite being well known, these factors are difficult to study, both theoretically and empirically. However, in an increasingly variable world, the permanence of a growing number of species is threatened by climate change, habitat fragmentation and reduced habitat quality. For purposes of species conservation and land management, it is crucially important to have a good analysis of population dynamics, which will increase our theoretical knowledge and provide practical guidelines.

One way to address the problem of population dynamics is to use mathematical models. The choice of a model depends on what we want to study or what we aim to achieve. For an extensive theoretical study of population processes and for obtaining qualitative results about population growth or decline, analytical models with various levels of complexity are used. The competing interests of realism and solvability of the model are always present. This means that, on the one hand, we always aim to make a model that truthfully reflects reality, while on the other hand, we need to keep the model mathematically solvable. This prompts us to carefully choose the most prominent ecological factors relevant to the problem at hand and to incorporate them into the model. Ideally, the results give new insights into population processes and the complex interactions between the mentioned factors and population dynamics.

The objective of the thesis is to formulate, analyze, and apply various mathematical models of population dynamics. We begin with a classical linear age-structured model and gradually add temporal variability, intra-specific competition and spatial heterogeneity. In this way, every subsequent model is more realistic and complex than the previous one. We prove existence and uniqueness of a nonnegative solution to each boundary-initial problem, and continue with an investigation of the large-time behavior of the solution.
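The classical linear age-structured model that serves as the starting point is usually written in McKendrick–von Foerster form; the following is a standard sketch, with symbols assumed rather than taken from the thesis:

```latex
\frac{\partial n}{\partial t} + \frac{\partial n}{\partial a} = -\mu(a)\,n(a,t),
\qquad
n(0,t) = \int_0^{\infty} \beta(a)\,n(a,t)\,\mathrm{d}a,
\qquad
n(a,0) = n_0(a),
```

where n(a,t) is the age density, μ the age-specific mortality and β the fecundity; temporal variability and density dependence enter by letting μ and β depend also on time and on the total population.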

In ecological terms, we establish conditions under which a population can persist in a certain environment. Since our aim is a qualitative analysis of a solution, we often examine upper and lower bounds of a solution. Their importance lies in the fact that they are obtained analytically and that the parameters in their expressions have biological meaning. Thus, instead of analyzing an exact solution (which often proves to be difficult), we analyze the corresponding upper and lower solutions.

We apply our models to demonstrate the influence of seasonal changes (or some other periodic temporal variation) and the spatial structure of the habitat on population persistence. This is particularly important in explaining the behavior of migratory birds, or of populations that inhabit several patches, some of which are of low quality. Our results extend previously obtained results in some aspects and point out that all factors (age structure, density dependence, spatio-temporal variability) need to be considered when setting up a population model and predicting population growth.

```
@phdthesis{diva2:956827,
author = {Radosavljevic, Sonja},
title = {{Permanence of age-structured populations in a spatio-temporal variable environment}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1781}},
year = {2016},
address = {Sweden},
}
```

The focus of this thesis is to characterize half–exact coherent functors over principal ideal domains (PIDs) and Dedekind domains. Ever since they were discovered, coherent functors have been useful in the study of various mathematical objects. We aim to explore them a little further in this thesis.

We first give a review of the general categorical notions relevant to the characterization. We also review the functors Ext(*M*,−) and Tor(*M*,−) on the category of A–modules, where A is a commutative ring and M is an A–module.

With the assumption that A is a commutative noetherian ring, we introduce coherent functors defined on the category of finitely generated A–modules. It is then shown in the paper that any half–exact coherent functor over a PID, and more generally over a Dedekind domain, arises from a complex of projective modules.

```
@phdthesis{diva2:930732,
author = {Banda, Adson},
title = {{Half--Exact Coherent Functors over PIDs and Dedekind Domains}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1752}},
year = {2016},
address = {Sweden},
}
```

Column-oriented models are today common in the field of discrete optimization, and there is an increasing interest in using such models as a basis for heuristic solution methods. The common theme of this work is to explore some possibilities to integrate heuristic principles and column-oriented models for discrete optimization problems.

In the first paper, we consider a resource allocation problem for cellular systems. We propose a strong column-oriented formulation and a corresponding column generation method, as well as an enhanced column generation scheme for this problem. The enhanced scheme is composed of a stabilization technique, an approximate column generation principle, and, for finding integer solutions, a heuristic that is embedded in the column generation scheme.

The second paper provides a new and strong convexified formulation of the fixed charge transportation problem. This formulation is obtained by integrating the concepts of Lagrangian decomposition and column generation. It is shown both theoretically and practically that this integration yields a formulation which is stronger than three other convexified formulations of the problem.

```
@phdthesis{diva2:921971,
author = {Zhao, Yixin},
title = {{On the Integration of Heuristics with Column-Oriented Models for Discrete Optimization}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1764}},
year = {2016},
address = {Sweden},
}
```

This thesis develops the methodology for solving initial boundary value problems with the use of summation-by-parts discretizations. The combination of high orders of accuracy and a systematic approach to construct provably stable boundary and interface procedures makes this methodology especially suitable for scientific computations with high demands on efficiency and robustness. Most classes of high order methods can be applied in a way that satisfies a summation-by-parts rule. These include, but are not limited to, finite difference, spectral and nodal discontinuous Galerkin methods.
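As an illustration of the summation-by-parts rule referred to above, here is a minimal numerical sketch (using the standard second-order diagonal-norm SBP operator, not code from the thesis) verifying the discrete integration-by-parts identity:

```python
import numpy as np

# Standard second-order diagonal-norm SBP first-derivative operator D = P^{-1} Q,
# where P is a quadrature matrix and Q + Q^T = diag(-1, 0, ..., 0, 1).
n, h = 21, 1.0 / 20
p = np.full(n, h)
p[0] = p[-1] = h / 2
P = np.diag(p)
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = 0.5
    Q[i + 1, i] = -0.5
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(P, Q)

x = np.linspace(0.0, 1.0, n)
u, v = np.sin(x), np.cos(x)

# Discrete integration by parts: u^T P (D v) + (D u)^T P v = u_N v_N - u_0 v_0,
# mimicking the continuous rule exactly (to rounding error).
lhs = u @ P @ (D @ v) + (D @ u) @ P @ v
rhs = u[-1] * v[-1] - u[0] * v[0]
assert abs(lhs - rhs) < 1e-12
```

The identity holds to machine precision because P D = Q and Q + Qᵀ contributes only the boundary terms; this is exactly the mechanism behind the energy estimates mentioned above.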

In the first part of this thesis, the summation-by-parts methodology is extended to the time domain, enabling fully discrete formulations with superior stability properties. The resulting time discretization technique is closely related to fully implicit Runge-Kutta methods, and may alternatively be formulated as either a global method or as a family of multi-stage methods. Both first and second order derivatives in time are considered, in the latter case also including mixed initial and boundary conditions (i.e. conditions involving derivatives in both space and time).

The second part of the thesis deals with summation-by-parts discretizations on multi-block and hybrid meshes. A new formulation of general multi-block couplings in several dimensions is presented and analyzed. It collects all multi-block, multi-element and hybrid summation-by-parts schemes into a single compact framework. The new framework includes a generalized description of non-conforming interfaces based on so called summation-by-parts preserving interpolation operators, for which a new theoretical accuracy result is presented.

```
@phdthesis{diva2:912667,
author = {Lundquist, Tomas},
title = {{High order summation-by-parts methods in time and space}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1740}},
year = {2016},
address = {Sweden},
}
```

This thesis is devoted to the group classification of linear Schrödinger equations. The study of Lie symmetries of such equations was initiated more than 40 years ago using the classical Lie infinitesimal method for specific types of real-valued potentials. In the first papers on this subject, most attention was paid to dynamical transformations, which necessarily change the time and space variables. This is why phase translations were missed. Later, the study of Lie symmetries was extended to nonlinear Schrödinger equations. At the same time, the group classification problem for the class of linear Schrödinger equations with complex potentials remains unsolved.

The aim of the present thesis is to carry out the group classification for the class of linear Schrödinger equations with complex potentials. These potentials are nowadays important in quantum mechanics, scattering theory, condensed matter physics, quantum field theory, optics, electromagnetics and so forth. We exhaustively solve the group classification problem for space dimensions one and two.
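In one space dimension, this class can be written in the standard form (with ψ the wave function and V a complex-valued potential; the notation is assumed, not quoted from the thesis):

```latex
i\psi_t + \psi_{xx} + V(t,x)\,\psi = 0, \qquad V:\ \mathbb{R}^2 \to \mathbb{C}.
```

The group classification problem then asks how the Lie symmetry algebra of this equation depends on the choice of V.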

The thesis comprises two parts. The first part is a brief review of Lie symmetries and group classification of differential equations. Next, we outline the equivalence transformations in a class of differential equations, the normalization properties of such a class, and the algebraic method for group classification of differential equations.

The second part consists of two research papers. In the first paper, the algebraic method is applied to solve the group classification problem for (1+1)-dimensional linear Schrödinger equations with complex potentials. With this technique, the problem of the group classification of the class under study is reduced to the classification of certain subalgebras of its equivalence algebra. As a result, we find that the inequivalent cases are exhausted by eight families of potentials and we give the corresponding maximal Lie invariance algebras.

In the second paper we carry out the preliminary symmetry analysis of the class of linear Schrödinger equations with complex potentials in the multi-dimensional case. Using the direct method, we find the equivalence groupoid and the equivalence group of this class. Due to the multi-dimensionality, the results of the computations are quite different from the ones presented in Paper I. We obtain the complete group classification of (1+2)-dimensional linear Schrödinger equations with complex potentials.

```
@phdthesis{diva2:903188,
author = {Kurujyibwami, C\'{e}lestin},
title = {{Group classification of linear Schrödinger equations by the algebraic method}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1743}},
year = {2016},
address = {Sweden},
}
```

The aim of this work is to present new contributions to the theory of peaked solitons. The thesis consists of two papers, which are named “New solutions with peakon creation in the Camassa–Holm and Novikov equations” and “Peakon-antipeakon solutions of the Novikov equation”, respectively.

In Paper I, a new kind of peakon-like solution to the Novikov equation is discovered, by transforming the one-peakon solution via a Lie symmetry transformation. This new kind of solution is unbounded as *x* → +∞ and/or *x* → –∞, and has a peak, though only for some interval of time. Thus, the solutions exhibit creation and/or destruction of peaks. We make sure that the peakon-like function is still a solution in the weak sense for those times where the function is non-differentiable. We find that similar solutions, with peaks living only for some interval in time, are valid weak solutions to the Camassa–Holm equation, though it appears that these cannot be obtained via a symmetry transformation.
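For reference, the two equations and their single-peakon solutions take the following well-known forms (standard in the literature, not quoted from the papers):

```latex
\begin{aligned}
u_t - u_{txx} + 3uu_x &= 2u_xu_{xx} + uu_{xxx} &&\text{(Camassa--Holm)}, &
u(x,t) &= c\,e^{-|x-ct|},\\
u_t - u_{txx} + 4u^2u_x &= 3uu_xu_{xx} + u^2u_{xxx} &&\text{(Novikov)}, &
u(x,t) &= \pm\sqrt{c}\,e^{-|x-ct|}.
\end{aligned}
```

The peaked profile e^{−|x−ct|} is non-differentiable at its crest, which is why solutions must be interpreted in the weak sense.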

In Paper II we investigate multipeakon solutions of the Novikov equation, in particular interactions between peakons with positive amplitude and antipeakons with negative amplitude. The solutions are given by explicit formulas, which makes it possible to analyze them in great detail. As in the Camassa–Holm case, the slope of the wave develops a singularity when a peakon collides with an antipeakon, while the wave itself remains continuous and can be continued past the collision to provide a global weak solution. However, the Novikov equation differs in several interesting ways from other peakon equations, especially regarding asymptotics for large times. For example, peakons and antipeakons both travel to the right, making it possible for several peakons and antipeakons to travel together with the same speed and collide infinitely many times. Such clusters may exhibit very intricate periodic or quasi-periodic interactions. It is also possible for peakons to have the same asymptotic velocity but separate at a logarithmic rate; this phenomenon is associated with coinciding eigenvalues in the spectral problem coming from the Lax pair, and requires nontrivial modifications to the previously known solution formulas which assume that all eigenvalues are simple.

```
@phdthesis{diva2:897568,
author = {Kardell, Marcus},
title = {{New Phenomena in the World of Peaked Solitons}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1737}},
year = {2016},
address = {Sweden},
}
```

This thesis consists of two papers within two different areas of combinatorics.

Ramsey theory is a classic topic in graph theory, and Paper A deals with two of its most fundamental problems: to compute Ramsey numbers and to characterise critical graphs. More precisely, we study generalised Ramsey numbers for two sets Γ_{1} and Γ_{2} of cycles. We determine, in particular, all generalised Ramsey numbers R(Γ_{1}, Γ_{2}) such that Γ_{1} or Γ_{2} contains a cycle of length at most 6, or the shortest cycle in each set is even. This generalises previous results of Erdös, Faudree, Rosta, Rousseau, and Schelp. Furthermore, we give a conjecture for the general case. We also characterise many (Γ_{1}, Γ_{2})-critical graphs. As special cases, we obtain complete characterisations of all (C_{n},C_{3})-critical graphs for n ≥ 5, and all (C_{n},C_{5})-critical graphs for n ≥ 6.
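In the usual convention, the generalised Ramsey number for two sets of graphs is defined as follows (standard definition, assumed here rather than quoted from the paper):

```latex
R(\Gamma_1,\Gamma_2) = \min\{\, n : \text{every red/blue edge-colouring of } K_n
\text{ yields a red subgraph in } \Gamma_1 \text{ or a blue subgraph in } \Gamma_2 \,\}.
```

Taking Γ₁ and Γ₂ to be single graphs recovers the classical Ramsey numbers.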

In Paper B, we study the combinatorics of certain partially ordered sets. These posets are unions of conjugacy classes of involutions in the symmetric group S_{n}, with the order induced by the Bruhat order on S_{n}. We obtain a complete characterisation of the posets that are graded. In particular, we prove that the set of involutions with exactly one fixed point is graded, which settles a conjecture of Hultman in the affirmative. When the posets are graded, we give their rank functions. We also give a short, new proof of the EL-shellability of the set of fixed-point-free involutions, recently proved by Can, Cherniavsky, and Twelbeck.

```
@phdthesis{diva2:876270,
author = {Hansson, Mikael},
title = {{Generalised Ramsey numbers and Bruhat order on involutions}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1734}},
year = {2015},
address = {Sweden},
}
```

This thesis is about high–dimensional problems considered under the so-called Kolmogorov condition. Hence, we consider research questions related to random matrices with p rows (corresponding to the parameters) and n columns (corresponding to the sample size), where *p* > *n*, assuming that the ratio *p*/*n* converges to a positive constant as the number of parameters and the sample size increase.

We focus on the eigenvalue distribution of the considered matrices, since it is a well-known information-carrying object. A spectral distribution with compact support is fully characterized by its moments, i.e., by the normalized expectation of the trace of powers of the matrices. Moreover, such an expectation can be seen as a free moment in the non-commutative space of random matrices of size *p* × *p* equipped with the functional τ(·) = E[Tr(·)]/p. Here, the connections with free probability theory arise. In relation to that field, we investigate the closed form of the asymptotic spectral distribution for a sum of quadratic forms. Moreover, we present a free cumulant–moment relation formula that is based on summation over partitions of a number. This formula is an alternative to the free cumulant–moment relation given through non-crossing partitions of a set.
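The non-crossing-partition form of the free cumulant–moment relation alluded to above is the standard one from free probability:

```latex
m_n \;=\; \sum_{\pi \in NC(n)} \;\prod_{V \in \pi} \kappa_{|V|},
```

where NC(n) denotes the lattice of non-crossing partitions of {1, …, n} and κ_k are the free cumulants.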

Furthermore, we investigate the normalized expectation of the trace of powers of the matrices and derive, using differentiation with respect to a symmetric matrix, a recursive formula for that expectation. This allows us to re-establish the moments of the Marchenko–Pastur distribution, and hence the recursive relation for the Catalan numbers.
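The Catalan connection can be checked numerically; here is a small sketch (using the standard closed form for the Marchenko–Pastur moments, not the thesis' recursion):

```python
from math import comb

# Catalan numbers from the convolution recursion C_{n+1} = sum_{i=0}^{n} C_i C_{n-i}
C = [1]
for n in range(8):
    C.append(sum(C[i] * C[n - i] for i in range(n + 1)))

def mp_moment(k, c):
    # k-th moment of the Marchenko-Pastur law with ratio c (standard closed form)
    return sum(c**r / (r + 1) * comb(k, r) * comb(k - 1, r) for r in range(k))

# For ratio c = 1, the Marchenko-Pastur moments are exactly the Catalan numbers.
for k in range(1, 9):
    assert abs(mp_moment(k, 1.0) - C[k]) < 1e-9
```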

In this thesis we also prove that a proposed statistic is a consistent estimator of the corresponding asymptotic spectral moment, and we consider a related sequence of random variables which is proven to be normally distributed. Moreover, we propose, based on these random variables, a test for the identity of the covariance matrix using a goodness-of-fit approach. The test performs very well in terms of power compared to some presented alternatives, for both high-dimensional data (*p* > *n*) and multivariate data (*p* ≤ *n*).

```
@phdthesis{diva2:868746,
author = {Pielaszkiewicz, Jolanta Maria},
title = {{Contributions to High--Dimensional Analysis under Kolmogorov Condition}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1724}},
year = {2015},
address = {Sweden},
}
```

This licentiate thesis is concerned with applying the Japanese problem solving oriented (PSO) teaching approach to Swedish mathematics classrooms. The overall aim of my research project is to describe and investigate the viability of PSO as a design tool for teaching mathematics. The PSO approach is a variation of a more general Japanese teaching paradigm referred to as “structured problem solving”. These teaching methods aim to stimulate the process of students’ mathematical thinking and focus on enhancing the students’ attitudes towards engaging in mathematical activities. The empirical data are collected using interviews, observations and video recordings over a period of nine months, following two Swedish lower secondary school classes. Chevallard’s anthropological framework is used to analyse which mathematical knowledge is exposed in the original Japanese lesson plans and in the lessons observed in the classrooms. In addition, Brousseau’s framework of learning mathematics is applied to analyse the perception of individual students and particular situations in the classroom.

The results show that the PSO based lesson plans induce a complex body of mathematical knowledge, where different areas of mathematics are linked. It is found that the discrepancy between the Japanese and Swedish curricula causes some limitations for the adaptation of the lesson plans, especially in the area of geometry. Four distinct aspects of the PSO approach supporting the teaching of mathematics are presented.

```
@phdthesis{diva2:864120,
author = {Asami-Johansson, Yukiko},
title = {{Designing Mathematics Lessons Using Japanese Problem Solving Oriented Lesson Structure:
A Swedish case study}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1723}},
year = {2015},
address = {Sweden},
}
```

This thesis consists of two papers which focus on a particular diffusion type Dirichlet form, defined in terms of the basis of the Cameron–Martin space, H, consisting of the Schauder functions, and the Wiener measure ν.

In Paper I, we let the coefficients of the form vary over the space of Wiener trajectories in such a way that the diffusion operator A is almost everywhere an unbounded operator on the Cameron–Martin space. In addition, we put a weight function on the Wiener measure and show that, under these changes of the reference measure, the Malliavin derivative and divergence are closable operators with certain closable inverses. It is then shown that, under certain conditions and these changes of reference measure, the Dirichlet form is quasi-regular. This is done first in the classical Wiener space, and then the results are transferred to the Wiener space over a Riemannian manifold.

Paper II focuses on the case when the coefficients form a non-decreasing sequence of real numbers. The process X associated to the form is then an infinite dimensional Ornstein-Uhlenbeck process. In this case we show that the distributions of a sequence of certain finite dimensional Ornstein-Uhlenbeck processes converge weakly to the distribution of the infinite dimensional Ornstein-Uhlenbeck process. We also investigate the quadratic variation of this process, both in the classical sense and in the recent framework of stochastic calculus via regularization. Since the process is Banach space valued, the tensor quadratic variation is an appropriate tool to establish the Itô formula for the infinite dimensional Ornstein-Uhlenbeck process X. Sufficient conditions are presented for the scalar as well as the tensor quadratic variation to exist.

```
@phdthesis{diva2:857512,
author = {Karlsson, John},
title = {{A class of infinite dimensional stochastic processes with unbounded diffusion and its associated Dirichlet forms}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1699}},
year = {2015},
address = {Sweden},
}
```

Modern portfolio theory is about determining how to distribute capital among available securities such that, for a given level of risk, the expected return is maximized, or for a given level of return, the associated risk is minimized. In the pioneering work of Markowitz in 1952, variance was used as a measure of risk, which gave rise to the well-known mean-variance portfolio optimization model. Although other mean-risk models have been proposed in the literature, the mean-variance model continues to be the backbone of modern portfolio theory and it is still commonly applied. The scope of this thesis is a solution technique for the mean-variance model in which eigendecomposition of the covariance matrix is performed.

The first part of the thesis is a review of the mean-risk models that have been suggested in the literature. For each of them, the properties of the model are discussed and the solution methods are presented, as well as some insight into possible areas of future research.

The second part of the thesis is two research papers. In the first of these, a solution technique for solving the mean-variance problem is proposed. This technique involves making an eigendecomposition of the covariance matrix and solving an approximate problem that includes only relatively few eigenvalues and corresponding eigenvectors. The method gives strong bounds on the exact solution in a reasonable amount of computing time, and can thus be used to solve large-scale mean-variance problems.

The second paper studies the mean-variance model with cardinality constraints, that is, with a restricted number of securities included in the portfolio, and the solution technique from the first paper is extended to solve such problems. Near-optimal solutions to large-scale cardinality constrained mean-variance portfolio optimization problems are obtained within a reasonable amount of computing time, compared to the time required by a commercial general-purpose solver.
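The bounding idea behind the truncated eigendecomposition can be illustrated on toy data (randomly generated here; the thesis' actual bounds are tied to the full optimization problem and are sharper): keeping only the largest eigenpairs underestimates the portfolio variance, while padding the discarded directions with the largest discarded eigenvalue overestimates it.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 30, 5                        # toy number of securities / retained eigenpairs
A = rng.standard_normal((m, m))
Sigma = A @ A.T / m                 # toy covariance matrix

lam, V = np.linalg.eigh(Sigma)      # eigenvalues in ascending order
x = rng.dirichlet(np.ones(m))       # a feasible portfolio (weights sum to one)

# Exact variance decomposes over eigenpairs: x' Sigma x = sum_i lam_i (v_i' x)^2
full = x @ Sigma @ x
proj = (V[:, -k:].T @ x) ** 2
lower = np.sum(lam[-k:] * proj)                      # keep the k largest eigenpairs
upper = lower + lam[-k - 1] * (x @ x - proj.sum())   # pad the rest with lam_{m-k}

assert lower - 1e-12 <= full <= upper + 1e-12
```

Because only k ≪ m eigenpairs enter the approximate problem, the resulting quadratic form is much cheaper to handle, which is what makes the approach attractive for large-scale instances.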

```
@phdthesis{diva2:814636,
author = {Mayambala, Fred},
title = {{Mean-Variance Portfolio Optimization:
Eigendecomposition-Based Methods}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1717}},
year = {2015},
address = {Sweden},
}
```

The main idea of the thesis is to develop new connections between the theory of real interpolation and applications. Near and exact minimizers for the E–, K– and L–functionals of the theory of real interpolation are very important in applications connected to regularization of inverse problems such as image processing. The problem which appears is how to characterize and construct these minimizers. These exact minimizers, referred to as optimal decompositions in the thesis, have certain extremal properties that we completely express and characterize in terms of duality. Our characterization generalizes a known characterization for a particular Banach couple. The characterization presented in the thesis also makes it possible to understand the geometrical meaning of optimal decomposition in some important particular cases and gives a possibility to construct them. One of the most famous models in image processing is the total variation regularization published by Rudin, Osher and Fatemi. We propose a new fast algorithm to find the exact minimizer for this model. The optimal decompositions mentioned also have some connections to optimization problems, which are pointed out. The thesis is based on results that have been presented at international conferences and published in five papers.

In Paper 1, we characterize optimal decomposition for the E–, K– and L_{p0,p1}–functionals. We also present a geometrical interpretation of optimal decomposition for the L_{p,1}–functional for the couple (ℓ^{p}, X) on R^{n}. The characterization presented is useful in the sense that it gives insights into the construction of these minimizers.

The characterization mentioned in Paper 1 is based on optimal decomposition for infimal convolution. The operation of infimal convolution is a very important and non-trivial tool in functional analysis and is also well known within the context of convex analysis. The L–, K– and E–functionals can be regarded as an infimal convolution of two well-defined functions. Unfortunately, tools from convex analysis cannot be applied in a straightforward way in this context of couples of spaces. The most important requirement that an infimal convolution must satisfy for a decomposition to be optimal is subdifferentiability. In Paper 2, we use an approach based on the famous Attouch–Brezis theorem to prove subdifferentiability of infimal convolution on Banach couples.
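For concreteness (standard definitions, not notation from the thesis): the infimal convolution of two functions φ₀, φ₁ is

```latex
(\varphi_0 \,\square\, \varphi_1)(x) = \inf_{x = x_0 + x_1} \bigl(\varphi_0(x_0) + \varphi_1(x_1)\bigr),
```

and, for example, the K-functional K(t, x; X₀, X₁) = inf_{x = x₀ + x₁} ( ‖x₀‖_{X₀} + t‖x₁‖_{X₁} ) is exactly of this form, with the optimal decomposition being the pair (x₀, x₁) attaining the infimum.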

In Paper 3, we apply results from Paper 1 to the well-known Rudin–Osher–Fatemi (ROF) image denoising model on a general finite directed graph. We define the space BV of functions of bounded variation on the graph and show that the unit ball of its dual space can be described as the image of the unit ball of the space ℓ^{∞} on the graph under a divergence operator. Based on this result, we propose a new fast algorithm to find the exact minimizer for the ROF model. A proof of convergence of the algorithm is presented and its performance on image denoising test examples is illustrated.
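The ROF model referred to here is the standard total variation denoising problem: given noisy data f, one solves

```latex
\min_{u \in BV} \; \lambda\,\mathrm{TV}(u) + \tfrac{1}{2}\,\|f - u\|_{L^2}^2,
```

with TV the (here, graph) total variation and λ > 0 a regularization parameter balancing smoothness against fidelity to the data.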

In Paper 4, we present some extensions of the results presented in Paper 1 and Paper 2. First we extend the results from Banach couples to Banach triples. Then we prove that our approach applies when complex spaces are considered instead of real spaces. Finally we compare the performance of the algorithm proposed in Paper 3 with the Split Bregman algorithm, which is one of the benchmark algorithms for the ROF model. We find that in most cases both algorithms behave in a similar way, and that in some cases our algorithm decreases the error faster with the number of iterations.

In Paper 5, we point out some connections between optimal decompositions mentioned in the thesis and optimization problems. We apply the approach used in Paper 2 to two well–known optimization problems, namely convex and linear programming to investigate connections with standard results in the framework of these problems. It is shown that we can derive proofs for duality theorems for these problems under the assumptions of our approach.

```
@phdthesis{diva2:814591,
author = {Niyobuhungiro, Japhet},
title = {{Exact Minimizers in Real Interpolation:
Characterization and Applications}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1650}},
year = {2015},
address = {Sweden},
}
```

This thesis focuses on the problem of estimating parameters in bilinear and trilinear regression models in which random errors are normally distributed. In these models the covariance matrix has a Kronecker product structure and some factor matrices may be linearly structured. The interest of considering various structures for the covariance matrices in different statistical models is partly driven by the idea that altering the covariance structure of a parametric model alters the variances of the model’s estimated mean parameters.

Firstly, the extended growth curve model with a linearly structured covariance matrix is considered. The main theme is to find explicit estimators for the mean and for the linearly structured covariance matrix. We show how to decompose the residual space, the orthogonal complement to the mean space, into appropriate orthogonal subspaces and how to derive explicit estimators of the covariance matrix from the sum of squared residuals obtained by projecting observations on those subspaces. Also an explicit estimator of the mean is derived and some properties of the proposed estimators are studied.

Secondly, we study a bilinear regression model with matrix normally distributed random errors. For this model, the dispersion matrix follows a Kronecker product structure and it can be used, for example, to model data with spatio-temporal relationships. The aim is to estimate the parameters of the model when, in addition, one covariance matrix is assumed to be linearly structured. On the basis of *n* independent observations from a matrix normal distribution, estimating equations in the form of a flip-flop relation are established.
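A minimal sketch of the flip-flop idea for an unstructured, centred matrix normal model (toy data; the thesis additionally imposes linear structure on one covariance factor):

```python
import numpy as np

def flip_flop(X, iters=20):
    """Alternating likelihood updates for centred matrix normal samples X_i (p x q),
    where cov(vec(X_i)) = Psi kron Sigma; the pair is identified up to a scalar."""
    n, p, q = X.shape
    Sigma, Psi = np.eye(p), np.eye(q)
    for _ in range(iters):
        Pinv = np.linalg.inv(Psi)
        Sigma = sum(Xi @ Pinv @ Xi.T for Xi in X) / (n * q)
        Sinv = np.linalg.inv(Sigma)
        Psi = sum(Xi.T @ Sinv @ Xi for Xi in X) / (n * p)
    return Sigma, Psi

rng = np.random.default_rng(1)
p, q, n = 3, 4, 2000
A = rng.standard_normal((p, p)); Sigma0 = A @ A.T / p + 0.5 * np.eye(p)
B = rng.standard_normal((q, q)); Psi0 = B @ B.T / q + 0.5 * np.eye(q)
Ls, Lp = np.linalg.cholesky(Sigma0), np.linalg.cholesky(Psi0)
# X_i = Ls Z Lp^T with Z standard normal gives cov(vec(X_i)) = Psi0 kron Sigma0
X = np.stack([Ls @ rng.standard_normal((p, q)) @ Lp.T for _ in range(n)])

Sh, Ph = flip_flop(X)
K_hat, K_true = np.kron(Ph, Sh), np.kron(Psi0, Sigma0)
err = np.linalg.norm(K_hat - K_true) / np.linalg.norm(K_true)
assert err < 0.2   # Kronecker product recovered up to sampling error
```

Note that only the Kronecker product Ψ ⊗ Σ is identifiable; the split into factors is determined only up to a positive scalar.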

Finally, models based on normally distributed random third order tensors are studied. These models are useful in analyzing 3-dimensional data arrays. In some studies the analysis is done using the tensor normal model, where the focus is on the estimation of the variance-covariance matrix, which has a Kronecker structure. Little attention has been paid to the structure of the mean; however, there is potential to improve the analysis by assuming a structured mean. We formally introduce a 2-fold growth curve model by assuming a trilinear structure for the mean in the tensor normal model and propose an estimation algorithm for its parameters. Some extensions are also discussed.

```
@phdthesis{diva2:813054,
author = {Nzabanita, Joseph},
title = {{Bilinear and Trilinear Regression Models with Structured Covariance Matrices}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1665}},
year = {2015},
address = {Sweden},
}
```

A semigroup of sets is a family of sets closed under finite unions. This thesis focuses on the search for semigroups of sets in the finite dimensional Euclidean spaces R^{n}, *n* ≥ 1, whose elements do not possess the Baire property, and on the study of their properties.

Recall that the family of sets having the Baire property in the real line R is a σ-algebra of sets, which includes both the meager and the open subsets of R. However, there are subsets of R which do not belong to this algebra. For example, each classical Vitali set on R does not have the Baire property.
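Recall the construction behind this example: a classical Vitali set selects exactly one representative from each coset of the rationals,

```latex
V \subset [0,1], \qquad \#\bigl(V \cap (x + \mathbb{Q})\bigr) = 1 \quad \text{for every } x \in \mathbb{R},
```

so that R = ⋃_{q∈ℚ}(V + q) is a countable disjoint union of translates of V; this decomposition is what rules out both Lebesgue measurability and the Baire property.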

It has been shown by Chatyrko that the family of all finite unions of Vitali sets on the real line, as well as its natural extensions by the collection of meager sets, are (invariant under translations of R) semigroups of sets whose elements do not possess the Baire property.

Using analogues of Vitali sets, in which the group Q of rationals in the Vitali construction is replaced by an arbitrary countable dense subgroup 𝒬 of the reals (we call these sets Vitali 𝒬-selectors of R), together with Chatyrko's method, we produce semigroups of sets on R related to 𝒬 which consist of sets without the Baire property and which are invariant under translations of R. Furthermore, we study the relationship, in the sense of inclusion, between the semigroups related to different 𝒬. From here, we define a supersemigroup of sets based on all Vitali selectors of R. This supersemigroup also consists of sets without the Baire property and is invariant under translations of R. We then extend and generalize the results from the real line to the finite-dimensional Euclidean spaces R^{n}, *n* ≥ 2, and indicate the difference between the cases *n* = 1 and *n* ≥ 2. Additionally, we show how the covering dimension can be used to define diverse semigroups of sets without the Baire property.

```
@phdthesis{diva2:799656,
author = {Nyagahakwa, Venuste},
title = {{Semigroups of Sets Without the Baire Property In Finite Dimensional Euclidean Spaces}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1700}},
year = {2015},
address = {Sweden},
}
```

This thesis considers small area estimation with a main focus on estimation and prediction theory for repeated measures data. Small area statistics are in demand for both cross-sectional and repeated measures data. For instance, small area estimates for repeated measures data may be used by public policy makers for purposes such as fund allocation or new educational and health programs, and in some cases they may be interested in a particular subgroup of the population.

It has been shown that the multivariate approach to model-based methods in small area estimation may achieve substantial improvement over the usual univariate approach. In this work, we consider repeated surveys including the same subjects at different time points. The population from which a sample has been drawn is partitioned into several subpopulations, and within all subpopulations there is the same number of group units. For this setting a multivariate linear regression model is formulated. The aim of the proposed model is to borrow strength across small areas and over time, with particular interest in growth profiles over time. The model accounts for repeated surveys, grouped individuals and random effects variation.

The estimation of model parameters is discussed with a restricted maximum likelihood based approach. The prediction of random effects and the prediction of small area means across time points, per group units and for all time points are derived. The theoretical results have also been supported by a simulation study and finally, suggestions for future research are presented.

```
@phdthesis{diva2:762231,
author = {Ngaruye, Innocent},
title = {{Small Area Estimation for Multivariate Repeated Measures Data}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1684}},
year = {2014},
address = {Sweden},
}
```

The overall aim of the study was to investigate mathematics education and the role of the subject of mathematics in connection with the transition from compulsory school to upper secondary school. The study was guided by two research questions, concerning the picture students give of the importance of mathematics for their choice of upper secondary programme, and what characterises mathematics education in year 9 and in the first year of upper secondary school. The first question was answered through a written questionnaire given to all year-9 students in a larger municipality, and the second through video-recorded lesson observations at both school levels, complemented by student interviews. The results show that factors such as good options for the future and the possibility of a good job were important for the choice of programme. About 36% of the students stated that mathematics had not influenced their choice of programme, while 35% considered themselves influenced, usually in a positive sense, though with clear differences between programmes. In the comparison of year 9 and year 1, concepts from Bernstein's theory of pedagogic discourse and the anthropological theory of the didactic were used. The analysis pointed to large similarities between the two school levels regarding the structure of lessons and the students' freedom to choose which tasks to work on, which may have contributed to the transition not being experienced as particularly dramatic. A pronounced difference was more and longer whole-class expositions and a higher pace of study in upper secondary school. The study highlights various aspects of, and connections between, students' choice of upper secondary programme and the character of mathematics education, which should be relevant both for study counselling and for the organisation and planning of teaching. The invisible pedagogy observed with regard to knowledge criteria and choice of tasks is something that needs to be made visible in the current debate on school mathematics education.

```
@phdthesis{diva2:761415,
author = {Larson, Niclas},
title = {{Matematikämnet och stadiebytet mellan grundskolan och gymnasieskolan:
En enkät- och klassrumsstudie}},
school = {Linköping University},
type = {{Linköping Studies in Behavioural Science No. 187}},
year = {2014},
address = {Sweden},
}
```

This case study examines upper secondary students' work in small groups on a problem related to derivatives, and aims to highlight factors that promote or hinder the individuals' participation in, and development of, the mathematical communication in the classroom. The study is theoretically grounded in Anna Sfard's commognitive framework, in which learning mathematics is seen as participation in a mathematical discourse.

For more than a decade, alarming reports about Swedish students' insufficient knowledge of mathematics have followed one another. Research reports point to various factors behind this declining performance. The prevailing teaching culture, in which students largely work individually on tasks from the textbook, is seen as one explanation for the deteriorating results, and for the teaching not giving students the opportunity to develop all the prescribed abilities in the subject. To achieve this, both the research field and the new curriculum of 2011 emphasise the importance of students communicating in mathematics. In this perspective, there is a need to shed light on differences in students' participation in communication about mathematics, not least in connection with learning in small groups, and on how this can be assumed to affect students' opportunities for learning.

The focus of the study is on the participants' different contributions to the group's mathematical discourse, that is, when the students communicate about mathematical objects or processes, and on how these contributions affect the students' opportunities for, and participation in, the communication. Focus is also directed at the communication that concerns the participants in the group: what the students do and how they evaluate each other's ways of participating in the mathematical discourse in the classroom. This communication is referred to in the framework as subjectifying, and is assumed to be linked to the individual's learning of mathematics.

The data collection methods used are interviews, audio and video recordings, and audiovisual pens that merge verbal and written communication. The discourse is seen as the natural unit of analysis. In the first step of the analysis, the mathematical discourse was studied with respect to differences in the content of the participants' utterances. In a second step, the flow of interaction in the group was examined in order to better understand differences in each student's participation in, and contribution to, the communication.

The results show large differences in participation and content in the students' communication, at both group and individual level. The students' development of the mathematical discourse benefits from the use of several different mediators to represent the mathematical objects. When students are offered connections to a previously mastered discourse, this leads to discursive progress. The students are further shown to have great difficulty interpreting and using the formal mathematical symbol language as support for their mathematising. The students' interpretation of the equality sign, the inequality sign and the symbol f´(x) at a process level creates obstacles to developing the mathematical discourse in the desired direction. The discourse concerning the participants and their characteristics (identifying) constitutes about 10% of all utterances and consists almost entirely of negative judgements, often used to exclude themselves or others from, or include them in, participation in the mathematising.

The study points to a need for further knowledge about how mathematics teachers can best organise small-group work to increase student engagement and the quality of students' mathematising. It further points to the importance of teachers highlighting and varying the use of different mediators to represent the mathematical objects that are the target of learning. The case study also highlights the importance of building a permissive working climate in which students do not judge themselves and each other, but instead dare to ask the questions that allow them to become increasingly involved in the mathematical discourse. A need emerges for further research directed not only at the assessment that takes place between teacher and student, but also at the assessment going on in the classroom between students, which may influence the roles they choose or are assigned. This can be assumed to be of great importance for how students communicate about mathematics with other participants in the classroom, and thus also for their learning.

```
@phdthesis{diva2:716778,
author = {Bergholm, Marie},
title = {{Gymnasieelevers kommunikativa strategier i matematikklassrummet:
En fallstudie av ett smågruppsarbete om derivata}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1665}},
year = {2014},
address = {Sweden},
}
```

This thesis consists of four papers and focuses on function spaces related to first-order analysis in abstract metric measure spaces. The classical (i.e., Sobolev) theory in Euclidean spaces makes use of summability of distributional gradients, whose definition depends on the linear structure of R^{n}. In metric spaces, we can replace the distributional gradients by (weak) upper gradients that control the functions' behavior along (almost) all rectifiable curves, which gives rise to the so-called Newtonian spaces. The summability condition considered in the thesis is expressed using a general Banach function lattice quasi-norm, and so an extensive framework is built. Sobolev-type spaces (mainly based on the *L*^{p} norm) on metric spaces, and Newtonian spaces in particular, have been under intensive study since the mid-1990s.

In Paper I, the elementary theory of Newtonian spaces based on quasi-Banach function lattices is built up. Standard tools such as moduli of curve families and the Sobolev capacity are developed and applied to study the basic properties of Newtonian functions. Summability of a (weak) upper gradient of a function is shown to guarantee the function's absolute continuity on almost all curves. Moreover, Newtonian spaces are proven complete in this general setting.
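For background, the upper gradient condition that underlies Newtonian spaces can be stated as follows (a standard definition in the field, quoted for orientation):

```latex
% g \ge 0 is an upper gradient of u if, for every rectifiable curve
% \gamma: [0, \ell_\gamma] \to X parametrised by arc length,
\bigl| u(\gamma(0)) - u(\gamma(\ell_\gamma)) \bigr| \le \int_\gamma g \, ds .
```

Weak upper gradients relax this by allowing the inequality to fail on a curve family of zero modulus; the thesis measures the summability of g in a quasi-Banach function lattice rather than only in L^{p}.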

Paper II investigates the set of all weak upper gradients of a Newtonian function. In particular, existence of minimal weak upper gradients is established. Validity of Lebesgue’s differentiation theorem for the underlying metric measure space ensures that a family of representation formulae for minimal weak upper gradients can be found. Furthermore, the connection between pointwise and norm convergence of a sequence of Newtonian functions is studied.

Smooth functions are frequently used as an approximation of Sobolev functions in the analysis of partial differential equations. In fact, Lipschitz continuity, which is (unlike smoothness) well-defined even for functions on metric spaces, often suffices as a regularity condition. Thus, Paper III concentrates on the question of when Lipschitz functions provide good approximations of Newtonian functions. As shown in the paper, it suffices that the function lattice quasi-norm is absolutely continuous and that a fractional sharp maximal operator satisfies a weak norm estimate, which it does, e.g., in doubling Poincaré spaces if a non-centered maximal operator of Hardy–Littlewood type is locally weakly bounded. Therefore, such local weak boundedness on rearrangement-invariant spaces is explored as well.

Finer qualitative properties of Newtonian functions and the Sobolev capacity get into focus in Paper IV. Under certain hypotheses, Newtonian functions are proven to be quasi-continuous, which yields that the capacity is an outer capacity. Various sufficient conditions for local boundedness and continuity of Newtonian functions are established. Finally, quasi-continuity is applied to discuss density of locally Lipschitz functions in Newtonian spaces on open subsets of doubling Poincaré spaces.

```
@phdthesis{diva2:708831,
author = {Malý, Luká\v{s}},
title = {{Sobolev-Type Spaces:
Properties of Newtonian Functions Based on Quasi-Banach Function Lattices in Metric Spaces}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1591}},
year = {2014},
address = {Sweden},
}
```

The inverse problem of reconstructing the acoustic, or electromagnetic, field from inexact measurements on a part of the boundary of a domain is important in applications, for instance for detecting the source of acoustic noise. The governing equation for the applications we consider is the Helmholtz equation. More precisely, in this thesis we study the case where Cauchy data are available on a part of the boundary and we seek to recover the solution in the whole domain. The problem is ill-posed in the sense that small errors in the Cauchy data may lead to large errors in the recovered solution. Thus special regularization methods that restore stability with respect to measurement errors are used.

In the thesis, we focus on iterative methods for solving the Cauchy problem. The methods are based on solving a sequence of well-posed boundary value problems. The specific choices for the boundary conditions used are selected in such a way that the sequence of solutions converges to the solution for the original Cauchy problem. For the iterative methods to converge, it is important that a certain bilinear form, associated with the boundary value problem, is positive definite. This is sometimes not the case for problems with a high wave number.

The main focus of our research is to study certain modifications to the problem that restore positive definiteness to the associated bilinear form. First we add an artificial interior boundary inside the domain, together with a jump condition that includes a parameter μ. We have shown that, by selecting an appropriate interior boundary and a sufficiently large value of μ, we obtain a convergent iterative regularization method, and we have proved its convergence. Since this method converges slowly, we have also developed two conjugate gradient type methods that achieve much faster convergence. Finally, we have attempted to reduce the size of the computational domain by solving well-posed problems only in a strip between the outer and inner boundaries. We demonstrate that by alternating between Robin and Dirichlet conditions on the interior boundary, we obtain a convergent iterative regularization method. Numerical experiments illustrate the performance of the suggested methods.
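For orientation, the underlying Cauchy problem can be written as follows (a standard formulation; the notation Γ₀ for the accessible part of the boundary and Γ₁ for the remainder is mine):

```latex
\Delta u + k^2 u = 0 \ \text{in } \Omega, \qquad
u = f \ \text{on } \Gamma_0, \qquad
\frac{\partial u}{\partial \nu} = g \ \text{on } \Gamma_0,
```

where both the Dirichlet data f and the Neumann data g are prescribed on Γ₀ while no data are given on Γ₁. The iterative methods discussed above replace this ill-posed problem by a sequence of well-posed boundary value problems whose conditions on Γ₁ are updated, or alternated, between iterations.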

```
@phdthesis{diva2:711818,
author = {Mpinganzima, Lydie},
title = {{Iterative Methods for Solving the Cauchy Problem for the Helmholtz Equation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1593}},
year = {2014},
address = {Sweden},
}
```

The aim of this work is to present some new contributions to the theory of peaked solitons. The thesis contains two papers, named "Lie symmetry analysis of the Novikov and Geng-Xue equations, and new peakon-like unbounded solutions to the Camassa-Holm, Degasperis-Procesi and Novikov equations'' and "Peakon-antipeakon solutions of the Novikov equation'' respectively.

In the first paper, a new kind of peakon-like solution to the Novikov equation is obtained by transforming the one-peakon solution via a Lie symmetry transformation. This new kind of solution is unbounded as x tends to plus and/or minus infinity. It also has a peak, though only for some interval of time. We verify that the peakon-like function is still a solution in the weak sense for those times where the function is non-differentiable. We find that similar solutions, with peaks living only for some interval in time, are valid weak solutions to the Camassa-Holm equation, though these cannot be obtained via a symmetry transformation.
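For orientation, the one-peakon solutions referred to here have the following well-known forms (standard formulas, not results of the thesis):

```latex
u(x,t) = c\, e^{-|x - ct|} \quad \text{(Camassa--Holm)}, \qquad
u(x,t) = \pm\sqrt{c}\, e^{-|x - ct|} \quad \text{(Novikov, } c > 0\text{)}.
```

The peak at x = ct is non-differentiable, which is why such functions must be interpreted as solutions in the weak sense.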

The second paper covers peakon-antipeakon solutions of the Novikov equation, on the basis of known solution formulas from the pure peakon case. A priori, these formulas are valid only for some interval of time and only for some initial values. The aim of the article is to study the Novikov multipeakon solution formulas in detail in order to overcome these problems. We find that the formulas for the locations and heights of the peakons are valid for all times, at least in an ODE sense. We also suggest a procedure for dealing with multipeakons whose initial conditions are such that the usual spectral data are not well-defined as residues of single poles of a Weyl function. In particular we cover the interaction between one peakon and one antipeakon, revealing some unexpected properties. For example, with complex spectral data, the solution is shown to be periodic, except for a translation, with an infinite number of collisions between the peakon and the antipeakon. Moreover, plotting solution formulas for a larger number of peakons shows similarities to the phenomenon called "waltzing peakons".

```
@phdthesis{diva2:709792,
author = {Kardell, Marcus},
title = {{Contributions to the theory of peaked solitons}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1650}},
year = {2014},
address = {Sweden},
}
```

This thesis is divided into two parts, each of which is concerned with a specific problem.

The problem under consideration in the first part is to find suitable graph representations, abstractions, cost measures and algorithms for calculating placements of unmanned aerial vehicles (UAVs) such that they can keep one or several static targets under constant surveillance. Each target is kept under surveillance by a surveillance UAV, which transmits information, typically real time video, to a relay UAV. The role of the relay UAV is to retransmit the information to another relay UAV, which retransmits it again to yet another UAV. This chain of retransmission continues until the information eventually reaches an operator at a base station.

When there is a single target, all Pareto-optimal solutions, i.e. all relevant compromises between quality and the number of UAVs required, can be found using an efficient new algorithm. If there are several targets, the problem becomes a variant of the Steiner tree problem, and to solve it we adapt an existing algorithm to find an initial tree. Once it is found, we can further improve it using a new algorithm presented in this thesis.

The second problem concerns optimization of time-consuming problems where the objective function is seen as a black box, to which the input parameters are sent and from which a function value is returned. This has the important implication that no gradient or Hessian information is available. Such problems are common when simulators are used to perform advanced calculations such as crash test simulations of cars, dynamic multibody simulations, etc. It is common that a single function evaluation takes several hours.

Algorithms for solving such problems can be broadly divided into direct search algorithms and model-building algorithms. The first kind evaluates the objective function directly, whereas the second kind builds a model of the objective function, which is then optimized in order to find a new point where the objective function is believed to have a good value. The objective function is then evaluated at that point.
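The model-building loop can be sketched as follows. This is a toy illustration: the "expensive" simulator is a cheap quadratic stand-in, the surrogate is a full quadratic fitted by least squares, and the model is minimized on a grid. Production algorithms, such as the one extended in the thesis, use far more sophisticated models and trust-region management:

```python
import numpy as np

def expensive_f(x):
    # Stand-in for a costly black-box simulation (no gradients available).
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def quad_features(x):
    # Features of a full quadratic model in two variables.
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2])

rng = np.random.default_rng(1)
pts = rng.uniform(-2, 2, size=(12, 2))            # initial design
vals = np.array([expensive_f(p) for p in pts])

for _ in range(5):
    # Fit a quadratic surrogate to all evaluated points (least squares).
    A = np.array([quad_features(p) for p in pts])
    coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
    # Optimize the cheap model on a grid instead of the expensive function.
    grid = np.array([(a, b) for a in np.linspace(-2, 2, 41)
                            for b in np.linspace(-2, 2, 41)])
    model = np.array([quad_features(g) for g in grid]) @ coef
    x_new = grid[np.argmin(model)]
    # Only now is the expensive function evaluated, at the proposed point.
    pts = np.vstack([pts, x_new])
    vals = np.append(vals, expensive_f(x_new))

best = pts[np.argmin(vals)]
```

Each outer iteration costs one expensive evaluation; everything else touches only the cheap model. Parallel variants exploit exactly this by proposing and evaluating several points per iteration.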

Since the objective function is very time-consuming, it is common to focus on minimizing the number of function evaluations. However, this completely disregards the possibility of performing calculations in parallel, and to exploit this we investigate different ways in which parallelization can be used in model-building algorithms. These include using several starting points, generating several new points in each iteration, new ways of predicting a point's value, and more.

We have implemented the parallel extensions in one of the state-of-the-art algorithms for derivative-free optimization and report results from testing on synthetic benchmarks as well as from solving real industrial problems.

```
@phdthesis{diva2:695431,
author = {Olsson, Per-Magnus},
title = {{Methods for Network Optimization and Parallel Derivative-free Optimization}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1580}},
year = {2014},
address = {Sweden},
}
```

Nowadays, large-scale optimization problems are among the most challenging. Any progress in developing methods for large-scale optimization allows important applied problems to be solved more effectively. Limited memory methods and trust-region methods represent two efficient approaches used for solving unconstrained optimization problems. A straightforward combination of them deteriorates the efficiency of the former approach, especially in the case of large-scale problems. For this reason, limited memory methods are usually combined with a line search. We develop new limited memory trust-region algorithms for large-scale unconstrained optimization that are competitive with the traditional limited memory line-search algorithms.

In this thesis, we consider applied optimization problems originating from the design of filter networks. Filter networks represent an efficient tool in medical image processing, based on replacing a set of dense multidimensional filters by a network of smaller sparse filters called sub-filters. This allows for improved image processing time, while maintaining image quality and the robustness of image processing.

Design of filter networks is a nontrivial procedure that involves three steps: 1) choosing the network structure, 2) choosing the sparsity pattern of each sub-filter and 3) optimizing the nonzero coefficient values. So far, steps 1 and 2 have mainly been based on the individual expertise and intuition of network designers. Given a sparsity pattern, the choice of the coefficients at step 3 is related to solving a weighted nonlinear least-squares problem. Even in the case of sequentially connected filters, the resulting problem is of multilinear least-squares (MLLS) type, which is a non-convex large-scale optimization problem. This is a very difficult global optimization problem that may have a large number of local minima, each of them singular and non-isolated. It is characterized by a large number of decision variables, especially for 3D and 4D filters.
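A minimal instance of the MLLS structure already appears for two sequentially connected one-dimensional filters, whose combined impulse response is bilinear in the two coefficient vectors. The sketch below (invented sizes and data) fits such a chain by multi-start alternating least squares; it illustrates the problem type, not the thesis's global optimization method:

```python
import numpy as np

def conv_matrix(h, n):
    # Matrix C with C @ x == np.convolve(x, h) for x of length n.
    return np.column_stack([np.convolve(np.eye(n)[j], h) for j in range(n)])

rng = np.random.default_rng(2)
x_true = rng.standard_normal(4)
y_true = rng.standard_normal(5)
b = np.convolve(x_true, y_true)        # target response of two chained filters

best_res = np.inf
for _ in range(5):                     # multi-start: MLLS has local minima
    x = rng.standard_normal(4)
    y = rng.standard_normal(5)
    for _ in range(200):               # alternating (linear) least squares:
        # with y fixed the residual is linear in x, and vice versa
        x, *_ = np.linalg.lstsq(conv_matrix(y, 4), b, rcond=None)
        y, *_ = np.linalg.lstsq(conv_matrix(x, 5), b, rcond=None)
    best_res = min(best_res, np.linalg.norm(b - np.convolve(x, y)))
```

Each inner step is an ordinary linear least-squares solve; the multiple random starts reflect the non-convexity noted above, which the thesis addresses with a dedicated global approach.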

We develop an effective global optimization approach to solving the MLLS problem that significantly reduces the computational time. Furthermore, we develop efficient methods for optimizing the sparsity of individual sub-filters in filter networks of a more general structure. This approach offers practitioners a means of finding a proper trade-off between image processing quality and time. It also allows for improving the network structure, which automates some stages of designing filter networks.

```
@phdthesis{diva2:692950,
author = {Zikrin, Spartak},
title = {{Large-Scale Optimization Methods with Application to Design of Filter Networks}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1561}},
year = {2014},
address = {Sweden},
}
```

The relevance of using mathematics in and for out-of-school activities is one main argument for teaching mathematics in school. Mathematical modelling is considered a bridge between the mathematics learned and taught in schools and the mathematics used at the workplace and in society, and it is also a central notion in the present Swedish mathematics syllabus for upper secondary school. This doctoral thesis reports on students', teachers' and modelling experts' experiences of learning, teaching and working with mathematical modelling, in and out of school settings, and their interpretations of the notion of mathematical modelling.

The thesis includes five papers and a preamble, where the papers are summarised, analysed, and discussed. Different methods are used in the thesis, such as video analysis of students' collaborative work on a modelling problem, interview investigations with teachers and expert modellers, content analysis of textbooks, and a literature review of modelling assessment. Theoretical aspects concerning mathematical modelling and the didactic transposition of modelling are examined.

The results presented in this thesis provide a fragmented picture of the didactic transposition of mathematical modelling in school mathematics in Sweden. There are significant differences in how modellers, teachers and students work with modelling in different practices, in terms of the goal of the modelling activity, the risks involved in using the models, the use of technology, the division of labour and the construction of mathematical models. However, there are also similarities, identified as important aspects of modelling work in the different practices, such as communication, collaboration, projects, and the use of applying and adapting pre-defined models. Students, teachers and modellers expressed a variety of descriptions of what modelling means. The variety of descriptions in the workplace is not surprising, since the working approaches are quite different, but it makes the notion difficult to transpose into school practice. Questions raised are whether it is unrealistic to search for a general definition and whether a general definition is really necessary. The consequence, for anyone who uses the notion, is to always be explicit about its meaning.

An implication for teaching is that modelling as it appears in the workplace can never be fully 'mapped' into the mathematics classroom. However, it may be possible to 'simulate' such activity. Working with mathematical modelling in projects is suggested as a way to simulate workplace activities, including collaboration and communication between different participants. The modelling problems may, for example, involve economic and environmental decisions, to prepare students to be critically aware of the use of mathematics in private life and in society, where many decisions are based on mathematical models.

```
@phdthesis{diva2:690259,
author = {Frejd, Peter},
title = {{Modes of Mathematical Modelling:
An analysis of how modelling is used and interpreted in and out of school settings}},
school = {Linköping University},
type = {{Linköping Studies in Behavioural Science No. 181}},
year = {2014},
address = {Sweden},
}
```

In this thesis three combinatorial problems are studied in four papers.

In Paper 1 we study the structure of the *k*-assignment polytope, whose vertices are the *m* × *n* (0,1)-matrices with exactly *k* ones and at most one 1 in each row and each column. This is a natural generalisation of the Birkhoff polytope, and many of the known properties of the Birkhoff polytope are generalised. A representation of the faces by certain bipartite graphs is given. This representation is used to describe the properties of the polytope, such as a complete description of the cover relation in the face poset of the polytope and an exact expression for the diameter of its graph. An ear decomposition of these bipartite graphs is constructed.
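The vertex description can be made concrete by brute-force enumeration for small parameters (the function name is mine; the count C(m,k) · C(n,k) · k! follows directly from choosing k rows, k columns, and a matching between them):

```python
import numpy as np
from itertools import combinations, permutations
from math import comb, factorial

def k_assignment_vertices(m, n, k):
    """Enumerate the vertices of the k-assignment polytope: m x n 0-1
    matrices with exactly k ones and at most one 1 per row and column."""
    verts = []
    for rows in combinations(range(m), k):
        for cols in combinations(range(n), k):
            for matched in permutations(cols):      # matching rows to columns
                A = np.zeros((m, n), dtype=int)
                for r, c in zip(rows, matched):
                    A[r, c] = 1
                verts.append(A)
    return verts

V = k_assignment_vertices(3, 4, 2)
assert len(V) == comb(3, 2) * comb(4, 2) * factorial(2)   # 3 * 6 * 2 = 36
```

For m = n = k the enumeration reduces to the n! permutation matrices, i.e. the vertices of the Birkhoff polytope, matching the generalisation described above.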

In Paper 2 we investigate the topology and combinatorics of a topological space, called the edge-product space, that is generated by a set of edge-weighted finite semi-labelled trees. This space arises by multiplying the weights of edges on paths in trees, and is closely connected to tree-indexed Markov processes in molecular evolutionary biology. In particular, by considering combinatorial properties of the Tuffley poset of semi-labelled forests, we show that the edge-product space has a regular cell decomposition with face poset equal to the Tuffley poset.

The elements of the Tuffley poset are called *X*-forests, where *X* is a finite set of labels. A generating function of the *X*-forests with respect to natural statistics is studied in Paper 3 and a closed formula is found. In addition, a closed formula for the corresponding generating function of *X*-trees is found. Singularity analysis is applied to these formulas to find asymptotics for the number of components, edges, and unlabelled nodes in *X*-forests and *X*-trees as |*X*| goes to infinity.

In Paper 4, permutation statistics counting occurrences of patterns are studied. We consider their expected values on a product of *t* permutations chosen randomly from a subset Γ of S_{n}, where Γ is a union of conjugacy classes. Hultman has described a method for computing such an expected value, denoted *E*(*s*,*t*), of a statistic *s*. The only prerequisite is that the mean of *s* over the conjugacy classes is written as a linear combination of irreducible characters of S_{n}. Therefore, the main focus of this paper is to express the means of pattern-counting statistics as such linear combinations. A procedure for calculating such expressions for statistics counting occurrences of classical and vincular patterns of length 3 is developed, and is then used to calculate all these expressions.
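Occurrence counting for classical patterns can be illustrated directly from the definition (a brute-force sketch; the helper name is mine, and vincular patterns would additionally constrain some chosen positions to be adjacent):

```python
from itertools import combinations

def count_classical(perm, pattern):
    """Count subsequences of perm that are order-isomorphic to pattern.
    Entries are assumed distinct, as in a permutation."""
    order = [sorted(pattern).index(p) for p in pattern]
    return sum(1 for sub in combinations(perm, len(pattern))
               if [sorted(sub).index(s) for s in sub] == order)

n_inv = count_classical((2, 4, 1, 3), (2, 1))  # occurrences of 21 = inversions
```

The statistic counting occurrences of the pattern 21 is exactly the number of inversions; statistics for the length-3 patterns treated in the paper are obtained by the same definition with a length-3 pattern.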

```
@phdthesis{diva2:653747,
author = {Gill, Jonna},
title = {{The k-assignment polytope, phylogenetic trees, and permutation patterns}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1544}},
year = {2013},
address = {Sweden},
}
```

One out of eight deaths throughout the world is due to cancer. Developing new treatments and improving existing treatments is hence of major importance. In this thesis we have studied how mathematical optimization can be used to improve an existing treatment method: high-dose-rate (HDR) brachytherapy.

HDR brachytherapy is a radiation modality used to treat tumours of, for example, the cervix, prostate, breasts, and skin. In HDR brachytherapy, catheters are implanted into or close to the tumour volume. A radioactive source is moved through the catheters, and by adjusting where the catheters are placed, called catheter positioning, and how the source is moved through the catheters, called the dwelling time pattern, the dose distribution can be controlled.

By constructing an individualized catheter positioning and dwelling time pattern, called a dose plan, based on each patient's anatomy, it is possible to improve the treatment result. Mathematical optimization has during the last decade been used to aid in creating individualized dose plans. The dominating optimization model for this purpose is a linear penalty model. This model only considers the dwelling time pattern within already implanted catheters, and minimizes a weighted deviation from dose intervals prescribed by a physician.
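The linear penalty objective described above can be sketched as follows (illustrative numbers only; the dose-rate matrix A, the bounds L and U, and the weights are hypothetical, not clinical data):

```python
import numpy as np

def linear_penalty(t, A, L, U, w_under=1.0, w_over=1.0):
    """Linear penalty objective (sketch): A maps dwelling times t to the dose
    at each evaluation point; deviations below L or above U are penalised
    linearly with weights w_under and w_over."""
    d = A @ t
    return (w_under * np.maximum(L - d, 0.0).sum()
            + w_over * np.maximum(d - U, 0.0).sum())

rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, size=(6, 4))     # dose-rate kernel (hypothetical)
L = np.full(6, 1.0)                        # prescribed lower dose bounds
U = np.full(6, 1.5)                        # prescribed upper dose bounds
f0 = linear_penalty(np.zeros(4), A, L, U)  # zero dwelling times: all underdose
```

With all dwelling times zero, every evaluation point underdoses by 1.0, so the penalty is 6.0; an optimizer would choose t to drive the dose into [L, U] at as many points as possible. The max-terms can be linearised with slack variables, which is what makes the model an LP.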

In this thesis we show that the distribution of the basic variables in the linear penalty model implies that only dwelling time patterns that have certain characteristics can be optimal. These characteristics cause troublesome inhomogeneities in the plans, and although various measures for mitigating these are already available, it is of fundamental interest to understand their cause.

We have also shown that the relationship between the objective function of the linear penalty model and the measures commonly used for evaluating the quality of the dose distribution is weak. This implies that even if the model is solved to optimality there is no guarantee that the generated plan is optimal with respect to clinically relevant objectives, or even near-optimal. We have therefore constructed a new model for optimizing the dwelling time pattern. This model approximates the quality measures by the concept of conditional value-at-risk, and we show that the relationship between our new model and the quality measures is strong. Furthermore, the new model generates dwelling time patterns that yield high-quality dose distributions.
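As a minimal sketch of the risk measure behind the new model (the thesis approximates dose-quality measures with conditional value-at-risk; the function and numbers below are our own illustration), CVaR at level α is the mean of the worst (1 − α) fraction of outcomes:

```python
import math

def cvar(losses, alpha):
    """Mean of the worst (1 - alpha) fraction of the losses (discrete CVaR)."""
    k = max(1, math.ceil((1 - alpha) * len(losses)))
    tail = sorted(losses)[-k:]          # the k largest losses
    return sum(tail) / k

# Hypothetical dose deviations (arbitrary units):
deviations = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
worst_20_percent = cvar(deviations, 0.8)   # mean of the two largest deviations: 9.5
```

Because CVaR is a coherent risk measure that admits linear-programming reformulations, it is a natural surrogate for tail-based dose quality measures.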

Combining optimization of the dwelling time pattern with optimization of the catheter positioning yields a problem for which it is rarely possible to find a proven optimal solution within a reasonable time frame. We have therefore developed a variable neighbourhood search heuristic that outperforms a state-of-the-art optimization software (CPLEX). We have also developed a tailored branch-and-bound algorithm that is better at improving the dual bound than a general branch-and-bound algorithm. This is a step towards the development of a method that can find proven optimal solutions to the combined problem within a reasonable time frame.

```
@phdthesis{diva2:658212,
author = {Holm, Åsa},
title = {{Mathematical Optimization of HDR Brachytherapy}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1550}},
year = {2013},
address = {Sweden},
}
```

The current work is devoted to estimating the term structure of interest rates based on a generalized optimization framework. To fix ideas, we introduce the representations of the term structure used in finance: the yield curve, the discount curve and the forward rate curve.

Yield curves are used in empirical research in finance and macroeconomics to support financial decisions made by governments and/or private financial institutions. When governments (or financial corporations) need funding, they issue to the public (i.e. the market) debt securities (bills, bonds, notes, etc.) which are sold at the discount rate at the settlement date and promise the face value of the security at the redemption date, known as the maturity date. Bills, notes and bonds are usually sold with maximum maturities of 1 year, 10 years and 30 years respectively.

Let us assume that the government issues to the market zero-coupon bonds, which provide a single payment at maturity of each bond. To determine the price of the security at the time of settlement, a single discount factor is used. Thus, the yield can be defined as the discount rate which makes the present value of the security issued (the zero-coupon bond) equal to its initial price. The yield curve describes the relationship between a particular yield and a bond's maturity. In general, given a certain number of bonds with different times to maturity, the yield curve will describe the one-to-one relationship between the bond yields and their corresponding times to maturity. For a realistic yield curve, it is important to use only bonds from the same class of issuer, or securities having the same degree of liquidity, when plotting the yields.

Discount factors, used to price bonds, are functions of the time to maturity. Given that yields are positive, these functions are assumed to be monotonically decreasing as the time to maturity increases. Thus, a discount curve is simply the graph of discount factors for different maturities associated with different securities.

Another useful curve uses the forward rate function, which can be deduced from both the discount factor and the yield function. The forward rate is the rate of return for an investment that is agreed upon today but which starts at some time in the future and provides payment at some later time as well. When forward rates are used, the resulting curve is referred to as the forward rate curve. Thus, any of these curves, that is, the yield curve, the discount curve or the forward rate curve, can be used to represent what is known as the term structure of interest rates. The shapes that the term structure of interest rates can assume include upward sloping, downward sloping, flat or humped, depending on the state of the economy. When the expectations of market participants are incorporated in the construction of these curves representing the term structure, their shapes capture and summarize the cost of credit and the risks associated with every security traded.
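The relationships between the three representations can be sketched numerically; the discount factors below are hypothetical, and continuous compounding is assumed:

```python
import math

# Hypothetical zero-coupon discount factors d(T) at maturities T (in years)
maturities = [1.0, 2.0, 5.0, 10.0]
discount = [0.97, 0.93, 0.82, 0.65]

# Continuously compounded yields: d(T) = exp(-y(T) * T)  =>  y(T) = -ln d(T) / T
yields = [-math.log(d) / T for T, d in zip(maturities, discount)]

# Forward rate agreed today for the future period [T1, T2], implied by the curve:
# f(T1, T2) = (ln d(T1) - ln d(T2)) / (T2 - T1)
pairs = list(zip(maturities, discount))
forwards = [(math.log(d1) - math.log(d2)) / (T2 - T1)
            for (T1, d1), (T2, d2) in zip(pairs, pairs[1:])]
```

Since the discount factors decrease with maturity, all implied forward rates here are positive, and for this upward sloping curve the forwards lie above the corresponding yields.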

However, constructing these curves, and choosing an appropriate representation of the term structure to use, is not a straightforward task. This is due to the complexity of the market data, namely the scarcity of zero-coupon bonds, which constitute the backbone of the term structure. The market often provides coupons alongside market security prices for a small number of maturities. This implies that, for the entire maturity spectrum, yields cannot be observed on the market. Based on available market data, yields must be estimated using traditional interpolation methods. To this end, polynomial splines as well as parsimonious functions are the methods mostly used by financial institutions and in research in finance. However, it is observed in the literature that these methods suffer from shape constraints which cause them to produce yield curves that are not realistic with respect to the market observations. Specifically, the yield curves produced by these methods are characterized by an unrealistic fit of the market data, either in the short end or in the long end of the term structure of interest rates.

To fill this gap, the current research models the yield curve using a generalized optimization framework. The method is not shape constrained, which implies that it can adapt to any shape the yield curve can take across the entire maturity spectrum. While estimating the yield curve using this method in comparison with traditional methods on the Swedish and US markets, it is shown that every traditional method tested is a special case of the generalized optimization framework. Moreover, it is shown that, for a given level of market consistency, the method produces lower variances than any of the traditional methods tested. This implies that the method produces forward rate curves of higher quality than the existing traditional methods.

Interest rate derivatives are instruments whose prices depend on, or are derived from, the prices of other instruments. Derivative instruments that are extensively used include forward rate agreement (FRA) contracts, where the forward rate is used, and interest rate swaps (IRS), where the LIBOR rate is used as the floating rate. These instruments will only be used to build up the term structure of interest rates. Since the liquidity crisis in 2007, it has been observed that discrepancies in the basis spread between interest rates applied to different interest rate derivatives have grown so large that a single discount curve is no longer appropriate for pricing securities consistently. It has been suggested that the market needs new methods for multiple yield curve estimation to price securities consistently with the market. As a response, the generalized optimization framework is extended to multiple yield curve estimation. We show that, unlike for instance the cubic spline, which is among the most widely used traditional methods, the generalized framework can produce multiple yield curves and tenor premium curves that are altogether smooth and realistic with respect to the market observations.

The U.S. Treasury market is, by size and importance, a leading market which is considered a benchmark for most fixed-income securities traded worldwide. However, the existing U.S. Treasury yield curves used in the market are of poor quality, since they have been estimated by traditional interpolation methods which are shape constrained. This implies that the market prices they imply contain a lot of noise and, as such, are not safe to use. In this work, we use the generalized optimization framework to estimate high-quality forward rates for the U.S. Treasury yield curve. Using efficient frontiers, we show that the method can produce low pricing errors with low variance compared to the least squares methods that have been used to estimate U.S. Treasury yield curves.

We finally use the high-quality U.S. Treasury forward rate curve estimated by the generalized optimization framework as input to the essentially affine model to capture the randomness in interest rates and the time-varying term premium. This premium is simply the compensation required for the additional risks that investors are exposed to. To determine optimal investment in the U.S. Treasury market, a two-stage stochastic programming model without recourse is proposed, which models borrowing, shorting and proportional transaction costs. It is found that the proposed model can provide growth of wealth in the long run. Moreover, its Sharpe ratio is better than that of the market index and its Jensen's alpha is positive. This implies that the proposed stochastic programming model can produce portfolios that perform better than the market index.

```
@phdthesis{diva2:647662,
author = {Ndengo Rugengamanzi, Marcel},
title = {{Term structure estimation based on a generalized optimization framework}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1539}},
year = {2013},
address = {Sweden},
}
```

The aim of this work is to provide an introduction to the theory of infinite dimensional stochastic processes. The thesis contains the paper *A class of infinite dimensional stochastic processes with unbounded diffusion*, written at Linköping University during 2012. The aim of that paper is to extend results from the finite dimensional theory to the infinite dimensional case. This is done by means of a coordinate representation. It is shown that for a certain kind of Dirichlet form with unbounded diffusion, we have properties such as closability, quasi-regularity, and existence of local first and second moments of the associated process. The starting chapters of this thesis contain the prerequisite theory for understanding the paper. It is my hope that any reader unfamiliar with the subject will find this thesis useful as an introduction to the field of infinite dimensional processes.

```
@phdthesis{diva2:642221,
author = {Karlsson, John},
title = {{A class of infinite dimensional stochastic processes with unbounded diffusion}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1612}},
year = {2013},
address = {Sweden},
}
```

This thesis is devoted to the study of mathematical properties of exact minimizers for the K-, L-, and E-functionals of the theory of real interpolation. Recently, exact minimizers for these functionals have appeared in important results in image processing.

In the thesis, we present a geometry of optimal decomposition for the L-functional for the couple (ℓ^{2}, X), where ℓ^{2} is defined by the standard Euclidean norm ‖ · ‖_{2} and X is a Banach space on R^{n}. The well-known ROF denoising model is a special case of an L-functional for the couple (L^{2}, BV), where L^{2} and BV stand for the space of square integrable functions and the space of functions of bounded variation on a rectangular domain, respectively. We provide simple proofs and a geometrical interpretation of optimal decomposition by following ideas of Yves Meyer, who used a duality approach to characterize optimal decomposition for the ROF denoising model.
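For orientation, the standard K-functional of a couple, and the ROF model written as a decomposition problem over (L^{2}, BV), take the following form (the notation here is ours, not necessarily the thesis's):

```latex
K(t, x; X_0, X_1) = \inf_{x = x_0 + x_1} \bigl( \|x_0\|_{X_0} + t\,\|x_1\|_{X_1} \bigr),
\qquad
\inf_{f = v + u} \Bigl( \tfrac{1}{2}\|v\|_{L^2}^2 + t\,\|u\|_{BV} \Bigr),
```

so an exact minimizer is a decomposition attaining the infimum, and the geometry of such decompositions is what the thesis studies.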

The operation of infimal convolution is a very important and non-trivial tool in functional analysis and is also very well known within the context of convex analysis. The L-, K- and E-functionals can be regarded as an infimal convolution of two well defined functions, but unfortunately tools from convex analysis cannot be applied in a straightforward way in this context of couples of spaces. We have considered infimal convolution on Banach couples and, by using a theorem due to Attouch and Brezis, we have established sufficient conditions for an infimal convolution on a given Banach couple to be subdifferentiable, which turns out to be the most important requirement that an infimal convolution must satisfy for a decomposition to be optimal. We have also provided a lemma, which we have named the Key Lemma, that characterizes optimal decomposition for an infimal convolution in general.

The main results concerning mathematical properties of optimal decomposition for the L-, K- and E-functionals for the case of general regular Banach couples are presented. We use a duality approach which can be summarized in three steps: first, we regard the functional at hand as an infimal convolution and reformulate it as the minimization of a sum of two specific functions on the intersection of the couple; then, we prove that it is subdifferentiable; and finally, we use the characterization of its optimal decomposition.

We have also investigated how powerful our approach is by applying it to two well-known optimization problems, namely convex and linear programming. As a result we have obtained new proofs of duality theorems which are central to these problems.

```
@phdthesis{diva2:632532,
author = {Niyobuhungiro, Japhet},
title = {{Optimal Decomposition in Real Interpolation and Duality in Convex Analysis}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1604}},
year = {2013},
address = {Sweden},
}
```

Model based planning has many more aspects today than it did just 20 years ago. Data is no longer the problem; data is everywhere. The big issue is to understand how to take advantage of data in decision making, and a growing focus is therefore on modeling.

This work is devoted to this task and can be seen as a three part study. First, a general problem of multi-resource routing with sequence dependent costs is studied, both in terms of models and in terms of the development of methods to solve this class of problems efficiently. Sequence dependent costs among resources arise in situations where, for instance, one resource is allocated to a task which influences another task and makes it easier, in the sense that it can be conducted at a lower cost, in a shorter time, or with some other reduced effort. This is a useful building block in military decision making when making plans for troops, attacks or missions in general.

The next part is funded by Vinnova via the research project “Effect Oriented Planning in Dynamic Scenarios”, and deals with a military ground attack problem with simultaneous attacks against a plurality of targets. This part addresses the difficulties of the attacker-defender problem, which is modeled as a nonlinear mixed integer programming formulation. Suggestions are given on how to refine and transform this into robust solution methods.

The last part, also included in the Vinnova research project, deals with mission support modeling of Air to Ground missions including multiple aircraft and a plurality of targets. In this case, sequencing is most important, and a strong effort has been put into understanding the problem and transforming it into models and methods.

In these last two parts, the implementations of models and heuristics, as well as the computer runs and simulations, originate from the work in [19] and [20], which has been an invaluable contribution to this work.

The results are very promising: fast execution and sufficient planning accuracy make them a foundation for future work and research in model based decision making applicable to industry needs and ambitions.

```
@phdthesis{diva2:622880,
author = {Lundberg, Kristian},
title = {{Effect Oriented Planning in Military Mission Support Systems:
Models and Heuristic Approaches}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1544}},
year = {2013},
address = {Sweden},
}
```

The spectral distribution function of random matrices is an information-carrying object widely studied within random matrix theory. In this thesis we combine results of that theory with the idea of free independence introduced by Voiculescu (1985).

An important theoretical part of the thesis consists of the introduction to free probability theory, which justifies the use of asymptotic freeness with respect to particular matrices as well as the use of the Stieltjes and R-transforms. Both transforms are presented together with their properties.
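For reference, the Stieltjes transform of a probability measure μ and Voiculescu's R-transform, which linearizes free additive convolution, are given (in one standard convention) by

```latex
G_\mu(z) = \int_{\mathbb{R}} \frac{d\mu(\lambda)}{z - \lambda}, \qquad
R_\mu(z) = G_\mu^{\langle -1 \rangle}(z) - \frac{1}{z}, \qquad
R_{\mu \boxplus \nu}(z) = R_\mu(z) + R_\nu(z),
```

so the asymptotic spectral distribution of a sum of asymptotically free matrices can be obtained by adding R-transforms and inverting back to a Stieltjes transform.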

The aim of the thesis is to point out characterizations of those classes of matrices which have closed form expressions for the asymptotic spectral distribution function. We consider all matrices which can be decomposed into a sum of asymptotically free independent summands.

In particular, explicit calculations are performed in order to illustrate the use of asymptotic free independence to obtain the asymptotic spectral distribution for a matrix Q and to generalize the Marcenko and Pastur (1967) theorem. The matrix Q is defined as

where X_{i} is a *p × n* matrix following a matrix normal distribution, X_{i} ~ $N_{p,n}(0, \sigma^2 I, I)$.

Finally, theorems pointing out classes of matrices Q which lead to a closed formula for the asymptotic spectral distribution are presented. In particular, results are given for matrices whose inverse Stieltjes transform, with respect to composition, is a ratio of polynomials of first and second degree.
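A quick numerical check of the Marcenko–Pastur prediction can be sketched for a single Wishart-type term Q = XXᵀ/n with σ = 1 (the sizes and seed below are arbitrary choices of ours, not the thesis's):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 100, 400                       # aspect ratio c = p/n = 1/4
X = rng.standard_normal((p, n))       # X ~ N_{p,n}(0, I, I), i.e. sigma = 1
Q = X @ X.T / n
eigs = np.linalg.eigvalsh(Q)

c = p / n
# Marcenko-Pastur support edges for sigma = 1: [(1 - sqrt(c))^2, (1 + sqrt(c))^2]
lower, upper = (1 - c ** 0.5) ** 2, (1 + c ** 0.5) ** 2   # 0.25 and 2.25
```

For these sizes the empirical spectrum already hugs the predicted support [0.25, 2.25], up to finite-size fluctuations at the edges.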

```
@phdthesis{diva2:621444,
author = {Pielaszkiewicz, Jolanta},
title = {{On the asymptotic spectral distribution of random matrices:
Closed form solutions using free independence}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1597}},
year = {2013},
address = {Sweden},
}
```

The Tippe Top is a toy that has the form of a truncated sphere with a small peg. When spun on its spherical part on a flat supporting surface, it will start to turn upside down to spin on its peg. This counterintuitive phenomenon, called inversion, has been studied for some time, but obtaining a complete description of the dynamics of inversion has proven to be a difficult problem. This is because even the most simplified model for the rolling and gliding Tippe Top is a non-integrable, nonlinear dynamical system with at least 6 degrees of freedom. The existing results are based on numerical simulations of the equations of motion, or on an asymptotic analysis showing that the inverted position is the only asymptotically attractive and stable position for the Tippe Top under certain conditions. The question of describing the dynamics of inverting solutions has remained largely open.

In this thesis we develop methods for analysing equations of motion of the Tippe Top and present conditions for oscillatory behaviour of inverting solutions.

Our approach is based on an integrated form of Tippe Top equations that leads to the Main Equation for the Tippe Top (METT) describing the time evolution of the inclination angle $\theta(t)$ for the symmetry axis of the Tippe Top.

In particular we show that we can take values for physical parameters such that the potential function $V(\cos\theta,D,\lambda)$ in METT becomes a rational function of $\cos\theta$, which is easier to analyse. We estimate quantities characterizing an inverting Tippe Top, such as the period of oscillation for $\theta(t)$ as it moves from a neighborhood of $\theta=0$ to a neighborhood of $\theta=\pi$ during inversion. Results of numerical simulations for realistic values of physical parameters confirm the conclusions of the mathematical analysis performed in this thesis.

```
@phdthesis{diva2:602220,
author = {Rutstam, Nils},
title = {{Analysis of Dynamics of the Tippe Top}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1500}},
year = {2013},
address = {Sweden},
}
```

The Internet is constantly growing but the available resources, i.e. bandwidth, are limited. Using bandwidth efficiently to provide high quality of service to users is referred to as traffic engineering. This is of utmost importance. Traffic in IP networks is commonly routed along shortest paths with respect to auxiliary link weights, e.g. using the OSPF or IS-IS protocol. Here, shortest path routing is indirectly controlled via the link weights only, and it is therefore crucial to have a profound understanding of the shortest path routing mechanism to solve traffic engineering problems in IP networks. The theoretical aspects of such problems have received little attention.

Traffic engineering in IP networks leads to very difficult optimization problems and the key element in exact methods for such problems is an inverse shortest path routing problem. It is used to answer the fundamental question of whether there exist link weights that reproduce a set of tentative paths. Path sets that cannot be obtained correspond to routing conflicts. Valid inequalities are instrumental to prohibit such routing conflicts.

We analyze the inverse shortest path routing problem thoroughly. We show that the problem is NP-complete, contrary to what is claimed in the literature, and propose a stronger relaxation. Its Farkas system is used to develop a novel and compact formulation based on cycle bases, and to classify and characterize minimal infeasible subsystems. Valid inequalities that prevent routing conflicts are derived and separation algorithms are developed for such inequalities.
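The forward direction of the inverse problem, checking whether given link weights reproduce a tentative path, is an ordinary shortest path computation; a toy check (our own instance and names, not the thesis's test data) might look like:

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances and predecessor tree from src; adj: {u: [(v, w), ...]}."""
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

def extract_path(prev, src, dst):
    out = [dst]
    while out[-1] != src:
        out.append(prev[out[-1]])
    return out[::-1]

# Do these link weights reproduce the tentative path a -> b -> d?
adj = {"a": [("b", 1), ("c", 2)], "b": [("d", 2)], "c": [("d", 2)], "d": []}
dist, prev = dijkstra(adj, "a")
shortest = extract_path(prev, "a", "d")   # path a -> b -> d of cost 3
```

The hard inverse direction, finding weights that make a whole set of tentative paths shortest, or proving that no such weights exist, is what the Farkas-system analysis above addresses.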

We also consider several approaches to modelling traffic engineering problems, especially Dantzig–Wolfe reformulations and extended formulations. We give characterizations of facets for some relaxations of traffic engineering problems.

Our results contribute to the theoretical understanding, modelling and solution of problems related to traffic engineering in IP networks.

```
@phdthesis{diva2:571482,
author = {Call, Mikael},
title = {{Shortest Path Routing Modelling, Infeasibility and Polyhedra}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1486}},
year = {2012},
address = {Sweden},
}
```

Layer potential operators associated to elliptic partial differential equations have been an object of investigation for more than a century, due to their contribution in the solution of boundary value problems through integral equations.

In this Licentiate thesis we prove the boundedness of the double layer potential operator on the Hilbert space of square integrable functions on the boundary, associated to second order uniformly elliptic equations in divergence form in the upper half-space, with real, possibly non-symmetric, bounded measurable coefficients, that do not depend on the variable transversal to the boundary. This uses functional calculus of bisectorial operators and is done through a series of four steps.
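For comparison, in the classical setting of the Laplace equation on a domain Ω with fundamental solution E, the double layer potential of a boundary function f reads (the thesis replaces the normal derivative by a conormal derivative adapted to the divergence-form coefficients):

```latex
(\mathcal{D} f)(x) = \int_{\partial\Omega} \partial_{\nu(y)} E(x, y)\, f(y)\, d\sigma(y),
\qquad x \in \Omega .
```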

The first step consists of reformulating the second order partial differential equation as an equivalent first order vector-valued ordinary differential equation in the upper halfspace. This ordinary differential equation has a particularly simple form and it is here that the bisectorial operator corresponding to the original divergence form equation appears as an infinitesimal generator.

Solving this ordinary differential equation through functional calculus comprises the second step. This is done with the help of the holomorphic semigroup associated to the restriction of the bisectorial operator to an appropriate spectral subspace; the restriction of the operator is a sectorial operator and the holomorphic semigroup is well-defined on the spectral subspace.

The third step is the construction of the fundamental solution to the original divergence form equation. The behaviour of this fundamental solution is analogous to the behaviour of the fundamental solution to the classical Laplace equation, and the conormal gradient of the adjoint fundamental solution is used as the kernel of the double layer potential operator. This third step is of a different nature than the others, insofar as it does not involve tools from functional calculus.

In the last step Green’s formula for solutions of the divergence form partial differential equation is used to give a concrete integral representation of the solutions to the divergence form equation. Identifying this Green’s formula with the abstract formula derived by functional calculus yields the sought-after boundedness of the double layer potential operator, for coefficients of the particular form mentioned above.

```
@phdthesis{diva2:556540,
author = {Krimpogiannis, Michail},
title = {{The Double Layer Potential Operator Through Functional Calculus}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1553}},
year = {2012},
address = {Sweden},
}
```

The traditional first-order analysis in Euclidean spaces relies on the Sobolev spaces W^{1,p}(Ω), where Ω ⊂ R^{n} is open and p ∈ [1, ∞]. The Sobolev norm is then defined as the sum of L^{p} norms of a function and its distributional gradient. We generalize the notion of Sobolev spaces in two different ways. First, the underlying function norm will be replaced by the “norm” of a quasi-Banach function lattice. Second, we will investigate functions defined on an abstract metric measure space, and that is why the distributional gradients need to be substituted.

The thesis consists of two papers. The first one builds up the elementary theory of Newtonian spaces based on quasi-Banach function lattices. These lattices are complete linear spaces of measurable functions with a topology given by a quasinorm satisfying the lattice property. Newtonian spaces are first-order Sobolev-type spaces on abstract metric measure spaces, where the role of weak derivatives is passed on to upper gradients. Tools such as moduli of curve families and the Sobolev capacity are developed, which allows us to study basic properties of the Newtonian functions. We will see that Newtonian spaces can be equivalently defined using the notion of weak upper gradients, which increases the number of techniques available to study these spaces. The absolute continuity of Newtonian functions along curves and the completeness of Newtonian spaces in this general setting are also established.
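The notion of upper gradient referred to here is the standard one: a non-negative Borel function g is an upper gradient of u if, for every rectifiable curve γ connecting points x and y,

```latex
|u(x) - u(y)| \le \int_\gamma g \, ds .
```

In R^{n} the modulus of the gradient, |∇u|, is an upper gradient of a smooth u, which motivates using this inequality as the substitute for distributional derivatives on metric spaces.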

The second paper in the thesis then continues with the investigation of properties of Newtonian spaces based on quasi-Banach function lattices. The set of all weak upper gradients of a Newtonian function is of particular interest. We will prove that minimal weak upper gradients exist in this general setting. Assuming that Lebesgue’s differentiation theorem holds for the underlying metric measure space, we will find a family of representation formulae. Furthermore, the connection between pointwise convergence of a sequence of Newtonian functions and its convergence in norm is studied.

```
@phdthesis{diva2:538662,
author = {Malý, Luká\v{s}},
title = {{Newtonian Spaces Based on Quasi-Banach Function Lattices}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1543}},
year = {2012},
address = {Sweden},
}
```

This thesis focuses on the problem of estimating parameters in multivariate linear models where, in particular, the mean has a bilinear structure and the covariance matrix has a linear structure. Most techniques in statistical modeling rely on the assumption that data were generated from the normal distribution. While real data may not be exactly normal, the normal distribution serves as a useful approximation to the true distribution. The modeling of normally distributed data relies heavily on the estimation of the mean and the covariance matrix. The interest in considering various structures for the covariance matrices in different statistical models is partly driven by the idea that altering the covariance structure of a parametric model alters the variances of the model’s estimated mean parameters.

The extended growth curve model with two terms and a linearly structured covariance matrix is considered. In general there is no problem estimating the covariance matrix when it is completely unknown. However, problems arise when one has to take into account that there exists a structure generated by a small number of parameters. An estimation procedure that handles linearly structured covariance matrices is proposed. The idea is first to estimate the covariance matrix when it is used to define an inner product in a regression space, and thereafter to re-estimate it when it is interpreted as a dispersion matrix. This idea is exploited by decomposing the residual space, the orthogonal complement to the design space, into three orthogonal subspaces. Studying residuals obtained from projections of observations onto these subspaces yields explicit consistent estimators of the covariance matrix. An explicit consistent estimator of the mean is also proposed, and numerical examples are given.

Models based on normally distributed random matrices are also studied in this thesis. For these models, the dispersion matrix has the so-called Kronecker product structure, and they can be used, for example, to model data with spatio-temporal relationships. The aim is to estimate the parameters of the model when, in addition, one covariance matrix is assumed to be linearly structured. On the basis of *n* independent observations from a matrix normal distribution, estimation equations in a flip-flop relation are presented and numerical examples are given.
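The flip-flop iteration for a matrix normal sample can be sketched as follows (an unstructured illustration with zero mean and identity true covariances; the linearly structured case in the thesis adds a projection step we omit, and all sizes below are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 4, 3, 500
# n observations X_i ~ N_{p,q}(0, Sigma, Psi); here the true Sigma and Psi are identities
Xs = [rng.standard_normal((p, q)) for _ in range(n)]

Sigma, Psi = np.eye(p), np.eye(q)
for _ in range(20):
    # update the row covariance given the current column covariance, and vice versa
    iPsi = np.linalg.inv(Psi)
    Sigma = sum(X @ iPsi @ X.T for X in Xs) / (n * q)
    iSigma = np.linalg.inv(Sigma)
    Psi = sum(X.T @ iSigma @ X for X in Xs) / (n * p)

kron = np.kron(Psi, Sigma)   # estimates Cov(vec X) = Psi ⊗ Sigma, here close to I_{pq}
```

Note that Sigma and Psi are only identified up to a reciprocal scaling; their Kronecker product is the well-defined quantity.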

```
@phdthesis{diva2:536195,
author = {Nzabanita, Joseph},
title = {{Estimation in Multivariate Linear Models with Linearly Structured Covariance Matrices}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1531}},
year = {2012},
address = {Sweden},
}
```

The spaces of conformally equivalent Riemann surfaces, *M _{g}* where

*g*≥ 1, are not manifolds. However the spaces of the weaker Teichmüller equivalence,

*Tg*are known to be manifolds. The Teichmüller space

*T*is the universal covering of

_{g}*M*and

_{g}*M*is the quotient space by the action of the modular group. This gives

_{g}*M*an orbifold structure with a branch locus

_{g}*B*. The branch loci

_{g}*B*can be identified with Riemann surfaces admitting non-trivial automorphisms for surfaces of genus

_{g}*g*≥ 3. In this thesis we consider the topological structure of

*B*. We study the connectedness of the branch loci in general by considering families of isolated strata and we we establish that connectedness is a phenomenon for low genera. Further, we give the orbifold structure of the branch locus of surfaces of genus 4 and genus 5 in particular, by studying the equisymmetric stratification of the branch locus.

_{g}**Paper 1.** In this paper we show that the strata corresponding to actions of order 2 and 3 belong to the same connected component for arbitrary genera. Further we show that the branch locus is connected with the exception of one isolated point for genera 5 and 6, it is connected for genus 7 and it is connected with the exception of two isolated points for genus 8.

**Paper 2.** This paper contains a collection of results regarding components of the branch loci, some of them proved in detail in other papers. It is shown that for any integer d if p is a prime such that p > (*d* + 2)^{2}, there there exist isolated strata of dimension *d* in the moduli space of Riemann surfaces of genus (*d* + 1)(*p* − 1)/2. It is also shown that if we consider Riemann surfaces as Klein surfaces, the branch loci are connected for every genera due to reflections.

**Paper 3.** Here we consider surfaces of genus 4 and 5. Here we study the automorphism groups of Riemann surfaces of genus 4 and 5 up to topological equivalence and determine the complete structure of the equisymmetric stratification of the branch locus.

**Paper 4.** In this paper we establish that the connectedness of the branch loci is a phenomenon for low genera. More precisely, we prove that the only genera *g* for which *B_{g}* is connected are *g* = 3, 4, 13, 17, 19, 59.

```
@phdthesis{diva2:527079,
author = {Bartolini, Gabriel},
title = {{On the Branch Loci of Moduli Spaces of Riemann Surfaces}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1440}},
year = {2012},
address = {Sweden},
}
```

Let Ω be a bounded domain in **R**^{n} with a Lipschitz boundary Γ divided into two parts Γ_{0} and Γ_{1} which do not intersect one another and have a common Lipschitz boundary. We consider the following Cauchy problem for the Helmholtz equation:

Δ*u* + *k*^{2}*u* = 0 in Ω, with *u* = *f* and ∂_{ν}*u* = *g* on Γ_{0},

where *k*, the wave number, is a positive real constant, ∂_{ν} denotes the outward normal derivative, and *f* and *g* are specified Cauchy data on Γ_{0}. This problem is ill-posed in the sense that small errors in the Cauchy data *f* and *g* may blow up and cause a large error in the solution.

Alternating iterative algorithms for solving this problem are developed and studied. These algorithms are based on the alternating iterative schemes suggested by V.A. Kozlov and V. Maz’ya for solving ill-posed problems. Since these original alternating iterative algorithms diverge for large values of the constant *k*^{2} in the Helmholtz equation, we develop a modification of the alternating iterative algorithms that converges for all *k*^{2}. We also perform numerical experiments that confirm that the proposed modification works.

```
@phdthesis{diva2:526247,
author = {Mpinganzima, Lydie},
title = {{An alternating iterative procedure for the Cauchy problem for the Helmholtz equation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1530}},
year = {2012},
address = {Sweden},
}
```

The thesis consists of five papers. The first three deal with topics within costly global optimization and the last two concern military decision support systems.

The first part of the thesis addresses so-called costly problems, where the objective function is seen as a “black box” to which the input parameter values are sent and a function value is returned. This means in particular that no information about derivatives is available. The black box could, for example, solve a large system of differential equations or carry out a time-consuming simulation, where a single function evaluation can take several hours! This is the reason for describing such problems as costly and why they require customized algorithms. The goal is to construct algorithms that find a (near-)optimal solution using as few function evaluations as possible. A good example of a real-life application comes from the automotive industry, where the development of new engines utilizes advanced mathematical models that are governed by a dozen key parameters. The objective is to optimize the engine by changing these parameters in such a way that it becomes as energy efficient as possible, but still meets all sorts of demands on strength and external constraints. The first three papers describe algorithms and implementation details for these costly global optimization problems.
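The evaluate-few-points strategy described above is the essence of surrogate-based optimization: fit a cheap model to the points sampled so far, let the model propose the next point, and spend an expensive evaluation only there. The sketch below is a minimal illustration under stated assumptions, not the thesis's own algorithms: a hypothetical one-dimensional black box stands in for the costly simulation, and a Gaussian radial-basis interpolant serves as the surrogate.

```python
import numpy as np

def expensive_f(x):
    # Hypothetical stand-in for a costly black box; a real one may run
    # a simulation taking hours per call.
    return (x - 0.3) ** 2 + 0.1 * np.sin(20 * x)

def rbf_fit(X, y, eps=1.0):
    # Interpolate the sampled points with Gaussian radial basis functions.
    Phi = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    return np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)

def rbf_eval(X, w, x, eps=1.0):
    # Evaluate the surrogate at points x (cheap, unlike expensive_f).
    return np.exp(-eps * (x[:, None] - X[None, :]) ** 2) @ w

X = np.array([0.0, 0.5, 1.0])        # initial design
y = expensive_f(X)
grid = np.linspace(0.0, 1.0, 201)    # candidates, scored on the surrogate only
for _ in range(10):
    w = rbf_fit(X, y)
    cand = grid[np.argmin(rbf_eval(X, w, grid))]
    if np.any(np.isclose(cand, X)):  # surrogate minimum already sampled: stop
        break
    X = np.append(X, cand)
    y = np.append(y, expensive_f(cand))

best = X[np.argmin(y)]               # best point found with few expensive calls
```

Real costly solvers balance this pure exploitation step against exploration of undersampled regions; the loop above omits that for brevity.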

The second part deals with military mission planning, that is, problems that concern logistics, allocation and deployment of military resources. Given a fleet of resources, the decision problem is to allocate the resources against enemy targets so that the overall mission success is optimized. We focus on the problem of the attacker and consider two separate problem classes. In the fourth paper we introduce an effect-oriented planning approach to an advanced weapon-target allocation problem, where the objective is to maximize the expected outcome of a coordinated attack. We present a mathematical model together with efficient solution techniques. Finally, in the fifth paper, we introduce a military aircraft mission planning problem, where an aircraft fleet should attack a given set of targets. Aircraft routing is an essential part of the problem, and the objective is to maximize the expected mission success while minimizing the overall mission time. The problem is stated as a generalized vehicle routing model with synchronization and precedence side constraints.

```
@phdthesis{diva2:524846,
author = {Quttineh, Nils-Hassan},
title = {{Models and Methods for Costly Global Optimization and Military Decision Support Systems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1450}},
year = {2012},
address = {Sweden},
}
```

Integer programming can be used to provide solutions to complex decision and planning problems occurring in a wide variety of situations. The application of integer programming to solve real world problems requires a modelling phase in which the problem at hand is translated into a mathematical description of the problem, and a solution phase that aims at developing methods for producing solutions to the mathematical formulation of the problem.

The first two papers of this thesis have their focus on the modelling phase, and the application of integer programming for solving nurse scheduling problems. Common to both papers is that the research has been conducted in collaboration with health care representatives, and that the models presented can be used for providing schedules for nurses. In the latter paper, a meta-heuristic approach is suggested for producing the schedules.

The last three papers address method development and specifically the design of column generation methods. The first of these papers presents optimality conditions that are useful in methods where columns are generated using dual solutions that are not necessarily optimal with respect to a linear programming relaxation, and the usefulness of these conditions is illustrated by examples from the literature.

Many applications of column generation yield master problems of a set partitioning type, and the fourth and fifth papers present methodologies for solving such problems. The characteristics of these methodologies are that all solutions derived are feasible and integral, where the preservation of integrality is a major distinction from other column generation methods presented in the literature.
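The column generation principle the last papers build on can be illustrated on the classical cutting stock problem (a textbook sketch, not the thesis's own models): a master LP selects cutting patterns to cover demand, and a knapsack pricing problem generates new patterns with negative reduced cost. The instance data below are invented for illustration, and the duals are obtained by solving the LP dual explicitly with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

W = 10                       # roll width (illustrative data)
w = np.array([3, 4, 5])      # item widths
d = np.array([4, 3, 2])      # demands

def price(y):
    # Pricing: integer knapsack maximizing total dual value of a pattern.
    best = np.zeros(W + 1)
    choice = [np.zeros(len(w), dtype=int) for _ in range(W + 1)]
    for c in range(1, W + 1):
        best[c], choice[c] = best[c - 1], choice[c - 1]
        for i, wi in enumerate(w):
            if wi <= c and best[c - wi] + y[i] > best[c]:
                best[c] = best[c - wi] + y[i]
                choice[c] = choice[c - wi] + np.eye(len(w), dtype=int)[i]
    return best[W], choice[W]

# Start with one homogeneous pattern per item type.
cols = [(W // wi) * np.eye(len(w), dtype=int)[i] for i, wi in enumerate(w)]
while True:
    A = np.column_stack(cols)
    # Duals of the covering master, via its explicit LP dual:
    # max d'y s.t. A'y <= 1, y >= 0.
    dual = linprog(-d, A_ub=A.T, b_ub=np.ones(A.shape[1]), method="highs")
    val, pattern = price(dual.x)
    if val <= 1 + 1e-9:      # no pattern with negative reduced cost: optimal
        break
    cols.append(pattern)

# Final master LP: min 1'x s.t. Ax >= d, x >= 0.
master = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-d, method="highs")
rolls = master.fun           # LP lower bound on the number of rolls
```

The set-partitioning masters treated in the thesis add the complication that integrality must be preserved throughout, which this LP-relaxation sketch deliberately ignores.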

```
@phdthesis{diva2:512202,
author = {Rönnberg, Elina},
title = {{Contributions within two topics in integer programming:
nurse scheduling and column generation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1421}},
year = {2012},
address = {Sweden},
}
```

This thesis is concerned with the problem of characterizing sums, differences, and products of two projections on a separable Hilbert space. Another objective is to characterize the Moore-Penrose and the Drazin inverse for pairs of operators. We use reasoning similar to that presented in P. Halmos's famous two projections theorem: using matrix representations of two orthogonal projections, depending on the relations between their ranges and null-spaces, gives us simpler forms of their matrices and allows us to involve matrix theory in solving problems. We extend the research to idempotents, generalized and hypergeneralized projections, and their combinations.

```
@phdthesis{diva2:510422,
author = {Radosavljevic, Sonja},
title = {{Pairs of projections on a Hilbert space: properties and generalized invertibility}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1525}},
year = {2012},
address = {Sweden},
}
```

Open-pit mining is an operation in which blocks from the ground are dug to extract the ore contained in them, and in this process a deeper and deeper pit is formed until the mining operation ends. Mining is often a highly complex industrial operation, with respect to both technological and planning aspects. The latter may involve decisions about which ore to mine and in which order. Furthermore, mining operations are typically capital intensive and long-term, and subject to uncertainties regarding ore grades, future mining costs, and the market prices of the precious metals contained in the ore. Today, most of the high-grade or low-cost ore deposits have already been depleted, and to obtain sufficient profitability in mining operations it is therefore often necessary to achieve operational efficiency with respect to both technological and planning issues.

In this thesis, we study the open-pit design problem, the open-pit mining scheduling problem, and the open-pit design problem with geological and price uncertainty. These problems give rise to (mixed) discrete optimization models that in real-life settings are large scale and computationally challenging.

The open-pit design problem is to find an optimal ultimate contour of the pit, given estimates of ore grades, which are typically obtained from samples in drill holes, estimates of costs for mining and processing ore, and physical constraints on mining precedence and maximal pit slope. As is well known, this problem can be solved as a maximum flow problem in a special network. In a first paper, we show that two well-known parametric procedures for finding a sequence of intermediate contours leading to an ultimate one can be interpreted as Lagrangian dual approaches to certain side-constrained design models. In a second paper, we give an alternative derivation of the maximum flow problem of the design problem.
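The well-known maximum flow reduction mentioned above can be sketched on a toy instance: profitable blocks are connected to a source with capacity equal to their value, waste blocks to a sink with capacity equal to their cost, and precedence arcs get infinite capacity; the optimal pit is then the source side of a minimum cut. The tiny orebody below is invented for illustration, and a plain Edmonds-Karp routine stands in for a specialized max-flow solver.

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp on an adjacency-matrix capacity graph.
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        v, aug = t, float("inf")         # bottleneck along the BFS path
        while v != s:
            u = parent[v]
            aug = min(aug, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                    # push the augmenting flow
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug
            v = u
        total += aug

# Toy orebody: three waste blocks on top (value -1 each) must be removed
# before the ore block underneath (value 5) can be mined.
values = [-1, -1, -1, 5]
prec = [(3, 0), (3, 1), (3, 2)]          # mining block 3 requires blocks 0, 1, 2
INF = 10 ** 9
n = len(values)
s, t = n, n + 1
cap = [[0] * (n + 2) for _ in range(n + 2)]
for b, v in enumerate(values):
    if v > 0:
        cap[s][b] = v                    # source -> profitable block
    else:
        cap[b][t] = -v                   # waste block -> sink
for a, b in prec:
    cap[a][b] = INF                      # precedence: never cut
f = max_flow(cap, s, t)
profit = sum(v for v in values if v > 0) - f   # max closure value of the pit
```

Here the min cut has capacity 3 (the three waste arcs), so the optimal pit mines all four blocks for a profit of 2.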

We also study the combined open-pit design and mining scheduling problem, which is the problem of simultaneously finding an ultimate pit contour and the sequence in which the parts of the orebody shall be removed, subject to mining capacity restrictions. The goal is to maximize the discounted net profit during the life-time of the mine. We show in a third paper that the combined problem can also be formulated as a maximum flow problem, if the mining capacity restrictions are relaxed; in this case the network however needs to be time-expanded.

In a fourth paper, we provide some suggestions for Lagrangian dual heuristic and time aggregation approaches for the open-pit scheduling problem. Finally, we study the open-pit design problem under uncertainty, which is taken into account by using the concept of conditional value-at-risk. This concept enables us to incorporate a variety of possible uncertainties, especially regarding grades, costs and prices, in the planning process. In real-life situations, the resulting models would however become very computationally challenging.

```
@phdthesis{diva2:442025,
author = {Amankwah, Henry},
title = {{Mathematical Optimization Models and Methods for Open-Pit Mining}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1396}},
year = {2011},
address = {Sweden},
}
```

Classroom assessment in mathematics at upper secondary school level is a licentiate study with the intention of approaching, capturing, understanding, and interpreting teachers' actions, thoughts, experiences, and reflections within and about assessment in mathematics in upper secondary school. A further aim is to partly fill the gap between the goal-related course assessment system and the practical field with a qualitative research study of teachers' experiences from upper secondary mathematics courses.

The study can in no way be regarded as exhaustive; it is selective in nature. There are many other factors that also influence the participating teachers' actions, deliberations, and decisions in the field of assessment, of which only a few are touched upon by the study. There are also many mathematics teachers outside the study, as well as more or less involved and interested organisations and institutions.

Interviews and focus group discussions with a number of mathematics teachers from various upper secondary schools were recorded and analysed in detail through transcription. The analytical method and the theoretical approach of grounded theory gradually led to the identification of eight types of assessment strategies: the intuitive, the awaiting, the continuous, the equity-oriented, the goal-oriented, the test-oriented, the teaching-related, and the self-assessment strategies, and to a theory of how teachers describe and reflect on assessment in mathematics at upper secondary school. The strategies display a richness in the teachers' reasoning about assessment. Finally, the theory has led the study to a number of concepts and associated categorisations that have facilitated the understanding of the teachers' actions and reasoning within mathematics assessment at upper secondary school.

```
@phdthesis{diva2:439814,
author = {Becevic, Semir},
title = {{Klassrumsbedömning i matematik på gymnasieskolans nivå}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1502}},
year = {2011},
address = {Sweden},
}
```

Proportionality is a central concept in school mathematics. The concept is introduced in the lower grades and recurs in virtually every course from year 9 to the last course of upper secondary school. The overall aim of this study is to investigate how the mathematical concept of proportionality is handled in the Swedish upper secondary school. A general problem connected to this aim is how the school's governing documents are realised in teaching materials and national tests. The focus of this thesis has been how proportionality is handled in the Swedish upper secondary course Mathematics A, in a selection of teaching materials and national tests. To investigate this, an analytical tool was developed based on the theoretical framework of ATD (the Anthropological Theory of the Didactic). Of interest here are the relations between the different levels of the didactic transposition, which concern precisely how the school's governing documents are realised in teaching materials and national tests. For the empirical study of the material, the ATD notion of a mathematical organisation was used, applying the analytical tool to examine types of proportionality tasks, solution techniques, and theoretical models of the proportionality concept.

The data presented in this study give a rather unstructured picture of the mathematical organisations of the concept area of proportionality presented in teaching materials and national tests, and these also differ with respect to how proportionality is handled in the teaching materials and in the national test for Mathematics A. The results show that roughly every fourth task in the studied chapters and national tests concerns proportionality, but that the concept is handled one-sidedly with respect to task type. Differences were observed between teaching materials and national tests regarding which solution techniques are recommended for different types of proportionality tasks. The two theoretical models of proportionality that were investigated, i.e. static and dynamic proportionality, are represented to roughly the same extent in both teaching materials and national tests. In geometry tasks, however, static proportionality dominates, whereas in the area of functions it is more common to use dynamic proportionality.

Teachers should be given knowledge about the differences between teaching materials and curricula, and about how these are interpreted in national tests, so that they can choose teaching content, including exercises, that provides good variation for the student in relation to the goals of the syllabus.

```
@phdthesis{diva2:421154,
author = {Lundberg, Anna, L., V.},
title = {{Proportionalitetsbegreppet i den svenska gymnasiematematiken:
en studie om läromedel och nationella prov}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1498}},
year = {2011},
address = {Sweden},
}
```

An introduction of computer software into mathematics classrooms makes the didactical situation more complex compared with previous learning environments (Blomhøj, 2005). A technological tool becoming a mathematical work tool in the hands of the students is a process that has turned out to be unexpectedly complex (Artigue, 2002). In addition to this problem, the teachers, as the users of the tool, go through the same process while, at the same time, trying to integrate the tool into their teaching activities in a meaningful way. For these reasons it seems important to contribute to the research focused on the learning and teaching conditions in environments where computer software is newly introduced, in order to better understand the impacts of the introduction of different software in mathematics classrooms.

In this study the dynamic mathematics software GeoGebra was used. GeoGebra is freely available for a number of platforms and has drawn much attention during the last years, with growing user communities (www.GeoGebra.org). However, having become generally available only recently, there are comparatively few studies on the use of GeoGebra in classroom settings.

In this thesis the introduction and integration of GeoGebra was investigated in two studies with different perspectives. In the first study, students’ work with GeoGebra in their mathematical activities related to the integral concept was researched. In the second study, teachers’ utilization of the didactical potential was investigated. The results of the two studies show that GeoGebra as a mathematical tool in the hands of the students and the teachers can have a significant role in supporting their mathematical work if exploited in a didactically adequate way. A learning and teaching environment based on GeoGebra brings with it a possibility to work with mathematical concepts in a broader way compared with blackboard-based classrooms. GeoGebra’s facilities make it possible to communicate mathematics in different ways and to express mathematical concepts in different representations in a more direct way than in non-dynamical environments. Communicating mathematics in different ways and expressing mathematical knowledge through different representations is of significant importance for students, not least in relation to the new curriculum for mathematics in Sweden (The Swedish National Agency for Education, 2011), where these aspects are explicitly named as aims for students to work towards.

On the other hand, the investigations also showed that the introduction and the integration of GeoGebra was a complex process for both the students and the teachers in this research. The introduction and integration of the software in the students’ mathematical activities made the didactical situation more complex, and a differentiation of students’ work with the software was observed. For some students the use of the software seemingly supported their mathematical work, while for others the result was the opposite: the use of the software was seen as a disturbing factor in their mathematical activities. When it comes to the study of teachers’ work with GeoGebra, the investigations revealed that they encountered different types of obstacles that prevented them from utilizing the full didactical potential of the software in their teaching of mathematics. Three different types of obstacles were identified:

- technical - a teacher is not able to operate the software in the intended way;
- epistemological - a teacher is not aware of the didactical potential of GeoGebra and how to exploit it in a way that supports students’ learning of integrals;
- didactical - a teacher is not aware of the complexity of technology based environments or he/she is aware of this aspect, but not comfortable with his/her competence in carrying out the process of integration of the software into his/her teaching without external help and support.

Even if it is difficult to see the software detached from the context in this research, it seems that many of the obstacles perceived by the teachers in the experimental group, as well as the difficulties students perceived in their work with the software, were related to the fact that they were inexperienced with the software and, consequently, lacked knowledge of how to exploit its features in their mathematical activities. As it seems, the teachers would encounter the same obstacles every time they try to integrate new, to them unfamiliar, software into their teaching practice. Also, many of the students would experience the same difficulties if they are not adequately supported in this process. Based on this, there are reasons to believe that the problems with the integration of GeoGebra into mathematics classrooms identified in this research would be similar for the integration of other dynamic mathematics software, or, even broader, other types of software such as Computer Algebra Systems (CAS), as long as the integration concerns the use of unfamiliar software.

```
@phdthesis{diva2:421408,
author = {Mehanovic, Sanela},
title = {{The Potential and Challenges of the Use of Dynamic Software in Upper Secondary Mathematics:
Students' and Teachers' Work with Integrals in GeoGebra Based Environments}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1499}},
year = {2011},
address = {Sweden},
}
```

High dose-rate (HDR) brachytherapy is a specific type of radiotherapy used to treat tumours of, for example, the cervix, prostate, and breasts. In HDR brachytherapy, applicators are implanted into or close to the tumour volume. A radioactive source is moved through these applicators and stops at certain positions, known as dwell points. For each patient an anatomy-based dose plan is created that decides, for example, where to place the applicators, which dwell points to use, and for how long. The aim when creating a dose plan is to deliver as high a dose as possible to the tumour while simultaneously keeping the dose to the surrounding healthy organs as low as possible.

In order to improve the quality of dose plans mathematical optimization methods are today used in clinical practice. Usually one solves a linear penalty model that minimizes a weighted deviation from dose intervals provided by a physician. In this thesis we study certain properties and alterations of this model.
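A linear penalty model of the kind described can be written as an LP in the dwell times and slack variables: the dose at each point is linear in the dwell times, and one-sided deviations from the physician's dose interval are penalized. The kernel matrices, bounds, and weights below are made-up toy numbers, and `scipy.optimize.linprog` stands in for a clinical solver; this is a sketch of the model class, not the thesis's clinical setup.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance: 3 dwell positions, 4 tumour dose points,
# 2 organ-at-risk dose points. K[j, i] = dose rate at point j from dwell i.
K_t = np.array([[1.0, 0.4, 0.1],
                [0.5, 0.9, 0.3],
                [0.2, 0.8, 0.6],
                [0.1, 0.3, 1.0]])
K_o = np.array([[0.3, 0.2, 0.1],
                [0.1, 0.2, 0.4]])
L = 1.0              # prescribed lower dose bound at tumour points
U = 0.6              # upper dose bound at organ points
wt, wo = 1.0, 1.0    # penalty weights for the two deviation types

nt, no, nd = K_t.shape[0], K_o.shape[0], K_t.shape[1]
# Variables: dwell times t (nd), tumour slacks s (nt), organ slacks r (no);
# minimize wt * sum(s) + wo * sum(r).
c = np.concatenate([np.zeros(nd), wt * np.ones(nt), wo * np.ones(no)])
# Underdose: L - K_t t <= s   ->   -K_t t - s <= -L
A1 = np.hstack([-K_t, -np.eye(nt), np.zeros((nt, no))])
# Overdose:  K_o t - U <= r   ->    K_o t - r <= U
A2 = np.hstack([K_o, np.zeros((no, nt)), -np.eye(no)])
A = np.vstack([A1, A2])
b = np.concatenate([-L * np.ones(nt), U * np.ones(no)])
res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
t_opt = res.x[:nd]   # optimized dwell times
```

The basic-variable property studied in the thesis is visible in this form: an optimal basic solution of such an LP has at most as many nonzero dwell times as there are active constraints, which is one route to the observed concentration on few dwell positions.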

One interesting property of the model that we study is the distribution of the basic variables. We show that due to the distribution of these variables only a limited number of dwell positions can be used. Since relatively few dwell positions are used, some of the corresponding dwell times have to be long in order for the desired overall dose level to be reached. These long dwell times have been observed in clinical practice and are considered to be a problem.

Another property that we study is the correlation between the objective value of the linear penalty model and dose-volume parameters used for evaluation of dose plans. We show that the correlation is weak, which implies that optimizing the linear penalty model does not give a solution to the correct problem.

Some alternative models are also considered. One includes in the optimization the decision of where to place the applicators, when HDR brachytherapy is applied for prostate cancer, and one reduces the long dwell times by using piecewise linear penalties. The solutions to both models show significant improvements.

```
@phdthesis{diva2:412846,
author = {Holm, Åsa},
title = {{Dose Plan Optimization in HDR Brachytherapy using Penalties:
Properties and Extensions}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1486}},
year = {2011},
address = {Sweden},
}
```

The official curriculum guidelines for upper secondary school in Sweden emphasise the use of mathematical models and mathematical modelling in mathematics education. However, no explicit definitions or descriptions of the notions are given in the curriculum. This licentiate thesis is an exploratory study which investigates teachers’ and students’ conceptions of the notion of mathematical modelling as well as their attitudes and experiences of working with mathematical modelling in mathematics classrooms. One experience of mathematical modelling that faces both students and teachers which is investigated is the national course tests in mathematics.

The thesis includes five papers and a preamble, where the papers are summarised, analysed, and discussed. Both quantitative and qualitative methods are used in the thesis, and theoretical aspects concerning mathematical modelling and conceptions are examined.

The results indicate that mathematical modelling plays a minor role in the investigated mathematics classrooms. The students as well as the teachers were not familiar with the notion of mathematical modelling. Only 23% of the 381 students and 50% of the 18 teachers had heard the notion before participating in the study. Both teachers and students participating in this study expressed a variety of different interpretations of the notion of mathematical modelling. Negative attitudes concerning mathematical modelling were expressed by the students as well as by some of the teachers. These negative attitudes may present obstacles for implementing mathematical modelling in the upper secondary mathematics classroom. However, these negative attitudes are related to the test items used, which may have had a negative impact on the research, especially as the test items only test parts of the modelling process. One dominant conception found among the teachers was that mathematical modelling is related to physics or chemistry. The conclusion drawn from the investigation of national course tests in mathematics course D is that there is a lack of holistic assessment of mathematical modelling: intra-mathematical aspects of mathematical modelling are favoured over extra-mathematical aspects.

Researchers argue that if we want to develop students’ modelling competency, then modelling has to be explicitly used and practised in the mathematics classrooms. However, for the Swedish upper secondary school this study concludes that this is not the case. A suggestion for future research is to focus on mathematical modelling in teacher education and to design studies of the incorporation of modelling activities into mathematics classrooms.

```
@phdthesis{diva2:403178,
author = {Frejd, Peter},
title = {{Mathematical modelling in upper secondary school in Sweden:
An exploratory investigation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1471}},
year = {2011},
address = {Sweden},
}
```

This thesis is concerned with problems related to shortest path routing (SPR) in Internet protocol (IP) networks. In IP routing, all data traffic is routed in accordance with an SPR protocol, e.g. OSPF. That is, the routing paths are shortest paths w.r.t. some artificial metric. This implies that the majority of the Internet traffic is directed by SPR. Since the Internet is steadily growing, efficient utilization of its resources is of major importance. In the operational planning phase the objective is to utilize the available resources as efficiently as possible without changing the actual design, that is, only by re-configuration of the routing. This is referred to as traffic engineering (TE). In this thesis, TE in IP networks and related problems are approached by integer linear programming.

Most TE problems are closely related to multicommodity routing problems and they are regularly solved by integer programming techniques. However, TE in IP networks has not been studied as much, and is in fact much harder than ordinary TE problems without IP routing, since the complicating shortest path aspect has to be taken into account. In a TE problem in an IP network the routing is performed in accordance with an SPR protocol that depends on a metric, the so-called set of administrative weights. The major difference between ordinary TE problems and TE in IP networks is that all routing paths must be simultaneously realizable as shortest paths w.r.t. this metric. This restriction implies that the set of feasible routing patterns is significantly reduced and that the only means available to adjust and control the routing is indirect, via the administrative weights.

A constraint generation method for solving TE problems in IP networks is outlined in this thesis. Given an "original" TE problem, the idea is to iteratively generate and augment valid inequalities that handle the SPR aspect of IP networks. These valid inequalities are derived by analyzing the inverse SPR problem. The inverse SPR problem is to decide if a set of tentative routing patterns is simultaneously realizable as shortest paths w.r.t. some metric. When this is not the case, an SPR conflict exists, which must be prohibited by a valid inequality that is then augmented to the original TE problem. To derive strong valid inequalities that prohibit SPR conflicts, a thorough analysis of the inverse SPR problem is first performed. In the end, this allows us to draw conclusions for the design problem, which was the initial primary concern.
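One direction of the realizability question is easy to state computationally: given a metric, a tentative routing path is realizable only if its length equals the shortest distance between its endpoints. The sketch below checks this condition with Dijkstra's algorithm on an invented four-node network; the full inverse SPR problem (finding weights that realize all paths simultaneously, or certifying that none exist) is what the thesis attacks with integer programming and duality, and involves more than this per-path check.

```python
import heapq

def dijkstra(adj, s):
    # Shortest distances from s in a weighted digraph {u: [(v, w), ...]}.
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def realizable(adj, paths):
    # Necessary condition: each tentative path's cost under the metric must
    # equal the shortest distance between its endpoints.
    for path in paths:
        cost = sum(w for u, v in zip(path, path[1:])
                   for vv, w in adj[u] if vv == v)
        if cost != dijkstra(adj, path[0]).get(path[-1]):
            return False
    return True

# Toy network: administrative weights chosen so that a-b-d is shortest a -> d.
adj = {"a": [("b", 1), ("c", 3)], "b": [("d", 1)], "c": [("d", 1)], "d": []}
ok = realizable(adj, [["a", "b", "d"]])    # cost 2 equals the shortest distance
bad = realizable(adj, [["a", "c", "d"]])   # cost 4 exceeds the shortest distance
```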

```
@phdthesis{diva2:403478,
author = {Call, Mikael},
title = {{Inverse Shortest Path Routing Problems in the Design of IP Networks}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1448}},
year = {2010},
address = {Sweden},
}
```

The Tippe Top consists of a small truncated sphere with a peg as a handle. When it is spun fast enough on its spherical part it starts to turn upside down and ends up spinning on the peg. This counterintuitive behaviour, called inversion, is a curious feature of this dynamical system that has been studied for some time, but obtaining a complete description of the dynamics of inversion has proved to be a difficult problem.

The existing results are either numerical simulations of the equations of motion or asymptotic analysis that shows that the inverted position is the only attractive and stable position under certain conditions.

This thesis will present methods to analyze the equations of motion of the Tippe Top, which we study in three equivalent forms that each helps us to understand different aspects of the inversion phenomenon.

Our study of the Tippe Top also focuses on the role of the underlying assumptions in the standard model for the external force, and what consequences these assumptions have, in particular for the asymptotic cases.

We define two dynamical systems as an aid to understand the dynamics of the Tippe Top, the gliding heavy symmetric top and the gliding eccentric cylinder. The gliding heavy symmetric top is a natural non-integrable generalization of the well-known heavy symmetric top. Equations of motion and asymptotics for this system are derived, but we also show that equations for the gliding heavy symmetric top can be obtained as a limit of the equations for the Tippe Top.

The equations for the gliding eccentric cylinder can be interpreted as a special case of the equations for the Tippe Top, and since it is a simpler system, properties of the Tippe Top equations are easier to study. In particular, asymptotic analysis of the gliding eccentric cylinder reveals that the standard model seems to have inconsistencies that need to be addressed.

```
@phdthesis{diva2:359340,
author = {Rutstam, Nils},
title = {{Study of equations for Tippe Top and related rigid bodies}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1452}},
year = {2010},
address = {Sweden},
}
```

Authentication is an indispensable part of Quantum Cryptography, which is an unconditionally secure key distribution technique based on the laws of nature. Without proper authentication, Quantum Cryptography is vulnerable to “man-in-the-middle” attacks. Therefore, to guarantee unconditional security of any Quantum Cryptographic protocols, the authentication used must also be unconditionally secure. The standard in Quantum Cryptography is to use the Wegman-Carter authentication, which is unconditionally secure and is based on the idea of universal hashing.

In this thesis, we first investigate properties of a Strongly Universal hash function family to facilitate understanding the properties of (classical) authentication used in Quantum Cryptography. Then, we study vulnerabilities of a recently proposed authentication protocol intended to rule out a "man-in-the-middle" attack on Quantum Cryptography. Here, we point out that the proposed authentication primitive is not secure when used in a generic Quantum Cryptographic protocol. Lastly, we estimate the lifetime of authentication using encrypted tags when the encryption key is partially known. Under simplifying assumptions, we derive that the lifetime is linearly dependent on the length of the authentication key. Experimental results that support the theoretical results are also presented.
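The strongly universal families studied here can be illustrated with the textbook linear family over a prime field (an illustrative example, not necessarily the family analyzed in the thesis):

```python
# Illustrative sketch (not the thesis's construction): the linear family
# h_{a,b}(x) = (a*x + b) mod p over Z_p is strongly 2-universal: for any
# distinct inputs x1 != x2 and any targets (y1, y2), exactly one key pair
# (a, b) maps x1 -> y1 and x2 -> y2, so the probability is 1/p^2.
p = 11  # a small prime so we can enumerate all p^2 keys

def h(a, b, x):
    return (a * x + b) % p

def key_count(x1, y1, x2, y2):
    """Number of keys (a, b) sending x1 -> y1 and x2 -> y2."""
    return sum(1 for a in range(p) for b in range(p)
               if h(a, b, x1) == y1 and h(a, b, x2) == y2)

# Strong universality: the count is exactly 1 for every choice of targets.
assert all(key_count(3, y1, 7, y2) == 1
           for y1 in range(p) for y2 in range(p))
print("strongly universal: each target pair is hit by exactly 1 of", p * p, "keys")
```

The uniqueness of the key pair follows because the two conditions form a linear system in (a, b) with an invertible coefficient matrix whenever x1 ≠ x2.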

```
@phdthesis{diva2:324702,
author = {Abidin, Aysajan},
title = {{Weaknesses of Authentication in Quantum Cryptography and Strongly Universal Hash Functions}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1447}},
year = {2010},
address = {Sweden},
}
```

The aim of this thesis is to investigate and enhance our understanding of the notions of mathematical models and modelling at the Swedish upper secondary school level. Focus is on how mathematical models and modelling are viewed by the different actors in the school system, and what characterises the collaborative process of a didactician and a group of teachers engaged in designing and developing, implementing and evaluating teaching modules (so-called modelling modules) exposing students to mathematical modelling in line with the present mathematics curriculum. The thesis consists of five papers and reports, along with a summary introduction, addressing both theoretical and empirical aspects of mathematical modelling.

The thesis uses both qualitative and quantitative methods and draws partly on design-based research methodology and cultural-historical activity theory (CHAT). The results of the thesis are presented using the structure of the three curriculum levels of the intended, potentially implemented, and attained curriculum respectively.

The results show that from 1965 to the present day, gradually more explicit emphasis has been put on mathematical models and modelling in the syllabuses at this school level. However, no explicit definitions of these notions are provided; they are described only implicitly, which opens up a diversity of interpretations.

From the collaborative work case study it is concluded that the participating teachers could not express a clear conception of the notions of mathematical models or modelling, that the designing process often was limited by constraints originating from the local school context, and that working with modelling highlights many systemic tensions in the established school practice. In addition, meta-results in the form of suggestions for how to resolve different kinds of tensions in order to improve the study design are reported.

In a questionnaire study with 381 participating students, it is concluded that only one in four students stated that they had heard about or used mathematical models or modelling in their education before, and that the expressed overall attitudes towards working with mathematical modelling, as represented in the test items, were negative. Students’ modelling proficiency was positively affected by their grade, the last mathematics course they had taken, and whether they thought the problems in the tests were easy or interesting. In addition, empirical findings indicate that so-called realistic Fermi problems given to students working in groups inherently evoke modelling activities.

```
@phdthesis{diva2:302720,
author = {Bergman Ärlebäck, Jonas},
title = {{Mathematical modelling in upper secondary mathematics education in Sweden}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1289}},
year = {2010},
address = {Sweden},
}
```

Ill-posed mathematical problems occur in many interesting scientific and engineering applications. The solution of such a problem, if it exists, may not depend continuously on the observed data. For computing a stable approximate solution it is necessary to apply a regularization method. The purpose of this thesis is to investigate regularization approaches and develop numerical methods for solving certain ill-posed problems for parabolic partial differential equations. In thermal engineering applications one wants to determine the surface temperature of a body when the surface itself is inaccessible to measurements. This problem can be modelled by a sideways heat equation. The mathematical and numerical properties of the sideways heat equation with constant convection and diffusion coefficients are first studied. The problem is reformulated as a Volterra integral equation of the first kind with smooth kernel. The influence of the coefficients on the degree of ill-posedness is also studied. The rate of decay of the singular values of the Volterra integral operator determines the degree of ill-posedness. It is shown that the sign of the coefficient in the convection term influences the rate of decay of the singular values.
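The link between singular value decay and the degree of ill-posedness can be illustrated numerically. The sketch below uses an ad hoc kernel of sideways-heat type and a simple midpoint-rule discretization (not the thesis's exact setup) and inspects the singular values of the resulting first-kind Volterra operator:

```python
import numpy as np

# Discretize (Ku)(t) = int_0^t k(t - s) u(s) ds with a kernel of
# sideways-heat type, k(r) ~ r^{-3/2} exp(-1/(4r)) (illustrative choice).
# The singular values of the discretized operator decay over many orders
# of magnitude, indicating a severely ill-posed problem.
n = 200
t = np.linspace(0.0, 1.0, n + 1)[1:]              # collocation points
h = t[1] - t[0]

def k(r):
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    pos = r > 0
    out[pos] = r[pos] ** -1.5 * np.exp(-0.25 / r[pos])
    return out

mid = t - h / 2.0                                  # midpoint quadrature nodes
K = h * k(t[:, None] - mid[None, :])
K = np.tril(K)                                     # Volterra: lower triangular

s = np.linalg.svd(K, compute_uv=False)
print(f"largest/smallest singular value: {s[0]:.2e} / {s[-1]:.2e}")
assert s[0] / s[-1] > 1e8                          # severe ill-posedness
```

The same experiment with a kernel that is nonzero at the origin would show much slower, polynomial-type decay, i.e. a mildly ill-posed problem.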

Further, a sideways heat equation in cylindrical geometry is studied. The equation is a mathematical model of the temperature changes inside a thermocouple, which is used to approximate the gas temperature in a combustion chamber. The heat transfer coefficient at the surface of the thermocouple is also unknown. This coefficient is approximated via a calibration experiment. Then the gas temperature in the combustion chamber is computed using the convection boundary condition. In both steps the surface temperature and heat flux are approximated using Tikhonov regularization and the method of lines.

Many existing methods for solving sideways parabolic equations are inadequate for solving multi-dimensional problems with variable coefficients. A new iterative regularization technique for solving a two-dimensional sideways parabolic equation with variable coefficients is proposed. A preconditioned Generalized Minimal Residual method (GMRES) is used to regularize the problem. The preconditioner is based on a semi-analytic solution formula for the corresponding problem with constant coefficients. Regularization is applied both in the preconditioner and by truncating the GMRES iterations. The computed examples indicate that the proposed preconditioned GMRES method is well suited for this problem.

This thesis also presents a numerical method for the solution of a Cauchy problem for a parabolic equation in multi-dimensional space, where the domain is cylindrical in one spatial direction. The formal solution is written as a hyperbolic cosine function in terms of a parabolic unbounded operator. The ill-posedness is dealt with by truncating the large eigenvalues of the operator. The approximate solution is computed by projecting onto a smaller subspace generated by the Arnoldi algorithm applied to the inverse of the operator. A well-posed parabolic problem is solved in each iteration step. Further, the hyperbolic cosine is evaluated explicitly only for a small triangular matrix. Numerical examples are given to illustrate the performance of the method.

```
@phdthesis{diva2:302609,
author = {Ranjbar, Zohreh},
title = {{Numerical Solution of Ill-posed Cauchy Problems for Parabolic Equations}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1300}},
year = {2010},
address = {Sweden},
}
```

The first subject we investigate in this thesis deals with optimization problems on graphs. The edges are given costs defined by the values of independent exponential random variables. We show how to calculate some or all moments of the cost distributions of certain optimization problems on graphs.

The second subject that we investigate is 1-error correcting perfect binary codes, perfect codes for short. In most work about perfect codes, two codes are considered equivalent if there is an isometric mapping between them. We call this isometric equivalence. Another type of equivalence is given if two codes can be mapped on each other using a non-singular linear map. We call this linear equivalence. A third type of equivalence is given if two codes can be mapped on each other using a composition of an isometric map and a non-singular linear map. We call this extended equivalence.

- In Paper 1 we give a new better bound on how much the cost of the matching problem with exponential edge costs varies from its mean.
- In Paper 2 we calculate the expected cost of an LP-relaxed version of the matching problem in which some edges are given zero cost. A special case is when each vertex, with probability 1 – p, has a zero cost loop; for this problem we derive an explicit formula for the expected cost.
- In Paper 3 we define the polymatroid assignment problem and give a formula for calculating all moments of its cost.
- In Paper 4 we present a computer enumeration of the 197 isometric equivalence classes of the perfect codes of length 31 of rank 27 and with a kernel of dimension 24.
- In Paper 5 we investigate when it is possible to map two perfect codes on each other using a non-singular linear map.
- In Paper 6 we give an invariant for the equivalence classes of all perfect codes of all lengths when linear equivalence is considered.
- In Paper 7 we give an invariant for the equivalence classes of all perfect codes of all lengths when extended equivalence is considered.
- In Paper 8 we define a class of perfect codes that we call FRH-codes. It is shown that each FRH-code is linearly equivalent to a so called Phelps code and that this class contains Phelps codes as a proper subset.
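The flavour of Papers 1–3 can be conveyed by a Monte Carlo check of a classical result of the same type: Parisi's formula for the random assignment problem (a related problem, not one of the papers' own results), which says that with i.i.d. Exp(1) edge costs on the complete bipartite graph K_{n,n}, the expected minimum-cost perfect matching equals 1 + 1/4 + ... + 1/n².

```python
import random
from itertools import permutations

# Monte Carlo check of Parisi's formula for the random assignment problem:
# with i.i.d. Exp(1) edge costs on K_{n,n}, the expected minimum-cost
# perfect matching is sum_{i=1}^{n} 1/i^2.
random.seed(1)
n, trials = 4, 2000

def min_matching_cost(cost):
    # brute force over all n! matchings (fine for small n)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

mean = sum(
    min_matching_cost([[random.expovariate(1.0) for _ in range(n)]
                       for _ in range(n)])
    for _ in range(trials)
) / trials
exact = sum(1 / i**2 for i in range(1, n + 1))
print(f"Monte Carlo {mean:.3f} vs Parisi formula {exact:.3f}")
assert abs(mean - exact) < 0.1
```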

```
@phdthesis{diva2:277130,
author = {Hessler, Martin},
title = {{Optimization, Matroids and Error-Correcting Codes}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1277}},
year = {2009},
address = {Sweden},
}
```

In this thesis we investigate the double obstacle problem for p-harmonic functions on metric spaces. We minimize the p-energy integral among all functions which have prescribed boundary values and lie between two given obstacles. This is a generalization of the Dirichlet problem for p-harmonic functions, in which case the obstacles are −∞ and +∞.
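A minimal one-dimensional discrete analogue may clarify the minimization (purely illustrative: the thesis works on metric measure spaces, and the exponent, obstacles, and solver below are ad hoc choices):

```python
import numpy as np

# 1-D sketch of the double obstacle problem: minimize the discrete p-energy
#   E(u) = h * sum_i |(u_{i+1} - u_i)/h|^p
# over grid functions with fixed boundary values squeezed between two
# obstacles, via projected gradient descent. Here p = 4 (even, so powers
# keep their sign) and the obstacles are ad hoc.
p, n = 4, 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
psi_lower = 0.5 - 4.0 * (x - 0.5) ** 2            # lower obstacle (a bump)
psi_upper = np.full(n, 0.8)                       # upper obstacle

def energy(u):
    return h * np.sum(np.abs(np.diff(u) / h) ** p)

def grad(u):
    s = (np.diff(u) / h) ** (p - 1)               # slopes to the (p-1)th power
    g = np.zeros_like(u)
    g[1:-1] = p * (s[:-1] - s[1:])                # interior derivative of E
    return g                                      # boundary nodes stay fixed

u = np.clip(np.zeros(n), psi_lower, psi_upper)    # feasible start, u(0)=u(1)=0
e0 = energy(u)
for _ in range(40000):
    u = np.clip(u - 5e-5 * grad(u), psi_lower, psi_upper)
    u[0] = u[-1] = 0.0

assert np.all(u >= psi_lower - 1e-12) and np.all(u <= psi_upper + 1e-12)
assert energy(u) < e0                             # the p-energy was reduced
print(f"p-energy: {e0:.4f} -> {energy(u):.4f}")
```

In the continuum limit the minimizer is p-harmonic away from the contact set, mirroring the result stated above for continuous obstacles.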

We show the existence and uniqueness of solutions, and their continuity when the obstacles are continuous. Moreover we show that the continuous solution is p-harmonic in the open set where it does not touch the continuous obstacles. If the obstacles are not continuous, but satisfy a Wiener type regularity condition, we prove that the solution is still continuous. The Hölder continuity for solutions is shown, when the obstacles are Hölder continuous. Boundary regularity of the solutions is also studied.

Furthermore we study two kinds of convergence problems for the solutions. First we let the obstacles and the boundary values vary and show the convergence of the solutions. We also consider generalized solutions for insoluble obstacle problems, using the convergence results. Moreover we show that for soluble obstacle problems the generalized solution coincides, locally, with the standard solution.

Second we consider an increasing sequence of open sets, with union Ω, and fix the obstacles and the boundary values. We show that the solutions of the obstacle problems in these sets converge to the solution of the corresponding problem in Ω.

```
@phdthesis{diva2:275934,
author = {Farnana, Zohra},
title = {{The Double Obstacle Problem on Metric Spaces}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1283}},
year = {2009},
address = {Sweden},
}
```

Compact Riemann surfaces of genus greater than 1 can be realized as quotient spaces of the hyperbolic plane by the action of Fuchsian groups. The Teichmüller space is the set of all complex structures of Riemann surfaces, and the moduli space is the set of conformal equivalence classes of Riemann surfaces. For genus greater than two the branch locus of the covering of the moduli space by the Teichmüller space can be identified with the set of Riemann surfaces admitting non-trivial automorphisms. Here we give the orbifold structure of the branch locus of surfaces of genus 5 by studying the equisymmetric stratification of the branch locus. This gives the orbifold structure of the moduli space.

We also show that the strata corresponding to surfaces with automorphisms of order 2 and 3 belong to the same connected component for every genus. Further we show that the branch locus is connected with the exception of one isolated point for genera 5 and 6, it is connected for genus 7 and it is connected with the exception of two isolated points for genus 8.

```
@phdthesis{diva2:275389,
author = {Bartolini, Gabriel},
title = {{On the Branch Loci of Moduli Spaces of Riemann Surfaces of Low Genera}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1413}},
year = {2009},
address = {Sweden},
}
```

Many testing, estimation and confidence interval procedures discussed in the multivariate statistical literature are based on the assumption that the observation vectors are independent and normally distributed. The main reason for this is that often sets of multivariate observations are, at least approximately, normally distributed. Normally distributed data can be modeled entirely in terms of their means and variances/covariances. Estimating the mean and the covariance matrix is therefore a problem of great interest in statistics and it is of great significance to consider the correct statistical model. The estimator for the covariance matrix is important since inference on the mean parameters strongly depends on the estimated covariance matrix and the dispersion matrix for the estimator of the mean is a function of it.

In this thesis the problem of estimating parameters for a matrix normal distribution with different patterned covariance matrices, i.e., different statistical models, is studied.

A *p*-dimensional random vector is considered for a banded covariance structure reflecting *m*-dependence. A simple non-iterative estimation procedure is suggested which gives an explicit, unbiased and consistent estimator of the mean and an explicit and consistent estimator of the covariance matrix for arbitrary *p* and *m*.

Estimation of parameters in the classical Growth Curve model when the covariance matrix has some specific linear structure is considered. In our examples, maximum likelihood estimators cannot be obtained explicitly, and one must rely on numerical optimization algorithms. Therefore explicit estimators are obtained as alternatives to the maximum likelihood estimators. From a discussion about residuals, a simple non-iterative estimation procedure is suggested which gives explicit and consistent estimators of both the mean and the linearly structured covariance matrix.

This thesis also deals with the problem of estimating the Kronecker product structure. The sample observation matrix is assumed to follow a matrix normal distribution with a separable covariance matrix, in other words it can be written as a Kronecker product of two positive definite matrices. The proposed estimators are used to derive a likelihood ratio test for spatial independence. Two cases are considered, when the temporal covariance is known and when it is unknown. When the temporal covariance is known, the maximum likelihood estimates are computed and the asymptotic null distribution is given. In the case when the temporal covariance is unknown the maximum likelihood estimates of the parameters are found by an iterative alternating algorithm and the null distribution for the likelihood ratio statistic is discussed.

```
@phdthesis{diva2:220133,
author = {Ohlson, Martin},
title = {{Studies in Estimation of Patterned Covariance Matrices}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1255}},
year = {2009},
address = {Sweden},
}
```

This work is devoted to the equation

\[
\int_{S} \frac{u(y)}{|x - y|^{N-1}}\, dS(y) = f(x), \qquad x \in S, \tag{1}
\]

where *S* is the graph of a Lipschitz function φ on **R**^{N} with small Lipschitz constant, and *dS* is the Euclidean surface measure. The integral in the left-hand side is referred to as a simple layer potential and *f* is a given function. The main objective is to find a solution *u* to this equation, along with estimates for solutions near points on *S*. Our analysis is carried out in local *L^{p}*-spaces and local Sobolev spaces, and the estimates are given in terms of seminorms.

In Paper 1, we consider the case when *S* is a hyperplane. This gives rise to the classical Riesz potential operator of order one, and we prove uniqueness of solutions in the largest class of functions for which the potential in (1) is defined as an absolutely convergent integral. We also prove an existence result and derive an asymptotic formula for solutions near a point on the surface. Our analysis allows us to obtain optimal results concerning the class of right-hand sides for which a solution to (1) exists. We also apply our results to weighted *L^{p}*- and Sobolev spaces, showing that for certain weights, the operator in question is an isomorphism between these spaces.

In Paper 2, we present a fixed point theorem for a locally convex space whose topology is given by a family of seminorms p(· ; α), α ∈ Ω. We study the existence and uniqueness of fixed points for a mapping defined on a subset of this space. It is assumed that there exists a linear and positive operator *K*, acting on functions defined on the index set Ω, which controls the mapping in terms of the seminorms.

Under some additional assumptions, one of which is the existence of a fixed point for the operator *K* + p(· ; ·), we prove that the mapping itself has a fixed point. For a class of elements satisfying *K^{n}*(p(u ; ·))(α) → 0 as *n* → ∞, we show that fixed points are unique. This class includes, in particular, the solution we construct in the paper. We give several applications, proving existence and uniqueness of solutions for two types of first and second order nonlinear differential equations in Banach spaces. We also consider pseudodifferential equations with nonlinear terms.

In Paper 3, we treat equation (1) in the case when *S* is a general Lipschitz surface and 1 < *p* < ∞. Our results are presented in terms of Λ(*r*), which is the Lipschitz constant of φ on the ball centered at the origin with radius *2r*. Estimates of solutions to (1) are provided, which can be used to obtain knowledge about behaviour near a point on *S* in terms of seminorms. We also show that solutions to (1) are unique if they are subject to certain growth conditions. Examples are given when specific assumptions are placed on Λ. The main tool used for both existence and uniqueness is the fixed point theorem from Paper 2.

In Paper 4, we collect some properties and estimates of Riesz potential operators, and also for the operator that was used in Paper 1 and Paper 3 to invert the Riesz potential of order one on **R ^{N}**, for the case when the density function is either radial or has mean value zero on spheres. It turns out that these properties define invariant subspaces of the respective domains of the operators in question.

```
@phdthesis{diva2:158270,
author = {Thim, Johan},
title = {{Simple Layer Potentials on Lipschitz Surfaces: An Asymptotic Approach}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1235}},
year = {2009},
address = {Sweden},
}
```

Integer programming can be used to provide solutions to complex decision and planning problems occurring in a wide variety of situations. Applying integer programming to a real life problem basically involves a first phase where a mathematical model is constructed, and a second phase where the problem described by the model is solved. While the nature of the challenges involved in the respective two phases differ, the strong relationship between the properties of models, and which methods are appropriate for their solution, links the two phases. This thesis consists of three papers, of which the third one considers the modeling phase, while the first and second consider the solution phase.

Many applications of column generation yield master problems of set partitioning type, and the first and second papers present methodologies for solving such problems. The characteristics of the methodologies presented are that all successively found solutions are feasible and integral, where the retention of integrality is a major distinction from other column generation methods presented in the literature.

The third paper concerns nurse scheduling and describes the results of a pilot implementation of a scheduling tool at a Swedish nursing ward. This paper focuses on the practical aspects of modeling and the challenges of providing a solution to a complex real life problem.

```
@phdthesis{diva2:113882,
author = {Rönnberg, Elina},
title = {{Methods and Applications in Integer Programming:
All-Integer Column Generation and Nurse Scheduling}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1383}},
year = {2008},
address = {Sweden},
}
```

In this thesis we study the relativistic multipole moments for stationary asymptotically flat spacetimes as introduced by Geroch and Hansen. These multipole moments give an asymptotic description of the gravitational field in a coordinate independent way.

Due to this good description of the spacetimes, it is natural to try to construct a spacetime from only the set of multipole moments. Here we present a simple method to do this for the static axisymmetric case. We also give explicit solutions for the cases where the number of non-zero multipole moments is finite. In addition, for the general stationary axisymmetric case, we present methods to generate solutions.

It has been a long standing conjecture that the multipole moments give a complete characterization of the stationary spacetimes. Much progress toward a proof has been made over the years. However, one difficult task remains: to prove that a spacetime exists with an a priori given arbitrary set of multipole moments, subject to some given condition.

Here we present such a condition for the axisymmetric case, and prove that it is both necessary and sufficient. We also extend this condition to the general case without axisymmetry, but in this case we only prove the necessity of our condition.

```
@phdthesis{diva2:17945,
author = {Bäckdahl, Thomas},
title = {{Multipole Moments of Stationary Spacetimes}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1181}},
year = {2008},
address = {Sweden},
}
```

Ill-posed sets of linear equations typically arise when discretizing certain types of integral transforms. A well known example is image reconstruction, which can be modeled using the Radon transform. After expanding the solution into a finite series of basis functions a large, sparse and ill-conditioned linear system occurs. We consider the solution of such systems. In particular we study a new class of iteration methods named DROP (for Diagonal Relaxed Orthogonal Projections) constructed for solving both linear equations and linear inequalities. This class can also be viewed, when applied to linear equations, as a generalized Landweber iteration. The method is compared with other iteration methods using test data from a medical application and from electron microscopy. Our theoretical analysis includes convergence proofs of the fully simultaneous DROP algorithm for linear equations without consistency assumptions, and of block-iterative algorithms for both linear equations and linear inequalities in the consistent case.

When applying an iterative solver to an ill-posed set of linear equations, the error usually decreases initially, but after some iterations, depending on the amount of noise in the data and the degree of ill-posedness, it starts to increase. This phenomenon is called semi-convergence. We study the semi-convergence performance of Landweber-type iteration and propose new ways to specify the relaxation parameters. These are computed so as to control the propagated error.
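Semi-convergence can be reproduced on a toy problem. The sketch below uses a diagonal test matrix with hand-picked singular values and a fixed noise vector, so each spectral component is explicit (illustrative only, not one of the thesis's test problems):

```python
import numpy as np

# Semi-convergence of Landweber iteration x_{k+1} = x_k + lam * A^T (b - A x_k):
# the error first decreases, then increases as the noise in the
# small-singular-value components is amplified.
sigma = np.array([1.0, 0.5, 0.1, 0.01])
A = np.diag(sigma)
x_true = np.ones(4)
noise = np.array([0.01, 0.01, 0.01, 0.05])        # fixed "measurement" noise
b = A @ x_true + noise
lam = 1.0 / sigma.max() ** 2                      # classical step-size bound

x = np.zeros(4)
errors = []
for _ in range(20000):
    x = x + lam * A.T @ (b - A @ x)
    errors.append(np.linalg.norm(x - x_true))
errors = np.array(errors)

k_best = int(errors.argmin())
print(f"best error {errors.min():.3f} at iteration {k_best + 1}; "
      f"final error {errors[-1]:.3f}")
assert 0 < k_best < len(errors) - 1               # the error dips, then rises
assert errors[-1] > 10 * errors.min()             # stopping early matters
```

This is exactly why the relaxation parameters and stopping rules discussed here are needed: iterating to convergence converges to the noisy least squares solution, not to the sought one.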

We also describe a class of stopping rules for Landweber-type iteration for solving linear inverse problems. The class includes the well known discrepancy principle, and the monotone error rule. We unify the error analysis of these two methods. The stopping rules depend critically on a certain parameter whose value needs to be specified. A training procedure is therefore introduced for securing robustness. The advantages of using trained rules are demonstrated on examples taken from image reconstruction from projections.

Kaczmarz's method, also called ART (Algebraic Reconstruction Technique) is often used for solving the linear system which appears in image reconstruction. This is a fully sequential method. We examine and compare ART and its symmetric version. It is shown that the cycles of symmetric ART, unlike ART, converge to a weighted least squares solution if and only if the relaxation parameter lies between zero and two. Further we show that ART has faster asymptotic rate of convergence than symmetric ART. Also a stopping criterion is proposed and evaluated for symmetric ART.
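A minimal sketch of ART and its symmetric variant discussed above: one cycle of symmetric ART sweeps the rows forward and then backward. The row-projection update below is the textbook Kaczmarz step, and the relaxation parameter is an assumed choice, not taken from the thesis.

```python
import numpy as np

# Kaczmarz (ART) row-projection sweeps; symmetric ART does a forward sweep
# followed by a backward sweep. On a consistent system both converge.
def kaczmarz_sweep(A, b, x, lam, rows):
    for i in rows:                                 # project onto row hyperplanes
        a = A[i]
        x = x + lam * (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
x_true = rng.standard_normal(4)
b = A @ x_true                                     # consistent data
lam = 1.0                                          # relaxation in (0, 2)

x_art = x_sym = np.zeros(4)
for _ in range(1000):
    x_art = kaczmarz_sweep(A, b, x_art, lam, range(6))            # ART cycle
    x_sym = kaczmarz_sweep(A, b, x_sym, lam, range(6))            # symmetric ART:
    x_sym = kaczmarz_sweep(A, b, x_sym, lam, range(5, -1, -1))    # forward+backward

assert np.allclose(x_art, x_true, atol=1e-6)
assert np.allclose(x_sym, x_true, atol=1e-6)
print("ART and symmetric ART both recover x on a consistent system")
```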

We further investigate a class of block-iterative methods used in image reconstruction. The cycles of the iterative sequences are characterized in terms of the original linear system. We define symmetric block-iteration and compare the behavior of symmetric and non-symmetric block-iteration. The results are illustrated using some well-known methods. A stopping criterion is offered and assessed for symmetric block-iteration.

```
@phdthesis{diva2:18098,
author = {Nikazad, Touraj},
title = {{Algebraic Reconstruction Methods}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1186}},
year = {2008},
address = {Sweden},
}
```

In many fields of science, engineering, and economics large amounts of data are stored and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis we discuss algorithms for the reduced rank regression problem and algorithms for the computation of the best multilinear rank approximation of tensors.

The first two papers deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically the problem is

\[
\min_{\operatorname{rank}(X) = k} \det\, (B - XA)(B - XA)^{T},
\]

where $A$ and $B$ are given matrices and we want to find $X$, under a certain rank condition, that minimizes the determinant. This problem is not properly stated, since it involves implicit assumptions on $A$ and $B$ so that $(B - XA)(B - XA)^{T}$ is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix. The volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals causing a singular objective matrix.
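The volume criterion is easy to check numerically; a small sketch (illustrative only):

```python
import numpy as np

# The volume of a matrix is the product of its nonzero singular values.
# Two sanity checks: for a nonsingular square matrix it reduces to |det|,
# and for a rank-one outer product u v^T it equals ||u|| * ||v||.
def volume(M, tol=1e-10):
    s = np.linalg.svd(M, compute_uv=False)
    return np.prod(s[s > tol * s[0]])              # drop numerically zero sigmas

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
assert np.isclose(volume(M), abs(np.linalg.det(M)))

u, v = rng.standard_normal(5), rng.standard_normal(3)
assert np.isclose(volume(np.outer(u, v)),
                  np.linalg.norm(u) * np.linalg.norm(v))
print("volume(M) = |det M| for nonsingular M; rank-one case checks out too")
```

Unlike the determinant, the volume stays nonzero and meaningful when the objective matrix becomes rank deficient, which is what makes the generalized criterion well posed.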

Classification problems occur in many applications. The task is to determine the label or class of an unknown object. The third paper concerns classification of handwritten digits in the context of tensors or multidimensional data arrays. Tensor and multilinear algebra is an area that attracts more and more attention because of the multidimensional structure of the collected data in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm performs a data reduction of 98--99\% using HOSVD prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces for the different classes. An unknown digit is expressed as a linear combination of the basis vectors. The resulting algorithm achieves a 5\% classification error at a fairly low computational cost.

The remaining two papers discuss computational methods for the best multilinear rank approximation problem

\[
\min_{\mathcal{B}} \| \mathcal{A} - \mathcal{B} \|,
\]

where $\mathcal{A}$ is a given tensor and we seek the best low multilinear rank approximation tensor $\mathcal{B}$. This is a generalization of the best low rank matrix approximation problem. It is well known that for matrices the solution is given by truncating the singular values in the singular value decomposition (SVD) of the matrix. But for tensors in general the truncated HOSVD does not give an optimal approximation. For example, a third order tensor $\mathcal{B} \in \mathbb{R}^{I \times J \times K}$ with $\operatorname{rank}(\mathcal{B}) = (r_1, r_2, r_3)$ can be written as the product

\[
\mathcal{B} = (X, Y, Z) \cdot \mathcal{C}, \qquad b_{ijk} = \sum_{\lambda,\mu,\nu} x_{i\lambda}\, y_{j\mu}\, z_{k\nu}\, c_{\lambda\mu\nu},
\]

where $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ and $X \in \mathbb{R}^{I \times r_1}$, $Y \in \mathbb{R}^{J \times r_2}$, and $Z \in \mathbb{R}^{K \times r_3}$ are matrices of full column rank. Since it is no restriction to assume that $X$, $Y$, and $Z$ have orthonormal columns, and due to these constraints, the approximation problem can be considered as a nonlinear optimization problem defined on a product of Grassmann manifolds. We introduce novel techniques for multilinear algebraic manipulations, enabling means for theoretical analysis and algorithmic implementation. These techniques are used to solve the approximation problem using Newton and quasi-Newton methods specifically adapted to operate on products of Grassmann manifolds. The presented algorithms are suited for small, large and sparse problems and, when applied to difficult problems, they clearly outperform alternating least squares methods, which are standard in the field.
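The multilinear product above can be sketched via mode-n products. While the truncated HOSVD is generally suboptimal, at the exact multilinear rank it reconstructs the tensor perfectly, which the following illustration (with ad hoc dimensions) verifies:

```python
import numpy as np

# Sketch of B = (X, Y, Z) . C:  b_{ijk} = sum x_{il} y_{jm} z_{kn} c_{lmn},
# implemented with mode-n products. We build a tensor of multilinear rank
# (2, 2, 2) and verify that the HOSVD at that rank recovers it exactly.
def mode_mult(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    T = np.moveaxis(T, mode, 0)
    out = (M @ T.reshape(T.shape[0], -1)).reshape((M.shape[0],) + T.shape[1:])
    return np.moveaxis(out, 0, mode)

rng = np.random.default_rng(1)
C = rng.standard_normal((2, 2, 2))                 # small core tensor
X, Y, Z = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
B = mode_mult(mode_mult(mode_mult(C, X, 0), Y, 1), Z, 2)

# HOSVD: the factors are singular vectors of the mode unfoldings.
factors = []
for mode in range(3):
    unf = np.moveaxis(B, mode, 0).reshape(B.shape[mode], -1)
    U, _, _ = np.linalg.svd(unf, full_matrices=False)
    factors.append(U[:, :2])                       # keep the rank-2 subspace

core = B
for mode, U in enumerate(factors):
    core = mode_mult(core, U.T, mode)              # project onto subspaces
B_hat = core
for mode, U in enumerate(factors):
    B_hat = mode_mult(B_hat, U, mode)              # map back

assert np.allclose(B_hat, B)                       # exact at the true rank
print("HOSVD reconstructs a multilinear rank-(2,2,2) tensor exactly")
```

Truncating below the true multilinear rank is where the HOSVD loses optimality, and where the Grassmann-manifold Newton methods of the papers come in.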

```
@phdthesis{diva2:18011,
author = {Savas, Berkant},
title = {{Algorithms in data mining using matrix and tensor methods}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1178}},
year = {2008},
address = {Sweden},
}
```

During the last decade, potential theory and p-harmonic functions have been developed in the setting of doubling metric measure spaces supporting a *p*-Poincaré inequality. This theory unifies, and has applications in, several areas of analysis, such as weighted Sobolev spaces, calculus on Riemannian manifolds and Carnot groups, subelliptic differential operators and potential theory on graphs.

In this thesis we investigate the double obstacle problem for p-harmonic functions on metric spaces. We show the existence and uniqueness of solutions and their continuity when the obstacles are continuous. Moreover the solution is p-harmonic in the open set where it does not touch the continuous obstacles. The boundary regularity of the solutions is also studied.

Furthermore we study two kinds of convergence problems for the solutions. First we let the obstacles vary and fix the boundary values and show the convergence of the solutions. Second we consider an increasing sequence of open sets, with union Ω, and fix the obstacles and the boundary values. We show that the solutions of the obstacle problems in these sets converge to the solution of the corresponding problem in Ω.

```
@phdthesis{diva2:17335,
author = {Farnana, Zohra},
title = {{The Double Obstacle Problem on Metric Spaces}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1342}},
year = {2008},
address = {Sweden},
}
```

Spatio-temporal processes like multivariate time series and stochastic processes occur in many applications, for example in observations from functional magnetic resonance imaging (fMRI) or positron emission tomography (PET). It is of interest to test independence between *k* sets of the variables, that is, to test spatial independence.

This thesis deals with the problem of testing spatial independence for dependent observations. The sample observation matrix is assumed to follow a matrix normal distribution with a separable covariance matrix, in other words it can be written as a Kronecker product of two positive definite matrices. Instead of having a sample observation matrix with independent columns, a covariance between the columns is considered and this covariance matrix is interpreted as a temporal covariance. The main results in this thesis are the computations of the maximum likelihood estimates and the null distribution of the likelihood ratio statistic. Two cases are considered, when the temporal covariance is known and when it is unknown. When the temporal covariance is known, the maximum likelihood estimates are computed and the asymptotic null distribution is shown to be similar to the independent observation case. In the case when the temporal covariance is unknown the maximum likelihood estimates of the parameters are found by an iterative alternating algorithm.
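
The separable-covariance model can be sketched concretely; this is an illustrative simulation, not the thesis's estimation procedure. With column-stacking vectorization, X = Σ^{1/2} E Ψ^{1/2} for an i.i.d. standard normal E gives cov(vec X) = Ψ ⊗ Σ; the matrices below are illustrative.

```python
import numpy as np

# Sketch of the separable-covariance model from the abstract: the sample
# matrix X (p x n) is matrix normal with cov(vec(X)) = Psi kron Sigma,
# Sigma spatial (p x p), Psi temporal (n x n). Matrices are illustrative.

rng = np.random.default_rng(1)
p, n = 3, 4
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])        # spatial covariance (p x p)
# AR(1)-like temporal covariance (n x n)
Psi = 0.6 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# X = Sigma^{1/2} E Psi^{1/2}, E with iid N(0,1) entries, gives
# cov(vec(X)) = Psi kron Sigma under the column-stacking convention.
Ls = np.linalg.cholesky(Sigma)
Lp = np.linalg.cholesky(Psi)
E = rng.standard_normal((p, n))
X = Ls @ E @ Lp.T
```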

It is well known that distributions of quadratic forms arise when testing hypotheses about covariance matrices. A generalization of the distribution of the multivariate quadratic form *XAX'*, where *X* is a (*p* × *n*) normally distributed matrix and *A* is an (*n* × *n*) symmetric real matrix, is presented. It is shown that the distribution of the quadratic form is the same as the distribution of a weighted sum of noncentral Wishart distributed matrices.

```
@phdthesis{diva2:258570,
author = {Ohlson, Martin},
title = {{Testing spatial independence using a separable covariance matrix}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1299}},
year = {2007},
address = {Sweden},
}
```

This thesis deals with large-scale industrial problems that can be formulated using mixed integer linear programming (MIP) models. Because of the large problem size, it is often not possible to apply standard solution methods, and special techniques must be used. In this thesis, both full optimization and optimization-based heuristic approaches are developed.

The body of this thesis consists of five research papers. In the papers we consider industrial cases based on three applications: production planning at Södra Cell AB, ship routing and distribution at Södra Cell AB, and staff routing and scheduling in the Swedish home care.

We develop two large-scale MIP models for the production-planning problem. For solving them, we use both a direct approach and a combination of column generation and constraint branching. The latter is a technique to control the branching rules in a branch and bound framework and takes application-specific properties into account.

For the ship routing problem, we present a MIP model and develop two solution methods. The first is based on an iterative rolling time horizon. In each step a part of the model is solved and later fixed. This is repeated until the full problem is solved. The second approach is to combine a genetic algorithm and linear programming. From the MIP model, we obtain solution bounds rarely present in genetic algorithm frameworks.
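
The iterative rolling-time-horizon idea can be sketched on a toy lot-sizing LP (all data, the window length, and the solver choice are illustrative; the actual ship routing model is a far larger MIP):

```python
import numpy as np
from scipy.optimize import linprog

# Toy single-product lot-sizing: choose production x_t >= 0 to meet
# demand d_t with inventory carry-over, minimizing production plus
# holding cost. Rolling horizon: solve a W-period window, fix only the
# first period's decision, then advance. All data is illustrative.

d = np.array([4., 6., 3., 8., 5., 7.])   # demand per period
c = np.array([2., 3., 2., 4., 3., 2.])   # unit production cost
h = 0.5                                   # unit holding cost
cap = 7.0                                 # production capacity

def solve_window(d_w, c_w, inv0):
    """LP over one window: variables (x_1..x_W, s_1..s_W) with
    inventory balance s_t = s_{t-1} + x_t - d_t, s_0 = inv0."""
    W = len(d_w)
    cost = np.concatenate([c_w, np.full(W, h)])
    A_eq = np.zeros((W, 2 * W)); b_eq = np.zeros(W)
    for t in range(W):
        A_eq[t, t] = -1.0            # -x_t
        A_eq[t, W + t] = 1.0         # +s_t
        if t > 0:
            A_eq[t, W + t - 1] = -1.0  # -s_{t-1}
        b_eq[t] = -d_w[t] + (inv0 if t == 0 else 0.0)
    bounds = [(0, cap)] * W + [(0, None)] * W
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:W]

W, inv, plan = 3, 0.0, []
for t in range(len(d)):
    x = solve_window(d[t:t + W], c[t:t + W], inv)
    plan.append(x[0])            # fix the first period's decision
    inv = inv + x[0] - d[t]      # roll the horizon forward
print(np.round(plan, 2))
```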

In the staff routing problem there are special synchronization constraints and we introduce a new MIP model. An exact solution approach based on column generation and branch and bound is developed. We also suggest a variable fixing heuristic, which we use for solving the more general problem with requirements on equal workload.

In the computational experiments, we use data instances obtained from real-world cases, and generated instances based on real-world cases. The numerical results show that high quality solutions can be obtained within reasonable time limits in all applications.

```
@phdthesis{diva2:17885,
author = {Bredström, David},
title = {{Models and solution methods for large-scale industrial mixed integer programming problems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1071}},
year = {2007},
address = {Sweden},
}
```

Model-based probabilistic classification is heavily used in data mining and machine learning. To make learning computationally feasible, however, these models may need approximation steps. One popular approximation in classification is to model the class conditional densities by factorization, which in the independence case is usually called the ’Naïve Bayes’ classifier. In general, probabilistic independence cannot model all distributions exactly, and not much has been published on how much a discrete distribution can differ from the independence assumption. In this dissertation the approximation quality of factorizations is analyzed in two articles.
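
The question of how far a discrete distribution can lie from its independence factorization can be made concrete with a toy computation; the total variation measure and the 2x2 examples below are illustrative, not the dissertation's analysis:

```python
import numpy as np

# Sketch: distance between a discrete joint distribution and the product
# of its marginals (the 'Naive Bayes' factorization in the bivariate
# case), measured here in total variation. Examples are illustrative.

def tv_from_independence(P):
    """Total variation distance between a joint pmf P (2-d array)
    and the product of its marginals."""
    px = P.sum(axis=1, keepdims=True)
    py = P.sum(axis=0, keepdims=True)
    return 0.5 * np.abs(P - px * py).sum()

P_indep = np.outer([0.3, 0.7], [0.4, 0.6])   # exactly factorizes, ~0
P_dep = np.array([[0.5, 0.0], [0.0, 0.5]])   # perfectly correlated
print(tv_from_independence(P_indep))
print(tv_from_independence(P_dep))           # 0.5
```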

A specific class of factorizations is the factorizations represented by graphical models. Several challenges arise from the use of statistical methods for learning graphical models from data. Examples of problems include the increase in the number of graphical model structures as a function of the number of nodes, and the equivalence of statistical models determined by different graphical models. In one article an algorithm for learning graphical models is presented. In the final article an algorithm for clustering parts of DNA strings is developed, and a graphical representation for the remaining DNA part is learned.

```
@phdthesis{diva2:17846,
author = {Ekdahl, Magnus},
title = {{On approximations and computations in probabilistic classification and in learning of graphical models}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1141}},
year = {2007},
address = {Sweden},
}
```

The focus of this thesis is the use of Operations Research for applications in the forest industry. Optimization models and methods have been developed for problems in the forest supply chain and have been integrated in decision support systems. The problems considered in this thesis are operative, with a planning horizon of less than a month. Short solution times for the methods and the feasibility of the models used are important aspects. The body of this thesis consists of eight research papers, six of which consider operative problems and follow the forest supply chain. The industrial applications include routing of forwarders, routing of logging trucks, a process control problem, and roll cutting problems. The other two papers consider an operative planning problem in the home care sector; they are spin-offs from one of the other projects in this thesis. In these applications both linear and nonlinear problems occur.

The forwarding problem is to decide routes for forwarders to pick up the small piles of logs the harvesters have left in the harvest areas. The forwarders then put the logs adjacent to forest roads. The logging truck problem is to decide routes for logging trucks to pick up the piles created by the forwarders and transport them to demand points, for example pulp or paper mills. The process control problem appears in the bleaching stage of a pulp mill, where the control variables are the bleaching chemical charges. The cost of bleaching chemicals is minimized while a finishing brightness within target values is ensured. Mainly two roll cutting problems are studied: one is to minimize the number of cutting patterns, and the other is to minimize the number of reels when defects in the paper reels are considered. The solution methods developed for the forwarding problem have also been applied to a routing problem which appears in staff planning for home care operations.

The different DSS developed and implemented have been tested and several are in daily industrial use. In each of the papers, we have developed robust OR models and quick and effective OR methods. The savings from using the systems vary but are typically in the range 5-20%.

```
@phdthesis{diva2:23492,
author = {Flisberg, Patrik},
title = {{Application of operations research in operative planning in the forest industry}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1104}},
year = {2007},
address = {Sweden},
}
```

Ill-posed sets of linear equations typically arise when discretizing certain types of integral transforms. A well-known example is image reconstruction, which can be modelled using the Radon transform. After expanding the solution into a finite series of basis functions, a large, sparse and ill-conditioned linear system arises. We consider the solution of such systems. In particular we study a new class of iteration methods named DROP (for Diagonal Relaxed Orthogonal Projections) constructed for solving both linear equations and linear inequalities. This class can also be viewed, when applied to linear equations, as a generalized Landweber iteration. The method is compared with other iteration methods using test data from a medical application and from electron microscopy. Our theoretical analysis includes convergence proofs of the fully simultaneous DROP algorithm for linear equations without consistency assumptions, and of block-iterative algorithms both for linear equations and linear inequalities in the consistent case.

When applying an iterative solver to an ill-posed set of linear equations, the error typically decreases initially, but after some iterations (depending on the amount of noise in the data and the degree of ill-posedness) it starts to increase. This phenomenon is called semi-convergence. It is therefore vital to find good stopping rules for the iteration.

We describe a class of stopping rules for Landweber-type iterations for solving linear inverse problems. The class includes, e.g., the well-known discrepancy principle and the monotone error rule. We also unify the error analysis of these two methods. The stopping rules depend critically on a certain parameter whose value needs to be specified. A training procedure is therefore introduced for securing robustness. The advantages of using trained rules are demonstrated on examples taken from image reconstruction from projections.
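
A minimal sketch of a Landweber iteration stopped by the discrepancy principle, on a synthetic ill-conditioned system (the matrix, noise level, and the parameter tau are illustrative; the thesis's trained rules are more refined):

```python
import numpy as np

# Sketch: Landweber iteration x <- x + w * A^T (b - A x), stopped the
# first time ||A x - b|| <= tau * delta, where delta is the noise level
# (the discrepancy principle). All data below is illustrative.

rng = np.random.default_rng(2)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)                  # decaying singular values
A = U @ np.diag(s) @ V.T                 # ill-conditioned test matrix
x_true = rng.standard_normal(n)
noise = rng.standard_normal(n)
delta = 1e-3                             # noise level
b = A @ x_true + delta * noise / np.linalg.norm(noise)

w = 1.0 / np.linalg.norm(A, 2) ** 2      # relaxation, w < 2 / ||A||^2
tau = 1.1
x = np.zeros(n)
for k in range(100000):
    r = b - A @ x
    if np.linalg.norm(r) <= tau * delta:  # discrepancy principle
        break
    x = x + w * A.T @ r
```

Stopping early this way avoids the semi-convergence described above: iterating further would start fitting the noise.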

```
@phdthesis{diva2:16771,
author = {Nikazad, Touraj},
title = {{The Use of Landweber Algorithm in Image Reconstruction}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1311}},
year = {2007},
address = {Sweden},
}
```

In this thesis we investigate the superenergy tensor that was introduced by Chevreton in 1964 as an electromagnetic counterpart to the Bel-Robinson tensor for the gravitational field.

We show that in Einstein-Maxwell spacetimes with a source-free electromagnetic field, the Chevreton superenergy tensor has many interesting properties. It is a completely symmetric rank-4 tensor and it gives rise to conserved currents for orthogonally transitive 1- and 2-parameter isometry groups.

The trace of this tensor is divergence-free and it is related to the Bach tensor. We investigate the implications for when the trace vanishes and we are able to determine the full set of such spacetimes. We use this to treat the problem of Einstein-Maxwell spacetimes that are conformally related to Einstein spaces, and we find new exact solutions with this property.

```
@phdthesis{diva2:16839,
author = {Eriksson, Ingemar},
title = {{The Chevreton Superenergy Tensor in Einstein-Maxwell Spacetimes}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1136}},
year = {2007},
address = {Sweden},
}
```

The Cauchy–Riemann equations admit a bilinear multiplication of solutions, since the product of two holomorphic functions is again holomorphic. This multiplication plays the role of a nonlinear superposition principle for solutions, allowing for construction of new solutions from already known ones, and it leads to the exceptional property of the Cauchy–Riemann equations that all solutions can locally be built from power series of a single solution *z* = *x* + i*y* ∈ C.
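
In Wirtinger notation this superposition principle is simply the Leibniz rule for the operator $\partial/\partial\bar z$:

```latex
% The Cauchy-Riemann equations for f = u + iv read, in Wirtinger form,
% \partial f / \partial \bar z = 0, and the bilinear multiplication of
% solutions is the product rule:
\[
  \frac{\partial}{\partial \bar z}(fg)
  = f\,\frac{\partial g}{\partial \bar z}
  + g\,\frac{\partial f}{\partial \bar z} = 0
  \quad\text{whenever}\quad
  \frac{\partial f}{\partial \bar z}
  = \frac{\partial g}{\partial \bar z} = 0 .
\]
```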

In this thesis we have found a differential algebraic characterization of linear first order systems of partial differential equations admitting a bilinear ∗-multiplication of solutions, and we have determined large new classes of systems having this property. Among them are the already known quasi-Cauchy–Riemann equations, characterizing integrable Newton equations, and the gradient equations ∇*f* = *M*∇*g* with constant matrices *M*. A systematic description of linear systems of PDEs with variable coefficients has been given for systems with few independent and few dependent variables.

An important property of the ∗-multiplication is that infinite families of solutions can be constructed algebraically as power series of known solutions. For the equation ∇*f* = *M*∇*g* it has been proved that the general solution, found by Jodeit and Olver, can be locally represented as convergent power series of a single simple solution, similarly to solutions of the Cauchy–Riemann equations.

```
@phdthesis{diva2:24241,
author = {Jonasson, Jens},
title = {{Systems of Linear First Order Partial Differential Equations Admitting a Bilinear Multiplication of Solutions}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1135}},
year = {2007},
address = {Sweden},
}
```

Data assimilation arises in a vast array of different topics: traditionally in meteorological and oceanographic modelling and in wind tunnel or water tunnel experiments, and recently in biomedical engineering. Data assimilation is a process for combining measured or observed data with a mathematical model to obtain estimates of the expected data. The measured data usually contain inaccuracies and are given with low spatial and/or temporal resolution.

In this thesis data assimilation for time dependent fluid flow is considered. The flow is assumed to satisfy a given partial differential equation, representing the mathematical model. The problem is to determine the initial state which leads to a flow field which satisfies the flow equation and is close to the given data.

In the first part we consider one-dimensional flow governed by Burgers’ equation. We analyze two iterative methods for the data assimilation problem for this equation. One of them, the so-called adjoint optimization method, is based on minimization in the L2-norm. We show that this minimization problem is ill-posed but that the adjoint optimization iterative method is regularizing and represents the well-known Landweber method in inverse problems. The second method is based on L2-minimization of the gradient. We prove that this problem always has a solution. We present numerical comparisons of the two methods.

In the second part, three-dimensional inviscid compressible flow represented by the Euler equations is considered. The adjoint technique is used to obtain an explicit formula for the gradient of the optimization problem. The gradient is used in combination with a quasi-Newton method to obtain a solution. The main focus is the derivation of the adjoint equations with boundary conditions. The existing flow solver EDGE has been modified to solve the adjoint Euler equations, and the gradient computations are validated numerically. The proposed iteration method is applied to a test problem where the initial pressure state is reconstructed, for exact data as well as when disturbances in the data are present. The numerical convergence and the results are satisfactory.

```
@phdthesis{diva2:24091,
author = {Lundvall, Johan},
title = {{Data Assimilation in Fluid Dynamics using Adjoint Optimization}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1121}},
year = {2007},
address = {Sweden},
}
```

The thesis studies how cost-effective maintenance plans for paved roads can be generated, based on information about the current road surface condition and on functional models of cost and state changes, partly developed in cooperation with the Swedish National Road Administration (Vägverket, VV). The intended use is at the strategic and programme levels, before more detailed object information is available. Unlike models used so far, individual maintenance plans are generated for each road segment (a stretch of road that is homogeneous with respect to current pavement state and pavement history), in continuous state and action spaces. By using Lagrangian relaxation techniques, the special benefit/cost-ratio constraints that VV imposes on each maintenance object can be handled naturally through the dual prices of the budget constraints. The number of road segments competing for the budget funds is usually large. Data from VV's road database for the county of Värmland have been used, comprising about 9000 road segments. Owing to the large amount of data, the computer programs have been implemented for parallel processing. During the thesis work, the project was granted access to the Monolith PC cluster at NSC. Model and method development has been necessary in order to reduce the optimization run times. By aggregating the road segments into road classes, good initial values of the dual prices have been obtained. The development of a special residual value routine has made it possible to reduce the explicitly treated time period. In solving the dual subproblem, special attention has been paid to the discretization effects that arise in the dynamic programming method. One type of application concerns a sub-network, for example a single road. Validation studies have been performed on road 63 in Värmland, with promising but not fully satisfactory results (see below). A special model for coordinated maintenance takes into account the economies of scale of treating a whole stretch of road simultaneously. The second main type of study concerns an entire network.
Several types of methods have been applied, both for solving the relaxed optimization problems and for generating maintenance plans that satisfy the budget constraints. For a decent discretization, the run times for the whole of Värmland are less than 80 CPU hours. An a posteriori primal heuristic reduces the demands on parallel processing to a small PC cluster. The thesis further studies the effects of redistributed budget funds and of a transition to a transparent, stochastic model, both of which show small deviations from the base model.

The optimization results for Värmland indicate that budget levels of about 40% of Värmland's actual maintenance budget are sufficient. However, important cost-driving factors are missing in this first modelling round, for example certain functional performance measures (safety), all environmental performance measures (noise, etc.) and structural performance (e.g. bearing capacity, which is modelled only through an age measure). To increase confidence in pavement management systems in general and optimization in particular, the deviations should be analysed further and lead to improvements in condition measurement, condition-effect and cost modelling, and mathematical modelling and implementation.
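
A minimal sketch of the price-directive idea: relaxing the budget constraint with a dual price makes the per-segment subproblems independent, and the price is adjusted by a subgradient step (the data and the single one-period budget below are illustrative, far simpler than the thesis's dynamic programming model):

```python
import numpy as np

# Sketch: Lagrangian relaxation of one budget constraint. Each road
# segment independently picks the action minimizing cost + lam * spend;
# the dual price lam is updated by a subgradient step on the budget
# violation. All data is illustrative.

rng = np.random.default_rng(3)
n_seg, n_act = 8, 3
cost = rng.uniform(1, 10, (n_seg, n_act))   # future (societal) cost
spend = rng.uniform(0, 4, (n_seg, n_act))   # immediate budget use
spend[:, 0] = 0.0                           # action 0 = "do nothing"
budget = 6.0

lam = 0.0
for k in range(200):
    # Decoupled per-segment subproblems: argmin over actions
    choice = np.argmin(cost + lam * spend, axis=1)
    used = spend[np.arange(n_seg), choice].sum()
    # Subgradient step on the dual price, diminishing step length
    lam = max(0.0, lam + (used - budget) / (k + 1))
```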

```
@phdthesis{diva2:23632,
author = {Andersson, Per-Åke},
title = {{Multi-year maintenance optimisation for paved public roads - segment based modelling and price-directive decomposition}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1107}},
year = {2007},
address = {Sweden},
}
```

The scope of this thesis is modelling and solving large-scale planning problems in the supply chain within the forest industry. Five research papers are included, the first three of which focus on the modelling, and the last two on the solution methods. All problems included are tactical multi-commodity problems expressed as mixed integer programming (MIP) models. The work has been done in collaboration with two Swedish companies within the forest industry.

In Paper I, a problem concerning the supply chain of forest fuel for Sydved Energileveranser AB is modelled and solved. We study the problem of deciding when and where forest residues are to be converted into wood chips, and how the residues and chips are to be transported and stored in order to satisfy energy demand at heating plants. The company has long-term contracts with forest owners and saw mills. Decisions in the model include whether or not additional harvest areas and saw mills are to be contracted and which terminals to use. The planning horizon is one year and monthly time periods are used.

Papers II--V are based on planning problems at Södra Cell AB. The planning horizon is normally one year. Papers II--III consider only one time period. In Paper II the supply chain from pulp mills to customers is modelled and the combined problem of deciding terminal locations and which ship routes to use is studied. Shipping vessels chartered on short or long term are used to transport products to terminals in Europe. From each terminal, the products are transported to customers by truck, train, or a combination of both. In addition, trains and trucks can be used for transports directly to customers from mills. In Paper III the entire supply chain, from harvest areas to customers, is considered. Decisions included are transportation of raw materials, production mix, the distribution of pulp products, and the selection of potential orders and their quantities at customers. The ship routes are considered as flow links.

In Papers IV--V the problems in Papers II--III are combined into one model and several time periods are used. Lagrangian heuristics based on Lagrangian decomposition are used as solution methods in both papers. In Paper IV, the approach leads to subproblems for each time period, whereas in Paper V, another approach that results in subproblems for different parts of the supply chain is developed.

All models are based on real data from the companies. The models are detailed and describe the problems accurately. The solution methods are developed such that the solution time is kept within practical limits. Results from Papers II--III have been used by Södra Cell AB to support the change of the terminal structure as well as in budget planning.

```
@phdthesis{diva2:23550,
author = {Gunnarsson Lidestam, Helene},
title = {{Supply chain optimization in the forest industry}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1105}},
year = {2007},
address = {Sweden},
}
```

When analysing water samples taken from, for example, a watercourse, the concentrations of various substances are determined. These concentrations often depend on the water flow. It is of interest to find out whether observed changes in the concentrations are due to natural variations or are caused by other factors. To investigate this, a statistical time series model containing unknown parameters has been proposed. Fitting the model to measured data leads to an underdetermined system of equations. The thesis studies, among other things, different ways of ensuring a unique and reasonable solution. The basic idea is to impose certain additional conditions on the sought parameters. In the model studied one can, for example, require that certain parameters do not vary strongly over time while still allowing seasonal variations. This is done by regularizing these parameters in the model.

This gives rise to a least squares problem with one or two regularization parameters. Since not all of the parameters are regularized, we moreover obtain a partially regularized least squares problem. In general, the values of the regularization parameters are not known, and the problem may have to be solved for several different values in order to obtain a reasonable solution. The thesis studies how this problem can be solved numerically, mainly with two different methods: an iterative method and a direct method. In addition, some ways of determining suitable values of the regularization parameters are studied.

In an iterative solution method, a given initial approximation is improved step by step until a suitably chosen stopping criterion is fulfilled. Here we use the conjugate gradient method with specially constructed preconditioners. The number of iterations required to solve the problem with and without preconditioning is compared both theoretically and in practice. The method is investigated here only with the same value for the two regularization parameters.

In the direct method, QR factorization is used to solve the least squares problem. The idea is to first perform the computations that can be carried out independently of the regularization parameters, while taking the special structure of the problem into account.

To determine values of the regularization parameters, Reinsch’s method is generalized to the case of two parameters. Generalized cross-validation and a less computationally demanding Monte Carlo method are also investigated.
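
A minimal sketch of a partially regularized least squares problem, where only the parameter block x2 is penalized (the data and the single regularization parameter are illustrative; the thesis treats larger structured problems and two parameters):

```python
import numpy as np

# Sketch: partially regularized least squares,
#   min_x ||A1 x1 + A2 x2 - b||^2 + lam * ||x2||^2,
# i.e. only the block x2 is regularized. Solved here by stacking a
# sqrt(lam)*I block and calling a dense solver. Data is illustrative.

rng = np.random.default_rng(4)
m, n1, n2 = 40, 3, 5
A1 = rng.standard_normal((m, n1))
A2 = rng.standard_normal((m, n2))
b = rng.standard_normal(m)
lam = 0.1

A = np.hstack([A1, A2])
# Augmented system: [A; [0, sqrt(lam) I]] x ~ [b; 0]
Areg = np.vstack([A, np.hstack([np.zeros((n2, n1)),
                                np.sqrt(lam) * np.eye(n2)])])
breg = np.concatenate([b, np.zeros(n2)])
x, *_ = np.linalg.lstsq(Areg, breg, rcond=None)
x1, x2 = x[:n1], x[n1:]
```

Changing lam means re-solving; a direct method that factorizes the parameter-independent part once, as in the abstract, amortizes that cost.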

```
@phdthesis{diva2:23480,
author = {Skoglund, Ingegerd},
title = {{Algorithms for a Partially Regularized Least Squares Problem}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1310}},
year = {2007},
address = {Sweden},
}
```

This thesis concerns the development of novel feasible direction type algorithms for constrained nonlinear optimization. The new algorithms are based upon enhancements of the search direction determination and the line search steps.

The Frank-Wolfe method is popular for solving certain structured linearly constrained nonlinear problems, although its rate of convergence is often poor. We develop improved Frank-Wolfe type algorithms based on conjugate directions. In the conjugate direction Frank-Wolfe method, a line search is performed along a direction which is conjugate to the previous one with respect to the Hessian matrix of the objective. A further refinement of this method is derived by applying conjugation with respect to the last two directions, instead of only the last one.
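
For orientation, a minimal classical Frank-Wolfe iteration on the probability simplex (illustrative data; the thesis's contribution is the conjugate-direction modification of the search direction, which is not reproduced here):

```python
import numpy as np

# Sketch: baseline Frank-Wolfe for a convex quadratic
# f(x) = 0.5 x^T Q x - c^T x over the probability simplex.
# The linear subproblem over the simplex is solved by picking the
# vertex with the smallest gradient component. Data is illustrative.

rng = np.random.default_rng(5)
n = 5
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)       # positive definite Hessian
c = rng.standard_normal(n)

def grad(x):
    return Q @ x - c

x = np.full(n, 1.0 / n)           # feasible starting point
for k in range(500):
    g = grad(x)
    v = np.zeros(n); v[np.argmin(g)] = 1.0   # LP over simplex = vertex
    d = v - x
    gap = -g @ d                  # Frank-Wolfe duality gap, >= 0
    if gap < 1e-8:
        break
    # Exact line search for a quadratic, clipped to stay feasible
    t = min(1.0, gap / (d @ Q @ d))
    x = x + t * d
```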

The new methods are applied to the single-class user traffic equilibrium problem, the multi-class user traffic equilibrium problem under social marginal cost pricing, and the stochastic transportation problem. In a limited set of computational tests the algorithms turn out to be quite efficient. Additionally, a feasible direction method with multi-dimensional search for the stochastic transportation problem is developed.

We also derive a novel sequential linear programming algorithm for general constrained nonlinear optimization problems, with the intention of being able to attack problems with large numbers of variables and constraints. The algorithm is based on inner approximations of both the primal and the dual spaces, which yields a method combining column and constraint generation in the primal space.

```
@phdthesis{diva2:23509,
author = {Mitradjieva-Daneva, Maria},
title = {{Feasible Direction Methods for Constrained Nonlinear Optimization: Suggestions for Improvements}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1095}},
year = {2007},
address = {Sweden},
}
```

In the thesis we consider two types of problems. In Paper 1, we study small solutions to a time-independent nonlinear elliptic partial differential equation of Emden-Fowler type in a semi-infinite cylinder. The asymptotic behaviour of these solutions at infinity is determined. First, the equation under the Neumann boundary condition is studied. We show that any solution small enough either vanishes at infinity or tends to a nonzero periodic solution to a nonlinear ordinary differential equation. Thereafter, the same equation under the Dirichlet boundary condition is studied, the nonlinear term and right-hand side now being slightly more general than in the Neumann problem. Here, an estimate of the solution in terms of the right-hand side of the equation is given. If the equation is homogeneous, then every solution small enough tends to zero. Moreover, if the cross-section is star-shaped and the nonlinear term in the equation is subject to some additional constraints, then every bounded solution to the homogeneous Dirichlet problem vanishes at infinity.

In Paper 2, we study asymptotics as t → ∞ of solutions to a linear parabolic system of equations with time-dependent coefficients in Ω × (0, ∞), where Ω is a bounded domain. On ∂Ω × (0, ∞) we prescribe the homogeneous Dirichlet boundary condition. For large values of t, the coefficients in the elliptic part are close to time-independent coefficients in an integral sense which is described by a certain function κ(t). This includes in particular situations when the coefficients may take different values on different parts of Ω and the boundaries between them can move with t but stabilize as t → ∞. The main result is an asymptotic representation of solutions for large t. As a corollary, it is proved that if κ ∈ L¹(0, ∞), then the solution behaves asymptotically as the solution to a parabolic system with time-independent coefficients.

```
@phdthesis{diva2:23075,
author = {Rand, Peter},
title = {{Asymptotic analysis of solutions to elliptic and parabolic problems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1044}},
year = {2006},
address = {Sweden},
}
```

A closed Riemann surface which can be realized as a 3-sheeted covering of the Riemann sphere is called trigonal, and such a covering is called a trigonal morphism. Accola showed that the trigonal morphism is unique for Riemann surfaces of genus g ≥ 5. This thesis characterizes the cyclic trigonal Riemann surfaces of genus 4 with non-unique trigonal morphism using the automorphism groups of the surfaces. The thesis shows that Accola’s bound is sharp with the existence of a uniparametric family of cyclic trigonal Riemann surfaces of genus 4 having several trigonal morphisms. The structure of the moduli space of trigonal Riemann surfaces of genus 4 is also characterized.

Finally, by using the same technique as in the case of cyclic trigonal Riemann surfaces of genus 4, we are able to deal with *p*-gonal Riemann surfaces and show that Accola’s bound is sharp for *p*-gonal Riemann surfaces. Furthermore, we study families of *p*-gonal Riemann surfaces of genus (*p* − 1)^{2} with two *p*-gonal morphisms, and describe the structure of their moduli space.

```
@phdthesis{diva2:23078,
author = {Ying, Daniel},
title = {{On the Moduli Space of Cyclic Trigonal Riemann Surfaces of Genus 4}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1060}},
year = {2006},
address = {Sweden},
}
```

This thesis is concerned with the computation of certain subspaces connected to a given matrix, where the closely related problem of approximating the matrix with one of lower rank is given special attention.

To determine the rank and obtain bases for fundamental subspaces such as the range and null space of a matrix, computing the singular value decomposition (SVD) is the standard method. When new data are added, like in adaptive signal processing, a more economic alternative to the SVD is to use a rank-revealing UTV (ULV or URV) decomposition since it can be updated more easily.
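
A minimal SVD baseline for the tasks mentioned above (numerical rank, range basis, and low-rank approximation) on a synthetic noisy low-rank matrix; sizes, noise level, and tolerance are illustrative, and the UTV decompositions in the thesis replace exactly this computation when cheap updating is needed:

```python
import numpy as np

# Sketch: use the SVD of a noisy low-rank matrix to estimate numerical
# rank, an orthonormal basis for the range, and the best low-rank
# approximation. All sizes and the noise level are illustrative.

rng = np.random.default_rng(6)
m, n, r = 30, 20, 4
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r
A_noisy = A + 1e-8 * rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A_noisy, full_matrices=False)
tol = 1e-6 * s[0]                 # relative singular value threshold
rank = int((s > tol).sum())       # numerical rank
Q = U[:, :rank]                   # orthonormal range basis
A_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # truncated SVD approximation
```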

The scenario in part I of the thesis is that the matrix to be updated is either a product or a quotient of two other matrices. There exist implicit algorithms for computing the SVD of a product or quotient that operate on the two matrices separately. For the corresponding problem of an URV decomposition of a product or quotient, originally sketched by S. Qiao, we give the details of the updating algorithms. Sample numerical experiments confirm that in some cases the quality of the approximate subspaces is degraded if the product is formed explicitly, compared to the subspaces obtained by the implicitly computed URV. We argue that the same pros and cons that affect the choice between the URV and ULV decomposition of one matrix carry over to the choice between the implicit URV decomposition and the more established ULLV decomposition in the quotient case. As a signal processing application, we track the range of an estimated cross-covariance matrix.

We also describe the updating issues of a decomposition that reveals the ranks of the individual matrices in a product. That decomposition suffers from a difficult decision about the rank of the product and will not be tested as a competitor to the implicit URV decomposition referred to above.

A common situation in scientific computing is that the matrix is too large to admit a full factorization within reasonable time. In that case iterative methods must be employed, where Lanczos-type algorithms are the most widely used. In part II we discuss the formulation of standard numerical optimization methods on the Grassmann manifold, whose objects are subspaces, and focus on the application to numerical linear algebra problems. This approach allows us to (re-)derive algorithms for the partial symmetric eigenvalue problem, and the inexact Newton method is given special attention.

A recent method is the Jacobi-Davidson (JD) algorithm that can be seen both as a variation of an inexact Newton method for solving a set of nonlinear equations/minimizing a function and as an expanding subspace algorithm that is equivalent to Lanczos if the equation in each step is solved exactly. Our contribution is an algorithm that is fairly robust with a subspace that is only twice as large as the desired one. A large part treats the implementation issues associated with the solution of a correction equation including stopping rules and the use of preconditioners.

Other numerical linear algebra problems call for a pair of subspaces. We give Grassmann-type algorithms for the partial SVD problem and for low rank approximation of matrices with missing entries, but restrict ourselves to demonstrating their efficiency when the Newton equation is solved exactly.

```
@phdthesis{diva2:23076,
author = {Simonsson, Lennart},
title = {{Subspace Computations via Matrix Decompositions and Geometric Optimization}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1052}},
year = {2006},
address = {Sweden},
}
```

There seems to be a widespread belief that mathematical texts are so special that one must learn a particular type of reading ability to understand them. This belief, however, does not appear to be based on research results, since it turns out that little research has been carried out on reading comprehension in mathematics.

The main purpose of this thesis is to examine whether special knowledge or abilities are required to read mathematical texts. The focus is on students' reading of different types of texts dealing with mathematics at an introductory university level. This is studied from two perspectives: a cognitive one, where reading abilities and subject knowledge are studied in relation to reading comprehension, and a metacognitive one, which includes beliefs and how readers judge whether they have understood a text.

The thesis comprises three empirical studies together with theoretical discussions based, among other things, on two literature surveys: one on the properties of mathematical texts and one on reading in relation to problem solving. The empirical studies compare the reading of mathematical texts with the reading of texts on other subjects, as well as the reading of different types of mathematical texts, with particular attention to the use of symbols and to whether the content concerns concepts or procedures. In addition, it is examined how students perceive their own reading comprehension, as well as reading and texts in general in mathematics, and whether variations in these beliefs can be linked to reading comprehension.

Results from the studies in this thesis show that the students seem to use a special kind of reading ability for mathematical texts: focusing on the symbols in a text. For mathematical texts without symbols, a more general reading ability is used, that is, a reading ability also applied to texts on other subjects. When symbols are present, however, the text is read in a particular way, which affects reading comprehension differently for different types of texts (with respect to whether they concern concepts or procedures). Compared with when the general reading ability is used, the special reading ability produces poorer comprehension.

There seems to be a need to focus on reading and reading comprehension in mathematics education, since results show that courses at upper secondary school (course E) and at university (in algebra and analysis) do not affect the special reading ability. The results also indicate, however, that the primary issue is not necessarily learning to read mathematical texts in some particular way, but rather utilising an existing general reading ability also for mathematical texts.

Results from the metacognitive perspective reveal a difference between conscious aspects, such as beliefs and reflection on understanding, and unconscious aspects, such as the more automatic processes that make one understand a text as it is read, in which metacognitive processes are also active. In particular, beliefs, examined by means of a questionnaire, turn out to have no clear and independent effect on reading comprehension.

On the whole, given the texts used and the students who participated, there seems to be no reason to regard the reading of mathematical texts as a special kind of process requiring particular reading abilities. Students' development of special reading abilities may instead stem from never having experienced a need (or demand) to read different types of mathematical texts in ways where similarities with reading in general can be noticed and exploited.

```
@phdthesis{diva2:22667,
author = {Österholm, Magnus},
title = {{Kognitiva och metakognitiva perspektiv på läsförståelse inom matematik}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1057}},
year = {2006},
address = {Sweden},
}
```

The routing in OSPF telecommunication networks is determined by computing shortest paths with respect to link weights set by the network operator. All shortest paths to a destination are used by the routers when traffic is routed, and the routers split the traffic evenly when alternative shortest paths exist.
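The even-split rule described above can be sketched as follows; the small network, node names, and demand value are illustrative assumptions, not data from the thesis.

```python
import heapq
from collections import defaultdict

# Sketch of the OSPF routing rule: traffic follows shortest paths with
# respect to operator-set link weights, and a router splits its traffic
# evenly over all outgoing shortest-path links.

def dist_to(nodes, weight, dest):
    """Dijkstra on reversed arcs: shortest distance from every node to dest."""
    radj = defaultdict(list)
    for (u, v), w in weight.items():
        radj[v].append((u, w))
    dist = {v: float("inf") for v in nodes}
    dist[dest] = 0.0
    pq = [(0.0, dest)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        for u, w in radj[v]:
            if d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(pq, (dist[u], u))
    return dist

def ecmp_split(nodes, weight, src, dest, demand):
    """Arc loads when `demand` units from src to dest are split evenly."""
    dist = dist_to(nodes, weight, dest)
    load, inflow = defaultdict(float), defaultdict(float)
    inflow[src] = demand
    # Visit nodes in decreasing distance to dest, i.e. in topological
    # order of the shortest-path DAG, and split each node's inflow.
    for u in sorted(nodes, key=lambda v: -dist[v]):
        if u == dest or inflow[u] == 0.0:
            continue
        nxt = [v for (a, v), w in weight.items()
               if a == u and abs(dist[u] - (w + dist[v])) < 1e-12]
        for v in nxt:
            load[(u, v)] += inflow[u] / len(nxt)
            inflow[v] += inflow[u] / len(nxt)
    return dict(load)

# Diamond network: two equal-cost paths s-a-t and s-b-t share the load.
nodes = ["s", "a", "b", "t"]
weight = {("s", "a"): 1, ("s", "b"): 1, ("a", "t"): 1, ("b", "t"): 1}
print(ecmp_split(nodes, weight, "s", "t", 10.0))  # 5.0 units on every arc
```

It is exactly this weight-determined, even-split behaviour that the design models below must capture, which is why the link weights themselves appear as decision variables or implicit conditions.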

This thesis includes models and methods for various problems related to OSPF networks and OSPF routing. Two different models for the design of OSPF networks are presented. One model ensures that the network is capable of accommodating large parts of the traffic even if a single link fails. The OSPF protocol is modeled using constraints based on weights, and a heuristic method uses the link weights for generating good feasible solutions. The second model is based on routing patterns and does not include the link weights. The OSPF protocol is modeled by ensuring that no pair of routing patterns contains different paths between the same pair of nodes, which is a necessary condition for the existence of weights. A Lagrangean heuristic is proposed as a solution method.

The thesis also includes models that can be used for determining if a set of desired routing patterns is obtainable in an OSPF network. The models are infeasible if no link weights yield shortest path graphs which are identical to the patterns, which means that the patterns cannot be used simultaneously in an OSPF network. Infeasible instances are characterized using LP-duality, and a certain structure called "valid cycle" is found in most infeasible instances. A valid cycle involves two routing patterns and indicates which parts of the patterns are in conflict. This information can be used for finding weights that yield slightly modified patterns. Polynomial methods which can be used for finding valid cycles are presented. There are, however, also some infeasible instances where no valid cycle exists, so the absence of valid cycles is only a necessary condition for the existence of weights. In this case, the conflict involves more than two routing patterns, and such conflicts are characterized by a structure called "3-valid set of cycles". The absence of 3-valid sets of cycles is a stronger necessary condition for the existence of weights.

```
@phdthesis{diva2:22448,
author = {Broström, Peter},
title = {{Optimization Models and Methods for Telecommunication Networks using OSPF}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 1032}},
year = {2006},
address = {Sweden},
}
```

It is rarely possible to use an optimal classifier. Often the classifier used for a specific problem is an approximation of the optimal classifier. Methods are presented for evaluating the performance of an approximation in the model class of Bayesian networks. Specifically, for the approximation of class conditional independence, a performance bound is sharpened.

The class conditional independence approximation is connected to the minimum description length principle (MDL), which is connected to Jeffreys’ prior through commonly used assumptions. One algorithm for unsupervised classification is presented and compared against other unsupervised classifiers on three data sets.

```
@phdthesis{diva2:21540,
author = {Ekdahl, Magnus},
title = {{Approximations of Bayes Classifiers for Statistical Learning of Clusters}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1230}},
year = {2006},
address = {Sweden},
}
```

In this thesis we study multipole moments of axisymmetric spacetimes. Using the recursive definition of the multipole moments of Geroch and Hansen we develop a method for computing all multipole moments of a stationary axisymmetric spacetime without the use of a recursion. This is a generalisation of a method developed by Herberthson for the static case.

Using Herberthson’s method we also develop a method for finding a static axisymmetric spacetime with arbitrary prescribed multipole moments, subject to a specified convergence criterion. This method has, in general, a step where one has to find an explicit expression for an implicitly defined function. However, if the number of multipole moments is finite, we give an explicit expression in terms of power series.

```
@phdthesis{diva2:21293,
author = {Bäckdahl, Thomas},
title = {{Multipole moments of axisymmetric spacetimes}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1223}},
year = {2006},
address = {Sweden},
}
```

Many integer optimization problems of great practical importance are today attacked with column generation. The merits of column generation are that it enables the use of compact and flexible formulations of many complex optimization problems, and that it often gives rise to good (strong) formulations. A potential drawback with column generation is the well-known tailing-off phenomenon, that is, that the objective value improves rapidly in early iterations but very slowly in later iterations.

We study various techniques for accelerating column generation methods for (integer) linear programs. We evaluate the use of stabilized column generation on a Traveling Salesman Subtour Problem, TSSP, a problem which is closely related to the Prize Collecting Traveling Salesman Problem. We further study how subgradient optimization can be used with the purpose of predicting optimal columns (and, optionally, non-binding restrictions). This technique is tested on the TSSP and the multicommodity network flow problem.

Another idea evaluated in this thesis is the use of over-generation of columns in a column generation scheme for an integer programming problem, in this case a vehicle routing problem with a heterogeneous fleet of vehicles.

The thesis also includes a note on a class of relatives to the Held and Karp 1–tree problem. It is shown that two subclasses have polynomial time-complexity. Further, we note a mistake in an earlier work and provide a counter-example to the erroneous result.

```
@phdthesis{diva2:251095,
author = {Westerlund, Andreas},
title = {{Accelerating column generation schemes:
applications to routing problems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 978}},
year = {2005},
address = {Sweden},
}
```

In many fields of science, engineering, and economics large amounts of data are stored and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis, which consists of three appended manuscripts, we discuss algorithms for reduced rank regression and for classification in the context of tensor theory.

The first two manuscripts deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically the problem is

min_{rank(*X*) = *k*} det((*B* − *XA*)(*B* − *XA*)^{T}),

where *A* and *B* are given matrices and we want to find *X* under a certain rank condition that minimizes the determinant. This problem is not properly stated, since it involves implicit assumptions on *A* and *B*, namely that (*B* − *XA*)(*B* − *XA*)^{T} is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix. The volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals that cause singularity of the objective matrix.
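The matrix volume defined above is easy to compute from the SVD; the following sketch and its example matrices are illustrative assumptions, not material from the thesis.

```python
import numpy as np

# The "volume" used in the generalized criterion is the product of a
# matrix's nonzero singular values; for a square nonsingular matrix it
# equals |det|, but unlike the determinant it stays nonzero (or is
# defined as zero) when the matrix is singular.

def volume(M, tol=1e-12):
    """Product of the nonzero singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    s = s[s > tol * max(s[0], 1.0)]
    return float(np.prod(s)) if s.size else 0.0

M = np.diag([3.0, 2.0, 0.0])  # rank deficient, so det(M) = 0
print(volume(M))              # 6.0: the zero singular value is skipped
```

This is what makes the volume criterion usable where the determinant criterion breaks down: a singular objective matrix no longer forces the criterion to zero.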

Classification problems occur in many applications. The task is to determine the label or class of an unknown object. The third appended manuscript concerns classification of handwritten digits in the context of tensors, or multidimensional data arrays. Tensor theory is an area that attracts more and more attention because of the multidimensional structure of the data collected in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm uses HOSVD to reduce the data by 98–99% prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces for the different classes. An unknown digit is expressed as a linear combination of the basis vectors. The amount of computation is fairly low and the performance reasonably good, with an error rate of about 5%.
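A hedged sketch of the subspace-classification idea above: the thesis builds the class models from an HOSVD of the full digit tensor, whereas here, as a simplified stand-in, each class model is the dominant left singular subspace of that class's training matrix, and a sample is assigned to the class whose basis leaves the smallest residual. All data below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_basis(samples, k):
    """Orthonormal basis (columns) for the dominant k-dim subspace of a class."""
    U, _, _ = np.linalg.svd(samples, full_matrices=False)
    return U[:, :k]

def classify(x, bases):
    """Index of the class whose subspace leaves the smallest residual."""
    residuals = [np.linalg.norm(x - U @ (U.T @ x)) for U in bases]
    return int(np.argmin(residuals))

# Two synthetic "classes": noisy copies of two fixed prototype vectors
# (in the thesis, the columns would be vectorized digit images).
p0, p1 = rng.normal(size=20), rng.normal(size=20)
train0 = np.column_stack([p0 + 0.05 * rng.normal(size=20) for _ in range(30)])
train1 = np.column_stack([p1 + 0.05 * rng.normal(size=20) for _ in range(30)])
bases = [class_basis(train0, 3), class_basis(train1, 3)]

sample = p1 + 0.05 * rng.normal(size=20)
print(classify(sample, bases))  # a noisy copy of p1 lands in class 1
```

Classification cost is low because only one small projection per class is needed at test time, which matches the "fairly low amount of computation" reported above.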

```
@phdthesis{diva2:251094,
author = {Savas, Berkant},
title = {{Algorithms in data mining:
reduced rank regression and classification by tensor methods}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1214}},
year = {2005},
address = {Sweden},
}
```

Data on flows of materials and substances through the economy and the environment are collected by many different organizations and play a key role in the science of industrial ecology. In this thesis, a framework is suggested for structuring and organizing such data. First, the investigation focuses on the quantities of primary interest in material flow studies and how they can be stored and organized in a data warehouse. This process is shown to provide easy access to data, well-structured data management, a basis for knowledge discovery, and effective analysis of collected data. Secondly, a theoretical framework is proposed for handling and structuring multidimensional flow data, and for facilitating mathematics-assisted modeling in industrial ecology. In particular, it is shown how mathematical operations can be used to merge and compare flow data originating from different studies. Finally, it is illustrated how bootstrap analysis, Bayesian models and balancing procedures can be employed to systematize the quality and uncertainty assessment of physical flow data. Together, these three different aspects of handling physical flow data constitute a new framework that offers better knowledge, quality, and consistency of the data used in industrial ecology.

```
@phdthesis{diva2:250904,
author = {Löfving, Erik},
title = {{Organizing physical flow data:
from input-output tables to data warehouses}},
school = {Linköping University},
type = {{Linköping Studies in Statistics No. 5}},
year = {2005},
address = {Sweden},
}
```

The purpose of this thesis is to contribute to the development and the use of optimization models and methods to support efficient decision making in Swedish forestry. The main problem areas concerned are forest road upgrade planning and operational harvest planning. Today much of the planning is done manually by experienced managers. Forest management has a natural hierarchical structure based on the wide range of planning periods and organisational structure. The hierarchical harvest planning and the subdivision into strategic, tactical and operational levels are described from an Operations Research perspective. The description of the hierarchical structure differs between countries and there is a focus on the Swedish situation.

Road upgrading is becoming an increasingly important planning problem in securing a continuous supply of wood. In Sweden, during periods of thawing and of heavy rain, accessibility to parts of the road network is uncertain due to unfirm ground. The thesis addresses the optimization problem of minimizing the combined road upgrade and transportation costs while meeting requirements on road standard, such that accessibility to harvest areas is secured during all weather conditions. In this work, mixed integer linear programming (MILP) models including multiple assortments, several time periods and a set of road classes are developed. For a typical forest district, the road upgrade problem becomes large, and techniques to improve solution performance through model reformulations are discussed. The models are tested in a case study for a major Swedish company. For practical usage of the models, we present the development of a new decision support system called RoadOpt. The development has involved the Forestry Research Institute of Sweden, two software companies and several participating forest companies. The system uses a GIS-based map user interface to present and analyse data and results. The recently developed Swedish road database is an important part. The system is tested on a case study from Stora Enso.

The harvest planning problems addressed cover planning periods ranging from one year down to one month. Annual plans are required for budgeting, contracting harvest teams, contracting transportation firms and assuring road access. The main decisions deal with which areas to harvest, and by which team, during an annual period so that the industries receive the required volume of assortments. Overall decisions about transportation and storage are included. The monthly planning problem includes detailed scheduling of harvest crews, that is, the sequencing of areas for each team. The thesis addresses these planning problems and provides MILP models for each problem. Methods based on both a commercial solver and specially developed LP-based heuristics are used. Models and methods are tested on case studies from Holmen Skog.

```
@phdthesis{diva2:249516,
author = {Karlsson, Jenny},
title = {{Optimization models and methods for harvest planning and forest road upgrading}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 956}},
year = {2005},
address = {Sweden},
}
```

The topic of this dissertation, within the subfield of mathematics known as optimization, is the development of a new dual ascent method for convex multicommodity flow problems. Convex multicommodity flow problems arise in many different routing problems, such as the design of packet switched computer networks and the computation of traffic network equilibria. The dual problem of a strictly convex, twice differentiable multicommodity flow problem is an essentially unconstrained maximization problem with a piecewise twice differentiable concave objective. The idea behind the new dual ascent algorithm is to compute Newton-like ascent directions in the current differentiable piece and to perform line searches in those directions, combined with an active set type treatment of the borders between the differentiable pieces. The first contribution in this dissertation is a detailed investigation of this special structure. The insights gained are then used to explain the proposed algorithm. The algorithm is also tested numerically on well known traffic equilibrium problem instances, and the computational results are very promising.

The second contribution is a new approach for verifying feasibility in multicommodity flow problems. These feasibility problems arise naturally in the proposed dual ascent algorithm. The background of the problem is that if a certain representation of a solution to the dual convex multicommodity flow problem is proved to be feasible for the convex multicommodity flow problem as well, an optimal solution has been found. Hence, it is natural to seek a method for verifying feasibility of a given candidate solution for the multicommodity flow problem. The core of the approach is a distance minimizing method for verifying feasibility and hence for demonstrating optimality. This method is described in detail, and some computational results in a traffic assignment setting are also given.
Finally, a short note is given illustrating that many published test problems for traffic assignment algorithms have peculiarities. First and foremost, several of them have incorrectly specified attributes, such as the number of nodes. In other test problems, the network contains subnetworks with constant travel times; subnetworks which to a large extent can be reduced or eliminated. In further test problems, the constant travel time subnetworks imply that the solution has nonunique arc flows.

```
@phdthesis{diva2:249288,
author = {Hägglöf, Kristoffer},
title = {{Convex multicommodity flow problems:
a bidual approach}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 954}},
year = {2005},
address = {Sweden},
}
```

Web surveys - the collection of survey data by means of self-administered questionnaires on the Internet - are becoming increasingly popular, perhaps mostly because of advantageous cost factors. If e-mail contacts are used, the cost benefit of a Web survey, compared with a mail or telephone survey, can be major. The other side of the coin is that Web access is far from universal, and hence Web surveys only permit inference to populations of Web users. However, the Web can be used in combination with traditional response modes in a mixed-mode approach. Such an approach enables data collection from a probability sample of a general population, while still aiming at reducing costs by using the Web.

This thesis focuses on mixed-mode approaches in which (1) mail contacts are used and (2) a choice between a mail questionnaire and a Web questionnaire is offered. In the first paper, a theoretical framework is developed, enabling comparisons between different mixed-mode approaches with respect to minimum expected cost, subject to a precision requirement. A limited numerical study is performed to examine how the cost is influenced by the Web access rate and the Web response propensity. In the second paper, three mixed-mode approaches are evaluated empirically with respect to cost-efficiency. The mixed-mode approaches are compared with a mail-based reference approach with no Web response option. The cost-efficiency analysis is based on the results of a survey experiment, conducted in a population of university students in spring 2004. It is concluded that each of the three mixed-mode approaches was more cost-efficient than the reference approach.

```
@phdthesis{diva2:249257,
author = {Werner, Peter},
title = {{On the cost-efficiency of probability sampling based mail surveys with a web response option}},
school = {Linköping University},
type = {{Linköping Studies in Statistics No. 6}},
year = {2005},
address = {Sweden},
}
```

When studying equivalence of dynamical systems in the sense of Levi-Civita, the concept of cofactor pair systems plays an important role. Cofactor pair systems can be constructed through a multiplicative structure of the so-called quasi-Cauchy-Riemann equations (cof *J*)^{-1}∇*V* = (cof *J̃*)^{-1}∇*Ṽ*, where *J* and *J̃* are special conformal Killing tensors. In this thesis we study this multiplication and its role in the theory of equivalent dynamical systems. We have isolated the properties that are responsible for the multiplication, allowing us to give an elegant characterization of systems that admit multiplication. We describe how the multiplication of cofactor pair systems can be considered as a special case of a more general kind of multiplication. We also investigate algebraic properties of the multiplication and provide several methods for constructing new systems with multiplicative structure.

```
@phdthesis{diva2:249256,
author = {Jonasson, Jens},
title = {{The Levi-Civita geodesic equivalence problem and multiplication of cofactor pair systems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1154}},
year = {2005},
address = {Sweden},
}
```

In this thesis we investigate the Chevreton tensor in Einstein-Maxwell theory. It was introduced in 1964 as the counterpart, for electromagnetic fields, of the well-known Bel-Robinson tensor of the gravitational field. We prove that, in the absence of electromagnetic sources, this tensor is completely symmetric. We consider currents constructed from the Chevreton tensor with Killing vectors and show that these currents are conserved for some types of spacetimes with a hypersurface orthogonal Killing vector or two commuting Killing vectors that act orthogonally transitively on non-null surfaces. In addition, we show that the trace of the Chevreton tensor is a rank-two, symmetric, trace-free, divergence-free tensor and that it is related to the Bach tensor. This allows us to investigate Einstein-Maxwell spacetimes with a vanishing Bach tensor.

```
@phdthesis{diva2:244773,
author = {Eriksson, Ingemar},
title = {{The Chevreton tensor and its trace}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1146}},
year = {2005},
address = {Sweden},
}
```

We consider the daily transportation problem in forestry which arises when transporting logs from forest sites to customers such as sawmills and pulp and paper mills. Each customer requires a specific amount of a certain assortment, and the deliveries to the customers can be made within time intervals, known as time windows. Further, there are a number of supply points, each with a certain assortment, and a number of vehicles of a given capacity, to be used for transport.

The log truck scheduling problem consists of finding a set of minimal cost routes, one for each vehicle, such that the customers’ demands are satisfied without exceeding the supplies available at the supply points. Each route has to satisfy a number of constraints concerning time windows, truck capacity, timetable of the driver, lunch breaks, et cetera. The model used to describe the log truck scheduling problem is based on the route concept, and each variable, or column, represents one feasible route. Since the number of feasible routes is huge, we work only with restricted versions of this problem, which are similar to restricted master problems in a Dantzig-Wolfe decomposition scheme.

We use three solution methods based on the column generation principle, together with a pool strategy which allows us to deal with the feasible routes outside the restricted master problem. The three methods proposed have a common structure; they use branch-and-price together with a column generator, followed by branch-and-bound. The column generators in the three methods differ. In the first method, the subproblem is based on a cluster-first-route-second strategy. The column generator in the second method involves solving a constrained shortest path problem, and finally, the third method builds on a repeated generation of clusters and routes.

The three methods are tested on real cases from Swedish forestry companies, and the third method has been adapted to a computerised system that utilises the Swedish national road data base, for computing travelling distances. The results obtained show that the optimisation methods succeed in finding significantly better solutions than those obtained by manual planning, and in a reasonable computing time.

```
@phdthesis{diva2:20408,
author = {Palmgren, Myrna},
title = {{Optimal Truck Scheduling:
Mathematical Modeling and Solution by the Column Generation Principle}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 967}},
year = {2005},
address = {Sweden},
}
```

Friction is a phenomenon which is present in most mechanical devices and frequently encountered in everyday life. In particular, understanding of this phenomenon is important in the modelling of contact between an elastic object and an obstacle. Noncoercive incremental contact problems with Coulomb friction constitute an important class of such friction problems due to their frequent occurrence in mechanical engineering. They occur for example when modelling an object which is not fixed to a support. The topic of this thesis is to study this class of friction problems.

This thesis considers both discrete and continuous systems. For the continuous systems we consider both problems with a nonlocal friction law where the contact force is mollified and problems with a normal compliance friction law where the body may penetrate the obstacle. For all friction problems we derive a sufficient condition for the existence of a solution. This condition is a compatibility condition on the applied force field, and if it is violated there exists a nontrivial solution to a corresponding dynamical problem.

```
@phdthesis{diva2:20407,
author = {Rietz, Andreas},
title = {{Existence theorems for noncoercive incremental contact problems with Coulomb friction}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 0345-7524}},
year = {2005},
address = {Sweden},
}
```

By using decision support tools based on operations research (OR) and optimization in the supply chain in the forest industry, the planning process can be improved and higher profitability can be obtained. The focus of this thesis is on modelling two real-life problems. The problems are united by the facts that they concern tactical (annual) planning in the forest industry, and that the models are developed and tested with real data at Swedish companies.

In the first paper, a problem of the supply chain of forest fuel is modelled and solved. The problem of deciding when and where forest residues are to be converted into forest fuel, and how the residues are to be transported and stored in order to satisfy demand at heating plants, is studied. Decisions also include whether or not additional harvest areas and saw-mills are to be contracted. In addition, we consider the flow of products from saw-mills and import harbours, and address the question of which terminals to use. The planning horizon is one year and monthly time periods are considered. The test data is from Sydved Energileveranser AB.

The second paper is a case study from Södra Cell AB. A combined problem of terminal location and ship routing is studied. The objective is to minimize the costs of distributing pulp products from pulp mills to customers (paper mills). Shipping vessels chartered on short or long term are used to transport products to terminals in Europe. In addition, trains and lorries are used for direct transports to customers from mills. From each terminal, the products are transported to customers by lorry, train, or a combination of both. Decisions about which terminals to use, which shipping routes to use, and which other transportation possibilities to use are included.

In both problems, relatively large mixed integer programming (MIP) models have been developed and solved using a commercial LP (linear programming) solver. Heuristics have also been developed for both problems in order to obtain faster solutions. Several scenarios of the problems have been tested and evaluated.

```
@phdthesis{diva2:253218,
author = {Gunnarsson, Helene},
title = {{Optimization approaches to tactical planning problems in the forest industry}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1071}},
year = {2004},
address = {Sweden},
}
```

The focus of this dissertation is on constructing optimization models and developing solution methods for Snow Removal Routing Problems with Time Windows. Two cases, after and during snowfall, are studied. We discuss theoretical aspects of these problems and considerations to be taken when modeling and solving the real-world problem of the Swedish National Road Agency.

In the case of after snowfall we are concerned with finding a set of routes, with minimal total cost, for homogeneous snowplows, where every road segment is plowed within its associated time window. The routes must start from and end at the same depot, at which a number of snowplows are stationed. This case is solved by Dantzig-Wolfe decomposition combined with a column generation procedure. An integer solution to the master problem is found by a greedy algorithm or a variable reduction procedure. The case of generating routes for snowplows during snowfall is studied in two versions: the problems with and without a time horizon on the duration of the snowfall. The focus is put on the latter problem, which is solved in two ways, using a Dantzig-Wolfe decomposition scheme and a Tabu Search heuristic. The Dantzig-Wolfe master problem is a Constrained Set Covering Problem and the subproblems are Prize Collecting Connected Subgraph Problems. Further, we show that the Prize Collecting Connected Subgraph Problem is a generalization of the Prize Collecting Steiner Tree Problem and thus is NP-hard. An integer solution to the master problem is extracted by a greedy search procedure and the obtained solution is improved by post-processing. The solution approaches for both cases are implemented and tested on the real-world problem under consideration and on an artificially generated problem.
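The greedy extraction of an integer solution from a set covering master problem can be illustrated with a minimal sketch. All data below is a hypothetical toy instance (route names, costs, and segment sets are invented for illustration); the actual master problem in the thesis is a Constrained Set Covering Problem with additional structure.

```python
# Greedy heuristic for a set covering problem: repeatedly pick the
# column (route) with the best cost per newly covered element
# (road segment), until every segment is covered.
def greedy_set_cover(universe, columns):
    """universe: set of elements; columns: dict name -> (cost, set of elements)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the column minimizing cost / number of newly covered elements
        name, (cost, elems) = min(
            ((n, c) for n, c in columns.items() if c[1] & uncovered),
            key=lambda nc: nc[1][0] / len(nc[1][1] & uncovered),
        )
        chosen.append(name)
        uncovered -= elems
    return chosen

# Toy instance: 5 road segments, 4 candidate routes (hypothetical data).
segments = {1, 2, 3, 4, 5}
routes = {
    "r1": (3.0, {1, 2, 3}),
    "r2": (2.0, {3, 4}),
    "r3": (1.0, {5}),
    "r4": (5.5, {1, 2, 3, 4, 5}),
}
print(greedy_set_cover(segments, routes))
```

The greedy ratio rule is a standard heuristic for set covering; in a column generation setting it would be applied to the columns generated so far.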

```
@phdthesis{diva2:244006,
author = {Razmara, Gholamreza},
title = {{Snow removal routing problems:
theory and applications}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 888}},
year = {2004},
address = {Sweden},
}
```

In this thesis we investigate necessary and sufficient conditions for an *n*-dimensional space, *n* ≥ 4, to be locally conformal to an Einstein space. After reviewing the classical results derived in tensors we consider the four-dimensional spinor result of Kozameh, Newman and Tod. The involvement of the four-dimensional Bach tensor (which is divergence-free and conformally well-behaved) in their result motivates a search for an *n*-dimensional generalization of the Bach tensor *B_{ab}* with the same properties. We strengthen a theorem due to Belfagón and Jaén and give a basis (*U_{ab}*, *V_{ab}* and *W_{ab}*) for all *n*-dimensional symmetric, divergence-free 2-index tensors quadratic in the Riemann curvature tensor. We discover the simple relationship *B_{ab}* = 1/2 *U_{ab}* + 1/6 *V_{ab}* and show that the Bach tensor is the unique tensor with these properties in four dimensions. Unfortunately we have to conclude that, in general, there is no direct analogue in higher dimensions with all these properties.

Nevertheless, we are able to generalize the four-dimensional results due to Kozameh, Newman and Tod to *n* dimensions. We show that a generic space is conformal to an Einstein space if and only if there exists a vector field satisfying two conditions. The explicit use of dimensionally dependent identities (some of which are newly derived in this thesis) is also exploited in order to make the two conditions as simple as possible; explicit examples are given in five and six dimensions using these tensor identities. For *n* dimensions, we define the tensors *b_{abc}* and *B_{ab}*, and we show that their vanishing is a conformally invariant property which guarantees that a space with non-degenerate Weyl tensor is a conformal Einstein space.

```
@phdthesis{diva2:244008,
author = {Bergman Ärlebäck, Jonas},
title = {{Conformal Einstein spaces and Bach tensor generalization in \emph{n} dimensions}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1113}},
year = {2004},
address = {Sweden},
}
```

We study small solutions of a nonlinear partial differential equation in a semi-infinite cylinder. The asymptotic behaviour of these solutions at infinity is determined. First, the equation under the Neumann boundary condition is studied. We show that any solution small enough either vanishes at infinity or tends to a nonzero periodic solution of a nonlinear ordinary differential equation. Thereafter, the same equation under the Dirichlet boundary condition is studied, but now the nonlinear term and right-hand side are slightly more general than in the Neumann problem. Here, an estimate of the solution in terms of the right-hand side of the equation is given. If the equation is homogeneous, then every solution small enough tends to zero. Moreover, if the cross-section is star-shaped and the nonlinear term in the equation is subject to some additional constraints, then every bounded solution of the homogeneous Dirichlet problem vanishes at infinity. An estimate for the solution is given.

```
@phdthesis{diva2:244007,
author = {Rand, Peter},
title = {{Asymptotic analysis of a nonlinear partial differential equation in a semicylinder}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1112}},
year = {2004},
address = {Sweden},
}
```

In many application areas there is a growing interest in data assimilation or data reconstruction. Data assimilation is a process for integrating observed or measured data into a physical model. The problem originates from a vast array of different topics: traditionally in meteorological and oceanographic modelling, and recently from non-invasive medical measurement devices such as magnetic resonance imaging. The measured data may contain inaccuracies and random noise, given with low spatial and/or temporal resolution.

This thesis presents a method for solving reconstruction problems in fluid dynamics using optimal control theory. The problem considered here includes a known partial differential equation and some spatially and temporally sparsely distributed data with an unknown initial state. From a given velocity field *u^{δ}*, a flow field *u* is determined which satisfies a given system of partial differential equations and minimizes ||*u* − *u^{δ}*||_{L²}. The function *u*(*x*, *t*) is known at the boundary and the initial condition *u*_{0}(*x*) is used as design variable. The optimization problem is solved using an adjoint formulation.
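The adjoint approach can be sketched on a drastically simplified model: a generic linear forward map stands in for the PDE solver, and the gradient of the data misfit with respect to the initial state is obtained by one backward sweep with the transposed operator. All names and dimensions below are illustrative, not taken from the thesis.

```python
import numpy as np

# Illustrative adjoint-based gradient for recovering an initial state u0.
# Forward model: u_{n+1} = A u_n (a stand-in for a discretized PDE).
# Misfit: J(u0) = 0.5 * ||u_N - d||^2; its gradient w.r.t. u0 is
# obtained by propagating the residual backwards with A^T.
rng = np.random.default_rng(0)
N, steps = 8, 5
A = np.eye(N) + 0.1 * rng.standard_normal((N, N))    # assumed dynamics
u0_true = rng.standard_normal(N)
d = np.linalg.matrix_power(A, steps) @ u0_true       # synthetic final-time data

def forward(u0):
    u = u0.copy()
    for _ in range(steps):
        u = A @ u
    return u

def J(u0):
    r = forward(u0) - d
    return 0.5 * float(r @ r)

def gradient(u0):
    lam = forward(u0) - d            # adjoint terminal condition
    for _ in range(steps):
        lam = A.T @ lam              # backward adjoint sweep
    return lam                       # dJ/du0

# Sanity check against central finite differences, then a few descent steps.
u0 = np.zeros(N)
eps = 1e-6
fd = np.array([(J(u0 + eps * e) - J(u0 - eps * e)) / (2 * eps)
               for e in np.eye(N)])
print(np.max(np.abs(gradient(u0) - fd)))
for _ in range(200):
    u0 -= 0.002 * gradient(u0)
print(J(u0))
```

The point of the adjoint formulation is that one backward sweep gives the full gradient, at roughly the cost of one extra forward solve, regardless of the dimension of the design variable.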

```
@phdthesis{diva2:243476,
author = {Lundvall, Johan},
title = {{Reconstruction of velocity data using adjoint optimization}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1096}},
year = {2004},
address = {Sweden},
}
```

Many systems, and some of their properties, are studied in simulation models. The models may be very complex and difficult to understand if the systems they are meant to simulate are complex in themselves. Even small, simple and well-described models can have properties that cannot easily be seen in the model formulations. Sensitivity and uncertainty in the model results are examples of such properties.

One important field where simulation models are used is road traffic, for emissions or other traffic-related topics. To date, little has been written about sensitivity and uncertainty in road traffic emission models. More must be known about the construction of such models before sensitivity methods can be proposed.

This thesis describes one model for road traffic emissions, both its construction and its ability to serve as an emission data generating tool for the analyses that follow. Some suitable sensitivity methods are studied; they are based on different approaches, such as sensitivity on the margins and measures over all situations obtained using response surface methods. All of the methods in the study are also applied to the described emission model.

Different approaches are compared on theoretical grounds in several ways, and the sensitivity results allow many other comparisons.

```
@phdthesis{diva2:243004,
author = {Eriksson, Olle},
title = {{Sensitivity analysis methods for road traffic emission models}},
school = {Linköping University},
type = {{Linköping Studies in Statistics No. 4}},
year = {2004},
address = {Sweden},
}
```

This thesis addresses optimization problems related to the design of IP (Internet Protocol) networks. One of the most commonly used IGPs (Interior Gateway Protocols) in telecommunication networks is the OSPF (Open Shortest Path First) protocol with ECM (Equal Cost Multipath) distribution. This link state based protocol uses link weights, which usually are set by a network operator, to provide routers with the necessary information regarding how traffic should be routed. By shortest path computations, routers decide by themselves how to distribute the traffic.

One problem considered in this thesis is the problem of designing networks which are capable of accommodating large parts of the traffic when network components fail. Link failures are the most common failures, and this is the kind of disturbance which is considered. A multiobjective optimization model is presented and we search for Pareto-optimal solutions using a two-phased weight based search method. This method tries to minimize the design cost while simultaneously maximizing the level of network survivability. This method uses the fact that it is easy to compute how traffic is distributed when the OSPF metric is known.

By contrast, it is a much more difficult task to determine if there are any OSPF weights that yield a desired flow distribution. This is a topic which also is studied in the thesis. Necessary conditions, which flow patterns must fulfill in order to be identical to the shortest paths obtained from OSPF weights, are reported. Sufficient conditions for a special type of flow patterns are also presented. The thesis also contains a polynomial method which can be used for detecting impossibilities in a set of flow patterns when this set does not correspond to the shortest paths of any OSPF weights.

Flow patterns are the basis for a new model which is developed for designing IP networks where OSPF and ECM distribution are used. This model uses a set of in-graphs in order to represent weights implicitly. The model also uses a new set of constraints used for modeling ECM distribution. The usefulness of the model is verified by computational experiments.
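The observation that traffic distribution is easy to compute once the OSPF weights are known can be illustrated with a small sketch. The topology, weights, and demand below are a hypothetical toy example; ECM distribution is modelled as an equal split over all outgoing links that lie on some shortest path to the destination.

```python
import heapq
from collections import defaultdict

def dijkstra(nodes, weight, dest):
    """Shortest distance from every node TO dest (undirected toy links)."""
    dist = {n: float("inf") for n in nodes}
    dist[dest] = 0.0
    heap = [(0.0, dest)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for (a, b), w in weight.items():
            for u, x in ((a, b), (b, a)):
                if x == v and d + w < dist[u]:
                    dist[u] = d + w
                    heapq.heappush(heap, (dist[u], u))
    return dist

def ecm_flow(nodes, weight, src, dest, demand):
    """Link loads when `demand` units go src -> dest with ECM splitting."""
    dist = dijkstra(nodes, weight, dest)
    load = defaultdict(float)
    inflow = {n: 0.0 for n in nodes}
    inflow[src] = demand
    # process nodes in decreasing distance to dest (topological order
    # of the shortest-path DAG, since link weights are positive)
    for u in sorted(nodes, key=lambda n: -dist[n]):
        if inflow[u] == 0 or u == dest:
            continue
        nexts = [v for (a, b), w in weight.items()
                 for v in ({a, b} - {u})
                 if u in (a, b) and dist[u] == w + dist[v]]
        for v in nexts:
            share = inflow[u] / len(nexts)
            load[(u, v)] += share
            inflow[v] += share
    return dict(load)

# Diamond network: two equal-cost paths s-a-t and s-b-t share the demand.
nodes = {"s", "a", "b", "t"}
weight = {("s", "a"): 1, ("s", "b"): 1, ("a", "t"): 1, ("b", "t"): 1}
print(ecm_flow(nodes, weight, "s", "t", 8.0))
```

The inverse task described above, finding weights that reproduce a given set of flow patterns, has no such direct computation, which is what makes it hard.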

```
@phdthesis{diva2:242667,
author = {Broström, Peter},
title = {{Optimization in the Design of OSPF Telecommunication Networks}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1102}},
year = {2004},
address = {Sweden},
}
```

This thesis consists of two papers.

The first paper is a study of the structure of the k-assignment polytope, whose vertices are the *m x n* (0,1)-matrices with exactly *k* ones and at most one 1 in each row and each column. This is a natural generalisation of the Birkhoff polytope and many of the known properties of the Birkhoff polytope are generalised. Two equivalent representations of the faces are given, one as (0,1)-matrices and one as ear decompositions of bipartite graphs. These tools are used to describe properties of the polytope, especially a complete description of the cover relation in the face lattice of the polytope and an exact expression for the diameter.
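For small instances the vertex set of the k-assignment polytope can be enumerated directly. The sketch below is purely illustrative; it also exhibits the vertex count C(m,k)·C(n,k)·k! (choose k rows, then an ordered choice of k columns).

```python
from itertools import combinations, permutations

# Enumerate the vertices of the k-assignment polytope: m x n (0,1)-
# matrices with exactly k ones and at most one 1 per row and column.
# For k = m = n these are the permutation matrices, i.e. the vertices
# of the Birkhoff polytope.
def k_assignment_vertices(m, n, k):
    verts = []
    for rows in combinations(range(m), k):       # which rows carry a 1
        for cols in permutations(range(n), k):   # ordered column choice
            M = [[0] * n for _ in range(m)]
            for r, c in zip(rows, cols):
                M[r][c] = 1
            verts.append(M)
    return verts

# Vertex count is C(m,k) * C(n,k) * k!.
print(len(k_assignment_vertices(3, 3, 2)))  # C(3,2)*C(3,2)*2! = 18
```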

The second paper studies the edge-product space *Є(X)* for trees on *X*. This space is generated by the set of edge-weighted finite trees on *X*, and arises by multiplying the weights of edges on paths in trees. These spaces are closely connected to tree-indexed Markov processes in molecular evolutionary biology. It is known that *Є(X)* has a natural *CW*-complex structure, and a combinatorial description of the associated face poset exists as a poset *S(X)* of *X*-forests. In this paper it is shown that the edge-product space is a regular cell complex. One important step in showing this is to conclude that all intervals [*0̂*, *Γ*], *Γ* ∈ *S(X)*, have recursive coatom orderings.

```
@phdthesis{diva2:21437,
author = {Gill, Jonna},
title = {{The k-assignment Polytope and the Space of Evolutionary Trees}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1117}},
year = {2004},
address = {Sweden},
}
```

A closed Riemann surface which can be realized as a 3-sheeted covering of the Riemann sphere is called trigonal, and such a covering is called a trigonal morphism. Accola showed that the trigonal morphism is unique for Riemann surfaces of genus *g* ≥ 5. This thesis will characterize the Riemann surfaces of genus 4 with non-unique trigonal morphism. We will describe the structure of the space of cyclic trigonal Riemann surfaces of genus 4.

```
@phdthesis{diva2:21438,
author = {Ying, Daniel},
title = {{Cyclic Trigonal Riemann Surfaces of Genus 4}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1125}},
year = {2004},
address = {Sweden},
}
```

This thesis deals with the reading of mathematical texts: how and what one understands and learns from the reading. The focus is on the *reading process*, that is, the actual reading of the text and what one understands after having read through it. The main purpose is to study specific aspects of reading *mathematical* texts in order to test and develop an existing general theory of the reading process. In particular, the use of symbols in mathematical texts and how it can affect the reading process are studied. The thesis consists of theoretical discussions about reading mathematical texts together with an empirical study among upper secondary school students and university students.

The theoretical discussions start, among other things, from a literature review of existing claims about special properties of mathematical texts, and the reading of symbols and algebraic expressions is discussed in particular.

The empirical study (with 106 participants) used three different texts: a history text about the Russian revolution and two mathematics texts about group theory. The mathematics texts cover the same group-theoretic content, but differ in that one uses mathematical symbols in its presentation while the other uses no symbols at all. Each participant read one of the mathematics texts and the history text, and after each text answered questions about its content.

The group of participants who read the mathematics text without symbols performed better on questions about the text than the group who read the text with symbols. This seems attributable to an inability to articulate symbols when reading the text, and to the fact that decoding ability does not seem to be usable in the same way for the text with symbols. Reading mathematical texts with symbols is thus rather special, and one may need to learn how to read such texts. On the other hand, there seem to be many similarities between reading mathematical texts without symbols and reading the history text. The mathematical *content* thus does not seem to affect the reading process to any great extent; rather, how this content is presented is an important aspect.

In the theoretical discussions, suggestions are given for how the reading of mathematical symbols can be incorporated into the general theory of the reading process. Overall, however, there is no reason to view the reading of mathematical texts as a special type of process that differs from the reading of other texts. The general theory of the reading process can thus serve as a theoretical basis also for reading comprehension of mathematical texts, possibly with the suggested addition concerning mathematical symbols.

```
@phdthesis{diva2:21439,
author = {Österholm, Magnus},
title = {{Läsa matematiska texter:
Förståelse och lärande i läsprocessen}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1134}},
year = {2004},
address = {Sweden},
}
```

In this thesis we analyze the phase space of the heavy symmetric top and the tippe top. These tops are two examples of physical systems for which the usefulness of integrals of motion and invariant manifolds, in phase space picture analysis, can be illustrated.

In the case of the heavy symmetric top, simplified proofs of stability of the vertical rotation have been perpetuated by successive textbooks during the last century. In these proofs correct perturbations of integrals of motion are missing. This may seem harmless since the deduced threshold value for stability is correct. However, perturbations of first integrals are essential in rigorous proofs of stability of motions for both tops.

The tippe top is a toy that has the form of a truncated sphere equipped with a little peg. When spun fast on the spherical bottom its center of mass rises above its geometrical center and after a few seconds the top is spinning vertically on the peg. We study the tippe top through a sequence of embedded invariant manifolds to unveil the structure of the top's phase space. The last manifold, consisting of the asymptotic trajectories, is analyzed completely. We prove that trajectories in this manifold attract solutions in contact with the plane of support at all times and we give a complete description of their stability/instability properties for all admissible choices of model parameters and of the initial conditions.

```
@phdthesis{diva2:21435,
author = {Sköldstam, Markus},
title = {{Analysis of the phase space, asymptotic behavior and stability for heavy symmetric top and tippe top}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1106}},
year = {2004},
address = {Sweden},
}
```

When solving Newton systems *q̈* = *M*(*q*), *q* ∈ *R^{n}*, by the method of separation of variables, one has to determine coordinates in which the related Hamilton-Jacobi equation separates.

The problem of finding separation coordinates for potential Newton systems *q̈* = −∇*V*(*q*) goes back to Jacobi. In the first part of this thesis we give a complete solution to this classical problem. It can also be used to find separation coordinates for the Schrödinger equation.

In the second part of this thesis, we study separability for quasi-potential systems *q̈* = −*A*(*q*)^{−1}∇*W*(*q*) of generic cofactor pair type. We define separation coordinates that give these systems separable Stäckel form. The two most important families of these coordinates (cofactor-elliptic and cofactor-parabolic) generalize the Jacobi elliptic coordinates, and are shown to be defined by elegant rational equations.

```
@phdthesis{diva2:243482,
author = {Waksjö, Claes},
title = {{Determination of separation coordinates for potential and quasi-potential Newton systems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 845}},
year = {2003},
address = {Sweden},
}
```

Supersonic airflow is usually described by a nonlinear hyperbolic system of equations. In this thesis we consider an aircraft represented by a union of thin plates orthogonal to the direction of flight, and describe the derivation of an approximate linear model for the airflow around it. The linear model is reduced to a system of integral equations.

We propose a new set of functions to approximate the solution to the integral equation system and a panel method to solve the system in the stationary case based on those functions. A set of exact solutions on an infinite strip is derived and is used to verify the method.

```
@phdthesis{diva2:243480,
author = {Pettersson, David},
title = {{A Panel Method for Supersonic Flow}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1039}},
year = {2003},
address = {Sweden},
}
```

Time series of environmental data are collected to monitor the effectiveness of new emission reduction policies or to determine the general state of the environment. However, small gradual changes in such variables can easily be concealed by large fluctuations caused by prevailing weather conditions. Hence, there is a real need for procedures that facilitate separation of such natural variation from anthropogenic effects.

Taking meteorological or hydrological variables into consideration in a trend analysis can be done in several ways. The technique chosen to accomplish this objective depends on characteristics of the data set, for example the length of the time series and sampling frequencies, and the kind of relationships that exist between the response variable and the covariates. Two different approaches were examined in the studies underlying this thesis: multivariate non-parametric tests and parametric normalisation procedures. The non-parametric trend test proposed here was newly designed; thus it was also necessary to conduct simulation studies to examine the performance of this method. By comparison, normalisation techniques have been used over the past few decades mainly to adjust for the impact of meteorological effects on air quality data. The choice of explanatory variables for such procedures was studied: first by examining variable selection procedures based on cross-validation, paying special attention to serially correlated response data; and secondly by considering variables derived from complex physics-based models as alternatives to measured variables. A number of other aspects that might influence the ability to detect trends were also explored, including level shifts due to instrument malfunctions.
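The normalisation idea, adjusting the response for a meteorological covariate before the trend is estimated, can be sketched as follows. The data is synthetic and the adjustment is a plain OLS regression; the procedures studied in the thesis are considerably more elaborate.

```python
import numpy as np

# Illustrative meteorological normalisation: remove the part of a
# response series explained by a covariate by regression, then
# estimate the trend on the adjusted series.
rng = np.random.default_rng(1)
t = np.arange(120)                       # 10 years of monthly data
temp = 10 * np.sin(2 * np.pi * t / 12)   # assumed meteorological covariate
# response: level 50, true trend -0.05 per month, temperature effect, noise
y = 50 - 0.05 * t + 0.8 * temp + rng.normal(0, 1, t.size)

# Step 1: regress the response on the covariate (plus intercept).
X = np.column_stack([np.ones_like(temp), temp])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = y - X @ beta                  # meteorologically adjusted residuals

# Step 2: ordinary least squares trend on the adjusted series.
T = np.column_stack([np.ones_like(t, dtype=float), t])
trend, *_ = np.linalg.lstsq(T, adjusted, rcond=None)
print(trend[1])   # estimated slope, close to the true -0.05
```

Without the adjustment in step 1, the seasonal fluctuations would inflate the residual variance and weaken the trend estimate, which is exactly the motivation stated above.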

```
@phdthesis{diva2:243068,
author = {Libiseller, Claudia},
title = {{Considering meteorological variation in assessments of environmental quality trends}},
school = {Linköping University},
type = {{Linköping Studies in Statistics No. 3}},
year = {2003},
address = {Sweden},
}
```

When designing a telecommunication network, one often wishes to include some kind of survivability requirement, for example that the network should be two-connected. A two-connected network fulfills the requirement that there should be at least two paths with no links in common between all pairs of nodes. One form of design model is to prescribe that the network should be composed of connected rings of links. The network design problem is then to choose links from a given network, and compose them into a number of rings. A ring is reliable in the sense that there always exist two ways of sending traffic, clockwise or counter-clockwise, which means that a ring fulfills the two-connectivity requirement. There is often a number of requirements on a ring, such as a limited length and a limited number of nodes connected to the ring. This means that a ring network will include a number of rings, and traffic between rings must be possible. The traffic between rings is usually routed through certain nodes, called transit nodes. Therefore all rings should be connected to at least one of the transit nodes. We focus on the case where we have two transit nodes in the network.

Each possible ring is associated with a certain fixed cost, and all links in a certain ring are given the same capacity. Reserve capacity is allocated according to certain principles. The number of possible rings in a network is an exponential function of the number of nodes in the network, so for larger networks it is impossible to generate all possible rings a priori.

We describe the problem, and model it as a linear integer programming problem, where a set of rings is assumed to be known. The usage of rings, i.e., the allocation of demand to rings, is determined. In practice, too many rings cannot be included in the model. Instead we must be able to generate useful rings. A Lagrangean relaxation of the model is formulated, and the dual solution is used in order to derive reduced costs which can be used to generate new, better rings. The information generated describes only the physical structure of the ring, not the usage of it. The ring generation problem is a modified traveling salesman subtour problem, which is known to be difficult to solve. Therefore, we focus on heuristic solution methods for this problem.

We also present a column generation approach where the problem is modeled as a set covering problem. Here, a column describes both the topology of the ring and the exact usage of it. A similar ring generation problem appears as a subproblem, in order to generate new rings.

All methods are computationally tested on both real life data and randomly generated data, similar to real life problems.

```
@phdthesis{diva2:242670,
author = {Henningsson, Mathias},
title = {{Ring network design in telecommunications:
optimization based solution approaches}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 829}},
year = {2003},
address = {Sweden},
}
```

To use optimization models and methods is an increasingly acknowledged element of supply chain management for large industries. Concurrently with the increasing global market competition and modern business policies, an already effective organization needs to find new possibilities to improve its planning procedures and resource utilization. Incorporating the planning in a more comprehensive context, an overall supply chain, provides new possibilities to manage several planning steps more efficiently than if they were considered individually.

This thesis considers production planning and ship scheduling problems at Södra Cell, a large Scandinavian pulp manufacturer. The main purpose of the thesis is to suggest optimization models and methods that can support production and distribution planners to objectively consider the supply chain impact in the short-term decision making process. The production planning and the ship scheduling problems are approached separately. Optimization models are formulated for both problems, and specially adapted solution methods are developed that account for the characteristics of the problems.

The thesis consists of three research papers. In the first paper two mixed integer models (MIPs) are developed for the production planning problem. The first model has binary variables for every possible production plan and a column generation approach with a constraint branching heuristic is used to solve the otherwise very large model. The second model has the production choices formulated in constraints and is solved with a standard branch and bound search. The results are compared with manual planning and potential large savings are indicated.

The second and the third paper both consider the pulp distribution problem where ship routing and scheduling decisions have a key role. In the second paper the problem is solved with a rolling time horizon method. An MIP model is solved repeatedly over short ranges of time until eventually the whole planning period is covered. The third paper presents a genetic algorithm with integrated linear programming models for the distribution problem. The solutions from the different solution strategies for the distribution problem are mutually compared in the third paper.

```
@phdthesis{diva2:242668,
author = {Bredström, David},
title = {{Optimization models and methods for production planning and ship scheduling in the pulp industry}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 1056}},
year = {2003},
address = {Sweden},
}
```

Bootstrap and Markov chain Monte Carlo methods have received much attention in recent years. We study computer intensive methods that can be used in complex situations where it is not possible to express the likelihood estimates or the posterior analytically. The work is inspired by a set of car crash data from real traffic.

We formulate and develop a model for car crash data that aims to estimate and compare the relative collision safety among different car models. This model works sufficiently well, although complications arise due to a growing vector of incidental parameters. The bootstrap is shown to be a useful tool for studying uncertainties of the estimates of the structural parameters. This model is further extended to include driver characteristics. In a Poisson model with similar, but simpler structure, estimates of the structural parameter in the presence of incidental parameters are studied. The profile likelihood, bootstrap and the delta method are compared for deterministic and random incidental parameters. The same asymptotic properties, up to first order, are seen for deterministic as well as random incidental parameters.

The search for suitable methods that work in complex model structures leads us to consider Markov chain Monte Carlo (MCMC) methods. In the area of MCMC, we consider particularly the question of how and when to claim convergence of the MCMC run in situations where it is only possible to analyse the output values of the run, and also how to compare different MCMC modellings. In the Metropolis-Hastings algorithm, different proposal functions lead to different realisations. We develop a new convergence diagnostic, based on the Kullback-Leibler distance, which is shown to be particularly useful when comparing different runs. Comparisons with established methods turn out favourably for the KL diagnostic.
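The setting can be illustrated with a minimal sketch (the thesis's Kullback-Leibler diagnostic is more elaborate than the simple histogram-based distance used here): two Metropolis-Hastings runs with different proposal scales target the same density, and a KL distance to the target separates the well-mixing run from the poorly mixing one.

```python
import numpy as np

def metropolis(logp, scale, n, rng):
    """Random-walk Metropolis-Hastings with Gaussian proposals."""
    x, out = 0.0, np.empty(n)
    for i in range(n):
        prop = x + scale * rng.standard_normal()
        if np.log(rng.random()) < logp(prop) - logp(x):
            x = prop
        out[i] = x
    return out

def kl_to_target(samples, target_pdf, bins):
    """Crude histogram-based KL distance from the empirical distribution
    of an MCMC run to its target density (illustrative only)."""
    p, edges = np.histogram(samples, bins=bins, density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    q = target_pdf(mid)
    mask = (p > 0) & (q > 0)
    width = edges[1] - edges[0]
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])) * width)

rng = np.random.default_rng(2)
logp = lambda x: -0.5 * x * x                       # standard normal target
pdf = lambda x: np.exp(-0.5 * x * x) / np.sqrt(2 * np.pi)
bins = np.linspace(-4, 4, 41)

good = metropolis(logp, 2.5, 20000, rng)            # well-scaled proposal
poor = metropolis(logp, 0.05, 20000, rng)           # tiny steps: slow mixing
print(kl_to_target(good, pdf, bins), kl_to_target(poor, pdf, bins))
```

In practice the target density is unknown, which is why run-to-run comparisons, as studied in the thesis, are the more useful form of the diagnostic; the sketch above only conveys why a KL-type distance discriminates between proposal choices.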

In both models, a Bayesian analysis is made where the posterior distribution is obtained by MCMC methods. The credible intervals are compared to the corresponding confidence intervals from the bootstrap analysis and are shown to give the same qualitative conclusions.

```
@phdthesis{diva2:242532,
author = {Vadeby, Anna},
title = {{Computer based statistical treatment in models with incidental parameters:
inspired by car crash data}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 814}},
year = {2003},
address = {Sweden},
}
```

The impact of some errors, associated with a road traffic survey, is examined. The survey aims at evaluating efforts to reduce the speeds on Swedish roads. It covers both state and urban roads, but we consider only the latter. From these roads, observational sites are selected by a three-stage sampling procedure. A measurement device installed on the road is used to collect data, from which the average speed on the roads is estimated.

We focus on errors in the frames used in the final sampling stage, and on errors due to missing data. The impact of these errors on the total error of the survey estimators is investigated. Also, possibilities to reduce the total error by weighting adjustments for missing data, and by re-allocation of the sample over sampling stages, are explored. We approach the problems partly theoretically, by use of various error models, and partly empirically, by collecting data on the errors. Throughout, the sampling design of the survey is taken properly into account. We conclude that the frame error under consideration probably does not bias the estimator of average speed, and only implies a minor increase of its variance. It remains unclear if the estimator needs to be adjusted for missing data; a theoretical frame for further investigations is however provided. For unchanged total sample size, the precision of the estimator is likely to improve if the sample sizes in stage three are increased, and the sample sizes in stage one decreased correspondingly.

```
@phdthesis{diva2:242426,
author = {Isaksson, Annica},
title = {{Survey models for a vehicle speed survey}},
school = {Linköping University},
type = {{Linköping Studies in Statistics No. 2}},
year = {2003},
address = {Sweden},
}
```

```
@phdthesis{diva2:254043,
author = {Karlsson, Jenny},
title = {{Optimization Models and Methods for Tactical and Operative Harvest Planning}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 979}},
year = {2002},
address = {Sweden},
}
```

The heat transfer in the filling phase of injection moulding is studied, based on Gunnar Aronsson’s distance model for flow expansion (Aronsson, 1996).

The choice of a thermoplastic materials model is motivated by general physical properties, admitting temperature and pressure dependence. Two-phase, per-phase-incompressible, power-law fluids are considered. The shear rate expression takes into account pseudo-radial flow from a point inlet.

Instead of using a finite element (FEM) solver for the momentum equations, a general analytical viscosity expression is used, adjusted to the current axial temperature profiles and yielding expressions for the axial velocity profile, pressure distribution, frozen layer expansion and special front convection.

The nonlinear energy partial differential equation is transformed into its conservative form, expressed by the internal energy, and is solved differently in the regions of streaming and stagnant flow, respectively. A finite difference (FD) scheme is chosen using control volume discretization to keep truncation errors small in the presence of non-uniform axial node spacing. Time and pseudo-radial marching is used. A local system of nonlinear FD equations is solved. In an outer iterative procedure the position of the boundary between the “solid” and “liquid” fluid cavity parts is determined. The uniqueness of the solution is claimed. In an inner iterative procedure the axial node temperatures are found. For all physically realistic material properties the convergence is proved. In particular the assumptions needed for the Newton-Mysovskii theorem are secured. The metal mould PDE is locally solved by a series expansion. For particular material properties the same technique can be applied to the “solid” fluid.
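
The control-volume idea above, chosen to keep truncation errors small on a non-uniform axial grid, can be sketched for a model heat equation. This is a minimal illustration, not the thesis's coupled nonlinear scheme; the grid, diffusivity, time step and boundary values are invented for the example:

```python
import numpy as np

def heat_fv_step(u, x, alpha, dt):
    """One explicit control-volume step for u_t = alpha * u_xx on a
    non-uniform grid x, with the end values held fixed (Dirichlet)."""
    u_new = u.copy()
    for i in range(1, len(x) - 1):
        # control-volume width: half-distance to each neighbouring node
        dv = 0.5 * (x[i + 1] - x[i - 1])
        # diffusive fluxes across the two cell faces
        flux_r = alpha * (u[i + 1] - u[i]) / (x[i + 1] - x[i])
        flux_l = alpha * (u[i] - u[i - 1]) / (x[i] - x[i - 1])
        u_new[i] = u[i] + dt * (flux_r - flux_l) / dv
    return u_new

# cooling of an initially hot slab between cold walls, nodes clustered
# near the walls where the gradients are steepest
x = np.array([0.0, 0.05, 0.15, 0.3, 0.5, 0.7, 0.85, 0.95, 1.0])
u = np.ones_like(x)
u[0] = u[-1] = 0.0
for _ in range(200):
    u = heat_fv_step(u, x, alpha=1.0, dt=1e-4)
```

Balancing fluxes over half-distance control volumes keeps the scheme conservative even though the node spacing varies, which is the point of the control-volume discretization mentioned above.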

In the circular plate application, comparisons with the commercial FEM-FD program Moldflow (Mfl) are made, on two Mfl-database materials, for which model parameters are estimated/adjusted. The resulting time evolutions of pressures and temperatures are analysed, as well as the radial and axial profiles of temperature and frozen layer. The greatest differences occur at the flow front, where Mfl neglects axial heat convection. The effects of using increasingly complex material models are also investigated. The performance of our method is reported.

In the polygonal star-shaped plate application a geometric cavity model is developed. Comparison runs with the commercial FEM-FD program Cadmould (Cmd) are performed, on two Cmd-database materials, in an equilateral triangular mould cavity, and materials model parameters are estimated/adjusted. The resulting average temperatures at the end of filling are compared, on rays of different angular deviation from the closest corner ray and on different concentric circles, using angular and axial (cavity-halves) symmetry. The greatest differences occur in narrow flow sectors, which prove fatal for our 2D model when a material with an unrealistic viscosity model is used. We present some colour plots, e.g. for the residence time.

The classical square-root increase by time of the frozen layer is used for extrapolation. It may also be part of the front model in the initial collision with the cold metal mould. An extension of the model is found which describes the radial profile of the frozen layer in the circular plate application accurately also close to the inlet.

The well-posedness of the corresponding linearized problem is studied, as well as the stability of the linearized FD-scheme.

```
@phdthesis{diva2:21489,
author = {Andersson, Per-Åke},
title = {{Computation of Thermal Development in Injection Mould Filling, based on the Distance Model}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 993}},
year = {2002},
address = {Sweden},
}
```

In many industrial applications one wishes to determine the temperature history on the surface of a body, where the surface itself is inaccessible for measurements. The *sideways heat equation* is a model of this situation. In a one-dimensional setting this is formulated mathematically as a Cauchy problem for the heat equation, where temperature and heat-flux data are available along the line *x*=1, and a solution is sought for 0 ≤ *x*< 1. This problem is ill-posed in the sense that the solution does not depend continuously on the data. Stability can be restored by replacing the time derivative in the heat equation by a bounded approximation. We consider both spectral and wavelet approximations of the derivative. The resulting problem is a system of ordinary differential equations in the space variable, which can be solved using standard methods, e.g. a Runge-Kutta method. The methods are analyzed theoretically, and error estimates are derived that can be used for selecting the appropriate level of regularization. The numerical implementation of the proposed methods is discussed. Numerical experiments demonstrate that the proposed methods work well, and can be implemented efficiently. Furthermore, the numerical methods can easily be adapted to solve problems with variable coefficients, and also non-linear equations. As test problems we take model equations, with constant and variable coefficients. Also, we solve problems from applications, with actual measured data.
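
A minimal sketch of the spectral approach, assuming a pure Fourier regularization of the time derivative with a hard frequency cutoff (the thesis also treats wavelet approximations and variable coefficients, which this toy example omits); the manufactured boundary data and the cutoff value are invented for the illustration:

```python
import numpy as np

def sideways_heat_spectral(g, dt, x, cutoff):
    """Recover u(x, t) from data g = u(1, t) for u_t = u_xx by
    multiplying in Fourier space with exp((1 - x) * sqrt(i*xi)) and
    zeroing the unstable frequencies above `cutoff`."""
    n = len(g)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dt)
    root = np.sqrt(1j * xi)            # principal branch, Re >= 0
    kernel = np.exp((1.0 - x) * root)  # grows with |xi|: the instability
    kernel[np.abs(xi) > cutoff] = 0.0  # spectral regularization
    return np.real(np.fft.ifft(kernel * np.fft.fft(g)))

# manufactured example: smooth surface temperature at x = 0,
# propagated to x = 1 to create the "measured" data, plus noise
n, T = 256, 8.0
t = np.linspace(0, T, n, endpoint=False)
dt = t[1] - t[0]
f = np.exp(-((t - 4.0) ** 2))                       # true u(0, t)
xi = 2 * np.pi * np.fft.fftfreq(n, d=dt)
g = np.real(np.fft.ifft(np.exp(-np.sqrt(1j * xi)) * np.fft.fft(f)))
g += 1e-4 * np.random.default_rng(0).standard_normal(n)
u0 = sideways_heat_spectral(g, dt, x=0.0, cutoff=15.0)
err = np.max(np.abs(u0 - f))
```

The cutoff plays the role of the regularization level mentioned above: low frequencies are inverted exactly, while the exponentially amplified high frequencies, dominated by measurement noise, are discarded.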

Inverse problems for the stationary heat equation are also discussed. Suppose that the Laplace equation is valid in a domain with a hole. Temperature and heat-flux data are given on the outer boundary, and we wish to compute the steady state temperature on the inner boundary. A standard approach is to discretize the equation by finite differences, and use Tikhonov's method for stabilizing the discrete problem, which leads to a large sparse least squares problem. Alternatively, we propose to use a conformal mapping to transform the domain into an annulus, where the equivalent problem can be solved using separation of variables. The ill-posedness is dealt with by filtering away high frequencies from the solution. Numerical results using both methods are presented. A closely related problem is that of determining the stationary temperature inside a body, from temperature and heat-flux measurements on a part of the boundary. In practical applications it is sometimes the case that the domain, where the differential equation is valid, is partly unknown. In such cases we want to determine not only the temperature, but also the shape of the boundary of the domain. This problem arises, for instance, in iron production, where the walls of a melting furnace are subject to both physical and chemical wear. In order to avoid a situation where molten metal breaks out through the walls, the thickness of the walls should be constantly monitored. This is done by solving an inverse problem for the stationary heat equation, where temperature and heat-flux data are available at certain locations inside the walls of the furnace. Numerical results are presented also for this problem.
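
The Tikhonov route can be sketched as an augmented least-squares problem; the Hilbert test matrix and the value of the regularization parameter below are invented stand-ins for the large sparse discretized operator:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares: minimize
    ||A x - b||^2 + lam^2 ||x||^2, solved by stacking lam*I under A
    and solving one ordinary least-squares problem."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# severely ill-conditioned stand-in for a discretized inverse problem
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(1).standard_normal(n)

x_naive = np.linalg.solve(A, b)    # blown up by the ill-conditioning
x_reg = tikhonov(A, b, lam=1e-4)   # stabilized solution
```

The augmented formulation keeps the problem in least-squares form, which is why the finite-difference discretization above leads naturally to a large sparse least squares problem.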

```
@phdthesis{diva2:255742,
author = {Berntsson, Fredrik},
title = {{Numerical methods for inverse heat conduction problems}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 723}},
year = {2001},
address = {Sweden},
}
```

In this thesis, the question "What kind of models can be used to describe microcosmos?" will be discussed. Being difficult and very large in scope, the question has here been restricted to whether or not Local Realistic models can be used to describe Quantum-Mechanical processes, one of a collection of questions often referred to as Quantum Paradoxes. Two such paradoxes will be investigated using techniques from probability theory: the Bell inequality and the Greenberger-Horne-Zeilinger (GHZ) paradox.

A problem with the two mentioned paradoxes is that they are only valid when the detectors are 100% efficient, whereas present experimental efficiency is much lower than that. Here, an approach is presented which enables a generalization of both the Bell inequality and the GHZ paradox to the inefficient case. This is done by introducing the concept of *change of ensemble*, which provides both qualitative and quantitative information on the nature of the "loophole" in the 100% efficiency prerequisite, and is more fundamental in this regard than the efficiency concept. Efficiency estimates are presented which are easy to obtain from experimental coincidence data, and a connection is established between these estimates and the concept of change of ensemble.
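
The efficiency issue can be illustrated with standard CHSH arithmetic (a simpler bound than the change-of-ensemble analysis developed in the thesis): for singlet-state correlations the quantum value reaches 2√2, while with detector efficiency η the local-realist bound weakens to 4/η − 2, so a violation requires η > 2(√2 − 1) ≈ 0.83. A small numerical check, using the standard optimal angles:

```python
import numpy as np

def chsh(a, a2, b, b2):
    """CHSH combination for singlet-state correlations E(x, y) = -cos(x - y)."""
    E = lambda x, y: -np.cos(x - y)
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# the standard optimal angles give the Tsirelson bound 2*sqrt(2) > 2
S = chsh(0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4)

# with detector efficiency eta the local-realist bound becomes
# S <= 4/eta - 2, so a violation requires eta > 4/(S + 2)
eta_min = 4.0 / (S + 2.0)
```

Since present experiments fall well below this threshold, the 100% efficiency prerequisite is a genuine loophole, which motivates the generalizations described above.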

The concept is also studied in the context of Franson interferometry, where the Bell inequality cannot immediately be used. Unexpected subtleties occur when trying to establish whether or not a Local Realistic model of the data is possible even in the ideal case. A Local Realistic model of the experiment is presented, but nevertheless, by introducing an additional requirement on the experimental setup it is possible to refute the mentioned model and show that no other Local Realistic model exists.

```
@phdthesis{diva2:259446,
author = {Larsson, Jan-Åke},
title = {{Quantum paradoxes, probability theory, and change of ensemble}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 654}},
year = {2000},
address = {Sweden},
}
```

We propose a new mathematical model for relative collision safety in cars. Our present research is restricted to head-on crashes between two cars, and we try to determine how much of the injury risk in a crash depends on the car model. The relative risks include the driver populations of the different car models. When two cars crash they are exposed to the same force, but the damage severity is different depending on various factors such as car mass, change of speed and design of the car. To explore the relative risks between different car models, we build a model where we let car mass, change of speed and design of the car explain the injury outcome in the crashes. The mathematical model we use is a birth process where we let the states correspond to the injury classes. A database containing police reported traffic accidents and hospital information is used to explore the relationships in our model.
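
A pure birth process over the injury classes can be sketched by simulation; the intensities, exposure time and number of classes below are hypothetical, not estimates from the accident database:

```python
import numpy as np

def birth_process_sample(rates, t_end, rng):
    """Sample the final state of a pure birth process observed at t_end.
    State k jumps to k+1 after an Exp(rates[k]) holding time; the last
    state (the most severe injury class) is absorbing."""
    state, t = 0, 0.0
    while state < len(rates):
        t += rng.exponential(1.0 / rates[state])
        if t > t_end:
            break
        state += 1
    return state

rng = np.random.default_rng(42)
rates = [2.0, 1.0, 0.5]   # hypothetical escalation intensities per class
samples = [birth_process_sample(rates, 1.0, rng) for _ in range(2000)]

# P(no escalation out of class 0) = exp(-rates[0] * t_end) = exp(-2)
p0 = samples.count(0) / len(samples)
```

In this view, covariates such as car mass and change of speed would enter through the intensities, which is what makes the birth-process formulation a natural vehicle for the injury-class outcome.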

A bootstrap analysis is made to produce a picture of the uncertainty of the estimates. The uncertainty from the bootstrap analysis is compared to the asymptotic estimate of the uncertainty given by the inverse of an information sub-matrix.

```
@phdthesis{diva2:582861,
author = {Vadeby, Anna},
title = {{Modelling and inference of relative collision safety in cars}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Thesis No. 685}},
year = {1998},
address = {Sweden},
}
```

A meta-database system is constructed for describing the contents of very large databases. The meta-database is implemented as data structures in a symbol manipulation language, separate from the underlying database system. A number of programs are built around the meta-database. The most important program module is a query compiler, which translates a non-procedural query language called LRL into a lower level language (COBOL). LRL permits the specification of database retrievals without stating which files are to be used in the search, or how they shall be connected. This is decided automatically by the query compiler. A major feature of the system is a method, the Focus method, for compile-time optimization of these choices. Other facilities include the definition of "views" of the database; data directory services; authority codes; and meta-database entry and update.

Design issues discussed include the decision to compile rather than interpret non-procedural query languages; the decision to separate the meta-database from the underlying database system; and the problem of achieving an architecture convertible to any underlying database system. Experience with one such conversion is reported.

```
@phdthesis{diva2:256469,
author = {Risch, Tore},
title = {{Compilation of multiple file queries in a meta-database system}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 33}},
year = {1978},
address = {Sweden},
}
```

A LISP compiler is constructed without any a priori assumptions about the target machine. In parallel with the compiler a LISP oriented instruction set is developed. The instruction set can be seen either as an intermediary language for a traditional computer or as the instruction set for a special purpose LISP machine. The code produced by the compiler is evaluated with regard to its static and dynamic properties. Finally, some architectural aspects of LISP oriented hardware are discussed. The notion of segments with different word lengths, under program control, is developed and a proposed implementation of this is described.

```
@phdthesis{diva2:256468,
author = {Urmi, Jaak},
title = {{A machine independent LISP compiler and its implications for ideal hardware}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 22}},
year = {1978},
address = {Sweden},
}
```

The purpose of the project described in this report is to study control structures in natural Swedish, especially those occurring when tasks of an algorithmic nature are described, and how to transform these specifications into programs, which can then be executed.

The report describes and discusses the solutions that are used in an implemented system which can read and comprehend descriptions of patience (solitaire) games. The results are partly language dependent, but are not restricted to this specific problem environment.

The system is divided into four modules. The syntactic module splits the sentence approximately into its component parts. In addition to the standard component categories, such as subject and predicate, every preposition is regarded as a component category of the sentence. The semantic analysis within a sentence works with a set of internalisation rules, one for each combination of a verb and a component part. The third module deals with the semantics on text level and integrates the representation of a sentence into the program code that is built up. The last module is an interpreter which can execute the programs representing patience games.

```
@phdthesis{diva2:258779,
author = {Cedwall, Mats},
title = {{Semantisk analys av processbeskrivningar i naturligt språk}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 18}},
year = {1977},
address = {Sweden},
}
```

Program manipulation is the task of performing transformations on program code, normally done in order to optimize the code with respect to the utilization of some computer resource. Partial evaluation is the task of performing those computations in a program that can be carried out before it is actually executed. If a parameter to a procedure is constant, a specialized version of that procedure can be generated by inserting the constant in place of the parameter in the procedure body and performing as much of the computation in the code as possible.
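
Partial evaluation as described can be sketched in miniature: specializing a power function to a constant exponent unfolds the loop at specialization time, so the residual program contains only multiplications. The function names below are invented for the illustration:

```python
def specialize_power(n):
    """Partially evaluate power(x, n) for a known constant exponent n:
    the loop over n runs at specialization time, and the residual
    program (built here as source text) contains only multiplications."""
    expr = "1.0"
    for _ in range(n):
        expr = f"({expr} * x)"        # unfold the loop body n times
    src = f"lambda x: {expr}"
    return eval(src), src

# a specialized cube function: the loop is gone from the residual code
pow3, src = specialize_power(3)
```

The same idea, applied to full LISP with side-effects and assignments, is what makes the problem hard and is the subject of the REDFUN-2 system described below.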

A system is described which works on programs written in INTERLISP, and which performs partial evaluation together with other transformations such as beta-expansion and certain other optimization operations. The system works on full LISP, not only on a "pure" LISP dialect, and deals with the problems occurring there involving side-effects, variable assignments, etc. An analysis of a previous system, REDFUN, results in a list of problems, desired extensions and new features. This is used as a basis for a new design, resulting in a new implementation, REDFUN-2. This implementation, design considerations, constraints in the system, remaining problems, and other experience from the development and experiments with the system are reported.

```
@phdthesis{diva2:256466,
author = {Haraldsson, Anders},
title = {{A program manipulation system based on partial evaluation}},
school = {Linköping University},
type = {{Linköping Studies in Science and Technology. Dissertations No. 14}},
year = {1977},
address = {Sweden},
}
```

Page manager:
fredrik.berntsson@liu.se

Last updated: Fri Dec 20 11:36:15 CET 2013