Minimization Problem
Selected Abstracts

Measuring and Optimizing Portfolio Credit Risk: A Copula-based Approach, ECONOMIC NOTES, Issue 3 2004, Annalisa Di Clemente

Abstract: In this work, we present a methodology for measuring and optimizing the credit risk of a loan portfolio, taking into account the non-normality of the credit loss distribution. In particular, we aim at accurately modelling joint default events for credit assets. To achieve this goal, we build the loss distribution of the loan portfolio by Monte Carlo simulation. The times until default of each obligor in the portfolio are simulated following a copula-based approach. In particular, we study four different types of dependence structure for the credit assets in the portfolio: the Gaussian copula, the Student's t-copula, the grouped t-copula and the Clayton n-copula (or Cook–Johnson copula). Our aim is to assess the impact of each type of copula on the value of different portfolio risk measures, such as expected loss, maximum loss, credit value at risk and expected shortfall. In addition, we want to verify whether and how the optimal portfolio composition may change when various types of copula are used to describe the default dependence structure. To optimize portfolio credit risk, we minimize the conditional value at risk, a risk measure that is both relevant and tractable, by solving a simple linear programming problem subject to the traditional constraints of balance, portfolio expected return and trading. The outcomes, in terms of optimal portfolio compositions, obtained under different default dependence structures are compared with each other. The solution of the risk minimization problem may suggest how to restructure inefficient loan portfolios to obtain their best risk/return profile. In the absence of a developed secondary market for loans, the investment strategies indicated by the solution vector may be implemented using credit default swaps.
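The linear program mentioned in this abstract is the well-known Rockafellar–Uryasev reformulation of CVaR minimization. A minimal sketch with invented scenario data (the asset returns, confidence level and return target below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, n = 500, 4                                   # Monte Carlo scenarios, assets
mu = np.array([0.03, 0.05, 0.07, 0.10])         # hypothetical mean returns
sd = np.array([0.02, 0.05, 0.08, 0.12])         # hypothetical volatilities
R = rng.normal(mu, sd, size=(S, n))             # simulated scenario returns

beta = 0.95        # CVaR confidence level
r_min = 0.05       # required expected portfolio return

# Decision vector x = [w (weights), t (VaR level), u (scenario excess losses)]
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1.0 - beta) * S))])

# u_s >= loss_s - t, with loss_s = -R_s @ w   <=>   -R w - t - u <= 0
A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)
# expected-return constraint:  mean(R) @ w >= r_min
A_ub = np.vstack([A_ub, np.concatenate([-R.mean(axis=0), [0.0], np.zeros(S)])])
b_ub = np.append(b_ub, -r_min)

A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)])[None, :]   # budget constraint
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S        # no short sales

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
w = res.x[:n]
print("weights:", np.round(w, 3), "  CVaR:", round(res.fun, 4))
```

Swapping the scenario generator for copula-based draws (Gaussian, t, grouped t, Clayton) changes only the construction of `R`; the LP itself is unchanged, which is what makes the comparison across dependence structures convenient.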
[source]

Gain-scheduling control of a rotary inverted pendulum by weight optimization and H∞ loop shaping procedure, ELECTRICAL ENGINEERING IN JAPAN, Issue 2 2008, Kazuhiro Yubai

Abstract: Gain-scheduling control is an effective method for plants whose dynamics change significantly according to the operating point. The frozen parameter method, a practical gain-scheduling controller synthesis method, interpolates the controllers designed at prespecified (frozen) operating points according to the current operating point. Hyde and Glover proposed a gain-scheduling control method in which the H∞ loop shaping procedure is adopted as the controller synthesis method at each operating point. The H∞ loop shaping procedure is based on shaping an open-loop characteristic with frequency weights and is known to be effective for ill-conditioned plants. However, selecting weights that satisfy the control specifications is a difficult job for the designer. This paper describes the design of suboptimal weights and a controller by means of an algorithm that maximizes the robust stability margin and shapes the open-loop characteristic into the desired shape at each operating point. In addition, we formulate the weight optimization problem as a generalized eigenvalue minimization problem, which reduces the burden on the designer in weight selection. Finally, we realize a robust, high-performance control system by scheduling both weights and controllers. The effectiveness of the proposed control system is verified in terms of the achieved robust stability margin and the experimental time responses of a rotary inverted pendulum, which involves strongly nonlinear dynamics. © 2008 Wiley Periodicals, Inc. Electr Eng Jpn, 163(2): 30–40, 2008; Published online in Wiley InterScience (www.interscience.wiley.com).
DOI 10.1002/eej.20647 [source]

A review of the adjoint-state method for computing the gradient of a functional with geophysical applications, GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2006, R.-E. Plessix

Summary: Estimating model parameters from measured data generally consists of minimizing an error functional. A classic technique for solving a minimization problem is to successively determine the minima of a series of linearized problems. This formulation requires the Fréchet derivatives (the Jacobian matrix), which can be expensive to compute. If the minimization is viewed as a non-linear optimization problem, only the gradient of the error functional is needed, and this gradient can be computed without the Fréchet derivatives. In the 1970s, the adjoint-state method was developed to compute the gradient efficiently. It is now a well-known method in the numerical community for computing the gradient of a functional with respect to the model parameters when the functional depends on those parameters through state variables that are solutions of the forward problem. However, the method is less well understood in the geophysical community. The goal of this paper is to review the adjoint-state method. The idea is to define adjoint-state variables that are solutions of a linear system. The adjoint-state variables are independent of the model parameter perturbations and, in a sense, gather the perturbations with respect to the state variables. The adjoint-state method is efficient because only one extra linear system needs to be solved. Several applications are presented. When the method is applied to the computation of the derivatives of ray trajectories, the link with the propagator of the perturbed ray equation is established. [source]

Steady-state 3D rolling-contact using boundary elements, INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 10 2007, R. Abascal

Abstract: This work presents a new approach to the steady-state rolling contact problem for 3D elastic bodies. The problem solution is achieved by minimizing a general function representing the equilibrium equation and the rolling-contact restrictions. The boundary element method is used to compute the elastic influence coefficients of the surface points involved in the contact (equilibrium equations), while the contact conditions are represented with the help of projection functions. Finally, the minimization problem is solved by the generalized Newton's method with line search. Classic rolling problems are also solved and discussed. Copyright © 2006 John Wiley & Sons, Ltd. [source]

A minimization principle for finite strain plasticity: incremental objectivity and immediate implementation, INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 12 2002, Eric Lorentz

Abstract: A finite strain plasticity formulation is proposed which meets several requirements that often appear contradictory. On the physical side, it is based on a multiplicative split of the deformation, hyperelasticity for the reversible part of the behaviour, and the maximal dissipation principle to define the evolution laws. On the numerical side, it is incrementally objective, and the integration over a time increment can be expressed as a minimization problem, a proper framework in which to examine questions of existence and uniqueness of solutions. Last but not least, the implementation is immediate, since it relies on the same equations for finite and infinitesimal transformations. Finally, the formulation is applied to von Mises plasticity with isotropic linear hardening and introduced in the finite element software Code_Aster®. The numerical computation of a cantilever beam shows that it leads to results in good agreement with those obtained with common approaches. Copyright © 2002 John Wiley & Sons, Ltd.
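The incremental-minimization viewpoint of the Lorentz abstract can be illustrated in one dimension for von Mises-type plasticity with linear isotropic hardening: the stress update over a time increment is recovered by minimizing an incremental potential over the plastic multiplier. The material constants and strain step below are invented for illustration, and the result is checked against the classical radial-return formula.

```python
import numpy as np
from scipy.optimize import minimize_scalar

E, H, sig_y = 200e3, 10e3, 250.0     # illustrative modulus, hardening, yield stress (MPa)

def incremental_potential(dg, eps, eps_p_n, alpha_n, s):
    """Stored energy plus dissipation accumulated over one increment."""
    eps_e = eps - eps_p_n - s * dg                     # elastic strain after the increment
    return 0.5 * E * eps_e**2 + 0.5 * H * (alpha_n + dg)**2 + sig_y * dg

def update(eps, eps_p_n, alpha_n):
    s = np.sign(E * (eps - eps_p_n))                   # trial flow direction
    res = minimize_scalar(incremental_potential,
                          args=(eps, eps_p_n, alpha_n, s),
                          bounds=(0.0, 1.0), method="bounded",
                          options={"xatol": 1e-12})
    return eps_p_n + s * res.x, alpha_n + res.x

# One strain step, compared against the closed-form radial-return increment
eps_p, alpha = update(0.004, 0.0, 0.0)
sig_tr = E * 0.004
dg_rr = max(0.0, (abs(sig_tr) - sig_y) / (E + H))
print(alpha, dg_rr)                                    # the two updates coincide
```

The same structure carries over to 3D and finite strains; only the potential and the set of internal variables grow.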
[source]

Piecewise constant level set method for structural topology optimization, INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2009, Peng Wei

Abstract: In this paper, a piecewise constant level set (PCLS) method is implemented to solve a structural shape and topology optimization problem. In the classical level set method, the geometrical boundary of the structure under optimization is represented by the zero level set of a continuous level set function, e.g. the signed distance function. In the PCLS approach, instead, the boundary is described by discontinuities of PCLS functions. The PCLS method is related to phase-field methods, and the topology optimization problem is defined as a minimization problem with piecewise constant constraints, without the need to solve the Hamilton–Jacobi equation. As a result, the method does not move boundaries during the iterative procedure, which offers some advantages in treating geometries: it eliminates reinitialization and naturally nucleates holes when needed. In the paper, the PCLS method is implemented with the additive operator splitting numerical scheme, and several numerical and procedural issues of the implementation are discussed. Examples of the 2D structural topology optimization problem of minimum compliance design are presented, illustrating the effectiveness of the proposed method. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Analysis of microstructure development in shearbands by energy relaxation of incremental stress potentials: Large-strain theory for standard dissipative solids, INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2003, Christian Miehe

Abstract: We propose a fundamentally new approach to the treatment of shearband localizations in strain-softening elastic–plastic solids at finite strains, based on energy minimization principles associated with microstructure developments.
The point of departure is a general internal variable formulation that determines the finite inelastic response as a standard dissipative medium. Consistent with this type of inelasticity, we consider an incremental variational formulation of the local constitutive response where a quasi-hyperelastic stress potential is obtained from a local constitutive minimization problem with respect to the internal variables. The existence of this variational formulation allows the definition of the material stability of an inelastic solid based on weak convexity conditions of the incremental stress potential, in analogy to treatments of finite elasticity. Furthermore, localization phenomena are interpreted as microstructure developments on multiple scales associated with non-convex incremental stress potentials, in analogy to elastic phase decomposition problems. These microstructures can be resolved by the relaxation of non-convex energy functionals based on a convexification of the stress potential. The relaxed problem provides a well-posed formulation for a mesh-objective analysis of localizations as close as possible to the non-convex original problem. Based on an approximated rank-one convexification of the incremental stress potential, we develop a computational two-scale procedure for a mesh-objective treatment of localization problems at finite strains. It constitutes a local minimization problem for a relaxed incremental stress potential with just one scalar variable representing the intensity of the microshearing of a rank-one laminate aligned to the shear band. This problem is sufficiently robust with regard to applications to large-scale inhomogeneous deformation processes of elastic–plastic solids. The performance of the proposed energy relaxation method is demonstrated for a representative set of numerical simulations of straight and curved shear bands which report on the mesh independence of the results. Copyright © 2003 John Wiley & Sons, Ltd.
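The convexification step underlying the relaxation described above can be made concrete for a scalar toy energy: in one dimension the rank-one convex and convex envelopes coincide, and the envelope can be computed with a discrete double Legendre–Fenchel transform. The double-well energy below is an invented stand-in for a condensed non-convex incremental potential, not the paper's model.

```python
import numpy as np

def convex_envelope(x, w, p_max=8.0, n_p=2001):
    """Discrete convex envelope via two Legendre-Fenchel transforms: W** = (W*)*."""
    p = np.linspace(-p_max, p_max, n_p)                             # dual (slope) grid
    w_star = (p[:, None] * x[None, :] - w[None, :]).max(axis=1)     # W*(p)
    return (x[:, None] * p[None, :] - w_star[None, :]).max(axis=1)  # W**(x)

x = np.linspace(-2.0, 2.0, 801)
w = (x**2 - 1.0) ** 2          # non-convex double-well "incremental potential"
w_rel = convex_envelope(x, w)

# The relaxed potential replaces the non-convex region between the wells by a
# flat (here zero) segment, mimicking energy reduction by microstructure formation.
print(w_rel[np.abs(x) <= 0.9].max())
```

In the shear-band setting of the abstract, the analogous computation is a one-parameter minimization over the microshearing intensity of a rank-one laminate rather than a full envelope.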
[source]

Strain-driven homogenization of inelastic microstructures and composites based on an incremental variational formulation, INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2002, Christian Miehe

Abstract: The paper investigates computational procedures for the treatment of a homogenized macro-continuum with locally attached micro-structures of inelastic constituents undergoing small strains. The point of departure is a general internal variable formulation that determines the inelastic response of the constituents of a typical micro-structure as a generalized standard medium in terms of an energy storage and a dissipation function. Consistent with this type of inelasticity, we develop a new incremental variational formulation of the local constitutive response where a quasi-hyperelastic micro-stress potential is obtained from a local minimization problem with respect to the internal variables. It is shown that this local minimization problem determines the internal state of the material for finite increments of time. We specify the local variational formulation for a setting of smooth single-surface inelasticity and discuss its numerical solution based on a time discretization of the internal variables. The existence of the quasi-hyperelastic stress potential allows the extension of homogenization approaches of elasticity to the incremental setting of inelasticity. Focusing on macro-strain-driven micro-structures, we develop a new incremental variational formulation of the global homogenization problem where a quasi-hyperelastic macro-stress potential is obtained from a global minimization problem with respect to the fine-scale displacement fluctuation field. It is shown that this global minimization problem determines the state of the micro-structure for finite increments of time.
We consider three different settings of the global variational problem for prescribed linear displacements, periodic fluctuations and constant stresses on the boundary of the micro-structure, and discuss their numerical solutions based on a spatial discretization of the fine-scale displacement fluctuation field. The performance of the proposed methods is demonstrated for the model problem of von Mises-type elasto-visco-plasticity of the constituents and applied to a comparative study of micro-to-macro transitions of inelastic composites. Copyright © 2002 John Wiley & Sons, Ltd. [source]

A least square extrapolation method for the a posteriori error estimate of the incompressible Navier–Stokes problem, INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2005, M. Garbey

Abstract: A posteriori error estimators are fundamental tools for providing confidence in the numerical computation of PDEs. To date, the main theories of a posteriori estimators have been developed largely in the finite element framework, for either linear elliptic operators or non-linear PDEs in the absence of disparate length scales. On the other hand, there is a strong interest in using grid refinement combined with Richardson extrapolation to produce CFD solutions with improved accuracy and, therefore, a posteriori error estimates. But in practice, the effective order of a numerical method often depends on space location and is not uniform, rendering the Richardson extrapolation method unreliable. We have recently introduced (Garbey, 13th International Conference on Domain Decomposition, Barcelona, 2002; 379–386; Garbey and Shyy, J. Comput. Phys. 2003; 186:1–23) a new method which estimates the order of convergence of a computation as the solution of a least square minimization problem on the residual. This method, called least square extrapolation, introduces a framework facilitating multi-level extrapolation, improves accuracy and provides an a posteriori error estimate.
This method can accommodate different grid arrangements. The goal of this paper is to investigate the power and limits of this method via incompressible Navier–Stokes flow computations. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Convergence properties of bias-eliminating algorithms for errors-in-variables identification, INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 9 2005, Torsten Söderström

Abstract: This paper considers the problem of dynamic errors-in-variables identification. Convergence properties of the previously proposed bias-eliminating algorithms are investigated. An error dynamic equation for the bias-eliminating parameter estimates is derived. It is shown that the convergence of the bias-eliminating algorithms is basically determined by the eigenvalue of largest magnitude of a system matrix in the estimation error dynamic equation. When this system matrix has all its eigenvalues well inside the unit circle, the bias-eliminating algorithms can converge fast. In order to avoid possible divergence of the iteration-type bias-eliminating algorithms in the case of high noise, the bias-eliminating problem is re-formulated as a minimization problem associated with a concentrated loss function. A variable projection algorithm is proposed to efficiently solve the resulting minimization problem. A numerical simulation study is conducted to demonstrate the theoretical analysis. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Iteration domain H∞-optimal iterative learning controller design, INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 10 2008, Kevin L. Moore

Abstract: This paper presents an H∞-based design technique for the synthesis of higher-order iterative learning controllers (ILCs) for plants subject to iteration-domain input/output disturbances and plant model uncertainty.
By formulating the higher-order ILC problem in a high-dimensional multivariable discrete-time system framework, it is shown how the addition of input/output disturbances and plant model uncertainty to the ILC problem can be cast as an H∞-norm minimization problem. The distinctive feature of this formulation is that the uncertainty is considered as arising in the iteration domain rather than the time domain. An algebraic approach to solving the problem in this framework is presented, resulting in a sub-optimal controller that can achieve both stability and robust performance. The key observation is that H∞ synthesis can be used for higher-order ILC design to achieve reliable performance in the presence of iteration-varying external disturbances and model uncertainty. Copyright © 2007 John Wiley & Sons, Ltd. [source]

A nonlinear minimization approach to multiobjective and structured controls for discrete-time systems, INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 16 2004, Kwan Ho Lee

Abstract: In this paper, a nonlinear minimization approach is proposed for multiobjective and structured controls for discrete-time systems. The problem of finding multiobjective and structured controls for discrete-time systems is represented as a quadratic matrix inequality problem. It is shown that the problem reduces to a nonlinear minimization problem that has a concave objective function and linear matrix inequality constraints. An algorithm for the nonlinear minimization problem is proposed, which is easily implemented with existing semidefinite programming algorithms. The validity of the proposed algorithm is illustrated by comparisons with existing methods. In addition, applications of this work are demonstrated via numerical examples. Copyright © 2004 John Wiley & Sons, Ltd.
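The iteration-domain view used in the ILC abstract above can be illustrated with a minimal lifted-system learning loop: each trial is stacked into one vector, and learning acts across iterations rather than across time. The plant, learning gain and model-inversion update below are invented for illustration and are far simpler than the H∞ synthesis discussed in the abstract.

```python
import numpy as np

N = 50                                  # samples per trial
a, b = 0.8, 1.0                         # hypothetical first-order plant x+ = a x + b u, y = x

# Lifted description: y = H u, with H a lower-triangular Toeplitz matrix of
# Markov parameters b, a b, a^2 b, ...
h = b * a ** np.arange(N)
H = np.zeros((N, N))
for i in range(N):
    H[i, : i + 1] = h[: i + 1][::-1]

r = np.sin(np.linspace(0.0, 2.0 * np.pi, N))   # reference repeated every trial
gamma = 0.5                                     # learning gain in (0, 2)

u = np.zeros(N)
errs = []
for k in range(20):                             # iteration domain
    e = r - H @ u
    errs.append(np.linalg.norm(e))
    u = u + gamma * np.linalg.solve(H, e)       # model-inversion ILC update

print(errs[0], errs[-1])                        # error contracts by (1 - gamma) per iteration
```

With an exact plant model the error iteration is e_{k+1} = (1 - gamma) e_k; robust (H∞) ILC design replaces the exact inverse with a learning filter synthesized to keep this contraction under model uncertainty and iteration-varying disturbances.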
[source]

Solving the irregular strip packing problem via guided local search for overlap minimization, INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 6 2009, Shunji Umetani

Abstract: The irregular strip-packing problem (ISP) requires a given set of non-convex polygons to be placed without overlap within a rectangular container having a fixed width and a variable length, which is to be minimized. As a core sub-problem in solving ISP, we consider an overlap minimization problem (OMP) whose objective is to place all polygons into a container with given width and length so that the total amount of overlap between polygons is made as small as possible. We propose to use directional penetration depths to measure the amount of overlap between a pair of polygons, and present an efficient algorithm to find a position with the minimum overlap for each polygon when it is translated in a specified direction. Based on this, we develop a local search algorithm for OMP that translates a polygon in horizontal and vertical directions alternately. We then incorporate this local search into our algorithm for ISP, which is a variant of the guided local search algorithm. Computational results show that our algorithm improves the best-known values of some well-known benchmark instances. [source]

GenX: an extensible X-ray reflectivity refinement program utilizing differential evolution, JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 6 2007, Matts Björck

GenX is a versatile program using the differential evolution algorithm for fitting X-ray and neutron reflectivity data. It utilizes the Parratt recursion formula for simulating specular reflectivity. The program is easily extensible, allowing users to incorporate their own models into the program. This can be useful for fitting data from other scattering experiments, or for any other minimization problem which has a large number of input parameters and/or contains many local minima, where the differential evolution algorithm is suitable.
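Differential evolution of the kind GenX relies on is convenient for exactly such multimodal least-squares fits. A generic sketch using SciPy's implementation on an invented damped-cosine model (not a reflectivity calculation; all data and parameters are illustrative):

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 200)
true = np.array([1.5, 2.2, 0.8])                 # amplitude, frequency, decay (invented)
y = true[0] * np.cos(true[1] * x) * np.exp(-true[2] * x) + 0.02 * rng.normal(size=x.size)

def sse(theta):
    """Sum-of-squares misfit between model and noisy data."""
    a, w, d = theta
    return float(np.sum((a * np.cos(w * x) * np.exp(-d * x) - y) ** 2))

# The frequency parameter makes the residual landscape highly multimodal,
# which is where a population-based global optimizer earns its keep.
res = differential_evolution(sse, bounds=[(0.0, 5.0), (0.0, 5.0), (0.0, 2.0)],
                             seed=0, tol=1e-10)
print(np.round(res.x, 2))
```

A gradient-based local method started from a poor initial guess would typically lock onto a wrong frequency; the evolutionary search explores the bounded parameter box globally.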
In addition, GenX can fit an arbitrary number of data sets simultaneously. The program is released under the GNU General Public License. [source]

An efficient nonlinear programming strategy for PCA models with incomplete data sets, JOURNAL OF CHEMOMETRICS, Issue 6 2010, Rodrigo López-Negrete de la Fuente

Abstract: Processing plants can produce large amounts of data that process engineers use for analysis, monitoring, or control. Principal component analysis (PCA) is well suited to analyzing large amounts of (possibly) correlated data and to reducing the dimensionality of the variable space. Failing online sensors, lost historical data, or missing experiments can lead to data sets that have missing values, where the current methods for obtaining the PCA model parameters may give questionable results due to the properties of the estimated parameters. This paper proposes a method based on nonlinear programming (NLP) techniques to obtain the parameters of PCA models in the presence of incomplete data sets. We show the relationship that exists between the nonlinear iterative partial least squares (NIPALS) algorithm and the optimality conditions of the squared-residuals minimization problem, and how this leads to the modified NIPALS used for the missing value problem. Moreover, we compare the current NIPALS-based methods with the proposed NLP using a simulation example and an industrial case study, and show how the latter is better suited when there are large amounts of missing values. The solutions obtained with the NLP and the iterative algorithm (IA) are very similar. However, when using the NLP-based method, the loadings and scores are guaranteed to be orthogonal, and the scores will have zero mean. The latter is emphasized in the industrial case study. Also, with the industrial data used here we are able to show that the models obtained with the NLP were easier to interpret. Moreover, many fewer iterations were required to obtain them when using the NLP.
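The NIPALS-with-missing-data iteration that the NLP method is compared against can be sketched as follows (first component only, on synthetic rank-one data; this is the textbook iterative algorithm, not the authors' NLP formulation):

```python
import numpy as np

def nipals_missing(X, n_iter=500, tol=1e-10):
    """First principal component by NIPALS, using only the observed entries of X."""
    M = ~np.isnan(X)                      # mask of observed entries
    Xf = np.where(M, X, 0.0)              # zero-filled copy (zeros drop out of the sums)
    t = Xf[:, 0].copy()                   # crude score initialization
    for _ in range(n_iter):
        # loadings: regress each column on t over that column's observed rows
        p = (Xf * t[:, None]).sum(axis=0) / (M * t[:, None] ** 2).sum(axis=0)
        p /= np.linalg.norm(p)
        # scores: regress each row on p over that row's observed columns
        t_new = (Xf * p[None, :]).sum(axis=1) / (M * p[None, :] ** 2).sum(axis=1)
        if np.linalg.norm(t_new - t) < tol:
            return t_new, p
        t = t_new
    return t, p

rng = np.random.default_rng(1)
scores = rng.normal(size=60)
load = np.array([0.6, 0.8, 0.0])          # hypothetical rank-one loading vector
X = np.outer(scores, load) + 0.01 * rng.normal(size=(60, 3))
X[rng.random(X.shape) < 0.2] = np.nan      # knock out roughly 20% of the entries

t, p = nipals_missing(X)
print(np.round(np.abs(p), 2))              # loading direction recovered despite the gaps
```

As the abstract notes, this iterative scheme no longer guarantees orthogonal loadings or zero-mean scores across several components once data are missing, which is the gap the NLP formulation closes.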
Copyright © 2010 John Wiley & Sons, Ltd. [source]

Modular solvers for image restoration problems using the discrepancy principle, NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 5 2002, Peter Blomgren

Abstract: Many problems in image restoration can be formulated either as an unconstrained non-linear minimization problem, usually with a Tikhonov-like regularization, where the regularization parameter has to be determined; or as a fully constrained problem, where an estimate of the noise level, either the variance or the signal-to-noise ratio, is available. The two formulations are mathematically equivalent. However, in practice it is much easier to develop algorithms for the unconstrained problem, and it is not always obvious how to adapt such methods to solve the corresponding constrained problem. In this paper, we present a new method which can make use of any existing convergent method for the unconstrained problem to solve the constrained one. The new method is based on a Newton iteration applied to an extended system of non-linear equations, which couples the constraint and the regularized problem, but it does not require knowledge of the Jacobian of the irregularity functional. The existing solver is used only as a black-box solver which, for a fixed regularization parameter, returns an improved solution to the unconstrained minimization problem given an initial guess. The new modular solver enables us to easily solve the constrained image restoration problem; the solver automatically identifies the regularization parameter during the iterative solution process. We present some numerical results, which indicate that even in the worst case the constrained solver requires only about twice as much work as the unconstrained one, and in some instances the constrained solver can even be faster. Copyright © 2002 John Wiley & Sons, Ltd.
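The modular idea of the Blomgren abstract, reusing an unconstrained Tikhonov solver as a black box and adjusting the regularization parameter until the residual matches the noise level, can be sketched on a toy 1D deblurring problem. The outer loop here is a plain bisection on log λ rather than the paper's Newton coupling, and all problem data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80
grid = np.arange(n)
# Toy 1D Gaussian blur operator and a piecewise-constant "image"
A = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.where((grid > 25) & (grid < 55), 1.0, 0.0)
noise = 0.01 * rng.normal(size=n)
b = A @ x_true + noise
delta = np.linalg.norm(noise)            # noise level, assumed known

def tikhonov(lam):
    """Black-box unconstrained solver: argmin ||Ax - b||^2 + lam ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Outer loop: bisection on log10(lam) until ||A x - b|| = delta (discrepancy
# principle); the residual norm is continuous and increasing in lam.
lo, hi = -12.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if np.linalg.norm(A @ tikhonov(10.0 ** mid) - b) > delta:
        hi = mid                          # residual too large: decrease lam
    else:
        lo = mid                          # residual below noise level: increase lam
lam = 10.0 ** (0.5 * (lo + hi))
x = tikhonov(lam)
print(lam, np.linalg.norm(A @ x - b) / delta)   # residual norm matches the noise level
```

The inner solver is treated purely as a black box, exactly the modularity the abstract argues for; any convergent unconstrained method could replace the direct solve.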
[source]

A new method for mixed H2/H∞ control with regional pole constraints, OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 3 2003, Jenq-Lang Wu

Abstract: In this paper, the problem of state feedback mixed H2/H∞ control with regional pole constraints is studied. The constraint region is represented by several algebraic inequalities. This constrained optimization problem cannot be solved via the LMI approach. Based on the barrier method, we instead solve an auxiliary minimization problem to obtain an approximate solution. We show that the minimal solution of the auxiliary minimization problem can be made arbitrarily close to the infimal solution of the original problem. An example is provided to illustrate the benefits of the approach. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Incomplete sensitivities and cost function reformulation leading to multi-criteria investigation of inverse problems, OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 2 2003, A. Cabot

Abstract: This paper deals with the application of typical minimization methods based on dynamical systems to the solution of a characteristic inverse problem. The state equation is based on the Burgers equation. The control is meant to achieve a prescribed state distribution and a given shock location. We show how to use incomplete sensitivities during the minimization process. We also show, through a redefinition of the cost function, that a multi-criteria problem needs to be considered in inverse problems. This example shows that a correct definition of the minimization problem is crucial and needs to be studied before a direct application of brute-force minimization approaches. Copyright © 2003 John Wiley & Sons, Ltd. [source]

A Production Function with an Inferior Input: Comment, THE MANCHESTER SCHOOL, Issue 6 2001, Christian E. Weber

Epstein and Spiegel (The Manchester School, Vol. 68 (2000), No. 5, pp.
503–515) have discussed a production function in which one input is inferior: an increase in the target level of output reduces the quantity of the input demanded. This paper provides a more straightforward proof that the input in question is inferior. This proof has the added advantage that, unlike the proof of Epstein and Spiegel, it is based on the firm's cost minimization problem. It thus emphasizes the connection between the firm's cost minimization problem and the issue of input inferiority. It is also shown that, if we treat the Epstein–Spiegel functional form as a utility function rather than a production function, then the inferior good can exhibit Giffen behavior. [source]

Structural design of composite nonlinear feedback control for linear systems with actuator constraint, ASIAN JOURNAL OF CONTROL, Issue 5 2010, Weiyao Lan

Abstract: The performance of the composite nonlinear feedback (CNF) control law relies on the selection of the linear feedback gain and the nonlinear function. However, it is difficult to select an appropriate linear feedback gain and appropriate parameters of the nonlinear function, because the general CNF design procedure gives only simple guidelines for these selections. This paper proposes an operational design procedure based on the structural decomposition of linear systems with input saturation. The linear feedback gain is constructed from two linear gains that are designed independently to stabilize the unstable zero-dynamics part and the pure-integration part of the system, respectively. By investigating the influence of these two linear gains on transient performance, the design of a satisfactory linear feedback gain for the CNF control law becomes flexible and efficient. Moreover, the parameters of the nonlinear function are tuned automatically by solving a minimization problem. The proposed design procedure is illustrated by applying it to design a tracking control law for the inverted pendulum on a cart system.
Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]

MINIMAL VALID AUTOMATA OF SAMPLE SEQUENCES FOR DISCRETE EVENT SYSTEMS, ASIAN JOURNAL OF CONTROL, Issue 2 2004, Sheng-Luen Chung

Abstract: Minimal valid automata (MVA) are valid automaton models that fit a given input-output sequence sample from a Mealy machine model. They are minimal in the sense that the number of states in these automata is minimal. Critical to system identification problems for discrete event systems, MVA can be considered a special case of the minimization problem for incompletely specified sequential machines (ISSM). While the minimization of an ISSM is in general an NP-complete problem, various approaches have been proposed to alleviate the computational requirement by exploiting special structural properties of the ISSM at hand. In essence, the MVA problem is to find the minimal realization of an ISSM in which each state has only one subsequent state transition defined. This paper presents an algorithm that divides the minimization process into two phases: first, to produce a reduced machine for the equivalent sequential machine; and then to minimize the reduced machine into minimal realization solutions. An example with comprehensive coverage of how the associated minimal valid automata are derived is also included. [source]

Action minimization and sharp-interface limits for the stochastic Allen-Cahn equation, COMMUNICATIONS ON PURE & APPLIED MATHEMATICS, Issue 3 2007, Robert V. Kohn

We study the action minimization problem that is formally associated with phase transformation in the stochastically perturbed Allen-Cahn equation. The sharp-interface limit is related to (but different from) the sharp-interface limits of the related energy functional and deterministic gradient flows. In the sharp-interface limit of the action minimization problem, we find distinct "most likely switching pathways," depending on the relative costs of nucleation and propagation of interfaces.
This competition is captured by the limit of the action functional, which we derive formally and propose as the natural candidate for the Γ-limit. Guided by the reduced functional, we prove upper and lower bounds for the minimal action that agree at the level of scaling. © 2006 Wiley Periodicals, Inc. [source]

Improving the performance of natural gas pipeline networks fuel consumption minimization problems, AICHE JOURNAL, Issue 4 2010, F. Tabkhi

Abstract: As the gas industry has developed, gas pipeline networks have evolved over decades into very complex systems. A typical network today might consist of thousands of pipes, dozens of stations, and many other devices such as valves and regulators. Inside each station there can be several groups of compressor units of various vintages, installed as the capacity of the system expanded. The compressor stations typically consume about 3–5% of the transported gas, and it is estimated that global optimization of operations can considerably reduce the fuel consumed by the stations. Hence, the problem of minimizing fuel cost is of great importance. Consequently, the objective is to operate a given compressor station, or a set of compressor stations, so that the total fuel consumption is reduced while the desired throughput in the line is maintained. Two case studies illustrate the proposed methodology. Case 1 was chosen for its simple, small-size design, developed for the sake of illustration; the implementation of the methodology is presented in detail and typical results are analyzed. Case 2 was submitted by the French company Gaz de France. It is a more complex network containing several loops, supply nodes, and delivery points, referred to as a multisupply, multidelivery transmission network. The key points of implementing an optimization framework are presented.
The treatment of both case studies provides guidelines for optimizing the operating performance of pipeline networks, according to the complexity of the problems involved. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]

Analysis of the bounded variation and the G regularization for nonlinear inverse problems
MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 9 2010
I. Cimrák
Abstract We analyze the energy method for inverse problems. We study the unconstrained minimization of an energy functional consisting of a least-squares fidelity term and two regularization terms: the seminorm in the BV space and the norm in the G space. We consider a coercive (non)linear operator modelling the forward problem. We establish uniqueness and stability results for the minimization problems. Stability is studied with respect to perturbations in the data, in the operator, and in the regularization parameters. We also establish convergence results for the general minimization schemes. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Implementing a proximal algorithm for some nonlinear multicommodity flow problems
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 1 2007
Adam Ouorou
Abstract In this article, we consider applying a proximal algorithm introduced by Ouorou to some convex multicommodity network flow minimization problems. This algorithm follows the characterization of saddle points introduced earlier, but can also be derived from Martinet's proximal algorithm. In the primal space, the algorithm can be viewed as a regularized version of Rosen's projection algorithm. A remarkable feature of the algorithm is that the projection step for multicommodity flow problems reduces to solving independent linear systems (one per commodity) involving the node-arc incidence matrix of the network. The algorithm is therefore amenable to parallel implementation.
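The proximal machinery underlying such algorithms can be illustrated on a one-dimensional problem. This is a sketch only; the article's method additionally exploits the multicommodity structure and the per-commodity projection step, none of which is reproduced here:

```python
# Proximal point iteration x_{k+1} = prox_{lam*f}(x_k) for f(y) = (y - a)^2.
# For this f the proximal map has a closed form, obtained by setting the
# derivative of f(y) + (1/(2*lam)) * (y - x)^2 to zero.

def prox_quadratic(x, lam, a=3.0):
    """argmin over y of (y - a)^2 + (1/(2*lam)) * (y - x)^2."""
    return (2.0 * lam * a + x) / (2.0 * lam + 1.0)

def proximal_point(x0, lam=1.0, iters=60):
    """Iterate the proximal map; converges to the minimizer of f."""
    x = x0
    for _ in range(iters):
        x = prox_quadratic(x, lam)
    return x

x_star = proximal_point(100.0)  # approaches the minimizer a = 3
```

Each iteration contracts the distance to the minimizer by a factor 1/(2λ+1), so a larger regularization parameter λ means faster outer convergence but, in realistic settings, a harder subproblem per step; balancing that trade-off is a central implementation concern.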
We present numerical results on large-scale routing problems arising in telecommunications and on quadratic multicommodity flow problems. A comparison with a specialized code for multicommodity flow problems indicates that this proximal algorithm is particularly well suited to very large-scale instances. © 2006 Wiley Periodicals, Inc. NETWORKS, Vol. 49(1), 18-27, 2007 [source]

Primal-dual Newton interior point methods in shape and topology optimization
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 5-6 2004
R. H. W. Hoppe
Abstract We consider nonlinear minimization problems with both equality and inequality constraints on the state variables and design parameters, as they typically arise in shape and topology optimization. In particular, the state variables are subject to a partial differential equation, or a system of partial differential equations, describing the operating behaviour of the device or system to be optimized. For the numerical solution of the appropriately discretized problems, we emphasize all-in-one approaches in which the numerical solution of the discretized state equations is an integral part of the optimization routine. Such an approach is given by primal-dual Newton interior point methods, which we present combined with a suitable steplength selection and a watchdog strategy for convergence monitoring. As applications, we deal with the topology optimization of electric drives for high-power electromotors and with the shape optimization of biotemplated microcellular biomorphic ceramics based on homogenization modelling. Copyright © 2004 John Wiley & Sons, Ltd. [source]

H∞ model reduction for uncertain two-dimensional discrete systems
OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 4 2005
Huijun Gao
Abstract This paper investigates the problem of H∞ model reduction for two-dimensional (2-D) discrete systems with parameter uncertainties residing in a polytope.
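Returning to the interior point abstract above: its Newton-on-a-barrier core can be demonstrated on a toy constrained problem. This is a purely primal log-barrier sketch; the paper's primal-dual formulation, steplength selection, and watchdog strategy are not reproduced here:

```python
# Solve  minimize (x - 2)^2  subject to  x <= 1   (optimum: x* = 1)
# by Newton's method on the barrier function phi_t(x) = t*(x-2)^2 - log(1-x),
# driving the barrier parameter t upward in an outer loop.
import math

def barrier_solve(t, x, iters=50):
    """Damped Newton iteration on phi_t, keeping x strictly feasible."""
    for _ in range(iters):
        g = 2.0 * t * (x - 2.0) + 1.0 / (1.0 - x)   # gradient of phi_t
        h = 2.0 * t + 1.0 / (1.0 - x) ** 2          # Hessian (positive)
        step = g / h
        while x - step >= 1.0:                       # damp to stay in x < 1
            step *= 0.5
        x -= step
    return x

x, t = 0.0, 1.0
for _ in range(8):           # outer loop: tighten the barrier
    x = barrier_solve(t, x)
    t *= 10.0
# x is now close to the constrained optimum x* = 1 (the gap shrinks like 1/t)
```

In the paper's setting the inner Newton system additionally couples the discretized PDE state equations with the design variables, which is what the all-in-one approach refers to.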
For a given robustly stable system, our attention is focused on constructing a reduced-order model that also resides in a polytope and approximates the original system well in an H∞ norm sense. Both Fornasini-Marchesini local state-space (FMLSS) and Roesser models are considered through parameter-dependent approaches, with sufficient conditions obtained for the existence of admissible reduced-order solutions. Since the conditions obtained are not expressed as strict linear matrix inequalities (LMIs), the cone complementarity linearization method is exploited to cast them into sequential minimization problems subject to LMI constraints, which can be readily solved using standard numerical software. In addition, the development of zeroth-order models is also presented. Two numerical examples show the effectiveness of the proposed theories. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Existence and convergence for quasi-static evolution in brittle fracture
COMMUNICATIONS ON PURE & APPLIED MATHEMATICS, Issue 10 2003
Gilles A. Francfort
This paper investigates the mathematical well-posedness of the variational model of quasi-static growth of a brittle crack proposed by Francfort and Marigo in [15]. The starting point is a time-discretized version of that evolution, which results in a sequence of minimization problems for Mumford and Shah type functionals. The natural weak setting is that of special functions of bounded variation, and the main difficulty in showing existence of the time-continuous quasi-static growth is passing to the limit as the time-discretization step tends to 0. This is performed with the help of a jump transfer theorem which permits, for a sequence {un} of SBV functions converging weakly to a BV limit u, the transfer of the part of the jump set of any test field lying in the jump set of u onto the jump set of the converging sequence {un}.
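For reference, Mumford and Shah type functionals in the SBV setting take the generic form (a standard statement of the model, not a quotation from the paper):

```latex
E(u) \;=\; \int_{\Omega \setminus S_u} |\nabla u|^2 \, dx
      \;+\; \mathcal{H}^{n-1}(S_u),
\qquad u \in SBV(\Omega),
```

where S_u denotes the jump set of u and H^{n-1} the (n-1)-dimensional Hausdorff measure. In the fracture model the jump set plays the role of the crack and the surface term is the energy spent creating it; the jump transfer theorem controls precisely this surface term along weakly converging sequences.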
In particular, it is shown that the notion of a minimizer of a Mumford and Shah type functional with respect to its own jump set is stable under weak convergence assumptions. Furthermore, our analysis justifies numerical methods used for computing the time-continuous quasi-static evolution. © 2003 Wiley Periodicals, Inc. [source]