Objective Function


  • Selected Abstracts


    Value Maximisation, Stakeholder Theory, and the Corporate Objective Function

    EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2001
    Michael Jensen
This paper examines the role of the corporate objective function in corporate productivity and efficiency, social welfare, and the accountability of managers and directors. I argue that since it is logically impossible to maximise in more than one dimension, purposeful behaviour requires a single-valued objective function. Two hundred years of work in economics and finance implies that in the absence of externalities and monopoly (and when all goods are priced), social welfare is maximised when each firm in an economy maximises its total market value. Total value is not just the value of the equity but also includes the market values of all other financial claims including debt, preferred stock, and warrants. In sharp contrast, stakeholder theory argues that managers should make decisions so as to take account of the interests of all stakeholders in a firm (including not only financial claimants, but also employees, customers, communities, governmental officials and, under some interpretations, the environment, terrorists and blackmailers). Because the advocates of stakeholder theory refuse to specify how to make the necessary tradeoffs among these competing interests, they leave managers with a theory that makes it impossible for them to make purposeful decisions. With no way to keep score, stakeholder theory makes managers unaccountable for their actions. It seems clear that such a theory can be attractive to the self-interest of managers and directors. Creating value takes more than acceptance of value maximisation as the organisational objective. As a statement of corporate purpose or vision, value maximisation is not likely to tap into the energy and enthusiasm of employees and managers to create value. Seen in this light, change in long-term market value becomes the scorecard that managers, directors, and others use to assess the success or failure of the organisation. The choice of value maximisation as the corporate scorecard must be complemented by a corporate vision, strategy and tactics that unite participants in the organisation in its struggle for dominance in its competitive arena. A firm cannot maximise value if it ignores the interests of its stakeholders. I offer a proposal to clarify what I believe is the proper relation between value maximisation and stakeholder theory. I call it enlightened value maximisation, and it is identical to what I call enlightened stakeholder theory. Enlightened value maximisation utilises much of the structure of stakeholder theory but accepts maximisation of the long-run value of the firm as the criterion for making the requisite tradeoffs among its stakeholders. Managers, directors, strategists, and management scientists can benefit from enlightened stakeholder theory. Enlightened stakeholder theory specifies long-term value maximisation or value seeking as the firm's objective and therefore solves the problems that arise from the multiple objectives that accompany traditional stakeholder theory. I also discuss the Balanced Scorecard, the managerial equivalent of stakeholder theory. The same conclusions hold. Balanced Scorecard theory is flawed because it presents managers with a scorecard which gives no score, that is, no single-valued measure of how they have performed. Thus managers evaluated with such a system (which can easily have two dozen measures and provides no information on the tradeoffs between them) have no way to make principled or purposeful decisions.
The solution is to define a true (single-dimensional) score for measuring performance for the organisation or division (and it must be consistent with the organisation's strategy). Given this, we then encourage managers to use measures of the drivers of performance to understand better how to maximise their score. And as long as their score is defined properly (and for lower levels in the organisation it will generally not be value), this will enhance their contribution to the firm. [source]


    Dose Finding for Continuous and Ordinal Outcomes with a Monotone Objective Function: A Unified Approach

    BIOMETRICS, Issue 1 2009
    Anastasia Ivanova
    Summary In many phase I trials, the design goal is to find the dose associated with a certain target toxicity rate. In some trials, the goal can be to find the dose with a certain weighted sum of rates of various toxicity grades. For others, the goal is to find the dose with a certain mean value of a continuous response. In this article, we describe a dose-finding design that can be used in any of the dose-finding trials described above, trials where the target dose is defined as the dose at which a certain monotone function of the dose is a prespecified value. At each step of the proposed design, the normalized difference between the current dose and the target is computed. If that difference is close to zero, the dose is repeated. Otherwise, the dose is increased or decreased, depending on the sign of the difference. [source]
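
    The step rule described above can be sketched in a few lines; the tolerance value and the use of a standard error as the normalizing spread are illustrative assumptions, not taken from the paper.

        def next_dose_level(level, estimate, target, spread, n_levels, tol=1.0):
            """One step of the up-and-down rule: compare the normalized difference
            between the current outcome estimate and the target (illustrative sketch).

            estimate : current estimate of the monotone outcome at dose `level`
            target   : prespecified target value of that outcome
            spread   : scale used to normalize the difference (e.g. a standard error)
            tol      : cut-off below which the dose is repeated (assumed value)
            """
            z = (estimate - target) / spread
            if abs(z) <= tol:
                return level                      # close to the target: repeat the dose
            if z > 0:
                return max(level - 1, 0)          # estimate too high: step down
            return min(level + 1, n_levels - 1)   # estimate too low: step up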


    Heat transfer during microwave combination heating: Computational modeling and MRI experiments

    AICHE JOURNAL, Issue 9 2010
    Vineet Rakesh
    Abstract Combination of heating modes such as microwaves, convection, and radiant heating can be used to realistically achieve the quality and safety needed for cooking processes and, at the same time, make the processes faster. Physics-based computational modeling used in conjunction with MRI experimentation can be used to obtain critical understanding of combination heating. The objectives were to: (1) formulate a fully coupled electromagnetics and heat transfer model, (2) use magnetic resonance imaging (MRI) experiments to determine the 3D spatial and temporal variation of temperatures and validate the numerical model, and (3) use the insight gained from the model and experiments to understand the combination heating process and to optimize it. The different factors that affect heating patterns during combination heating, such as the type of heating modes used, placement of the sample, and microwave cycling, were considered. Objective functions were defined and minimized for design and optimization. The use of such techniques can lead to greater control and automation of the combination heating process, benefiting food process and product developers immensely. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]


    Video completion and synthesis

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Chunxia Xiao
    Abstract This paper presents a new exemplar-based framework for video completion, allowing aesthetically pleasing completion of large space-time holes. We regard video completion as a discrete global optimization on a 3D graph embedded in the space-time video volume. We introduce a new objective function which enforces global spatio-temporal consistency, in terms of both color similarity and motion similarity, among the patches that fill the hole and those surrounding it. The optimization is solved by a novel algorithm, called weighted priority belief propagation (BP), which alleviates the problems of slow convergence and intolerable storage size that arise with standard BP. This objective function can also handle video texture synthesis by extending an input video texture to a larger texture region. Experiments on a wide variety of video examples with complex dynamic scenes demonstrate the advantages of our method over existing techniques: salient structures and motion information are much better restored. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Range Scan Registration Using Reduced Deformable Models

    COMPUTER GRAPHICS FORUM, Issue 2 2009
    W. Chang
    Abstract We present an unsupervised method for registering range scans of deforming, articulated shapes. The key idea is to model the motion of the underlying object using a reduced deformable model. We use a linear skinning model for its simplicity and represent the weight functions on a regular grid localized to the surface geometry. This decouples the deformation model from the surface representation and allows us to deal with the severe occlusion and missing data that are inherent in range scan data. We formulate the registration problem using an objective function that enforces close alignment of the 3D data and includes an intuitive notion of joints. This leads to an optimization problem that we solve using an efficient EM-type algorithm. With our algorithm we obtain smooth deformations that accurately register pairs of range scans with significant motion and occlusion. The main advantages of our approach are that it does not require user-specified markers, a template, or manual segmentation of the surface geometry into rigid parts. [source]
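
    The reduced deformable model referred to here is a linear skinning model; in its standard textbook form, which may differ from the paper's exact notation, each scan point is deformed as a weighted blend of a small set of rigid transformations, with the weights stored on a regular grid near the surface:

        \mathbf{v}_i' \;=\; \sum_{j=1}^{J} w_j(\mathbf{v}_i)\, \mathbf{T}_j\, \mathbf{v}_i,
        \qquad \sum_{j=1}^{J} w_j(\mathbf{v}_i) \;=\; 1 .

    The registration then optimizes the transformations T_j (and the weights) so that the deformed source scan aligns closely with the target scan.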


    A Simulation Model for Life Cycle Project Management

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2002
    Ali Jaafari
    This paper puts forward a simulation model specifically designed for holistic evaluation of project functionality within a life cycle project management framework. The authors describe a methodology for development of the aforementioned tool, referred to as a dynamic simulation modeling system (DSMS). The DSMS is geared toward modeling of service and manufacturing processes with a hierarchical and modular modeling methodology; however, the underlying philosophy can be adopted for modeling any generic system. The enhanced modeling features and the logical division of large systems into small process components and their internal linkage are the key contributions of this work. The aim of this development is to apply the simulation technique in order to evaluate the overall project functionalities from a dynamic business perspective. A set of business objective functions (i.e., a life cycle objective function [LCOF]) has been employed as a basis for decision making throughout the project's life. An object-oriented programming language combined with object-oriented database technology provides the necessary model capability. A brief case study is used to demonstrate and discuss the model capability. [source]


    Applying fuzzy logic and genetic algorithms to enhance the efficacy of the PID controller in buffer overflow elimination for better channel response timeliness over the Internet

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2006
    Wilfred W. K. Lin
    Abstract In this paper two novel intelligent buffer overflow controllers, the fuzzy logic controller (FLC) and the genetic algorithm controller (GAC), are proposed. In the FLC the extant algorithmic PID controller (PIDC) model, which combines the proportional (P), derivative (D) and integral (I) control elements, is augmented with fuzzy logic for higher control precision. The fuzzy logic divides the PIDC control domain into finer control regions. Every region is then defined either by a fuzzy rule or a 'don't care' state. The GAC combines the PIDC model with the genetic algorithm, which manipulates the parametric values of the PIDC as genes in a chromosome. The FLC and GAC operations are based on the same objective function. The principle is that the controller should adaptively maintain the safety margin around the chosen reference point, represented by the zero of that objective function, at runtime. The preliminary experimental results for the FLC and GAC prototypes indicate that they are both more effective and precise than the PIDC. After repeated timing analyses with Intel's VTune Performance Analyzer, it was confirmed that the FLC can better support real-time computing than the GAC because of its shorter execution time and faster convergence without any buffer overflow. Copyright © 2005 John Wiley & Sons, Ltd. [source]
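
    As a point of reference for what all three controllers regulate, a textbook PID update for buffer occupancy is sketched below; the gains, time step and reference occupancy are hypothetical, and the paper's actual objective function is not reproduced here.

        def pidc_step(state, occupancy, reference, kp=0.5, ki=0.05, kd=0.1, dt=1.0):
            """One update of a generic PID buffer controller (illustrative gains).

            Returns an adjustment to the admission/service rate that pushes the
            buffer occupancy back toward the chosen reference point.
            """
            error = reference - occupancy
            state["integral"] += error * dt
            derivative = (error - state["previous"]) / dt
            state["previous"] = error
            return kp * error + ki * state["integral"] + kd * derivative

        state = {"integral": 0.0, "previous": 0.0}
        adjustment = pidc_step(state, occupancy=820.0, reference=700.0)  # hypothetical values

    The FLC refines such a loop by overlaying fuzzy rules on finer regions of the control domain, while the GAC instead searches over the (kp, ki, kd) values with a genetic algorithm.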


    Optimal design of added viscoelastic dampers and supporting braces

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 4 2004
    Ji-Hun Park
    Abstract This paper presents a simultaneous optimization procedure for both viscoelastic dampers (VEDs) and supporting braces installed in a structure. The effect of supporting braces on the control efficiency of VEDs is also investigated. To apply a general gradient-based optimization algorithm, closed-form expressions for the gradients of the objective function and constraints are derived. Also, the constraint on the dynamic behavior of the structure is embedded in the gradient computation procedure to reduce the number of variables in the optimization. From numerical analysis of an example structure, it was found that when sufficient stiffness cannot be provided for the supporting braces, the flexibility of the brace should be taken into account in the design of the VED to achieve the desired performance of the structure. It was also observed that, as a result of the proposed optimization process, the size of the supporting brace could be reduced while the additional VED size (to compensate for the loss of the control effect) was insignificant. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Testing Parameters in GMM Without Assuming that They Are Identified

    ECONOMETRICA, Issue 4 2005
    Frank Kleibergen
    We propose a generalized method of moments (GMM) Lagrange multiplier statistic, i.e., the K statistic, that uses a Jacobian estimator based on the continuous updating estimator that is asymptotically uncorrelated with the sample average of the moments. Its asymptotic χ2 distribution therefore holds under a wider set of circumstances, like weak instruments, than the standard full rank case for the expected Jacobian under which the asymptotic χ2 distributions of the traditional statistics are valid. The behavior of the K statistic can be spurious around inflection points and maxima of the objective function. This inadequacy is overcome by combining the K statistic with a statistic that tests the validity of the moment equations and by an extension of Moreira's (2003) conditional likelihood ratio statistic toward GMM. We conduct a power comparison to test for the risk aversion parameter in a stochastic discount factor model and construct its confidence set for observed consumption growth and asset return series. [source]


    The Interaction between the Central Bank and a Single Monopoly Union Revisited: Does Greater Monetary Policy Uncertainty Reduce Nominal Wages?

    ECONOMIC NOTES, Issue 3 2007
    Luigi Bonatti
    Previous papers modelling the interaction between the central bank and a single monopoly union demonstrated that greater monetary policy uncertainty reduces the union's nominal wage. This paper shows that this result does not hold in general, since it depends on peculiar specifications of the union's objective function. In particular, I show that greater monetary policy uncertainty raises the nominal wage whenever union members tend to be more sensitive to the risk of getting low real wages than to the risk of remaining unemployed. This conclusion appears consistent with the evidence showing that greater monetary authority's transparency reduces average inflation. [source]


    What does Monetary Policy Reveal about a Central Bank's Preferences?

    ECONOMIC NOTES, Issue 3 2003
    Efrem Castelnuovo
    The design of monetary policy depends on the targeting strategy adopted by the central bank. This strategy describes a set of policy preferences, which are actually the structural parameters used to analyse monetary policy making. Accordingly, we develop a calibration method to estimate a central bank's preferences from the estimates of an optimal Taylor-type rule. The empirical analysis on US data shows that output stabilization has not been an independent argument in the Fed's objective function during Greenspan's era. This suggests that the output gap has entered the policy rule only as a leading indicator for future inflation, therefore being only instrumental (to stabilize inflation) rather than important per se. (J.E.L.: C61, E52, E58). [source]
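
    A central bank objective function of the kind whose preferences are being recovered here is conventionally written as a discounted quadratic loss; the form below is the standard textbook specification rather than the paper's own, with λ the relative weight on output stabilization that the analysis finds to be essentially absent during the Greenspan era:

        L_t \;=\; E_t \sum_{k=0}^{\infty} \beta^{k} \Big[ (\pi_{t+k} - \pi^{*})^{2} \;+\; \lambda\, y_{t+k}^{2} \Big],

    where \pi is inflation, \pi^{*} its target, y the output gap and \beta a discount factor.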


    A genetic algorithm multi-objective approach for efficient operational planning technique of distribution systems

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 2 2009
    C. Lakshminarayana
    Abstract This paper presents a genetic algorithm multi-objective approach for efficient operational planning of electrical power distribution systems (EPDS). Service restoration is a non-linear, combinatorial, non-differentiable and multi-objective optimization problem that often has a great number of candidate solutions to be evaluated by the operators. To tackle the problem of service restoration with multiple objectives, the weighted sum strategy is employed to convert these objectives into a single objective function by giving them equal weighting values. The transformer/feeder capacities in the post-fault distribution network create a major problem for electrical power distribution engineers and distribution substation operators, who must satisfy the operational constraints at the consumer localities while restoring power supply. The feasibility of the developed algorithm for service restoration is demonstrated on several distribution networks; the fast and effective convergence of the technique helps to find efficient operational plans for the EPDS. Copyright © 2007 John Wiley & Sons, Ltd. [source]
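
    The weighted sum scalarization mentioned above reduces to a one-line combination of the individual objectives; the example objective values below are hypothetical, and equal weights are used as in the paper.

        def scalarize(objectives, weights=None):
            """Weighted-sum scalarization: collapse several restoration objectives
            f_i(x) into a single objective F(x) = sum_i w_i * f_i(x)."""
            if weights is None:
                weights = [1.0 / len(objectives)] * len(objectives)  # equal weighting
            return sum(w * f for w, f in zip(weights, objectives))

        # Hypothetical objective values for one candidate restoration plan,
        # e.g. out-of-service load, number of switching operations, worst feeder loading.
        fitness = scalarize([120.0, 4.0, 0.93])

    The genetic algorithm then ranks candidate switching plans by this single fitness value.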


    Linear-programming-based method for optimum schedule of reactive power sources in integrated AC-DC power systems

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 1 2003
    M. Abdel-Salam
    This paper is aimed at obtaining the optimal flow of reactive power which corresponds to minimum real power losses in integrated AC-DC power systems including one DC link. The DC power or DC current drawn by the link at its rectifier side is introduced as a new control variable added to the normal control variables, i.e. transformer tap-settings, generator terminal voltages and reactive-power outputs, and switchable reactive power sources. The constraints include the reactive power limits of the generators, limits on the load bus voltages and the operating limits of the control variables. Dual linear programming is applied to minimize an objective function for system losses. Application of the proposed method to different test AC-DC systems confirmed that lower system losses are achieved with the introduction of the DC control variable. [source]


    Dispersion of Nodes Added to a Network

    GEOGRAPHICAL ANALYSIS, Issue 4 2005
    Michael Kuby
    For location problems in which optimal locations can be at nodes or along arcs but no finite dominating set has been identified, researchers may desire a method for dispersing p additional discrete candidate sites along the m arcs of a network. This article develops and tests minimax and maximin models for solving this continuous network location problem, which we call the added-node dispersion problem (ANDP). Adding nodes to an arc subdivides it into subarcs. The minimax model minimizes the maximum subarc length, while the maximin model maximizes the minimum subarc length. Like most worst-case objectives, the minimax and maximin objectives are plagued by poorly behaved alternate optima. Therefore, a secondary MinSumMax objective is used to select the best-dispersed alternate optima. We prove that equal spacing of added nodes along arcs is optimal to the MinSumMax objective. Using this fact we develop greedy heuristic algorithms that are simple, optimal, and efficient (O(mp)). Empirical results show how the maximum subarc, minimum subarc, and sum of longest subarcs change as the number of added nodes increases. Further empirical results show how using the ANDP to locate additional nodes can improve the solutions of another location problem. Using the p-dispersion problem as a case study, we show how much adding ANDP sites to the network vertices improves the p-dispersion objective function compared with (a) network vertices only and (b) vertices plus randomly added nodes. The ANDP can also be used by itself to disperse facilities such as stores, refueling stations, cell phone towers, or relay facilities along the arcs of a network, assuming that such facilities already exist at all nodes of the network. [source]
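
    A minimal sketch of a greedy heuristic in the spirit described above: each added node goes to the arc whose current longest subarc is largest, and nodes assigned to an arc are spaced equally, which the article proves optimal for the MinSumMax objective. The arc lengths and p below are hypothetical.

        import heapq

        def disperse_added_nodes(arc_lengths, p):
            """Greedily allocate p added nodes to arcs so as to keep the maximum
            subarc short; returns nodes-per-arc and the resulting longest subarc."""
            # heap entries: (negative current subarc length, arc index, nodes on that arc)
            heap = [(-length, i, 0) for i, length in enumerate(arc_lengths)]
            heapq.heapify(heap)
            for _ in range(p):
                _, i, k = heapq.heappop(heap)           # arc with the longest subarc
                k += 1                                  # add one node, equally respaced
                heapq.heappush(heap, (-arc_lengths[i] / (k + 1), i, k))
            counts = {i: k for _, i, k in heap}
            longest = max(-neg for neg, _, _ in heap)
            return counts, longest

        counts, longest = disperse_added_nodes([10.0, 6.0, 3.0], p=4)  # hypothetical arcs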


    Joint full-waveform analysis of off-ground zero-offset ground penetrating radar and electromagnetic induction synthetic data for estimating soil electrical properties

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2010
    D. Moghadas
    SUMMARY A joint analysis of the full-waveform information content of ground penetrating radar (GPR) and electromagnetic induction (EMI) synthetic data was investigated to reconstruct the electrical properties of multilayered media. The GPR and EMI systems operate in zero-offset, off-ground mode and are designed using vector network analyser technology. The inverse problem is formulated in the least-squares sense. We compared four approaches for GPR and EMI data fusion. The first two techniques consisted of defining a single objective function, applying different weighting methods. As a first approach, we weighted the EMI and GPR data using the inverse of the data variance. The ideal point method was also employed as a second weighting scenario. The third approach is the naive Bayesian method and the fourth technique corresponds to GPR-EMI and EMI-GPR sequential inversions. Synthetic GPR and EMI data were generated for the particular case of a two-layered medium. Analysis of the objective function response surfaces from the first two approaches demonstrated the benefit of combining the two sources of information. However, due to the variations of the GPR and EMI model sensitivities with respect to the medium electrical properties, the formulation of an optimal objective function based on the weighting methods is not straightforward. While the Bayesian method relies on assumptions with respect to the statistical distribution of the parameters, it may constitute a relevant alternative for GPR and EMI data fusion. Sequential inversions of different configurations for a two-layered medium show that in the case of high conductivity or permittivity for the first layer, the inversion scheme cannot fully retrieve the soil hydrogeophysical parameters. In the case of low permittivity and conductivity for the first layer, however, the GPR-EMI inversion provides proper estimates of the values compared to the EMI-GPR inversion. [source]
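
    The first (variance-based) weighting approach amounts to a least-squares objective of the following generic form; the symbols are illustrative and do not follow the paper's notation:

        \phi(\mathbf{b}) \;=\; \frac{1}{\sigma_{\mathrm{GPR}}^{2}}\,
        \big\lVert \mathbf{d}_{\mathrm{GPR}} - \mathbf{f}_{\mathrm{GPR}}(\mathbf{b}) \big\rVert^{2}
        \;+\; \frac{1}{\sigma_{\mathrm{EMI}}^{2}}\,
        \big\lVert \mathbf{d}_{\mathrm{EMI}} - \mathbf{f}_{\mathrm{EMI}}(\mathbf{b}) \big\rVert^{2},

    where b collects the layer permittivities, conductivities and thicknesses, d are the (here, synthetic) data, f are the corresponding forward models and \sigma^{2} the data variances used as weights.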


    Addressing non-uniqueness in linearized multichannel surface wave inversion

    GEOPHYSICAL PROSPECTING, Issue 1 2009
    Michele Cercato
    ABSTRACT The multichannel analysis of the surface waves method is based on the inversion of observed Rayleigh-wave phase-velocity dispersion curves to estimate the shear-wave velocity profile of the site under investigation. This inverse problem is nonlinear and it is often solved using 'local' or linearized inversion strategies. Among linearized inversion algorithms, least-squares methods are widely used in research and prevailing in commercial software; the main drawback of this class of methods is their limited capability to explore the model parameter space. The possibility for the estimated solution to be trapped in local minima of the objective function strongly depends on the degree of nonuniqueness of the problem, which can be reduced by an adequate model parameterization and/or imposing constraints on the solution. In this article, a linearized algorithm based on inequality constraints is introduced for the inversion of observed dispersion curves; this provides a flexible way to insert a priori information as well as physical constraints into the inversion process. As linearized inversion methods are strongly dependent on the choice of the initial model and on the accuracy of partial derivative calculations, these factors are carefully reviewed. Attention is also focused on the appraisal of the inverted solution, using resolution analysis and uncertainty estimation together with a posteriori effective-velocity modelling. Efficiency and stability of the proposed approach are demonstrated using both synthetic and real data; in the latter case, cross-hole S-wave velocity measurements are blind-compared with the results of the inversion process. [source]


    Modeling the hepatitis C virus epidemic in France using the temporal pattern of hepatocellular carcinoma deaths

    HEPATOLOGY, Issue 3 2002
    Jenny Griffiths
    Deuffic et al. developed a compartmentalized model that characterized the evolution and spread of the hepatitis C virus (HCV) within France. Various parameters defining the age- and sex-dependent transition probabilities between chronic hepatitis and cirrhosis needed to be determined to completely specify their model. These were estimated by means of a weighted least-squares procedure that was executed numerically. The objective function used was based on the distribution of the age at death from hepatocellular carcinoma (HCC) rather than the temporal pattern of deaths due to HCC from 1979 to 1995. In this report, we investigate the impact of using an objective function based on the temporal pattern of deaths. We show that the dynamics of the epidemic can be quite different, in particular the short-term prediction of HCC deaths due to HCV infection and the times to death from onset of disease. [source]


    Towards a simple dynamic process conceptualization in rainfall-runoff models using multi-criteria calibration and tracers in temperate, upland catchments

    HYDROLOGICAL PROCESSES, Issue 3 2010
    C. Birkel
    Abstract Empirically based understanding of streamflow generation dynamics in a montane headwater catchment formed the basis for the development of simple, low-parameterized, rainfall-runoff models. This study was based in the Girnock catchment in the Cairngorm Mountains of Scotland, where runoff generation is dominated by overland flow from peaty soils in valley bottom areas that are characterized by dynamic expansion and contraction of saturation zones. A stepwise procedure was used to select the level of model complexity that could be supported by field data. This facilitated the assessment of the way the dynamic process representation improved model performance. Model performance was evaluated using a multi-criteria calibration procedure which applied a time series of hydrochemical tracers as an additional objective function. Flow simulations comparing a static against the dynamic saturation area model (SAM) substantially improved several evaluation criteria. Multi-criteria evaluation using ensembles of performance measures provided a much more comprehensive assessment of the model performance than single efficiency statistics, which alone, could be misleading. Simulation of conservative source area tracers (Gran alkalinity) as part of the calibration procedure showed that a simple two-storage model is the minimum complexity needed to capture the dominant processes governing catchment response. Additionally, calibration was improved by the integration of tracers into the flow model, which constrained model uncertainty and improved the hydrodynamics of simulations in a way that plausibly captured the contribution of different source areas to streamflow. This approach contributes to the quest for low-parameter models that can achieve process-based simulation of hydrological response. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Impact of time-scale of the calibration objective function on the performance of watershed models

    HYDROLOGICAL PROCESSES, Issue 25 2007
    K. P. Sudheer
    Abstract Many continuous watershed models perform all their computations on a daily time step, yet they are often calibrated at an annual or monthly time-scale that may not guarantee good simulation performance on a daily time step. The major objective of this paper is to evaluate the impact of the calibration time-scale on model predictive ability. This study considered the Soil and Water Assessment Tool for the analyses, and it has been calibrated at two time-scales, viz. monthly and daily, for the War Eagle Creek watershed in the USA. The results demonstrate that the model's performance at the smaller time-scale (such as daily) cannot be ensured by calibrating it at a larger time-scale (such as monthly). It is observed that, even though the calibrated model possesses satisfactory 'goodness of fit' statistics, the simulation residuals failed to confirm the assumption of their homoscedasticity and independence. The results imply that evaluation of models should be conducted considering their behavior in various aspects of simulation, such as predictive uncertainty, hydrograph characteristics, ability to preserve statistical properties of the historic flow series, etc. The study highlights the scope for improving or developing effective autocalibration procedures at the daily time step for watershed models. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Parameter estimation in semi-distributed hydrological catchment modelling using a multi-criteria objective function

    HYDROLOGICAL PROCESSES, Issue 22 2007
    Hamed Rouhani
    Abstract Output generated by hydrologic simulation models is traditionally calibrated and validated using split samples of observed time series of total water flow, measured at the drainage outlet of the river basin. Although this approach might yield an optimal set of model parameters, capable of reproducing the total flow, it has been observed that the flow components making up the total flow are often poorly reproduced. Previous research suggests that, even though the underlying physical processes are often poorly mimicked through calibration of a set of parameters, hydrologic models most of the time acceptably estimate the total flow. The objective of this study was to calibrate and validate a computer-based hydrologic model with respect to the total and slow flow. The quick flow component used in this study was taken as the difference between the total and slow flow. Model calibrations were pursued on the basis of comparing the simulated output with the observed total and slow flow using qualitative (graphical) assessments and quantitative (statistical) indicators. The study was conducted using the Soil and Water Assessment Tool (SWAT) model and a 10-year historical record (1986-1995) of the daily flow components of the Grote Nete River basin (Belgium). The data of the period 1986-1989 were used for model calibration and the data of the period 1990-1995 for model validation. The predicted daily average total flow matched the observed values with a Nash-Sutcliffe coefficient of 0·67 during calibration and 0·66 during validation. The Nash-Sutcliffe coefficient for slow flow was 0·72 during calibration and 0·61 during validation. Analysis of high and low flows indicated that the model is unbiased. A sensitivity analysis revealed that for the modelling of the daily total flow, accurate estimation of all 10 calibration parameters in the SWAT model is justified, while for the slow flow processes only 4 out of the set of 10 parameters were identified as most sensitive. Copyright © 2007 John Wiley & Sons, Ltd. [source]
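
    A short sketch of how the two calibration criteria can be computed and combined; the equal weighting of the two components is an assumption for illustration, not the paper's procedure, which also uses graphical assessments.

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """Nash-Sutcliffe efficiency: 1 minus the ratio of the residual sum of
            squares to the variance of the observations (1 is a perfect fit)."""
            obs = np.asarray(observed, dtype=float)
            sim = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def calibration_score(obs_total, sim_total, obs_slow, sim_slow, w=0.5):
            """Combine total-flow and slow-flow efficiencies into one score."""
            return (w * nash_sutcliffe(obs_total, sim_total)
                    + (1.0 - w) * nash_sutcliffe(obs_slow, sim_slow))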


    A digital elevation analysis: a spatially distributed flow apportioning algorithm

    HYDROLOGICAL PROCESSES, Issue 10 2004
    Sanghyun Kim
    Abstract An integrated flow determination algorithm is proposed to calculate the spatial distribution of the topographic index to the channel network. The advantages of a single flow direction algorithm and other multiple flow direction schemes are selectively considered in order to address the drawbacks of existing algorithms. A spatially varying flow apportioning factor is introduced to distribute the contributing area from upslope cells to downslope cells. The channel initiation threshold concept is expanded and integrated into a spatially distributed flow apportioning algorithm to delineate a realistic channel network. The functional relationships between the flow apportioning factors and the expanded channel initiation threshold (ECIT) are developed to address the spatially varied flow distribution patterns considering the permanent channel locations. A genetic algorithm (GA) is integrated into the spatially distributed flow apportioning algorithm (SDFAA) with the objective function of river cell evaluation. An application of a field example suggests that the spatially distributed flow apportioning scheme provides several advantages over the existing approaches; the advantages include the relaxation of overdissipation problems near channel cells, the connectivity feature of river cells and the robustness of the parameter determination procedure over existing algorithms. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Incorporating Penalty Function to Reduce Spill in Stochastic Dynamic Programming Based Reservoir Operation of Hydropower Plants

    IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, Issue 5 2010
    Deependra Kumar Jha Non-member
    Abstract This paper proposes a framework that includes a penalty function incorporated into a stochastic dynamic programming (SDP) model in order to derive the operation policy of the reservoir of a hydropower plant, with an aim to reduce the amount of spill during operation of the reservoir. SDP models with various inflow process assumptions (independent and Markov-I) are developed and executed in order to derive the reservoir operation policies for the case study of a storage-type hydropower plant located in Japan. The policy thus determined consists of target storage levels (end-of-period storage levels) for each combination of the beginning-of-period storage levels and the inflow states of the current period. A penalty function is incorporated in the classical SDP model with an objective function that maximizes annual energy generation through operation of the reservoir. Due to the inclusion of the penalty function, the operation policy of the reservoir changes in a way that ensures reduced spill. Simulations are carried out to identify reservoir storage guide curves based on the derived operation policies. Reservoir storage guide curves for different values of the penalty-function coefficient are plotted for a study horizon of 64 years, and the corresponding average annual spill values are compared. It is observed that, with increasing values of this coefficient, the average annual spill decreases; however, the simulated average annual energy value is marginally reduced. The average annual energy generation can be checked vis-à-vis the average annual spill reduction, and the optimal value of the coefficient can be identified based on the cost functions associated with energy and spill. © 2010 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [source]
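
    Generically, the penalty-augmented stage value of such an SDP recursion can be written as follows; the coefficient symbol α and the exact functional form are stand-ins, since the paper's notation is not reproduced in the abstract:

        f_t(s_t, i_t) \;=\; \max_{s_{t+1}} \Big\{ E_t(s_t, s_{t+1}, i_t)
        \;-\; \alpha\, \mathrm{Spill}_t(s_t, s_{t+1}, i_t)
        \;+\; \sum_{i_{t+1}} P(i_{t+1} \mid i_t)\, f_{t+1}(s_{t+1}, i_{t+1}) \Big\},

    where s denotes the storage state, i the inflow state, E the energy generated over the period and Spill the spilled volume; raising α trades a small loss in expected energy for a larger reduction in average annual spill.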


    Optimization routine for identification of model parameters in soil plasticity

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 5 2001
    Hans Mattsson
    Abstract The paper presents an optimization routine especially developed for the identification of model parameters in soil plasticity on the basis of different soil tests. Main focus is put on the mathematical aspects and the experience from application of this optimization routine. Mathematically, for the optimization, an objective function and a search strategy are needed. Some alternative expressions for the objective function are formulated. They capture the overall soil behaviour and can be used in a simultaneous optimization against several laboratory tests. Two different search strategies, Rosenbrock's method and the Simplex method, both belonging to the category of direct search methods, are utilized in the routine. Direct search methods have generally proved to be reliable and their relative simplicity make them quite easy to program into workable codes. The Rosenbrock and simplex methods are modified to make the search strategies as efficient and user-friendly as possible for the type of optimization problem addressed here. Since these search strategies are of a heuristic nature, which makes it difficult (or even impossible) to analyse their performance in a theoretical way, representative optimization examples against both simulated experimental results as well as performed triaxial tests are presented to show the efficiency of the optimization routine. From these examples, it has been concluded that the optimization routine is able to locate a minimum with a good accuracy, fast enough to be a very useful tool for identification of model parameters in soil plasticity. Copyright © 2001 John Wiley & Sons, Ltd. [source]
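
    A compact illustration of simultaneous calibration against several laboratory tests with a direct search; SciPy's Nelder-Mead routine stands in here for the simplex search used in the routine, and the sum of normalized squared deviations is only one possible form of the objective functions discussed.

        import numpy as np
        from scipy.optimize import minimize

        def objective(params, tests, model):
            """Sum over all tests of normalized squared deviations between model
            predictions and measured responses (illustrative normalization)."""
            total = 0.0
            for test in tests:
                predicted = model(params, test["strains"])
                scale = np.max(np.abs(test["stresses"]))
                total += np.sum(((predicted - test["stresses"]) / scale) ** 2)
            return total

        def calibrate(tests, model, initial_params):
            """`model` and the test records are placeholders for a soil-plasticity
            driver and, e.g., triaxial test data."""
            result = minimize(objective, np.asarray(initial_params, dtype=float),
                              args=(tests, model), method="Nelder-Mead")
            return result.x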


    Optimal shape of a grain or a fibre cross-section in a two-phase composite

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 2 2005
    Vladislav Shenfeld
    Abstract The shape of grains or of cross-sections of fibres in a two-phase elastic material has an important influence on the overall mechanical behaviour of the composite. In this paper a numerical scheme is devised for determining the optimal shape of a two-dimensional grain or of a fibre's cross-section. The optimization problem is first posed mathematically, using a global objective function, and then solved numerically by the finite element method and a specially designed global optimization scheme. Excellent agreement is obtained with analytical results available for extreme cases. In addition, optimal shapes are obtained under more general conditions. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    The use of an SQP algorithm in slope stability analysis

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 1 2005
    Jian Chen
    Abstract In the upper bound approach to limit analysis of slope stability based on the rigid finite element method, the search for the minimum factor of safety can be formulated as a non-linear programming problem with equality constraints only, based on a yield criterion, a flow rule, boundary conditions, and an energy-work balance equation. Because of the non-linear property of the resulting optimization problems, a non-linear mathematical programming algorithm has to be employed. In this paper, the relations between the numbers of nodes, elements, interfaces, and the subsequent unknowns and constraints in the approach have been derived. It can be shown that in large-scale problems, the unknowns are subject to a highly sparse set of equality constraints. Because of the existence of non-linear equalities in the approach, this paper applies, for the first time, a special sequential quadratic programming (SQP) algorithm, feasible SQP (FSQP), to obtain solutions for such non-linear optimization problems. In the FSQP algorithm, the non-linear equality constraints are turned into inequality constraints and the objective function is replaced by an exact penalty function which penalizes non-linear equality constraint violations only. Three numerical examples are presented to illustrate the potentialities and efficiencies of the FSQP algorithm in slope stability analysis. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Groundwater parameter estimation via the unsteady adjoint variable formulation of discrete sensitivity analysis

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 6 2002
    C. O. E. Burg
    Abstract Discrete sensitivity analysis (DSA) is a method that efficiently estimates the derivatives of a numerically approximated objective function with respect to a set of parameters at a fraction of the cost of using finite differences. Coupled with an optimization algorithm, this method can be used to locate the optimal set of parameters for the objective function. The time-dependent adjoint variable formulation of discrete sensitivity analysis is derived and applied to a time-dependent, two-dimensional groundwater code. The derivatives agreed with finite difference derivatives to between 6 and 8 significant digits, at a fraction of the computational cost. Using the BFGS optimization algorithm to update the parameters, the parameter estimation technique successfully identified the target values for problems with a small number of parameters. Copyright © 2002 John Wiley & Sons, Ltd. [source]
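
    For contrast with the adjoint formulation, the forward-difference gradient that DSA is compared against looks like this; it needs one extra run of the simulation per parameter, which is exactly the cost the adjoint approach avoids. The step size is an illustrative choice.

        import numpy as np

        def finite_difference_gradient(objective, params, step=1e-6):
            """Forward-difference estimate of d(objective)/d(params): one extra
            objective evaluation (i.e. one extra simulation) per parameter."""
            params = np.asarray(params, dtype=float)
            base = objective(params)
            grad = np.zeros_like(params)
            for k in range(params.size):
                perturbed = params.copy()
                perturbed[k] += step
                grad[k] = (objective(perturbed) - base) / step
            return grad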


    Design and application of layered composites with the prescribed magnetic permeability

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2010
    Jae Seok Choi
    Abstract This research aims to design a microstructure with prescribed magnetic permeability and proposes a design method to control the magnetic flux flow using layered microstructures. In the optimization problem for the microstructure design, the objective function is set up to minimize the difference between the homogenized magnetic permeability during the design process and the prescribed permeability, based on the so-called inverse homogenization method. Based on the microstructure design result, a microstructure composed of layered materials is proposed for the purpose of efficient magnetic flux control. In addition, an analytical calculation is added to confirm the feasibility of the optimized results. The layered composite of a very thin ferromagnetic material is expected to guide the magnetic flux, and the performance of the magnetic system can be improved by rotating the microstructures appropriately. Optimal rotation angles of the microstructures are determined using the homogenization design method. The proposed design method is applied to an example to confirm its feasibility. Copyright © 2009 John Wiley & Sons, Ltd. [source]
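
    The inverse-homogenization objective described above can be stated generically as a least-squares match between the homogenized permeability tensor of the candidate microstructure and the prescribed tensor; the symbols below are illustrative, not the paper's:

        \min_{\boldsymbol{\rho}} \;\; \sum_{k,l} \Big( \mu^{H}_{kl}(\boldsymbol{\rho}) - \mu^{*}_{kl} \Big)^{2},

    where \rho is the material distribution within the unit cell, \mu^{H}(\rho) its homogenized magnetic permeability and \mu^{*} the prescribed permeability tensor.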


    Optimization-based dynamic human walking prediction: One step formulation

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6 2009
    Yujiang Xiang
    Abstract A new methodology is introduced in this work to simulate normal walking using a spatial digital human model. The proposed methodology is based on an optimization formulation that minimizes the dynamic effort of people during walking while considering associated physical and kinematical constraints. Normal walking is formulated as a symmetric and cyclic motion. Recursive Lagrangian dynamics with analytical gradients for all the constraints and objective function are incorporated in the optimization process. Dynamic balance of the model is enforced by direct use of the equations of motion. In addition, the ground reaction forces are calculated using a new algorithm that enforces overall equilibrium of the human skeletal model. External loads on the human body, such as backpacks, are also included in the formulation. Simulation results with the present methodology show good correlation with the experimental data obtained from human subjects and the existing literature. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Anisotropic mesh adaption by metric-driven optimization

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2004
    Carlo L. Bottasso
    Abstract We describe a Gauss-Seidel algorithm for optimizing a three-dimensional unstructured grid so as to conform to a given metric. The objective function for the optimization process is based on the maximum value of an elemental residual measuring the distance of any simplex in the grid to the local target metric. We analyse different possible choices for the objective function, and we highlight their relative merits and deficiencies. Alternative strategies for conducting the optimization are compared and contrasted in terms of resulting grid quality and computational costs. Numerical simulations are used for demonstrating the features of the proposed methodology, and for studying some of its characteristics. Copyright © 2004 John Wiley & Sons, Ltd. [source]