Parametrization

Selected Abstracts


    Determination of the seismic moment tensor for local events in the South Shetland Islands and Bransfield Strait

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2006
    M. Guidarelli
SUMMARY Six events with magnitudes between 3 and 5.6 have been analysed based on regional waveforms recorded by the temporary Seismic Experiment in Patagonia and Antarctica seismic broad-band network in the Bransfield Strait and the South Shetland Islands in the period 1997–1998. The source parameters have been retrieved using a robust methodology (INDirect PARametrization) to stabilize the inversion of a limited number of noisy records. This methodology is particularly important in oceanic environments, where the presence of seismic noise and the small number of stations make it difficult to analyse small-magnitude events. The source mechanisms obtained are quite variable but consistent with the active tectonic processes and the complicated structure of the South Shetland Islands region. [source]


    Parametrization of the effect of drizzle upon the droplet effective radius in stratocumulus clouds

    THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 570 2000
    Robert Wood
Abstract A method is presented to parametrize the effects of drizzle upon the droplet effective radius in stratocumulus clouds. The cloud-droplet size distribution in stratocumulus is represented by the sum of a modified gamma distribution to represent the small (radius < 20 µm) droplets and an exponential Marshall–Palmer-type distribution to represent the large (drizzle) droplets. Using this approach a relationship is derived to account for the effect of drizzle upon k, the cube of the ratio between the volume radius and the effective radius. Observational evidence from flights in a range of different air-mass types is presented to validate the approach. The results suggest that the value of k pertaining to the small droplets is better parametrized as a function of volume radius rather than of droplet concentration. The results also suggest that, as the ratio of liquid-water content contained in the large droplets to that in the small droplets increases beyond 0.05, the value of k decreases significantly. This results in an underprediction of the effective radius if commonly used parametrizations for k are used. [source]
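The moment ratios involved can be sketched numerically. The block below builds a bimodal spectrum of the kind described, a gamma mode plus an exponential drizzle tail, and computes the effective radius, the mean volume radius, and k; all distribution parameters are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

# Illustrative bimodal droplet spectrum: a modified gamma mode for small
# droplets plus a Marshall-Palmer-type exponential for drizzle.
# All parameter values below are made up, not the paper's fit.
r = np.linspace(0.1, 300.0, 30000)            # droplet radius grid (um)
dr = r[1] - r[0]

small = 100.0 * (r / 5.0) ** 2 * np.exp(-r / 5.0)   # gamma mode (small droplets)
drizzle = 0.05 * np.exp(-0.05 * r)                  # exponential drizzle tail
n = small + drizzle                                  # total spectrum n(r)

m0 = np.sum(n) * dr                    # number concentration
m2 = np.sum(n * r**2) * dr             # second moment
m3 = np.sum(n * r**3) * dr             # third moment

r_eff = m3 / m2                        # effective radius
r_vol = (m3 / m0) ** (1.0 / 3.0)       # mean volume radius
k = (r_vol / r_eff) ** 3               # the ratio k studied in the paper

print(f"r_eff = {r_eff:.1f} um, r_vol = {r_vol:.1f} um, k = {k:.3f}")
```

Adding more liquid water to the drizzle tail pulls r_eff up faster than r_vol, which is exactly the drop in k the abstract reports.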


    Fast simulation of skin sliding

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
    Xiaosong Yang
Abstract Skin sliding is the phenomenon of the skin moving over underlying layers of fat, muscle and bone. Due to the complex interconnections between these separate layers and their differing elasticity properties, it is difficult to model and expensive to compute. We present a novel method to simulate this phenomenon in real time by remeshing the surface based on a parameter-space resampling. In order to evaluate the surface parametrization, we borrow a technique from structural engineering known as the force density method (FDM), which solves for an energy-minimizing form with a sparse linear system. Our method creates a realistic approximation of skin sliding in real time, reducing texture distortions in the region of the deformation. In addition it is flexible, simple to use, and can be incorporated into any animation pipeline. Copyright © 2009 John Wiley & Sons, Ltd. [source]
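For a single free node, the force density method reduces to a small linear solve; the sketch below, with made-up anchor positions and force densities, shows the D x = rhs structure that becomes a sparse system for a full mesh.

```python
import numpy as np

# Toy force density method (FDM): branches connect one free node to four
# fixed anchor nodes. With equal force densities the free node settles at
# the anchors' centroid; unequal densities would pull it off-centre.
# Anchor layout and q values are illustrative only.
fixed = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
q = np.array([1.0, 1.0, 1.0, 1.0])        # force density per branch

# For a single free node, D = sum(q) * I and rhs = sum(q_i * fixed_i);
# in general D = C^T Q C over the incidence matrix C and is sparse.
D = np.diag([q.sum(), q.sum()])
rhs = (q[:, None] * fixed).sum(axis=0)
free_pos = np.linalg.solve(D, rhs)
print(free_pos)                            # -> [0.5 0.5]
```

The key property the paper exploits is that the system is linear in the node positions, so the equilibrium form comes from one sparse solve rather than an iterative energy minimization.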


    Engineering input/output nodes in prokaryotic regulatory circuits

    FEMS MICROBIOLOGY REVIEWS, Issue 5 2010
    Aitor De Las Heras
Abstract A large number of prokaryotic regulatory elements have been interfaced artificially with biological circuits that execute specific expression programs. Engineering such circuits involves the association of input/output components that perform discrete signal-transfer steps in an autonomous fashion while connected to the rest of the network with a defined topology. Each of these nodes includes a signal-recognition component for the detection of the relevant physicochemical or biological stimulus, a molecular device able to translate the signal-sensing event into a defined output and a genetic module capable of understanding such an output as an input for the next component of the circuit. The final outcome of the process can be recorded by means of a reporter product. This review addresses three such aspects of forward engineering of signal-responding genetic parts. We first recap natural and non-natural regulatory assets for designing gene expression in response to predetermined signals, chemical or otherwise. These include transcriptional regulators developed by in vitro evolution (or designed from scratch), and synthetic riboswitches derived from in vitro selection of aptamers. Then we examine recent progress on reporter genes, whose expression allows the quantification and parametrization of signal-responding circuits in their entirety. Finally, we critically examine recent work on other reporters that confer bacteria with gross organoleptic properties (e.g. distinct odour) and the interfacing of signal-sensing devices with determinants of community behaviour. [source]


    Joint inversion of multiple data types with the use of multiobjective optimization: problem formulation and application to the seismic anisotropy investigations

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
    E. Kozlovskaya
SUMMARY In geophysical studies the problem of joint inversion of multiple experimental data sets obtained by different methods is conventionally considered as a scalar one. Namely, a solution is found by minimization of a linear combination of functions describing the fit of the values predicted from the model to each set of data. In the present paper we demonstrate that this standard approach is not always justified and propose to consider a joint inversion problem as a multiobjective optimization problem (MOP), for which the misfit function is a vector. The method is based on analysis of two types of solutions to MOP considered in the space of misfit functions (objective space). The first one is a set of complete optimal solutions that minimize all the components of a vector misfit function simultaneously. The second one is a set of Pareto optimal solutions, or trade-off solutions, for which it is not possible to decrease any component of the vector misfit function without increasing at least one other. We investigate the connection between the standard formulation of a joint inversion problem and the multiobjective formulation and demonstrate that the standard formulation is a particular case of scalarization of a multiobjective problem using a weighted sum of component misfit functions (objectives). We illustrate the multiobjective approach with a non-linear problem of the joint inversion of shear wave splitting parameters and longitudinal wave residuals. Using synthetic data and real data from three passive seismic experiments, we demonstrate that random noise in the data and inexact model parametrization destroy the complete optimal solution, which degenerates into a fairly large Pareto set. As a result, non-uniqueness of the problem of joint inversion increases. If the random noise in the data is the only source of uncertainty, the Pareto set expands around the true solution in the objective space.
In this case the 'ideal point' method of scalarization of multiobjective problems can be used. If the uncertainty is due to inexact model parametrization, the Pareto set in the objective space deviates strongly from the true solution. In this case all scalarization methods fail to find a solution close to the true one and a change of model parametrization is necessary. [source]
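The Pareto set described above can be computed by a plain dominance filter. A minimal sketch (the misfit vectors are invented two-objective examples):

```python
import numpy as np

def pareto_set(misfits):
    """Return indices of Pareto-optimal (non-dominated) misfit vectors.
    A vector dominates another if it is <= in every component and
    strictly < in at least one."""
    misfits = np.asarray(misfits, dtype=float)
    keep = []
    for i, m in enumerate(misfits):
        dominated = any(
            np.all(other <= m) and np.any(other < m)
            for j, other in enumerate(misfits) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Two-objective example: trade-off between two data misfits.
# (3,3) is dominated by (2,2); (6,6) is dominated by everything.
models = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (6.0, 6.0)]
print(pareto_set(models))
```

With noise-free consistent data the surviving set would shrink toward a single complete optimal solution; noise and parametrization error enlarge it, as the abstract describes.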


    Models of Earth's main magnetic field incorporating flux and radial vorticity constraints

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2007
    A. Jackson
SUMMARY We describe a new technique for implementing the constraints on magnetic fields arising from two hypotheses about the fluid core of the Earth, namely the frozen-flux hypothesis and the hypothesis that the core is in magnetostrophic force balance with negligible leakage of current into the mantle. These hypotheses lead to time-independence of the integrated flux through certain 'null-flux patches' on the core surface, and to time-independence of their radial vorticity. Although the frozen-flux hypothesis has received attention before, constraining the radial vorticity has not previously been attempted. We describe a parametrization and an algorithm for preserving the topology of radial magnetic fields at the core surface while allowing morphological changes. The parametrization is a spherical triangle tessellation of the core surface. Topology with respect to a reference model (based on data from the Oersted satellite) is preserved as models at different epochs are perturbed to optimize the fit to the data; the topology preservation is achieved by the imposition of inequality constraints on the model, and the optimization at each iteration is cast as a bounded-value least-squares problem. For epochs 2000, 1980, 1945, 1915 and 1882 we are able to produce models of the core field which are consistent with flux and radial vorticity conservation, thus providing no observational evidence for the failure of the underlying assumptions. These models are a step towards the production of models which are optimal for the retrieval of frozen-flux velocity fields at the core surface. [source]


    An ellipticity criterion in magnetotelluric tensor analysis

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2004
    M. Becken
SUMMARY We examine the magnetotelluric (MT) impedance tensor from the viewpoint of the polarization states of the electric and magnetic fields. In the presence of a regional 2-D conductivity anomaly, a linearly polarized homogeneous external magnetic field will generally produce secondary electromagnetic fields, which are elliptically polarized. If and only if the primary magnetic field vector oscillates parallel or perpendicular to the 2-D structure will the horizontal components of the secondary fields at any point of the surface also be linearly polarized. When small-scale inhomogeneities galvanically distort the electric field at the surface, only field rotations and amplifications are observed, while the ellipticity remains unchanged. Thus, the regional strike direction can be identified from vanishing ellipticities of the electric and magnetic fields even in the presence of distortion. In practice, the MT impedance tensor is analysed rather than the fields themselves. It turns out that a pair of linearly polarized magnetic and electric fields produces linearly polarized columns of the impedance tensor. As the linearly polarized electric field components generally do not constitute an orthogonal basis, the telluric vectors, i.e. the columns of the impedance tensor, will be non-orthogonal. Their linear polarization, however, is manifested in a common phase for the elements of each column of the tensor and is a well-known indication of galvanic distortion. In order to solve the distortion problem, the telluric vectors are fully parametrized in terms of ellipses and subsequently rotated to the coordinate system in which their ellipticities are minimized. If the minimal ellipticities are close to zero, the existence of a (locally distorted) regional 2-D conductivity anomaly may be assumed. Otherwise, the tensor suggests the presence of a strong 3-D conductivity distribution.
In the latter case, a coordinate system is often found in which three elements have a strong amplitude, while the amplitude of the fourth, which is one of the main-diagonal elements, is small. In terms of our ellipse parametrization, this means that one of the ellipticities of the two telluric vectors approximately vanishes, while the other one may not be neglected as a result of the 3-D response. The reason for this particular characteristic is found in an approximate relation between the polarization state of the telluric vector with vanishing ellipticity and the corresponding horizontal electric field vector in the presence of a shallow conductive structure, across which the perpendicular and tangential components of the electric field obey different boundary conditions. [source]


    Lithosphere structure of Europe and Northern Atlantic from regional three-dimensional gravity modelling

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2002
    T. P. Yegorova
Summary Large-scale 3D gravity modelling using data averaged on a 1° grid has been performed for the whole European continent and part of the Northern Atlantic. The model consists of two regional layers of variable thickness, the sediments and the crystalline crust, bounded by reliable seismic horizons: the 'seismic' basement and the Moho surface. Inner heterogeneity of the model layers was taken into account in the form of lateral variation of average density depending on the type of geotectonic unit. Density parametrization of the layers was made using correlation functions between velocity and density. For sediments, consolidation with depth was taken into account. Offshore, a sea-water layer was included in the model. As a result of the modelling, gravity effects of the whole model and its layers were calculated. Along with the gravity modelling, an estimation of the isostatic equilibrium state has been carried out for the whole model as well as for its separate units. Residual gravity anomalies, obtained by subtracting the gravity effect of the crust from the observed field, reach some hundreds of mGal (10⁻⁵ m s⁻²) in amplitude; they are mainly caused by density heterogeneities in the upper mantle. A mantle origin of the residual anomalies is substantiated by their correlation with the upper-mantle heterogeneities revealed by both seismological and geothermal studies. Regarding the character of the mantle gravity anomalies, type of isostatic compensation, crustal structure, age and supposed type of endogenic regime, a classification of the main geotectonic units of the continent was made. As a result of the modelling, a clear division of the continent into two large blocks, the Precambrian East-European platform (EEP) and Variscan Western Europe, has been confirmed by their specific mantle gravity anomalies (0 to 50 × 10⁻⁵ m s⁻² and −100 to −150 × 10⁻⁵ m s⁻², respectively).
This division coincides with the Tornquist–Teisseyre Zone (TTZ), marked by a gradient zone of mantle anomalies. In the central part of the EEP (over the Russian plate) an extensive positive mantle anomaly, probably indicating a core of ancient consolidation of the EEP, has been distinguished. To the west and to the east of this anomaly positive mantle anomalies occur, which coincide with a deep suture zone (TTZ) and an orogenic belt (the Urals). Positive mantle anomalies of the Alps, the Adriatic plate and the Calabrian Arc, correlating well with both high-velocity domains in the upper mantle and reduced temperatures at the subcrustal layer, are caused by thickened lithosphere below these structures. Negative mantle anomalies, revealed in the Western Mediterranean Basin and in the Pannonian Basin, are the result of thermal expansion of the asthenosphere shallowing to near-Moho depths below these basins. [source]


    A comprehensive model of the quiet-time, near-Earth magnetic field: phase 3

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2002
    Terence J. Sabaka
Summary The near-Earth magnetic field is caused by sources in the Earth's core, ionosphere, magnetosphere and lithosphere, and by coupling currents between the ionosphere and the magnetosphere, and between hemispheres. Traditionally, the main field (low-degree internal field) and the magnetospheric field have been modelled simultaneously, with fields from other sources being modelled separately. Such a scheme, however, can introduce spurious features, especially when the spatial and temporal scales of the fields overlap. A new model, designated CM3 (Comprehensive Model: phase 3), is the third in a series of efforts to coestimate fields from all of these sources. This model has been derived from quiet-time Magsat and POGO satellite measurements and observatory hourly means for the period 1960–1985. It represents a significant advance in the treatment of the aforementioned field sources over previous attempts, and includes an accounting for main field influences on the magnetosphere, main field and solar activity influences on the ionosphere, seasonal influences on the coupling currents, a priori characterization of the influence of the ionosphere and the magnetosphere on Earth-induced fields, and an explicit parametrization and estimation of the lithospheric field. The result is a model that describes the 591 432 data well with 16 594 parameters, implying a data-to-parameter ratio of 36, which is larger than that of several popular field models. [source]


    Artificial neural networks for parameter estimation in geophysics

    GEOPHYSICAL PROSPECTING, Issue 1 2000
    Carlos Calderón-Macías
Artificial neural systems have been used in a variety of problems in the fields of science and engineering. Here we describe a study of the applicability of neural networks to solving some geophysical inverse problems. In particular, we study the problem of obtaining formation resistivities and layer thicknesses from vertical electrical sounding (VES) data and that of obtaining 1D velocity models from seismic waveform data. We use a two-layer feedforward neural network (FNN) that is trained to predict earth models from measured data. Part of the interest in using FNNs for geophysical inversion is that they are adaptive systems that perform a non-linear mapping between two sets of data from a given domain. In both of our applications, we train FNNs using synthetic data as input to the networks and a layer parametrization of the models as the network output. The earth models used for network training are drawn from an ensemble of random models within some prespecified parameter limits. For network training we use the back-propagation algorithm and a hybrid back-propagation–simulated-annealing method for the VES and seismic inverse problems, respectively. Other fundamental issues for obtaining accurate model parameter estimates using trained FNNs are the size of the training data, the network configuration, the description of the data and the model parametrization. Our simulations indicate that FNNs, if adequately trained, produce reasonably accurate earth models when observed data are input to the FNNs. [source]
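The training setup the authors describe, synthetic data in and layer parameters out, can be miniaturized as follows. This sketch trains a two-layer feedforward network with plain back-propagation on a toy forward model standing in for the VES/seismic modelling; the network size, learning rate and forward relation are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inversion training: learn the map from "data" d back to "model" m,
# where d = G(m) is a known forward relation (a stand-in for real
# VES/seismic forward modelling).
def forward(m):
    return np.tanh(2.0 * m)

m_train = rng.uniform(-1, 1, size=(500, 1))   # random earth models
d_train = forward(m_train)                    # synthetic observed data

# Two-layer feedforward net: d -> tanh hidden layer -> m_hat
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
loss = np.inf

for epoch in range(2000):
    h = np.tanh(d_train @ W1 + b1)            # forward pass
    m_hat = h @ W2 + b2
    err = m_hat - m_train
    loss = float(np.mean(err ** 2))
    g2 = 2 * err / len(err)                   # back-propagation
    g1 = (g2 @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(0)
    W1 -= lr * d_train.T @ g1; b1 -= lr * g1.sum(0)

print(f"final training MSE: {loss:.5f}")
```

Once trained, the network inverts new data with a single forward pass, which is the efficiency argument the abstract makes for FNN-based inversion.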


    Conformational Analysis and CD Calculations of Methyl-Substituted 13-Tridecano-13-lactones

    HELVETICA CHIMICA ACTA, Issue 2 2005
    Elena Voloshina
Conformational models covering an energy range of 3 kcal/mol were calculated for (13S)-tetradecano-13-lactone (3), (12S)-12-methyltridecano-13-lactone (4), and (12S,13R)-12-methyltetradecano-13-lactone (8), starting from a semiempirical Monte-Carlo search with AM1 parametrization, and subsequent optimization of the 100 best conformers at the 6-31G*/B3LYP and then the TZVP/B3LYP level of density-functional theory. CD spectra for these models were calculated by the time-dependent DFT method with the same functional and basis sets as for the ground-state calculations and Boltzmann weighting of the individual conformers. The good correlation of the calculated and experimental spectra substantiates the interpretation of these conformational models for the structure-odor correlation of musks. Furthermore, the application of the quadrant rule in the estimation of the Cotton effect for macrolide conformers is critically discussed. [source]
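The Boltzmann weighting step used to average the per-conformer spectra is straightforward to reproduce. In this sketch the relative energies and band shapes are fabricated; only the weighting arithmetic reflects the procedure described.

```python
import numpy as np

# Boltzmann weighting of per-conformer spectra over a conformational
# ensemble. Energies and band shapes below are made up for illustration.
R = 1.987204e-3           # gas constant, kcal/(mol K)
T = 298.15                # temperature, K

energies = np.array([0.0, 0.5, 1.2, 2.8])    # relative energies, kcal/mol
weights = np.exp(-energies / (R * T))
weights /= weights.sum()                      # Boltzmann populations

wavelengths = np.linspace(200, 260, 61)       # nm
# One Gaussian band per conformer (fabricated band positions/widths)
centers = np.array([215.0, 220.0, 225.0, 230.0])
spectra = np.exp(-((wavelengths[None, :] - centers[:, None]) / 8.0) ** 2)

averaged = weights @ spectra                  # ensemble-averaged spectrum
print("populations:", np.round(weights, 3))
```

Note how the 3 kcal/mol window matters: a conformer 2.8 kcal/mol above the minimum contributes under one percent of the population at room temperature.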


    A stabilized pseudo-shell approach for surface parametrization in CFD design problems

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 4 2002
    O. Soto
    Abstract A surface representation for computational fluid dynamics (CFD) shape design problems is presented. The surface representation is based on the solution of a simplified pseudo-shell problem on the surface to be optimized. A stabilized finite element formulation is used to perform this step. The methodology has the advantage of being completely independent of the CAD representation. Moreover, the user does not have to predefine any set of shape functions to parameterize the surface. The scheme uses a reasonable discretization of the surface to automatically build the shape deformation modes, by using the pseudo-shell approach and the design parameter positions. Almost every point of the surface grid can be chosen as design parameter, which leads to a very rich design space. Most of the design variables are chosen in an automatic way, which makes the scheme easy to use. Furthermore, the surface grid is not distorted through the design cycles which avoids remeshing procedures. An example is presented to demonstrate the proposed methodology. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Optimized damage detection of steel plates from noisy impact test

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2006
    G. Rus
Abstract Model-based non-destructive evaluation proceeds by measuring the response to an excitation on an accessible area of the structure. The basis for processing this information has been established in recent years as an iterative scheme that minimizes the discrepancy between this experimental measurement and a sequence of trial measurements predicted by a numerical model. The unknown damage is the one that minimizes this discrepancy, expressed by means of a cost functional. The damage location and size are quantified and sought by means of a well-conditioned parametrization. The design of the magnitude to measure, its filtering for reducing noise effects and calibration, as well as the design of the cost functional and parametrization, determine the robustness of the search against noise and other uncertainty factors. These are key open issues for improving sensitivity and identifiability during the information processing. Among them, a filter for the cost functional is proposed in this study for maximal sensitivity in the damage detection of a steel plate under impact loading. This filter is designed by means of a wavelet decomposition together with a selection of the measuring points, and the optimization criterion is built on an estimate of the probability of detection, using genetic algorithms. Numerical examples show that the use of the optimal filter allows damage of a magnitude several times smaller to be found. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    State-feedback adaptive tracking of linear systems with input and state delays

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 6 2009
    Boris Mirkin
Abstract A state-feedback Lyapunov-based design of direct model reference adaptive control is developed for a class of linear systems with input and state delays based only on lumped delays without so-called distributed-delay blocks. The design procedure is based on the concept of reference trajectory prediction, and on the formulation of an augmented error. We propose a controller parametrization that attempts to anticipate the future states. An appropriate Lyapunov–Krasovskii type functional is found for the design and the stability analysis. A simulation example illustrates the new controller. Copyright © 2008 John Wiley & Sons, Ltd. [source]


On new parametrization methods for the estimation of linear state-space models

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 9-10 2004
    T. Ribarits
Abstract In this paper we introduce two variants of a new parametrization for state-space systems, which we will both call separable least squares data driven local co-ordinates (slsDDLC). SlsDDLC is obtained by modifying the parametrization by data driven local co-ordinates (DDLC). These modifications lead to analogous parametrizations, and we show how they can be used for a suitably concentrated likelihood criterion function. The concentration step can be done by an ordinary or generalized least squares step. An obvious consequence is the reduced number of parameters in the iterative search algorithm. The application of the parametrizations to maximum likelihood identification is exemplified. Simulations indicate that the usage of slsDDLC for concentrated likelihood functions has numerical advantages as compared to the usage of the more commonly used echelon canonical form or conventional DDLC for the likelihood function. Copyright © 2004 John Wiley & Sons, Ltd. [source]
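The separable least-squares idea, concentrating the linearly entering parameters out with an ordinary least-squares step so the iterative search runs over fewer parameters, can be shown on a toy model (the exponential model and all values are illustrative, not a state-space system):

```python
import numpy as np

# Separable least squares: in y(t) = a * exp(-b * t), the parameter a
# enters linearly, so for any fixed b it has a closed-form OLS solution.
# The search then runs over b alone. Model and values are illustrative.
rng = np.random.default_rng(1)
t = np.linspace(0, 5, 100)
y = 2.5 * np.exp(-0.7 * t) + 0.01 * rng.normal(size=t.size)

def concentrated_cost(b):
    """For fixed b, solve for the best a in closed form; return (a, cost)."""
    phi = np.exp(-b * t)
    a = (phi @ y) / (phi @ phi)        # OLS step concentrates 'a' out
    return a, float(np.sum((y - a * phi) ** 2))

# 1-D search over the remaining nonlinear parameter only
bs = np.linspace(0.1, 2.0, 1000)
costs = [concentrated_cost(b)[1] for b in bs]
b_hat = bs[int(np.argmin(costs))]
a_hat = concentrated_cost(b_hat)[0]
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")   # close to the true (2.5, 0.7)
```

The payoff is the one the abstract claims: the iterative (here grid) search is one-dimensional instead of two-dimensional, because the concentration step absorbs the linear parameter.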


    Model-based shape from shading for microelectronics applications

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2006
    A. Nissenboim
Abstract Model-based shape from shading (SFS) is a promising paradigm introduced by Atick et al. [Neural Comput 8 (1996), 1321–1340] in 1996 for solving inverse problems when a lot of prior information on the depth profiles to be recovered is available. In the present work we adopt this approach to address the problem of recovering wafer profiles from images taken using a scanning electron microscope (SEM). This problem arises naturally in the microelectronics inspection industry. A low-dimensional model, based on our prior knowledge of the types of depth profiles of wafer surfaces, has been developed, and based on it the SFS problem becomes one of optimal parameter estimation. Wavelet techniques were then employed to calculate a good initial guess to be used in a minimization process that yields the desired profile parametrization. A Levenberg–Marquardt (LM) optimization procedure has been adopted to address the ill-posedness of the SFS problem and to ensure stable numerical convergence. The proposed algorithm has been tested on synthetic images, using both Lambertian and SEM imaging models. © 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 16, 65–76, 2006 [source]
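A minimal Levenberg-Marquardt loop of the kind referred to can be written in a few lines. The profile model, noise level and starting guess below are invented; the accept/reject damping update is the LM mechanism itself.

```python
import numpy as np

# Minimal Levenberg-Marquardt loop fitting a low-dimensional profile model
# r(x; p) = p0 * exp(-((x - p1)/p2)^2) to noisy samples, a stand-in for
# fitting wafer-profile parameters to SEM intensities. All values invented.
rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 80)
p_true = np.array([1.5, 0.3, 0.8])
def model(p): return p[0] * np.exp(-((x - p[1]) / p[2]) ** 2)
y = model(p_true) + 0.02 * rng.normal(size=x.size)

def jacobian(p, eps=1e-6):
    """Central finite-difference Jacobian of the residual model."""
    J = np.empty((x.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p); dp[k] = eps
        J[:, k] = (model(p + dp) - model(p - dp)) / (2 * eps)
    return J

p = np.array([1.0, 0.0, 1.0])       # initial guess (wavelets would supply this)
lam = 1e-2                          # damping: large -> gradient descent
for _ in range(50):
    r = y - model(p)
    J = jacobian(p)
    step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
    if np.sum((y - model(p + step)) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.5    # accept step, trust the model more
    else:
        lam *= 2.0                       # reject step, damp harder
print(np.round(p, 3))                    # close to p_true
```

The damping term is what addresses the ill-posedness mentioned in the abstract: when the Gauss-Newton system is near-singular, a larger lambda regularizes the step.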


    On open-set lattices and some of their applications in semantics

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 12 2003
    Mouw-Ching Tjiok
In this article, we present the theory of Kripke semantics, along with the mathematical framework and applications of Kripke semantics. We take the Kripke-Sato approach to define the knowledge operator in relation to Hintikka's possible worlds model, which is an application of the semantics of intuitionistic logic and modal logic. The applications are interesting from the viewpoint of agent interactives and process interaction. We propose (i) an application of possible worlds semantics, which enables the evaluation of the truth value of a conditional sentence without explicitly defining the implication operator, through clustering on the space of events (worlds) using the notion of neighborhood; and (ii) a semantical approach to treat discrete dynamic processes using Kripke-Beth semantics. Starting from the topological approach, we define the measure-theoretical machinery; in particular, we adopt the methods developed in stochastic processes, mainly the martingale, to our semantics; this involves some Boolean algebraic (BA) manipulations. The clustering on the space of events (worlds), using the notion of neighborhood, enables us to define an accessibility relation that is necessary for the evaluation of the conditional sentence. Our approach is to take the neighborhood as an open set and look at topological properties using metric space, in particular, the so-called ε-ball; then, we can perform the implication by computing Euclidean distance, whenever we introduce a certain enumerative scheme to transform the semantic objects into mathematical objects. Thus, this method provides an approach to quantify semantic notions. Combined with modal operators Ki operating on the E set, it provides a more-computable way to recognize the "indistinguishability" in some applications, e.g., electronic catalogues. Because semantics used in this context is a local matter, we also propose the application of sheaf theory for passing local information to global information.
By looking at the Kripke interpretation as a function with values in an open-set lattice, which is formed by a stepwise verification process, we obtain a topological space structure. Now, using the measure-theoretical approach, by taking the Borel set and Borel function in defining measurable functions, this can be extended to treat the dynamical aspect of processes; from the stochastic process, considered as a family of random variables over a measure space (the probability space triple), we draw two strong parallels between Kripke semantics and stochastic processes (mainly martingales): first, the strong affinity of Kripke-Beth path semantics and the time path of the process; and second, the treatment of time as parametrization of the dynamic process using the techniques of filtration, adapted processes, and progressive processes. The technique provides very effective manipulation of BA in the form of random variables and σ-subalgebras under the cover of measurable functions. This enables us to adopt the computational algorithms obtained for stochastic processes to path semantics. Besides, using the technique of measurable functions, we indeed obtain an intrinsic way to introduce the notion of a time sequence. © 2003 Wiley Periodicals, Inc. [source]


    Soft Coulomb hole method applied to molecules

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 5 2007
    J. Ortega-Castro
Abstract The soft Coulomb hole method introduces a perturbation operator, defined as −exp(−ωr₁₂²)/r₁₂, to take into account electron correlation effects, where ω represents the width of the Coulomb hole. A new parametrization for the soft Coulomb hole operator is presented with the purpose of obtaining better molecular geometries than those resulting from Hartree–Fock calculations, as well as correlation energies. The 12 parameters included in ω were determined for a reference set of 12 molecules and applied to a large set of molecules (38 homo- and heteronuclear diatomic molecules, and 37 small and medium-size molecules). For these systems, the optimized geometries were compared with experimental values; correlation energies were compared with results of the MP2, B3LYP, and Gaussian 3 approaches. On average, molecular geometries are better than the Hartree–Fock values, and correlation energies yield results halfway between MP2 and B3LYP. © 2006 Wiley Periodicals, Inc. Int J Quantum Chem, 2007 [source]


    Global optimization of SixHy at the ab initio level via an iteratively parametrized semiempirical method

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 4-5 2003
    Yingbin Ge
    Abstract Previously we searched for the ab initio global minima of several SixHy clusters by a genetic algorithm in which we used the AM1 semiempirical method to facilitate a rapid energy calculation for the many different cluster geometries explored. However, we found that the AM1 energy ranking significantly differs from the ab initio energy ranking. To better guarantee locating the ab initio global minimum while retaining the efficiency of the AM1 method, we present an improved iterative global optimization strategy. The method involves two separate genetic algorithms that are invoked consecutively. One is the cluster genetic algorithm (CGA), mentioned above, to find the semiempirical SixHy cluster global minimum. A second and separate parametrization genetic algorithm (PGA) is used to reparametrize the AM1 method using some of the ab initio data generated from the CGA to form a training set of different reference clusters but with fixed SixHy stoichiometry. The cluster global optimization search (CGA) and the semiempirical parametrization (PGA) steps are performed iteratively until the semiempirical GA reparametrized AM1 (GAM1) calculations give low-energy optimized structures that are consistent with the globally optimized ab initio structure. We illustrate the new global optimization strategy by attempting to find the ab initio global minima for the Si6H2 and Si6H6 clusters. © 2003 Wiley Periodicals, Inc. Int J Quantum Chem, 2003 [source]
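A bare-bones genetic algorithm of the sort described, with the expensive quantum-chemical energy replaced by a cheap surrogate with many local minima, might look like this (the population size, operators and surrogate are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Skeleton of a cluster genetic algorithm. The surrogate "energy" below
# stands in for an AM1 or ab initio evaluation, which would be called at
# the same point in the loop; its global minimum is at the origin.
def energy(g):
    return np.sum(g ** 2) + 2.0 * np.sum(1 - np.cos(3 * g))

POP, DIM, GENS = 40, 4, 200
pop = rng.uniform(-2, 2, (POP, DIM))      # random initial "geometries"

for _ in range(GENS):
    fit = np.array([energy(g) for g in pop])
    order = np.argsort(fit)
    parents = pop[order[:POP // 2]]       # truncation selection (elitist)
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(DIM) < 0.5      # uniform crossover
        child = np.where(mask, a, b)
        child += 0.1 * rng.normal(size=DIM)   # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = min(pop, key=energy)
print(f"best energy: {energy(best):.4f}")
```

In the paper's iterative scheme, a second GA would periodically refit the surrogate's parameters against ab initio energies of the structures this loop discovers, tightening the agreement between the two rankings.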


    An augmented system approach to static output-feedback stabilization with H∞ performance for continuous-time plants

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 7 2009
    Zhan Shu
    Abstract This paper revisits the static output-feedback stabilization problem of continuous-time linear systems from a novel perspective. The closed-loop system is represented in an augmented form, which facilitates the parametrization of the controller matrix. Then, new equivalent characterizations of stability and H∞ performance of the closed-loop system are established in terms of matrix inequalities. On the basis of these characterizations, a necessary and sufficient condition with slack matrices for output-feedback stabilizability is proposed, and an iterative algorithm is given to solve the condition. An extension to output-feedback H∞ control is provided as well. The effectiveness and merits of the proposed approach are shown through several examples. Copyright © 2008 John Wiley & Sons, Ltd. [source]
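    The underlying stabilization notion is easy to check numerically: with plant matrices A, B, C and a static output-feedback gain K, the closed loop is A + BKC, and stability is equivalent to all of its eigenvalues having negative real parts. The matrices and gain below are illustrative; the paper's LMI-based synthesis is not reproduced here:

```python
import numpy as np

# Plant x' = A x + B u, y = C x; static output feedback u = K y gives the
# closed loop x' = (A + B K C) x, stable iff all eigenvalues have Re < 0.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])       # open-loop unstable (one eigenvalue at +1)
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])        # only the first state is measured

def is_stable(K):
    eigs = np.linalg.eigvals(A + B @ K @ C)
    return bool(np.all(eigs.real < 0))

K = np.array([[-4.0]])            # hypothetical gain found by some synthesis step
```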


    Polynomial control: past, present, and future

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 8 2007
    Vladimír Kučera
    Abstract Polynomial techniques have made important contributions to systems and control theory. Engineers in industry often find polynomial and frequency domain methods easier to use than state equation-based techniques. Control theorists have shown that results obtained in isolation using either approach are in fact closely related. Polynomial system descriptions provide input–output models for linear systems with rational transfer functions. These models display two important system properties, namely poles and zeros, in a transparent manner. A performance specification in terms of polynomials is natural in many situations; see pole allocation techniques. A specific control system design technique, called the polynomial equation approach, was developed in the 1960s and 1970s. The distinguishing feature of this technique is the reduction of controller synthesis to the solution of linear polynomial equations of a specific (Diophantine or Bézout) type. In most cases, control systems are designed to be stable and to meet additional specifications, such as optimality and robustness. It is therefore natural to design such systems step by step: stabilization first, then the additional specifications one at a time. For this it is obviously necessary to have all solutions of the current step available before proceeding further. This motivates the need for a parametrization of all controllers that stabilize a given plant. In fact, this result has become a key tool for the sequential design paradigm. The additional specifications are met by selecting an appropriate parameter. This is simple, systematic, and transparent. However, the strategy suffers from excessive growth of the controller order. This article is a guided tour through polynomial control system design. The origins of the parametrization of stabilizing controllers, called the Youla–Kučera parametrization, are explained. Standard results on reference tracking, disturbance elimination, pole placement, deadbeat control, H2 control, l1 control, and robust stabilization are summarized. New and exciting applications of the Youla–Kučera parametrization are then discussed: stabilization subject to input constraints, output overshoot reduction, and fixed-order stabilizing controller design. Copyright © 2006 John Wiley & Sons, Ltd. [source]
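    The core computational step of the polynomial equation approach, solving a linear Diophantine (Bézout) equation a(s)x(s) + b(s)y(s) = c(s) for the controller polynomials, reduces to a Sylvester-type linear system over the coefficients. A minimal sketch (the example plant and target pole polynomial are hypothetical):

```python
import numpy as np

def solve_diophantine(a, b, c, deg_x, deg_y):
    """Solve a(s)x(s) + b(s)y(s) = c(s) for polynomial coefficients.

    Polynomials are ascending coefficient lists (a[k] multiplies s**k).
    Builds the Sylvester-type linear system column by column and solves it.
    """
    n = len(c)

    def conv_matrix(p, cols):
        # column j holds the coefficients of p(s) * s**j
        M = np.zeros((n, cols))
        for j in range(cols):
            for i, pi in enumerate(p):
                if i + j < n:
                    M[i + j, j] = pi
        return M

    M = np.hstack([conv_matrix(a, deg_x + 1), conv_matrix(b, deg_y + 1)])
    sol, *_ = np.linalg.lstsq(M, np.asarray(c, float), rcond=None)
    return sol[:deg_x + 1], sol[deg_x + 1:]

# Example: plant denominator a = s^2 - 1 (unstable), numerator b = 1;
# place the closed-loop poles at the roots of c = s^2 + 3s + 2.
a = [-1.0, 0.0, 1.0]
b = [1.0]
c = [2.0, 3.0, 1.0]
x, y = solve_diophantine(a, b, c, deg_x=0, deg_y=1)
```

Here the solution x(s) = 1, y(s) = 3 + 3s can be checked by hand: (s² − 1)·1 + (3 + 3s) = s² + 3s + 2.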


    Anomalies in the Foundations of Ridge Regression: Some Clarifications

    INTERNATIONAL STATISTICAL REVIEW, Issue 2 2010
    Prasenjit Kapat
    Summary Several anomalies in the foundations of ridge regression from the perspective of constrained least-squares (LS) problems were pointed out in Jensen & Ramirez. Some of these so-called anomalies, attributed to the non-monotonic behaviour of the norm of unconstrained ridge estimators and the consequent lack of sufficiency of Lagrange's principle, are shown to be incorrect. It is noted in this paper that, for a fixed Y, the norms of unconstrained ridge estimators corresponding to the given basis are indeed strictly monotone. Furthermore, the conditions for sufficiency of Lagrange's principle are valid for a suitable range of the constraint parameter. The discrepancy arose in the context of one data set due to confusion between estimates of the parameter vector β corresponding to different parametrizations (choices of basis) and/or constraint norms. In order to avoid such confusion, it is suggested that the parameter β corresponding to each basis be labelled appropriately. Résumé Several anomalies were recently pointed out by Jensen and Ramirez (2008) in the theoretical foundations of ridge regression considered from a constrained least-squares perspective. Some of these anomalies were attributed to the non-monotone behaviour of the norm of unconstrained ridge estimators, as well as to the insufficiency of Lagrange's principle. We indicate in this article that, for a fixed value of Y, the norms of the ridge estimators corresponding to a given basis are strictly monotone. Moreover, the conditions ensuring the sufficiency of Lagrange's principle are satisfied for a suitable set of values of the constraint parameter. The origin of the reported anomalies therefore lies elsewhere. This apparent contradiction originates, in the context of the study of a particular data set, from confusion between estimators of the parameter vector β corresponding to different parametrizations (associated with different choices of basis) and/or to different constraint norms. To avoid this type of confusion, it is suggested that the parameter be indexed appropriately by the chosen basis. [source]
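    The monotonicity claim is easy to probe numerically: for the ridge estimator β̂(λ) = (X'X + λI)⁻¹X'y, the norm ‖β̂(λ)‖ should be strictly decreasing in λ for fixed data. A quick simulation sketch (random data, not the data set discussed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = rng.normal(size=50)

def ridge_norm(lam):
    # beta_hat(lam) = (X'X + lam I)^{-1} X'y; its norm shrinks as lam grows
    beta = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
    return np.linalg.norm(beta)

lams = np.linspace(0.01, 100, 200)
norms = [ridge_norm(l) for l in lams]
```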


    CHARMM general force field: A force field for drug-like molecules compatible with the CHARMM all-atom additive biological force fields

    JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 4 2010
    K. Vanommeslaeghe
    Abstract The widely used CHARMM additive all-atom force field includes parameters for proteins, nucleic acids, lipids, and carbohydrates. In the present article, an extension of the CHARMM force field to drug-like molecules is presented. The resulting CHARMM General Force Field (CGenFF) covers a wide range of chemical groups present in biomolecules and drug-like molecules, including a large number of heterocyclic scaffolds. The parametrization philosophy behind the force field focuses on quality at the expense of transferability, with the implementation concentrating on an extensible force field. Statistics related to the quality of the parametrization, with a focus on experimental validation, are presented. Additionally, the parametrization procedure, described fully in the present article in the context of the model systems pyrrolidine and 3-phenoxymethylpyrrolidine, will allow users to readily extend the force field to chemical groups that are not explicitly covered in the force field, as well as add functional groups to and link together molecules already available in the force field. CGenFF thus makes it possible to perform "all-CHARMM" simulations on drug-target interactions, thereby extending the utility of CHARMM force fields to medicinally relevant systems. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2010 [source]


    A new method for the gradient-based optimization of molecular complexes

    JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 9 2009
    Jan Fuhrmann
    Abstract We present a novel method for the local optimization of molecular complexes. This new approach is especially suited for use in molecular docking. In molecular modeling, molecules are often described using a compact representation to reduce the number of degrees of freedom. This compact representation is realized by fixing bond lengths and angles while permitting changes in translation, orientation, and selected dihedral angles. Gradient-based energy minimization of molecular complexes using this representation suffers from well-known singularities arising during the optimization process. We suggest an approach, new in the field of structure optimization, that makes it possible to employ gradient-based optimization algorithms for such a compact representation. We propose to use the exponential map to define the molecular orientation, which facilitates calculating the orientational gradient. To avoid singularities of this parametrization, the local minimization algorithm is modified to efficiently change the orientational parameters while preserving the molecular orientation, i.e., we perform well-defined jumps on the objective function. Our approach is applicable to continuous, but not necessarily differentiable, objective functions. We evaluated our new method by optimizing several ligands with an increasing number of internal degrees of freedom in the presence of large receptors. In comparison to the method of Solis and Wets in the challenging case of a non-differentiable scoring function, our proposed method leads to substantially improved results in all test cases, i.e., we obtain better scores in fewer steps for all complexes. © 2008 Wiley Periodicals, Inc. J Comput Chem, 2009 [source]
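    The exponential map mentioned above is, in its standard form, the Rodrigues formula taking an axis-angle vector w to a rotation matrix; its gradient is well defined away from rotation angles near 2π, which is where re-anchoring jumps of the kind described become necessary. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def exp_map(w):
    """Rotation matrix exp([w]_x) for an axis-angle vector w (Rodrigues)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)           # near the identity, the map is just I
    k = w / theta                  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])   # cross-product matrix [k]_x
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = exp_map(np.array([0.0, 0.0, np.pi / 2]))   # 90-degree turn about z
```

Note that w and w·(1 + 2π/‖w‖) parametrize the same rotation, which is exactly the redundancy a minimizer can exploit to jump away from the singular shell while preserving the orientation.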


    Introducing Radius of Torsure and Cylindroid of Torsure

    JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 8 2003
    David B. Dooner
    Robotic path planning can involve the specification of the position and orientation of an end-effector to achieve a desired task (e.g., deburring, welding, or surface metrology). Under such scenarios the end-effector is instantaneously rotating and translating about a unique axis: the ISA. Alternatively, the performance of direct contact mechanisms (viz., cam systems and gear pairs) is dependent on the surface geometry between two surfaces in direct contact. Determination of this geometry can entail the parametrization of a family of geodesic curves embedded within each surface. This parametrization is tantamount to an end-effector rotating and translating about an ISA. In both scenarios there is a unique ISA for each geodesic embedded in a surface. Here, the curvature and torsion of a spatial curve are coupled together to give an alternative definition of the radius of curvature of a spatial curve. This new definition is identified as the radius of torsure to distinguish it from the classical definition of radius of curvature. Further, it is shown that the family of twists that corresponds to the pencil of geodesics coincident with a point on a surface defines a cylindroid: the cylindroid of torsure. An illustrative example is provided to demonstrate this difference. © 2003 Wiley Periodicals, Inc. [source]
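    For reference, the classical curvature and torsion that the radius of torsure couples together follow from the standard Frenet formulas κ = |r′×r″|/|r′|³ and τ = (r′×r″)·r‴/|r′×r″|²; for the helix (a cos t, a sin t, bt) these evaluate to a/(a²+b²) and b/(a²+b²). A quick check of those classical quantities (the paper's radius-of-torsure definition itself is not reproduced here):

```python
import numpy as np

def curvature_torsion(t, a=2.0, b=1.0):
    """Curvature and torsion of the helix (a cos t, a sin t, b t) via Frenet."""
    r1 = np.array([-a * np.sin(t),  a * np.cos(t), b])    # r'
    r2 = np.array([-a * np.cos(t), -a * np.sin(t), 0.0])  # r''
    r3 = np.array([ a * np.sin(t), -a * np.cos(t), 0.0])  # r'''
    cr = np.cross(r1, r2)
    kappa = np.linalg.norm(cr) / np.linalg.norm(r1) ** 3
    tau = np.dot(cr, r3) / np.dot(cr, cr)
    return kappa, tau

# For a = 2, b = 1 both values are constant along the curve:
# kappa = 2/5 and tau = 1/5 at every parameter value.
kappa, tau = curvature_torsion(0.7)
```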


    Optimal operation of GaN thin film epitaxy employing control vector parametrization

    AICHE JOURNAL, Issue 4 2006
    Amit Varshney
    Abstract An approach that links nonlinear model reduction techniques with control vector parametrization-based schemes is presented, to efficiently solve dynamic constrained optimization problems arising in the context of spatially distributed processes governed by highly dissipative nonlinear partial differential equations (PDEs), utilizing standard nonlinear programming techniques. The method of weighted residuals with empirical eigenfunctions (obtained via Karhunen-Loève expansion) as basis functions is employed for spatial discretization, together with a control vector parametrization formulation for temporal discretization. The stimulus for this approach is provided by the presence of low-order dominant dynamics in the case of highly dissipative parabolic PDEs. Spatial discretization based on these few dominant modes (which are elegantly captured by empirical eigenfunctions) takes into account the actual spatiotemporal behavior of the PDE, which cannot be captured using finite difference or finite element techniques with a small number of discretization points/elements. The proposed approach is used to compute the optimal operating profile of a metallorganic vapor-phase epitaxy process for the production of GaN thin films, with the objective of minimizing the spatial nonuniformity of the deposited film across the substrate surface by adequately manipulating the spatiotemporal concentration profiles of Ga and N precursors at the reactor inlet. It is demonstrated that the reduced-order optimization problem thus formulated using the proposed approach for nonlinear order reduction results in considerable savings of computational resources while remaining accurate. It is demonstrated that by optimally varying the precursor concentration across the reactor inlet it is possible to reduce the thickness nonuniformity of the deposited film from a nominal 33% to 3.1%. © 2005 American Institute of Chemical Engineers AIChE J, 2006 [source]
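    In practice, the empirical eigenfunctions used for spatial discretization are obtained from the SVD of a snapshot matrix: the left singular vectors are the dominant spatial modes, and the squared singular values give their energy content. A toy sketch with a synthetic two-mode field (all data below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, np.pi, 100)

# Snapshot matrix of a field dominated by two spatial modes, mimicking the
# low-order dominant dynamics of a dissipative parabolic PDE.
snapshots = np.array([np.sin(x) * rng.normal()
                      + 0.1 * np.sin(2 * x) * rng.normal()
                      for _ in range(40)])

# Empirical eigenfunctions = left singular vectors of the snapshot matrix
# (columns of U); s**2 measures the energy captured by each mode.
U, s, Vt = np.linalg.svd(snapshots.T, full_matrices=False)
energy = s**2 / np.sum(s**2)
```

Because the synthetic field is built from exactly two modes, essentially all of the energy falls in the first two empirical eigenfunctions, which is the property that justifies truncating the weighted-residuals expansion to a few modes.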


    On a quadrature algorithm for the piecewise linear wavelet collocation applied to boundary integral equations

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 11 2003
    Andreas Rathsfeld
    Abstract In this paper, we consider a piecewise linear collocation method for the solution of a pseudo-differential equation of order r = 0, −1 over a closed and smooth boundary manifold. The trial space is the space of all continuous and piecewise linear functions defined over a uniform triangular grid, and the collocation points are the grid points. For the wavelet basis in the trial space we choose the three-point hierarchical basis together with a slight modification near the boundary points of the global patches of parametrization. We choose linear combinations of Dirac delta functionals as the wavelet basis in the space of test functionals. For the corresponding wavelet algorithm, we show that the parametrization can be approximated by low-order piecewise polynomial interpolation and that the integrals in the stiffness matrix can be computed by quadrature, where the quadrature rules are composite rules of simple low-order quadratures. The whole algorithm for the assembling of the matrix requires no more than O(N [log N]³) arithmetic operations, and the error of the collocation approximation, including the compression, the approximative parametrization, and the quadratures, is less than O(N^(−(2−r)/2)). Note that, in contrast to well-known algorithms by Petersdorff, Schwab, and Schneider, only a finite degree of smoothness is required. In contrast to an algorithm of Ehrich and Rathsfeld, no multiplicative splitting of the kernel function is required. Besides the usual mapping properties of the integral operator in low-order Sobolev spaces, estimates of Calderón–Zygmund type are the only assumptions on the kernel function. Copyright © 2003 John Wiley & Sons, Ltd. [source]
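    The "composite rules of simple low-order quadratures" mentioned above follow the usual pattern: a basic low-order rule is applied on each cell of a subdivision, and the global error decays at the rule's order as the mesh is refined. A generic sketch with the composite midpoint rule (second order, so halving the cell size should roughly quarter the error):

```python
import numpy as np

def composite_midpoint(f, a, b, n):
    """Composite midpoint rule: a simple low-order quadrature reused per cell."""
    h = (b - a) / n
    mids = a + h * (np.arange(n) + 0.5)   # midpoint of each of the n cells
    return h * np.sum(f(mids))

# Integral of sin over [0, pi] is exactly 2; compare two refinement levels.
exact = 2.0
err_coarse = abs(composite_midpoint(np.sin, 0.0, np.pi, 8) - exact)
err_fine = abs(composite_midpoint(np.sin, 0.0, np.pi, 16) - exact)
```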


    An optimal shape design formulation for inhomogeneous dam problems

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 6 2002
    Abdelkrim Chakib
    In this paper, the flow problem of an incompressible liquid through an inhomogeneous porous medium (say, a dam), with permeability allowing parametrization of the free boundary by the graph of a continuous one-dimensional function, is considered. We propose a new formulation as an optimal shape design problem. We show the existence of a solution of the optimal shape design problem. The finite element method is used to obtain numerical results which show the efficiency of the proposed approach. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    The matricial Schur problem in both nondegenerate and degenerate cases

    MATHEMATISCHE NACHRICHTEN, Issue 2 2009
    Bernd Fritzsche
    Abstract The principal object of this paper is to present a new approach simultaneously to both the nondegenerate and degenerate cases of the matricial Schur problem. This approach is based on an analysis of the central matrix-valued Schur functions which was started in [24]–[26] and then continued in [27]. In the nondegenerate situation we will see that the parametrization of the solution set obtained here coincides with the well-known formula of D. Z. Arov and M. G. Kreĭn for that case (see [1]). Furthermore, we give some characterizations of the situation in which the matricial Schur problem has a unique solution (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Nonlinear Riemann–Hilbert problems with circular target curves

    MATHEMATISCHE NACHRICHTEN, Issue 9 2008
    Christer Glader
    Abstract The paper gives a systematic and self-contained treatment of the nonlinear Riemann–Hilbert problem with circular target curves |w − c| = r, sometimes also called the generalized modulus problem. We assume that c and r are Hölder continuous functions on the unit circle and describe the complete set of solutions w in the disk algebra H∞ ∩ C and in the Hardy space H∞ of bounded holomorphic functions. The approach is based on the interplay with the Nehari problem of best approximation by bounded holomorphic functions. It is shown that the considered problems fall into three classes (regular, singular, and void), and we give criteria that allow one to classify a given problem. For regular problems the target manifold is covered by the traces of solutions with winding number zero in a schlicht manner. Counterexamples demonstrate that this need not be so if the boundary condition is merely continuous. Paying special attention to constructive aspects of the matter, we show how the Nevanlinna parametrization of the full solution set can be obtained from one particular solution of arbitrary winding number. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]