Polynomials

Kinds of Polynomials

  • bernstein polynomial
  • characteristic polynomial
  • chebyshev polynomial
  • fractional polynomial
  • legendre polynomial
  • orthogonal polynomial

Terms modified by Polynomials

  • polynomial approximation
  • polynomial basis
  • polynomial equation
  • polynomial expansion
  • polynomial form
  • polynomial function
  • polynomial model
  • polynomial models
  • polynomial regression
  • polynomial regression analysis
  • polynomial regression models
  • polynomial system
  • polynomial system states
  • polynomial time
  • polynomial time algorithm

Selected Abstracts


    Polynomial and analytic stabilization of a wave equation coupled with an Euler–Bernoulli beam

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 5 2009
    Kaïs Ammari
    Abstract We consider a stabilization problem for a model arising in the control of noise. We prove that, in the case where the control zone does not satisfy the geometric control condition (B.L.R.; see Bardos et al., SIAM J. Control Optim. 1992; 30:1024–1065), we have a polynomial stability result for all regular initial data. Moreover, we give a precise estimate on the analyticity of reachable functions for which we have exponential stability. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Fast High-Dimensional Filtering Using the Permutohedral Lattice

    COMPUTER GRAPHICS FORUM, Issue 2 2010
    Andrew Adams
    Abstract Many useful algorithms for processing images and geometry fall under the general framework of high-dimensional Gaussian filtering. This family of algorithms includes bilateral filtering and non-local means. We propose a new way to perform such filters using the permutohedral lattice, which tessellates high-dimensional space with uniform simplices. Our algorithm is the first implementation of a high-dimensional Gaussian filter that is both linear in input size and polynomial in dimensionality. Furthermore, it is parameter-free apart from the filter size, and achieves consistently high accuracy relative to ground truth (> 45 dB). We use this to demonstrate a number of interactive-rate applications of filters in dimensions as high as eight. [source]
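
    The core operation here is easy to state even though making it fast is the hard part. Below is a minimal brute-force sketch (not the paper's algorithm) of high-dimensional Gaussian filtering: every output value is a Gaussian-weighted average of all inputs in a d-dimensional feature space, which costs O(N^2) and is the cost the permutohedral lattice reduces to roughly linear time. The bilateral-filter feature construction and all parameter values below are illustrative assumptions.

```python
import numpy as np

def gaussian_filter_bruteforce(positions, values, sigma):
    """Brute-force high-dimensional Gaussian filter: each output value is a
    Gaussian-weighted average of all input values, with weights determined by
    distance in the (possibly high-dimensional) position space. O(N^2); this
    is the cost the permutohedral lattice reduces to roughly linear in N."""
    diff = positions[:, None, :] - positions[None, :, :]      # (N, N, d)
    w = np.exp(-np.sum(diff**2, axis=-1) / (2.0 * sigma**2))  # (N, N)
    return (w @ values) / w.sum(axis=1)

# Bilateral filtering of a 1D signal: position = (x/sigma_s, intensity/sigma_r).
x = np.linspace(0, 1, 200)
signal = (x > 0.5).astype(float) + 0.05 * np.random.randn(200)
pos = np.stack([x / 0.05, signal / 0.2], axis=1)  # 2D feature space
smoothed = gaussian_filter_bruteforce(pos, signal, sigma=1.0)
```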


    Out-of-core compression and decompression of large n-dimensional scalar fields

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    Lawrence Ibarria
    We present a simple method for compressing very large and regularly sampled scalar fields. Our method is particularly attractive when the entire data set does not fit in memory and when the sampling rate is high relative to the feature size of the scalar field in all dimensions. Although we report results for low-dimensional data sets, the proposed approach may be applied to higher dimensions. The method is based on the new Lorenzo predictor, introduced here, which estimates the value of the scalar field at each sample from the values at processed neighbors. The predicted values are exact when the n-dimensional scalar field is an implicit polynomial of degree n−1. Surprisingly, when the residuals (differences between the actual and predicted values) are encoded using arithmetic coding, the proposed method often outperforms wavelet compression in an L∞ sense. The proposed approach may be used both for lossy and lossless compression and is well suited for out-of-core compression and decompression, because a trivial implementation, which sweeps through the data set reading it once, requires maintaining only a small buffer in core memory, whose size barely exceeds a single (n−1)-dimensional slice of the data. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Compression, scalar fields, out-of-core. [source]
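
    To make the predictor concrete, here is a small sketch of the Lorenzo predictor as described above: each sample is predicted from the already-processed corners of the unit hypercube behind it with inclusion-exclusion signs, and only the residuals would be entropy coded. The zero padding at the array border, the shapes, and the test field are illustrative assumptions.

```python
import itertools
import numpy as np

def lorenzo_residuals(field):
    """Residuals of the Lorenzo predictor for an n-dimensional array: the
    value at each sample is predicted from the previously processed corners
    of the unit hypercube behind it, with inclusion-exclusion signs. The
    predictor is exact for implicit polynomials of degree n-1."""
    n = field.ndim
    padded = np.pad(field, [(1, 0)] * n)   # zero layer for boundary samples
    pred = np.zeros(field.shape)
    for offset in itertools.product((0, 1), repeat=n):
        k = sum(offset)
        if k == 0:
            continue                       # that corner is the sample itself
        sign = 1.0 if k % 2 == 1 else -1.0
        sl = tuple(slice(1 - o, padded.shape[i] - o)
                   for i, o in enumerate(offset))
        pred += sign * padded[sl]
    return field - pred                    # residuals to be entropy coded

# A degree-1 field in 2D gives zero residuals away from the padded border.
y, x = np.mgrid[0:8, 0:8]
r = lorenzo_residuals(2.0 * x + 3.0 * y + 1.0)
```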


    Maternal blood glucose in diabetic pregnancies and cognitive performance in offspring in young adulthood: a Danish cohort study

    DIABETIC MEDICINE, Issue 7 2010
    G. L. Nielsen
    Diabet. Med. 27, 786–790 (2010) Abstract Aims: Maternal diabetes is a known risk factor for perinatal complications, but there are few data on the consequences for long-term intellectual outcome in offspring. We assess cognitive performance in military conscripts according to maternal blood glucose levels during pregnancy. Methods: We identified a cohort of 60 Danish male offspring of insulin-treated diabetic mothers born between 1976 and 1984 and followed this cohort to military conscription. From medical records, we extracted data on all available values of maternal blood glucose, categorized as fasting and non-fasting and by day in pregnancy, together with maternal White class, smoking habits and socio-economic status. The main outcome was cognitive performance at conscription measured with a validated intelligence test. The association between maternal blood glucose level and cognitive performance was assessed by multivariate linear regression and a fitted fractional polynomial. Results: The median fasting blood glucose value in the second half of pregnancy was negatively associated with cognitive scores at conscription [adjusted coefficient −1.7; 95% confidence interval (CI) −3.0 to −0.4]. Restriction to first-born siblings only slightly strengthened the association (coefficient −1.9; 95% CI −3.3 to −0.5), but after exclusion of two pregnancies with blood glucose > 10 mmol/l the association became insignificant (coefficient −0.6; 95% CI −2.6 to 1.4). Conclusions: Maternal blood glucose level during diabetic pregnancy is negatively associated with cognitive performance in offspring at military conscription. In pregnancies with fasting blood glucose levels below 10 mmol/l, the association is weak and considered to be without clinical relevance. [source]
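
    The "fitted fractional polynomial" mentioned above belongs to a standard family of regression models. The sketch below, a hedged illustration rather than the study's analysis, fits a first-degree fractional polynomial by trying each power in the conventional candidate set and keeping the least-squares best; the data are synthetic.

```python
import numpy as np

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)  # conventional FP1 power set

def fp_transform(x, p):
    """Box-Tidwell convention: power 0 means the natural logarithm."""
    return np.log(x) if p == 0 else x**p

def fit_fp1(x, y):
    """Fit a first-degree fractional polynomial y = b0 + b1 * x^p by trying
    each candidate power and keeping the least-squares best."""
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, p, beta)
    return best  # (rss, chosen power, coefficients)

# Hypothetical glucose/score data, purely for illustration.
x = np.linspace(3.0, 10.0, 60)
y = 45 - 8 * np.log(x) + np.random.randn(60)
rss, p, beta = fit_fp1(x, y)
```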


    The effect of bidirectional flow on tidal channel planforms

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 3 2004
    Sergio Fagherazzi
    Abstract Salt marsh tidal channels are highly sinuous. For this project, field surveys and aerial photographs were used to characterize the planform of tidal channels at China Camp Marsh in the San Francisco Bay, California. To model the planform evolution, we assume that the topographic curvature of the channel centreline is a key element driving meander migration. Extraction of curvature data from a planimetric survey, however, presents certain problems because simple calculations based on equally distanced points on the channel axis produce numerical noise that pollutes the final curvature data. We found that a spline interpolation and a polynomial fit to the survey data provided us with a robust means of calculating channel curvature. The curvature calculations, combined with data from numerous cross-sections along the tidal channel, were used to parameterize a computer model. With this model, based on recent theoretical work, the relationship between planform shape and meander migration as well as the consequences of bidirectional flow on planform evolution have been investigated. Bank failure in vegetated salt marsh channels is characterized by slump blocks that persist in the channel for several years. It is therefore possible to identify reaches of active bank erosion and test model predictions. Our results suggest that the geometry and evolution of meanders at China Camp Marsh, California, reflect the ebb-dominated regime. Copyright © 2004 John Wiley & Sons, Ltd. [source]
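
    The curvature step lends itself to a compact illustration. The sketch below (my own, not the authors' code) fits polynomials to the centreline coordinates against normalized arc length, one of the smoothing options the abstract mentions, and evaluates the standard planform curvature formula; the polynomial degree and the circle test are assumptions.

```python
import numpy as np

def curvature_from_polyfit(x, y, deg=5):
    """Fit polynomials x(s), y(s) against normalized arc length s, then
    evaluate the signed planform curvature
        kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2),
    which smooths the numerical noise of finite differences on survey points."""
    ds = np.hypot(np.diff(x), np.diff(y))
    s = np.concatenate([[0.0], np.cumsum(ds)])
    s /= s[-1]
    px, py = np.polyfit(s, x, deg), np.polyfit(s, y, deg)
    dx, dy = np.polyval(np.polyder(px), s), np.polyval(np.polyder(py), s)
    ddx, ddy = np.polyval(np.polyder(px, 2), s), np.polyval(np.polyder(py, 2), s)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check: for a circular arc of radius R the estimate should be near 1/R.
t = np.linspace(0, np.pi, 100)
kappa = curvature_from_polyfit(50 * np.cos(t), 50 * np.sin(t), deg=7)
```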


    Shrinkage of initially very wet soil blocks, cores and clods from a range of European Andosol horizons

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 2 2007
    F. Bartoli
    Summary In advanced stages of volcanic ash soil formation, when more clay is formed, soil porosity values and soil water retention capacities are large and the soils show pronounced shrinkage on drying. Soil shrinkage is a key issue in volcanic soil environments because it often occurs irreversibly when topsoils dry out after changes from permanent grassland or forest to agriculture. European Andosols have developed in a wide range of climatic conditions, leading to a wide range in intensity of both weathering and organo-mineral interactions. The question arises as to whether these differences affect their shrinkage properties. We aimed to identify common physically based shrinkage laws which could be derived from soil structure, the analysis of soil constituents, the selected sampling size and the drying procedure. We found that the final volumetric shrinkage of the initially field-wet (56–86% of total porosity) or capillary-wet (87–100% of total porosity) undisturbed soil samples was negatively related to initial bulk density and positively related to initial capillary porosity (volumetric soil water content of soil cores after capillary rise). These relationships were linear for the soil clods of 3–8 cm3, with final shrinkage ranging from 21.2 to 52.2%. For soil blocks of 240 cm3 and soil cores of 28.6 cm3 we found polynomial and exponential relationships, respectively, with thresholds separating shrinkage and nearly non-shrinkage domains, and larger shrinkage values for the soil cores than for the soil blocks. For a given sample size, shrinkage was more pronounced in the most weathered and most porous Andosol horizons, rich in Al-humus, than in the less weathered and less porous Andosol horizons, poor in Al-humus. The Bw horizons, being more weathered and more porous, shrank more than the Ah horizons. We showed that the structural approach combining drying kinetics under vacuum, soil water analysis and mercury porosimetry is useful for relating water loss and shrinkage to soil structure and its dynamics. We also found that the more shrinkage that occurred in an Andosol horizon, the more pronounced was its irreversible mechanical change. [source]


    Determination of moisture content in a deformable soil using time-domain reflectometry (TDR)

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 1 2000
    D. J. Kim
    Summary Time-domain reflectometry (TDR) is being used increasingly for measuring the moisture content of porous media. However, successful application for measuring water in soil has been limited to non-deformable soils, and it would be a valuable extension of the technique if it could be used for soils that shrink on drying. We have recently investigated its application to soils rich in clay and organic matter and to peats. Here we propose a method for determining moisture content in deformable soils based on the relation between the dielectric constant, K, and the volumetric moisture content, θ, measured by TDR. Parallel TDR probes with a length of 15 cm and a spacing of 2 cm were placed horizontally in soil cores with a diameter of 20 cm and height of 10 cm taken from a forest. The soil is very porous with large proportions of both silt and clay. The sample weight and travel time of the electromagnetic wave guided by the parallel TDR probes were simultaneously measured as a function of time, from saturation to oven-dryness, during which the core samples shrank considerably. Vertical and horizontal components of shrinkage were also measured to take the air-exposed region of the TDR probe into account in the determination of K. The effect of deformation on volumetric moisture content was formulated for two different expressions, namely actual volumetric moisture content (AVMC) and fictitious (uncorrected) volumetric moisture content (FVMC). The effects of air-exposure and of the expression of volumetric moisture content on the relation between K and θ were examined by fitting the observations with a third-order polynomial. Neglecting the travel time in the air-exposed part, or use of the FVMC, underestimated θ for a given K. The difference was more pronounced between AVMC and FVMC than between the two different dielectric constants, i.e. accounting for air-exposure, Kac, and not accounting for air-exposure, Kau. When the existing empirical models were compared with the fitted results, most underestimated the relation based on the AVMC. This indicates that published empirical models do not reflect the effect of deformation on the determination of θ in our forest soil. Correct use of the θ expression has more impact on determining the moisture content of a deformable soil than the accommodation of travel time through the air-exposed region of the TDR probe. [source]
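
    A minimal sketch of the calibration step described above: fit a third-order polynomial relating the measured dielectric constant K to the volumetric moisture content θ, and compare with a published empirical K–θ relation. The calibration pairs here are hypothetical; the Topp et al. (1980) coefficients are the widely cited published values, not this paper's fit.

```python
import numpy as np

# Hypothetical calibration pairs: apparent dielectric constant K and actual
# volumetric moisture content (AVMC) theta, from saturation to dryness.
K = np.array([4.1, 7.9, 12.5, 18.2, 24.6, 30.3, 36.8])
theta = np.array([0.08, 0.18, 0.28, 0.38, 0.48, 0.56, 0.63])

coef = np.polyfit(K, theta, deg=3)   # third-order polynomial, as in the study
theta_hat = np.polyval(coef, 21.0)   # predict theta for a measured K

# For comparison, Topp et al.'s widely used empirical K-theta polynomial:
def topp(K):
    return -5.3e-2 + 2.92e-2 * K - 5.5e-4 * K**2 + 4.3e-6 * K**3
```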


    Computation of time delay margin for power system small-signal stability

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 7 2009
    Saffet Ayasun
    Abstract With the extensive use of phasor measurement units (PMU) in wide-area measurement/monitoring systems (WAMS), time delays have become unavoidable in power systems. This paper presents a direct and exact method to compute the delay margin of power systems with single and commensurate time delays. The delay margin is the maximum amount of time delay that the system can tolerate before it becomes unstable for a given operating point. First, without using any approximation or substitution, the transcendental characteristic equation is converted into a polynomial without the transcendentality, such that its real roots coincide exactly with the imaginary roots of the characteristic equation. The resulting polynomial also enables us to easily determine the delay dependency of the system stability and the sensitivities of crossing roots with respect to time delay. Then, an expression in terms of system parameters and the imaginary root of the characteristic equation is derived for computing the delay margin. The proposed method is applied to a single-machine infinite-bus (SMIB) power system with an exciter. Delay margins are computed for a wide range of system parameters including generator mechanical power, damping and transient reactance, exciter gain, and transmission line reactance. The results indicate that the delay margin decreases as the mechanical power, exciter gain and line reactance increase, while it increases with increasing generator transient reactance. Additionally, the relationship between the delay margin and generator damping is found to be relatively complex. Finally, the theoretical delay margin results are validated using time-domain simulations in Matlab. Copyright © 2008 John Wiley & Sons, Ltd. [source]
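
    The crossing-frequency idea behind the delay margin can be sketched directly. The code below is a generic numerical illustration (not the paper's exact polynomial elimination): on the imaginary axis a root of P(s) + Q(s)e^{−sτ} = 0 requires |P(jω)| = |Q(jω)|, so sign changes of |P|² − |Q|² locate candidate crossing frequencies, and the phase of −P/Q gives the smallest delay at each crossing. The example system is made up.

```python
import numpy as np
from scipy.optimize import brentq

def delay_margin(P, Q, w_max=100.0, n_grid=20000):
    """Delay margin of P(s) + Q(s) e^{-s tau} = 0, with P, Q given as
    coefficient arrays (highest power first). A purely imaginary root needs
    |P(jw)| = |Q(jw)|; scanning F(w) = |P|^2 - |Q|^2 for sign changes gives
    the crossing frequencies, and the phase of -P/Q the delay per crossing."""
    F = lambda w: abs(np.polyval(P, 1j * w))**2 - abs(np.polyval(Q, 1j * w))**2
    ws = np.linspace(1e-6, w_max, n_grid)
    vals = np.array([F(w) for w in ws])
    margins = []
    for i in np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
        wc = brentq(F, ws[i], ws[i + 1])
        phase = np.angle(-np.polyval(P, 1j * wc) / np.polyval(Q, 1j * wc))
        margins.append((phase % (2 * np.pi)) / wc)
    return min(margins) if margins else np.inf  # inf: stable for all delays

# Scalar example: s + 1 + 0.5 e^{-s tau} = 0 (|Q| < |P| everywhere -> inf).
print(delay_margin([1.0, 1.0], [0.5]))
```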


    Estimation of an optimal mixed-phase inverse filter

    GEOPHYSICAL PROSPECTING, Issue 4 2000
    Bjørn Ursin
    Inverse filtering is applied to seismic data to remove the effect of the wavelet and to obtain an estimate of the reflectivity series. In many cases the wavelet is not known, and only an estimate of its autocorrelation function (ACF) can be computed. Solving the Yule-Walker equations gives the inverse filter which corresponds to a minimum-delay wavelet. When the wavelet is mixed delay, this inverse filter produces a poor result. By solving the extended Yule-Walker equations with the ACF of a given lag on the main diagonal of the filter equations, it is possible to decompose the inverse filter into a finite-length filter convolved with an infinite-length filter. In a previous paper we proposed a mixed-delay inverse filter where the finite-length filter is maximum delay and the infinite-length filter is minimum delay. Here, we refine this technique by analysing the roots of the Z-transform polynomial of the finite-length filter. By varying the number of roots which are placed inside the unit circle of the mixed-delay inverse filter, at most 2^m different filters are obtained for m roots. Applying each filter to a small data set (say a CMP gather), we choose the optimal filter to be the one for which the output has the largest Lp-norm, with p = 5. This is done for increasing values of the lag to obtain a final optimal filter. From this optimal filter it is easy to construct the inverse wavelet, which may be used as an estimate of the seismic wavelet. The new procedure has been applied to a synthetic wavelet and to an airgun wavelet to test its performance, and also to verify that the reconstructed wavelet is close to the original wavelet. The algorithm has also been applied to prestack marine seismic data, resulting in an improved stacked section compared with the one obtained by using a minimum-delay filter. [source]
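
    A sketch of the root-flipping search described above, under simplifying assumptions: a short filter with real zeros, so each zero can be reflected across the unit circle individually (complex zeros would have to be flipped as conjugate pairs to keep the coefficients real). The candidate set has at most 2^m members for m zeros, and the winner maximizes the L5 norm of its output, as in the paper; the test trace and filter are made up.

```python
import itertools
import numpy as np

def candidate_filters(f):
    """All filters sharing the amplitude spectrum of f, obtained by
    reflecting subsets of the zeros of its Z-transform polynomial across
    the unit circle (this sketch assumes real zeros)."""
    roots = np.roots(f)
    out = []
    for mask in itertools.product([False, True], repeat=len(roots)):
        r = np.array([1.0 / np.conj(z) if m else z for z, m in zip(roots, mask)])
        c = np.real(np.poly(r))
        out.append(c / np.linalg.norm(c))   # normalize filter energy
    return out

def best_by_lp(filters, data, p=5):
    """Pick the candidate whose output on the data has the largest Lp norm
    (p = 5 in the paper), favouring the spikiest deconvolution result."""
    score = lambda h: np.sum(np.abs(np.convolve(data, h)) ** p)
    return max(filters, key=score)

# Hypothetical short inverse filter (real zeros) and a reflectivity-like trace.
rng = np.random.default_rng(0)
trace = rng.laplace(size=400)
h_opt = best_by_lp(candidate_filters(np.array([1.0, -2.5, 1.0])), trace)
```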


    Serum bilirubin levels and mortality after myeloablative allogeneic hematopoietic cell transplantation

    HEPATOLOGY, Issue 2 2005
    Ted A. Gooley
    Many patients who undergo hematopoietic cell transplantation experience liver injury. We examined the association of serum bilirubin levels with nonrelapse mortality by day +200, testing the hypothesis that the duration of jaundice up to a given point in time provides more prognostic information than either the maximum bilirubin value or the value at that point in time. We studied 1,419 consecutive patients transplanted from allogeneic donors. Total serum bilirubin values up to day +100, death, or relapse were retrieved, along with nonrelapse mortality by day +200 as an outcome measure, using Cox regression models with each bilirubin measure modeled as a time-dependent covariate. The bilirubin value at a particular point in time provided the best fit to the model for mortality. With bilirubin at a point in time modeled as an 8th-degree polynomial, an increase in bilirubin from 1 to 3 mg/dL is associated with a mortality hazard ratio of 6.42. An increase from 4 to 6 mg/dL yields a hazard ratio of 2.05, and an increase from 10 to 12 mg/dL yields a hazard ratio of 1.17. Among patients who were deeply jaundiced, survival was related to the absence of multiorgan failure and to higher platelet counts. In conclusion, the value of total serum bilirubin at a particular point in time after transplant carries more prognostic information than does the maximum or average value up to that point in time. The increase in mortality for a given increase in bilirubin value is larger when the starting value is lower. (HEPATOLOGY 2005;41:345–352.) [source]


    Curvature- and displacement-based finite element analyses of flexible slider crank mechanisms

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 10 2010
    Y. L. Kuo
    Abstract The paper presents the applications of the curvature- and displacement-based finite element methods to flexible slider crank mechanisms. The displacement-based method usually needs more elements or higher-degree polynomials to obtain highly accurate solutions. The curvature-based method assumes a polynomial to approximate the curvature distribution, and the expressions are integrated to obtain the rotation and displacement distributions. During this process, the boundary conditions associated with displacement, rotation, and curvature are imposed, which leads to a great reduction in the number of degrees of freedom required. The numerical results demonstrate that the errors obtained by applying the curvature-based method are much smaller than those obtained by applying the displacement-based method, based on a comparison using the same number of degrees of freedom. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Exact integration of polynomial–exponential products with application to wave-based numerical methods

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 3 2009
    G. Gabard
    Abstract Wave-based numerical methods often require the integration of products of polynomials and exponentials. With quadrature methods, this task can be particularly expensive at high frequencies, as large numbers of integration points are required. This paper presents a set of closed-form solutions for the integrals of polynomial–exponential products in two and three dimensions. These results apply to arbitrary polygons in two dimensions, and to arbitrary polygonal surfaces or polyhedral volumes in three dimensions. Quadrature methods are therefore not required for this class of integrals, which can be evaluated quickly and exactly. Copyright © 2008 John Wiley & Sons, Ltd. [source]
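
    The one-dimensional core of such closed forms is the classic integration-by-parts recursion, sketched below for the integral of p(x)·e^{ikx} over [0, L] (my illustration; the paper's results cover polygons and polyhedra in two and three dimensions). The quadrature comparison shows why closed forms pay off at high k.

```python
import numpy as np
from scipy.integrate import quad

def poly_exp_integral(coeffs, k, L):
    """Exact integral of p(x) * exp(i k x) over [0, L], where coeffs[n] is
    the coefficient on x^n, using the integration-by-parts recursion
        I_n = (L^n e^{ikL} - n I_{n-1}) / (ik).
    No quadrature points are needed, however oscillatory the integrand."""
    ik = 1j * k
    e = np.exp(ik * L)
    I = (e - 1.0) / ik                 # I_0
    total = coeffs[0] * I
    for n in range(1, len(coeffs)):
        I = (L**n * e - n * I) / ik    # I_n from I_{n-1}
        total += coeffs[n] * I
    return total

# Check against brute-force quadrature for p(x) = 1 + 2x + 3x^2, k = 40.
p = [1.0, 2.0, 3.0]
exact = poly_exp_integral(p, 40.0, 1.0)
re, _ = quad(lambda x: np.polyval(p[::-1], x) * np.cos(40 * x), 0, 1, limit=200)
im, _ = quad(lambda x: np.polyval(p[::-1], x) * np.sin(40 * x), 0, 1, limit=200)
```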


    Parametric enrichment adaptivity by the extended finite element method

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2008
    Haim Waisman
    Abstract An adaptive method within the extended finite element method (XFEM) framework which adapts the enrichment function locally to the physics of a problem, as opposed to polynomial or mesh refinement, is presented. The method minimizes a local residual and determines the parameters of the enrichment function. We consider an energy form and a 'strong' form of the residual as error measures to drive the algorithm. Numerical examples for boundary layers and solid mechanics problems illustrate that the procedure converges. Moreover, when only the character of the solution is known, a good approximation is obtained in the area of interest. It is also shown that the method can be used to determine the order of singularities in solutions. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Moving least-square interpolants in the hybrid particle method

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2005
    H. Huang
    Abstract The hybrid particle method (HPM) is a particle-based method for the solution of high-speed dynamic structural problems. In the current formulation of the HPM, a moving least-squares (MLS) interpolant is used to compute the derivatives of stress and velocity components. Compared with the use of the MLS interpolant at interior particles, the boundary particles require two additional treatments in order to compute the derivatives accurately. These are the rotation of the local co-ordinate system and the imposition of boundary constraints, respectively. In this paper, it is first shown that the derivatives found by the MLS interpolant based on a complete polynomial are indifferent to the orientation of the co-ordinate system. Secondly, it is shown that imposing boundary constraints is equivalent to employing ghost particles with proper values assigned at these particles. The latter can further be viewed as placing the boundary particle in the centre of a neighbourhood that is formed jointly by the original neighbouring particles and the ghost particles. The benefit of providing a symmetric or a full circle of neighbouring points is revealed by examining the error terms generated in approximating the derivatives of a Taylor polynomial by using a linear-polynomial-based MLS interpolant. Symmetric boundaries have mostly been treated by using ghost particles in various versions of the available particle methods that are based on the strong form of the conservation equations. In light of the equivalence of the respective treatments of imposing boundary constraints and adding ghost particles, an alternative treatment for symmetry boundaries is proposed that involves imposing only the symmetry boundary constraints for the HPM. Numerical results are presented to demonstrate the validity of the proposed approach for symmetric boundaries in an axisymmetric impact problem. Copyright © 2005 John Wiley & Sons, Ltd. [source]
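
    A one-dimensional sketch of an MLS derivative estimate, illustrating the boundary effect the abstract discusses: with a linear basis and a Gaussian weight, the slope of the local weighted fit approximates f′, and the one-sided neighbourhood at the boundary degrades the estimate unless the neighbourhood is completed (e.g. with ghost nodes). Node counts, the weight function, and the test function are assumptions.

```python
import numpy as np

def mls_derivative(x_nodes, f_nodes, x_eval, h):
    """Moving least-squares fit with linear basis [1, x - x_eval] and a
    Gaussian weight; the slope coefficient of the local weighted fit
    approximates f'(x_eval)."""
    w = np.exp(-((x_nodes - x_eval) / h) ** 2)
    B = np.column_stack([np.ones_like(x_nodes), x_nodes - x_eval])
    A = B.T @ (w[:, None] * B)
    b = B.T @ (w * f_nodes)
    coeff = np.linalg.solve(A, b)
    return coeff[1]                     # derivative estimate at x_eval

x = np.linspace(0.0, 1.0, 21)
f = np.sin(2 * np.pi * x)
d_interior = mls_derivative(x, f, 0.5, h=0.1)  # near 2*pi*cos(pi) = -2*pi
d_boundary = mls_derivative(x, f, 0.0, h=0.1)  # one-sided: larger error
```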


    A fast boundary cloud method for 3D exterior electrostatic analysis

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2004
    Vaishali Shrivastava
    Abstract An accelerated boundary cloud method (BCM) for boundary-only analysis of 3D electrostatic problems is presented here. BCM uses scattered points unlike the classical boundary element method (BEM) which uses boundary elements to discretize the surface of the conductors. BCM combines the weighted least-squares approach for the construction of approximation functions with a boundary integral formulation for the governing equations. A linear base interpolating polynomial that can vary from cloud to cloud is employed. The boundary integrals are computed by using a cell structure and different schemes have been used to evaluate the weakly singular and non-singular integrals. A singular value decomposition (SVD) based acceleration technique is employed to solve the dense linear system of equations arising in BCM. The performance of BCM is compared with BEM for several 3D examples. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Direct adaptive command following and disturbance rejection for minimum phase systems with unknown relative degree

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2007
    Jesse B. Hoagg
    Abstract This paper considers parameter-monotonic direct adaptive command following and disturbance rejection for single-input single-output minimum-phase linear time-invariant systems with knowledge of the sign of the high-frequency gain (first non-zero Markov parameter) and an upper bound on the magnitude of the high-frequency gain. We assume that the command and disturbance signals are generated by a linear system with known characteristic polynomial. Furthermore, we assume that the command signal is measured, but the disturbance signal is unmeasured. The first part of the paper is devoted to a fixed-gain analysis of a high-gain-stabilizing dynamic compensator for command following and disturbance rejection. The compensator utilizes a Fibonacci series construction to control systems with unknown-but-bounded relative degree. We then introduce a parameter-monotonic adaptive law and guarantee asymptotic command following and disturbance rejection. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Stochastic rationality and Möbius inverse

    INTERNATIONAL JOURNAL OF ECONOMIC THEORY, Issue 3 2005
    Antoine Billot
    JEL classification: C71; D46; D63. Discrete choice theory is very much dominated by the paradigm of the maximization of a random utility, thus implying that the probability of choosing an alternative in a given set is equal to the sum of the probabilities of all the rankings for which this alternative comes first. This property is called stochastic rationality. In turn, the choice probability system is said to be stochastically rationalizable if and only if the Block–Marschak polynomials are all nonnegative. In the present paper, we show that each particular Block–Marschak polynomial can be defined as the probability that the decision-maker faces the loss in flexibility generated by the fact that a particular alternative has been deleted from the choice set. [source]
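
    The Block–Marschak polynomials are simple to compute by inclusion-exclusion, and a random-utility model makes them nonnegative by construction. The sketch below (an illustration of the standard definitions, not the paper's construction) builds choice probabilities from a distribution over rankings and evaluates K(x, A) = Σ over supersets B of A of (−1)^{|B∖A|} p(x, B).

```python
import itertools

X = ('a', 'b', 'c')

# A random-utility model: a probability distribution over the 6 rankings.
rankings = list(itertools.permutations(X))
prob = {r: 1.0 / len(rankings) for r in rankings}  # uniform, for illustration

def choice_prob(x, A):
    """p(x, A): probability that x is ranked first among the alternatives in A."""
    return sum(p for r, p in prob.items() if min(A, key=r.index) == x)

def block_marschak(x, A):
    """K(x, A) = sum over supersets B of A of (-1)^{|B \\ A|} p(x, B);
    nonnegativity of all such values characterizes stochastic rationality."""
    rest = [y for y in X if y not in A]
    total = 0.0
    for k in range(len(rest) + 1):
        for extra in itertools.combinations(rest, k):
            total += (-1) ** k * choice_prob(x, A + extra)
    return total

# Every Block-Marschak polynomial is nonnegative for a random-utility model.
vals = [block_marschak(x, A)
        for n in range(1, 4)
        for A in itertools.combinations(X, n) for x in A]
```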


    An extension of the differential approach for Bayesian network inference to dynamic Bayesian networks

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2004
    Boris Brandherm
    We extend Darwiche's differential approach to inference in Bayesian networks (BNs) to handle specific problems that arise in the context of dynamic Bayesian networks (DBNs). We first summarize Darwiche's approach for BNs, which involves the representation of a BN in terms of a multivariate polynomial. We then show how procedures for the computation of corresponding polynomials for DBNs can be derived. These procedures permit not only an exact roll-up of old time slices but also a constant-space evaluation of DBNs. The method is applicable to both forward and backward propagation, and it does not presuppose that each time slice of the DBN has the same structure. It is compatible with approximative methods for roll-up and evaluation of DBNs. Finally, we discuss further ways of improving efficiency, referring as an example to a mobile system in which the computation is distributed over a normal workstation and a resource-limited mobile device. © 2004 Wiley Periodicals, Inc. Int J Int Syst 19: 727–748, 2004. [source]
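
    Darwiche's representation is easy to demonstrate on a two-node network. The sketch below (a toy illustration with assumed probabilities, using sympy) writes the network polynomial in evidence indicators λ and parameters θ, evaluates it for evidence, and uses the identity that partial derivatives with respect to indicators yield joint marginals.

```python
import sympy as sp

# Two-node network A -> B with indicator variables (lambda) and parameters.
la0, la1, lb0, lb1 = sp.symbols('lambda_a0 lambda_a1 lambda_b0 lambda_b1')
ta0, ta1 = sp.Rational(3, 10), sp.Rational(7, 10)          # P(A)
tb0_a0, tb1_a0 = sp.Rational(9, 10), sp.Rational(1, 10)    # P(B | A=0)
tb0_a1, tb1_a1 = sp.Rational(2, 10), sp.Rational(8, 10)    # P(B | A=1)

# The network polynomial: one term per joint instantiation.
f = (la0 * lb0 * ta0 * tb0_a0 + la0 * lb1 * ta0 * tb1_a0
     + la1 * lb0 * ta1 * tb0_a1 + la1 * lb1 * ta1 * tb1_a1)

# Evidence B=1: indicators consistent with the evidence are 1, others 0.
evidence = {la0: 1, la1: 1, lb0: 0, lb1: 1}
p_evidence = f.subs(evidence)                         # P(B=1) = 59/100

# Partial derivatives yield joint marginals: dP/d(lambda_a1) = P(A=1, B=1).
p_a1_and_b1 = sp.diff(f, la1).subs(evidence)
posterior_a1 = sp.simplify(p_a1_and_b1 / p_evidence)  # P(A=1 | B=1)
```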


    Checking identities is computationally intractable (NP-hard), and therefore human provers will always be needed

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1-2 2004
    Vladik Kreinovich
    A 1990 article in the American Mathematical Monthly has shown that most combinatorial identities of the type described in Monthly problems can be solved by known identity-checking algorithms. A natural question arises: are these algorithms always feasible, or can the number of computational steps be so big that applying them is sometimes not physically feasible? We prove that the problem of checking identities is NP-hard (nondeterministic polynomial-time hard), and thus (unless NP = P) for every algorithm that solves it there are cases in which the algorithm would require exponentially long running time and thus will not be feasible. This means that no matter how successful computers are in checking identities, human mathematicians will always be needed to check some of them. © 2004 Wiley Periodicals, Inc. [source]


    Scapular development from the neonatal period to skeletal maturity: A preliminary study

    INTERNATIONAL JOURNAL OF OSTEOARCHAEOLOGY, Issue 5 2007
    C. Rissech
    Abstract An understanding of the basic growth rates and patterns of development for each element of the human skeleton is important for a thorough understanding and interpretation of data in all areas of skeletal research. Yet surprisingly little is known about the detailed ontogenetic development of many bones, including the scapula. With the intention of describing the changes that accompany postnatal ontogeny in the scapula and algorithms to predict sub-adult age at death, this communication examines the development of the scapula through nine measurements (3 from the glenoidal area, 4 from the body and 2 related to the spinous process) by polynomial regression. Data were collected from 31 of the individuals that comprise the Scheuer Collection, which is housed at the University of Dundee (Scotland). Four of the derived mathematical curves (scapular length, infra- and suprascapular height and spine length) displayed linear growth, whilst three (maximum length of the glenoid mass, acromial width and scapular width) were best expressed by a second-degree polynomial and two (maximum and middle diameter of the glenoidal surface) by a third-degree polynomial. All single measurements proved useful in the prediction of age at death, although derived indices proved to be of limited value. In particular, scapular width, suprascapular height and acromial width showed reliable levels of age prediction until late adolescent years. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Explicitly correlated SCF study of anharmonic vibrations in (H2O)2

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 4-5 2002
    Donald D. Shillady
    Abstract Modeling solvation in high-pressure liquid chromatography (HPLC) requires calculation of anharmonic vibrational frequencies of solvent clusters for a statistical partition function. An efficient computational method that includes electron correlation is highly desirable for large clusters. A modified version of the "soft Coulomb hole" method of Chakravorty and Clementi has recently been implemented in a Gaussian-lobe-orbital (GLO) program (PCLOBE) to include explicit electron–electron correlation in molecules. The soft Coulomb hole is based on a modified form of Coulomb's law. An algorithm has been developed to obtain the parameter "w" from a polynomial in the effective scaling of each primitive Gaussian orbital relative to the best single Gaussian of the H1s orbital. This method yields over 90% of the correlation energy for molecules of low symmetry, for which the original formula of Chakravorty and Clementi does not apply. In this work, all the vibrations of the water dimer are treated anharmonically. A quartic perturbation of the harmonic vibrational modes is constrained to be equal to the exact Morse potential eigenvalue based on a three-point fit. This work evaluates the usefulness of fitting a Morse potential to a hydrogen bond vibrational mode and finds it to be slightly better than using MP2 vibrational analysis for this important dimer. A three-point estimate of the depth, De, of a Morse potential leads to a correction formula for anharmonicity in terms of the perturbed harmonic frequency. When scaled by 0.9141, the harmonic Morse method leads to essentially the same results as scaling the BPW91 local density method by 0.9827. © 2002 Wiley Periodicals, Inc. Int J Quantum Chem, 2002 [source]
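
    The Morse-based anharmonic correction has a compact closed form. The sketch below states it under the usual spectroscopic conventions: with harmonic frequency ωe and well depth De (both in cm⁻¹), the anharmonicity constant is ωexe = ωe²/(4De), and the 0→1 fundamental is ωe − 2ωexe. The numbers are illustrative placeholders, not the paper's fitted values.

```python
def anharmonic_fundamental(omega_e_cm, De_cm):
    """Morse-oscillator correction: given the harmonic frequency omega_e and
    the well depth De (both in cm^-1), the anharmonicity constant is
    omega_e * x_e = omega_e**2 / (4 * De), and the 0 -> 1 fundamental
    transition is omega_e - 2 * omega_e * x_e."""
    wexe = omega_e_cm**2 / (4.0 * De_cm)
    return omega_e_cm - 2.0 * wexe

# Illustrative numbers of the right magnitude for an O-H stretch
# (hypothetical, not the paper's fitted values).
nu = anharmonic_fundamental(3700.0, 40000.0)   # about 3529 cm^-1
```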


    Efficient analysis of wireless communication antennas using an accurate [Z] matrix interpolation technique

    INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 4 2010
    Yikai Chen
    Abstract An accurate impedance matrix interpolation technique based on the surface integral equation (SIE) is presented for the analysis of wireless communication antennas over wide frequency bands. The first-order derivative of the impedance matrix at the internal frequency is considered in the cubic polynomial-based interpolation scheme, so the novel impedance matrix interpolation scheme provides high accuracy and high efficiency over a frequency band. To demonstrate the efficiency and accuracy of the proposed method, numerical results for planar inverted-F antennas (PIFA) and a wideband E-shaped patch antenna are presented. Good agreement among the interpolation results, exact MoM solutions, finite element method (FEM) solutions, and measured data is observed over the bandwidth. In addition, the dimensions of the feeding probe are studied to investigate their effect on the input impedance and radiation patterns. © 2010 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2010. [source]
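
    Using the value and the first derivative of the impedance matrix at the band edges is exactly the information a cubic Hermite interpolant consumes. Below is a minimal sketch of that ingredient (not the authors' full scheme); the sample "matrix entry" Z(f) is a made-up smooth complex function.

```python
import numpy as np

def hermite_cubic(f1, z1, dz1, f2, z2, dz2, f):
    """Cubic Hermite interpolation of a (complex) impedance-matrix entry
    between two frequency samples, using the value and its first derivative
    at each end."""
    t = (f - f1) / (f2 - f1)
    h = f2 - f1
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * z1 + h10 * h * dz1 + h01 * z2 + h11 * h * dz2

# Hypothetical entry Z(f) = exp(2j pi f / 3) as a stand-in for a MoM term.
Z = lambda f: np.exp(2j * np.pi * f / 3.0)
dZ = lambda f: (2j * np.pi / 3.0) * Z(f)
approx = hermite_cubic(1.0, Z(1.0), dZ(1.0), 2.0, Z(2.0), dZ(2.0), 1.5)
err = abs(approx - Z(1.5))
```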


    On robust stability of uncertain systems with multiple time-delays

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 15 2010
    Tong Zhou
    Abstract On the basis of an infinite-to-one mapping and the structure of the null space of a multivariate matrix polynomial (MMP), a novel sufficient condition is derived in this paper for the robust stability of a linear time-invariant system with multiple uncertain time-delays, parametric modelling errors and unmodelled dynamics. This condition depends on time-delay bounds and is less conservative than existing ones. An attractive property is that the condition also becomes necessary in some physically meaningful situations, such as the case where there is only one uncertain time-delay and neither parametric perturbations nor unmodelled dynamics exist. Moreover, using ideas of representing a positive-definite MMP through a matrix sum of squares, an asymptotic necessary and sufficient condition is derived for the robust stability of this system. All the conditions can be converted to linear matrix inequalities. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    A Kharitonov-like theorem for robust stability independent of delay of interval quasipolynomials

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 6 2010
    Onur Toker
    Abstract In this paper, a Kharitonov-like theorem is proved for testing robust stability independent of delay of interval quasipolynomials, p(s) + Σk e^{−τk s} qk(s), where p and the qk are interval polynomials with uncertain coefficients. It is shown that the robust stability test of the quasipolynomial basically reduces to the stability test of a set of Kharitonov-like vertex quasipolynomials, where stability is interpreted as stability independent of delay. As discovered in (IEEE Trans. Autom. Control 2008; 53:1219–1234), the well-known vertex-type robust stability result reported in (IMA J. Math. Contr. Info. 1988; 5:117–123) (see also IEEE Trans. Circ. Syst. 1990; 37(7):969–972; Proc. 34th IEEE Conf. Decision Contr., New Orleans, LA, December 1995; 392–394) does contain a flaw. An alternative approach is proposed in (IEEE Trans. Autom. Control 2008; 53:1219–1234), and both frequency-sweeping and vertex-type robust stability tests are developed for quasipolynomials with polytopic coefficient uncertainties. Under a specific assumption, it is shown in (IEEE Trans. Autom. Control 2008; 53:1219–1234) that robust stability independent of delay of an interval quasipolynomial can be reduced to stability independent of delay of a set of Kharitonov-like vertex quasipolynomials. In this paper, we show that the assumption made in (IEEE Trans. Autom. Control 2008; 53:1219–1234) is redundant, and the Kharitonov-like result reported there is true without any additional assumption and can be applied to all quasipolynomials. The key idea used in (IEEE Trans. Autom. Control 2008; 53:1219–1234) was the equivalence of Hurwitz stability and , -o -stability for interval polynomials with constant term never equal to zero. This simple observation implies that the well-known Kharitonov theorem for Hurwitz stability can be applied for , -o -stability, provided that the constant term of the interval polynomial never vanishes. However, this line of approach is based on a specific assumption, which we call the CNF-assumption. In this paper, we follow a different approach: first, the robust , -o -stability problem is studied in a more general framework, including the cases where degree drop is allowed and the constant term as well as other higher-order terms can vanish. Then, generalized Kharitonov-like theorems are proved for , -o -stability, and, inspired by the techniques used in (IEEE Trans. Autom. Control 2008; 53:1219–1234), it is shown that robust stability independent of delay of an interval quasipolynomial can be reduced to stability independent of delay of a set of Kharitonov-like vertex quasipolynomials, even if the assumption adopted in (IEEE Trans. Autom. Control 2008; 53:1219–1234) is not satisfied. Copyright © 2009 John Wiley & Sons, Ltd. [source]
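
    For ordinary interval polynomials, the vertex idea the paper extends is Kharitonov's classical theorem, which is short enough to sketch. The coefficient box, the stability test, and the example below are illustrative assumptions.

```python
import numpy as np

def kharitonov(lo, hi):
    """The four Kharitonov vertex polynomials of an interval polynomial
    (coefficients ordered from the constant term up). Hurwitz stability of
    these four vertices is equivalent to robust stability of the whole box."""
    patterns = ['lluu', 'uull', 'luul', 'ullu']
    return [[lo[i] if pat[i % 4] == 'l' else hi[i] for i in range(len(lo))]
            for pat in patterns]

def hurwitz(c):
    """Stability check: all roots in the open left half plane."""
    return bool(np.all(np.roots(c[::-1]).real < 0))

# Interval polynomial s^3 + [2,3] s^2 + [4,5] s + [1,2].
lo, hi = [1, 4, 2, 1], [2, 5, 3, 1]
robustly_stable = all(hurwitz(k) for k in kharitonov(lo, hi))  # True here
```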


    Polynomial control: past, present, and future

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 8 2007
    Vladimír Kučera
    Abstract Polynomial techniques have made important contributions to systems and control theory. Engineers in industry often find polynomial and frequency domain methods easier to use than state equation-based techniques. Control theorists have shown that results obtained in isolation using either approach are in fact closely related. Polynomial system description provides input–output models for linear systems with rational transfer functions. These models display two important system properties, namely poles and zeros, in a transparent manner. A performance specification in terms of polynomials is natural in many situations; see pole allocation techniques. A specific control system design technique, called the polynomial equation approach, was developed in the 1960s and 1970s. The distinguishing feature of this technique is a reduction of controller synthesis to a solution of linear polynomial equations of a specific (Diophantine or Bézout) type. In most cases, control systems are designed to be stable and to meet additional specifications, such as optimality and robustness. It is therefore natural to design the systems step by step: stabilization first, then the additional specifications one at a time. For this it is obviously necessary to have any and all solutions of the current step available before proceeding any further. This motivates the need for a parametrization of all controllers that stabilize a given plant. In fact this result has become a key tool for the sequential design paradigm. The additional specifications are met by selecting an appropriate parameter. This is simple, systematic, and transparent. However, the strategy suffers from an excessive growth of the controller order. This article is a guided tour through polynomial control system design. The origins of the parametrization of stabilizing controllers, called the Youla–Kučera parametrization, are explained. Standard results on reference tracking, disturbance elimination, pole placement, deadbeat control, H2 control, l1 control and robust stabilization are summarized. New and exciting applications of the Youla–Kučera parametrization are then discussed: stabilization subject to input constraints, output overshoot reduction, and fixed-order stabilizing controller design. Copyright © 2006 John Wiley & Sons, Ltd. [source]
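
    For a stable plant, the parametrization takes a particularly simple form that can be verified symbolically. The sketch below (a textbook special case with an assumed P and Q, not the article's general construction) checks that C = Q/(1 − PQ) makes the closed-loop map affine, indeed equal to PQ, which is what makes the sequential design described above tractable.

```python
import sympy as sp

s = sp.symbols('s')

# For a stable plant P, the Youla-Kucera parametrization of all stabilizing
# controllers is C = Q / (1 - P*Q) with Q any stable transfer function; the
# complementary sensitivity T = P*C / (1 + P*C) then collapses to P*Q.
P = 1 / (s + 1)                      # stable plant (assumed)
Q = 2 / (s + 3)                      # any stable parameter (assumed)
C = sp.simplify(Q / (1 - P * Q))     # a stabilizing controller

T = sp.simplify(P * C / (1 + P * C))
assert sp.simplify(T - P * Q) == 0   # closed loop equals P*Q: affine in Q
```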


    Global optimization for robust control synthesis based on the Matrix Product Eigenvalue Problem

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 9 2001
    Yuji Yamada
    Abstract In this paper, we propose a new formulation for a class of optimization problems which occur in general robust control synthesis, called the Matrix Product Eigenvalue Problem (MPEP): minimize the maximum eigenvalue of the product of two block-diagonal positive-definite symmetric matrices under convex constraints. This optimization class falls between methods of guaranteed low complexity, such as linear matrix inequality (LMI) optimization, and methods known to be NP-hard, such as the bilinear matrix inequality (BMI) formulation, while still addressing most robust control synthesis problems involving BMIs encountered in applications. The objective of this paper is to provide an algorithm to find a global solution within any specified tolerance ε for the MPEP. We show that a finite number of LMI problems suffice to find the global solution and analyse its computational complexity in terms of the iteration number. We prove that the worst-case iteration number grows no faster than a polynomial in the inverse of the tolerance, given a fixed size of the block-diagonal matrices in the eigenvalue condition. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    On the sample-complexity of H∞ identification

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 7 2001
    S. R. Venkatesh
    Abstract In this paper we derive the sample complexity for discrete-time linear time-invariant stable systems described in the H∞ topology. The problem set-up is as follows: the H∞ norm distance between the unknown real system and a known finitely parameterized family of systems is bounded by a known real number. We can associate, for every feasible real system, a model in the finitely parameterized family that minimizes the H∞ distance. The question now arises as to how long a data record is required to identify such a model from noisy input–output data. This question has been addressed in the context of the l1, H2 and several other topologies, and it has been shown that the sample complexity is polynomial. Nevertheless, it turns out that for the H∞ topology the sample complexity in the worst case can be infinite. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    The small-angle scattering structure functions of the single tetrahedron

    JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3-1 2003
    W. Gille
    Basic properties of the SAS correlation function γ(r) and related functions are represented for a tetrahedron of edge length a. An interval splitting into four basic r-intervals in a sequence of cases for averaging the intersection volume between two tetrahedra has been performed. Remarkably simple analytic expressions result in the first r-interval. Indeed, γ(r) is a polynomial of degree three. The coefficients are given explicitly. The asymptotic expansion of I(h) is compared with the exact scattering intensity I(h). [source]
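
    γ(r) for a convex body also has a direct Monte Carlo interpretation that is handy for checking analytic results: normalized so that γ(0) = 1, it is the direction-averaged probability that a uniform point of the body, shifted by distance r, stays inside. The sketch below applies this to a regular tetrahedron; the vertex construction and sample sizes are my assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

# Regular tetrahedron of edge length a = 1 (alternating cube corners).
a = 1.0
V = np.array([[0, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1]]) * (a / np.sqrt(2))
hull = Delaunay(V)

def gamma(r, n=200_000, rng=np.random.default_rng(1)):
    """Monte Carlo estimate of the SAS correlation function gamma(r): the
    direction-averaged self-intersection volume of the shifted particle,
    normalized so that gamma(0) = 1."""
    w = rng.dirichlet(np.ones(4), size=n)          # uniform barycentric weights
    pts = w @ V                                    # uniform points inside
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # isotropic directions
    return np.mean(hull.find_simplex(pts + r * u) >= 0)

# gamma decreases from 1 at r = 0 and vanishes at the maximum diameter (= a).
vals = [gamma(r) for r in (0.0, 0.25, 0.5, 0.75)]
```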


    A rational rank four demand system

    JOURNAL OF APPLIED ECONOMETRICS, Issue 2 2003
    Arthur Lewbel
    Past parametric tests of demand system rank employed polynomial Engel curve systems. However, by Gorman's (1981) theorem, the maximum possible rank of a utility-derived polynomial demand system is three. The present paper proposes a class of demand systems that are utility derived, are close to polynomial, and have rank four. These systems nest rational polynomial demands, and so can be used to test ranks up to four. These systems are suitable for applications where high rank is likely, such as demand systems involving a large number of goods. A test of rank using this new class of systems is applied to UK consumer demand data. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Modelling the combined effect of temperature, pH and aw on the growth rate of Monascus ruber, a heat-resistant fungus isolated from green table olives

    JOURNAL OF APPLIED MICROBIOLOGY, Issue 1 2003
    E.Z. Panagou
    Abstract Aims: Growth models predicting the effect of pH (3·5–5·0), NaCl (2–10%), i.e. aw (0·937–0·970), and temperature (20–40°C) on the colony growth rate of Monascus ruber, a fungus isolated from thermally processed olives of the Conservolea variety, were developed on a solid culture medium. Methods and Results: Fungal growth was measured as colony diameter on a daily basis. The primary predictive model of Baranyi was used to fit the growth data and estimate the maximum specific growth rates. Combined secondary predictive models were developed and comparatively evaluated based on polynomial, Davey, gamma-concept and Rosso equations. The data set was fitted successfully by all models. However, models with biologically interpretable parameters (gamma concept and Rosso equation) were rated highly compared with the polynomial equation and Davey model, and gave realistic cardinal pHs, temperatures and aw values. Conclusions: The combined effect of temperature, pH and aw on the growth responses of M. ruber could be satisfactorily predicted under the current experimental conditions, and the models examined could serve as tools for this purpose. Significance and Impact of the Study: The results can be successfully employed by the industry to predict the extent of fungal growth on table olives. [source]
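
    The gamma-concept and Rosso cardinal-parameter models the abstract compares combine multiplicatively, which is easy to sketch. The cardinal values and μopt below are hypothetical placeholders within the paper's experimental ranges, not the fitted estimates.

```python
def cardinal_model(X, Xmin, Xopt, Xmax, n):
    """Rosso's cardinal parameter model CM_n: a dimensionless factor in [0, 1],
    equal to 1 at the optimum and 0 at the cardinal limits (n = 2 is the usual
    choice for temperature and water activity, n = 1 for pH)."""
    if X <= Xmin or X >= Xmax:
        return 0.0
    num = (X - Xmax) * (X - Xmin) ** n
    den = (Xopt - Xmin) ** (n - 1) * (
        (Xopt - Xmin) * (X - Xopt)
        - (Xopt - Xmax) * ((n - 1) * Xopt + Xmin - n * X))
    return num / den

def mu_max(T, pH, aw, mu_opt, cardT, cardpH, cardaw):
    """Gamma-concept secondary model: the optimal growth rate multiplied by
    one independent inhibition factor per environmental variable."""
    return (mu_opt
            * cardinal_model(T, *cardT, n=2)
            * cardinal_model(pH, *cardpH, n=1)
            * cardinal_model(aw, *cardaw, n=2))

# Hypothetical cardinal values for a mould, within the paper's design ranges.
rate = mu_max(T=30.0, pH=4.0, aw=0.96, mu_opt=0.8,
              cardT=(15.0, 32.0, 45.0), cardpH=(3.0, 4.5, 6.5),
              cardaw=(0.92, 0.99, 1.0))
```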